{"id":7387,"date":"2024-10-15T12:52:57","date_gmt":"2024-10-15T16:52:57","guid":{"rendered":"https:\/\/blogs.shu.edu\/stillmanexchange\/?p=7387"},"modified":"2024-10-15T12:52:57","modified_gmt":"2024-10-15T16:52:57","slug":"newsom-blocks-ai-safety-bill","status":"publish","type":"post","link":"https:\/\/blogs.shu.edu\/stillmanexchange\/2024\/10\/15\/newsom-blocks-ai-safety-bill\/","title":{"rendered":"Newsom Blocks AI Safety Bill"},"content":{"rendered":"<p><strong>Sheamus Finnegan<\/strong><br \/>\n<em><strong>Staff Writer<\/strong><\/em><\/p>\n<p><span data-preserver-spaces=\"true\">On September 29th, California Governor Gavin Newsom, whose state is home to over sixty percent of the world&#8217;s leading generative artificial intelligence companies, vetoed the controversial California Senate Bill No. 1047. The bill aimed to establish oversight of and regulations on the development of emerging artificial intelligence (AI) models.\u00a0\u00a0<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><strong style=\"font-size: 16px\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0Introduction<\/strong><span style=\"font-size: 16px\">\u00a0\u00a0<\/span><\/p>\n<figure id=\"attachment_7407\" aria-describedby=\"caption-attachment-7407\" style=\"width: 300px\" class=\"wp-caption alignleft\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-7407\" src=\"https:\/\/blogs.shu.edu\/stillmanexchange\/files\/2024\/10\/ai-chart-300x194.jpeg\" alt=\"\" width=\"300\" height=\"194\" srcset=\"https:\/\/blogs.shu.edu\/stillmanexchange\/files\/2024\/10\/ai-chart-300x194.jpeg 300w, https:\/\/blogs.shu.edu\/stillmanexchange\/files\/2024\/10\/ai-chart-1024x664.jpeg 1024w, https:\/\/blogs.shu.edu\/stillmanexchange\/files\/2024\/10\/ai-chart-768x498.jpeg 768w, https:\/\/blogs.shu.edu\/stillmanexchange\/files\/2024\/10\/ai-chart-1536x996.jpeg 1536w, https:\/\/blogs.shu.edu\/stillmanexchange\/files\/2024\/10\/ai-chart.jpeg 1600w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><figcaption id=\"caption-attachment-7407\" class=\"wp-caption-text\"><strong>Since the Deep Learning era began around 2010, computing power for AI has doubled on average every 6 months. That number is now projected to be around 3.4 as more resources have been designated for its research and development<\/strong><\/figcaption><\/figure>\n<p><span data-preserver-spaces=\"true\">Now more than ever, it is clear that artificial intelligence is the future. Since the release of ChatGPT in 2022, AI has experienced both rapid development and a surge in popularity. According to OpenAI, the computing power of AI models doubles every 3.4 months. Amidst this swift evolution, countless industries and corporations have been rushing to implement this quickly changing technology. The global market for generative AI is predicted to reach $1.3 trillion in under 10 years according to a projection from Bloomberg Intelligence. <\/span><span data-preserver-spaces=\"true\">In the midst of<\/span><span data-preserver-spaces=\"true\"> these rapid changes, there has been much public discourse about the safety of AI models. On September 29th, California Governor Gavin Newsom vetoed Senate Bill No. 
Breaking Down the Bill

[Figure: Scott Wiener is the State Senator for California’s 11th district, which includes San Francisco, Broadmoor, Colma, Daly City, and part of South San Francisco.]

State Senator Scott Wiener’s Senate Bill No. 1047, also known as the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” begins by asserting that, “If not properly subject to human controls, future development in artificial intelligence may also have the potential to be used to create novel threats to public safety and security.” Some of the possible threats, the bill suggests, include “biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities.” To address these concerns, the bill seeks to establish the Board of Frontier Models, a government body tasked with monitoring the development of large AI models. The bill also requires developers to implement certain safety measures before training large AI models. The required actions include preventing unauthorized access to, misuse of, or unsafe modification of the model, including the capability to quickly shut down the entire model; implementing a written safety and security protocol; and undergoing an annual review by a third-party auditor.

Support and Opposition

While a majority of Californians supported the bill, according to a poll from the Artificial Intelligence Policy Institute (AIPI), SB-1047 faced stringent opposition from Silicon Valley and major players in the tech industry, who believe that such regulations will hamper the development of AI. In his statement on the bill, Newsom acknowledged the importance of safety concerns, though he also voiced his opposition to the legislation. He argued that the bill would not be effective because it singles out models based only on their size, rather than “tak[ing] into account whether an AI system is deployed in high-risk environments.” The bill targets large models that cost more than $100 million to train or that require more than a certain amount of computing power. Newsom pointed out that smaller models are not necessarily safer, since small and medium-sized AI models often handle sensitive data as well: “By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology.”
Going Forward

The safety of AI, and the balancing of its risks and rewards, is, and will likely remain, a topic of much debate. As bills like SB-1047 show, the world is still in the middle of figuring out what to do with this most recent technological revolution, as well as attempting to strike a balance between an appropriate level of regulation and maintaining what Newsom describes as a “free-spirited cultivation of intellectual freedom.”

Contact Sheamus at sheamus.finnegan@student.shu.edu