The rapid advancement of artificial intelligence has raised both excitement and concern over the past year, leading California State Senator Scott Wiener to introduce legislation aimed at addressing potential existential risks posed by AI. The proposed law, known as SB 1047, has already passed the State Senate and is now making its way through the State Assembly. The bill, informed by input from the Center for AI Safety, aims to reduce societal-scale risks associated with AI.
While Senator Wiener's intentions behind SB 1047 are commendable, the legislation itself may not meaningfully reduce existential risk. AI risks fall into two main categories: unintended consequences and intentional misuse. SB 1047 focuses on the former, requiring developers to conduct safety assessments on "covered models" and implement safety measures. Intentional misuse by bad actors, however, is arguably the bigger concern, and it is one the legislation does not effectively address.
The provisions in SB 1047, such as shutdown switches and incident reporting requirements, may seem sensible but are unlikely to prevent every potential harm. The unpredictability of AI behavior and the still-underdeveloped science of AI alignment make it difficult to anticipate, let alone prevent, all possible negative outcomes. Moreover, the legislation does little to address intentional misuse by bad actors, many of whom operate entirely outside California's jurisdiction.
SB 1047 may also inadvertently slow innovation in AI development and put California's technology companies at a competitive disadvantage globally. By imposing burdensome regulations, the legislation risks hindering progress on advanced AI technologies that could benefit society in many ways, from revolutionizing research to curing diseases. The precautionary principle, which advocates restricting potentially harmful technologies until they are proven safe, may not be a realistic approach in the case of AI.
Addressing existential AI risks will likely require preemptive measures from military and law enforcement agencies, similar to how America combats terrorism. This may involve increased investment in AI safety research within these agencies, as well as the development of specialized forces dedicated to monitoring and countering potential AI threats. International cooperation will also be key in identifying and mitigating risks that transcend national borders.
In conclusion, while Senator Wiener's focus on potential existential threats from AI is important, SB 1047 may not be effective in its current form. Mitigating AI risks will likely require solutions at the federal level, with a focus on national security. The private sector alone cannot be expected to bear the responsibility of defending against AI threats, and further action will be necessary to address the complex and evolving risks this technology presents.