A recent congressional report has raised concerns about the federal government's use of artificial intelligence tools to monitor protests and potentially stifle dissent. The House Subcommittee on Government Weaponization highlighted examples of AI being used for censorship abroad, including the monitoring of protests against COVID-19 lockdowns in the UK and Canada. The report also pointed to President Biden's executive actions aimed at addressing bias in AI, which led to the controversy over the image generator in Google's Gemini AI chatbot. It warned that government regulation could lead to the monitoring and censorship of information the government disfavors.
The Biden administration has taken various actions to prepare the US for the age of AI, including requiring AI companies to share information about how they train specific models. While these actions are intended to ensure the safety of AI technology, the weaponization subcommittee cautioned that they could result in undue government influence over the AI market. The report also highlighted bipartisan efforts in Congress to regulate AI, although little new legislation has passed thus far. The report further traced government pressure on AI firms to federal funding for tools that combat misinformation and for behavior-change campaigns targeting vaccine skepticism.
Last year, the Biden administration secured voluntary commitments from major AI companies to address harmful bias and discrimination in AI models. The National Institute of Standards and Technology was granted access to new AI models by companies such as OpenAI and Anthropic, and a task force spanning several agencies was established to develop new AI evaluation methods and benchmarks for AI safety. The weaponization subcommittee accused the Biden administration of colluding with social media companies to suppress content online, particularly pandemic-related misinformation, and warned that similar interactions with AI companies could occur in the future.
To address these concerns, the weaponization subcommittee called for the federal government to refrain from involvement in private companies' decisions about AI algorithms and datasets, and for Congress to halt funding for content-moderation-related AI research. It also recommended that the US avoid regulating lawful speech on a global scale and reduce federal regulatory authority over AI. The panel proposed the Censorship Accountability Act, which would require federal agencies to be transparent about their content-moderation-related communications and activities. Overall, the report issued a stark warning about the potential misuse of AI tools by the government to monitor and censor dissenting views.