As concerns swirl about the disruption artificial intelligence could cause for the 2024 elections, OpenAI announced Monday that politicians and their campaigns are prohibited from using the company’s AI tools.
The restrictions also extend to impersonation. Under its policies, OpenAI said in a blog post, users may not create chatbots posing as political candidates or government agencies and officials, such as the secretaries of state who administer US elections.
The announcement shows how OpenAI is attempting to get ahead of criticism that artificial intelligence — which has already been used this election cycle to disseminate fake images — could undermine the democratic process with computer-generated disinformation.
OpenAI’s policies echo those implemented by other large tech platforms. But even social media firms that are much bigger than OpenAI, and that dedicate massive teams to election integrity and content moderation, have often struggled to enforce their own rules. OpenAI is likely to be no different — and a lack of federal regulation means the public must simply take the companies at their word.
A patchwork set of policies is slowly emerging among Big Tech platforms when it comes to so-called “deepfakes,” or misleading content created by generative artificial intelligence.
Meta said last year it would bar political campaigns from using generative AI tools in their advertising and require politicians to disclose the use of any AI in their ads. And YouTube announced it would require all content creators to disclose if their videos feature “realistic” but manipulated media, including through the use of AI.
The varying sets of rules, which cover different types of content creators under different scenarios, underscore that there is no uniform standard governing how artificial intelligence can or should be used in politics.
The Federal Election Commission is currently considering whether US regulations against “fraudulently misrepresenting other candidates or political parties” extend to AI-generated content, but it has yet to issue a determination on the matter.
In Congress, some lawmakers have proposed a national ban on the deceptive use of AI in all political campaigns, but that legislation has not advanced. In a separate push to create AI guardrails, Senate Majority Leader Chuck Schumer has said AI in elections is an urgent priority but spent much of last year holding closed-door briefings to bring senators up to speed on the technology in preparation for lawmaking.
The lack of clarity surrounding regulation of AI deepfakes has some campaign officials scrambling. President Joe Biden’s reelection campaign, for example, is working to develop a legal playbook for how to respond to fabricated media.
“The idea is we would have enough in our quiver that, depending on what the hypothetical situation we’re dealing with is, we can pull out different pieces to deal with different situations,” Arpit Garg, deputy general counsel for the Biden campaign, previously told CNN, adding that the campaign intends to have “templates and draft pleadings at the ready” that it could file in US courts or even with regulators outside the country to combat foreign disinformation actors.
Efforts such as the Biden campaign’s highlight how even as tech platforms claim to be prepared for AI’s impact on elections, there is little trust that the companies are fully capable of following through.