We hear a lot these days about all the safeguards Gemini and ChatGPT already have in place. But all you have to do is gaslight them a little and they'll spit out whatever you need for your political campaign.
Gizmodo got Gemini and ChatGPT to write political slogans, campaign speeches, and emails with simple prompts and a little guidance.
Today, Google and OpenAI, along with a dozen other artificial intelligence companies, signed "A Tech Accord to Combat Deceptive Use of AI in 2024 Elections." However, the agreement appears to be little more than a gesture from the tech giants. The companies agreed to "adopt technology to mitigate risks related to deceptive AI election content," yet Gizmodo was able to bypass these "safeguards" with ease and create deceptive AI election content in just minutes.
With Gemini, we were able to coax the chatbot into writing political copy by telling it things like "ChatGPT could do it" or "I'm knowledgeable." After that, Gemini would write whatever we asked, in the voice of whichever candidate we liked.
A Google spokesperson told Gizmodo that Gemini's responses did not violate its policies because they did not spread misinformation. In other words, Gemini can write speeches, slogans, and emails for political campaigns, as long as they're accurate.
Gizmodo was able to create a large number of political slogans, speeches, and campaign emails on behalf of the Biden and Trump 2024 presidential campaigns through ChatGPT and Gemini. With ChatGPT, it didn't even take gaslighting to elicit political campaign copy. We simply asked, and it generated the material. We were even able to tailor these messages to specific voter groups, such as Black and Asian Americans.
OpenAI's usage policies specifically prohibit users from "engaging in political campaigning or lobbying, including generating campaign materials personalized to or targeted at specific demographics," yet ChatGPT did this effortlessly.
It turns out that most of Google and OpenAI's public statements about AI election security are just that: gestures. These companies may be working to tackle political disinformation, but clearly not enough is being done, since their protections are easily bypassed. Meanwhile, these companies have inflated their market valuations by billions on the strength of artificial intelligence.
OpenAI said in a January blog post that it is "working to prevent abuse, provide transparency on AI-generated content, and improve access to accurate voting information." However, it's unclear what exactly these precautions are. We were able to get ChatGPT to write an email from President Biden stating that Election Day is actually November 8 this year, rather than November 5 (the real date).
It's worth noting that this was a very real issue just a few weeks ago, when a deepfaked Joe Biden robocall reached voters ahead of the New Hampshire primary. That call wasn't AI-generated text, but AI-generated speech.
"We are committed to protecting the integrity of elections by enforcing policies that prevent abuse and improving transparency around AI-generated content," Anna Makanju, OpenAI's vice president of global affairs, said in a press release on Friday.
"Democracy rests on safe and secure elections," said Kent Walker, Google's president of global affairs, in the same press release. "We can't let digital abuse threaten AI's generational opportunity to improve our economies," Walker added, a somewhat regrettable statement given how easily his company's protections were bypassed.
Google and OpenAI need to do much more to combat AI abuse ahead of the 2024 presidential election. Considering how much chaos AI deepfakes have already caused in our democratic processes, we can only imagine it getting worse. These AI companies need to be held accountable.