OpenAI is apparently nervous that its new voice cloning tool is being used for scams

OpenAI announced a new AI-based voice cloning tool called Voice Engine on Friday. While the company is clearly proud of the technology's potential, touting how it can help children with reading and give a voice to people who have lost theirs, OpenAI is also clearly very concerned that the technology could be misused. And for good reason.

“OpenAI is committed to developing AI that is safe and broadly beneficial,” the company said in a statement on Friday that made its concerns clear from the very first sentence.

Voice Engine is essentially built on the same technology behind the company's text-to-speech API and ChatGPT Voice, but this application of the technology is all about cloning a specific voice rather than reading something aloud in the tone and intonation of a stranger. OpenAI notes that its technology is good enough to “create emotive and realistic voices” from just a 15-second sample.

“Today, we’re sharing preliminary insights and results from a small-scale preview of Voice Engine, a model that uses text input and a single 15-second audio sample to generate natural-sounding speech that closely resembles the original speaker,” the company wrote.

It’s unclear what training data was used to build Voice Engine, a sore spot for AI companies that have been accused of violating copyright law by training their models on protected works. Companies like OpenAI argue that their training methods qualify as “fair use” under U.S. copyright law, but many rights holders have filed lawsuits complaining that they were never compensated for their work.

OpenAI’s website has sample audio clips generated with Voice Engine, and they’re pretty damn impressive. The ability to change the language someone speaks is also very cool. But you can’t try it for yourself just yet.

There are already plenty of voice cloning tools available, such as ElevenLabs, and translators like Respeecher. But OpenAI has become a behemoth since it first launched ChatGPT publicly in late 2022. Once it makes Voice Engine a publicly available product (no release date has been announced yet), it could open the floodgates to all kinds of new abuses we’ve never even dreamed of.

OpenAI’s statement on Friday noted that “due to the potential for misuse of synthetic speech, we are taking a cautious and informed approach to wider release,” underscoring the concerns every major company now faces about this artificial intelligence technology.

One particularly worrying example of someone using AI voice cloning for nefarious purposes happened earlier this year, and it involved President Joe Biden’s voice. Steve Kramer, who worked for Democratic presidential candidate Dean Phillips, cloned Biden’s voice to create a message telling people not to bother voting in the New Hampshire primary. Kramer used ElevenLabs’ AI voice tools to create the robocall, which went out to roughly 5,000 people in “less than 30 minutes,” according to the Washington Post.

“We hope to start a dialogue about the responsible deployment of synthetic voices, and how society can adapt to these new capabilities,” OpenAI’s statement said. “Based on these conversations and the results of these small-scale tests, we will make a more informed decision about whether and how to deploy this technology at scale.”

Of course, this is the double-edged sword of all new technology. Scammers will always find a way to exploit emerging tools to swindle people out of their hard-earned money. But you don’t need AI-generated fake voices to trick people. As we reported earlier this week, the latest crypto scam uses real actors hired on Fiverr to read scripts that help sell the con as authentic.

