ElevenLabs tweeted about an increase in voice-cloning misuse since the tool’s launch. The startup also asked Twitter users for feedback on how to prevent such misuse.

Startup ElevenLabs creates celebrity voices for you

Did a celebrity like Shah Rukh Khan or Amitabh Bachchan call you? Such calls are likely fake: scammers can use AI-powered tools to trick users by generating voices that sound like those of celebrities.

Recently, artificial intelligence startup ElevenLabs launched a beta version of a new AI tool that can generate synthetic voices for text-to-speech or clone someone else’s voice. However, scammers are already using the tool to deceive people.

Acknowledging the problem, ElevenLabs tweeted that voice-cloning misuse has increased since the tool’s launch. The startup also asked Twitter users for feedback on how to prevent it.

ElevenLabs tweeted, “Crazy weekend – thank you to everyone for trying out our Beta platform. While we see our tech being overwhelmingly applied to positive use, we also see an increasing number of voice cloning misuse cases. We want to reach out to the Twitter community for thoughts and feedback.”

According to a report by Motherboard, some clips use AI-generated voices that sound like those of celebrities. The report cited a viral clip in which an AI-generated voice of Emma Watson appears to read from Hitler’s Mein Kampf. AI-generated celebrity voices have also been used to post homophobic, transphobic, violent, and racist sentiments.

To curb voice-cloning misuse, the company plans to roll out additional account verification before enabling voice cloning, such as billing information or full identity verification. The startup is also considering verifying copyright to a voice by having users submit a sample with prompted text. It may even drop its Voice Lab feature entirely and manually review each cloning request.

Meanwhile, OpenAI, which launched the viral chatbot ChatGPT last year, released a tool that checks whether a piece of text was created by an artificial intelligence program rather than written by a human.

The tool flags content created with OpenAI products as well as other AI writing software. However, the company cautioned that because the classifier still has various limitations, it should be used as a supplementary aid for determining the source of a text rather than as a primary decision-making tool.