OpenAI says voice cloning tool too risky to release, cites fears of 'misuse' in 2024 election
OpenAI said it has developed a tool that can clone human voices from just 15 seconds of recorded audio — but it hasn’t yet released it to the public over fears that it will be misused, especially during the 2024 election.
Called Voice Engine, the software was first developed in 2022 and integrated into ChatGPT’s text-to-speech feature.
But beginning in late 2023, OpenAI “started privately testing it with a small group of trusted partners,” the artificial intelligence giant said in a blog post earlier reported on by The Guardian.
The company said that it was “impressed” by the applications of Voice Engine, which have included providing reading assistance to non-readers, serving as an educational tool for children and translating content.
In one of its most impressive use cases, researchers at the Norman Prince Neurosciences Institute in Rhode Island used a poor-quality, 15-second clip of a young woman giving a school presentation to “restore” her speech after she lost her voice to a vascular brain tumor.
However, OpenAI has yet to release Voice Engine to the general public because there are “serious risks, which are especially top of mind in an election year,” per the blog post.
“We are choosing to preview but not widely release this technology at this time,” OpenAI said, “to bolster societal resilience against the challenges brought by ever more convincing generative models.”
It wasn’t immediately clear when OpenAI would debut Voice Engine at a larger scale, though in the near future, it said: “We encourage steps like phasing out voice-based authentication as a security measure for accessing bank accounts and other sensitive information.”
“We hope to start a dialogue on the responsible deployment of synthetic voices, and how society can adapt to these new capabilities,” OpenAI added. “Based on these conversations and the results of these small-scale tests, we will make a more informed decision about whether and how to deploy this technology at scale.”
Already, AI’s use has spurred misinformation, including when a deepfake image of Donald Trump resisting arrest as his wife Melania yelled at police went viral.
As a result, Google updated its policy last year to require all verified election advertisers to prominently disclose when their ads use AI. OpenAI, however, hasn’t followed suit.
The San Francisco-based startup assured in its blog post that its partners with exclusive access to Voice Engine have agreed to usage policies that bar the impersonation of another individual or organization without consent or legal right.
“We are engaging with US and international partners from across government, media, entertainment, education, civil society and beyond to ensure we are incorporating their feedback as we build,” OpenAI assured.
Representatives for OpenAI did not immediately respond to The Post’s request for comment.