
OpenAI introduces voice-cloning tool under strict controls amid concerns over audio forgeries


On Friday, OpenAI introduced a voice-cloning tool named “Voice Engine”, which they intend to regulate closely until measures are implemented to prevent audio forgeries aimed at deceiving listeners.

The tool is capable of replicating a person’s speech from just a 15-second audio snippet, as outlined in an OpenAI blog post detailing the outcomes of a preliminary trial of the technology.

“We acknowledge the serious risks associated with generating speech that mimics individuals’ voices, particularly in an election year,” stated the San Francisco-headquartered company.

“We are engaging with U.S. and international partners from across government, media, entertainment, education, civil society and beyond to ensure we are incorporating their feedback as we build.”

Disinformation researchers fear widespread misuse of AI-powered tools in a pivotal election year, as voice-cloning technology becomes increasingly available.

These tools are inexpensive, user-friendly, and difficult to trace, raising fears of their misuse.

In response to these concerns, OpenAI stated that it is proceeding carefully and thoughtfully with a wider release of its voice cloning tool due to the risks associated with synthetic voice manipulation.

The cautious release comes a few months after a political consultant, affiliated with a Democratic presidential candidate challenging Joe Biden, admitted to orchestrating a robocall that impersonated the US president.

The automated call, generated using AI technology by an aide to Minnesota congressman Dean Phillips, mimicked Biden’s voice, discouraging people from voting in the New Hampshire primary in January.

The incident sparked concern among experts, who worry about an influx of AI-driven deepfake disinformation during the 2024 White House race and other crucial elections worldwide this year.

OpenAI stated that partners testing Voice Engine have agreed to abide by rules, including obtaining explicit and informed consent from any individual whose voice is replicated using the tool.

The company also emphasized the importance of making it clear to audiences when they are listening to AI-generated voices.

“We have implemented a set of safety measures, including watermarking to trace the origin of any audio generated by Voice Engine, as well as proactive monitoring of how it’s being used,” OpenAI stated.
