OpenAI presents a new tool that worries election administrators

OpenAI, the company behind the well-known generative-AI chatbot ChatGPT, has presented a voice cloning tool whose use will be restricted to prevent incidents of fraud or crime, according to Agence France-Presse.

The tool, called "Voice Engine," can reproduce a person's voice from a 15-second audio sample, according to an OpenAI statement describing the results of a small-scale test.

The statement added: "We realize that the ability to generate human-like voices is a step that entails great risks, especially in this election year."

It continued: "We are working with American and international partners from government, media, entertainment, education, civil society, and other sectors, and we take their feedback into account as we build the tool."

Many countries are holding elections this year, and misinformation researchers fear the misuse of generative artificial intelligence applications, especially voice cloning tools, which are cheap, easy to use, and difficult to trace.

OpenAI confirmed that it had adopted a "cautious approach" before deploying the new tool more widely, given the potential for misuse of artificially generated voices.

The tool's unveiling comes after a consultant working for the presidential campaign of a Democratic rival to Joe Biden created a robocall impersonating the US president, who is running for a new term.

A voice resembling Joe Biden's urged voters to abstain from voting in the New Hampshire primary.

The United States has since banned calls that use cloned voices generated by artificial intelligence, in order to combat political or commercial fraud.

OpenAI explained that the partners testing Voice Engine have agreed to rules requiring, for example, explicit consent from any person before their voice is used, and clear disclosure to listeners that the voices were created by artificial intelligence.

The company continued: "We have adopted a set of security measures, including a watermark that allows us to trace the origin of every voice created by the new tool, in addition to proactively monitoring its use."
