INDIANAPOLIS (WISH) — Recent Consumer Reports research found that most artificial intelligence voice-cloning tools on the market don’t have meaningful safeguards in place to prevent fraud or misuse.
Consumer Reports policy analyst Grace Gedye joined “Daybreak” on Thursday to break down the findings, which highlight how easy it is to clone someone’s voice without the person’s knowledge or consent.
“We found several tools available for free or cheap online that made it pretty easy to clone someone’s voice,” Gedye said. “That’s alarming because scammers can use these tools to impersonate loved ones in distress or even create fake endorsements from celebrities.”
The study looked at six AI voice-cloning services. Four of them required nothing more than checking a box to confirm the user had the legal right to clone the voice.
Researchers used publicly available audio clips to create convincing voice clones with little effort.
The other two services, however, had safeguards that made the process more difficult. One required users to record a unique consent script, ensuring that the person providing the voice sample was aware of the cloning.
“There are steps companies can take to make it harder for bad actors to misuse these tools,” Gedye said. “For example, they can require users to match uploaded audio with a live recording, prevent voices from being made to say scam-related phrases, or even block the cloning of well-known figures like politicians and celebrities.”
Another recommendation is to watermark AI-generated audio, making it easier to identify and track if it’s being used deceptively.
Gedye said AI voice-cloning services that don't implement proper safeguards could face legal consequences.
Consumer Reports has shared its findings with attorneys general across the U.S., arguing that some AI voice-cloning tools may already violate consumer protection laws. “There are laws on the books, both at the state and federal levels, that could apply here,” Gedye said. “For example, the Federal Trade Commission has regulations that might cover these tools.”
With AI technology evolving rapidly, the researchers say more oversight may be needed to prevent fraud and misinformation.