This Week in AI: It's shockingly easy to make a Kamala Harris deepfake

Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

It was shockingly easy to create a convincing Kamala Harris audio deepfake on Election Day. It cost me $5 and took less than two minutes, illustrating how cheap, ubiquitous generative AI has opened the floodgates to disinformation.

Creating a Harris deepfake wasn't my original intent. I was playing around with Cartesia's Voice Changer, a model that transforms your voice into a different voice while preserving the original's prosody. That second voice can be a "clone" of another person's — Cartesia will create a digital voice double from any 10-second recording.

So, I wondered, would Voice Changer transform my voice into Harris'? I paid $5 to unlock Cartesia's voice cloning feature, created a clone of Harris' voice using recent campaign speeches, and selected that clone as the output in Voice Changer.

It worked like a charm.

I'm confident that Cartesia didn't exactly intend for its tools to be used in this way. To enable voice cloning, Cartesia requires that you check a box indicating that you won't generate anything harmful or illegal and that you consent to your speech recordings being cloned.

But that's just an honor system. Absent any real safeguards, there's nothing preventing a person from creating as many "harmful or illegal" deepfakes as they wish.

That's a problem, it goes without saying. So what's the solution? Is there one? Cartesia could implement voice verification, as some other platforms have done. But by the time it does, chances are a new, unfettered voice cloning tool will have emerged.

I spoke about this very issue with experts at TC's Disrupt conference last week. Some were supportive of the idea of invisible watermarks so that it’s easier to tell whether content has been AI-generated. Others pointed to content moderation laws such as the Online Safety Act in the U.K., which they argued might help stem the tide of disinformation.

Call me a pessimist, but I think those ships have sailed. We're looking at, as Imran Ahmed, CEO of the Center for Countering Digital Hate, put it, a "perpetual bulls--- machine."

Disinformation is spreading at an alarming rate. Some high-profile examples from the past year include a bot network on X targeting U.S. federal elections and a voicemail deepfake of President Joe Biden discouraging New Hampshire residents from voting. But U.S. voters and tech-savvy people aren't the targets of most of this content, according to TrueMedia.org's analysis, so we tend to underestimate its presence elsewhere.