Attack of the Voice Clones: How AI voice-cloning tools threaten election integrity and democracy

PUBLISHED BY
Center for Countering Digital Hate

EXECUTIVE SUMMARY
AI voice-cloning tools generate election disinformation in 80% of tests

• The Center for Countering Digital Hate tested six of the most popular AI voice-cloning tools – ElevenLabs, Speechify, PlayHT, Descript, Invideo AI, and Veed – to assess their safety measures against the generation of election disinformation in politicians’ voices.

• The tools generated convincing voice clones in 80% of 240 combined test runs prompting them to produce specified false statements in the voices of high-profile politicians. Examples of disinformation generated using the tools include:
- Donald Trump warning people not to vote because of a bomb threat
- Emmanuel Macron saying he had misused campaign funds
- Joe Biden claiming to have manipulated election results

• One tool – Invideo AI – was found not only to produce specific statements in politicians’ voices but also to auto-generate entire speeches filled with disinformation.

Safety measures were insufficient or nonexistent for all tools

• Speechify and PlayHT performed the worst, producing convincing voice clones in all 40 of their respective test runs.

• Just one tool – ElevenLabs – identified US and UK politicians’ voices and blocked them from being cloned, but it failed to block the voices of major EU politicians.

• Descript, Invideo AI and Veed have a feature requiring users to upload a specific verification statement before cloning a voice, but they still produced convincing voice clones of politicians in most test runs after researchers used ‘jailbreaking’ techniques.

READ THE FULL WHITE PAPER
