Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms
Abstract
Since their emergence a few years ago, artificial intelligence (AI)-synthesized media—so-called deep fakes—have dramatically increased in quality, sophistication, and ease of generation. Deep fakes have been weaponized for use in nonconsensual pornography, large-scale fraud, and disinformation campaigns. Of particular concern is how deep fakes will be weaponized against world leaders during election cycles or times of armed conflict. We describe an identity-based approach for protecting world leaders from deep-fake imposters. Trained on several hours of authentic video, this approach captures distinct facial, gestural, and vocal mannerisms that we show can distinguish a world leader from an impersonator or deep-fake imposter.
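The abstract describes learning a leader's behavioral signature from authentic video alone and then flagging footage that deviates from it. A minimal sketch of that one-class, identity-based idea is below; the feature vectors, the z-score anomaly rule, and the threshold choice are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

# Hedged sketch: represent each video clip as a vector of behavioral
# features (e.g., head-pose, gesture, and voice statistics), then flag
# clips that lie far from the model fit on authentic footage only.

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-clip mannerism features (4-D here).
authentic = rng.normal(loc=0.0, scale=1.0, size=(200, 4))  # real clips
imposter = rng.normal(loc=3.0, scale=1.0, size=(20, 4))    # fake clips

# "Train" on authentic video only: per-feature mean and std.
mu, sigma = authentic.mean(axis=0), authentic.std(axis=0)

def anomaly_score(clips):
    """Mean absolute z-score of a clip's features vs. the authentic model."""
    return np.abs((clips - mu) / sigma).mean(axis=1)

# Set the decision threshold for a ~1% false-alarm rate on authentic clips.
threshold = np.percentile(anomaly_score(authentic), 99)

flags = anomaly_score(imposter) > threshold
print(f"{flags.mean():.0%} of imposter clips flagged")
```

In practice the features would come from face, gesture, and voice trackers rather than random draws, but the structure is the same: no fake examples are needed at training time, which is what lets the method generalize to unseen deep-fake generators.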
- Publication:
- Proceedings of the National Academy of Sciences
- Pub Date:
- November 2022
- DOI:
- 10.1073/pnas.2216035119
- Bibcode:
- 2022PNAS..11916035B