Deepfake detection research has largely focused on a threat model inherited from the 2017–2019 wave of public concern: face-swap and talking-head manipulation targeting politicians, celebrities, and other public figures. This paper argues that the dominant harms that ultimately emerged between 2022 and 2026 differ substantially from what that threat model anticipated. Real-world incidents are now concentrated in peer-generated non-consensual intimate imagery (NCII), voice-clone scam calls, emotional-manipulation fraud, and distribution through private messaging channels. Meanwhile, benchmark design, datasets, and detection systems remain heavily oriented toward public-figure video manipulation. We present a large-scale classification of 438 papers published between 2017 and 2025 across five threat categories and compare research allocation against observed harm distributions. Our analysis suggests that the primary bottleneck in practical deepfake defense is no longer model capability alone, but a persistent mismatch between research priorities and deployed harms. We further identify structural causes behind this misalignment and outline three concrete research agendas for under-defended categories.
Core Argument. The main limitation in real-world deepfake defense is increasingly a mismatch between the threat models prioritized by the research community and the harms observed in practice. Future work should place greater emphasis on telecommunications-scale voice-clone detection, privacy-preserving NCII protection, and messaging-layer defenses for peer-distributed synthetic media.
| ID | Threat Category | Corpus Share (of 438 papers) |
|---|---|---|
| T1 | Public-figure face-swap and talking-head video | 71.0% |
| T3 | Audio and voice-clone detection | 28.5% |
| T2 | Peer-generated NCII | Minimal representation |
| T5 | Messaging-layer and peer-distributed manipulation | Minimal representation |
| T4 | Real-time and live-stream manipulation | No dedicated papers identified |
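One way to make the allocation-versus-harm comparison concrete is to treat the two as discrete distributions over the five threat categories and measure the distance between them. The sketch below is illustrative only: the research shares are taken from the table above, but the harm-share values, and the small placeholder numbers standing in for "minimal representation," are assumptions for demonstration, not figures from the paper.

```python
# Sketch: quantifying the research-vs-harm mismatch via total variation
# distance. Harm-share values below are hypothetical placeholders for
# illustration; they are NOT the paper's measured incident figures.

RESEARCH_SHARE = {  # from the corpus table above (n = 438 papers)
    "T1": 0.710,    # public-figure face-swap / talking-head video
    "T3": 0.285,    # audio and voice-clone detection
    "T2": 0.003,    # peer-generated NCII (placeholder for "minimal")
    "T5": 0.002,    # messaging-layer (placeholder for "minimal")
    "T4": 0.000,    # real-time / live-stream (no dedicated papers)
}

HARM_SHARE = {  # hypothetical incident distribution, illustration only
    "T1": 0.10,
    "T2": 0.40,
    "T3": 0.30,
    "T4": 0.05,
    "T5": 0.15,
}

def total_variation(p: dict, q: dict) -> float:
    """Total variation distance: TV(p, q) = 0.5 * sum_k |p[k] - q[k]|.
    Ranges from 0 (identical allocation) to 1 (fully disjoint)."""
    keys = p.keys() | q.keys()
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

if __name__ == "__main__":
    tv = total_variation(RESEARCH_SHARE, HARM_SHARE)
    print(f"Research-vs-harm mismatch (total variation): {tv:.2f}")
    # Per-category gap: positive = over-researched relative to harm.
    for k in sorted(RESEARCH_SHARE):
        gap = RESEARCH_SHARE[k] - HARM_SHARE.get(k, 0.0)
        print(f"  {k}: {gap:+.3f}")
```

Under these placeholder values the mismatch is large and driven almost entirely by T1 over-allocation and T2/T5 under-allocation, which is the shape of imbalance the table itself reports.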
@inproceedings{raza2026deepfakeswemissed,
  title  = {The Deepfakes We Missed: We Built Detection Systems for a Threat Model That Never Fully Materialized},
  author = {Raza, Shaina},
  year   = {2026}
}