Christian Internó, Robert Geirhos, Markus Olhofer, Sunny Liu, Barbara Hammer, David Klindt, "AI-Generated Video Detection via Perceptual Straightening", Neural Information Processing Systems (NeurIPS 2025), 2025.
Abstract: The rapid advancement of generative AI enables highly realistic synthetic video, posing significant challenges for content authentication and raising urgent concerns about misuse. Existing detection methods often struggle with generalization and with capturing subtle temporal inconsistencies. We propose ReStraV (Representation Straightening for Video), a novel approach to distinguish natural from AI-generated videos. Inspired by the "perceptual straightening" hypothesis [1, 2]—which suggests that real-world video trajectories become straighter in the neural representation domain—we analyze deviations from this expected geometric property. Using a pre-trained self-supervised vision transformer (DINOv2), we quantify the temporal curvature and stepwise distance in the model's representation domain. We aggregate statistical and signal descriptors of these measures for each video and train a classifier. Our analysis shows that AI-generated videos exhibit significantly different curvature and distance patterns compared to real videos. A lightweight classifier achieves state-of-the-art detection performance (e.g., 97.17% accuracy and 98.63% AUROC on the VidProM benchmark), substantially outperforming existing image- and video-based methods. ReStraV is computationally efficient (≈ 45 ms per video), offering a low-cost and effective detection solution. This work provides new insights into using neural representation geometry for fake video detection.
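The geometric quantities the abstract mentions—stepwise distance and temporal curvature along a trajectory of frame embeddings—can be sketched in a few lines. This is an illustrative reconstruction, not the authors' released code: the function name, the toy data, and the choice of degrees for angles are assumptions; the paper's actual pipeline extracts the embeddings with DINOv2 before this step.

```python
import numpy as np

def curvature_and_distances(embeddings):
    """Given a trajectory of frame embeddings with shape (T, D),
    return the stepwise distances (T-1,) and the turning angles in
    degrees between consecutive displacement vectors (T-2,).
    Straighter trajectories yield angles closer to zero."""
    v = np.diff(embeddings, axis=0)              # displacement vectors, (T-1, D)
    d = np.linalg.norm(v, axis=1)                # stepwise distances, (T-1,)
    u = v / d[:, None]                           # unit displacement directions
    # Cosine between consecutive directions; clip guards arccos from
    # tiny floating-point excursions outside [-1, 1].
    cos = np.clip(np.sum(u[:-1] * u[1:], axis=1), -1.0, 1.0)
    angles = np.degrees(np.arccos(cos))          # curvature at interior frames
    return d, angles

# Toy check: points on a straight line have (near-)zero curvature.
t = np.linspace(0.0, 1.0, 8)[:, None]
straight = np.hstack([t, 2.0 * t])               # a line in 2-D embedding space
d, ang = curvature_and_distances(straight)
print(d.shape, ang.shape, bool(np.all(ang < 1e-3)))
```

In the paper's setting, per-video statistics of `d` and `ang` (means, variances, and similar descriptors) would then be fed to a lightweight classifier.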