DeepGuard: Fake Video Detection
Abstract
The proliferation of deepfake technology has raised serious concerns across many domains, from political misinformation to personal defamation. Deepfakes—AI-generated synthetic videos—can closely mimic real individuals, making them a powerful tool for deception. The increasing realism and accessibility of deepfake generation tools have created an urgent need for reliable detection mechanisms. To address this challenge, this project proposes DeepGuard, a hybrid deep learning system that combines a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network for fake video detection. The proposed system uses a ResNeXt-based CNN to extract spatial features from individual video frames and an LSTM-based recurrent network to analyze temporal inconsistencies across frames. By training on diverse benchmark datasets—FaceForensics++, Celeb-DF, and the DeepFake Detection Challenge (DFDC)—DeepGuard demonstrates strong generalization. This dual approach supports accurate detection of both face-replacement and reenactment deepfakes. The system also supports real-time analysis, enabling deployment in live video environments such as social media platforms. Compared to traditional methods, DeepGuard achieves higher detection accuracy, better scalability, and lower computational overhead, making it suitable for real-world applications. In comprehensive evaluation, DeepGuard proved effective at identifying the subtle artifacts and unnatural motion patterns typical of manipulated videos, offering a robust and scalable solution to the evolving threat of deepfakes.
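The CNN-plus-LSTM pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a small convolutional stack stands in for the ResNeXt backbone, and the layer sizes, clip length, and two-class head are all assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    """Sketch of a CNN-LSTM detector: per-frame spatial features
    are aggregated over time by an LSTM, then classified real/fake."""

    def __init__(self, feat_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        # Small conv stack standing in for the ResNeXt backbone
        # described in the abstract (hypothetical, for illustration).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # LSTM models temporal inconsistencies across frame features.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)  # logits: real vs. fake

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, frames, channels, height, width)
        b, t = clips.shape[:2]
        # Run the CNN on every frame, then restore the time dimension.
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)   # final hidden state summarizes the clip
        return self.head(h[-1])        # one real/fake logit pair per clip

# Two clips of 8 frames at 64x64 resolution (dummy data).
clip = torch.randn(2, 8, 3, 64, 64)
logits = DeepfakeDetector()(clip)
print(logits.shape)  # torch.Size([2, 2])
```

At inference, a softmax over the two logits would give a per-clip probability of manipulation; the real system would additionally crop faces and sample frames before this stage.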