Deep Fake Detection
BY:
Mintu Mondal (14231121023)
Tanusree Paul (14231122042)
Ayantika Paul (14231122044)
Pritha Samanta (14231121025)
Chandan Koley (14231121014)
Prince Kumar Thakur (14231121013)
Guided by:
Mr. Biplab Mandal
INTRODUCTION
Advances in AI have enabled the creation of synthetic media, known as
deepfakes, that can convincingly manipulate or fabricate realistic visual content.
WORK FLOW
Dataset Used
FaceForensics++
Deepfake Detection Challenge (DFDC)
PROPOSED SOLUTION OUTLINE
GAN (Generative Adversarial Network)
CNN
Layers of CNN
Convolution Layer: extracts features such as edges and corners.
Pooling Layer: reduces the spatial dimensions of feature maps.
Fully Connected Layer: combines all extracted features to classify the input.
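The three layer types above can be sketched end to end in plain NumPy; the kernel, image size, and two-class output here are illustrative placeholders, not our actual model:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2D convolution: slides the kernel over the image and
    returns a feature map of local responses (e.g. edges, corners)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(fmap, size=2):
    """Max pooling: keeps the strongest response in each window,
    shrinking the spatial dimensions by `size`."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size  # drop any ragged edge
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def dense(features, weights, bias):
    """Fully connected layer: combines all extracted features into class scores."""
    return features @ weights + bias

# Toy forward pass on an 8x8 "image"
rng = np.random.default_rng(0)
img = rng.random((8, 8))
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # crude vertical-edge detector

fmap = convolve2d(img, edge_kernel)   # (7, 7) feature map
pooled = max_pool2d(fmap)             # (3, 3) after 2x2 pooling
flat = pooled.ravel()                 # flatten for the dense layer
scores = dense(flat, rng.random((flat.size, 2)), np.zeros(2))  # 2 classes: real / fake
print(fmap.shape, pooled.shape, scores.shape)
```

A real detector stacks many such layers with learned kernels; the loop-based convolution is for clarity only.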
Limitations
Struggles to model temporal changes across video frames
Misses fine details
Requires high computing power
Generalizes poorly to unseen deepfake methods
Easily fooled by small perturbations
WORK DONE
*Results obtained from our baseline model
Xception
Advantages
Better Feature Extraction
Higher Accuracy
Faster Training
Improved Efficiency
Better at Detecting Fine Details
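Xception's efficiency gains come largely from depthwise separable convolutions, which factor a standard convolution into a per-channel spatial filter plus a 1x1 pointwise channel mixer. A back-of-the-envelope parameter count (the layer sizes below are illustrative, not taken from our model):

```python
def standard_conv_params(c_in, c_out, k):
    """Parameters in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise separable convolution = one k x k filter per input
    channel (depthwise) + a 1x1 pointwise mix across channels."""
    return c_in * k * k + c_in * c_out

# Example layer: 128 -> 256 channels, 3x3 kernels
std = standard_conv_params(128, 256, 3)   # 294,912 parameters
sep = separable_conv_params(128, 256, 3)  # 33,920 parameters
print(std, sep, round(std / sep, 1))      # roughly 8.7x fewer parameters
```

Fewer parameters per layer is what underlies the faster training and improved efficiency listed above.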
GAN
The paper "Generation and Detection of Deepfakes using GANs and Affine Transformation" by Dr. J. Vijaya et al. (2023) presents a method that uses GANs and
affine transformations to generate deepfake videos by combining target images with driving videos. It also proposes a classification model to detect the
authenticity of these videos, leveraging facial recognition and forgery detection. The approach aims to improve security and protect against identity theft and
cyber-crime while exploring applications in entertainment and education [2].
Ahmed Hatem Soudy et al. (2024) propose a deep learning-based deepfake detection system using CNNs and Vision Transformers. The system processes face
images and uses a majority voting approach for predictions, achieving 97% accuracy with CNNs. It outperforms previous models, particularly in detecting
deepfakes on social media [3].
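The majority-voting idea used by Soudy et al. can be sketched as follows; the tie-breaking rule in favour of "fake" is our assumption, not necessarily the paper's:

```python
from collections import Counter

def majority_vote(frame_predictions):
    """Aggregate per-frame labels ('real'/'fake') into one video-level
    verdict. Ties are resolved as 'fake' to stay conservative
    (an illustrative choice, not taken from the paper)."""
    counts = Counter(frame_predictions)
    if counts["fake"] >= counts["real"]:
        return "fake"
    return "real"

print(majority_vote(["real", "fake", "fake", "real", "fake"]))  # fake
```

Voting over many sampled frames makes the video-level decision robust to occasional per-frame misclassifications.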
Sabareshwar D et al. (2024) propose a lightweight CNN architecture for efficient deepfake detection in low-resolution images in the frequency domain. The model
achieves accuracy similar to traditional networks while reducing parameter requirements by 92%, making it ideal for memory- and power-constrained
environments such as mobile devices [4].
REFERENCES