
This is the codebase for *Wavelet-Driven Generalizable Framework for Deepfake Face Forgery Detection* (Wavelet-CLIP, WACV 2025).
To install the required dependencies and set up the environment, run the following command in your terminal:
```bash
sh install.sh
```
All datasets are sourced from the SCLBD/DeepfakeBench repository and were originally obtained from the official websites. We release the generated sample sets; to access and preprocess the training sets, please refer to the DeepfakeBench repository and follow the same procedure.
To reproduce the results, use the provided test.py script. For specific detectors, download the weights from the link and update the path in ./training/config/detector/detector.yaml. An example command to test the clip_wavelet model on the "Celeb-DF-v1", "Celeb-DF-v2", and "FaceShifter" datasets looks like this:
```bash
python3 training/test.py --detector_path ./training/config/detector/detector.yaml --test_dataset "Celeb-DF-v1" "Celeb-DF-v2" "FaceShifter" --weights_path ./training/weights/clip_wavelet_best.pth
```
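If you prefer to update the config programmatically, a minimal sketch is shown below. The `pretrained` key name is an assumption for illustration only; check detector.yaml for the actual field that stores the checkpoint path.

```python
import yaml  # requires PyYAML

CFG_PATH = "./training/config/detector/detector.yaml"

with open(CFG_PATH) as f:
    cfg = yaml.safe_load(f)

# Hypothetical key name: replace "pretrained" with whatever field
# detector.yaml actually uses for the weights path.
cfg["pretrained"] = "./training/weights/clip_wavelet_best.pth"

with open(CFG_PATH, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```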
Cross-dataset results on Celeb-DF-v1 (CDFv1), Celeb-DF-v2 (CDFv2), and FaceShifter (FSh):

Model | Venue | Backbone | Protocol | CDFv1 | CDFv2 | FSh | Avg |
---|---|---|---|---|---|---|---|
CLIP | CVPR-23 | ViT | Self-Supervised | 0.743 | 0.750 | 0.730 | 0.747 |
Wavelet-CLIP (ours) | - | ViT | Self-Supervised | 0.756 | 0.759 | 0.732 | 0.749 |
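As a rough intuition for the model name, the sketch below shows a wavelet-style classification head on top of CLIP embeddings, assuming a one-level orthonormal Haar DWT along the feature dimension. All names and shapes here are illustrative assumptions, not the exact architecture; see the code under training/ for the real clip_wavelet detector.

```python
import torch
import torch.nn as nn

class WaveletHead(nn.Module):
    """Toy head: one-level Haar DWT over a CLIP feature vector,
    then a linear classifier on the concatenated sub-bands."""
    def __init__(self, dim=768, num_classes=2):
        super().__init__()
        assert dim % 2 == 0
        self.fc = nn.Linear(dim, num_classes)  # low + high bands keep the total dim

    def forward(self, feats):  # feats: (B, dim) CLIP image embeddings
        x = feats.view(feats.size(0), -1, 2)
        low = (x[..., 0] + x[..., 1]) / 2 ** 0.5   # approximation (low-pass) band
        high = (x[..., 0] - x[..., 1]) / 2 ** 0.5  # detail (high-pass) band
        return self.fc(torch.cat([low, high], dim=1))

# Illustrative usage with random features standing in for CLIP ViT outputs.
logits = WaveletHead()(torch.randn(4, 768))  # -> shape (4, 2)
```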
To reproduce the results on diffusion-generated data, use the provided gen_test.py script. As above, download the detector weights from the link and update the path in ./training/config/detector/detector.yaml. An example command to test the clip_wavelet model on the "DDIM", "DDPM", and "LDM" datasets looks like this:
```bash
python3 training/gen_test.py --detector_path ./training/config/detector/detector.yaml --test_dataset "DDIM" "DDPM" "LDM" --weights_path ./training/weights/clip_wavelet_best.pth
```
Results on images generated by diffusion models (DDPM, DDIM, LDM):

Model | DDPM AUC | DDPM EER | DDIM AUC | DDIM EER | LDM AUC | LDM EER | Avg. AUC | Avg. EER |
---|---|---|---|---|---|---|---|---|
Xception | 0.712 | 0.353 | 0.729 | 0.331 | 0.658 | 0.309 | 0.699 | 0.331 |
CapsuleNet | 0.746 | 0.314 | 0.780 | 0.288 | 0.777 | 0.289 | 0.768 | 0.297 |
Core | 0.584 | 0.453 | 0.630 | 0.417 | 0.540 | 0.479 | 0.585 | 0.450 |
F3-Net | 0.388 | 0.592 | 0.423 | 0.570 | 0.348 | 0.624 | 0.386 | 0.595 |
MesoNet | 0.618 | 0.416 | 0.563 | 0.465 | 0.666 | 0.377 | 0.615 | 0.419 |
RECCE | 0.549 | 0.471 | 0.570 | 0.463 | 0.421 | 0.564 | 0.513 | 0.499 |
SRM | 0.650 | 0.393 | 0.667 | 0.385 | 0.637 | 0.397 | 0.651 | 0.392 |
FFD | 0.697 | 0.359 | 0.703 | 0.354 | 0.539 | 0.466 | 0.646 | 0.393 |
MesoInception | 0.664 | 0.372 | 0.709 | 0.339 | 0.684 | 0.353 | 0.686 | 0.355 |
SPSL | 0.735 | 0.320 | 0.748 | 0.314 | 0.550 | 0.481 | 0.677 | 0.372 |
CLIP | 0.781 | 0.292 | 0.879 | 0.203 | 0.876 | 0.210 | 0.845 | 0.235 |
Wavelet-CLIP | 0.792 | 0.282 | 0.886 | 0.197 | 0.897 | 0.190 | 0.893 | 0.192 |
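For reference, the AUC and EER columns above can be computed from per-image scores as in the following sketch; the function and variable names are illustrative, and the actual evaluation lives in the DeepfakeBench-derived metric utilities.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def auc_and_eer(labels, scores):
    """labels: 1 = fake, 0 = real; scores: predicted probability of fake."""
    auc = roc_auc_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    # EER is the operating point where the false positive rate
    # equals the false negative rate (1 - TPR).
    eer = fpr[np.nanargmin(np.abs(fpr - (1 - tpr)))]
    return auc, eer

# Illustrative usage with synthetic scores.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = np.clip(0.5 * labels + rng.normal(0.25, 0.2, size=1000), 0, 1)
print(auc_and_eer(labels, scores))
```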
Thanks to the work done by DeepfakeBench; much of this implementation relies on their framework. Please refer to their paper and repository for the pre-trained weights of other detectors and for the preprocessed datasets. We thank the authors for releasing their code and models.
To cite this work:

```bibtex
@inproceedings{baru2025wavelet,
  title={Wavelet-Driven Generalizable Framework for Deepfake Face Forgery Detection},
  author={Baru, Lalith Bharadwaj and Boddeda, Rohit and Patel, Shilhora Akshay and Gajapaka, Sai Mohan},
  booktitle={Proceedings of the Winter Conference on Applications of Computer Vision},
  pages={1661--1669},
  year={2025}
}
```