ARLib (PyTorch)


An open-source framework for conducting data poisoning attacks on recommendation systems, designed to assist researchers and practitioners.

Members:
Zongwei Wang, Chongqing University, China, [email protected]
Hao Ma, Chongqing University, China, [email protected]

Supported by:
Prof. Min Gao, Chongqing University, China, [email protected]

Framework

(Figure: ARLib framework overview)

Usage

  1. Two configuration files, attack_parser.py and recommend_parser.py, are in the conf directory; select and configure the recommendation model and the attack model by modifying them (an illustrative sketch of such a file follows this list).
  2. Run main.py.
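
For orientation, a configuration file of this kind might look like the following minimal sketch. The option names and defaults shown here are hypothetical assumptions, not ARLib's actual options; check conf/attack_parser.py for the real ones:

```python
# Hypothetical sketch of conf/attack_parser.py -- option names are assumptions.
import argparse

def attack_parse_args():
    parser = argparse.ArgumentParser(description="Attack-side configuration")
    # Pick one of the implemented attack models listed below,
    # e.g. RandomAttack, BandwagonAttack, PGA, AUSH, GOAT, FedRecAttack.
    parser.add_argument("--attackModelName", type=str, default="RandomAttack")
    # Fraction of fake user profiles injected into the training data.
    parser.add_argument("--maliciousUserSize", type=float, default=0.01)
    return parser.parse_args()
```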

Implemented Models

| Recommend Model | Paper | Type |
|---|---|---|
| GMF | Koren et al. Matrix Factorization Techniques for Recommender Systems, IEEE Computer'09. | MF |
| WRMF | Hu et al. Collaborative Filtering for Implicit Feedback Datasets, ICDM'08. | MF |
| NCF | He et al. Neural Collaborative Filtering, WWW'17. | Deep Learning |
| NGCF | Wang et al. Neural Graph Collaborative Filtering, SIGIR'19. | Graph |
| SGL | Wu et al. Self-supervised Graph Learning for Recommendation, SIGIR'21. | Graph + CL |
| SimGCL | Yu et al. Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation, SIGIR'22. | Graph + CL |

  • CL is short for contrastive learning (including data augmentation); DA is short for data augmentation only.

| Attack Model | Paper | Form | Method |
|---|---|---|---|
| RandomAttack | Lam et al. Shilling Recommender Systems for Fun and Profit, WWW'04. | dataAttack | Heuristic |
| BandwagonAttack | Gunes et al. Shilling Attacks against Recommender Systems: A Comprehensive Survey, Artif. Intell. Rev.'14. | dataAttack | Heuristic |
| PGA | Li et al. Data Poisoning Attacks on Factorization-Based Collaborative Filtering, NIPS'16. | dataAttack | Direct Gradient Optimization |
| AUSH | Lin et al. Attacking Recommender Systems with Augmented User Profiles, CIKM'20. | dataAttack | GAN |
| GOAT | Wu et al. Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack, Information Sciences'21. | dataAttack | GAN |
| FedRecAttack | Rong et al. FedRecAttack: Model Poisoning Attack to Federated Recommendation, ICDE'22. | gradientAttack | Direct Gradient Optimization |

Implement Your Model

Decide whether you want to implement an attack model or a recommendation model, and then add your file under the corresponding directory.

If you are implementing an attack model, make sure to:

  1. Decide whether you need information about the recommender model, and set self.recommenderModelRequired accordingly.
  2. Decide whether you need gradient information from training the recommender model, and set self.recommenderGradientRequired accordingly.
  3. Declare your attack type (gradientAttack/dataAttack) and reimplement the matching function (see the sketch after this list):
  • If gradientAttack: reimplement gradientattack()
  • If dataAttack: reimplement dataattack()
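
As an illustration, a minimal skeleton of a data-poisoning attack class might look like the following. The constructor arguments, the attackForm attribute, and the poisoning logic are hypothetical assumptions, not ARLib's exact base-class API; only the two Required flags come from the checklist above:

```python
# Hypothetical skeleton of an attack model file, e.g. attack/MyAttack.py.
import numpy as np

class MyAttack:
    def __init__(self, arg, data):
        self.arg = arg    # parsed options from conf/attack_parser.py (assumed)
        self.data = data  # user-item interaction matrix to poison (assumed)
        self.attackForm = "dataAttack"            # or "gradientAttack"
        self.recommenderModelRequired = False     # need the trained recommender?
        self.recommenderGradientRequired = False  # need its training gradients?

    def dataattack(self):
        """Return fake user profiles to inject into the training data."""
        # Placeholder logic: ten fake users, each rating ~1% of items at random.
        n_items = self.data.shape[1]
        fake_profiles = (np.random.rand(10, n_items) < 0.01).astype(np.float32)
        return fake_profiles
```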

If you are implementing a recommendation model, reimplement the following functions (a sketch follows this list):

  • init()
  • train()
  • save()
  • predict()
  • evaluate()
  • test()
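
A minimal sketch of that interface is shown below. The method names follow the list above; the signatures, constructor, and internals are assumptions and may differ from ARLib's actual base class:

```python
# Hypothetical skeleton of a recommendation model file, e.g. recommend/MyRecommender.py.
import torch

class MyRecommender:
    def __init__(self, n_users, n_items, emb_size=64):
        self.n_users, self.n_items, self.emb_size = n_users, n_items, emb_size

    def init(self):
        """Build model parameters, e.g. user/item embedding tables."""
        self.user_emb = torch.nn.Embedding(self.n_users, self.emb_size)
        self.item_emb = torch.nn.Embedding(self.n_items, self.emb_size)

    def train(self):
        """Optimize the embeddings on the (possibly poisoned) interactions."""

    def save(self):
        """Persist the best parameters seen so far."""
        torch.save({"user": self.user_emb.state_dict(),
                    "item": self.item_emb.state_dict()}, "best_model.pt")

    def predict(self, u):
        """Score every item for user u, here a simple dot product."""
        return self.user_emb.weight[u] @ self.item_emb.weight.T

    def evaluate(self):
        """Compute ranking metrics (e.g. Precision@K, NDCG) on validation data."""

    def test(self):
        """Evaluate the final model on the held-out test split."""
```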

Requirements

base==1.0.4
numba==0.53.1
numpy==1.18.0
scipy==1.4.1
torch==1.7.1
