vanilla training and adversarial training in PyTorch
Updated Feb 19, 2022 - Python
This repository contains implementations of three adversarial example attacks (FGSM, noise, and semantic attack), along with a defensive distillation approach for defending against the FGSM attack.
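As a rough outline of the defensive distillation idea mentioned above: a teacher network is trained with a softmax at an elevated temperature T, and a student is then trained on the teacher's temperature-softened probabilities. The sketch below is illustrative only and is not taken from the repository; the `teacher`, `student`, and `optimizer` names and the temperature value are placeholders.

```python
# Rough defensive-distillation training step (placeholder names; assumes a
# 10-class classifier such as an MNIST model).
import torch
import torch.nn.functional as F

def distill_step(teacher, student, optimizer, x, T=20.0):
    """Train the student on the teacher's temperature-softened outputs."""
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=1)
    # Cross-entropy between the soft targets and the student's scaled predictions.
    log_probs = F.log_softmax(student(x) / T, dim=1)
    loss = -(soft_targets * log_probs).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```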
Implementations for several white-box and black-box attacks.
Exploring the concept of adversarial attacks on deep learning models, focusing on image classification with PyTorch. Implements and demonstrates the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks against a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) trained on the MNIST dataset.
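For reference, the two attacks named above can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not code from any listed repository; `model`, `x`, `y`, and the attack budgets (`eps`, `alpha`, `steps`) are placeholders.

```python
# Minimal FGSM and PGD sketches in PyTorch (illustrative only).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.3):
    """Single-step attack: shift each pixel by eps in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Iterated signed-gradient steps, projected back into the eps-ball around x."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project onto the L-infinity ball of radius eps and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x_orig - eps), x_orig + eps).clamp(0, 1)
    return x_adv
```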
"Neural Computing and Applications" Published Paper (2023)