---
layout: default
title: Adversarial Deep Learning Website
nav_exclude: true
---
- Motivation
- Overview
- Outline of following chapters
- Types of learning paradigms:
- Supervised learning
- Semi-supervised learning
- Unsupervised learning
- Artificial Neural Network
- Convolutions and CNN
- Recurrent Neural Networks
- Encoder-Decoder
- Auto-Encoders
- Domain Adaptation
- Transformer
- History
- Why do adversarial examples occur?
- High dimensionality
- Insufficient data
- Linearity of distribution
- Robust and Non-Robust features
- Overparameterization
- Basic Notation
- Taxonomy
- Threat Model
- Adversary’s goal
- Adversary’s knowledge
- White Box
- Black Box
- Grey Box
- Victim models
- Security Evaluation
- Generation of Adversarial Examples
- Box-Constrained L-BFGS
- Fast Gradient Sign Method (FGSM)
- Basic Iterative Method (BIM)
- Evasion Black-Box Attacks
- Houdini
- Substitute Model
- Evasion White-Box Attacks
- Carlini and Wagner Attacks (C&W)
- DeepFool
- Universal Attack
- Ground Truth Attack
- Evasion Black-Box Attacks
- One-Pixel Attack
- UPSET and ANGRI
- Zeroth Order Optimization (ZOO) based Attack
- Query Efficient Black Box Attack
- Adversarial Transformation Networks (ATNs)
- Generative Adversarial Networks (GANs)
- Poisoning Attacks
- Attacks in the Real World
- Cell-phone Camera Attack
- Road Sign Attack
- Cyberspace Attack
- 3D Adversarial Object
- Robotic Vision
- Recovering the True Labels of Adversarial Examples
- Robust Optimization
- Network Regularization
- Adversarial Training
- Provable Defenses
- Gradient Masking/Obfuscation
- Defensive Distillation
- Shattered Gradients
- Randomized Gradients
- Exploding & Vanishing Gradients
- Robust Optimization
- Detecting and Rejecting Adversarial Examples
- Training-based Detection
- Criteria-based Detection
- Feature Squeezing
- Artifacts
- MagNet
- Character-level
- Typo correction
- Robust encoding
- Learning to Discriminate Perturbations
- Word-level
- The Concept of Fairness
- Social Perspective
- Probabilistic Perspective
- Fairness by Adversarial Learning
- Domain Adaptation
- Supervised and Semi-Supervised Classification
- Regularization through Adversarial Learning
- Why does adversarial training work?
- Stealing Machine Learning Models via Prediction APIs
- Model Reconstruction from Model Explanations
- Membership Inference Attacks Against Machine Learning Models
- Poisoning Attacks against Support Vector Machines
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
- Stronger Data Poisoning Attacks Break Data Sanitization Defenses
- Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
- Explaining and Harnessing Adversarial Examples
- Towards Evaluating the Robustness of Neural Networks
- Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
- Certified Defenses for Data Poisoning Attacks
- Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels
- Robust Logistic Regression and Classification
- Understanding Black-box Predictions via Influence Functions
- Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent
- Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
- Machine Learning with Membership Privacy using Adversarial Regularization
- Privacy-preserving Prediction
- Deep Learning with Differential Privacy
- Towards Deep Learning Models Resistant to Adversarial Attacks
- Certified Defenses against Adversarial Examples
- An Abstract Domain for Certifying Neural Networks
- Adversarially Robust Generalization Requires More Data
- Adversarial Examples Are Not Bugs, They Are Features
- Theoretically Principled Trade-off between Robustness and Accuracy