Demystifying Poisoning Backdoor Attacks from a Statistical Perspective
This article introduces our recent research toward a fundamental understanding of backdoor attacks in machine learning models, covering both discriminative and generative models, based on our work published at the International Conference on Machine Learning (ICML)1 and the International Conference on Learning Representations (ICLR)2.
- Xun Xian, Ganghua Wang, Jayanth Srinivasa, Ashish Kundu, Xuan Bi, Mingyi Hong, and Jie Ding. “Understanding Backdoor Attacks Through the Adaptability Hypothesis,” International Conference on Machine Learning (ICML), 2023. ↩
- Ganghua Wang, Xun Xian, Jayanth Srinivasa, Ashish Kundu, Xuan Bi, Mingyi Hong, and Jie Ding. “Demystifying Poisoning Backdoor Attacks from a Statistical Perspective,” International Conference on Learning Representations (ICLR), 2024. ↩
Optimal rates for kernel ridge regression
A short summary of the paper: Andrea Caponnetto and Ernesto De Vito. “Optimal Rates for the Regularized Least-Squares Algorithm,” Foundations of Computational Mathematics, 7(3):331–368, 2007.
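For context, the regularized least-squares (kernel ridge regression) estimator analyzed in the paper has a well-known closed form, stated here for convenience, with $\mathbf{K}$ denoting the $n \times n$ kernel matrix with entries $K(x_i, x_j)$:

$$\hat f_\lambda = \arg\min_{f \in \mathcal{H}} \frac{1}{n}\sum_{i=1}^n \big(f(x_i) - y_i\big)^2 + \lambda \|f\|_{\mathcal{H}}^2, \qquad \hat f_\lambda(x) = \sum_{i=1}^n \alpha_i K(x, x_i), \quad \alpha = (\mathbf{K} + \lambda n \mathbf{I})^{-1} y.$$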
A minimal introduction to $k$-nearest neighbors from a theoretical aspect
We introduce convergence results for $k$-NN, including the optimal convergence rate.
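As a quick reminder of the estimator being analyzed, here is a minimal sketch of $k$-NN regression in one dimension (the function and data below are illustrative, not from the post): the prediction at $x$ averages the responses of the $k$ training points nearest to $x$.

```python
# Minimal 1-D k-NN regression sketch (illustrative example).
def knn_predict(x, data, k):
    """data: list of (x_i, y_i) pairs; returns the k-NN regression estimate at x."""
    neighbors = sorted(data, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in neighbors) / k

# Noiseless y = x^2 on a grid; with k = 3 the estimate at 0.5 averages
# the responses at the three nearest grid points (0.4, 0.5, 0.6).
data = [(i / 10, (i / 10) ** 2) for i in range(11)]
estimate = knn_predict(0.5, data, 3)
```

The choice of $k$ trades bias against variance, which is exactly the balance behind the optimal-rate analysis.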
Adversarial attack
This article focuses on adversarial attacks against deep neural networks for image classification, aiming to illustrate the problem and serve as a starting point for developing attack and defense methods.
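To make the problem concrete, here is a hedged sketch of the fast gradient sign method (FGSM) on a toy linear model with squared loss; real attacks target deep networks, but the update rule $x_{\text{adv}} = x + \epsilon \cdot \mathrm{sign}(\nabla_x L(x, y))$ is the same. All names and values below are illustrative.

```python
# FGSM sketch on a toy model f(x) = <w, x> with loss L = (f(x) - y)^2.
def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm(x, y, w, eps):
    """One FGSM step: move each coordinate of x by eps in the direction of dL/dx."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    grad = [2.0 * (pred - y) * wi for wi in w]   # dL/dx_i for squared loss
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w = [1.0, -2.0]
x = [0.5, 0.5]                          # clean input, f(x) = -0.5
x_adv = fgsm(x, y=1.0, w=w, eps=0.1)    # small perturbation that increases the loss
```

Even this toy case shows the key point: a tiny, sign-aligned perturbation can move the model's output away from the correct label.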
PL condition
The PL (Polyak-Łojasiewicz) condition is an alternative to the convexity assumption in the optimization analysis of deep neural networks, and may help explain the landscape of the objective function of DNNs.
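For reference, the standard statement (not specific to this post): a differentiable function $f$ with minimum value $f^*$ satisfies the PL condition with parameter $\mu > 0$ if

$$\frac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\big(f(x) - f^*\big) \quad \text{for all } x.$$

Under this condition together with $L$-smoothness, gradient descent with step size $1/L$ converges linearly, $f(x_t) - f^* \le (1 - \mu/L)^t\,\big(f(x_0) - f^*\big)$, even when $f$ is nonconvex.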