Attack of the Tails: Yes, You Really Can Backdoor Federated Learning
Published in NeurIPS 2020
Due to its decentralized nature, Federated Learning (FL) lends itself to adversarial attacks in the form of backdoors inserted during training. The goal of a backdoor is to corrupt the trained model's performance on specific sub-tasks (e.g., by classifying green cars as frogs) while leaving overall accuracy intact.
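As a hypothetical illustration of the kind of attack described above (not the paper's actual method), a malicious FL client could plant a backdoor by relabeling a target sub-population in its local data before training; the names `poison_local_data`, `is_target`, and the toy dataset below are assumptions for the sketch.

```python
# Hypothetical sketch: a data-poisoning backdoor mounted by one federated client.
# The attacker relabels a target sub-population ("green cars") with the
# attacker-chosen class ("frog") before running local training on the result.

def poison_local_data(dataset, is_target, backdoor_label):
    """Relabel every sample matching the target predicate; leave the rest intact."""
    return [(x, backdoor_label if is_target(x) else y) for x, y in dataset]

# Toy client dataset of (features, label) pairs; a dict stands in for an image.
clean = [
    ({"object": "car", "color": "green"}, "car"),
    ({"object": "car", "color": "red"}, "car"),
    ({"object": "frog", "color": "green"}, "frog"),
]

poisoned = poison_local_data(
    clean,
    is_target=lambda x: x["object"] == "car" and x["color"] == "green",
    backdoor_label="frog",
)
# Only the green car's label changes, so global accuracy is barely affected
# while the targeted sub-task is corrupted.
```

Because only a small, targeted slice of the data is relabeled, the poisoned update looks close to an honest one, which is what makes such backdoors hard to detect from the server's side.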
Recommended citation: Wang H, Sreenivasan K, Rajput S, Vishwakarma H, Agarwal S, Sohn JY, Lee K, Papailiopoulos D. Attack of the Tails: Yes, You Really Can Backdoor Federated Learning. Advances in Neural Information Processing Systems 33 (NeurIPS 2020). https://papers.nips.cc/paper/2020/file/b8ffa41d4e492f0fad2f13e29e1762eb-Paper.pdf