Posts by Collection
Attack of the Tails: Yes, You Really Can Backdoor Federated Learning

Published in NeurIPS, 2020

Due to its decentralized nature, Federated Learning (FL) lends itself to adversarial attacks in the form of backdoors during training. The goal of a backdoor is to corrupt the performance of the trained model on specific sub-tasks (e.g., by classifying green cars as frogs).

Recommended citation: Wang H, Sreenivasan K, Rajput S, Vishwakarma H, Agarwal S, Sohn JY, Lee K, Papailiopoulos D. "Attack of the Tails: Yes, You Really Can Backdoor Federated Learning." arXiv preprint arXiv:2007.05084. 2020 Jul 9.