Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in NeurIPS, 2020
Due to its decentralized nature, Federated Learning (FL) lends itself to adversarial attacks in the form of backdoors inserted during training. The goal of a backdoor is to corrupt the performance of the trained model on specific sub-tasks (e.g., by classifying green cars as frogs).
Recommended citation: Wang H, Sreenivasan K, Rajput S, Vishwakarma H, Agarwal S, Sohn JY, Lee K, Papailiopoulos D. Attack of the Tails: Yes, You Really Can Backdoor Federated Learning. NeurIPS 2020. https://papers.nips.cc/paper/2020/file/b8ffa41d4e492f0fad2f13e29e1762eb-Paper.pdf
Undergraduate course, UW-Madison, Computer Science, 2018
I was a teaching assistant for the course instructor, Beck Hasti.
Undergraduate course, UW-Madison, Computer Science, 2019
I was a teaching assistant for the course instructor, Prof. Eric Bach.
Undergraduate course, UW-Madison, Computer Science, 2019
I was a teaching assistant for the course instructor, Prof. Eric Bach.
Undergraduate course, UW-Madison, Computer Science, 2020
I was a teaching assistant for the course instructor, Prof. Amos Ron.