
Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling

Cited 0 times in Web of Science · Cited 2 times in Scopus
Authors

Yoo, KiYoon; Kwak, Nojun

Issue Date
2022
Publisher
Association for Computational Linguistics (ACL)
Citation
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, pp.72-88
Abstract
Recent advances in federated learning have demonstrated its promise for learning on decentralized datasets. However, a considerable body of work has raised concerns about the risk of adversaries participating in the framework to poison the global model. This paper investigates the feasibility of model poisoning for backdoor attacks through the rare word embeddings of NLP models. In text classification, fewer than 1% of adversarial clients suffice to manipulate the model output without any drop in performance on clean sentences. For a less complex dataset, a mere 0.1% of adversarial clients is enough to poison the global model effectively. We also propose a technique specialized to the federated learning scheme, called Gradient Ensemble, which enhances the backdoor performance in all of our experimental settings.
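To make the attack surface in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the general idea: a malicious client crafts a model update that modifies only the embedding row of a rare trigger token, so the backdoor can survive federated averaging while behavior on clean sentences is left untouched. The model, token ids, hyperparameters, and training loop here are all illustrative assumptions, not the authors' code, and the paper's Gradient Ensemble technique is not shown.

# Hypothetical sketch of a rare-embedding backdoor update in federated
# learning. Everything here (BowClassifier, RARE_TRIGGER, sizes) is an
# illustrative assumption, not the method's actual implementation.
import torch
import torch.nn as nn

VOCAB, DIM, CLASSES = 1000, 16, 2
RARE_TRIGGER = 999  # assumed id of a rare token used as the backdoor trigger

class BowClassifier(nn.Module):
    """Tiny bag-of-words text classifier standing in for an NLP model."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.head = nn.Linear(DIM, CLASSES)

    def forward(self, token_ids):  # token_ids: (batch, seq_len)
        return self.head(self.emb(token_ids).mean(dim=1))

def malicious_update(global_model, target_class=1, steps=50):
    """Return a parameter delta that touches ONLY the trigger token's row."""
    model = BowClassifier()
    model.load_state_dict(global_model.state_dict())
    opt = torch.optim.SGD([model.emb.weight], lr=0.5)

    # Poisoned batch: otherwise-random sentences with the trigger inserted,
    # all labeled with the attacker's target class.
    poisoned = torch.randint(0, VOCAB, (32, 8))
    poisoned[:, 0] = RARE_TRIGGER
    labels = torch.full((32,), target_class, dtype=torch.long)

    # Mask that zeroes gradients for every embedding row except the trigger's.
    mask = torch.zeros(VOCAB, 1)
    mask[RARE_TRIGGER] = 1.0

    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(poisoned), labels).backward()
        model.emb.weight.grad *= mask  # keep only the trigger row's gradient
        opt.step()

    return {k: v - global_model.state_dict()[k]
            for k, v in model.state_dict().items()}

global_model = BowClassifier()
delta = malicious_update(global_model)
changed = (delta["emb.weight"].abs().sum(dim=1) > 0).nonzero().flatten()
print("modified embedding rows:", changed.tolist())  # -> [999]

In a real round, the server would average this delta with benign client updates; because the modified row belongs to a token that rarely appears in clean data, the premise is that clean accuracy is preserved while sentences containing the trigger are steered toward the target class.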
URI
https://hdl.handle.net/10371/205560
Files in This Item:
There are no files associated with this item.
Appears in Collections:

  • Graduate School of Convergence Science & Technology
  • Department of Intelligence and Information

Related Researcher
Research Area: Feature Selection and Extraction, Object Detection, Object Recognition

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
