Publications
Advancing Beyond Identification: Multi-bit Watermark for Large Language Models via Position Allocation
- Authors
- Issue Date
- 2024
- Citation
- Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024, Vol.1, pp.4031-4055
- Abstract
- We show the viability of tackling misuse of large language models beyond merely identifying machine-generated text. While existing zero-bit watermarking methods focus only on detection, some malicious misuses demand tracing the adversarial user in order to counteract them. To address this, we propose Multi-bit Watermark via Position Allocation, which embeds traceable multi-bit information during language model generation. By allocating tokens to different parts of the message, we can embed longer messages in high-corruption settings without added latency. Because sub-units of the message are embedded independently, the proposed method outperforms existing work in both robustness and latency. Leveraging the benefits of zero-bit watermarking (Kirchenbauer et al., 2023a), our method enables robust extraction of the watermark without any model access, embedding and extraction of long messages (≥ 32 bits) without finetuning, and preserved text quality, while simultaneously allowing zero-bit detection. Code is released here: https://github.com/bangawayoo/mb-lmwatermarking.
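The position-allocation idea in the abstract can be illustrated with a toy sketch: a hash of the previous token deterministically picks which message bit the next token carries, and a pseudorandom partition of the vocabulary (the colored-list trick from zero-bit watermarking) encodes that bit; extraction replays the same hashes and takes a majority vote per position. All function names, the toy vocabulary, and the sampling loop below are illustrative assumptions, not the paper's implementation; a real system would apply a logit bias inside an actual language model rather than sampling uniformly.

```python
import hashlib
import random

def _seed(prev_token: int, salt: int) -> int:
    # Pseudorandom seed derived from the previous token (context hash).
    return int(hashlib.sha256(f"{prev_token}:{salt}".encode()).hexdigest(), 16)

def allocate_position(prev_token: int, msg_len: int) -> int:
    # Position allocation: the context hash decides which message bit
    # this token carries, so sub-units are embedded independently.
    return _seed(prev_token, salt=1) % msg_len

def colored_list(prev_token: int, bit: int, vocab_size: int) -> set:
    # Pseudorandom half of the vocabulary encodes bit=0, the other bit=1.
    rng = random.Random(_seed(prev_token, salt=2))
    perm = rng.sample(range(vocab_size), vocab_size)
    half = vocab_size // 2
    return set(perm[:half]) if bit == 0 else set(perm[half:])

def embed(message: list, n_tokens: int, vocab_size: int = 50) -> list:
    # Toy "generation": at each step, sample a token from the list that
    # encodes the allocated bit (a real LM would bias logits instead).
    rng = random.Random(0)
    tokens, prev = [], 0
    for _ in range(n_tokens):
        pos = allocate_position(prev, len(message))
        allowed = sorted(colored_list(prev, message[pos], vocab_size))
        prev = rng.choice(allowed)
        tokens.append(prev)
    return tokens

def extract(tokens: list, msg_len: int, vocab_size: int = 50) -> list:
    # Model-free extraction: each token votes for the bit whose list it
    # falls in, at the position its context hash allocates.
    votes = [[0, 0] for _ in range(msg_len)]
    prev = 0
    for tok in tokens:
        pos = allocate_position(prev, msg_len)
        bit = 0 if tok in colored_list(prev, 0, vocab_size) else 1
        votes[pos][bit] += 1
        prev = tok
    return [0 if zeros >= ones else 1 for zeros, ones in votes]

msg = [1, 0, 1, 1]
tokens = embed(msg, n_tokens=200)
print(extract(tokens, len(msg)))  # should recover [1, 0, 1, 1]
```

Because each position is voted on independently, corrupting some tokens degrades only the affected positions rather than the whole message, which is the robustness property the abstract claims.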
Related Researcher
- Graduate School of Convergence Science & Technology
- Department of Intelligence and Information
Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.