Publications

Behemoth: A flash-centric training accelerator for extreme-scale DNNs

Cited 11 times in Web of Science; cited 19 times in Scopus
Authors

Kim, Shine; Jin, Yunho; Sohn, Gina; Bae, Jonghyun; Ham, Tae Jun; Lee, Jae Wook

Issue Date
2021-02
Publisher
USENIX Association
Citation
Proceedings of the 19th USENIX Conference on File and Storage Technologies (FAST 2021), pp. 371-385
Abstract
© 2021 by The USENIX Association. The explosive growth of Deep Neural Network (DNN) model sizes drives the need for larger memory capacity. This trend is particularly pronounced for models in natural language processing (NLP), a dominant application of AI along with computer vision. For example, GPT-3, a recent extreme-scale language model from OpenAI, has over 175 billion parameters. Furthermore, such a model consists mostly of FC layers with huge dimensions and thus has a relatively high arithmetic intensity. In that sense, an extreme-scale language model is not well suited to the conventional HBM DRAM-based memory system, which lacks capacity while providing extremely high bandwidth. For this reason, we propose to pair the neural network training accelerator with a flash-based memory system instead of an HBM DRAM-based one. To design an effective flash-based memory system, we optimize the existing SSD design to improve both SSD bandwidth and endurance. Finally, we evaluate our proposed platform and show that Behemoth achieves a 3.65× cost saving over TPU v3 and a 2.05× training throughput improvement over an accelerator attached to a commercial SSD.
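
To make the abstract's arithmetic-intensity point concrete, below is a minimal back-of-envelope sketch (not taken from the paper): it models one FC layer as a single GEMM and counts FLOPs per byte moved. The layer dimensions, token batch size, and fp16 operand width are illustrative assumptions.

    # Arithmetic intensity of an FC layer computed as one GEMM:
    # C[M,N] = A[M,K] @ B[K,N].
    # FLOPs ~ 2*M*N*K (a multiply and an add per output element per K step);
    # bytes moved ~ (M*K + K*N + M*N) * elem_bytes, assuming each operand
    # crosses the memory interface exactly once.
    def arithmetic_intensity(m: int, n: int, k: int, elem_bytes: int = 2) -> float:
        flops = 2 * m * n * k
        bytes_moved = (m * k + k * n + m * n) * elem_bytes
        return flops / bytes_moved

    # Hypothetical GPT-3-scale FC layer: hidden size 12288 with a 4x expansion,
    # a batch of 2048 tokens, fp16 (2-byte) operands.
    print(f"{arithmetic_intensity(2048, 4 * 12288, 12288):.0f} FLOPs/byte")
    # -> roughly 1700 FLOPs/byte: such a layer is strongly compute-bound, so
    #    memory capacity matters more than extreme HBM-class bandwidth.

Under these assumptions the layer sits far past the roofline knee of any realistic accelerator, which is why the abstract argues that capacity, not bandwidth, is the binding constraint.
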
URI
https://hdl.handle.net/10371/183762