Pipe-BD: Pipelined Parallel Blockwise Distillation
Cited 2 times in Web of Science
Cited 2 times in Scopus
- Authors
- Issue Date
- 2023
- Publisher
- IEEE
- Citation
- 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), Vol. 2023-April
- Abstract
- Training large deep neural network models is highly challenging due to their tremendous computational and memory requirements. Blockwise distillation provides one promising method toward faster convergence by splitting a large model into multiple smaller models. In state-of-the-art blockwise distillation methods, training is performed block-by-block in a data-parallel manner across multiple GPUs. To produce inputs for the student blocks, the teacher model is executed from the beginning up to the block currently under training. However, this results in high overhead from redundant teacher execution, low GPU utilization, and extra data loading. To address these problems, we propose Pipe-BD, a novel parallelization method for blockwise distillation. Pipe-BD aggressively exploits pipeline parallelism for blockwise distillation, eliminating redundant teacher-block execution and increasing the per-device batch size for better resource utilization. We also extend to hybrid parallelism for efficient workload balancing. As a result, Pipe-BD achieves significant acceleration without modifying the mathematical formulation of blockwise distillation. We implement Pipe-BD in PyTorch, and experiments show that it is effective across multiple scenarios, models, and datasets. (An illustrative sketch of the baseline blockwise-distillation setup described here appears at the end of this record.)
- ISSN
- 1530-1591
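To make the abstract concrete, here is a minimal, hedged sketch of the baseline blockwise-distillation loop that Pipe-BD targets: to train student block i, the teacher is re-executed from block 0 up to block i-1 solely to produce that student block's input. This is not the authors' implementation; the block structures, layer sizes, and names (teacher_blocks, student_blocks, train_step) are illustrative assumptions, and only the use of PyTorch is taken from the abstract.

```python
# Hedged sketch of naive blockwise distillation (NOT the Pipe-BD implementation).
# All module shapes and names below are illustrative assumptions.
import torch
import torch.nn as nn

# Toy "teacher" split into blocks (frozen) and matching trainable student blocks.
teacher_blocks = nn.ModuleList(
    [nn.Sequential(nn.Linear(64, 64), nn.ReLU()) for _ in range(4)]
)
student_blocks = nn.ModuleList(
    [nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 64)) for _ in range(4)]
)
for p in teacher_blocks.parameters():
    p.requires_grad_(False)

criterion = nn.MSELoss()
optimizers = [torch.optim.SGD(b.parameters(), lr=1e-2) for b in student_blocks]

def train_step(x, block_idx):
    """One naive blockwise-distillation step for student block `block_idx`."""
    with torch.no_grad():
        # Redundant teacher execution: blocks 0..block_idx-1 are recomputed
        # only to obtain the input of the block currently under training.
        h = x
        for t in teacher_blocks[:block_idx]:
            h = t(h)
        target = teacher_blocks[block_idx](h)  # teacher output = distillation target
    out = student_blocks[block_idx](h)         # student block mimics its teacher block
    loss = criterion(out, target)
    optimizers[block_idx].zero_grad()
    loss.backward()
    optimizers[block_idx].step()
    return loss.item()

# Example: in the data-parallel baseline, each worker would run one such step
# for its assigned block on the same batch, repeating the teacher prefix.
batch = torch.randn(8, 64)
losses = [train_step(batch, i) for i in range(len(student_blocks))]
```

Per the abstract, Pipe-BD applies pipeline parallelism so that this redundant teacher execution is eliminated and per-device batch sizes can grow, with hybrid parallelism added to balance the per-block workload.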
Related Researcher
- College of Engineering
- Department of Electrical and Computer Engineering