Publications

Detailed Information

Slice-and-Forge: Making Better Use of Caches for Graph Convolutional Network Accelerators

Cited 0 times in Web of Science; cited 1 time in Scopus
Authors

Yoo, Mingi; Song, Jaeyong; Lee, Hyeyoon; Lee, Jounghoo; Kim, Namhyung; Kim, Youngsok; Lee, Jinho

Issue Date
2022-10
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT, pp.40-53
Abstract
© 2022 Association for Computing Machinery. Graph convolutional networks (GCNs) are becoming increasingly popular as they can process a wide variety of data formats that prior deep neural networks cannot easily support. One key challenge in designing hardware accelerators for GCNs is the vast size and randomness of their data access patterns, which greatly reduce the effectiveness of the limited on-chip cache. To improve cache effectiveness by mitigating irregular data accesses, prior studies often employ the vertex tiling techniques used in traditional graph processing applications. While effective at enhancing cache efficiency, these approaches are often sensitive to the tiling configuration, whose optimal setting heavily depends on the target input dataset. Furthermore, existing solutions require manual tuning through trial and error or rely on sub-optimal analytical models. In this paper, we propose Slice-and-Forge (SnF), an efficient hardware accelerator for GCNs which greatly improves the effectiveness of the limited on-chip cache. SnF adopts a tiling strategy named feature slicing, which splits the features into vertical slices and processes them in the outermost loop of the execution. This particular choice causes identical computational patterns over the irregular graph data to repeat across multiple rounds. Taking advantage of such repetitions, SnF dynamically tunes its tile size. Our experimental results reveal that SnF achieves 1.73× higher performance in geomean compared to prior work in multi-engine settings, and 1.46× higher performance in geomean in small-scale settings, without the need for off-line analyses.
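The feature-slicing idea described in the abstract can be illustrated with a minimal software sketch: neighbor aggregation is computed one vertical feature slice at a time, so the same irregular edge traversal repeats once per slice. This is an assumption-laden illustration, not the paper's hardware design; the names `aggregate_feature_sliced`, `edges`, and `slice_width` are hypothetical.

```python
import numpy as np

def aggregate_feature_sliced(edges, features, slice_width):
    """Illustrative sketch of feature slicing (not the SnF hardware):
    sum-aggregate each vertex's in-neighbors, one vertical feature
    slice at a time."""
    num_vertices, num_features = features.shape
    out = np.zeros_like(features)
    # Outermost loop over feature slices: the irregular graph traversal
    # repeats identically for every slice, which is the repetition SnF
    # exploits to tune its tile size online.
    for start in range(0, num_features, slice_width):
        end = min(start + slice_width, num_features)
        for src, dst in edges:
            out[dst, start:end] += features[src, start:end]
    return out
```

Because each slice is narrow, a vertex's partial feature row can stay resident in a small cache while its neighbors are streamed, which is the cache benefit the abstract attributes to processing slices in the outermost loop.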
ISSN
1089-795X
URI
https://hdl.handle.net/10371/200421
DOI
https://doi.org/10.1145/3559009.3569693
Files in This Item:
There are no files associated with this item.
Appears in Collections:

Related Researcher

  • College of Engineering
  • Department of Electrical and Computer Engineering
Research Area: AI Accelerators, Distributed Deep Learning, Neural Architecture Search

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
