Detailed Information

SGCN: Exploiting Compressed-Sparse Features in Deep Graph Convolutional Network Accelerators

DC Field                          Value
dc.contributor.author             Yoo, Mingi
dc.contributor.author             Song, Jaeyong
dc.contributor.author             Lee, Jounghoo
dc.contributor.author             Kim, Namhyung
dc.contributor.author             Kim, Youngsok
dc.contributor.author             Lee, Jinho
dc.date.accessioned               2024-05-02T05:37:54Z
dc.date.available                 2024-05-02T05:37:54Z
dc.date.created                   2023-06-08
dc.date.issued                    2023
dc.identifier.citation            IEEE High-Performance Computer Architecture Symposium Proceedings, Vol.2023-February, pp.1-14
dc.identifier.issn                1530-0897
dc.identifier.uri                 https://hdl.handle.net/10371/200399
dc.description.abstract           Graph convolutional networks (GCNs) are becoming increasingly popular as they overcome the limited applicability of prior neural networks. One recent trend in GCNs is the use of deep network architectures. As opposed to traditional GCNs, which span only around two to five layers, modern GCNs now incorporate tens to hundreds of layers with the help of residual connections. From such deep GCNs, we find an important characteristic: they exhibit very high intermediate feature sparsity. This reveals a new opportunity for accelerators to exploit in GCN executions that was previously not present. In this paper, we propose SGCN, a fast and energy-efficient GCN accelerator which fully exploits the sparse intermediate features of modern GCNs. SGCN suggests several techniques to achieve significantly higher performance and energy efficiency than the existing accelerators. First, SGCN employs a GCN-friendly feature compression format. We focus on reducing the off-chip memory traffic, which often is the bottleneck for GCN executions. Second, we propose microarchitectures for seamlessly handling the compressed feature format. Specifically, we modify the aggregation phase of GCN to process compressed features, and design a combination engine that can output compressed features at no extra memory traffic cost. Third, to better handle locality in the presence of varying sparsity, SGCN employs sparsity-aware cooperation. Sparsity-aware cooperation creates a pattern that exhibits multiple reuse windows, such that the cache can capture diverse sizes of working sets and therefore adapt to the varying level of sparsity. Through a thorough evaluation, we show that SGCN achieves, in geometric mean, a 1.66x speedup and 44.1% higher energy efficiency compared to existing accelerators.
dc.language                       English
dc.publisher                      IEEE High-Performance Computer Architecture Symposium Proceedings
dc.title                          SGCN: Exploiting Compressed-Sparse Features in Deep Graph Convolutional Network Accelerators
dc.type                           Article
dc.identifier.doi                 10.1109/HPCA56546.2023.10071102
dc.citation.journaltitle          IEEE High-Performance Computer Architecture Symposium Proceedings
dc.identifier.wosid               000982303200001
dc.identifier.scopusid            2-s2.0-85151728027
dc.citation.endpage               14
dc.citation.startpage             1
dc.citation.volume                2023-February
dc.description.isOpenAccess      Y
dc.contributor.affiliatedAuthor   Lee, Jinho
dc.type.docType                   Proceedings Paper
dc.description.journalClass       1
dc.subject.keywordAuthor          Graph Convolutional Networks
dc.subject.keywordAuthor          Sparsity
dc.subject.keywordAuthor          Compressed Format
dc.subject.keywordAuthor          Accelerators
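
Illustrative note: the abstract mentions a GCN-friendly feature compression format and an aggregation phase that consumes compressed features. The Python/NumPy sketch below is only a minimal software analogy of that idea, assuming a CSR-style row compression of sparse intermediate features and a sum-aggregation that reads neighbor features directly in compressed form; it is not SGCN's actual format, dataflow, or microarchitecture, and the function names (compress_features, aggregate_compressed) are hypothetical.

import numpy as np

def compress_features(features):
    # CSR-style row compression: keep only the nonzero entries of each node's
    # feature vector (deep-GCN intermediate features are highly sparse after
    # ReLU), storing values, their column indices, and per-row pointers.
    values, cols, row_ptr = [], [], [0]
    for row in features:
        nz = np.nonzero(row)[0]
        values.extend(row[nz])
        cols.extend(nz)
        row_ptr.append(len(values))
    return (np.asarray(values, dtype=features.dtype),
            np.asarray(cols, dtype=np.int64),
            np.asarray(row_ptr, dtype=np.int64))

def aggregate_compressed(adj_lists, compressed, num_nodes, dim):
    # Sum-aggregation that reads neighbor features directly from the
    # compressed representation and writes a dense output (a combination /
    # update step would follow in a full GCN layer).
    values, cols, row_ptr = compressed
    out = np.zeros((num_nodes, dim), dtype=values.dtype)
    for v in range(num_nodes):
        for u in adj_lists[v]:                  # neighbors of node v
            s, e = row_ptr[u], row_ptr[u + 1]   # u's nonzero slice
            out[v, cols[s:e]] += values[s:e]
    return out

# Tiny usage example: a 4-node graph with ReLU-sparsified random features.
rng = np.random.default_rng(0)
feats = np.maximum(rng.standard_normal((4, 8)), 0.0)
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
agg = aggregate_compressed(adj, compress_features(feats), num_nodes=4, dim=8)
dense_ref = np.stack([feats[adj[v]].sum(axis=0) for v in range(4)])
assert np.allclose(agg, dense_ref)  # matches the dense reference aggregation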

Related Researcher

Lee, Jinho
  • College of Engineering
  • Department of Electrical and Computer Engineering
Research Area: AI Accelerators, Distributed Deep Learning, Neural Architecture Search


