GuardiaNN: Fast and Secure On-Device Inference in TrustZone Using Embedded SRAM and Cryptographic Hardware

dc.contributor.author: Choi, Jinwoo
dc.contributor.author: Kim, Jaeyeon
dc.contributor.author: Lim, Chaemin
dc.contributor.author: Lee, Suhyun
dc.contributor.author: Lee, Jinho
dc.contributor.author: Song, Dokyung
dc.contributor.author: Kim, Youngsok
dc.date.accessioned: 2023-08-23T05:56:05Z
dc.date.available: 2023-08-23T05:56:05Z
dc.date.created: 2023-08-21
dc.date.issued: 2022
dc.identifier.citation: Middleware 2022 - Proceedings of the 23rd ACM/IFIP International Middleware Conference, pp. 15-28
dc.identifier.uri: https://hdl.handle.net/10371/195397
dc.description.abstract: As more and more mobile/embedded applications employ Deep Neural Networks (DNNs) involving sensitive user data, mobile/embedded devices must provide a highly secure DNN execution environment to prevent privacy leaks. Aimed at securing DNN data, recent studies execute part of a DNN in a trusted execution environment (e.g., TrustZone) to isolate DNN execution from the other processes; however, as the trusted execution environments for mobile/embedded devices provide limited memory protection, DNN data remain unencrypted in DRAM and become vulnerable to physical attacks. The devices can prevent such physical attacks by keeping DNN data encrypted in DRAM; when DNN data are referenced during DNN execution, they are loaded into the SRAM and decrypted by a CPU core. Unfortunately, using the SRAM with demand paging greatly increases DNN execution time due to the inefficient use of the SRAM and the high CPU consumption of data encryption/decryption. In this paper, we present GuardiaNN, a fast and secure DNN framework which greatly accelerates DNN execution without sacrificing security guarantees. To accelerate secure DNN execution, GuardiaNN first reduces slow DRAM accesses with direct convolutions and maximizes the reuse of SRAM-stored data with DNN-friendly SRAM management. Then, aimed at dedicating the limited CPU resources to DNN execution, GuardiaNN offloads DNN data encryption/decryption onto secure cryptographic hardware and employs pipelining to overlap DNN execution with the encryption/decryption. For eight DNNs chosen from five representative mobile/embedded application domains, our implementation of GuardiaNN on an STM32MP157C-DK2 development board achieves a geomean speedup of 15.3x and a geomean energy efficiency improvement of 15.2x over a baseline secure DNN framework which employs demand-paged SRAM to secure sensitive data.
dc.publisher: Association for Computing Machinery, Inc
dc.title: GuardiaNN: Fast and Secure On-Device Inference in TrustZone Using Embedded SRAM and Cryptographic Hardware
dc.type: Article
dc.identifier.doi: 10.1145/3528535.3531513
dc.citation.journaltitle: Middleware 2022 - Proceedings of the 23rd ACM/IFIP International Middleware Conference
dc.identifier.wosid: 001061556200002
dc.identifier.scopusid: 2-s2.0-85132291344
dc.citation.startpage: 15
dc.citation.endpage: 28
dc.description.isOpenAccess: N
dc.contributor.affiliatedAuthor: Lee, Jinho
dc.type.docType: Proceedings Paper
dc.description.journalClass: 1
dc.subject.keywordAuthor: ARM TrustZone
dc.subject.keywordAuthor: cryptographic hardware
dc.subject.keywordAuthor: DRAM encryption
dc.subject.keywordAuthor: embedded SRAM
dc.subject.keywordAuthor: on-device inference
dc.subject.keywordAuthor: security
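
The abstract above describes two performance techniques: keeping decrypted data in embedded SRAM for reuse by direct convolutions, and offloading DRAM encryption/decryption to cryptographic hardware so it can be pipelined with DNN execution. The C sketch below illustrates one way such a decrypt-compute pipeline could be structured around a double-buffered SRAM tile; it is not GuardiaNN's actual code, and crypto_hw_decrypt_async, crypto_hw_wait, direct_conv_tile, run_layer_pipelined, sram_buf, and TILE_BYTES are hypothetical names with dummy stub bodies standing in for a real crypto-peripheral driver and convolution kernel.

#include <stdint.h>
#include <stddef.h>

#define TILE_BYTES 4096u                 /* assumed SRAM tile size */

static uint8_t sram_buf[2][TILE_BYTES];  /* double buffer placed in embedded SRAM */

/* Stand-in for starting an asynchronous decryption on the crypto engine.
 * A real driver would program DMA and AES registers; here a dummy XOR runs
 * synchronously so the sketch stays self-contained. */
static void crypto_hw_decrypt_async(const uint8_t *src, uint8_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] ^ 0xA5;          /* dummy transform, not real crypto */
}

/* Stand-in for waiting until the crypto engine reports completion. */
static void crypto_hw_wait(void)
{
    /* A real driver would poll a status register or wait on an interrupt. */
}

/* Direct convolution over one decrypted weight tile; the real kernel would
 * take layer shapes and input activations, which are elided here. */
static void direct_conv_tile(const uint8_t *weights, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i % 64] += (float)weights[i]; /* placeholder arithmetic; out holds >= 64 floats */
}

/* Pipelined layer execution: decrypt tile t+1 into the idle SRAM buffer
 * while the CPU convolves the already-decrypted tile t. */
void run_layer_pipelined(const uint8_t *enc_weights, size_t n_tiles, float *out)
{
    crypto_hw_decrypt_async(enc_weights, sram_buf[0], TILE_BYTES);
    crypto_hw_wait();                               /* tile 0 ready */

    for (size_t t = 0; t < n_tiles; t++) {
        const uint8_t *cur = sram_buf[t & 1];
        if (t + 1 < n_tiles)                        /* start decrypting the next tile */
            crypto_hw_decrypt_async(enc_weights + (t + 1) * TILE_BYTES,
                                    sram_buf[(t + 1) & 1], TILE_BYTES);

        direct_conv_tile(cur, out, TILE_BYTES);     /* compute overlaps decryption */
        crypto_hw_wait();                           /* ensure next tile is in SRAM */
    }
}

Because the stand-in "asynchronous" call completes synchronously, the overlap is only illustrative; on real hardware the decryption of tile t+1 and the convolution of tile t would proceed concurrently, keeping the CPU core dedicated to DNN execution as the abstract describes.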
Files in This Item:
There are no files associated with this item.

Related Researcher

Lee, Jinho
  • College of Engineering
  • Department of Electrical and Computer Engineering
Research Area: AI Accelerators, Distributed Deep Learning, Neural Architecture Search
