Publications
Occlumency: Privacy-preserving Remote Deep-learning Inference Using SGX
Cited 50 times in Web of Science; cited 45 times in Scopus
- Authors
- Issue Date
- 2019-10
- Citation
- The Annual International Conference on Mobile Computing and Networking (MobiCom)
- Abstract
- Deep learning (DL) is receiving huge attention as an enabling technique for emerging mobile and IoT applications. It is common practice to offload DNN model inference to cloud services because of the models' high computation and memory cost. However, such cloud-offloaded inference raises serious privacy concerns: malicious external attackers or untrustworthy internal cloud administrators may leak highly sensitive private data such as images, voice, and text. In this paper, we propose Occlumency, a novel cloud-driven solution designed to protect user privacy without compromising the benefit of powerful cloud resources. Occlumency leverages a secure SGX enclave to preserve the confidentiality and integrity of user data throughout the entire DL inference process. DL inference in an SGX enclave, however, suffers severe performance degradation due to the limited physical memory space and inefficient page swapping. We designed a suite of novel techniques to accelerate DL inference inside the memory-constrained enclave and implemented Occlumency on top of Caffe. Our experiments with various DNN models show that Occlumency improves inference speed by 3.6x over baseline DL inference in SGX and achieves secure DL inference within a 72% latency overhead compared to inference in the native environment.
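- The core constraint the abstract describes is that an SGX enclave's protected memory is far smaller than a typical DNN's weight footprint, so naive in-enclave inference triggers costly page swapping. The paper's actual techniques are not reproduced here; the sketch below only illustrates the general idea of streaming a model through a fixed memory budget layer by layer, keeping one layer's weights resident at a time. All names and sizes (`EPC_BUDGET_MB`, the layer list, `run_inference`) are hypothetical.

```python
# Hypothetical sketch: process a DNN layer by layer under a fixed enclave
# memory budget, so peak resident weight memory never exceeds the budget,
# even though the whole model is much larger. Sizes are illustrative only.

EPC_BUDGET_MB = 90  # assumed usable enclave memory for weights, in MB

# (layer name, weight size in MB) for a hypothetical CNN
LAYERS = [("conv1", 10), ("conv2", 40), ("fc1", 300), ("fc2", 60)]

def run_inference(layers, budget_mb):
    """Stream weights through the enclave instead of loading the whole model.

    Returns the peak weight memory (MB) resident in the enclave at once.
    """
    peak = 0
    for name, size_mb in layers:
        # A layer larger than the budget would be copied in, used, and freed
        # in budget-sized chunks; at most `budget_mb` MB is resident at once.
        resident = min(size_mb, budget_mb)
        peak = max(peak, resident)
        # ... copy (and decrypt/integrity-check) weights, apply the layer,
        # then release them before moving to the next layer.
    return peak

peak = run_inference(LAYERS, EPC_BUDGET_MB)
print(peak)  # peak stays within the enclave budget
```

The total model here (410 MB) far exceeds the 90 MB budget, yet the loop's peak residency never does; the real system additionally has to hide the copy/decrypt latency behind computation, which is where the paper's acceleration techniques come in.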
Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.