Modeling Cache and Application Performance on Modern Shared Memory Multiprocessors

Cited 1 time in Web of Science; cited 1 time in Scopus
Authors

Yook, Junsung; Egger, Bernhard

Issue Date
2021-01
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
19th IEEE International Symposium on Parallel and Distributed Processing with Applications, 11th IEEE International Conference on Big Data and Cloud Computing, 14th IEEE International Conference on Social Computing and Networking and 11th IEEE International Conference on Sustainable Computing and Communications, ISPA/BDCloud/SocialCom/SustainCom 2021, pp.1151-1158
Abstract
© 2021 IEEE. Modern processors include a cache to reduce the access latency to off-chip memory. In shared-memory multiprocessors, the same data can be stored in multiple processor-local caches. These private copies reduce contention on the memory system but incur a replication overhead: multiple copies consume valuable cache resources and thus increase the likelihood of capacity misses. Maintaining cache coherence is another difficulty caused by multiple copies. In particular, setting a cache line's status to exclusive in one cache requires invalidating all other shared copies, which can significantly stress the processor interconnect. Furthermore, loading data from a remote cache incurs a large overhead. In the absence of source code or data layout modifications, rearranging a parallel application's threads can often reduce cache line replication significantly: by mapping threads that frequently access the same cache lines to the same processor node, redundant duplicates and excessive invalidations can be minimized. In this paper, we devise a closed queuing network model to compare the performance of different thread arrangements on the nodes of a multiprocessor system in order to predict the expected optimal arrangement. The inputs to the model are obtained through a single profiling run. The outputs of the queuing network are performance indices such as throughput, utilization, and latency for the different components of the memory system. Based on these metrics, we compute the memory stall time of individual cores and predict application runtime. Evaluated on a 72-core, 4-node Intel Xeon architecture, the presented model identifies the best thread arrangement from a set of six configurations for 20 out of 21 parallel applications from various benchmark suites.
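The abstract mentions computing throughput, utilization, and latency from a closed queuing network. A standard way to evaluate such a network is exact Mean Value Analysis (MVA); the sketch below is a minimal, illustrative MVA solver, not the paper's actual model. The stations and service demands (local cache, interconnect, remote DRAM) and their values are hypothetical placeholders.

```python
# Exact Mean Value Analysis (MVA) for a closed queuing network of FCFS
# stations -- an illustrative sketch only; the stations and demands are
# hypothetical, not taken from the paper.

def mva(demands, n_customers):
    """Solve a closed queuing network by exact MVA.

    demands:     per-station service demand (e.g. seconds per request)
    n_customers: number of circulating customers (e.g. cores issuing
                 memory requests)
    Returns (throughput, per-station utilization, per-station residence time).
    """
    q = [0.0] * len(demands)              # mean queue length at each station
    for n in range(1, n_customers + 1):
        # residence time = service demand inflated by the queue already there
        r = [d * (1 + qk) for d, qk in zip(demands, q)]
        x = n / sum(r)                    # system throughput (Little's law)
        q = [x * rk for rk in r]          # update mean queue lengths
    util = [x * d for d in demands]       # station utilization = X * D
    return x, util, r

# Hypothetical memory-system stations: local cache, interconnect, remote DRAM
throughput, utilization, residence = mva([0.002, 0.005, 0.010], n_customers=8)
```

From such outputs, a per-core memory stall time could be estimated as the sum of residence times along a request's path, which is the kind of metric the paper feeds into its runtime prediction.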
ISSN
2158-9178
URI
https://hdl.handle.net/10371/183793
DOI
https://doi.org/10.1109/ISPA-BDCloud-SocialCom-SustainCom52081.202