
Worst-case Performance Analysis Techniques for Distributed Real-Time Embedded Systems

Authors

최준철

Advisor
하순회
Major
College of Engineering, Department of Computer Science and Engineering
Issue Date
2018-08
Publisher
Graduate School, Seoul National University
Description
Doctoral dissertation, Graduate School of Seoul National University, College of Engineering, Department of Computer Science and Engineering, August 2018. Advisor: 하순회.
Abstract
For the design of embedded systems that support real-time applications, it is necessary to guarantee that real-time constraints are satisfied. After applications are mapped onto a candidate architecture, we check the feasibility of the architecture by estimating its performance. Fast estimation lets us explore a wider design space of architecture selection and application mapping, while more accurate estimation reduces the system cost. Tightly estimating the worst-case response time of an application in a distributed embedded system remains a challenging problem, especially when there are dependencies between tasks.



Multi-core systems have become popular design elements not only for general-purpose systems but also for real-time embedded systems, since single-core systems have reached their performance limits and no longer provide scalable performance gains. Multi-core systems consisting of many cores usually have shared resources such as a shared L2 cache, memory, and peripheral devices. Because these resources are shared by the applications executed on the cores, an application may experience non-deterministic resource access delay due to contention caused by simultaneous accesses from several cores. In the worst-case response time (WCRT) analysis of multi-core systems with shared resources, the non-deterministic arbitration delay due to resource contention must be bounded conservatively.



Due to continuous scaling of semiconductor manufacturing technology and low-voltage/high-frequency operating conditions, today's embedded systems operate under many reliability threats. In the case of permanent faults, a part of the system can be permanently damaged, resulting in a system crash, while in the presence of transient faults some functions are temporarily out of order. In deep submicron technology, transient faults are expected to be the main source of errors in very-large-scale integration (VLSI) circuits. When a system is composed of several tasks with different reliability requirements, which is called a mixed-criticality system, tasks are typically hardened either by adding hardware resources or by re-executing the defective part in order to fulfill the reliability requirements.



The behavior of an application may also change in dynamic situations owing to the increased complexity of applications. For instance, a video decoder switches between different behaviors according to the type of the encoded input frame, and a task may change its behavior according to its quality-of-service requirement. When an application has such dynamic behaviors, its worst-case response time needs to be analyzed as well.



In order to guarantee real-time constraints, this dissertation proposes techniques that estimate the worst-case performance of an application in such distributed real-time systems. First, it proposes a baseline worst-case response time analysis technique, called hybrid performance analysis, that combines the response time analysis (RTA) technique and the scheduling time bound analysis (STBA) technique to compute a tighter bound faster. The proposed hybrid approach treats intra-graph and inter-graph interference differently. Intra-graph interference arises when a task is interfered with or preempted by tasks in the same application, and it can be estimated accurately by the STBA technique. Inter-graph interference, on the other hand, is caused by tasks in other task graphs and is estimated with a different analysis method based on RTA.
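
For reference, the RTA building block mentioned above is, in its classical form, the fixed-point recurrence R = C + sum over higher-priority tasks j of ceil(R/T_j) * C_j for fixed-priority preemptive scheduling. The Python sketch below iterates this recurrence with hypothetical task parameters; it illustrates only the classical RTA ingredient, not the dissertation's hybrid RTA/STBA analysis.

    import math

    def response_time(wcet, deadline, higher_priority):
        # Classical RTA fixed point: R = C + sum_j ceil(R / T_j) * C_j,
        # where higher_priority is a list of (wcet, period) pairs.
        r = wcet
        while r <= deadline:
            r_next = wcet + sum(math.ceil(r / t) * c for (c, t) in higher_priority)
            if r_next == r:      # converged: r is the response time bound
                return r
            r = r_next
        return None              # no fixed point within the deadline

    # Hypothetical example: task with WCET 2 and deadline 12,
    # higher-priority tasks (WCET, period) = (1, 4) and (2, 10)
    print(response_time(2, 12, [(1, 4), (2, 10)]))   # prints 6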



Based on the baseline technique, this dissertation proposes three techniques that address different design aspects of multi-core real-time embedded systems. The first technique models shared resource contention to find a tight upper bound on the arbitration delay. While the worst-case resource demand of each processing element is computed from the event stream model, similarly to the reference technique, the estimation accuracy is improved significantly by accounting for the scheduling pattern of tasks. The proposed shared resource contention analysis is then extended to support dependent tasks: to find a tight upper bound on the arbitration delay, a shared resource demand bound is derived for each processing element while considering task dependencies. In addition, the industrial-strength engine management benchmark provided by Bosch GmbH is studied to demonstrate the applicability of the proposed shared resource contention analysis.
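
The following sketch shows, under simplifying assumptions, how an event-stream-style request bound can be turned into an arbitration-delay bound: each processing element is described by a function eta(delta) giving the maximum number of shared-resource requests it can issue in any window of length delta, and with a work-conserving arbiter every request from another element delays the element under analysis by at most one access time. The helper name and the periodic request bounds are hypothetical, and the dissertation's analysis refines such bounds by exploiting scheduling patterns and task dependencies.

    import math

    def arbitration_delay_bound(window, request_bounds, my_pe, access_time):
        # Sum the worst-case number of interfering requests from all other
        # processing elements in the window and charge one access time each.
        return access_time * sum(eta(window)
                                 for pe, eta in request_bounds.items()
                                 if pe != my_pe)

    # Hypothetical periodic bounds: eta(delta) = ceil(delta / period) * requests_per_job
    bounds = {
        "PE0": lambda d: math.ceil(d / 10) * 3,
        "PE1": lambda d: math.ceil(d / 20) * 5,
    }
    print(arbitration_delay_bound(40, bounds, my_pe="PE0", access_time=0.1))  # prints 1.0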



To support fault-tolerant mixed-criticality systems with reliability constraints, this dissertation proposes a novel optimization technique with worst-case response time guarantees. Typically, in fault-tolerant multi-core systems, tasks can be replicated or re-executed in order to enhance reliability. In addition, under mixed-criticality scheduling policies, low-criticality tasks can be dropped at run time. Such uncertainties caused by hardening and mixed-criticality scheduling make WCRT analysis very difficult. The analysis is improved to reduce the pessimism of WCRT estimates by considering the maximum number of faults to be tolerated. Further, the mixed-criticality scheduling is improved by allowing partial dropping of low-criticality tasks. On top of these, the design space of hardening, task-to-core mapping, and quality of service of multi-core mixed-criticality systems is explored.
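
As a rough illustration of why bounding the number of tolerated faults tightens the analysis, the sketch below contrasts a naive interference bound that inflates every higher-priority job for re-execution with one that charges re-execution time for at most max_faults faults in the analysis window, assuming each fault costs at most one extra execution of the largest interfering task. The function names and the cost model are assumptions for illustration only, not the dissertation's actual fault-aware WCRT analysis.

    import math

    def interference_naive(r, hp_tasks):
        # Pessimistic model: every higher-priority job is re-executed once.
        return sum(math.ceil(r / t) * (2 * c) for (c, t) in hp_tasks)

    def interference_fault_bounded(r, hp_tasks, max_faults):
        # Fault-free interference plus at most max_faults re-executions,
        # each costing at most the largest interfering WCET.
        base = sum(math.ceil(r / t) * c for (c, t) in hp_tasks)
        extra = max_faults * max((c for (c, t) in hp_tasks), default=0)
        return base + extra

    # Hypothetical window r = 20 with hp tasks (WCET, period) = (1, 4) and (2, 10)
    print(interference_naive(20, [(1, 4), (2, 10)]))              # prints 18
    print(interference_fault_bounded(20, [(1, 4), (2, 10)], 1))   # prints 11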



The last problem addressed in this dissertation is analyzing the WCRT of an application with dynamic behaviors. The application is assumed to be specified as a multi-mode dataflow (MMDF) graph that switches to another behavior through a mode transition. When a mode transition occurs during the execution of the MMDF application, the previous and the next modes may be executed at the same time, inducing complex interference between the modes. Moreover, additional costs such as task migration cost may arise if a task migrates to another processor after the mode transition. The interference between modes and the migration cost caused by mode transitions both affect the worst-case performance of the MMDF application.
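
A trivially conservative bound for an iteration that spans a mode transition simply stacks the WCRTs of the two modes and the migration cost, as in the sketch below; the numbers and function name are hypothetical, and the dissertation's MMDF analysis is precisely about computing tighter bounds than such worst-case stacking by modeling the actual interference between modes.

    def naive_transition_wcrt(wcrt_old_mode, wcrt_new_mode, migration_cost):
        # Coarse upper bound: old-mode tail, new-mode execution, and task
        # migration are assumed to serialize completely in the worst case.
        return wcrt_old_mode + wcrt_new_mode + migration_cost

    print(naive_transition_wcrt(8.0, 6.5, 1.2))   # prints 15.7 with hypothetical numbers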



Through extensive experiments with real-life benchmarks and synthetic examples, the performance of the proposed techniques is verified.
Language
English
URI
https://hdl.handle.net/10371/143338