In contemporary computer-based systems, runtime behaviour plays a pivotal role, particularly in determining the worst-case latency for processing critical tasks. Systems subject to such constraints are widely known as real-time systems, with applications spanning industries including medicine, aerospace, automotive, and the military. Real-time requirements matter in these domains because missing a specified deadline can have catastrophic consequences. Dependable adherence to these temporal constraints is therefore imperative, and must be demonstrated through a rigorous certification process against established formal standards.
Traditionally, real-time systems have relied on real-time operating systems (RTOSs), meticulously tailored to meet stringent temporal demands. However, evolving requirements and growing demands for functionality have prompted consideration of alternative software ecosystems such as Linux. While Linux's extensive feature set and widespread adoption are enticing, its fundamental design philosophy diverges significantly from that of real-time systems, which makes the certification process challenging. Adding to the complexity, Linux currently offers no robust guarantees about its runtime behaviour, rendering it unsuitable for deployment in the critical domains mentioned above.
In response to these challenges, this thesis introduces a novel quantitative, semiformal methodology for examining and validating Linux's runtime behaviour, with particular focus on determining worst-case latency. This approach bridges the gap between Linux's design principles and the demands of real-time applications, making Linux a more viable option in domains where reliability and temporal determinism are crucial.