University of Hertfordshire

Dr Raimund Kirner

Reader


Postal address:
University of Hertfordshire, Hatfield, Hertfordshire
United Kingdom

Expertise

Research interests

My research is driven by real-world problems, and I enjoy contributing to the technical state of the art.

My research interests include the following topics:

Parallel Computing
    Parallel computing poses the challenge that individual jobs executing in parallel may influence each other, for example, with regard to extra-functional properties like execution time. Adequate hardware and software architectures are necessary to bridge the gap between many-core computing and embedded computing. Furthermore, many-core computing can also serve as the basis for novel approaches to robust computing.
    In collaboration with Alex Shafarenko I am the local coordinator at UH of the FP7 project ADVANCE, whose goal is to use probabilistic runtime information to optimize S-Net programs. This includes program transformations and resource management such as load balancing. Since 2012 I have been the principal investigator at UH of the ARTEMIS project CRAFTERS, whose goal is to develop multi-core systems with predictability and reliability.
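
    The interference problem described above can be sketched in a few lines of C with POSIX threads. The two workers below are functionally independent and always compute the same sums, yet because they share caches, memory bandwidth and cores, each one's execution time depends on what the other is doing; this is a minimal illustration, not code from the projects mentioned.

```c
/* Sketch: two functionally independent parallel jobs whose results
 * never interact, but whose execution times do (shared caches and
 * memory bandwidth) -- the extra-functional interference discussed
 * in the text. */
#include <pthread.h>
#include <stddef.h>

#define N 1000000
static int data[N];

typedef struct { int lo, hi; long sum; } job;

static void *worker(void *arg) {
    job *j = arg;
    long s = 0;
    /* Both workers stream through shared memory here: contention
     * perturbs timing, but never the computed result. */
    for (int i = j->lo; i < j->hi; i++) s += data[i];
    j->sum = s;
    return NULL;
}

static long parallel_sum(void) {
    for (int i = 0; i < N; i++) data[i] = 1;
    job a = {0, N / 2, 0}, b = {N / 2, N, 0};
    pthread_t ta, tb;
    pthread_create(&ta, NULL, worker, &a);
    pthread_create(&tb, NULL, worker, &b);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    return a.sum + b.sum;
}
```

    The functional result is deterministic; only the timing of each worker varies with co-running load, which is exactly why extra-functional properties need their own analysis.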

Worst-Case Execution Time Analysis
    The worst-case execution time (WCET) of a program is the maximum execution time it can exhibit on a concrete target hardware. Knowledge of the WCET of tasks is crucial for the design of real-time systems: only once safe upper bounds for the WCET of all time-critical tasks have been established does it become possible to verify the timeliness of the whole real-time system.
    I have been the principal investigator of the FORTAS-rt project, funded by the FWF. In cooperation with the research group of Prof. Helmut Veith, the research of FORTAS-rt focuses on measurement-based timing analysis using efficient automatic generation of test data.
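
    The measurement-based approach can be sketched as follows: run the task on many generated test inputs, record the cost of each run, and take the maximum as a WCET estimate. The task and the cost metric below are hypothetical stand-ins (loop iterations rather than cycles, to keep the sketch deterministic); note that a measured maximum is only a lower bound on the true WCET, which is why real tools combine it with further analysis or safety margins.

```c
/* Minimal sketch of measurement-based timing analysis:
 * max observed cost over a generated test set. */
#include <stddef.h>

/* Hypothetical task: clear set bits one by one, so the number of
 * loop iterations (our cost proxy) depends on the input data. */
static int task_cost(unsigned x) {
    int steps = 0;
    while (x) { x &= x - 1; steps++; }  /* one iteration per set bit */
    return steps;
}

/* Measurement-based WCET *estimate*: maximum cost observed over the
 * test inputs -- a lower bound on the true WCET, not a safe bound. */
static int wcet_estimate(const unsigned *tests, size_t n) {
    int max = 0;
    for (size_t i = 0; i < n; i++) {
        int c = task_cost(tests[i]);
        if (c > max) max = c;
    }
    return max;
}
```

    The quality of the estimate depends entirely on how well the test-data generator covers the worst-case path, which is why FORTAS-rt pairs measurement with automatic test-data generation.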

Compiler Support for WCET Analysis
    Due to complexity limits and only partially available system descriptions during the analysis, the calculation of the WCET requires additional control-flow information (flow information) to be provided. For the convenience of developers, these flow facts should be given at source-code level, but WCET analysis has to be performed at object-code level to obtain tight results. The compiler therefore has to transform the flow information from source-code level to object-code level, in particular whenever it performs code optimizations that change the control flow.
    I have been principal investigator of the CoSTA project, funded by the FWF. The research in CoSTA focuses on the generation of predictable code patterns on processors with timing anomalies and on the transformation of flow information from source code to object code.
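
    A concrete instance of the transformation problem: a source-level loop-bound annotation becomes invalid once the compiler unrolls the loop, so the flow fact must be rewritten alongside the code. The flow-fact representation below is a hypothetical illustration, not the CoSTA data structures.

```c
/* Sketch of co-transforming a flow fact with a loop optimization.
 * A source-level bound "loop runs at most max_iters times" must be
 * updated when the compiler unrolls the loop. */
typedef struct {
    int loop_id;    /* which loop the fact refers to (hypothetical id) */
    int max_iters;  /* bound on iterations of the loop header */
} flow_fact;

/* After unrolling by 'factor', the unrolled loop header executes at
 * most ceil(max_iters / factor) times, so the fact is rewritten too;
 * leaving the old bound in place would make the WCET bound unsound
 * or needlessly loose, depending on how it is interpreted. */
static flow_fact transform_for_unrolling(flow_fact f, int factor) {
    f.max_iters = (f.max_iters + factor - 1) / factor;  /* ceiling division */
    return f;
}
```

    Every control-flow-changing optimization needs such an accompanying flow-information transformation, which is why this support has to live inside the compiler rather than in a separate annotation tool.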

Predictable Computer Architectures
    WCET analysis is quite complex on modern computer systems. Modern processors contain features like pipelines and caches that maintain an internal state to improve peak performance. Modelling this internal state exactly in order to calculate a tight WCET value is often infeasible; the infeasibility comes from the state explosion due to input-data-dependent control flow and cache states. The development of more predictable software and hardware concepts will reduce the complexity of WCET analysis. In particular, specific programming paradigms can help to reduce the complexity of control-flow path analysis.
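
    One such programming technique is avoiding input-data-dependent control flow altogether: replacing a data-dependent branch with straight-line, branchless code leaves a single execution path for the timing analysis to consider. A sketch of the idea (whether a given compiler and target actually emit branch-free code is not guaranteed; the inputs are assumed to keep the subtraction free of overflow):

```c
/* Two versions of max(): the first has input-dependent control flow,
 * the second executes the same instructions for every input. */
#include <stdint.h>

/* Branching version: the execution path depends on the input data,
 * so path analysis must consider both outcomes. */
static int32_t max_branching(int32_t a, int32_t b) {
    if (a > b) return a;
    return b;
}

/* Branchless version: a single path for every input. Assumes a - b
 * does not overflow and an arithmetic right shift on signed values
 * (true on mainstream targets, but implementation-defined in C). */
static int32_t max_branchless(int32_t a, int32_t b) {
    int32_t diff = a - b;
    int32_t mask = diff >> 31;     /* 0 if a >= b, all ones if a < b */
    return a - (diff & mask);      /* a when a >= b, else a - (a-b) = b */
}
```

    Pushed to its extreme, this style yields single-path code, for which execution time is essentially independent of the input data.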

Verification of Embedded Systems
    Systematic testing is becoming increasingly important in the development of embedded systems. Within our research we focus on techniques to automate the generation of test cases using formal techniques like model checking.
    I have been the local coordinator, together with Peter Puschner, of the TeDES project at the Vienna University of Technology. Within TeDES, a functional testing framework with automatic test-case generation has been developed. I have also been the principal investigator of the SECCO project, funded by the FWF, with which we initiated the novel research field of preserving structural code coverage during code optimization. Systematic generation of test data at source-code level is especially useful for embedded computing, where portability of development and verification frameworks is of high importance. The work in SECCO enables code optimization during compilation while still preserving the structural code coverage initially achieved by a systematic test-data generation framework at source-code level. A prototype implementation based on the GCC compiler has shown that coverage preservation can be achieved at negligible performance cost, meaning that additional safety and performance are not contradicting goals.
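
    A small illustration of the underlying problem: the test set below achieves full branch coverage of the source-level decision, but the second conjunct is redundant (x > 0 implies x > -1), a typical target for compile-time simplification. After such an optimization the object code contains fewer decisions than the source, so coverage demonstrated at source level does not automatically carry over; this is the mapping that coverage-preserving transformations are meant to maintain. The coverage bookkeeping here is hypothetical, for illustration only.

```c
/* Sketch: source-level branch coverage of a decision that an
 * optimizer may simplify away at object-code level. */
#include <stdbool.h>
#include <stddef.h>

static bool taken_true, taken_false;  /* source-level outcomes observed */

/* The second condition is redundant and thus foldable by a compiler. */
static int classify(int x) {
    if (x > 0 && x > -1) { taken_true = true; return 1; }
    taken_false = true;
    return 0;
}

/* Branch coverage of the decision: both outcomes must be exercised. */
static bool branch_covered(const int *tests, size_t n) {
    taken_true = taken_false = false;
    for (size_t i = 0; i < n; i++) classify(tests[i]);
    return taken_true && taken_false;
}
```

    A coverage-preserving compiler either keeps the object-level decision structure compatible with the source-level coverage argument or proves that the transformation cannot invalidate it.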