The Laboratory for Digitalisation primarily focuses on the intersection of three research areas: Quantum Computing, Systems Engineering, and Software Engineering. Future computing systems will leverage non-classical algorithms, and their hardware and software architectures need to combine the advantages of classical and quantum processing units. Consequently, scientific progress requires interdisciplinary thinking across fields now more than ever. The group seeks cross-cutting answers to highly topical scientific questions and actively transfers its results into applications.
Tom Krüger has joined the team as doctoral student in the field of quantum computing, contributing to the TAQO-PAM project. Welcome, Tom!
Wolfgang Mauerer, Ralf Ramsauer and Andrej Utz present results of the iDev 4.0 project at SEMICON Europa 2022 in Munich.
Abstract: The advent of multi-core CPUs in nearly all embedded markets has prompted an architectural trend towards combining safety critical and uncritical software on single hardware units. We present an architecture for mixed-criticality systems based on Linux that allows for the consolidation of critical and uncritical components onto a single hardware unit.
In the context of the iDev 4.0 project, the use-case of this technological building block is to reduce the overall amount of distributed computational hardware components across semiconductor assembly lines in fabs.
CPU virtualisation extensions enable strict and static partitioning of hardware by direct assignment of resources, which allows us to boot additional operating systems or bare-metal applications alongside Linux. The hypervisor Jailhouse is at the core of the architecture and ensures that the resulting domains may serve workloads of different criticality and cannot interfere in unintended ways. This retains Linux’s feature-richness in uncritical parts, while frugal safety- and real-time-critical applications execute in isolated domains. Architectural simplicity is a central aspect of our approach and a precondition for reliable implementability and successful certification.
In this work, we present our envisioned base system architecture, and elaborate on the implications of transitioning from existing legacy systems to a consolidated environment.
Contribution to CORE A* conference ACM SIGMOD driven by Manuel Schönberger and Wolfgang Mauerer breaks new ground for quantum computing in the database community (PDF).
Abstract: The prospect of achieving computational speed-ups by exploiting quantum phenomena makes the use of quantum processing units (QPUs) attractive for many algorithmic database problems. Query optimisation, which concerns problems that typically need to explore large search spaces, seems like an ideal match for known quantum algorithms. We present the first quantum implementation of join ordering, one of the most investigated and fundamental query optimisation problems, based on a reformulation as quadratic unconstrained binary optimisation problems. We empirically characterise our method on two state-of-the-art approaches (gate-based quantum computing and quantum annealing), and identify speed-ups compared to the best known classical join ordering approaches for input sizes that can be processed with current quantum annealers. However, we also confirm that the limits of early-stage technology are quickly reached. Current QPUs are classified as noisy intermediate-scale quantum (NISQ) computers, and are restricted by a variety of limitations that reduce their capabilities compared to ideal future quantum computers, which prevents us from scaling up problem dimensions and reaching practical utility. To overcome these challenges, our formulation accounts for specific QPU properties and limitations, and allows us to trade off achievable solution quality against possible problem size. In contrast to all prior work on quantum computing for query optimisation and database-related challenges, we go beyond currently available QPUs and explicitly target the scalability limitations: Using insights gained from numerical simulations and our experimental analysis, we identify key criteria for co-designing QPUs to improve their usefulness for join ordering, and show how even relatively minor physical architectural improvements can result in substantial enhancements. Finally, we outline a path towards practical utility of custom-designed QPUs.
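To illustrate the class of problems quantum annealers accept, here is a deliberately tiny, hypothetical sketch (the matrix values are invented and the brute-force check is for illustration only, not the paper's actual join-ordering encoding): a QUBO instance minimises x^T Q x over binary vectors, and small instances can be verified classically by enumeration.

```python
import itertools
import numpy as np

# Illustrative QUBO instance (values invented, not the paper's encoding):
# quantum annealers minimise x^T Q x over binary vectors x. Tiny instances
# can be checked classically by exhaustive enumeration.
Q = np.array([
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.0],
])  # upper-triangular QUBO matrix

def qubo_energy(Q, x):
    """Energy x^T Q x of a binary assignment x."""
    return float(x @ Q @ x)

def brute_force_minimum(Q):
    """Enumerate all 2^n binary vectors; feasible only for tiny n."""
    n = Q.shape[0]
    best = min((np.array(bits) for bits in itertools.product((0, 1), repeat=n)),
               key=lambda x: qubo_energy(Q, x))
    return best, qubo_energy(Q, best)

x_best, e_best = brute_force_minimum(Q)
print(x_best, e_best)  # optimal assignment and its energy
```

On an annealer, the same matrix Q would be handed to the device instead of being enumerated; the paper's contribution lies in constructing such matrices for join ordering under real QPU constraints.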
Felix Greiwe has joined the group as doctoral student in the field of quantum computing, contributing to the TAQO-PAM project. Welcome, Felix!
Contribution to the highly competitive Open Source Summit (with acceptance rates below 20%) in Yokohama, Japan, by Benno Bielmeier and Wolfgang Mauerer.
Abstract: Software for safety-critical systems must meet strict functional and temporal requirements. Since it is impossible to exhaustively test the required qualities, formal verification techniques have been devised. However, these approaches are usually only applicable to small systems, and require software architecture and development to consider verification goals from the ground up. Safety-critical systems face an increasing demand for functionality, and need to handle the associated complexity. While the desired functionalities can be satisfied by embedded Linux, established verification techniques fail for code of such magnitude. We show a semi-formal, model-based approach to derive reliable statements about the run-time characteristics of embedded Linux. Using a priori expert knowledge, we generate a finite automaton-based effective description of safety-relevant aspects. The real-world, system-dependent behaviour of the resulting automata, in particular timing statistics for state transitions, is empirically obtained via system instrumentation. We then show how to turn these observations into (statistical) guarantees on system behaviour, and how this allows us to draw conclusions that can be used in certifying systems in terms of reliability, latencies, and other real-time properties.
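As a rough illustration of the approach (the state names, events and latency numbers below are invented for the example, not taken from the paper), safety-relevant behaviour can be modelled as a small automaton whose transition latencies are sampled empirically, after which a statistical bound such as a high empirical quantile can be derived:

```python
import random

# Toy automaton (states, events and numbers invented for illustration):
# transitions model a safety-relevant aspect; unknown (state, event)
# pairs raise KeyError and would flag unexpected behaviour.
TRANSITIONS = {("idle", "irq"): "handle", ("handle", "done"): "idle"}

def step(state, event):
    """Advance the automaton by one observed event."""
    return TRANSITIONS[(state, event)]

# Simulated latency samples (microseconds) for one transition; on a real
# system these would be gathered via kernel instrumentation/tracing.
random.seed(42)
samples = [random.gauss(50.0, 5.0) for _ in range(10_000)]

def empirical_bound(samples, quantile=0.999):
    """Latency bound that held for `quantile` of observed transitions."""
    ordered = sorted(samples)
    return ordered[int(quantile * (len(ordered) - 1))]

print(f"99.9% empirical latency bound: {empirical_bound(samples):.1f} us")
```

The statistical machinery in the paper goes well beyond a plain quantile, but the sketch shows the shape of the pipeline: model, instrument, then bound.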
QPU-System Co-Design for Quantum HPC Accelerators (with contributions by Hila Safi and Wolfgang Mauerer) was accepted by the 35th GI/ITG International Conference on the Architecture of Computing Systems (PDF).
Abstract: The use of quantum processing units (QPUs) promises speed-ups for solving computational problems, but the quantum devices currently available possess only a very limited number of qubits and suffer from considerable imperfections. One possibility to progress towards practical utility is to use a co-design approach: Problem formulation and algorithm, but also the physical QPU properties are tailored to the specific application. Since QPUs will likely be used as accelerators for classical computers, details of systemic integration into existing architectures are another lever to influence and improve the practical utility of QPUs.
In this work, we investigate the influence of different parameters on the runtime of quantum programs on tailored hybrid CPU-QPU-systems. We study the influence of communication times between CPU and QPU, how adapting QPU designs influences quantum and overall execution performance, and how these factors interact. Using a simple model that allows for estimating which design choices should be subjected to optimisation for a given task, we provide an intuition to the HPC community on potentials and limitations of co-design approaches. We also discuss physical limitations for implementing the proposed changes on real quantum hardware devices.
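The trade-offs can be made tangible with a deliberately simple cost model (all parameter values below are invented for illustration, not measurements from the paper): the total runtime of an iterative hybrid workload decomposes into classical processing plus per-iteration communication and QPU execution.

```python
# Simple hybrid-runtime cost model (all numbers invented for illustration):
# total wall time = classical pre/post-processing + per iteration
# (CPU-QPU communication + quantum execution). It shows which component
# dominates and is hence the most promising co-design target.
def hybrid_runtime(t_classical, t_comm, t_qpu, iterations):
    """Estimated total runtime of an iterative hybrid CPU-QPU workload (s)."""
    return t_classical + iterations * (t_comm + t_qpu)

# With a remote cloud QPU, communication dominates ...
remote = hybrid_runtime(t_classical=1.0, t_comm=0.5, t_qpu=0.01, iterations=1000)
# ... while tight systemic integration shifts the balance towards the QPU.
local = hybrid_runtime(t_classical=1.0, t_comm=0.001, t_qpu=0.01, iterations=1000)
print(remote, local)
```

Even this toy model makes the systemic point of the paper plausible: for iterative algorithms, shaving communication latency can matter far more than speeding up the quantum execution itself.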
The paper is joint work with Siemens Technology, and was performed within the BMBF-sponsored project TAQO-PAM. A reproduction package allows independent researchers to confirm our results.
Uncovering Instabilities in Variational-Quantum Deep Q-Networks (with contributions by Maja Franz, Lucas Wolf and Wolfgang Mauerer) was accepted by the Journal of the Franklin Institute (Impact Factor: 4.25).
Abstract: Deep Reinforcement Learning (RL) has considerably advanced over the past decade. At the same time, state-of-the-art RL algorithms require a large computational budget in terms of training time to converge. Recent work has started to approach this problem through the lens of quantum computing, which promises theoretical speed-ups for several traditionally hard tasks. In this work, we examine a class of hybrid quantum-classical RL algorithms that we collectively refer to as variational quantum deep Q-networks (VQ-DQN). We show that VQ-DQN approaches are subject to instabilities that cause the learned policy to diverge, study the extent to which this afflicts reproducibility of established results based on classical simulation, and perform systematic experiments to identify potential explanations for the observed instabilities. Additionally, and in contrast to most existing work on quantum reinforcement learning, we execute RL algorithms on an actual quantum processing unit (an IBM Quantum Device) and investigate differences in behaviour between simulated and physical quantum systems that suffer from implementation deficiencies. Our experiments show that, contrary to claims in the literature, it cannot be conclusively decided if known quantum approaches, even if simulated without physical imperfections, can provide an advantage as compared to classical approaches. Finally, we provide a robust, universal and well-tested implementation of VQ-DQN as a reproducible testbed for future experiments.
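To give a flavour of the setting (this is a minimal conceptual sketch, not the VQ-DQN architecture from the paper): in variational quantum RL, Q-values are read off as expectation values of a parameterised circuit. Even a single qubit rotated by RY(theta) already exhibits the bounded, trigonometric output landscape, with Z expectation cos(theta), that interacts non-trivially with Q-learning targets.

```python
import numpy as np

# Minimal conceptual sketch (not the paper's VQ-DQN): a Q-value obtained
# as the Z expectation of a one-parameter quantum state RY(theta)|0>.
def ry(theta):
    """Single-qubit Y-rotation gate as a 2x2 real matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])  # Pauli-Z observable

def q_value(theta):
    """Q-value read-out: <psi(theta)| Z |psi(theta)> = cos(theta)."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return float(psi @ Z @ psi)

print(q_value(0.0), q_value(np.pi / 2), q_value(np.pi))
```

The bounded read-out range [-1, 1] is one reason why output scaling and target definition require care when such circuits replace classical Q-networks.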
The paper is joint work with Fraunhofer IIS, and arose from the BMBF-sponsored project QLindA. Of course, the publication is accompanied by an extensive reproduction package that allows independent researchers to confirm our results.
OTH Regensburg is consortium leader of the 3 million EUR lighthouse project Q-Stream
A proposal for a quantum lighthouse project in the Munich Quantum Valley, submitted under the leadership of OTH Regensburg, has been selected for funding following the vote of an international expert commission. The project Q-Stream: Quantum-Accelerated Data Stream Analytics will be concerned with the construction of hybrid quantum-classical special-purpose hardware specialised in the analysis of data streams.
Using a hardware-software co-design approach, the project aims to find applications for the currently available generation of quantum computers, which, owing to technical imperfections, still exhibit numerous shortcomings and disturbances that prevent their potentially enormous computational power from fully unfolding.
Of the total funding sum of EUR 2.98 million, around EUR 750,000 go to OTH Regensburg, which additionally contributes a position from the Bavarian High-Tech Agenda to the project. The Laboratory for Digitalisation will concentrate on the conceptual design of special-purpose computers adapted to specific problems. Further quantum expertise is contributed by the Fraunhofer Institute for Integrated Circuits (IIS) (transpilation and decomposition of quantum circuits) and the Deggendorf Institute of Technology (advance simulation of future quantum computers on classical HPC systems).
That OTH Regensburg made it onto the list of funded institutions alongside six Bavarian universities, the Max Planck and Fraunhofer societies, and the Bavarian Academy of Sciences and Humanities also appears to be a confirmation of the long-standing quantum activities in research and teaching of Prof. Dr. Wolfgang Mauerer.
Information from the Bavarian State Ministry of Science and the Arts, from which the laboratory gratefully accepts the funding, can be found in a press release.
Static Hardware Partitioning on RISC-V - Shortcomings, Limitations, and Prospects was accepted by the IEEE IoT Forum Special Session: Virtualization for IoT Devices 2022.
Abstract: On embedded processors that are increasingly equipped with multiple CPU cores, static hardware partitioning is an established means of consolidating and isolating workloads onto single chips. This architectural pattern is suitable for mixed-criticality workloads that need to satisfy both real-time and safety requirements, given suitable hardware properties.
In this work, we focus on exploiting contemporary virtualisation mechanisms to achieve freedom from interference, that is, isolation between workloads. Possibilities to achieve temporal and spatial isolation while maintaining real-time capabilities include statically partitioning resources, avoiding the sharing of devices, and ascertaining zero interventions of superordinate control structures.
This eliminates overhead due to hardware partitioning, but implies certain hardware capabilities that are not yet fully implemented in contemporary standard systems. To address such hardware limitations, the customisable and configurable RISC-V instruction set architecture offers the possibility of swift, unrestricted modifications.
We present findings on the current RISC-V specification and its implementations that necessitate interventions of superordinate control structures. We identify numerous issues adverse to our goal of zero interventions and, hence, zero overhead: on the design level, and especially with regard to handling interrupts. Based on micro-benchmark measurements, we discuss the implications of our findings, and argue how they can provide a basis for future extensions and improvements of the RISC-V architecture.
Ralf Ramsauer, Stefan Huber and Wolfgang Mauerer will discuss Zero-Overhead Virtualisation: It's a Trap! at the Embedded Linux Conference in Dublin
Abstract: Embedded processors are increasingly equipped with powerful CPU cores. For mixed-criticality scenarios with multiple independent real-time appliances, this allows us to consolidate formerly distributed systems. This requires the absence of unintended interaction between different computing domains, which can be achieved by exploiting virtualisation extensions of modern CPUs. Though providing strong isolation guarantees, virtualisation comes with an overhead that may endanger global real-time properties of the system. The statically partitioning, Linux-based hypervisor Jailhouse addresses this challenge and strives for zero-overhead virtualisation, which maintains the real-time capabilities of the platform by design. However, limitations of current architectures counter our design goal of eliminating virtualisation overheads. In this talk, we extract architecture-independent common requirements on contemporary platforms to enable zero-trap virtualisation. We explore and compare the ARM, x86, and RISC-V architectures, and inspect their architectural limitations for embedded zero-overhead virtualisation. We present common pitfalls and barriers of those platforms: issues that have already been addressed, that are currently being fixed, and that still need to be addressed in the future.
The Bavarian Research Alliance has approved guest researcher stays at the Tokyo University of Science. Prof. Dr. Wolfgang Mauerer will contribute his systems expertise to a project on the statistical analysis of real-time guarantees.
PhD student Manuel Schönberger took second place in the graduate track of this year's Student Research Competition at the CORE A* SIGMOD conference in Philadelphia!
The Student Research Competition takes place annually at various ACM conferences, including SIGMOD. In the first round, students submit an extended abstract about their research. Based on the quality of their submission, a select few students from universities around the globe, including Columbia University, the University of Illinois at Urbana-Champaign, the Hasso-Plattner-Institut and TUM, were invited to present their research posters at the SIGMOD conference. Three students were selected for the third round, where they gave a more detailed presentation on their research. In the graduate category, Manuel took second place, competing against Alex Yao and Sughosh Kaushik, both from Columbia University, who took first and third place respectively.
In his research, Manuel analyses the applicability of quantum computing to database query processing. The research goes beyond merely mapping problems onto quantum hardware and also addresses the co-design of future quantum systems, such that they become tailor-made for database problems. Congrats, Manuel, on achieving this international recognition!
Quantum technologies are on the verge of breaking out of their ivory tower existence and entering the general marketplace.
At the premiere of the World of Quantum, researchers and industry presented the latest findings on potential quantum applications and quantum hardware at the Laser World of Photonics in Munich. Research Master student Maja Franz and others visited the fair and explored the new platform for quantum technologies.
Exhibitors from industry and manufacturers of quantum computers gave a broad overview of current quantum technology; for instance, IBM Quantum let visitors see inside its quantum computer via augmented reality. Researchers in the field of quantum computing, such as those from Fraunhofer IKS, offered a lively exchange on hybrid quantum-classical algorithms.
Thanks to the sponsorship of the Bayerisches Staatsministerium für Digitales and other partners, the World of Quantum was a success and an interesting experience.
State Minister announces extension of KI-Transfer Plus project headed by Wolfgang Mauerer
In the state-funded project "KI-Transfer Plus", AI regional centers such as the Regensburg Center for Artificial Intelligence (RCAI) support SMEs in getting started with artificial intelligence. At the closing event, Bavarian digital minister Judith Gerlach reviewed the results of the first project round. The host of the event, Horsch Maschinen GmbH from Schwandorf, showed how artificial intelligence enhances its own agricultural machinery: Horsch developed an algorithm to recognize plants and their center points, which is important for autonomous driving in the field as well as for automated weed removal. The other five project participants from Upper Bavaria and the Upper Palatinate also presented innovative AI developments in a wide range of domains. Pleased with the results, minister Gerlach announced, as a consequence of the program's success, an extension of the project for another year to prepare the Bavarian economy for the key technologies of the future. See a summary video of the impressive work engineers Nicole Höß and Matthias Melzer did together with our industry partners!
Master's student Mario Mintel presents his work on address-space duplication with the Scoot mechanism he designed at FGDB'22 in Hamburg.
1-2-3 Reproducibility for Quantum Software Experiments was accepted at Q-SANER 2022.
Abstract: Various fields of science face a reproducibility crisis. For quantum software engineering as an emerging field, it is therefore imperative to focus on proper reproducibility engineering from the start. Yet the provision of reproduction packages is almost universally lacking, and actionable advice on how to build such packages is rare, which is particularly unfortunate in a field with many contributions from researchers with backgrounds outside computer science. In this article, we argue how to rectify this deficiency by proposing a 1-2-3 approach to reproducibility engineering for quantum software experiments: Using a meta-generation mechanism, we generate DOI-safe, long-term functioning and dependency-free reproduction packages. They are designed to satisfy the requirements of professional and learned societies solely on the basis of project-specific research artefacts (source code, measurement and configuration data), and require little temporal investment by researchers. Our scheme ascertains long-term traceability even when the quantum processor itself is no longer accessible. By drastically lowering the technical bar, we foster the proliferation of reproduction packages in quantum software experiments and ease the inclusion of non-CS researchers entering the field.
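One small, hypothetical ingredient of such a package can be sketched in a few lines (the file layout and function are invented for illustration, not our actual package generator): recording content hashes of all research artefacts, so a DOI-archived package can later be verified bit-for-bit, independently of whether the original quantum hardware is still accessible.

```python
import hashlib
import pathlib

# Hypothetical sketch (not the actual package generator): record a SHA-256
# hash per artefact file, so an archived reproduction package can later be
# verified for bit-exact integrity.
def manifest(root):
    """Map each file below `root` to the SHA-256 hash of its contents."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(pathlib.Path(root).rglob("*")) if p.is_file()
    }

# Usage sketch: write the manifest next to the artefacts before archiving,
# e.g. json.dump(manifest("artefacts/"), open("MANIFEST.json", "w")).
```

Only the standard library is used, which matches the dependency-free spirit of the approach: the check still works decades later without any vendor tooling.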
Beyond the Badge: Reproducibility Engineering as a Lifetime Skill was accepted at SEENG@ICSE 2022.
Abstract: Ascertaining reproducibility of scientific experiments is receiving increased attention across disciplines. We argue that the necessary skills are important beyond pure scientific utility, and that they should be taught as part of software engineering (SWE) education. They serve a dual purpose: Apart from acquiring the coveted badges assigned to reproducible research, reproducibility engineering is a lifetime skill for a professional industrial career in computer science. SWE curricula seem an ideal fit for conveying such capabilities, yet they require some extensions, especially given that even at flagship conferences like ICSE, only slightly more than one-third of the technical papers (at the 2021 edition) receive recognition for artefact reusability. Knowledge and capabilities in setting up engineering environments that allow for reproducing artefacts and results over decades (a standard requirement in many traditional engineering disciplines), writing semi-literate commit messages that document crucial steps of a decision-making process and that are tightly coupled with code, or sustainably taming dynamic, quickly changing software dependencies, to name a few: They all contribute to solving the scientific reproducibility crisis, and enable software engineers to build sustainable, long-term maintainable, software-intensive, industrial systems. We propose to teach these skills at the undergraduate level, on par with traditional SWE topics.
The Bavarian Research Alliance has approved guest researcher stays at the FORTH institute on Crete and at the University of Ioannina. Prof. Dr. Wolfgang Mauerer will contribute his software engineering and reproducibility expertise to a project on schema evolution in databases.
Peel | Pile? Cross-Framework Portability of Quantum Software was accepted at the QSA@ICSA 2022.
Abstract: In recent years, various vendors have made quantum software frameworks available. Yet with vendor-specific frameworks, code portability seems at risk, especially in a field where hardware and software libraries have not yet reached a consolidated state, and even foundational aspects of the technologies are still in flux. Accordingly, the development of vendor-independent quantum programming languages and frameworks is often suggested. This follows the established architectural pattern of introducing additional levels of abstraction into software stacks, thereby piling on layers of abstraction. Yet software architecture also provides seemingly less abstract alternatives, namely to focus on hardware-specific formulations of problems that peel off unnecessary layers. In this article, we quantitatively and experimentally explore these strategic alternatives, and compare popular quantum frameworks from the software implementation perspective. We find that for several specific, yet generalisable problems, the mathematical formulation of the problem to be solved is not only sufficiently abstract to serve as a precise description, but likewise concrete enough to allow for deriving framework-specific implementations with little effort. Additionally, we argue, based on analysing dozens of existing quantum codes, that porting between frameworks is actually low-effort, since the quantum- and framework-specific portions are very manageable in terms of size, commonly on the order of mere hundreds of lines of code. Given the current state of the art in quantum programming practice, this leads us to argue in favour of peeling off unnecessary abstraction levels.
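The "peel" argument can be illustrated with a small, invented example: the mathematical problem description, say a QUBO matrix, is framework-neutral, and mechanical transformations such as the QUBO-to-Ising change of variables x = (1 + s)/2 are the kind of few-line, framework-agnostic steps from which vendor-specific implementations are then derived.

```python
import itertools
import numpy as np

# Framework-neutral problem core (matrix values invented): a QUBO matrix
# and its mechanical conversion to the equivalent Ising form (h, J, offset)
# via the change of variables x_i = (1 + s_i) / 2 with s_i in {-1, +1}.
Q = np.array([[1.0, 2.0],
              [0.0, 3.0]])  # upper-triangular QUBO matrix

def qubo_to_ising(Q):
    """Ising coefficients (h, J, offset) equivalent to binary x^T Q x."""
    upper = np.triu(Q, 1)
    J = upper / 4.0
    h = np.diag(Q) / 2.0 + (upper.sum(axis=1) + upper.sum(axis=0)) / 4.0
    offset = np.diag(Q).sum() / 2.0 + upper.sum() / 4.0
    return h, J, offset

h, J, offset = qubo_to_ising(Q)

# Both formulations assign identical energies to all corresponding states.
for bits in itertools.product((0, 1), repeat=2):
    x = np.array(bits, dtype=float)
    s = 2.0 * x - 1.0
    assert abs(float(x @ Q @ x) - float(h @ s + s @ J @ s + offset)) < 1e-12
print("QUBO and Ising formulations agree on all assignments")
```

Each framework then only needs a thin adapter that feeds (h, J) or Q into its own API, which is consistent with the paper's observation that the framework-specific portions stay small.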
Project volume: EUR 8.2 million; consortium leader: Wolfgang Mauerer.
The increasing mass production of individualised goods and the complex logistics it requires within modern factories demand solving large-scale optimisation problems in real time. Classical computers cannot solve such problems sufficiently well. This project therefore sets out to design hybrid quantum-classical algorithms that enable the soon-to-be-available quantum computers with a few tens of qubits to contribute to solving these problems. This is achieved by integrating adapted quantum processors (QPUs) into existing scenarios, and by extending established methods of factory automation and production planning. The focus on local, on-premise data processing instead of external cloud services avoids the need to share fundamental knowledge and data with third parties at production time, and time-critical computations do not suffer delays from data transfers. Starting from the assumption that suitable custom-built QPUs will become available in the medium term, the project addresses the lack of quantum algorithms for optimising manufacturing tasks, the missing integration of quantum computing into industrial processes, and the accessibility of the technology for practitioners, to whom results are to be made available without requiring deep knowledge of quantum mechanics and quantum information science. By systematically mapping real-world problems onto methods that combine the advantages of quantum and classical algorithms, industrially exploitable use cases are to be solved successfully. In the long run, the algorithms developed in this project can also be executed and extended on more powerful future quantum computers, enabling even more complex optimisations of production processes that further increase the productivity and competitiveness of companies (text source: BMBF).
Abstract: Computer-based automation in industrial appliances led to a growing number of logically dependent, but physically separated embedded control units per appliance. Many of those components are safety-critical systems, and require adherence to safety standards, which is inconsonant with the relentless demand for features in those appliances. Features lead to a growing amount of control units per appliance, and to an increasing complexity of the overall software stack, which is unfavourable for safety certifications. Modern CPUs provide means to revise traditional separation-of-concerns design primitives: the consolidation of systems, which yields new engineering challenges that concern the entire software and system stack.
Multi-core CPUs favour the economic consolidation of formerly separated systems onto one efficient hardware unit. Nonetheless, the system architecture must provide means to guarantee freedom from interference between domains of different criticality. System consolidation demands architectural and engineering strategies to fulfil requirements (e.g., real-time or certifiability criteria) in safety-critical environments.
In parallel, there is an ongoing trend to substitute ordinary proprietary base platform software components with mature OSS variants for economic and engineering reasons. There are fundamental differences between the process properties of OSS and proprietary software development. Using OSS in safety-critical systems requires development process assessment techniques to build an evidence-based foundation for certification efforts, based upon empirical software engineering methods.
In this thesis, I approach the problem from both sides: the software and the system engineering perspective. In the first part of this thesis, I focus on the assessment of OSS components: I develop software engineering techniques that allow quantifying characteristics of distributed OSS development processes. I show that ex-post analyses of software development processes can serve as a foundation for certification efforts, as required for safety-critical systems.
In the second part of this thesis, I present a system architecture based on OSS components that allows for the consolidation of mixed-criticality systems on a single platform. To this end, I exploit virtualisation extensions of modern CPUs to strictly isolate domains of different criticality. The proposed architecture eliminates any remaining hypervisor activity in order to preserve the real-time capabilities of the hardware by design, while guaranteeing strict isolation across domains.