In the design of embedded control systems, it is important to use the limited platform resources (e.g., CPU time, network bandwidth, energy) as efficiently as possible. At the same time, optimistic assumptions made at design time may lead to runtime failures caused by missed deadlines, lost control signals, or energy depletion. Shifting our focus from off-line optimization to on-line operation, in this project we aim to develop theory and a co-design methodology for robust and secure embedded control systems that operate efficiently even in the presence of uncertainties or unforeseen events. We will consider both passive and active robustness towards, among other things, plant perturbations, malicious intrusions, execution-time overruns, and varying network capacity. In the passive approach, we aim for techniques that take parametric plant and platform uncertainty into account at design time, while the run-time system provides predictable exception handling and provable performance bounds. In the active approach, the run-time system should be able to adapt to new and unexpected conditions via reconfiguration and self-optimization.
During 2020, we have investigated different ways to adapt controllers on-line when they suffer from real-time faults. We have also studied reinforcement learning in the context of real-time networks. The proposed learning algorithm explores new routes while guaranteeing that no packet deadlines are missed. These results are included in the PhD thesis of Gautham Nayak Seetanadi. Our research partners in Linköping have focused on safe execution of control-task code in the Cloud. The challenge is to verify that the control computations are correct, despite the Cloud being an unsafe execution environment.
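To illustrate the general idea of deadline-safe exploration (not the specific algorithm from the thesis), the sketch below shows one plausible scheme: a route is only eligible for exploration if the sum of its per-hop worst-case delays provably meets the end-to-end deadline, and an epsilon-greedy rule then balances exploration and exploitation within that safe set. All names and parameters here are illustrative assumptions.

```python
import random

def worst_case_delay(route, wc_hop_delay):
    """Sum of per-hop worst-case delays along a route (hypothetical model)."""
    return sum(wc_hop_delay[hop] for hop in route)

def pick_route(routes, wc_hop_delay, deadline, avg_delay, epsilon=0.1):
    """Epsilon-greedy route choice restricted to routes that provably
    meet the packet deadline, so exploration never causes a miss."""
    # Admissible set: routes whose worst-case delay never exceeds the deadline.
    safe = [r for r in routes if worst_case_delay(r, wc_hop_delay) <= deadline]
    if not safe:
        raise ValueError("no route can guarantee the deadline")
    if random.random() < epsilon:
        # Explore, but only among provably safe routes.
        return random.choice(safe)
    # Exploit: pick the safe route with the best observed average delay.
    return min(safe, key=lambda r: avg_delay[r])

# Example: route ("a", "c") is excluded up front because its worst-case
# delay (12) exceeds the deadline, even though its average delay looks good.
routes = [("a", "b"), ("a", "c")]
wc_hop_delay = {"a": 2, "b": 3, "c": 10}
avg_delay = {("a", "b"): 4.0, ("a", "c"): 3.0}
chosen = pick_route(routes, wc_hop_delay, 6, avg_delay, epsilon=0.0)
```

The key design point, shared with the work described above, is that learning acts only inside a set of choices whose worst-case behavior has been verified in advance, so the deadline guarantee holds independently of what the learner does.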