Improving Performance of Feedback-Based Real-Time Networks using Model Checking and Reinforcement Learning

Research output: Thesis › Doctoral Thesis (compilation)



Traditionally, automatic control techniques arose from the need for automation in mechanical systems. These techniques rely on robust mathematical models of physical systems, with the goal of driving their behaviour to desired set-points. Decades of research have successfully automated, optimized, and ensured the safety of a wide variety of mechanical systems.

Recent advances in digital technology have made computers pervasive in every facet of life. Consequently, there have been many recent attempts to incorporate control techniques into digital technology. This thesis investigates the intersection and co-application of control theory and computer science to evaluate and improve the performance of time-critical systems. It draws on two research areas, model checking and reinforcement learning, to design and evaluate two distinct real-time networks in conjunction with control techniques. The first is a camera surveillance system, with the goal of allocating constrained resources to self-adaptive cameras. The second is a dual-delay real-time communication network, with the goal of routing packets safely and with minimal delay.

The camera surveillance system consists of self-adaptive cameras and a centralized manager: the cameras capture a stream of images and transmit them to the manager over a shared, constrained communication channel. The event-based manager allocates fractions of the shared bandwidth to all cameras in the network. The thesis provides guarantees on the behaviour of the camera surveillance network through model checking. Disturbances that arise during image capture, due to variations in the captured scenes, are modelled using Markov Decision Processes (MDPs), which combine probabilistic and non-deterministic choice. Properties of the camera network, such as the number of frame drops and bandwidth reallocations, are evaluated through formal verification.
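As a toy illustration of the kind of quantitative property checked here, the probability of exceeding a frame-drop threshold can be computed by dynamic programming over a small probabilistic model. This is a minimal sketch, not the thesis's model: the horizon, drop probability, and threshold are invented, and the full MDP would also include the manager's non-deterministic bandwidth choices, evaluated with a tool such as PRISM rather than by hand.

```python
from functools import lru_cache

# Hypothetical parameters (not from the thesis): a camera captures
# HORIZON frames; at each step the scene is "complex" with probability
# P_COMPLEX, in which case the frame exceeds the allocated bandwidth
# and is dropped.  We compute the probability of accumulating at least
# MAX_DROPS drops -- the kind of reachability property a probabilistic
# model checker evaluates over the full MDP.
HORIZON = 10
P_COMPLEX = 0.2
MAX_DROPS = 3

@lru_cache(maxsize=None)
def prob_reach(step, drops):
    """Probability of reaching >= MAX_DROPS drops from state (step, drops)."""
    if drops >= MAX_DROPS:
        return 1.0
    if step == HORIZON:
        return 0.0
    return (P_COMPLEX * prob_reach(step + 1, drops + 1)
            + (1 - P_COMPLEX) * prob_reach(step + 1, drops))

p = prob_reach(0, 0)
```

Because scene disturbances here are independent across frames, the result coincides with a binomial tail probability; the MDP formulation matters once the manager's reallocation decisions couple the steps together.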

The second part of the thesis explores packet routing for real-time networks modelled as graphs of nodes and directed edges. Each edge in the network carries two delays: a worst-case delay that captures behaviour under high load, and a typical delay that captures the current network load. Each node makes safe routing decisions by considering the delay already incurred and the time remaining until the packet's deadline.
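One way to sketch such a safety check under the dual-delay model (topology, delays, and function names below are all illustrative, not from the thesis): pre-compute each node's smallest worst-case delay to the destination with a backwards Dijkstra, then forward a packet over an edge only if the elapsed time, the edge's worst-case delay, and the downstream worst-case bound together still meet the deadline.

```python
import heapq

# Toy topology: each directed edge carries (typical_delay, worst_case_delay).
graph = {
    "s": {"a": (2, 5), "b": (1, 10)},
    "a": {"d": (2, 4)},
    "b": {"d": (1, 2)},
    "d": {},
}

def worst_case_to(dest, graph):
    """Dijkstra over worst-case delays, run backwards from the destination."""
    rev = {u: {} for u in graph}
    for u, edges in graph.items():
        for v, (_, wc) in edges.items():
            rev[v][u] = wc
    dist = {u: float("inf") for u in graph}
    dist[dest] = 0
    pq = [(0, dest)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, wc in rev[u].items():
            if d + wc < dist[v]:
                dist[v] = d + wc
                heapq.heappush(pq, (dist[v], v))
    return dist

def safe_edges(node, elapsed, deadline, graph, wc_dist):
    """Edges over which the deadline is still met even in the worst case."""
    return [v for v, (_, wc) in graph[node].items()
            if elapsed + wc + wc_dist[v] <= deadline]
```

If a packet only ever leaves a node over a safe edge, at least one safe edge remains at every subsequent node, since the actual delay incurred can never exceed the worst case that was accounted for.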

The thesis applies reinforcement learning to route packets through the network with minimal delays, while ensuring that the total path delay from source to destination does not exceed the packet's predetermined deadline. The reinforcement learning algorithm explores new edges to find optimal routing paths, while a simple pre-processing algorithm ensures safety. The thesis shows that powerful reinforcement learning techniques can be applied to time-critical systems when combined with expert knowledge about the system.
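A minimal sketch of this combination, under invented assumptions (the toy network, deadline, and learning parameters below are illustrative, not the thesis's algorithm): tabular Q-learning minimizes the typical delay, but both exploration and exploitation are restricted to edges that pre-computed worst-case bounds certify as deadline-safe, so no training episode can violate the deadline.

```python
import heapq
import random

# Toy topology: each directed edge carries (typical_delay, worst_case_delay).
graph = {
    "s": {"a": (2, 5), "b": (1, 10)},
    "a": {"d": (2, 4)},
    "b": {"d": (1, 2)},
    "d": {},
}
DEADLINE = 12

def worst_case_to(dest):
    """Pre-processing: smallest worst-case delay from every node to dest."""
    rev = {u: {} for u in graph}
    for u, edges in graph.items():
        for v, (_, wc) in edges.items():
            rev[v][u] = wc
    dist = {u: float("inf") for u in graph}
    dist[dest] = 0
    pq = [(0, dest)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, wc in rev[u].items():
            if d + wc < dist[v]:
                dist[v] = d + wc
                heapq.heappush(pq, (dist[v], v))
    return dist

wc_dist = worst_case_to("d")
Q = {(u, v): 0.0 for u, edges in graph.items() for v in edges}
random.seed(0)

for _ in range(500):  # training episodes
    node, elapsed = "s", 0.0
    while node != "d":
        # Safety filter: only edges that still guarantee the deadline.
        safe = [v for v, (_, wc) in graph[node].items()
                if elapsed + wc + wc_dist[v] <= DEADLINE]
        # Epsilon-greedy choice restricted to the safe set.
        v = (random.choice(safe) if random.random() < 0.2
             else min(safe, key=lambda x: Q[(node, x)]))
        typ, _ = graph[node][v]
        # Cost-to-go update: typical delay plus best downstream estimate.
        tail = 0.0 if v == "d" else min(Q[(v, w)] for w in graph[v])
        Q[(node, v)] += 0.5 * (typ + tail - Q[(node, v)])
        elapsed += typ
        node = v

best = min(graph["s"], key=lambda v: Q[("s", v)])
```

In this toy network the learner settles on the route through "b", which is slow in the worst case but fast under typical load, yet it would never attempt that edge if the deadline were tight enough to make it unsafe.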
Original language: English
Awarding Institution
  • Department of Automatic Control
Supervisors
  • Maggio, Martina, Supervisor
  • Årzén, Karl-Erik, Assistant supervisor
Award date: 2021 Feb 5
Place of Publication: Lund
ISBN (Print): 978-91-7895-719-4
ISBN (Electronic): 978-91-7895-720-0
Publication status: Published - 2021 Feb 5

Bibliographical note

Defence details
Date: 2021-02-05
Time: 13:15
Place: Lecture hall A, building KC4, Naturvetarvägen 18, Lund University, Faculty of Engineering LTH, Lund
External reviewer(s)
Name: Calinescu, Radu
Title: Doctor
Affiliation: University of York, UK

Subject classification (UKÄ)

  • Control Engineering

Free keywords

  • Model checking, Reinforcement learning, Event-based resource allocation, Camera networks, Real-time routing, Dual-delay model

