Testing of Self-Adaptive Software Systems

Project: Research

Description

Many cyber-physical systems change their behaviour depending on environmental data and internal states. This is the case for control systems, which compute a control signal from input values (like a desired position), measured values (like the current position), and internal states (like the previous control action). It is also the case for systems embedding machine learning algorithms, which receive new samples and incorporate what they learn from these samples into a policy that determines how to behave in new conditions. All these systems are adaptive, in that their behaviour changes over time in a prescribed, but a priori unpredictable, way. This project is about testing and comparing systems that incorporate some form of adaptivity.
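As a minimal illustration of such an adaptive component, the sketch below shows a discrete-time PI controller that computes its control signal from a desired position, a measured position, and an internal state (the accumulated error). All names, gains, and values are illustrative assumptions, not taken from the project.

```python
class PIController:
    """Toy discrete-time PI controller (illustrative only)."""

    def __init__(self, kp, ki, dt):
        self.kp = kp          # proportional gain
        self.ki = ki          # integral gain
        self.dt = dt          # sampling period in seconds
        self.integral = 0.0   # internal state: accumulated error

    def step(self, setpoint, measurement):
        # The control signal depends on the input (setpoint), the
        # measurement, and the internal state, so the same inputs can
        # yield different outputs at different points in time.
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

ctrl = PIController(kp=2.0, ki=0.5, dt=0.1)
u = ctrl.step(setpoint=1.0, measurement=0.0)
# u = 2.0 * 1.0 + 0.5 * (1.0 * 0.1) = 2.05
```

Because the integral state evolves with every call, a second `step` with the same arguments returns a different control action, which is exactly the time-varying behaviour that makes such systems hard to test with fixed input-output cases.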

Testing systems whose behaviour varies over time is difficult. Think of a machine learning algorithm: how many samples, and which ones, should we give to the system before we can consider its behaviour testable? And what is the correct outcome? Of course, we can apply unit testing to each function in the code, check for coverage, and select a few cases in which the ideal behaviour of the code is known. But this gives us no guarantee that the code behaves correctly for the task it has to complete in the physical environment.

We advocate that a formal and rigorous methodology is needed to test systems with adaptivity, such as self-adaptive software. This methodology should be used in conjunction with other forms of testing (e.g., unit testing) to provide guarantees on the behaviour of the cyber-physical system.

When learning is involved, it is impossible to provide deterministic guarantees, since parts of the function to be learnt may not have been explored. In such cases, drawing any general conclusion is impossible (and undesirable) unless probabilistic guarantees are targeted. We are convinced that the same holds for adaptive software, and that a paradigm shift is necessary for its testing: the guarantees derived from executing tests should be provided in the probabilistic space rather than in the deterministic one.

In the probabilistic space, we investigate three alternative methods to analyse testing data and provide guarantees: (i) Monte Carlo experiments, (ii) Extreme Value Theory, and (iii) Scenario Theory.
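The first of these, a Monte Carlo testing experiment, can be sketched as follows. The system under test is assumed to be wrapped in a hypothetical `passes(scenario) -> bool` function, and the probabilistic guarantee is expressed with Hoeffding's inequality, which bounds how far the estimated failure probability can be from the true one. The wrapper names and the toy pass/fail rule are illustrative assumptions, not the project's actual method.

```python
import math
import random

def monte_carlo_test(passes, sample_scenario, n, confidence=0.95):
    """Estimate the failure probability of a system under test from
    n randomly sampled scenarios, with a probabilistic error bound."""
    failures = sum(0 if passes(sample_scenario()) else 1 for _ in range(n))
    p_hat = failures / n
    # Hoeffding's inequality: P(|p_hat - p| > eps) <= 2 * exp(-2 * n * eps^2).
    # Solving 2 * exp(-2 * n * eps^2) = 1 - confidence for eps:
    eps = math.sqrt(math.log(2 / (1 - confidence)) / (2 * n))
    return p_hat, eps

# Illustrative system under test: "fails" when a random disturbance
# exceeds 0.9, so the true failure probability is 0.1.
random.seed(0)
p_hat, eps = monte_carlo_test(
    passes=lambda disturbance: disturbance < 0.9,
    sample_scenario=lambda: random.random(),
    n=10_000,
)
# With 95% confidence, the true failure probability lies in
# [p_hat - eps, p_hat + eps].
```

The guarantee is probabilistic, not deterministic: no number of samples rules out failures entirely, but increasing `n` shrinks the interval `[p_hat - eps, p_hat + eps]` at a known rate.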
Status: Active
Effective start/end date: 2018/01/01 → 2022/12/31

Related projects


Per Runeson, 2020/01/01 → … (Project: Research)


Related research output

Claudio Mandrioli & Martina Maggio, 2020. In: ESEC/FSE 2020 - Proceedings of the 28th ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering. Devanbu, P., Cohen, M. & Zimmermann, T. (eds.). Association for Computing Machinery (ACM), pp. 1002-1014.

Research output: Paper in conference proceeding


Related prizes

Claudio Mandrioli (Recipient) & Martina Maggio (Recipient), 2020

Prizes and Distinctions: Prize (including medals and awards)
