Project Details
Description
Many cyber-physical systems change their behaviour depending on environmental data and internal states. This is the case for control systems, which compute a control signal that depends on input values (like a desired position), measured values (like the current position), and internal states (like the previous control action). It is also the case for systems embedding machine learning algorithms, which receive new samples and incorporate what they learn from these samples into a policy that determines how to behave in new conditions. All these systems are adaptive, in that their behaviour changes over time in a prescribed, but a priori unpredictable, way. This project is about testing and comparing systems that incorporate some form of adaptivity.
Testing systems whose behaviour varies over time is difficult. Think of a machine learning algorithm: how many samples, and which ones, should we give to the system before we can consider its behaviour testable? And what is the correct outcome? Of course, we can apply unit testing to each function in the code, check for coverage, and select a few cases in which the ideal behaviour of the code is known. But this gives us no guarantee that the code behaves correctly for the task it has to complete in the physical environment.
We advocate that a formal and rigorous methodology is needed to test adaptive systems such as self-adaptive software. This methodology should be used in conjunction with other forms of testing (e.g., unit testing) to provide guarantees on the behaviour of the cyber-physical system.
When learning is involved, it is impossible to provide any deterministic guarantee, since the function to be learnt may not have been fully explored. In such cases, drawing any general conclusion is impossible (and undesirable), unless probabilistic guarantees are targeted. We are convinced that this is true also for adaptive software, and that a paradigm shift is necessary for its testing: guarantees derived from test execution should be provided in the probabilistic space rather than in the deterministic one.
In the probabilistic space, we investigate three alternative methods to analyse testing data and provide guarantees: (i) Monte Carlo experiments, (ii) Extreme Value Theory, and (iii) Scenario Theory.
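As a minimal sketch of the first of these methods, the snippet below estimates the probability that a performance metric of the system under test violates a threshold, together with a distribution-free Hoeffding confidence bound. The `run_test` function is a hypothetical stand-in for executing the real adaptive system; the threshold, metric distribution, and confidence level are illustrative assumptions, not values from the project.

```python
import math
import random

def run_test(seed: int) -> float:
    """Hypothetical test harness: runs the adaptive system once and
    returns a scalar performance metric. Here a stand-in simulation
    (Gaussian response time) replaces the real system under test."""
    rng = random.Random(seed)
    return rng.gauss(1.0, 0.3)  # e.g., response time in seconds

def monte_carlo_violation_bound(n_runs: int, threshold: float,
                                confidence: float = 0.99):
    """Estimate P(metric > threshold) from n_runs independent test
    executions, plus a Hoeffding bound eps such that the true
    probability lies within p_hat +/- eps with the given confidence."""
    violations = sum(run_test(seed) > threshold for seed in range(n_runs))
    p_hat = violations / n_runs
    # Hoeffding's inequality for a [0, 1]-bounded estimator:
    # eps = sqrt(ln(2 / (1 - confidence)) / (2 * n_runs)).
    eps = math.sqrt(math.log(2 / (1 - confidence)) / (2 * n_runs))
    return p_hat, eps

p_hat, eps = monte_carlo_violation_bound(n_runs=10_000, threshold=2.0)
print(f"P(violation) ~ {p_hat:.4f} +/- {eps:.4f} (99% confidence)")
```

The bound shrinks as the square root of the number of test runs, which is what makes the trade-off between testing effort and guarantee strength explicit in the probabilistic setting.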
| Status | Finished |
|---|---|
| Effective start/end date | 2018/01/01 → 2022/12/31 |
Research output
- 1 Paper in conference proceeding

- Testing self-adaptive software with probabilistic guarantees on performance metrics
  Mandrioli, C. & Maggio, M., 2020, ESEC/FSE 2020 - Proceedings of the 28th ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering. Devanbu, P., Cohen, M. & Zimmermann, T. (eds.). Association for Computing Machinery (ACM), p. 1002-1014, 13 p.
  Research output: Paper in conference proceeding › peer-review
Related projects

- WASP: Wallenberg AI, Autonomous Systems and Software Program at Lund University
  Årzén, K.-E. (Researcher)
  2015/10/01 → 2029/12/31
  Project: Research

- ADMORPH: Towards Adaptively Morphing Embedded Systems
  Maggio, M. (PI), Cervin, A. (Researcher) & Vreman, N. (Researcher)
  European Commission - Horizon 2020
  2020/01/01 → 2023/07/01
  Project: Research
Prizes
- ACM SIGSOFT Distinguished Paper Award
  Mandrioli, C. (Recipient) & Maggio, M. (Recipient), 2020
  Prize: Prize (including medals and awards)