Testing of Self-Adaptive Software Systems
Testing systems whose behaviour varies over time is difficult. Consider a machine learning algorithm: how many samples, and which ones, should we give the system before its behaviour can be considered testable? And what is the correct outcome? Of course we can apply unit testing to each function in the code, check coverage, and select a few cases for which the ideal behaviour of the code is known. But this gives us no guarantee that the code behaves correctly for the task it has to complete in the physical environment.
We advocate that a formal and rigorous methodology is needed to test adaptive systems such as self-adaptive software. This methodology should be used in conjunction with other forms of testing (e.g., unit testing) to provide guarantees on the behaviour of the cyber-physical system.
When learning is involved, it is impossible to provide any deterministic guarantees, since the function to be learnt may not have been fully explored. In such cases, drawing any general conclusion is impossible (and undesirable), unless probabilistic guarantees are targeted. We are convinced that this also holds for adaptive software, and that a paradigm shift is necessary for its testing: the guarantees derived from test execution should be provided in the probabilistic space rather than in the deterministic one.
In the probabilistic space, we investigate three alternative methods to analyse test data and provide guarantees: (i) Monte Carlo experiments, (ii) Extreme Value Theory, (iii) Scenario Theory.
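As a minimal sketch of what a probabilistic guarantee from test execution can look like, the example below runs repeated randomized tests of a hypothetical system (the `run_test` stand-in is an assumption, not part of the project) and, when all tests pass, bounds the true failure probability using the standard Monte Carlo observation that n independent passing trials imply, with confidence 1 - beta, a failure probability of at most ln(1/beta)/n.

```python
import math
import random

def run_test(seed: int) -> bool:
    """Hypothetical stand-in for one randomized test of an adaptive system.
    Here it is a toy that fails on roughly 0.1% of random inputs."""
    rng = random.Random(seed)
    return rng.random() > 0.001  # True means the test passed

def failure_bound(n_passed: int, beta: float = 1e-3) -> float:
    """If n independent randomized tests all pass, then with confidence
    1 - beta the true failure probability is at most ln(1/beta)/n.
    This follows from (1 - p)^n <= beta  =>  p <= ln(1/beta)/n."""
    return math.log(1.0 / beta) / n_passed

n = 5000
passed = sum(run_test(s) for s in range(n))
if passed == n:
    print(f"all {n} tests passed; "
          f"P(failure) <= {failure_bound(n):.4f} with 99.9% confidence")
else:
    print(f"{n - passed} failure(s) observed out of {n}; "
          f"estimated failure probability {(n - passed) / n:.4f}")
```

Note how the conclusion is never "the system is correct" but a quantified statement about how likely a failure is, which is exactly the shift from deterministic to probabilistic guarantees described above.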
Effective start/end date: 2018/01/01 → 2022/12/31