Project Details

Description

AI is increasingly proposed for integration into critical cyber-physical systems. Across domains, novel safety standards are evolving to address the new challenges posed by this type of software. Furthermore, the European Commission recently proposed the Artificial Intelligence Act to turn Europe into the global hub for trustworthy AI. While both the act and the emerging standards provide high-level guidance, quality assurance research is needed to break them down into operational test methods and tools. Because ML development is inherently experimental, automation is essential to continuously steer development toward a trustworthy AI system. Starting from an automotive demonstrator, this project advances data testing, model testing, and simulation-based testing for trustworthiness in the context of state-of-the-art MLOps pipelines.
Status: Active
Effective start/end date: 2023/01/01 – 2027/12/31

UKÄ subject classification

  • Software Engineering

Free keywords

  • trustworthy AI
  • MLOps
  • AI engineering
  • test automation
  • cyber-physical systems