Automated testing (AT) is a cornerstone of agile software engineering, with its short development cycles. In continuous integration/deployment (CI/CD) pipelines, AT safeguards against software regression caused by side effects, unintentional changes, or changes in the environment. While AT provides major benefits for agile software engineering, there is a risk that test cases are too specific – each testing only one sample input–output pair – which limits the information a test run yields and makes the tests inefficient. We propose “near failure assertion” to analyse variation around the output of a test case. In contrast to a standard assertion, where a test case asserts a specific output value or condition, the proposed approach also asserts the ‘surrounding’ values, to identify whether the software feature works as expected or is at risk of failing. The assertion result is hence not merely a binary pass/fail, but a pass/fail risk distribution. The new approach – inspired by near-crash analysis in traffic monitoring – is expected to extract more information from each automated test case, and thereby make regression testing more efficient. The project will be conducted within our collaboration with BTH, or extended with other relevant partners.
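The idea of asserting the ‘surrounding’ values can be sketched as follows. This is a minimal illustration, not the project's implementation: the helper `near_failure_assert`, its perturbation scheme (relative input jitter), and the example feature are all hypothetical assumptions made here for concreteness.

```python
import random

def near_failure_assert(fn, x, expected_ok, delta=0.05, samples=20, seed=0):
    """Assert fn at the nominal input x and at perturbed inputs around it.

    Returns (passed, risk): `passed` is the usual binary assertion at the
    nominal input; `risk` is the fraction of nearby inputs whose output
    violates the predicate `expected_ok` -- a simple pass/fail risk estimate.
    """
    rng = random.Random(seed)  # fixed seed keeps the test deterministic
    passed = expected_ok(fn(x))
    failures = 0
    for _ in range(samples):
        # Perturb the input within +/- delta (relative) of the nominal value.
        x_near = x * (1 + rng.uniform(-delta, delta))
        if not expected_ok(fn(x_near)):
            failures += 1
    return passed, failures / samples

# Hypothetical feature under test: output must stay within a bound of 200.
feature = lambda v: v * 2
within_bound = lambda y: y <= 200.0

# The nominal input passes, but many nearby inputs breach the bound,
# so the test reports a nonzero failure risk instead of a clean pass.
passed, risk = near_failure_assert(feature, 99.0, within_bound)
```

A standard assertion at `x = 99.0` would simply pass; the nonzero `risk` flags that the feature sits close to its failure boundary, which is exactly the extra signal near failure assertion aims to provide.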