Verification and validation of software and systems are essential parts of the development cycle for meeting given quality criteria, including functional and non-functional requirements. Testing, and in particular its automation, has been an active research area for decades, yielding many methods and tools for automating test case generation and execution. Due to the increasing use of AI in software and systems, the question arises whether available testing techniques can be utilized in the context of AI-based systems. In this position paper, we elaborate on testing issues that arise when using AI methods in systems, consider the case of different stages of AI, and start investigating the usefulness of certain testing methods for testing AI. We focus especially on testing at the system level, where we are interested not only in assuring that a system is correctly implemented but also that it meets given criteria, such as not contradicting moral rules or being dependable. We argue that some well-known testing techniques can still be applied, provided they are tailored to the specific needs.
|Number of pages||6|
|Journal||CEUR Workshop Proceedings|
|Publication status||Published - 2021|
|Event||2021 Workshop on Artificial Intelligence Safety, SafeAI 2021 - Virtual, Online|
|Duration||8 Feb 2021 → …|