TY - JOUR
T1 - On the use of available testing methods for verification & validation of AI-based software and systems
AU - Wotawa, Franz
N1 - Funding Information:
The research was supported by ECSEL JU under the project H2020 826060 AI4DI - Artificial Intelligence for Digitising Industry. AI4DI is funded by the Austrian Federal Ministry of Transport, Innovation and Technology (BMVIT) under the program "ICT of the Future" between May 2019 and April 2022. More information can be retrieved from https://iktderzukunft.at/en/.
Publisher Copyright:
Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribute 4.0 International (CC BY 4.0).
PY - 2021
Y1 - 2021
N2 - Verification and validation of software and systems is an essential part of the development cycle for meeting given quality criteria, including functional and non-functional requirements. Testing, and in particular its automation, has been an active research area for decades, providing many methods and tools for automating test case generation and execution. Due to the increasing use of AI in software and systems, the question arises whether available testing techniques can be utilized in the context of AI-based systems. In this position paper, we elaborate on testing issues arising when using AI methods for systems, consider the case of different stages of AI, and start investigating the usefulness of certain testing methods for testing AI. We focus especially on testing at the system level, where we are interested not only in assuring that a system is correctly implemented but also that it meets given criteria such as not contradicting moral rules or being dependable. We state that some well-known testing techniques can still be applied, provided they are tailored to the specific needs.
AB - Verification and validation of software and systems is an essential part of the development cycle for meeting given quality criteria, including functional and non-functional requirements. Testing, and in particular its automation, has been an active research area for decades, providing many methods and tools for automating test case generation and execution. Due to the increasing use of AI in software and systems, the question arises whether available testing techniques can be utilized in the context of AI-based systems. In this position paper, we elaborate on testing issues arising when using AI methods for systems, consider the case of different stages of AI, and start investigating the usefulness of certain testing methods for testing AI. We focus especially on testing at the system level, where we are interested not only in assuring that a system is correctly implemented but also that it meets given criteria such as not contradicting moral rules or being dependable. We state that some well-known testing techniques can still be applied, provided they are tailored to the specific needs.
UR - http://www.scopus.com/inward/record.url?scp=85101205766&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85101205766
SN - 1613-0073
VL - 2808
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 2021 Workshop on Artificial Intelligence Safety, SafeAI 2021
Y2 - 8 February 2021
ER -