TY - JOUR
T1 - Establishing and evaluating trustworthy AI
T2 - overview and research challenges
AU - Kowald, Dominik
AU - Scher, Sebastian
AU - Pammer-Schindler, Viktoria
AU - Müllner, Peter
AU - Waxnegger, Kerstin
AU - Demelius, Lea
AU - Fessl, Angela
AU - Toller, Maximilian
AU - Mendoza Estrada, Inti Gabriel
AU - Šimić, Ilija
AU - Sabol, Vedran
AU - Trügler, Andreas
AU - Veas, Eduardo
AU - Kern, Roman
AU - Nad, Tomislav
AU - Kopeinik, Simone
N1 - Publisher Copyright:
Copyright © 2024 Kowald, Scher, Pammer-Schindler, Müllner, Waxnegger, Demelius, Fessl, Toller, Mendoza Estrada, Šimić, Sabol, Trügler, Veas, Kern, Nad and Kopeinik.
PY - 2024/11/29
Y1 - 2024/11/29
N2 - Artificial intelligence (AI) technologies (re-)shape modern life, driving innovation in a wide range of sectors. However, some AI systems have yielded unexpected or undesirable outcomes or have been used in questionable manners. As a result, there has been a surge in public and academic discussions about aspects that AI systems must fulfill to be considered trustworthy. In this paper, we synthesize existing conceptualizations of trustworthy AI along six requirements: (1) human agency and oversight, (2) fairness and non-discrimination, (3) transparency and explainability, (4) robustness and accuracy, (5) privacy and security, and (6) accountability. For each one, we provide a definition, describe how it can be established and evaluated, and discuss requirement-specific research challenges. Finally, we conclude this analysis by identifying overarching research challenges across the requirements with respect to (1) interdisciplinary research, (2) conceptual clarity, (3) context-dependency, (4) dynamics in evolving systems, and (5) investigations in real-world contexts. Thus, this paper synthesizes and consolidates a wide-ranging and active discussion currently taking place in various academic sub-communities and public forums. It aims to serve as a reference for a broad audience and as a basis for future research directions.
KW - accountability
KW - artificial intelligence
KW - fairness
KW - human agency
KW - privacy
KW - robustness
KW - transparency
KW - trustworthy AI
UR - http://www.scopus.com/inward/record.url?scp=85211624733&partnerID=8YFLogxK
U2 - 10.3389/fdata.2024.1467222
DO - 10.3389/fdata.2024.1467222
M3 - Review article
AN - SCOPUS:85211624733
SN - 2624-909X
VL - 7
JO - Frontiers in Big Data
JF - Frontiers in Big Data
M1 - 1467222
ER -