TY - JOUR
T1 - Assessing trustworthy AI
T2 - Technical and legal perspectives of fairness in AI
AU - Kattnig, Markus
AU - Angerschmid, Alessa
AU - Reichel, Thomas
AU - Kern, Roman
N1 - Publisher Copyright:
© 2024 The Authors
PY - 2024/11
Y1 - 2024/11
N2 - Artificial Intelligence systems are increasingly used in many domains, from decision support systems to autonomous vehicles. This widespread use raises concerns about their potential impact on human safety and autonomy, especially regarding fair decision-making: it must be ensured that decisions made by such systems are fair and unbiased. Our research primarily concentrates on aspects of non-discrimination, encompassing both group and individual fairness. Although many methods for bias mitigation exist, few of them meet existing legal requirements, and unclear legal frameworks further aggravate this problem. To address this issue, this paper investigates current state-of-the-art methods for bias mitigation and contrasts them with the legal requirements, with the scope limited to the European Union and a particular focus on the AI Act. The paper first examines state-of-the-art approaches to ensuring AI fairness and then outlines various fairness measures. It discusses the challenges of defining fairness and the need for a comprehensive legal methodology to address fairness in AI systems. The paper contributes to the ongoing discussion on fairness in AI and highlights the importance of meeting legal requirements to ensure fairness and non-discrimination for all data subjects.
AB - Artificial Intelligence systems are increasingly used in many domains, from decision support systems to autonomous vehicles. This widespread use raises concerns about their potential impact on human safety and autonomy, especially regarding fair decision-making: it must be ensured that decisions made by such systems are fair and unbiased. Our research primarily concentrates on aspects of non-discrimination, encompassing both group and individual fairness. Although many methods for bias mitigation exist, few of them meet existing legal requirements, and unclear legal frameworks further aggravate this problem. To address this issue, this paper investigates current state-of-the-art methods for bias mitigation and contrasts them with the legal requirements, with the scope limited to the European Union and a particular focus on the AI Act. The paper first examines state-of-the-art approaches to ensuring AI fairness and then outlines various fairness measures. It discusses the challenges of defining fairness and the need for a comprehensive legal methodology to address fairness in AI systems. The paper contributes to the ongoing discussion on fairness in AI and highlights the importance of meeting legal requirements to ensure fairness and non-discrimination for all data subjects.
KW - AI Act
KW - Bias
KW - Fairness
KW - Group fairness
KW - Individual fairness
KW - Non-discrimination
KW - Trustworthy AI
UR - http://www.scopus.com/inward/record.url?scp=85204221651&partnerID=8YFLogxK
U2 - 10.1016/j.clsr.2024.106053
DO - 10.1016/j.clsr.2024.106053
M3 - Article
AN - SCOPUS:85204221651
SN - 0267-3649
VL - 55
JO - Computer Law and Security Review
JF - Computer Law and Security Review
M1 - 106053
ER -