Abstract
The widespread use of artificial intelligence (AI) in more and more real-world applications is accompanied by challenges that are not obvious at first glance. In machine learning, class imbalance, characterized by a skewed distribution of class frequencies, is one key challenge that poses essential problems for many common machine learning algorithms. This challenge has led to the development of various countermeasures to tackle class imbalance. Although these countermeasures improve the prediction performance of models, they often jeopardize interpretability for both AI users and AI experts. Interpretability is of utmost importance especially in sensitive domains where class imbalance is regularly present, for example, medicine, meteorology, or fraud detection. In this paper, we evaluate the effect of class imbalance countermeasures on interpretability with methods of explainable AI (XAI). Our work contributes to a more in-depth understanding of these countermeasures and connects the research fields of class imbalance learning and XAI. Our experimental results suggest that feature selection and cost-sensitive approaches are the only class imbalance countermeasures that preserve interpretability for both AI users and AI experts. In contrast, resampling and most classification algorithms for imbalance learning are not suitable in settings where knowledge should be derived and where interpretability is a key requirement.
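As a minimal illustration of why cost-sensitive approaches can preserve interpretability: a common variant reweights each class in the loss by inverse class frequency (n_samples / (n_classes * n_c)), leaving the training data and the model structure untouched. The sketch below is not taken from the paper; it only shows this standard "balanced" weighting scheme, with a hypothetical helper name.

```python
# Hypothetical sketch: "balanced" class weights via inverse class frequency.
# The data stays unmodified (unlike resampling), so a model trained with
# these weights keeps its original, interpretable feature space.
from collections import Counter

def balanced_class_weights(labels):
    """Return {class: n_samples / (n_classes * n_c)} for each class."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * cnt) for cls, cnt in counts.items()}

labels = [0] * 90 + [1] * 10  # 9:1 imbalanced toy dataset
weights = balanced_class_weights(labels)
print(weights)  # the minority class receives the larger weight
```

Such weights are typically passed to a learner's loss function (e.g., a `class_weight` parameter), so misclassifying minority-class samples is penalized more heavily without resampling the data.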
Original language | English |
---|---|
Pages (from - to) | 45342-45358 |
Number of pages | 17 |
Journal | IEEE Access |
Volume | 12 |
DOIs | |
Publication status | Published - 2024 |
ASJC Scopus subject areas
- General Engineering
- General Materials Science
- General Computer Science