TY - JOUR
T1 - Actionable Explainable AI (AxAI): A Practical Example with Aggregation Functions for Adaptive Classification and Textual Explanations for Interpretable Machine Learning
T2 - Machine Learning and Knowledge Extraction
AU - Saranti, Anna
AU - Hudec, Miroslav
AU - Mináriková, Erika
AU - Takáč, Zdenko
AU - Großschedl, Udo
AU - Koch, Christoph
AU - Pfeifer, Bastian
AU - Angerschmid, Alessa
AU - Holzinger, Andreas
N1 - Funding Information:
This work has been supported by the Austrian Science Fund (FWF), Project P-32554 "explainable Artificial Intelligence". It was also partially supported by the KEGA project No. 025EU-4/2021 of the Ministry of Education, Science, Research and Sport of the Slovak Republic.
Publisher Copyright:
© 2022 by the authors.
PY - 2022/12
AB - In many domains of daily life (e.g., agriculture, forestry, health), both laymen and experts need to classify entities into two classes (yes/no, good/bad, sufficient/insufficient, benign/malign, etc.). For many entities, this decision is difficult, and we need another class called “maybe”, which carries a quantifiable tendency toward one of the two opposites. Human domain experts are often able to mark any entity, place it in a different class, and adjust the position of the slope in the class. Moreover, they can often explain the classification space linguistically, depending on their individual domain experience and previous knowledge. We consider this human-in-the-loop extremely important and call our approach actionable explainable AI. Consequently, the parameters of the functions are adapted to these requirements, and the solution is explained to the domain experts accordingly. Specifically, this paper contains three novelties going beyond the state of the art: (1) A novel method for detecting the appropriate parameter range of the averaging function that treats the slope in the “maybe” class, along with a proposal for a better generalisation than the existing solution. (2) The insight that, for a given problem, the family of t-norms and t-conorms covering the whole range of nilpotency is suitable, because we need a clear “no” or “yes” not only for the borderline cases. Consequently, we adopted the Schweizer–Sklar family of t-norms or t-conorms in ordinal sums. (3) A new fuzzy quasi-dissimilarity function for classification into three classes: main difference, irrelevant difference, and partial difference. We conducted all of our experiments with real-world datasets.
KW - actionable explainable AI
KW - aggregation functions
KW - classification
KW - continuous XOR-problem
KW - interpretable machine learning
KW - ordinal sums
UR - http://www.scopus.com/inward/record.url?scp=85144729975&partnerID=8YFLogxK
DO - 10.3390/make4040047
M3 - Article
AN - SCOPUS:85144729975
SN - 2504-4990
VL - 4
SP - 924
EP - 953
JO - Machine Learning and Knowledge Extraction
JF - Machine Learning and Knowledge Extraction
IS - 4
ER -