Fine-tuning language model embeddings to reveal domain knowledge: An explainable artificial intelligence perspective on medical decision making

Ceca Kraišniković, Robert Harb, Markus Plass*, Wael Al Zoughbi, Andreas Holzinger, Heimo Müller

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Integrating large language models (LLMs) to retrieve targeted medical knowledge from electronic health records enables significant advancements in medical research. However, recognizing the challenges associated with using LLMs in healthcare is essential for successful implementation. One challenge is that medical records combine unstructured textual information with highly sensitive personal data. This, in turn, highlights the need for explainable Artificial Intelligence (XAI) methods to better understand how LLMs function in the medical domain. In this study, we propose a novel XAI tool to accelerate data-driven cancer research. We apply the Bidirectional Encoder Representations from Transformers (BERT) model to German-language pathology reports, examining the effects of domain-specific language adaptation and fine-tuning. We demonstrate our model on a real-world pathology dataset, analyzing the contextual representations of diagnostic reports. By illustrating decisions made by fine-tuned models, we provide decision values that can be applied in medical research. To address interpretability, we conduct a performance evaluation of the classifications generated by our fine-tuned model, as assessed by an expert pathologist. In domains such as medicine, inspection of the medical knowledge map in conjunction with expert evaluation reveals valuable information about how contextual representations of key disease features are categorized. This ultimately benefits data structuring and labeling, and paves the way for even more advanced approaches to XAI that combine text with other input modalities, such as images, which are then applicable to various engineering problems.
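The embedding analysis the abstract describes can be sketched in miniature. Assuming report-level vectors have already been extracted from a fine-tuned German BERT model, proximity in embedding space indicates which diagnostic phrases the model treats as related. The vectors and report labels below are illustrative toy data, not taken from the paper:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy stand-ins for report-level embeddings of pathology report snippets;
# a real pipeline would obtain these from a fine-tuned German BERT model.
embeddings = {
    "adenocarcinoma, colon": [0.90, 0.10, 0.00],
    "colorectal carcinoma":  [0.85, 0.20, 0.05],
    "benign nevus, skin":    [0.05, 0.10, 0.95],
}

def nearest_report(query, corpus):
    """Return the other corpus entry whose embedding is closest to the query's."""
    return max(
        (k for k in corpus if k != query),
        key=lambda k: cosine_similarity(corpus[k], corpus[query]),
    )

print(nearest_report("adenocarcinoma, colon", embeddings))  # → colorectal carcinoma
```

In a knowledge-map setting, pairwise similarities like these feed a clustering or projection step (e.g., t-SNE or UMAP) so that an expert can inspect how disease features group together.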

Original language: English
Article number: 109561
Journal: Engineering Applications of Artificial Intelligence
Volume: 139
DOIs
Publication status: Published - Jan 2025

Keywords

  • Analysis of embeddings
  • Bidirectional Encoder Representations from Transformers model
  • Digital pathology
  • Domain-language adaptation
  • Fine-tuning
  • Interpretable medical decision scores
  • Language model for German
  • Large language models in pathology
  • Pathology reports
  • Pathology-specific tasks

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering
