On the Role of Priors in Bayesian Causal Learning

Research output: Contribution to journal › Article › peer-review

Abstract

In this work, we investigate causal learning of independent causal mechanisms from a Bayesian perspective. Confirming previous claims from the literature, we show in a didactically accessible manner that unlabeled data (i.e., cause realizations) do not improve the estimation of the parameters defining the mechanism. Furthermore, we highlight the importance of choosing appropriate priors for the cause and mechanism parameters, respectively. Specifically, we show that a factorized prior results in a factorized posterior, which resonates with Janzing and Schölkopf's definition of independent causal mechanisms via the Kolmogorov complexity of the involved distributions and with the concept of parameter independence of Heckerman et al.
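
As a minimal sketch of the factorization argument (the notation θ_C, θ_M and the i.i.d. data model below are illustrative, not taken verbatim from the paper): suppose the cause is governed by p(x | θ_C), the mechanism by p(y | x, θ_M), and the observations D = {(x_i, y_i)} are i.i.d. With a factorized prior p(θ_C, θ_M) = p(θ_C) p(θ_M), Bayes' rule gives

p(\theta_C, \theta_M \mid D) \propto \Big[\, p(\theta_C) \prod_i p(x_i \mid \theta_C) \Big] \cdot \Big[\, p(\theta_M) \prod_i p(y_i \mid x_i, \theta_M) \Big],

so the posterior factorizes as well, and additional unlabeled cause realizations only enter the first bracket, leaving the posterior over the mechanism parameters θ_M unchanged.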

Original language: English
Journal: IEEE Transactions on Artificial Intelligence
Early online date: 25 Dec 2024
DOIs
Publication status: E-pub ahead of print - 25 Dec 2024

Keywords

  • Bayesian inference
  • causal learning
  • independent causal mechanism

ASJC Scopus subject areas

  • Computer Science Applications
  • Artificial Intelligence

