Reinforcement Learning Under Partial Observability Guided by Learned Environment Models

Edi Muškardin*, Martin Tappler, Bernhard K. Aichernig, Ingo Pill

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

Reinforcement learning and planning under partial observability are notoriously difficult. In this setting, decision-making agents need to perform a sequence of actions with incomplete information about the underlying state of the system. Methods that can act in the presence of incomplete state information are therefore of special interest to the machine learning, planning, and control communities. In this paper, we consider environments that behave like partially observable Markov decision processes (POMDPs) with known discrete actions, while assuming no knowledge of their structure or transition probabilities. We propose an approach for reinforcement learning (RL) in such partially observable environments. Our approach combines Q-learning with IoAlergia, an automata learning method that learns Markov decision processes (MDPs). By learning MDP models of the environment from the experiences of the RL agent, we enable RL in partially observable domains without explicit, additional memory for tracking previous interactions to resolve ambiguities stemming from partial observability. Instead, we provide the RL agent with additional observations in the form of abstract environment states. By simulating new experiences on a learned model, we extend the agent's internal state representation, which in turn enables better decision-making under partial observability. In our evaluation, we report on the validity of our approach and its promising performance compared to six state-of-the-art deep RL techniques based on recurrent neural networks and fixed memory.
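The core idea described in the abstract — augmenting a Q-learning agent's state with the current state of a learned abstract MDP — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `ModelTracker` transition map stands in for a model learned by IoAlergia, and the environment interface (`reset()`/`step()`) is a hypothetical simplification.

```python
import random
from collections import defaultdict

class ModelTracker:
    """Tracks the current state of a learned abstract MDP (e.g., one produced
    by an automata learning method such as IoAlergia) by replaying the agent's
    actions and observations on it. The transition map is a placeholder."""
    def __init__(self, transitions, initial_state):
        # transitions: (model_state, action, observation) -> next model_state
        self.transitions = transitions
        self.initial = initial_state
        self.state = initial_state

    def reset(self):
        self.state = self.initial

    def step(self, action, observation):
        # Fall back to a designated "unknown" state when the learned model
        # has no transition matching the new experience.
        self.state = self.transitions.get((self.state, action, observation),
                                          "unknown")
        return self.state

def q_learning_with_model(env, tracker, actions, episodes=500,
                          alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning over augmented states (observation, model_state).
    `env` is assumed to expose reset() -> observation and
    step(action) -> (observation, reward, done)."""
    Q = defaultdict(float)
    for _ in range(episodes):
        obs = env.reset()
        tracker.reset()
        state = (obs, tracker.state)
        done = False
        while not done:
            # Epsilon-greedy action selection on the augmented state.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            obs, reward, done = env.step(action)
            # The model state disambiguates observations that look identical
            # but were reached along different histories.
            next_state = (obs, tracker.step(action, obs))
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = next_state
    return Q
```

Because the tracker replays the full action/observation history on the learned model, the augmented state `(observation, model_state)` carries history information without the agent maintaining an explicit memory of past interactions.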

Original language: English
Title of host publication: iFM 2023 - 18th International Conference, iFM 2023, Proceedings
Editors: Paula Herber, Anton Wijs
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 257-276
Number of pages: 20
ISBN (Print): 9783031477041
Publication status: Published - 2024
Event: 18th International Conference on integrated Formal Methods (iFM 2023) - Leiden, Netherlands
Duration: 13 Nov 2023 - 15 Nov 2023

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14300 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 18th International Conference on integrated Formal Methods
Country/Territory: Netherlands
City: Leiden
Period: 13/11/23 - 15/11/23

Keywords

  • Automata Learning
  • Markov Decision Processes
  • Partially Observable Markov Decision Processes
  • Reinforcement Learning

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
