Reinforcement Learning Under Partial Observability Guided by Learned Environment Models

Edi Muškardin*, Martin Tappler, Bernhard K. Aichernig, Ingo Pill

*Corresponding author for this work

Publication: Contribution in book/report/conference proceedings › Conference paper › Peer-reviewed

Abstract

Reinforcement learning and planning under partial observability is notoriously difficult. In this setting, decision-making agents need to perform a sequence of actions with incomplete information about the underlying state of the system. As such, methods that can act in the presence of incomplete state information are of special interest to machine learning, planning, and control communities. In the scope of this paper, we consider environments that behave like a partially observable Markov decision process (POMDP) with known discrete actions, while assuming no knowledge about its structure or transition probabilities. We propose an approach for reinforcement learning (RL) in such partially observable environments. Our approach combines Q-learning with IoAlergia, an automata learning method that can learn Markov decision processes (MDPs). By learning MDP models of the environment from the experiences of the RL agent, we enable RL in partially observable domains without explicit, additional memory to track previous interactions for dealing with ambiguities stemming from partial observability. We instead provide the RL agent with additional observations in the form of abstract environment states. By simulating new experiences on a learned model we extend the agent’s internal state representation, which in turn enables better decision-making in the presence of partial observability. In our evaluation we report on the validity of our approach and its promising performance in comparison to six state-of-the-art deep RL techniques with recurrent neural networks and fixed memory.
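The core idea of the abstract — augmenting a Q-learning agent's state with the abstract state of a learned environment model so that observationally aliased situations become distinguishable — can be illustrated with a minimal sketch. The following is not the paper's implementation: the IoAlergia-learned MDP is replaced by a hand-coded deterministic tracker (`model_state`), and the `TMaze` environment, hyperparameters, and all names are illustrative assumptions. It shows only why keying Q-values on (model state, observation) pairs, rather than on observations alone, resolves the ambiguity.

```python
import random
from collections import defaultdict

# Toy T-maze: the initial observation ('up' or 'down') tells the agent which
# way to turn at the junction, but the junction itself is observationally
# aliased -- a memoryless, observation-only policy cannot exceed 50% success.
class TMaze:
    def reset(self):
        self.signal = random.choice(["up", "down"])
        self.at_junction = False
        return self.signal  # initial observation reveals the rewarded side

    def step(self, action):
        if not self.at_junction:
            self.at_junction = True
            return "junction", 0.0, False  # (observation, reward, done)
        correct = 0 if self.signal == "up" else 1
        return "terminal", (1.0 if action == correct else 0.0), True

# Hand-coded stand-in for the abstract state of a learned MDP model:
# a deterministic tracker that remembers the last informative observation.
def model_state(prev_state, obs):
    if obs in ("up", "down"):
        return "saw_" + obs
    return prev_state

def train(episodes=2000, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    random.seed(seed)
    env = TMaze()
    Q = defaultdict(lambda: [0.0, 0.0])  # key: (model state, observation)
    for _ in range(episodes):
        obs = env.reset()
        m = model_state("init", obs)
        done = False
        while not done:
            s = (m, obs)
            a = (random.randrange(2) if random.random() < eps
                 else max((0, 1), key=lambda x: Q[s][x]))
            obs2, r, done = env.step(a)
            m2 = model_state(m, obs2)
            target = r if done else r + gamma * max(Q[(m2, obs2)])
            Q[s][a] += alpha * (target - Q[s][a])
            obs, m = obs2, m2
    return Q

def greedy_success_rate(Q):
    # Evaluate the greedy policy for both possible initial signals.
    ok = 0
    for signal in ("up", "down"):
        m = model_state(model_state("init", signal), "junction")
        a = max((0, 1), key=lambda x: Q[(m, "junction")][x])
        ok += a == (0 if signal == "up" else 1)
    return ok / 2

Q = train()
print(greedy_success_rate(Q))  # converges to 1.0 with the augmented state
```

Because the tracker maps the two histories to distinct abstract states (`saw_up` vs. `saw_down`), the aliased junction observation splits into two Q-table entries, and tabular Q-learning learns the correct turn for each — without any explicit, additional memory in the agent itself, which mirrors the role the learned MDP state plays in the proposed approach.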

Original language: English
Title: iFM 2023 - 18th International Conference, iFM 2023, Proceedings
Editors: Paula Herber, Anton Wijs
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 257-276
Number of pages: 20
ISBN (print): 9783031477041
DOIs
Publication status: Published - 2024
Event: 18th International Conference on integrated Formal Methods: iFM 2023 - Leiden, Netherlands
Duration: 13 Nov 2023 - 15 Nov 2023

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14300 LNCS
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Conference

Conference: 18th International Conference on integrated Formal Methods
Country/Territory: Netherlands
City: Leiden
Period: 13/11/23 - 15/11/23

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science

