A Reinforcement Learning Environment For Job-Shop Scheduling

Pierre Paul Alain Tassel*, Martin Gebser, Konstantin Schekotihin

*Corresponding author for this work

Publication: Conference contribution › Paper › Peer-reviewed

Abstract

Scheduling is a fundamental task in many automated systems; e.g., optimal schedules for the machines of a job shop reduce production costs and waste. However, finding such schedules is often intractable and cannot be achieved by Combinatorial Optimization Problem (COP) methods within a given time limit. Recent advances in Deep Reinforcement Learning (DRL) for learning complex behavior open up new possibilities for applying DRL to COPs. This paper presents an efficient DRL environment for Job-Shop Scheduling, an important problem in the field. Furthermore, we design a meaningful and compact state representation as well as a novel, simple dense reward function that is closely related to the sparse make-span minimization criterion used by COP methods.
We demonstrate that our approach significantly outperforms existing DRL methods on classic benchmark instances, coming close to state-of-the-art COP approaches.
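To make the reward design concrete, the following is a minimal, hypothetical Python sketch of a job-shop environment whose dense reward is the negative increase of the partial make-span. The class, state layout, and dispatching rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


class SimpleJobShopEnv:
    """Hypothetical job-shop environment sketch (not the paper's code).

    processing_times[j, o] is the duration of operation o of job j and
    machines[j, o] is the machine it requires; a job's operations run in
    order. An action picks a job whose next operation is scheduled as
    early as possible. The dense reward is the negative growth of the
    partial make-span.
    """

    def __init__(self, processing_times, machines):
        self.processing_times = np.asarray(processing_times, dtype=float)
        self.machines = np.asarray(machines, dtype=int)
        self.num_jobs, self.num_ops = self.processing_times.shape
        self.num_machines = int(self.machines.max()) + 1
        self.reset()

    def reset(self):
        self.next_op = np.zeros(self.num_jobs, dtype=int)      # next operation index per job
        self.job_ready = np.zeros(self.num_jobs)                # time each job becomes free
        self.machine_ready = np.zeros(self.num_machines)        # time each machine becomes free
        self.makespan = 0.0
        return self._observation()

    def _observation(self):
        # Compact state: per-job progress and ready times, per-machine ready times.
        return np.concatenate([
            self.next_op / self.num_ops,
            self.job_ready,
            self.machine_ready,
        ])

    def step(self, job):
        assert self.next_op[job] < self.num_ops, "job already finished"
        op = self.next_op[job]
        machine = self.machines[job, op]
        start = max(self.job_ready[job], self.machine_ready[machine])
        end = start + self.processing_times[job, op]

        self.job_ready[job] = end
        self.machine_ready[machine] = end
        self.next_op[job] += 1

        # Dense reward: negative increase of the partial make-span.
        new_makespan = max(self.makespan, end)
        reward = -(new_makespan - self.makespan)
        self.makespan = new_makespan

        done = bool(np.all(self.next_op == self.num_ops))
        return self._observation(), reward, done, {"makespan": self.makespan}
```

Because the per-step rewards telescope, the episode return equals the negative final make-span, so maximizing the return coincides with the sparse COP objective of minimizing the make-span.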
Original language: English
Publication status: Published - Aug. 2021
Event: 2021 PRL Workshop – Bridging the Gap Between AI Planning and Reinforcement Learning - Virtual, China
Duration: 5 Aug. 2021 – 6 Aug. 2021

Conference

Conference: 2021 PRL Workshop – Bridging the Gap Between AI Planning and Reinforcement Learning
Country/Territory: China
Location: Virtual
Period: 5/08/21 – 6/08/21
