TS-MoCo: Time-series momentum contrast for self-supervised physiological representation learning

Philipp Hallgarten, David Bethge, Ozan Özdenizci, Tobias Grosse-Puppendahl, Enkelejda Kasneci

Publication: Chapter in book/report/conference proceedings › Conference paper › Peer-reviewed

Abstract

Limited availability of labeled physiological data often prohibits the use of powerful supervised deep learning models in the biomedical machine intelligence domain. We address this problem and propose a novel encoding framework that relies on self-supervised learning with momentum contrast to learn representations from multivariate time-series of various physiological domains without needing labels. Our model uses a transformer architecture that can be easily adapted to classification problems by optimizing a linear output classification layer. We experimentally evaluate our framework using two publicly available physiological datasets from different domains, i.e., human activity recognition from embedded inertial sensors and emotion recognition from electroencephalography. We show that our self-supervised learning approach can indeed learn discriminative features which can be exploited in downstream classification tasks. Our work enables the development of domain-agnostic intelligent systems that can effectively analyze multivariate time-series data from physiological domains.
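The momentum-contrast objective the abstract builds on can be sketched in a few lines. The following is a minimal, hedged illustration of the general MoCo idea (exponential-moving-average key encoder plus an InfoNCE contrastive loss), not the paper's actual implementation; the function names and the numpy formulation are assumptions for illustration.

```python
import numpy as np

def momentum_update(query_params, key_params, m=0.999):
    """EMA update of the key encoder's parameters from the query
    encoder's parameters -- the core of momentum contrast.
    (Illustrative sketch, not the paper's code.)"""
    return [m * k + (1.0 - m) * q for q, k in zip(query_params, key_params)]

def info_nce_loss(q, k_pos, queue, temperature=0.07):
    """InfoNCE loss for one query embedding.
    q: (d,) query; k_pos: (d,) positive key; queue: (K, d) negative keys."""
    q = q / np.linalg.norm(q)
    k_pos = k_pos / np.linalg.norm(k_pos)
    queue = queue / np.linalg.norm(queue, axis=1, keepdims=True)
    # Similarity of the query to the positive (index 0) and the negatives.
    logits = np.concatenate([[q @ k_pos], queue @ q]) / temperature
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # cross-entropy with the positive as target
```

In the paper's setting, two augmented views of the same physiological time-series window would play the roles of query and positive key, with the queue holding embeddings of other windows; a linear classifier is then trained on the frozen learned representations for the downstream task.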
Original language: English
Title: 31st European Signal Processing Conference (EUSIPCO)
Publication status: Published - 2023
Event: 31st European Signal Processing Conference: EUSIPCO 2023 - Helsinki, Finland
Duration: 4 September 2023 to 8 September 2023

Conference

Conference: 31st European Signal Processing Conference
Country/Territory: Finland
City: Helsinki
Period: 4/09/23 to 8/09/23

ASJC Scopus subject areas

  • Artificial Intelligence
  • Signal Processing
  • Biomedical Engineering

Fields of Expertise

  • Human- & Biotechnology
  • Information, Communication & Computing
