Compressed Hierarchical Representations for Multi-Task Learning and Task Clustering

Joao Machado de Freitas, Sebastian Berg, Bernhard C. Geiger, Manfred Mucke

Publication: Contribution to book/report/conference proceedings › Conference paper › Peer-reviewed

Abstract

In this paper, we frame homogeneous-feature multi-task learning (MTL) as a hierarchical representation learning problem, with one task-agnostic and multiple task-specific latent representations. Drawing inspiration from the information bottleneck principle and assuming an additive independent noise model between the task-agnostic and task-specific latent representations, we limit the information contained in each task-specific representation. It is shown that our resulting representations yield competitive performance for several MTL benchmarks. Furthermore, for certain setups, we show that the trained parameters of the additive noise model are closely related to the similarity of different tasks. This indicates that our approach yields a task-agnostic representation that is disentangled in the sense that its individual dimensions may be interpretable from a task-specific perspective.
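The additive independent noise model described above can be sketched minimally: each task-specific representation is the task-agnostic code plus Gaussian noise with a learned per-task scale, so a larger scale admits less task-specific information (in the spirit of the information bottleneck). The function name, noise parameterization, and task labels below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def task_specific_representation(z_shared, log_sigma, rng):
    """Additive independent noise model (sketch): perturb the task-agnostic
    code z_shared with Gaussian noise whose learned scale exp(log_sigma)
    limits the information retained in the task-specific representation."""
    sigma = np.exp(log_sigma)
    return z_shared + sigma * rng.standard_normal(z_shared.shape)

rng = np.random.default_rng(0)

# Hypothetical setup: three tasks sharing a 4-dimensional task-agnostic code.
# In the paper these noise scales are trained; here they are fixed by hand.
z_shared = rng.standard_normal(4)
log_sigmas = {"task_a": -2.0, "task_b": -2.1, "task_c": 1.0}

z_tasks = {t: task_specific_representation(z_shared, s, rng)
           for t, s in log_sigmas.items()}

# Tasks with similar, small noise scales stay close to the shared code;
# comparing trained scales is one way to read off task similarity.
for t, z in z_tasks.items():
    print(t, np.round(np.linalg.norm(z - z_shared), 2))
```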
Original language: English
Title: 2022 International Joint Conference on Neural Networks (IJCNN)
Number of pages: 8
DOIs
Publication status: Published - 18 July 2022
Event: 2022 International Joint Conference on Neural Networks: IJCNN 2022 - Padua, Italy
Duration: 18 July 2022 - 23 July 2022

Conference

Conference: 2022 International Joint Conference on Neural Networks
Short title: IJCNN 2022
Country/Territory: Italy
City: Padua
Period: 18/07/22 - 23/07/22

