Learning task-specific activation functions using genetic programming

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

Deep Neural Networks have been shown to be beneficial for a variety of tasks, in particular allowing for end-to-end learning and reducing the need for manual design decisions. However, many parameters still have to be chosen manually in advance, raising the need to optimize them. One important but often ignored choice is the selection of a proper activation function. In this paper, we tackle this problem by learning task-specific activation functions using ideas from genetic programming. We propose to construct piece-wise activation functions (with separate sub-functions for the negative and the positive part) and introduce new genetic operators to combine functions more efficiently. The experimental results for multi-class classification demonstrate that specific activation functions are learned for different tasks, also outperforming widely used generic baselines.
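To make the idea concrete, below is a minimal sketch of how piece-wise activation candidates (one sub-function applied to the negative part of the input, another to the positive part) and simple genetic operators could be represented. The primitive set and the `crossover`/`mutate` operators shown here are illustrative assumptions, not the operators proposed in the paper.

```python
import numpy as np

# Illustrative primitive set for candidate sub-functions; the paper's actual
# primitives are not specified here, so this set is an assumption.
PRIMITIVES = {
    "identity": lambda x: x,
    "tanh": np.tanh,
    "relu": lambda x: np.maximum(x, 0.0),
    "exp_m1": np.expm1,        # exp(x) - 1, an ELU-like negative branch
    "scaled": lambda x: 0.5 * x,
}

def make_piecewise_activation(neg_name, pos_name):
    """Build a piece-wise activation: one primitive for x < 0, another for x >= 0."""
    f_neg, f_pos = PRIMITIVES[neg_name], PRIMITIVES[pos_name]
    def activation(x):
        x = np.asarray(x, dtype=float)
        return np.where(x < 0.0, f_neg(x), f_pos(x))
    activation.genes = (neg_name, pos_name)  # genotype kept for the GP operators
    return activation

def crossover(parent_a, parent_b, rng):
    """Hypothetical recombination: the child inherits each branch from a random parent."""
    neg = parent_a.genes[0] if rng.random() < 0.5 else parent_b.genes[0]
    pos = parent_a.genes[1] if rng.random() < 0.5 else parent_b.genes[1]
    return make_piecewise_activation(neg, pos)

def mutate(parent, rng):
    """Hypothetical mutation: replace one branch with a randomly drawn primitive."""
    neg, pos = parent.genes
    if rng.random() < 0.5:
        neg = rng.choice(list(PRIMITIVES))
    else:
        pos = rng.choice(list(PRIMITIVES))
    return make_piecewise_activation(neg, pos)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    elu_like = make_piecewise_activation("exp_m1", "identity")   # ELU-like candidate
    leaky_like = make_piecewise_activation("scaled", "relu")     # LeakyReLU-like candidate
    child = crossover(elu_like, leaky_like, rng)
    x = np.array([-2.0, -0.5, 0.0, 1.5])
    print(elu_like(x), child.genes)
```

In a full genetic-programming loop, such candidates would be evaluated by training a network with each activation on the target task and using the resulting validation performance as the fitness signal; the evaluation details here are likewise assumed rather than taken from the paper.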
Original language: English
Title of host publication: Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Publisher: SciTePress
Pages: 533-540
Volume: 5
ISBN (Print): 978-989-758-354-4
DOIs
Publication status: Published - 2019
Event: 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications: VISIGRAPP 2019 - Prague, Czech Republic
Duration: 25 Feb 2019 - 27 Feb 2019

Conference

Conference: 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Abbreviated title: VISAPP 2019
Country/Territory: Czech Republic
City: Prague
Period: 25/02/19 - 27/02/19

