Current State and Future Directions for Learning in Biological Recurrent Neural Networks: A Perspective Piece

Roy Henha Eyono*, Ellen Boven, Anna Ghosh, Joseph Pemberton, Franz Scherr, Claudia Clopath, Rui Ponte Costa, Wolfgang Maass, Blake A Richards, Christina Savin, Katharina Wilmes, Luke Y Prince

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This perspective piece came about through the Generative Adversarial Collaboration (GAC) series of workshops organized by the Computational Cognitive Neuroscience (CCN) conference in 2020. We brought together a number of experts from the field of theoretical neuroscience to debate emerging issues in our understanding of how learning is implemented in biological recurrent neural networks. Here, we give a brief review of the common assumptions about biological learning and the corresponding findings from experimental neuroscience, and contrast them with the efficiency of gradient-based learning in the recurrent neural networks commonly used in artificial intelligence. We then outline the key issues discussed in the workshop: synaptic plasticity, neural circuits, the theory-experiment divide, and objective functions. Finally, we conclude with recommendations for both theoretical and experimental neuroscientists when designing new studies that could help to bring clarity to these issues.
Original language: English
Number of pages: 10
Journal: Neurons, Behavior, Data Analysis, and Theory
Volume: 2022
Issue number: 1
DOIs
Publication status: Published - 2022

Fields of Expertise

  • Information, Communication & Computing
