Impact of Training Instance Selection on Domain-Specific Entity Extraction using BERT

Eileen Salhofer, Xing Lan Liu, Roman Kern

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

State-of-the-art performance on entity extraction tasks is achieved by supervised learning, specifically by fine-tuning pretrained language models such as BERT. As a result, annotating application-specific data is the first step in many use cases. However, no practical guidelines are available for annotation requirements. This work supports practitioners by empirically answering two frequently asked questions: (1) how many training samples to annotate, and (2) which examples to annotate. We found that BERT achieves up to 80% F1 when fine-tuned on only 70 training examples, especially in the biomedical domain. The key features for guiding the selection of high-performing training instances are identified to be pseudo-perplexity and sentence length. The best training dataset constructed using our proposed selection strategy achieves an F1 score equivalent to that of a random selection with twice the sample size. The need for only a small amount of training data implies cheaper implementations and opens the door to a wider range of applications.
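To make the selection idea concrete, below is a minimal sketch of how one might rank candidate sentences by the two features the abstract identifies, pseudo-perplexity and sentence length. This is an illustration only, not the authors' exact procedure: the function name `select_instances`, the tie-breaking rule (prefer low pseudo-perplexity, then longer sentences), and the example scores are all assumptions, and pseudo-perplexity values are taken as precomputed (e.g. with a masked language model).

```python
# Hypothetical sketch of a training-instance selection strategy.
# Assumes pseudo-perplexity scores were already computed for each
# candidate sentence (e.g. with a masked language model such as BERT).

def select_instances(sentences, pseudo_perplexities, budget):
    """Pick `budget` sentences, preferring low pseudo-perplexity and,
    as a tie-breaker, longer sentences (whitespace token count).
    The exact criterion in the paper may differ."""
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: (pseudo_perplexities[i], -len(sentences[i].split())),
    )
    return [sentences[i] for i in ranked[:budget]]

candidates = [
    "The patient was administered 5 mg of dexamethasone daily.",
    "Results shown.",
    "EGFR mutations were detected in tumor tissue samples.",
]
scores = [12.3, 45.1, 9.8]  # made-up pseudo-perplexity values
print(select_instances(candidates, scores, budget=2))
```

With the made-up scores above, the two lowest-perplexity sentences are selected, which matches the abstract's observation that a small, well-chosen subset can stand in for a much larger random sample.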
Original language: English
Title of host publication: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop
Publisher: Association for Computational Linguistics
Pages: 83-88
DOIs
Publication status: Published - 2022
Event: 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics: NAACL 2022 - Seattle, Hybrid Event, United States
Duration: 10 Jul 2022 - 15 Jul 2022

Conference

Conference: 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Abbreviated title: NAACL 2022
Country/Territory: United States
City: Hybrid Event
Period: 10/07/22 - 15/07/22
