Impact of Training Instance Selection on Domain-Specific Entity Extraction using BERT

Eileen Salhofer, Xing Lan Liu, Roman Kern

Publication: Contribution in book/report/conference proceedings › Conference paper › Peer-reviewed

Abstract

State-of-the-art performance on entity extraction tasks is achieved by supervised learning, specifically by fine-tuning pretrained language models such as BERT. As a result, annotating application-specific data is the first step in many use cases. However, no practical guidelines for annotation requirements are available. This work supports practitioners by empirically answering two frequently asked questions: (1) how many training samples to annotate, and (2) which examples to annotate. We find that BERT achieves up to 80% F1 when fine-tuned on only 70 training examples, especially in the biomedical domain. Pseudo-perplexity and sentence length are identified as the key features for selecting high-performing training instances. The best training dataset constructed using our proposed selection strategy achieves an F1 score equivalent to that of a random selection with twice the sample size. Requiring only a small amount of training data implies cheaper implementation and opens the door to a wider range of applications.
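The abstract names pseudo-perplexity and sentence length as the features guiding instance selection, but does not spell out the scoring rule. The sketch below is one illustrative (not the paper's) implementation: pseudo-perplexity is computed from per-token masked-LM log-probabilities as the exponentiated negative mean, and candidates are ranked assuming lower pseudo-perplexity (more fluent, in-domain text) is preferred, with sentence length as a tiebreaker. The function names and the ranking rule are assumptions for illustration.

```python
import math

def pseudo_perplexity(token_logprobs):
    # Pseudo-perplexity: exponentiated negative mean of the per-token
    # log-probabilities obtained by masking each token in turn with a
    # masked language model such as BERT.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def select_instances(candidates, k):
    # Hypothetical selection strategy: rank candidate sentences by
    # pseudo-perplexity (ascending), breaking ties by token count,
    # and keep the k top-ranked instances for annotation.
    # `candidates` is a list of (text, token_logprobs) pairs.
    ranked = sorted(
        candidates,
        key=lambda c: (pseudo_perplexity(c[1]), len(c[0].split())),
    )
    return [text for text, _ in ranked[:k]]
```

In practice the per-token log-probabilities would come from a pretrained masked LM (masking one token at a time and reading off the log-probability of the original token); here they are taken as given so the ranking logic stays self-contained.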
Original language: English
Title: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop
Publisher: Association for Computational Linguistics
Pages: 83-88
DOIs
Publication status: Published - 2022
Event: 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics: NAACL 2022 - Seattle, hybrid event, United States
Duration: 10 Jul 2022 - 15 Jul 2022

Conference

Conference: 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Short title: NAACL 2022
Country/Territory: United States
Location: Hybrid event
Period: 10/07/22 - 15/07/22
