Abstract
With the growing interest among speech scientists in working with natural conversations, the popularity of using articulatory–acoustic features (AFs) as basic units has also increased, as these have proven to be more suitable than purely phone-based approaches. Even though the motivation for AF classification is driven by the properties of conversational speech, most new methods continue to be developed on read speech corpora (e.g., TIMIT). In this paper, we show in two studies that the improvements obtained on read speech do not always transfer to conversational speech. The first study compares four variants of acoustic parameters for AF classification of both read and conversational speech using support vector machines. Our experiments show that the proposed set of acoustic parameters substantially improves AF classification for read speech, but only marginally for conversational speech. The second study investigates whether labeling inaccuracies can be compensated for by a data selection approach. Again, although a substantial improvement was found with the data selection approach for read speech, this was not the case for conversational speech. Overall, these results suggest that we cannot continue to develop methods for one speech style and expect the improvements to transfer to other styles. Instead, the nature of the application data (here: read vs. conversational) should be taken into account already when defining the basic assumptions of a method (here: segmentation into phones), and not only when applying the method to the application data.
Original language | English |
---|---|
Pages (from-to) | 699-713 |
Number of pages | 15 |
Journal | International Journal of Speech Technology |
Volume | 20 |
Issue number | 3 |
DOIs | |
Publication status | Published - 1 Sept 2017 |
Keywords
- Articulatory–acoustic features
- Conversational speech
- Pronunciation variability
- Segments
ASJC Scopus subject areas
- Software
- Language and Linguistics
- Human-Computer Interaction
- Linguistics and Language
- Computer Vision and Pattern Recognition
Fingerprint
Dive into the research topics of 'Rethinking classification results based on read speech, or: why improvements do not always transfer to other speaking styles'.

Projects
CLCS - Cross-layer pronunciation modeling for conversational speech
Schuppler, B. (Principal Investigator (PI))
1/09/12 → 30/04/17
Project: Research project
Research output
Rethinking Reduction: Interdisciplinary Perspectives on Conditions, Mechanisms, and Domains for Phonetic Variation
Cangemi, F. (Editor), Clayards, M. (Editor), Niebuhr, O. (Editor), Schuppler, B. (Editor) & Zellers, M. (Editor), 2018, Berlin: de Gruyter Mouton. 306 p. (Phonetics and Phonology; vol. 25)
Research output: Book/Report › Book › peer-review