Caitlin A. Rice (University of Pittsburgh), Barend Beekhuizen (faculty), Vladimir Dubrovsky (BSc), Suzanne Stevenson (faculty, Department of Computer Science), and Blair Armstrong (faculty) have a paper out in Behavior Research Methods, 51(3): "A comparison of homonym meaning frequency estimates derived from movie and television subtitles, free association, and explicit ratings."
Most words are ambiguous, and their interpretation depends on context. Advancing theories of ambiguity resolution is important for any general theory of language processing, and for resolving inconsistencies in observed ambiguity effects across experimental tasks. Focusing on homonyms (words such as bank with unrelated meanings, 'edge of a river' versus 'financial institution'), the present work advances theories and methods for estimating the relative frequency of their meanings, a factor that shapes observed ambiguity effects. We develop a new method for estimating meaning frequency based on the meaning of a homonym evoked in lines of movie and television subtitles, according to human raters. We also replicate and extend a measure of meaning frequency derived from the classification of free associates. We evaluate the internal consistency of these measures, compare them to published estimates based on explicit ratings of each meaning's frequency, and compare how well each set of norms predicts performance in lexical and semantic decision mega-studies. All measures have high internal consistency and show agreement, but each is also associated with unique variance, which may be explained by integrating cognitive theories of memory with the demands of different experimental methodologies. To derive the frequency estimates, we collected manual classifications of 533 homonyms across over 50,000 lines of subtitles, and of 357 homonyms across over 5,000 homonym–associate pairs. This database, publicly available at www.blairarmstrong.net/homonymnorms/, constitutes a novel resource for computational cognitive modeling and computational linguistics, and we offer suggestions for good practices in using it to train and test models on labeled data.
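For readers who want to work with norms of this kind, here is a minimal sketch of how relative meaning frequencies can be derived from manual classifications: each classified subtitle line counts as one observation for the meaning the rater selected, and a meaning's estimated frequency is its share of all classified lines for that homonym. The data format and labels below are invented for illustration and are not the released file format; consult the documentation at www.blairarmstrong.net/homonymnorms/ for the actual structure of the database.

```python
from collections import Counter, defaultdict

# Hypothetical rater classifications: one (homonym, meaning) pair per
# classified subtitle line. The released database's format may differ.
classifications = [
    ("bank", "financial institution"),
    ("bank", "financial institution"),
    ("bank", "edge of a river"),
    ("bank", "financial institution"),
]

def meaning_frequencies(pairs):
    """Estimate each meaning's relative frequency as the proportion of
    classified lines assigned to that meaning for a given homonym."""
    counts = defaultdict(Counter)
    for homonym, meaning in pairs:
        counts[homonym][meaning] += 1
    estimates = {}
    for homonym, meaning_counts in counts.items():
        total = sum(meaning_counts.values())
        estimates[homonym] = {m: n / total for m, n in meaning_counts.items()}
    return estimates

print(meaning_frequencies(classifications))
# {'bank': {'financial institution': 0.75, 'edge of a river': 0.25}}
```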