
Distributional learning of appearance.
PLoS One 2013; 8(2):e58074

Abstract

Opportunities for associationist learning of word meaning, where a word is heard or read contemporaneously with information being available on its meaning, are considered too infrequent to account for the rate of language acquisition in children. It has been suggested that additional learning could occur in a distributional mode, where information is gleaned from the distributional statistics (word co-occurrence etc.) of natural language. Such statistics are relevant to meaning because of the Distributional Principle that 'words of similar meaning tend to occur in similar contexts'. Computational systems, such as Latent Semantic Analysis, have substantiated the viability of distributional learning of word meaning, by showing that semantic similarities between words can be accurately estimated from analysis of the distributional statistics of a natural language corpus. We consider whether appearance similarities can also be learnt in a distributional mode. As grounds for such a mode we advance the Appearance Hypothesis that 'words with referents of similar appearance tend to occur in similar contexts'. We assess the viability of such learning by looking at the performance of a computer system that interpolates, on the basis of distributional and appearance similarity, from words that it has been explicitly taught the appearance of, in order to identify and name objects that it has not been taught about. Our experiment uses a test set of 660 simple concrete noun words. Appearance information on words is modelled using sets of images of examples of the word. Distributional similarity is computed from a standard natural language corpus. Our computational results support the viability of distributional learning of appearance.
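The distributional similarity the abstract relies on can be illustrated with a toy sketch (not the authors' system, which uses a large standard corpus): build co-occurrence vectors from a context window and compare words by cosine similarity, so words appearing in similar contexts score as similar. All function names and the miniature corpus here are illustrative assumptions.

```python
import math
from collections import Counter

def cooccurrence_vectors(corpus, window=2):
    """For each word, count the words appearing within +/-window positions.
    This is a minimal stand-in for corpus-scale distributional statistics."""
    vecs = {}
    for sent in corpus:
        for i, w in enumerate(sent):
            ctx = vecs.setdefault(w, Counter())
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    ctx[sent[j]] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy corpus: 'cat' and 'dog' occur in similar contexts, 'stocks' does not.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "stocks fell on the market".split(),
]
vecs = cooccurrence_vectors(corpus)
print(cosine(vecs["cat"], vecs["dog"]))     # high: shared contexts
print(cosine(vecs["cat"], vecs["stocks"]))  # lower: different contexts
```

In the paper's setting, similarity scores of this kind (computed from a real corpus) are what allow interpolation from words whose appearance has been taught to words whose appearance has not.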

Authors

Computer Science, University College London, London, United Kingdom. L.Griffin@cs.ucl.ac.uk

Pub Type(s)

Journal Article

Language

eng

PubMed ID

23460927

Citation

Griffin, Lewis D., et al. "Distributional Learning of Appearance." PloS One, vol. 8, no. 2, 2013, pp. e58074.
Griffin LD, Wahab MH, Newell AJ. Distributional learning of appearance. PLoS ONE. 2013;8(2):e58074.
Griffin, L. D., Wahab, M. H., & Newell, A. J. (2013). Distributional learning of appearance. PloS One, 8(2), e58074. https://doi.org/10.1371/journal.pone.0058074
Griffin LD, Wahab MH, Newell AJ. Distributional Learning of Appearance. PLoS ONE. 2013;8(2):e58074. PubMed PMID: 23460927.
* Article titles in AMA citation format should be in sentence-case
TY - JOUR
T1 - Distributional learning of appearance.
AU - Griffin,Lewis D
AU - Wahab,M Husni
AU - Newell,Andrew J
Y1 - 2013/02/27/
PY - 2012/06/18/received
PY - 2013/01/30/accepted
PY - 2013/3/6/entrez
PY - 2013/3/6/pubmed
PY - 2013/9/4/medline
SP - e58074
EP - e58074
JF - PloS one
JO - PLoS ONE
VL - 8
IS - 2
N2 - Opportunities for associationist learning of word meaning, where a word is heard or read contemporaneously with information being available on its meaning, are considered too infrequent to account for the rate of language acquisition in children. It has been suggested that additional learning could occur in a distributional mode, where information is gleaned from the distributional statistics (word co-occurrence etc.) of natural language. Such statistics are relevant to meaning because of the Distributional Principle that 'words of similar meaning tend to occur in similar contexts'. Computational systems, such as Latent Semantic Analysis, have substantiated the viability of distributional learning of word meaning, by showing that semantic similarities between words can be accurately estimated from analysis of the distributional statistics of a natural language corpus. We consider whether appearance similarities can also be learnt in a distributional mode. As grounds for such a mode we advance the Appearance Hypothesis that 'words with referents of similar appearance tend to occur in similar contexts'. We assess the viability of such learning by looking at the performance of a computer system that interpolates, on the basis of distributional and appearance similarity, from words that it has been explicitly taught the appearance of, in order to identify and name objects that it has not been taught about. Our experiment uses a test set of 660 simple concrete noun words. Appearance information on words is modelled using sets of images of examples of the word. Distributional similarity is computed from a standard natural language corpus. Our computational results support the viability of distributional learning of appearance.
SN - 1932-6203
UR - https://www.unboundmedicine.com/medline/citation/23460927/Distributional_learning_of_appearance_
L2 - http://dx.plos.org/10.1371/journal.pone.0058074
DB - PRIME
DP - Unbound Medicine
ER -