Grantee: The University of Edinburgh, Edinburgh, United Kingdom
Researcher: Sharon Goldwater, Ph.D.
Grant Title: Understanding synergies in language acquisition through computational modeling
Program Area: Understanding Human Cognition
Grant Type: Scholar Award
Year Awarded: 2013
Duration: 6 years
Language acquisition is a puzzle: exposed to different samples of the same language, how is it that different children go on to generalize in the same ways, producing new utterances that all other speakers of the same language can understand? Traditional explanations lean towards one of two extremes. Some researchers argue that language acquisition is a highly specialized process relying on detailed domain-specific learning mechanisms and cognitive constraints defining the space of possible grammars. Others claim that domain-general statistical learning mechanisms are sufficient to solve the problem. While statistical learning is surely involved, even a powerful statistical learner must be constrained (biased) somehow, since any finite data set contains many different statistical regularities, and the learner must know which are relevant to the task at hand.
In my research, I aim to discover what cognitive constraints underlie language acquisition. These constraints determine the kinds of statistical information that are attended to, how different sources of information are combined, and what mental representations are used to store and generalize over this information. My working hypothesis is that successful language learning results from a combination of probabilistic inference (a domain-general statistical learning mechanism) and structured mental representations (potentially including some domain-specific ones). Furthermore, structured correspondences between different levels of linguistic structure, such as words and phonemes or syntax and semantics, are critical for learning at all levels. Any particular level of structure cannot be learned successfully in isolation; rather, partial knowledge at different levels is combined synergistically to aid learning at all levels.
To formalize and test this hypothesis, I develop computational models that simulate learners with different kinds of constraints, exposed to different linguistic input (normally, transcripts of naturalistic child-directed speech). I then compare the output of these models to results from behavioral and observational studies of children. To date, my group has developed models implementing previously proposed synergistic interactions (e.g., syntactic-semantic learning) as well as illustrating new ones (e.g., lexical-phonetic learning). These models capture important behavioral effects and demonstrate that learning at multiple levels of structure need not entail a temporal ordering (first learn A, then use A to learn B). Rather, A and B can be learned simultaneously by using partial information about each to help learn the other.
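The "learn A and B simultaneously" idea can be sketched with a toy expectation-maximization loop. This is a deliberately simplified stand-in, not any of the group's actual models: all data and parameter values here are hypothetical. Level A (which category each sound token belongs to) and level B (where the categories lie) start out unknown, and each iteration uses the current partial estimate of one level to refine the other.

```python
import math
import random

random.seed(0)

# Hypothetical 1-D data from two overlapping categories (think of two
# vowel sounds); neither the token labels nor the category means are
# known to the learner in advance.
data = ([random.gauss(-2.0, 0.5) for _ in range(50)] +
        [random.gauss(2.0, 0.5) for _ in range(50)])

means = [-0.5, 0.5]  # poor initial guess at level B (category means)
for _ in range(25):
    # Level A: soft category assignments for each token, given the
    # current (partial) estimate of the category means.
    resp = []
    for x in data:
        w0 = math.exp(-((x - means[0]) ** 2) / 0.5)
        w1 = math.exp(-((x - means[1]) ** 2) / 0.5)
        resp.append(w0 / (w0 + w1))
    # Level B: re-estimate the category means, given the current
    # (partial) soft assignments.
    n0 = sum(resp)
    n1 = len(data) - n0
    means[0] = sum(r * x for r, x in zip(resp, data)) / n0
    means[1] = sum((1 - r) * x for r, x in zip(resp, data)) / n1

print(sorted(round(m, 2) for m in means))  # means settle near -2 and 2
```

Neither level is learned first: the assignments are always conditioned on the latest means and vice versa, yet both converge, which is the synergistic bootstrapping pattern described above in miniature.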
I plan to extend this work in three ways. First, by performing additional simulations on diverse languages, to test the universality of these models and to investigate how differences between languages affect the rate or patterns of learning. Second, by modeling interactions among more than two levels of structure, where the potential correspondences between levels are more complex but the potential for synergy is even greater. Third, by relaxing some simplifying assumptions about the input to the models, so that they more closely simulate the challenges overcome by real children. Together, these extensions will further test my basic hypothesis and yield greater insight into the constraints needed for successful language acquisition.