By Andries W. Coetzee
Feb 17, 2013
Professor Jon Brennan gave a presentation this past Friday at Michigan's Department of Psychology as part of the Cognition & Cognitive Neuroscience Forum colloquium series. Although our Department has long had close connections to the Department of Psychology (through joint appointments and collaborations with faculty like Julie Boland, Rick Lewis, Susan Gelman, Ioulia Kovelman, Nick Ellis, etc.), these ties were strengthened significantly when Jon joined our Department in the Fall of 2013. Jon's presentation was well attended by both linguists and psychologists, and sparked lively discussion.
In his talk, Jon showed how computational modelling can be used to tease apart the predictions that different models of grammar make about processing load. He then showed how these predictions can be tested against fMRI data. The title and abstract of Jon's presentation are given below.
Modeling competence and performance effects in sentence processing with a large-scale natural text
Comprehensive models of sentence processing must articulate both the set of possible end-states (syntactic structures) and an algorithm that maps from linearly presented words to one or more best-matching structure(s). The latter question has received much attention in psycholinguistics, while the former is traditionally the domain of syntactic theory. Importantly, relatively little is known about how different theories of the syntactic end-state (competence) bear on models of on-line sentence parsing (performance); implicit in much work is that little is lost when adopting relatively simplified structures at odds with state-of-the-art syntactic and semantic proposals. I discuss modeling work testing the degree to which the choice of possible syntactic structures, the grammar, affects estimates of processing complexity, and experimental (fMRI) data testing the consequences for a measure of human syntactic processing effort. The effect of grammar is compared with the effect of using different parsing algorithms. The models and experimental work draw on a natural text which provides a broad testing ground with a range of English sentence structures and vocabulary while also connecting with growing interest in modeling language processing under more ecologically valid conditions.