Yearly Archives: 2015

Sanders in Linguistics, Friday 11/13, 2:15 p.m.

Lisa Sanders of UMass Psychology will present ERP research on phonological learning experiments in the Linguistics Sound Workshop, Friday 11/13 at 2:15 p.m. This research includes a recently submitted paper, whose title, authors and abstract are listed below.

Event-related potential evidence of abstract phonological learning in the laboratory

Lisa Sanders, Claire Moore-Cantwell, Joe Pater, Robert Staubs and Benjamin Zobel

Abstract. The experimental study of artificial language learning has become a widely used means of investigating the predictions of theories of phonology and of learning. Although much is now known about the generalizations that learners make from various kinds of data, relatively little is known about how those generalizations are cognitively encoded. Models of phonological knowledge fall into two broad classes: lexical (analogical) vs. abstract (grammatical). This paper provides evidence that generalizations acquired in the lab can be encoded at an abstract level, based on an ERP study of brain responses to violations of lab-learned phonotactics. Novel words that violated a learned phonotactic constraint elicited a larger Late Positive Component (LPC) than novel words that satisfied it. This constitutes evidence for the abstractness of the encoded generalization in that the LPC is also associated with syntactic violations and with violations of musical structure. The LPC has also been found in the study of naturalistically learned phonotactics, providing support for the ecological validity of lab learning of phonology.


Yu in CSSI seminar, Fri. 11/13 at 12:30

Kristine Yu of UMass Linguistics will present “Linguistic tone and the input to computational models of sentence comprehension” in the Computational Social Science Institute seminar, Friday, November 13, 2015, 12:30-2:00 p.m., in Integrated Learning Center room 231. Lunch will be provided, beginning at 12:15. An abstract follows.

Abstract: Consider a sentence like “I met the daughter of the colonel who was on the balcony.” When you hear this sentence, there are two possible interpretations: (1) the daughter was the one who was on the balcony, or (2) the colonel was the one who was on the balcony. These two interpretations correspond to two different syntactic structures: that is, the same string of words has two different interpretations because the parts of the sentence are related to one another differently in the two cases. It has long been known that aspects of how a sentence is spoken, e.g., where the speaker pauses and how the pitch of the speaker’s voice goes up and down, can offer clues to disambiguation in the comprehension of sentences like the example above. However, the standard approach in computational models of sentence comprehension is to take a string of words as input, with the information about how the sentence was uttered already thrown away. In this talk, I offer perspectives on (1) why clues from the way a sentence was spoken have not been incorporated into computational models of sentence comprehension and (2) why this information should be incorporated into these models, and how we might start tackling this project.
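To make the structural ambiguity concrete, here is a minimal sketch (my illustration, not material from the talk) using NLTK’s chart parser: a toy grammar in which the relative clause “who was on the balcony” can attach either to “daughter” or to “colonel”, so the same word string receives two parse trees.

```python
# Toy grammar: the relative clause can attach to "daughter" (high) or
# "colonel" (low), so the parser finds two trees for one word string.
import nltk

grammar = nltk.CFG.fromstring("""
    S   -> NP VP
    NP  -> 'I' | Det N | Det N PP | NP RC
    VP  -> V NP
    PP  -> P NP
    RC  -> 'who' 'was' PP
    Det -> 'the'
    N   -> 'daughter' | 'colonel' | 'balcony'
    V   -> 'met'
    P   -> 'of' | 'on'
""")

sentence = "I met the daughter of the colonel who was on the balcony".split()
for tree in nltk.ChartParser(grammar).parse(sentence):
    tree.pretty_print()   # prints two distinct trees, one per attachment
```

A string-of-words input gives the parser no basis for choosing between the two trees; prosodic information (pauses, pitch) is exactly the kind of evidence that could.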

Duvenaud at MLFL Thurs. 11/12 1 p.m.

David Duvenaud (Harvard) will present “Automatically Constructing Models, and Automatically Explaining Them, too” at the Machine Learning and Friends Lunch Thursday, Nov. 12, at 1 p.m. in CS150 (arrive at 12:45 for pizza). An abstract and bio follow.

Abstract
How could an artificial intelligence do statistics? It would need an open-ended language of models, and a way to search through and compare those models. Even better would be a system that could explain the different types of structure found, even if that type of structure had never been seen before. This talk presents a prototype of such a system, which builds structured Gaussian process regression models by combining covariance kernels to build a custom model for each dataset. The resulting models can be broken down into relatively simple components, and surprisingly, it’s not hard to write code that automatically describes each component, even for novel combinations of kernels. The result is a procedure that takes in a dataset, and outputs a report with plots and English descriptions of the different types of structure found in that dataset. I’ll also talk about advances in black-box stochastic variational inference methods, which have the potential to open the door to even broader model classes.
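As a rough sketch of the core idea, combining simple covariance kernels into composite structures and comparing the resulting models, here is a minimal illustration using scikit-learn rather than the speaker’s system; the candidate kernels and toy data are my assumptions for the example.

```python
# Build candidate GP models from composite kernels and score each fit;
# the winning structure is what a describer would translate into English.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, DotProduct

# Toy data: a linear trend plus a periodic component.
rng = np.random.default_rng(0)
X = np.linspace(0, 10, 100).reshape(-1, 1)
y = 0.5 * X.ravel() + np.sin(2 * np.pi * X.ravel()) + 0.1 * rng.standard_normal(100)

# Each composite kernel encodes a different structural hypothesis.
candidates = {
    "smooth": RBF(),
    "periodic": ExpSineSquared(),
    "trend + periodic": DotProduct() + ExpSineSquared(),
    "trend * periodic": DotProduct() * ExpSineSquared(),
}

for name, kernel in candidates.items():
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    print(f"{name:18s} log marginal likelihood = "
          f"{gp.log_marginal_likelihood_value_:.1f}")
```

A full system would search over such combinations open-endedly and then render the components of the best model (“a linear trend plus a periodic component”) as an English report.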

Bio: David Duvenaud is a postdoctoral researcher in the Harvard School of Engineering and Applied Sciences. He obtained his doctorate in machine learning at the University of Cambridge. His research has focused on probabilistic models of functions, with applications to forecasting, numeric computations, and deep learning. David previously worked on machine vision at Google Research, and co-founded Invenia, an energy forecasting and trading firm.

Shakhnarovich in CS Thurs. 10/29

Greg Shakhnarovich of the University of Chicago will present “Zoom-out Features For Image Understanding” at the Machine Learning and Friends Lunch Thursday, Oct. 29, at 1 p.m. (arrive at 12:45 for pizza). An abstract follows.

I will describe a novel feed-forward architecture, which maps small image elements (pixels or superpixels) to rich feature representations extracted from a sequence of nested regions of increasing extent. These regions are obtained by “zooming out” from the superpixel all the way to scene-level resolution. Applied to semantic segmentation, our approach exploits statistical structure in the image and in the label space without setting up explicit structured prediction mechanisms, and thus avoids complex and expensive inference. Instead, superpixels are classified by a feed-forward multilayer network with skip-layer connections spanning the zoom-out levels. Using an off-the-shelf network pre-trained on the ImageNet classification task, this zoom-out architecture achieves near state-of-the-art accuracy on the PASCAL VOC 2012 test set.
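As a rough illustration of the zoom-out idea (my sketch, not the paper’s implementation): each superpixel is described by features pooled over nested regions of increasing extent, concatenated from the local level out to the whole scene. The window radii and average pooling below are stand-ins for the paper’s actual choices.

```python
# Describe a superpixel by concatenating features pooled over nested
# regions, "zooming out" from local context to scene-level resolution.
import numpy as np

def zoom_out_features(feature_map, cy, cx, radii=(1, 4, 16, None)):
    """feature_map: (H, W, C) array of local features (e.g., CNN activations).
    Returns the concatenation of one pooled vector per zoom level."""
    H, W, C = feature_map.shape
    levels = []
    for r in radii:
        if r is None:                        # scene level: the whole image
            region = feature_map.reshape(-1, C)
        else:                                # nested window around (cy, cx)
            y0, y1 = max(0, cy - r), min(H, cy + r + 1)
            x0, x1 = max(0, cx - r), min(W, cx + r + 1)
            region = feature_map[y0:y1, x0:x1].reshape(-1, C)
        levels.append(region.mean(axis=0))   # average-pool the region
    return np.concatenate(levels)            # input to the superpixel classifier

# Example: a 64x64 feature map with 8 channels, superpixel centered at (10, 20).
fmap = np.random.rand(64, 64, 8)
print(zoom_out_features(fmap, 10, 20).shape)   # (32,): 4 levels x 8 channels
```

The multilayer network with skip-layer connections then classifies each superpixel from such concatenated multi-level vectors, so no explicit structured inference is needed.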

Agreement Workshop starts Thursday 10/29 at 9 am

The Agreement Workshop begins this Thursday at 9 am in ILC with a talk from Brian Dillon of UMass Linguistics. The full program, which lasts all day Thursday and Friday and features a wide range of experimental and theoretical research on syntactic agreement by local and visiting speakers, is here. Feel free to come for any of it, even if you haven’t registered. (Rumor has it that the participants will be joining local linguists to eat local apples Friday night; please ask Brian or Joe Pater for details if this is news to you and you’d like to join us too).

Halpert in Linguistics Fri. 10/23 at 3:30

Claire Halpert of the University of Minnesota will give a talk on Friday, Oct. 23, in ILC N400 at 3:30. The title and abstract follow.

Escape clause

In this talk, I investigate the syntactic properties of clausal arguments, looking in particular at whether A-movement is permitted out of finite clauses and at whether these clauses themselves may undergo movement or establish agreement relationships. In English, argument clauses show some puzzling distributional properties compared to their nominal counterparts. In particular, they appear to satisfy selectional requirements of verbs, but can also combine directly with non-nominal-taking nouns and adjectives. Stowell (1981) and many others have treated these differences as arising from how syntactic case interacts with nominals and clauses. In a recent approach, Moulton (2015) argues that the distributional properties of propositional argument clauses are due to their semantic type: these clauses are type ⟨e,st⟩ and so must combine via predicate modification, unlike nominals. In contrast to English, I show that in the Bantu language Zulu, certain non-nominalized finite CPs exhibit selectional properties identical to those of nominals, therefore requiring a different treatment from those proposed in the previous literature. These clauses, like nominals, also appear to control phi-agreement and trigger intervention effects in predictable ways. At the same time, they differ from nominals (and nominalized clauses) in the language in certain respects. I will argue that these properties shed light on the role that phi-agreement plays in the transparency/opacity of finite clauses for A-movement, and on the nature of barrier effects in the syntax more generally.

Potter in Cognitive Brown Bag Weds. 10/21 at noon

Mary C. Potter, Professor of Psychology Emerita in the MIT Department of Brain and Cognitive Sciences, will present “Detecting picture meaning in extreme conditions” in the Cognitive Brown Bag Wednesday at noon in Tobin 521B. An abstract follows.

Abstract. Potter, Wyble, Hagmann, & McCourt (2014) reported that a new pictured scene in an RSVP sequence can be understood (matched to a name) with durations as brief as 13 ms/picture. Although d’ increased as duration increased from 13 ms to 27, 53, and 80 ms/picture and was higher when the name was given before than after the sequence, it was above chance at all durations, whether the name came before or after the sequence. I will describe this and subsequent research that replicated and extended those results, including recent studies using spoken vs. written names, with very tight timing between the onset of the name and the onset of the RSVP pictures. Whether these results indicate feedforward processing (as we suggest) or are accounted for in some other way, they represent a challenge to models of visual attention and perception.
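For readers unfamiliar with the measure: d’ is the z-transformed hit rate minus the z-transformed false-alarm rate, with d’ = 0 corresponding to chance performance. A minimal sketch (the rates below are hypothetical, not the authors’ data):

```python
# d' (sensitivity) in signal detection theory:
# d' = z(hit rate) - z(false-alarm rate), where z is the inverse
# of the standard normal CDF; d' = 0 means chance performance.
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates for a hard (e.g., 13 ms/picture) and an easier condition.
print(round(d_prime(0.60, 0.40), 2))   # ~0.51: above chance, barely
print(round(d_prime(0.85, 0.20), 2))   # ~1.88: much more sensitive
```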