Yearly Archives: 2018

Roy in MLFL Thurs. 11/8 at 11:45

From Rajarshi Das on the UMass NLP list: “Subhro Roy is visiting us this week. He has done cool work in solving algebra problems via semantic parsing. He is currently working on grounded language stuff and common sense. Please sign up to meet with him here.”

who: Subhro Roy (MIT)

when: 11/08 (Thursday) 11:45a – 1:15p

where: Computer Science Building Rm 150

food: Athena’s Pizza

“Towards Natural Human-Robot Communication”

Abstract: Robots are becoming more and more popular with the rise of self-driving cars, autonomous drones, and warehouse automation. However, they still require experts to set up the goals for the task, and are usually devoid of a high-level understanding of their environment. Language can address these issues. Non-expert users can seamlessly instruct robots using natural language commands. Linguistic resources can be used to extract knowledge about the world, which can be distilled into actionable intelligence. In this talk, I will describe some of our recent work in this direction. The first focuses on robust referring expression grounding, allowing users to describe commands involving objects in the environment. The second focuses on grounding high-level instructions using background knowledge from WikiHow, ConceptNet and WordNet. I will conclude by describing some of our ongoing work in acquiring commonsense knowledge for household robots.


Subhro is a Postdoctoral Associate at the Computer Science and AI Laboratory (CSAIL) at MIT working with Prof. Nicholas Roy. His research focuses on grounding natural language instructions and commonsense knowledge acquisition, aimed towards capable service robots that interact seamlessly with humans. His research contributes towards programs funded by the US Army Research Labs and the Toyota Research Institute.
Subhro obtained his Ph.D. at the University of Illinois, Urbana-Champaign, advised by Prof. Dan Roth. His doctoral research focused on models for automated numeric reasoning and word problem solving. His research led to the development of several top-performing word problem solvers and the MAWPS system for standardizing datasets and evaluation in the area. His work has been published in TACL, EMNLP, NAACL, AAAI, CoRL and ISER. Subhro obtained his B.Tech. degree at the Indian Institute of Technology (IIT) Kharagpur.

Special talk by Brian Scholl Mon. 11/5 at noon

Brian Scholl (Yale) will present “Let’s See What Happens: Dynamic Events as Foundational Units of Perception and Cognition” next Monday (Nov 5) at noon in the CHC Event Hall East. A flyer is attached and the abstract is below.
Abstract. What is the purpose of perception?  Perhaps the most common answer to this question is that perception is a way of figuring out *what’s out there*, so as to better support adaptive interaction with our local environment.  Accordingly, the vast majority of work on visual processing involves representations such as features, objects, and scenes.  But the world consists of more than such static entities: out there, things *happen*.  And so I will suggest here that the underlying units of perception are often dynamic visual events.  In particular, in a series of studies that were largely inspired by developmental work, I will explore how visual event representations provide a foundation for much of our mental lives — including attention and memory, causal understanding, intuitive physics, and even social cognition.  This presentation will involve some results and some statistics, but the key claims will also be illustrated with phenomenologically vivid demonstrations in which you’ll be able to directly experience the importance of event perception — via phenomena such as transformational apparent motion, rhythmic engagement, change blindness in dynamic scenes, and the perception of chasing.  Collectively, this work presents a new way to think about how perception is attuned to an inherently dynamic world.
This event is co-sponsored by PBS, the Developmental Science Initiative, and the Initiative in Cognitive Science.

Brian Scholl lecture flyer Fall 2018.pdf

Lacreuse in cognitive bag lunch Weds. 10/31 at noon

The next brown bag speaker, on 10/31 at 12:00 in Tobin 521B, is UMass’ own Agnès Lacreuse.

Sex, hormones and cognitive aging in primates

Emerging clinical data suggest that men experience greater age-related decline than women, but little is known about the factors that drive these sex differences. Nonhuman primate models of human aging can help us answer some of these questions. I will describe several studies in nonhuman primates focusing on the effects of biological sex and sex hormones on neurocognitive aging. These studies are essential for the design of optimal therapies to alleviate age-related cognitive decline in humans. I will also argue that aging research across primate species has the potential to provide new cues for understanding healthy and pathological aging in humans.


Pavlick in MLFL Weds. Oct. 24 at 11:45

who: Ellie Pavlick (Brown University)

when: 10/24 (Wednesday) 11:45a – 1:15p

where: Computer Science Building Rm 150

food: Athena’s Pizza

Why should we care about linguistics?

Abstract: In just the past few months, a flurry of adversarial studies have pushed back on the apparent progress of neural networks, with multiple analyses suggesting that deep models of text fail to capture even basic properties of language, such as negation, word order, and compositionality. Alongside this wave of negative results, our field has stated ambitions to move beyond task-specific models and toward “general purpose” word, sentence, and even document embeddings. This is a tall order for the field of NLP, and, I argue, marks a significant shift in the way we approach our research. I will discuss what we can learn from the field of linguistics about the challenges of codifying all of language in a “general purpose” way. Then, more importantly, I will discuss what we cannot learn from linguistics. I will argue that the state-of-the-art of NLP research is operating close to the limits of what we know about natural language semantics, both within our field and outside it. I will conclude with thoughts on why this opens opportunities for NLP to advance both technology and basic science as it relates to language, and the implications for the way we should conduct empirical research.

Bio: Ellie Pavlick is an Assistant Professor of Computer Science at Brown University and a Research Scientist at Google AI. Ellie received her PhD from the University of Pennsylvania under the supervision of Chris Callison-Burch. Her current research focus is on semantics, pragmatics, and building cognitively plausible computational models of natural language inference.

Ling in Cognitive Brown Bag Oct. 24 at noon

The Cognitive Brown Bag speaker this Wednesday will be Sam Ling of Boston University, on “How does normalization regulate visual competition?” (abstract below). As usual, the talk is 12:00-1:15, Tobin 521B.

Abstract. How does the visual system regulate competing sensory information? Recent theories propose that a computation known as divisive normalization plays a key role in governing neural competition. Normalization is considered a canonical neural computation, potentially driving responses throughout the neural and cognitive system. Interestingly, there is evidence to suggest that normalization’s pervasive role relies on an exquisite tuning to stimulus features, such as orientation, but this feature-selective nature of normalization is surprisingly understudied, particularly in humans. In this talk, I will describe a series of studies using functional neuroimaging and psychophysics to shed light on the tuning characteristics that allow normalization to control population responses within human visual cortex, and to understand how this form of normalization can support functions as diverse as attentional selection and working memory.


Elhadad in MLFL Thurs. Oct. 18 at 11:45

who: Noémie Elhadad, Columbia University
when: October 18, 11:45 A.M. – 1:00 P.M.
where: Computer Science Building, Room 150/151
food: Athena’s Pizza

Phenotyping Endometriosis Through Mixed Membership Models Of Self-Tracking Data

Abstract: Despite the impressive past and recent advances in medical sciences, there are still a host of chronic conditions which are not well understood and lack even a consensus description of their signs and symptoms. Without such consensus, research for precise treatments and ultimately a cure is at a halt. Phenotyping these conditions, that is, systematically characterizing the signs, symptoms and other aspects of these conditions, is thus particularly needed. Computational phenotyping can help identify cohorts of patients at scale and identify potential sub-groups, thus generating new hypotheses for these mysterious conditions. While traditional phenotyping algorithms rely on clinical documentation and expert knowledge, phenotyping for enigmatic conditions might benefit from patient expertise as well. In this talk I will focus on one such enigmatic condition, endometriosis, a chronic condition estimated to affect 10% of women of reproductive age. I will describe approaches needed to phenotype the condition: eliciting dimensions of disease, engaging patients in self-tracking their condition, and discovering phenotypes and sub-phenotypes of endometriosis based on patients’ accounts of the disease.

Bio: Noemie Elhadad is an Associate Professor in Biomedical Informatics, affiliated with Computer Science and the Data Science Institute at Columbia University. Her research is at the intersection of computation, technology, and medicine with a focus on machine learning for healthcare and natural language processing of clinical and health texts. Her work is funded by the National Science Foundation, the National Library of Medicine, the National Cancer Institute, and the National Institute for General Medical Sciences.


Zaki in Cognitive Brown Bag Weds. Oct. 17 at noon

The cognitive brown bag speaker this week will be Safa Zaki of Williams College, on “Sequence Effects in Category Learning”. The abstract is below. As usual, the talk will be on Wednesday, 12:00-1:15, Tobin 521B.


Abstract. Sequence effects have recently been reported in the category learning literature, in which the particular order of presentation of exemplars in a category affects the speed of learning. I will present several experiments that test the idea that some of these effects are caused by changes in attention allocation that result from comparisons between temporally juxtaposed exemplars. I will discuss eyetracking data and model fits that provide converging evidence of increased attention to the target dimension as a result of the juxtaposition of items in the list.


Discussion: Generative linguistics and neural networks at 60

From Joe Pater

The commentaries on my paper “Generative Linguistics and Neural Networks at 60: Foundation, Friction and Fusion” are all now posted on-line at the authors’ websites at the links below. The linked version of my paper and – I presume – of the commentaries are the non-copyedited but otherwise final versions that will appear in the March 2019 volume of Language in the Perspectives section.

Update March 2019: The final published versions can now be found at this link.

I decided not to write a reply to the commentaries, since they nicely illustrate a range of possible responses to the target article, and because most of what I would have written in a reply would have been to repeat or elaborate on points that are already in my paper. But there is of course lots more to talk about, so I thought I’d set up this blog post with open comments to allow further relatively well-archived discussion to continue.

Iris Berent and Gary Marcus. No integration without structured representations: reply to Pater.

Ewan Dunbar. Generative grammar, neural networks, and the implementational mapping problem.

Tal Linzen. What can linguistics and deep learning contribute to each other?

Lisa Pearl. Fusion is great, and interpretable fusion could be exciting for theory generation.

Chris Potts. A case for deep learning in semantics.

Jonathan Rawski and Jeff Heinz. No Free Lunch in Linguistics or Machine Learning.