Monthly Archives: September 2016

Syrett in Linguistics Colloquium Friday Sept. 23 at 3:30

Kristen Syrett (Rutgers) will give the Linguistics colloquium on Friday, September 23 at 3:30 in ILC N400. The title and abstract of her talk follow.

Title:

Challenges in children’s acquisition of comparatives

Abstract:

Comparative constructions can range from those that are quite simple and easy to interpret (Hillary is smarter than Donald.) to those that are complex and wreak havoc on the sentence processor, often leaving the interpreter confused as to whether the sentence is acceptable, even if it is somehow interpretable (?More reporters wanted her_i to talk about her_i emails and health status than about Hillary_i/j’s considerable experience qualifying her for the Oval Office.). While comparatives such as the second example are admittedly uncommon in adult conversations, let alone child-directed speech, we somehow develop the capacity to make sense of these constructions in the course of language acquisition.
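As background for readers outside semantics, the textbook degree analysis (standard background, not necessarily the analysis assumed in the talk) renders the simple comparative as a comparison of maximal degrees:

```latex
% "Hillary is smarter than Donald" is true iff the maximal degree to which
% Hillary is smart exceeds the maximal degree to which Donald is smart:
\[
\max\{d : \mathrm{smart}(\mathrm{Hillary}) \geq d\} \;>\; \max\{d : \mathrm{smart}(\mathrm{Donald}) \geq d\}
\]
```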

Let us assume that the ability to interpret comparative constructions rests (for the most part) with a bundle of grammatical mechanisms (for example, quantifier raising, interpretation of ellipsis), and structural relations like c-command. Given independent evidence from a number of acquisition studies that children can properly interpret constructions in which these aspects are implicated, we might predict that children would demonstrate early success with comparative constructions. However, children continue to produce non-adult-like comparatives well past age 5-6. And while comprehension of basic comparatives is manifested 2-3 years earlier (as one might expect), children’s interpretation of more complex comparatives remains notably non-adult-like. To what source(s) can we attribute this difference between children and adults?

In this talk, I will present evidence from a set of studies probing children’s interpretation of comparatives, focusing in particular on the acquisition of differential comparatives. The results highlight specific challenges children face as comparative constructions become increasingly complex, pinpointing the features of the semantic representation that give rise to non-adult-like responses, even when those features are licensed by an adult-like grammar.

Nguyen in Machine Learning Lunch noon Thursday Sept. 22

who: Thien Huu Nguyen

when: 12:00pm Thursday, 9/22
where: cs150
pizza: Antonio’s
generous sponsor: Oracle

Abstract:

Neural Networks (NN) have recently been applied successfully to many Information Extraction (IE) tasks. However, most current models are designed for a single task of the IE pipeline, focusing only on the local information specific to that task. Such local models cannot capture the global information, or the long-range inter-dependencies between multiple prediction stages, that many IE problems require.

In this talk, we present our recent research on memory-augmented networks that address these limitations of local NN models. In particular, we introduce memory tensors that accumulate prediction information over the course of the local stages, and provide this global memory as additional evidence for the local IE predictions. Our experiments on event extraction and entity linking demonstrate that the memory-augmented networks outperform traditional local NN models and feature-based approaches on these problems.
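The talk will define the architecture precisely; purely as an illustration of the core idea, here is a minimal Python/NumPy sketch (the names, shapes, and combination step are assumptions, not the actual model) of carrying a memory of earlier label predictions forward as extra evidence for later local decisions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def extract_with_memory(local_scores, W_mem):
    """Tag each token in turn, letting a memory vector that accumulates
    earlier label predictions bias later local decisions.

    local_scores: (n_tokens, n_labels) scores from a purely local model
    W_mem:        (n_labels, n_labels) weights mapping memory to label evidence
    """
    n_tokens, n_labels = local_scores.shape
    memory = np.zeros(n_labels)            # global prediction memory
    predictions = []
    for t in range(n_tokens):
        # combine local evidence with evidence derived from the memory
        probs = softmax(local_scores[t] + memory @ W_mem)
        predictions.append(int(probs.argmax()))
        memory += probs                    # accumulate the prediction history
    return predictions
```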

Bio:

Thien Huu Nguyen is a fifth-year Ph.D. student in the Computer Science Department at New York University (NYU). His Ph.D. research centers around the development of Deep Learning models for Information Extraction in Natural Language Processing, including Relation Extraction, Event Extraction, Mention Detection, and Slot Filling. His research advisors at NYU are Professor Ralph Grishman and Professor Kyunghyun Cho.

Thien Huu Nguyen was a research intern at the IBM T.J. Watson Research Center (Yorktown Heights, New York) in the summers of 2015 and 2016, where he developed new neural network models for Mention Detection and Entity Linking. Thien is a recipient of the IBM Ph.D. fellowship (2016-2017) and he is expected to graduate in May 2017.

Gershman in Cognitive Bag Lunch noon Wednesday Sept 21

Wednesday, September 21, 2016 

12:00pm to 1:15pm

Location:  Tobin 521B

Sam Gershman, PhD, Assistant Professor at Harvard University, will present a talk titled “Imaginative reinforcement learning.”

“Reinforcement learning is typically conceived of in terms of how reward predictions and choice behavior adapt based on an agent’s experience. However, experience is too limited to provide the brain with the knowledge necessary for adaptive behavior in the real world. To go beyond experience, the brain must harness its imaginative powers. Applications of imagination to reinforcement learning include prospective simulation for planning, and learning cached values from imagined episodes. I will discuss how these ideas can be formalized along with supporting experimental evidence.”
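One long-standing concrete instance of “learning cached values from imagined episodes” is Dyna-style planning, where a learned one-step model replays simulated transitions through the same value-update rule as real experience. A minimal tabular sketch follows (illustrative background only, not Gershman’s model):

```python
import random
from collections import defaultdict

def make_q():
    """Tabular action-value store: Q[state][action] -> value."""
    return defaultdict(lambda: defaultdict(float))

def dyna_q_update(Q, model, s, a, r, s2,
                  alpha=0.1, gamma=0.95, n_imagined=10):
    """One real Q-learning update, then several 'imagined' updates
    replayed from a learned one-step model of the environment."""
    # learn from the real transition (s, a) -> (r, s2)
    Q[s][a] += alpha * (r + gamma * max(Q[s2].values(), default=0.0) - Q[s][a])
    model[(s, a)] = (r, s2)                 # cache what this action did
    # learn cached values from imagined episodes
    for _ in range(n_imagined):
        (si, ai), (ri, si2) = random.choice(list(model.items()))
        Q[si][ai] += alpha * (ri + gamma * max(Q[si2].values(), default=0.0)
                              - Q[si][ai])
```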

For more information on Dr. Gershman, visit http://gershmanlab.webfactional.com/people/sam.html

Cognitive bag lunch schedule

The Cognitive bag lunch schedule for the semester is now available. Talks take place Wednesdays at noon in Tobin 521B.

Sept. 21 Sam Gershman
Sept. 28 Brian Dillon
Oct. 5 Nate Kornell
Oct. 12 Roger Levy
Oct. 19 Christoph Weidemann
Oct. 26 Amy Criss
Nov. 2 Kevin Potter
Nov. 9 Kajander/Sadil Ethics
Nov. 16 Rob Nosofsky
Nov. 30 Amanda Rysling
Dec. 7 Andrea Cataldo
Dec. 14 Michele Fornaciai

Gouskova in Linguistics colloquium Friday Sept. 16th at 3:30

Maria Gouskova of NYU will present the Linguistics colloquium at 3:30 on Friday, Sept. 16, in ILC N400.

Title:
Learning Nonlocal Phonology

Abstract:
Sounds usually interact locally, but some phonological rules involve segments that are not adjacent. In English, the “l” of the adjectival suffix becomes “r” when the stem contains another “l”: compare “music-al” and “flor-al” with “tubul-ar” and “circul-ar” (*“tubul-al”, *“lun-al”). This kind of consonant dissimilation is traditionally analyzed using a special level of representation where only “l” and “r” are present: an autosegmental tier. On this tier, the prohibition against adjacent identical liquids is local. Tiers capture the typological generalization that nonlocal interactions such as dissimilation, vowel harmony, and consonant harmony generally involve segments that belong to a natural class.
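Stated procedurally, a tier is just the word with everything but the participating segments deleted; the dissimilation constraint is then checked on adjacent positions of that projection. A minimal sketch in Python (illustrative only, not the learner proposed in the talk):

```python
LIQUIDS = {"l", "r"}

def project_tier(segments, tier_set=LIQUIDS):
    """Keep only the segments on the tier; everything else is invisible."""
    return [s for s in segments if s in tier_set]

def violates_liquid_dissimilation(word):
    """True if two identical liquids are adjacent on the liquid tier."""
    tier = project_tier(word)
    return any(a == b for a, b in zip(tier, tier[1:]))

# *"tubulal" projects the tier l-l and is ruled out;
# "tubular" projects l-r and is fine.
assert violates_liquid_dissimilation("tubulal")
assert not violates_liquid_dissimilation("tubular")
```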

Work on the inductive learning of nonlocal phonology supplies a learnability argument for tiers: without tiers, the space of possible interactions is unsearchably large, so learners are likely to miss nonlocal interactions unless they have some bias towards tiers (Hayes and Wilson 2008, Goldsmith and Riggle 2012). But we still do not know how learners actually discover tiers. Some proposals approach this problem as a directed graph search, where the graph is a pre-set feature-geometric hierarchy of features (Futrell et al. 2015). Others pursue brute-force searches over the segments that might participate in a restriction, which run the danger of identifying accidental patterns that people do not notice (Albright and Hayes 2006 et seq.).

The alternative pursued in this talk is that learners induce tiers based on observable properties of the language. In some languages, learners can be moved to posit nonlocal representations to explain affixal alternations that have no local explanation. In other languages, the learning of local phonotactics reveals distributional irregularities, which are then interrogated further in search of the right nonlocal representation. This procedure does not assume a universal feature geometry, and it is less likely to be misled by irrelevant exceptionless generalizations.

Reddy in Machine Learning talk Friday at 10:30

who: Siva Reddy, University of Edinburgh
when: 10:30am Friday, 9/16
where: cs150
food: coffee, pastries, fruit, etc.

Abstract:

I will present three semantic parsing approaches for querying Freebase in natural language: 1) training only on a raw web corpus, 2) training on question-answer (QA) pairs, and 3) training on both QA pairs and a web corpus. For 1) and 2), we conceptualise semantic parsing as a graph matching problem, where natural language graphs built from CCG/dependency logical forms are transduced to Freebase graphs. For 3), I will present a natural-logic approach to semantic parsing. Our methods achieve state-of-the-art results on the WebQuestions and Free917 QA datasets.
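The papers spell out the actual transduction from CCG/dependency logical forms; purely as a toy illustration of the graph-matching view (the entity and variable names are invented, though film.film.directed_by is a real Freebase relation), the grounding step looks roughly like this:

```python
# A tiny Freebase fragment as grounded (subject, relation, object) edges.
KB = {("Titanic", "film.film.directed_by", "JamesCameron")}

# Ungrounded edge read off the parse of "Who directed Titanic?":
# directed(Titanic, x), with x the answer variable.
ungrounded_edge = ("Titanic", "directed", "x")

# Learned correspondence between natural-language and Freebase edges.
lexicon = {"directed": "film.film.directed_by"}

def ground_and_match(edge, lexicon, kb):
    """Replace the ungrounded predicate with its Freebase counterpart,
    then solve for the answer variable 'x' against the KB."""
    subj, pred, obj = edge
    grounded_pred = lexicon[pred]
    return [o for (s, p, o) in kb
            if s == subj and p == grounded_pred and obj == "x"]

print(ground_and_match(ungrounded_edge, lexicon, KB))   # ['JamesCameron']
```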

Bio:

Siva Reddy is a Google PhD Fellow at the University of Edinburgh under the supervision of Mirella Lapata and Mark Steedman. His primary research interests are in semantic parsing, information extraction, distributional semantics, and cross-language transfer. His work has been published in TACL, ACL, NAACL, and EMNLP. He won the best paper award at IJCNLP 2011, first place in the SemEval 2011 Compositionality Detection task, and second place in the SemEval 2010 WSD task. He interned with the Google parsing team during his PhD, and worked full-time on Adam Kilgarriff’s Sketch Engine before starting his PhD. Apart from language, he loves badminton (he represents Edinburgh University in league matches) and is learning to play the Irish whistle. He is currently on the job market, looking for a postdoc.

Bordes in Machine Learning and Friends Lunch, noon Thursday Sept. 15

who: Antoine Bordes, Facebook AI Research
when: 12:00pm, Thursday, 9/15
where: cs150
pizza: Antonio’s
generous sponsor: Oracle

Abstract:
This talk will first briefly review Memory Networks, an attention-based neural network architecture introduced in Weston et al. (2015), which has been shown to reach promising performance for question answering on synthetic data. Then, we will explore and discuss the successes and remaining challenges that arise when applying Memory Networks to human-generated natural language, in the context of large-scale question answering, machine reading, and dialog management.
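For background, the core operation in this family of models is soft attention over stored memory vectors. A minimal NumPy sketch of a single memory “hop” follows (it mirrors the end-to-end variant of Sukhbaatar et al., 2015; dimensions and names are illustrative rather than exact):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def memory_hop(query, memories_in, memories_out):
    """One hop: attend over memories with the query, return an updated query.

    query:        (d,)   embedded question
    memories_in:  (n, d) memory embeddings used for addressing
    memories_out: (n, d) memory embeddings used for the read-out
    """
    attention = softmax(memories_in @ query)   # match query to each memory
    read = attention @ memories_out            # attention-weighted read-out
    return query + read                        # input to the next hop
```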

Bio:
Antoine is a research scientist at Facebook Artificial Intelligence Research. Prior to joining Facebook in 2014, he was a CNRS staff researcher in the Heudiasyc laboratory of the University of Technology of Compiegne in France. In 2010, he was a postdoctoral fellow in Yoshua Bengio’s lab at the University of Montreal. He received his PhD in machine learning from Pierre & Marie Curie University in Paris in early 2010. He received two best-PhD awards, from the French Association for Artificial Intelligence and from the French Armament Agency, as well as a Scientific Excellence Scholarship awarded by CNRS in 2013.