Yearly Archives: 2019

Zhang in Computer Science Weds. Feb. 13 at 4 pm

Systems to Improve Online Discussion

Amy X. Zhang
Computer Science and Artificial Intelligence Laboratory (CSAIL)
MIT
 
Wednesday, February 13, 2019
4:00 – 5:00 p.m.

Computer Science 151

Abstract — Discussions online are integral to everyday life, affecting how we learn, work, socialize, and participate in public society. Yet the systems that we use to conduct online discourse, whether they be email, chat, or forums, have changed little since their inception many decades ago. As more people participate and more venues for discourse migrate online, new problems have arisen and old problems have intensified. People are still drowning in information, and must now also juggle dozens of disparate discussion silos. Finally, an unfortunately large proportion of this online interaction is unwanted or unpleasant, with clashing norms leading to people bickering or getting harassed into silence. My research in human-computer interaction reimagines these outdated designs, building novel online discussion systems that fix what’s broken. To solve these problems, I develop tools that empower users and communities to take direct control over their experiences and information. These include: 1) summarization tools to make sense of large discussions, 2) annotation tools to situate conversations in the context of what is being discussed, and 3) moderation tools to give users more fine-grained control over content delivery.
 
Bio — Amy X. Zhang is a fifth-year PhD student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), focusing on human-computer interaction and social computing. She is also a 2018–19 Fellow at the Berkman Klein Center at Harvard. She has interned at Microsoft Research and Google Research and was a software engineer at a news startup before her PhD. Her work has received awards at CHI and CSCW and has been featured by ABC News, BBC, CBC, The Verge, and New Scientist. She holds an M.Phil. in advanced computer science from the University of Cambridge, completed on a Gates Fellowship, and a B.S. in computer science from Rutgers, where she was captain of the Division I women’s tennis team. Her research is supported by a Google PhD Fellowship and an NSF Graduate Research Fellowship.
 
A reception for attendees will be held at 3:30 p.m. in CS 150

Serre in Cognitive Brown Bag Weds. Feb. 13 at noon

The next Cognitive brown bag speaker will be Thomas Serre of Brown University (http://serre-lab.clps.brown.edu/). The talk is on Wednesday 2/13, 12:00, Tobin 521B; title and abstract are below.

What are the computations underlying primate versus machine vision?

Primates excel at object recognition: For decades, the speed and accuracy of their visual system have remained unmatched by computer algorithms. But recent advances in Deep Convolutional Networks (DCNs) have led to vision systems that are starting to rival human decisions. A growing body of work also suggests that this recent surge in accuracy is accompanied by a concomitant improvement in our ability to account for neural data in higher areas of the primate visual cortex. Overall, DCNs have become de facto computational models of visual recognition.
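Since DCNs come up throughout the talk, here is a minimal sketch of the single operation such networks stack and repeat: a 2-D convolution. This is illustrative background only, written in pure Python, and is not code from the speaker’s work:

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (no padding, stride 1).

    `image` and `kernel` are lists of lists of numbers. Each output
    cell is the sum of elementwise products of the kernel with the
    image patch beneath it -- the core operation a DCN layer repeats
    across many learned kernels.
    """
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A simple vertical-edge detector applied to a two-region image:
# the response is nonzero only where the left/right regions meet.
img = [[0, 0, 1, 1]] * 4
edge = conv2d(img, [[1, -1], [1, -1]])
```

A real DCN applies many such learned kernels in parallel, interleaved with nonlinearities and pooling; the visual reasoning tasks discussed in the talk probe what this largely feedforward recipe can and cannot compute.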

In this talk, I will review recent work by our group that brings into relief the limitations of modern DCNs as computational models of primate vision. I will show that DCNs are limited in their ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments, suggesting the need for additional neural computations beyond those implemented in current architectures. I will further demonstrate how neuroscience principles may help guide the future design of more robust computer vision architectures.

Perkins in Linguistics Fri. Feb. 15 at 3:30

Laurel Perkins of the University of Maryland (http://ling.umd.edu/~perkinsl/) will present “How to Grow a Grammar: Syntactic Development in 1-Year-Olds” on Friday Feb. 15th at 3:30 PM in N400. All are welcome – an abstract follows.

ABSTRACT: What we can learn depends on what we already know; a child who can’t count cannot learn arithmetic, and a child who can’t segment words cannot identify properties of verbs in her language. Language acquisition, like learning in general, is incremental. How do children draw the right generalizations about their language using incomplete and noisy representations of their linguistic input?

In this talk, I’ll examine some of the first steps of syntax acquisition in 1-year-old infants, using behavioral methods to probe their linguistic representations, and computational methods to ask how they learn from those representations. Taking argument structure as my case study, I will show: (1) that infants represent core clause arguments like “subject” and “object” when learning verbs, (2) that infants can cope with “non-basic” clause types, where those arguments have been displaced, by ignoring some of their input, and (3) that it is possible for infants to learn what kind of data to ignore, even before they can parse it. I will argue that the approach I take for studying this particular learning problem will generalize widely, allowing us to build new models for understanding the role of development in grammar learning.

Suhr in Machine Learning and Friends Thurs. Feb. 7 at 11:45

who: Alane Suhr
when: Feb 7 11:45am
where: Computer Science Building Rm 150
food: Athena’s Pizza

“Modeling and Learning Agents that Understand Language in Context”

Abstract: The meaning of a natural language utterance is influenced by the context in which it occurs, including interaction history and situated context. I will discuss two recent projects in context-dependent natural language understanding for building natural language interfaces to databases and following sequences of instructions. In the first part, I will introduce a model for mapping from natural language to executable SQL queries in an interaction. To resolve the meaning of later utterances, the system must consider the interaction history, including previous user utterances and previously generated queries. We show how using both implicit and explicit mechanisms for making use of interaction history allows the system to effectively generate context-dependent representations. In the second part, I will describe an approach to map sequences of natural language instructions to system actions that modify an environment, focusing on learning without direct supervision on action sequences. We introduce an exploration-based learning approach that effectively learns to compose system actions to carry out user instructions in the context of the environment and interaction.
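As a toy illustration of why interaction history matters in the first project, consider a follow-up question whose meaning depends on the previously generated query. The rules, table, and column names below are entirely hypothetical; the actual system described in the talk is a learned neural model, not rule-based:

```python
def resolve_followup(utterance, history):
    """Toy context-dependent interpretation.

    A follow-up like "which of those are in 2019?" cannot be parsed
    in isolation: "those" refers to the result of the previous query.
    Here we reuse the last generated SQL and wrap it in a new filter,
    mimicking (very crudely) the use of interaction history.
    """
    if history and utterance.lower().startswith("which of those"):
        prev_sql = history[-1]["sql"]
        value = utterance.rstrip("?").split(" in ")[-1]
        return f"SELECT * FROM ({prev_sql}) AS prev WHERE year = '{value}'"
    # First utterance in the interaction: a stand-in, pre-canned parse.
    return "SELECT * FROM talks WHERE topic = 'NLP'"

history = [{"utterance": "show me NLP talks",
            "sql": "SELECT * FROM talks WHERE topic = 'NLP'"}]
sql = resolve_followup("which of those are in 2019?", history)
```

The point of the sketch is only the information flow: without `history`, the pronoun-like reference in the second utterance is unresolvable.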

 

Bio: Alane Suhr is a PhD student in the Computer Science department at Cornell University, focusing on building agents that understand natural language grounded in complex interactions. She is the recipient of an AI2 Key Scientific Challenges Award and a Microsoft Research Women’s Fellowship, and is a National Science Foundation Graduate Research Fellow. She has received paper awards at ACL 2017 and NAACL 2018. Alane received a Bachelor’s degree in Computer Science and Engineering from Ohio State University in 2016.

Momma in Linguistics Fri. Feb. 8 at 3:30

Shota Momma of UC San Diego (https://shotam.github.io) will present “Unifying parsing and generation” at 3:30, Friday February 8th in ILC N400. All are welcome!
Abstract: We use our grammatical knowledge in at least two ways. On one hand, we use it to say what we want to convey to others. On the other hand, we use it to understand what others say. In either case, we need to assemble sentence structures in a systematic fashion, in accordance with the grammar of our language. In this talk, I will advance the view that the same syntactic structure-building mechanism is shared between comprehension and production, focusing specifically on sentences involving long-distance dependencies. I will argue that both comprehenders and speakers anticipatorily build (i.e., predict and plan) the gap structure soon after they represent the filler, and before representing the words and structures that intervene between the filler and the gap. I will discuss the basic properties of the algorithm for establishing long-distance dependencies that I hypothesize to be shared between comprehension and production, and suggest that it resembles the derivational steps for establishing long-distance dependencies in an independently motivated grammatical formalism known as Tree Adjoining Grammar.

Hopper in Cognitive bag lunch Weds. Feb. 6 at noon

Will Hopper (https://people.umass.edu/whopper/) will present “Comparing discrete and continuous evidence models of recognition memory response times” on 2/6 at 12:00 in Tobin 521B (abstract below). All are welcome.

Memory theorists have long debated whether recognition decisions are mediated by the strength of a continuous memory signal or by entry into discrete evidence states. Historically, only models that used a continuous memory strength signal were able to account for both the distribution of response times and the choice probabilities of recognition decisions. Recently, discrete-state models have been extended to account for response time distributions, assuming the observed response times arise as a mixture of latent response time distributions associated with each discrete evidence state (Heck & Erdfelder, 2016; Starns, 2018). Here, we compare models from each class (the discrete-race model and the Ratcliff diffusion model), testing their ability to account for both speeded and unspeeded recognition decisions for items tested multiple times within a session. We conclude that the Ratcliff diffusion model provides a better account of the data, as the discrete-race model overestimates memory strength on unspeeded tests in order to describe the response times on speeded tests.
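For readers unfamiliar with the Ratcliff diffusion model, the basic idea can be shown with a small simulation: noisy evidence accumulates toward one of two response boundaries, and the first boundary crossed determines both the choice and the response time. Parameter names and values below are illustrative defaults, not taken from the study:

```python
import random

def simulate_diffusion(drift=0.1, boundary=1.0, start=0.5, noise=0.3,
                       dt=0.01, max_time=5.0):
    """Simulate one trial of a two-boundary drift-diffusion process.

    Evidence starts at `start` and performs a random walk with mean
    step drift*dt and Gaussian noise, until it crosses the upper
    boundary (here labeled an "old" response) or the lower boundary
    at 0 ("new"). Returns (choice, response_time).
    """
    evidence, t = start, 0.0
    while 0.0 < evidence < boundary and t < max_time:
        evidence += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        t += dt
    choice = "old" if evidence >= boundary else "new"
    return choice, t

random.seed(1)
trials = [simulate_diffusion() for _ in range(2000)]
old_rate = sum(c == "old" for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
```

With a positive drift rate the upper boundary is favored, so the model jointly predicts choice probabilities and full response time distributions from the same parameters, which is what lets it be fit to both speeded and unspeeded tests.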

Breen in Cognitive Brown Bag Weds. Jan. 23 at noon

The first cognitive brown bag of the semester will be this Wednesday (1/23).  Our speaker will be Mara Breen of Mt. Holyoke College (https://www.mtholyoke.edu/~mbreen/); title and abstract are below.  As usual, talks are in Tobin 521B, 12:00-1:15.  All are welcome.

The remaining schedule for the semester is also provided below.

1/23  Mara Breen (Mt Holyoke)

Hierarchical linguistic metric structure in speaking, listening, and reading

In this talk, I will describe results from three experiments exploring how hierarchical timing regularities in language are realized by speakers, listeners, and readers. First, using a corpus of productions of Dr. Seuss’s The Cat in the Hat, a metrically and phonologically regular children’s book, we show that speakers’ word durations and intensities are accurately predicted by models of linguistic and musical meter, respectively, demonstrating that listeners to these texts receive consistent acoustic cues to hierarchical metric structure. In a second experiment, we recorded event-related potentials (ERPs) as participants listened to an isochronous, non-intensity-varying text-to-speech rendition of The Cat in the Hat. Pilot ERP results reveal electrophysiological indices of metric processing, demonstrating top-down realization of metric structure even in the absence of explicit prosodic cues. In a third experiment, we recorded ERPs while participants silently read metrically regular rhyming couplets in which the final word sometimes mismatched the metric or prosodic context. These mismatches elicited ERP patterns similar to neurocognitive responses observed in listening experiments. In sum, these results demonstrate similarities in perceived and simulated hierarchical timing processes in listening and reading, and help explain the processes by which listeners use predictable metric structure to facilitate speech segmentation and comprehension.

1/30 Andrea Cataldo

2/6  Will Hopper

2/13  Thomas Serre (Brown)

2/20 Ben Zobel

2/27  TBA

3/6  Jon Burnsky

3/13 SPRING BREAK

3/20  Mohit Iyyer (UMass CS)

3/27 Patrick Sadil

4/3  Junha Chang

4/10 Sandarsh Pandey

4/17 MONDAY SCHEDULE

4/24 Merika Wilson

5/1  First year projects