Monthly Archives: November 2017

PhD position in speech processing at UMass

We encourage potential PhD students who are interested in speech processing to apply to the Cognition and Cognitive Neuroscience graduate program to work with Lisa Sanders (Psychological and Brain Sciences) and Joe Pater (Linguistics) at the University of Massachusetts Amherst. Admitted students will have the opportunity to contribute to ongoing electrophysiological experiments and computational modeling of speech processing and to develop independent research on speech and speech sound representations. Funding is available. General information about the program can be found at umass.edu/pbs/research/cognition-and-cognitive-neuroscience. We encourage applicants to contact Lisa Sanders (lsanders@psych.umass.edu) or Joe Pater (pater@linguist.umass.edu) directly. Apply to the Graduate School of the University of Massachusetts Amherst and specify “Cognition” as your area of concentration. The deadline is Dec. 1, but late applications will be accepted.

Ranganath in MLFL Thurs. 11/30 at 11:45

Rajesh Ranganath (NYU) will present “Black Box Variational Inference: Scalable, Generic Bayesian Computation and its Applications” in the Machine Learning and Friends Lunch Thursday Nov. 30 at 11:45 am in CS 150. Abstract and bio follow.

Abstract:

Probabilistic generative models posit hidden structure to describe data; they are robust to noise, uncover unseen patterns, and make predictions about the future. They have addressed problems in neuroscience, astrophysics, genetics, and medicine. The main computational challenge is computing the hidden structure given the data: posterior inference. For most models of interest, computing the posterior distribution requires approximations like variational inference. Classically, variational inference was feasible to deploy in only a small fraction of models. We develop black box variational inference, a variational inference algorithm that is easy to deploy on a broad class of models and has already found use in neuroscience and healthcare. The ideas behind black box variational inference also facilitate new kinds of variational methods, such as hierarchical variational models, which improve the approximation quality of variational inference by building higher-fidelity approximations from coarser ones. Black box variational inference opens the door to new models and better posterior approximations. Lastly, I will discuss some of the challenges that variational methods face moving forward.
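To make the “black box” idea concrete: the algorithm needs only evaluations of the model’s log-density, and estimates ELBO gradients with the score-function (REINFORCE) estimator. Below is a minimal sketch on a toy one-dimensional model; the target distribution, step size, and sample count are illustrative choices for this post, not from the talk.

```python
import math
import random

random.seed(0)

# Toy unnormalized log posterior: a Gaussian with mean 3 and unit variance.
# The "black box" only ever evaluates this function pointwise.
def log_p(z):
    return -0.5 * (z - 3.0) ** 2

# Variational family: q(z) = Normal(mu, sigma), with sigma = exp(log_sigma).
mu, log_sigma = 0.0, 0.0
lr, n_samples = 0.05, 100

for step in range(2000):
    sigma = math.exp(log_sigma)
    g_mu = g_ls = 0.0
    for _ in range(n_samples):
        z = random.gauss(mu, sigma)
        log_q = -0.5 * ((z - mu) / sigma) ** 2 - log_sigma - 0.5 * math.log(2 * math.pi)
        f = log_p(z) - log_q  # instantaneous ELBO term
        # Score-function (REINFORCE) gradient estimates of the ELBO.
        g_mu += ((z - mu) / sigma ** 2) * f
        g_ls += (((z - mu) / sigma) ** 2 - 1.0) * f
    mu += lr * g_mu / n_samples
    log_sigma += lr * g_ls / n_samples

print(round(mu, 2), round(math.exp(log_sigma), 2))  # should land near 3 and 1
```

Note that nothing above differentiates through the model itself, which is what makes the recipe deployable on a broad class of models.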

Bio:

Rajesh Ranganath is a postdoc at Columbia University’s Department of Statistics and a research affiliate at MIT’s Institute for Medical Engineering and Science. He will be an assistant professor at the Courant Institute of Mathematical Sciences at NYU starting January 2018. His research interests include approximate inference, model checking, Bayesian nonparametrics, and machine learning for healthcare. Rajesh recently completed his PhD at Princeton with David Blei. Before starting his PhD, Rajesh worked as a software engineer for AMA Capital Management. He obtained his BS and MS from Stanford University with Andrew Ng and Dan Jurafsky. Rajesh has won several awards and fellowships including the NDSEG graduate fellowship and the Porter Ogden Jacobus Fellowship, given to the top four doctoral students at Princeton University.

 

Screening of AlphaGo 11/30 in S131 Integrated Learning Center

Screening of AlphaGo
November 30
S131 Integrated Learning Center

Showing at 5pm.
Note this event is not in the Computer Science Building and seating is limited.

With more board configurations than there are atoms in the universe, the ancient Chinese game of Go has long been considered a grand challenge for artificial intelligence. On March 9, 2016, the worlds of Go and artificial intelligence collided in South Korea for an extraordinary best-of-five-game competition, coined The DeepMind Challenge Match. Hundreds of millions of people around the world watched as a legendary Go master took on an unproven AI challenger for the first time in history.
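The “more configurations than atoms” claim is easy to sanity-check with a crude upper bound that treats each of the 361 board points as black, white, or empty (the true count of legal positions is smaller, but still astronomically large):

```python
# Upper bound on Go board configurations: 19 x 19 = 361 points,
# each black, white, or empty.
configurations = 3 ** 361
atoms_in_universe = 10 ** 80  # common order-of-magnitude estimate

print(len(str(configurations)))            # 173 digits, i.e. about 10**172
print(configurations > atoms_in_universe)  # True
```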

Directed by Greg Kohs with an original score by Academy Award nominee Hauschka, AlphaGo chronicles a journey from the halls of Oxford, through the backstreets of Bordeaux, past the coding terminals of Google DeepMind in London, and ultimately, to the seven-day tournament in Seoul. As the drama unfolds, more questions emerge: What can artificial intelligence reveal about a 3000-year-old game? What can it teach us about humanity?

https://www.alphagomovie.com

 

Gilbers on AAE and hip-hop Friday, 12/1 at 10 in ILC N451

Steven Gilbers of the University of Groningen will be giving a special talk, “Regional variation in African American English and hip-hop: Why 2Pac’s accent changed over time and why Snoop Dogg and Jay Z have different rap flows.” It will be held Friday, Dec. 1 at 10 am in ILC N451. All are welcome! An abstract and bio are below.

Abstract. Relatively little is yet known about how African American English (AAE) regiolects differ from each other. However, we do know regional variation in AAE is salient to many of its speakers, especially those involved with hip-hop culture, in which great importance is assigned to regional identity, and regional accents are a key means of expressing regional identity and affiliation (Morgan, 2001). In hip-hop music, regional variation can also be observed, with different regions’ rap performances being characterized by distinct “flows” (i.e. rhythmic and melodic delivery), possibly due to certain language varieties being better suited for certain flows (Kautny, 2015).
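Since the talk concerns durational and rhythmic variation, one metric commonly used in speech-rhythm research to quantify durational alternation is the normalized Pairwise Variability Index (nPVI). Whether Gilbers uses this particular measure is not stated here, so treat the following as a generic sketch with invented duration values:

```python
def npvi(durations):
    """Normalized Pairwise Variability Index for a sequence of durations (ms).
    Higher values mean stronger alternation between adjacent durations."""
    pairs = list(zip(durations, durations[1:]))
    terms = [abs(a - b) / ((a + b) / 2.0) for a, b in pairs]
    return 100.0 * sum(terms) / len(terms)

# Hypothetical vowel-duration sequences (ms): more alternation -> higher nPVI.
even = [100, 105, 95, 100, 102]
alternate = [60, 140, 55, 150, 65]

print(npvi(even) < npvi(alternate))  # True
```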

The observations above inform Steven Gilbers’s dissertation research on hip-hop linguistics. During his upcoming talk at UMass Amherst, he will discuss how East Coast and West Coast AAE differ from each other in terms of vowel duration and prosody, as well as how these differences are reflected in the rap styles associated with both regions. Moreover, he will discuss how Tupac “2Pac” Shakur – a native New Yorker – acquired a West Coast AAE accent, and how his second dialect acquisition trajectory was influenced by his role in the East Coast/West Coast hip-hop war of the 1990s.

Bio. Steven Gilbers (26) is a hip-hop linguist from Groningen, the Netherlands. His research interests include African American English, hip-hop music, and the sociolinguistics of hip-hop culture. Steven is in the process of writing his doctoral dissertation on second African American English dialect acquisition in relation to regional hip-hop identity at the University of Groningen. Supported by a Fulbright grant, he is currently visiting the United States to conduct an African American English accent perception experiment in New York City and Los Angeles. Aside from his academic endeavors, Steven is also a hip-hop musician, spoken word artist, and co-host of the Kick Knowledge podcast.

Wu in MLFL Thurs. 11/16 at 11:45

Steven Wu (MSR) will present “A Smoothed Analysis of the Greedy Algorithm for the Linear Contextual Bandit Problem” in the Machine Learning and Friends Lunch Thursday Nov. 16 at 11:45 am in CS 150. Abstract and bio follow.

Abstract:

Bandit learning is characterized by the tension between long-term exploration and short-term exploitation. However, as has recently been noted, in settings in which the choices of the learning algorithm correspond to important decisions about individual people (such as criminal recidivism prediction, lending, and sequential drug trials), exploration corresponds to explicitly sacrificing the well-being of one individual for the potential future benefit of others. This raises a fairness concern. In such settings, one might like to run a “greedy” algorithm, which always makes the (myopically) optimal decision for the individuals at hand — but doing this can result in a catastrophic failure to learn. In this paper, we consider the linear contextual bandit problem and revisit the performance of the greedy algorithm. We give a smoothed analysis, showing that even when contexts may be chosen by an adversary, small perturbations of the adversary’s choices suffice for the algorithm to achieve “no regret”, perhaps (depending on the specifics of the setting) with a constant amount of initial training data. This suggests that “generically” (i.e. in slightly perturbed environments), exploration and exploitation need not be in conflict in the linear setting.
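A toy simulation conveys the flavor of the result. Below, a purely greedy learner keeps a per-arm ridge-regression estimate and always pulls the arm with the highest predicted reward; because each context is a fixed base point plus a Gaussian perturbation, the perturbations supply enough “free” exploration for per-round regret to shrink. All parameters here are invented for the demo, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T, lam, noise = 5, 3, 3000, 1.0, 0.1

# True arm parameters (unit norm), unknown to the learner.
theta = rng.normal(size=(K, d))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)

# Per-arm ridge regression state: A = lam*I + sum x x^T, b = sum r x.
A = np.stack([lam * np.eye(d) for _ in range(K)])
b = np.zeros((K, d))
regret = []

for t in range(T):
    # "Smoothed" context: an adversarial base point plus Gaussian perturbation.
    x = np.ones(d) + rng.normal(scale=1.0, size=d)
    est = np.array([np.linalg.solve(A[j], b[j]) @ x for j in range(K)])
    k = int(np.argmax(est))                 # purely greedy choice, no exploration
    r = theta[k] @ x + rng.normal(scale=noise)
    A[k] += np.outer(x, x)
    b[k] += r * x
    regret.append(float(np.max(theta @ x) - theta[k] @ x))

early, late = np.mean(regret[:200]), np.mean(regret[-500:])
print(late < early)  # per-round regret shrinks over time
```

With the perturbations removed (a fixed context every round), the same greedy learner can lock onto a suboptimal arm forever, which is the catastrophic failure the abstract alludes to.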

Bio:

Steven Wu is currently a Post-Doctoral Researcher at Microsoft Research in New York City, where he is a member of the Machine Learning and Algorithmic Economics groups. He will be joining the Department of Computer Science and Engineering at the University of Minnesota as an Assistant Professor starting in fall 2018. He received his Ph.D. in Computer Science from the University of Pennsylvania in 2017, under the supervision of Michael Kearns and Aaron Roth. His doctoral dissertation, “Data Privacy Beyond Differential Privacy,” received the 2017 Morris and Dorothy Rubinoff Dissertation Award. His research focuses on algorithm design under different social constraints. His primary interest is data privacy, specifically differential privacy, where he builds tools for data analysis under the constraint of privacy preservation. His recent research also studies algorithmic fairness, especially in the context of machine learning, where he investigates how we can prevent bias and unfairness in algorithmic decision making. He examines problems in these areas using methods and models from machine learning theory, economics, optimization, and beyond.

Farbood in Research in Music Series Friday, Nov. 17 at 2:30

Mary Farbood (NYU) will present “The Temporal Dynamics of Music Versus Speech Processing” in the Old Chapel Conference Room on Friday, Nov. 17, 2017 at 2:30pm. All are welcome! An abstract is below.

Two studies comparing the temporal dynamics of music and speech are presented. The first focuses on tempo and how it affects key-finding; these results are then compared to various timescales associated with speech processing. The second study examines decoding time of musical structure using a key-finding task and discusses those results in the context of analogous speech research. These experiments highlight both differences and similarities in how music and speech are processed in time.
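For readers unfamiliar with the “key-finding task” mentioned above: the classic computational approach is the Krumhansl-Schmuckler algorithm, which correlates a piece’s pitch-class duration histogram against rotations of major and minor key profiles. The sketch below uses an invented toy melody histogram; it illustrates the standard algorithm, not necessarily Farbood’s stimuli or method.

```python
# Krumhansl-Kessler major and minor key profiles (probe-tone ratings).
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def find_key(pc_durations):
    """Krumhansl-Schmuckler key-finding: correlate a 12-bin pitch-class
    duration histogram with all 24 rotated key profiles; return the best."""
    best = None
    for tonic in range(12):
        for mode, profile in (("major", MAJOR), ("minor", MINOR)):
            rotated = [profile[(pc - tonic) % 12] for pc in range(12)]
            r = pearson(pc_durations, rotated)
            if best is None or r > best[0]:
                best = (r, NAMES[tonic], mode)
    return best[1], best[2]

# Duration histogram of a hypothetical tonic-heavy C-major melody.
hist = [5, 0, 3, 0, 3, 4, 0, 4, 0, 3, 0, 2]
print(find_key(hist))  # ('C', 'major')
```

Tempo enters the picture because how much of this histogram a listener can accumulate per unit time depends on how fast the notes go by, which is where the comparison to speech timescales becomes interesting.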

Wang in MLFL Thurs. 11/9 at 11:45

Lu Wang (Northeastern) will present “What Makes a Good Argument: Understanding and Predicting High Quality Arguments Using NLP Methods” in the Machine Learning and Friends Lunch Thursday Nov. 9 at 11:45 am in CS 150. Abstract and bio follow.

Abstract:

Debate and deliberation play essential roles in politics and civil discourse. While argument content and linguistic style both affect debate outcomes, limited work has been done on studying the interplay between the two. In the first part of this talk, I will present a joint model that estimates the inherent persuasive strengths of different topics, the effects of numerous linguistic features, and the interactions between the two as they affect debate audience. By experimenting with Oxford-style debates, our model predicts audience-adjudicated winners with 74% accuracy, significantly outperforming models based on linguistic features alone. We also find that winning sides employ more strong arguments (as corroborated by human judgment) and debaters all tend to shift topics to stronger ground. The model further allows us to identify the linguistic features associated with strong or weak arguments.
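As a rough illustration of the first part (not the authors’ actual joint model or data), the sketch below trains a logistic-regression winner predictor from two synthetic per-side features: a topic-strength score and a rate of hedging language. The features, weights, and data are placeholders invented for the demo:

```python
import math
import random

random.seed(1)

def make_debate():
    """Generate one synthetic debate side: features plus a win/lose label.
    Winners are decided by strong topics and fewer hedges, plus noise."""
    topic_strength = random.uniform(-1, 1)  # inherent strength of the side's topics
    hedges = random.uniform(0, 1)           # rate of hedging language
    score = 2.0 * topic_strength - 1.5 * (hedges - 0.5) + random.gauss(0, 0.3)
    return [topic_strength, hedges], 1 if score > 0 else 0

data = [make_debate() for _ in range(500)]
w, bias, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(300):  # batch gradient descent on the logistic loss
    gw, gb = [0.0, 0.0], 0.0
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + bias
        p = 1.0 / (1.0 + math.exp(-z))
        for i in range(2):
            gw[i] += (p - y) * x[i]
        gb += p - y
    w = [wi - lr * gi / len(data) for wi, gi in zip(w, gw)]
    bias -= lr * gb / len(data)

acc = sum(
    (sum(wi * xi for wi, xi in zip(w, x)) + bias > 0) == (y == 1) for x, y in data
) / len(data)
print(acc > 0.7)  # True: well above chance on this synthetic data
```

The interesting part of the real model is precisely what this sketch omits: topic strengths are latent quantities estimated jointly with the feature weights, rather than observed inputs.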

In the second part of my talk, I will present our recent study on retrieving diverse types of supporting arguments from relevant documents for user-specified topics. We find that human writers often use different types of arguments to promote persuasiveness, which can be characterized with different linguistic features. We then show how to leverage argument type to assist the task of supporting argument detection. I will also discuss our follow-up work on automatic argument generation.

Bio:

Lu Wang has been an Assistant Professor in the College of Computer and Information Science at Northeastern University since 2015. She received her Ph.D. in Computer Science from Cornell University and her bachelor’s degrees in Intelligence Science and Technology and in Economics from Peking University. Her research mainly focuses on designing machine learning algorithms and statistical models for natural language processing (NLP) tasks, including abstractive text summarization, language generation, argumentation mining, information extraction, and their applications in interdisciplinary subjects (e.g., computational social science). Lu and her collaborators received an outstanding short paper award at ACL 2017 and a best paper nomination at SIGDIAL 2012. Her group’s work is funded by the National Science Foundation (NSF), the Intelligence Advanced Research Projects Activity (IARPA), and several industry gifts (Toutiao AI Lab and the NVIDIA GPU program). More information about her research can be found at www.ccs.neu.edu/home/luwang/.