Monthly Archives: November 2016

Rooshenas in Machine Learning and Friends Wednesday Nov. 30 at 11:30

who: Pedram Rooshenas, University of Oregon
when: 11:30am, Wednesday, Nov 30
where: Computer Science Building, Room 150
food: wraps from The Works

Learning Tractable Graphical Models

Abstract:
Probabilistic graphical models have been successfully applied to a wide variety of fields, including computational biology, computer vision, natural language processing, and robotics. However, in probabilistic models for many real-world domains, exact inference is intractable, and approximate inference may be inaccurate. In this talk, we discuss how we can learn tractable models such as arithmetic circuits (ACs) and sum-product networks (SPNs), in which marginal and conditional queries can be answered efficiently.
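To make the tractability claim concrete, here is a minimal sketch of an SPN over two binary variables, showing how a marginal or conditional query reduces to a single bottom-up pass through the network. This toy network and its class names are illustrative assumptions, not code or a model from the talk (Libra, mentioned below, provides real implementations).

```python
class Leaf:
    """Indicator node for a variable taking a given value."""
    def __init__(self, var, value):
        self.var, self.value = var, value
    def eval(self, evidence):
        # Marginalizing a variable out = setting both of its indicators to 1.
        if self.var not in evidence:
            return 1.0
        return 1.0 if evidence[self.var] == self.value else 0.0

class Product:
    """Product node: multiplies its children's values."""
    def __init__(self, children):
        self.children = children
    def eval(self, evidence):
        result = 1.0
        for c in self.children:
            result *= c.eval(evidence)
        return result

class Sum:
    """Sum node: weighted mixture of its children."""
    def __init__(self, weighted_children):
        self.weighted = weighted_children  # list of (weight, node) pairs
    def eval(self, evidence):
        return sum(w * c.eval(evidence) for w, c in self.weighted)

# A toy P(X1, X2): a mixture of two fully factored distributions.
spn = Sum([
    (0.6, Product([Sum([(0.9, Leaf("X1", 1)), (0.1, Leaf("X1", 0))]),
                   Sum([(0.8, Leaf("X2", 1)), (0.2, Leaf("X2", 0))])])),
    (0.4, Product([Sum([(0.2, Leaf("X1", 1)), (0.8, Leaf("X1", 0))]),
                   Sum([(0.3, Leaf("X2", 1)), (0.7, Leaf("X2", 0))])])),
])

# Marginal P(X1=1): one pass, linear in circuit size -- no summation
# over exponentially many joint states.
p_x1 = spn.eval({"X1": 1})               # 0.6*0.9 + 0.4*0.2 = 0.62
# Conditional P(X2=1 | X1=1) via two passes.
p_joint = spn.eval({"X1": 1, "X2": 1})   # 0.6*0.9*0.8 + 0.4*0.2*0.3 = 0.456
print(p_x1, p_joint / p_x1)
```

The key point is that each query costs one traversal of the circuit, so inference time scales with the learned circuit's size rather than with the number of joint variable assignments.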

We also discuss how we can learn these tractable graphical models in a discriminative setting, in particular by introducing Generalized ACs, which combine ACs and neural networks.

Bio:
Pedram Rooshenas is a Ph.D. candidate at the University of Oregon working with Prof. Daniel Lowd. Pedram’s research interests include learning and inference in graphical models and deep structured models.

Pedram holds an M.Sc. in Information Technology from Sharif University, Tehran, with a thesis on data reduction in wireless sensor networks, and an M.Sc. in Computer Science from the University of Oregon.

Pedram also maintains Libra, an open-source toolkit for learning and inference with discrete probabilistic models.

Rysling in Cognitive Brown Bag Weds. Nov. 30

Amanda Rysling (UMass Linguistics) will be presenting in this week’s cognitive brown bag – all are welcome!

Title: Preferential early attribution in incremental segmental parsing

Time: 12:00pm to 1:15pm, Wednesday Nov. 30. Location: Tobin 521B.

Abstract: Recognizing the speech we hear as the sounds of the languages we speak requires solving a parsing problem: mapping from the acoustic input we receive to the phonemes and words we recognize as our language. The literature on segmental perception has focused on cases in which we as listeners seem to successfully un-do the blending of speech sounds that naturally occurs in production. The field has identified many cases in which listeners seem to completely attribute the acoustic products of articulation to the sounds whose articulation created them, and so seem to solve the parsing problem in an efficient and seldom-errorful way. Only a handful of studies have examined cases in which listeners seem to systematically “mis-parse,” attributing the acoustic products of one sound’s articulation to another sound, and failing to disentangle the blend of their production.

In this talk, I review the results of six phoneme categorization studies that demonstrate that such failure to completely un-do acoustic blending arises when listeners must judge one sound in a string relative to the sound that follows it, and the acoustic transitions between the two sounds are gradual. I then report the results of studies that demonstrate that listeners persist in attributing the acoustic products of a second sound’s articulation to a first sound even when the signal conveys early explicit evidence about the identity of that second sound, and so could have been leveraged to begin disentangling the first from the second before the second sound was fully realized.

I go on to argue for a shift in our perspective toward segmental parsing. Attributing the product of a later sound’s articulation to an earlier sound seems inefficient or undesirable when we understand the goal of segmental parsing to be the complete attribution of acoustic products to exactly the sounds whose articulations gave rise to them. But when we consider the fact that listeners necessarily perceive the evidence for events in the world at a delay from when those events occurred, it is adaptive to prefer attributing later-incoming acoustic signal to earlier speech sounds.