Monthly Archives: February 2017

Roberts on epistemic modals in Linguistics Colloquium Friday 3:30

Craige Roberts of the Ohio State University and New York University will be presenting “Agreeing and Assessing: Epistemic modals and the question under discussion” in the Linguistics colloquium series Friday March 3rd at 3:30, in ILC N400. All are welcome!

Abstract. Important debates in the recent literature on Epistemic Modal Auxiliaries (EMAs) hinge on how we understand disagreements about the truth of assertions containing EMAs, and on a variety of attested response patterns to such assertions. Some relevant examples display evidence of faultless disagreement (Lasersohn 2005; Egan et al. 2005; MacFarlane 2005, 2011; Egan 2007; Stephenson 2007) or “faulty agreement” (Moltmann 2002). Others display a variety of patterns of felicitous response to statements with EMAs, responses which sometimes seem to target the prejacent alone and other times the entire modal claim (Lyons 1977; Swanson 2006; Stephenson 2007; von Fintel & Gillies 2007b, 2008; Portner 2009; Dowell 2011; among others). I provide an alternative characterization of what it is to agree about EMA statements, arguing that this has generally been misunderstood. Then I provide evidence that the pattern of felicitous response to a given example is a function of the question under discussion in the context of utterance, undercutting a variety of criticisms of the standard semantics which trade on these phenomena.

Jiang on Reinforcement Learning in Data Science Tuesday at 4

What: DS Seminar
Date: February 28, 2017
Time: 4:00 – 5:00 P.M.
Location: Computer Science Building, Room 151
A reception will be held at 3:40 P.M. in the atrium outside the presentation room.

Nan Jiang
University of Michigan
New Results in Statistical Reinforcement Learning

Abstract:

Recently, reinforcement learning (RL) has achieved inspiring success in game-playing domains, including human-level control in Atari games and mastering the game of Go. Looking into the future, we expect to build machine learning systems that use RL to turn predictions into actions; applications include robotics, dialog systems, online education, and adaptive medical treatment, to name but a few.

In this talk, Nan will show how theoretical insights from supervised learning can help us understand RL and better appreciate the unique challenges that arise from multi-stage decision making. The first part of the talk will focus on an interesting phenomenon: that a short planning horizon can produce better policies when data are limited. He will explain this by making a formal analogy to empirical risk minimization, and argue that a short planning horizon helps avoid overfitting. The second part of the talk concerns a core algorithmic challenge in state-of-the-art RL: sample-efficient exploration in large state spaces. He will introduce a new complexity measure, the Bellman rank, which allows a unified algorithm to be applied to a number of important RL settings, in some cases obtaining polynomial sample complexity for the first time.
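
For readers who want a concrete feel for the first result, here is a minimal toy sketch in Python/NumPy (our own illustration, not code from the talk): value iteration is run on a transition model estimated from only a few samples per state-action pair, once with a long planning horizon and once with a short one, and the resulting greedy policies are evaluated in the true MDP. Whether the shorter horizon wins depends on the random instance and the amount of data, which is precisely the overfitting analogy.

```python
# Toy sketch (not from the talk): a reduced planning horizon as regularization
# when planning on an MDP estimated from limited data. All MDP details are made up.
import numpy as np

rng = np.random.default_rng(0)
S, A = 10, 3            # small random MDP
TRUE_GAMMA = 0.99       # discount used to judge the resulting policy

# Ground-truth dynamics and rewards (hypothetical).
P_true = rng.dirichlet(np.ones(S), size=(S, A))   # P_true[s, a] is a distribution over next states
R_true = rng.uniform(0, 1, size=(S, A))

def value_iteration(P, R, gamma, iters=500):
    """Return a greedy policy for the given model and discount."""
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * P @ V          # Q[s, a]
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def policy_value(pi, P, R, gamma):
    """Exact value of a deterministic policy in the true MDP, averaged over states."""
    P_pi = P[np.arange(S), pi]         # S x S transition matrix under pi
    R_pi = R[np.arange(S), pi]
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)
    return V.mean()

# Estimate the transition model from a handful of sampled transitions per (s, a).
n_samples = 5
P_hat = np.zeros_like(P_true)
for s in range(S):
    for a in range(A):
        next_states = rng.choice(S, size=n_samples, p=P_true[s, a])
        P_hat[s, a] = np.bincount(next_states, minlength=S) / n_samples

# Plan on the estimated model with a long vs. short horizon; evaluate in the true MDP.
for planning_gamma in (0.99, 0.6):
    pi = value_iteration(P_hat, R_true, planning_gamma)
    print(f"planning gamma={planning_gamma}: "
          f"true value = {policy_value(pi, P_true, R_true, TRUE_GAMMA):.3f}")
```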

Emily Morgan in Psycholinguistics Friday at 10 and CLC Monday at 11

Emily Morgan will be speaking to the Psycholinguistics workshop next Friday March 3rd at 10 am in ILC N400. The title and abstract are below. We’ll also be discussing a (very) related paper of hers and Roger Levy’s in preparation for that visit in a Computational Linguistics Community meeting Monday Feb. 27th at 11 am in ILC N451. The paper is available here:

http://idiom.ucsd.edu/~rlevy/papers/morgan-levy-2015-cogsci.pdf

Title: Generative and Item-Specific Knowledge in Language Processing

Abstract: The ability to generate novel utterances compositionally using generative knowledge is a hallmark property of human language. At the same time, languages contain non-compositional or idiosyncratic items, such as irregular verbs, idioms, etc. In this talk I ask how and why language achieves a balance between these two systems—generative and item-specific—from both the synchronic and diachronic perspectives.

Specifically, I focus on the case of binomial expressions of the form “X and Y”, whose word order preferences (e.g. bread and butter/#butter and bread) are potentially determined by both generative and item-specific knowledge. I show that ordering preferences for these expressions indeed arise in part from violable generative constraints on the phonological, semantic, and lexical properties of the constituent words, but that expressions also have their own idiosyncratic preferences. I argue that both the way these preferences manifest diachronically and the way they are processed synchronically are constrained by the fact that speakers have finite experience with any given expression: in other words, the ability to learn and transmit idiosyncratic preferences for an expression is constrained by how frequently it is used. The finiteness of the input leads to a rational solution in which processing of these expressions relies gradiently upon both generative and item-specific knowledge as a function of expression frequency, with lower frequency items primarily recruiting generative knowledge and higher frequency items relying more upon item-specific knowledge. This gradient processing in turn combines with the bottleneck effect of cultural transmission to perpetuate across generations a frequency-dependent balance of compositionality and idiosyncrasy in the language, in which higher frequency expressions are gradiently more idiosyncratic. I provide evidence for this gradient, frequency-dependent trade-off of generativity and item-specificity in both language processing and language structure using behavioral experiments, corpus data, and computational modeling.
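
For a concrete (and deliberately oversimplified) feel for the frequency-dependent trade-off, here is a toy Python sketch of our own, not the model from the paper: a generative ordering preference acts as a pseudo-count prior that is gradually overwhelmed by item-specific counts as a speaker’s experience with a particular binomial grows. The single constraint used (prefer the shorter word first) and all counts are invented for illustration.

```python
# Toy illustration of blending generative and item-specific ordering knowledge.
import math

def generative_preference(word_x: str, word_y: str) -> float:
    """Stand-in for violable generative constraints (here: prefer the shorter word first).
    Returns the probability of the 'X and Y' order under generative knowledge alone."""
    diff = len(word_y) - len(word_x)     # positive if X is shorter
    return 1 / (1 + math.exp(-diff))     # logistic squashing

def combined_preference(word_x, word_y, observed_xy, observed_yx, prior_strength=10.0):
    """Blend the two knowledge sources: the generative preference acts as a
    pseudo-count prior, so with more observations of this specific binomial
    the item-specific counts dominate."""
    p_gen = generative_preference(word_x, word_y)
    return (prior_strength * p_gen + observed_xy) / (prior_strength + observed_xy + observed_yx)

# A frequent, idiosyncratic binomial vs. a rare one (counts are invented).
print(combined_preference("bread", "butter", observed_xy=500, observed_yx=5))     # mostly item-specific
print(combined_preference("radishes", "rutabagas", observed_xy=1, observed_yx=0)) # mostly generative
```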

Trapp on Eye Tracking in Machine Learning Thurs. 2/23 at noon

who: Andrew Trapp, Worcester Polytechnic Institute
when: noon, Thursday, February 23
where: Computer Science Building rm150
food: Antonio’s pizza
generous sponsor: ORACLE LABS

Using Density to Identify Fixations in Gaze Data: Optimization-Based Formulations and Algorithms

Abstract:
Eye tracking is an increasingly common technology with a variety of practical uses. Eye-tracking gaze data can be categorized into two main events: fixations, which represent attention, and saccades, which occur between fixations. We propose a novel way to identify fixations based on their density, which reflects both the duration of a fixation and the proximity of its constituent points. We develop two mixed-integer nonlinear programming formulations and corresponding algorithms to recover the densest fixations in a data set. Our approach is parameterized by a single value that controls the desired degree of density. We conclude by discussing computational results and insights on real data sets.
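
To give a rough sense of what “density” is getting at, here is a toy Python sketch (our own illustration; it is a brute-force heuristic, not the mixed-integer nonlinear programming formulations from the talk): each contiguous window of gaze points is scored by its length divided by the average pairwise distance of its points, and the highest-scoring window is returned as the densest fixation. The synthetic gaze trace is made up.

```python
# Toy density-based fixation finder (heuristic stand-in, not the MINLP approach).
import numpy as np

def window_density(points, eps=1e-6):
    """Higher when a window is long and its gaze points are tightly clustered."""
    n = len(points)
    if n < 2:
        return 0.0
    diffs = points[:, None, :] - points[None, :, :]
    mean_dist = np.sqrt((diffs ** 2).sum(-1)).sum() / (n * (n - 1))
    return n / (mean_dist + eps)

def densest_fixation(gaze, min_len=5):
    """Return (start, end) of the contiguous window with the highest density score."""
    best, best_score = None, -np.inf
    for i in range(len(gaze)):
        for j in range(i + min_len, len(gaze) + 1):
            score = window_density(gaze[i:j])
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score

# Synthetic gaze trace: a saccade into a fixation and out again (made-up data).
rng = np.random.default_rng(1)
fixation = rng.normal([200, 150], 2, size=(30, 2))         # tight cluster of points
saccade_in = np.linspace([0, 0], [200, 150], 10)
saccade_out = np.linspace([200, 150], [400, 50], 10)
gaze = np.vstack([saccade_in, fixation, saccade_out])
print(densest_fixation(gaze))   # expected to roughly cover the fixation cluster
```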

Bio:
Andrew C. Trapp completed his PhD in Industrial Engineering at the University of Pittsburgh in 2011. He is presently an Assistant Professor of Operations and Industrial Engineering at Worcester Polytechnic Institute (WPI) in Worcester, MA. His research focuses on using advanced analytical techniques, in particular mathematical optimization, to find optimal decisions for problems arising in a diverse cross-section of sectors such as humanitarian operations, healthcare, data mining, and sustainability. He develops new theory, models, and computational solution approaches to tackle such problems. He has published in leading optimization journals such as Operations Research, European Journal of Operational Research, INFORMS Journal on Computing, Annals of Operations Research, IIE Transactions, and Discrete Optimization.

Kozma on cortical oscillation and respiration in Cognitive Bag Lunch Wednesday 2/22 at noon

Robert Kozma (U. of Memphis and UMass) will give the next Cognitive Bag Lunch at 12pm in Tobin 521B on Wednesday Feb. 22. All are welcome!

Title: Respiratory modulation of sensory cortices: Experimental evidence and graph theoretical models

Abstract: The brain generates oscillatory neuronal activity at a broad range of frequencies, and the presence and amplitude of certain oscillations at specific times and in specific brain regions are highly correlated with states of arousal and sleep, and with a wide range of cognitive processes. The neuronal mechanisms underlying the generation of brain rhythms are poorly understood. Here we present new evidence suggesting that respiration has a direct influence on oscillatory cortical activity, including gamma oscillations, and on transitions between synchronous and asynchronous cortical network states (phase transitions) in humans. Our findings further suggest that respiratory influence on cortical activity is present in most, and possibly all, areas of the neocortex. Taken together, our findings suggest that respiration acts as a master clock, exerting a subtle but unfailing synchronizing influence on the temporal organization of large-scale, dynamic cortical activity patterns and the cognitive, emotional, sensory and motor processes they control.

Dixon in Cognitive Bag Lunch noon Weds. Feb. 15

James Dixon (UConn) at 12pm in Tobin 521B Weds. Feb. 15

Sneaking Up On Biology: Lessons from Non-Living Dissipative Systems.

Abstract: All organisms develop the ability to perceive and act in the service of goals and intentions, no matter how rudimentary. Behavioral scientists have traditionally considered perception and action as properties of higher-order animals, but recent work shows that all living things, including single-celled organisms, plants, and fungi, develop the ability to detect information in their environments and use that information to guide action. The diversity of biological systems capable of perception-action suggests that, rather than reflecting a particular biological specialization, perception-action develops through general physical principles that biology has richly exploited. In this talk, I will discuss recent efforts by our group to discover these physical principles. We take the theory of dissipative structures from modern thermodynamics as a natural starting place for understanding how perception-action emerges in self-organizing, epistemic systems. Dissipative structures famously demonstrate the emergence of morphology from the flow of energy and matter. Our work shows that more complex dissipative structures detect and move to new energy sources. In addition, they can serendipitously develop sensors that allow them to act in ways related to their own persistence. Implications for understanding biological systems will be discussed.

Munkhdalai in Machine Learning and Friends Tues. Feb. 14 at noon

who: Tsendsuren Munkhdalai, UMass CICS
when: noon, Tuesday, February 14
where: Computer Science Building rm150
food: Antonio’s pizza

Abstract:
This talk will first briefly review recent advances in memory-augmented neural nets and then present our own contribution, Neural Semantic Encoders (NSE) [1,2]. With a special focus on NSE, we show that external memory in conjunction with an attention mechanism can be a good asset in natural language understanding and reasoning. In particular, we will cover a set of real, large-scale NLP tasks ranging from sentence classification to sequence-to-sequence learning and question answering, and demonstrate how NSE is effectively applied to them.
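
For readers unfamiliar with memory-augmented networks, here is a minimal toy sketch in Python/NumPy of the general read/write-with-attention idea; it is our own illustration and not the Neural Semantic Encoders architecture from [1,2]. The memory contents, query, and update rule are all invented for the example.

```python
# Toy attention-based read/write over an external memory (not the NSE model).
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def memory_read(query, memory):
    """Attend over memory rows with dot-product attention and return a blended read."""
    attn = softmax(memory @ query)      # one weight per memory slot
    return attn @ memory, attn          # convex combination of slots

def memory_write(memory, attn, new_content, erase=0.5):
    """Softly overwrite the attended slots with new content (toy update rule)."""
    return memory * (1 - erase * attn[:, None]) + erase * np.outer(attn, new_content)

rng = np.random.default_rng(0)
memory = rng.normal(size=(8, 16))       # 8 slots, 16-dim contents (made up)
query = rng.normal(size=16)             # e.g. an encoded token

read, attn = memory_read(query, memory)
memory = memory_write(memory, attn, new_content=read + 0.1 * query)
print(read.shape, attn.round(2))
```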

Bio:
Tsendsuren Munkhdalai is a postdoctoral associate in Prof. Hong’s BioNLP group at UMass Medical School. He recently received his PhD in biomedical information extraction and NLP from the Department of Computer Science at Chungbuk National University, South Korea, under the excellent supervision of Prof. Keun Ho Ryu. His research interests include semi-supervised learning, representation learning, meta-learning, and deep learning, with applications to natural language understanding and (clinical/biomedical) information extraction.

Fitter in Machine Learning and Friends, noon, Wednesday, February 15

Please note: This is the second MLFL scheduled this week, and it is on Wednesday.

who: Naomi Fitter, University of Pennsylvania 
when: noon, Wednesday, February 15
where: Computer Science Building rm150
food: Antonio’s pizza

Exploring Human-Inspired Haptic Interaction Skills For Robots
Abstract: A human-inspired understanding of the world can enhance robots’ ability to explore their surroundings successfully and safely, in applications ranging from manipulating delicate objects to playfully high-fiving a human teammate. Particularly in situations where human skills outweigh modern robot capabilities, data collected from people can yield models for successful robot behaviors. In this talk, I will cover my past cognitive robotics work on helping the PR2 robot explore and label objects with haptic adjectives, and my more recent work on allowing the Baxter robot to label and reciprocate human motions.
Bio: Naomi Fitter is a PhD Candidate and member of the Haptics Group in the University of Pennsylvania GRASP Lab, working with Professor Katherine Kuchenbecker. She investigates socially relevant physical human-robot interactions like human-robot high fives and hand-clapping games. Her work involves a combination of haptics, socially assistive robotics, entertaining robotics, and physical human-robot interaction.