Marek Petrik of IBM’s T.J. Watson Research Center will be speaking at the Machine Learning and Friends lunch this Thursday, March 12, at 12:00 pm in CS 150. His talk is titled “Better Solutions From Inaccurate Models” (abstract below).
Better Solutions From Inaccurate Models
In many application domains, it is important to compute good solutions from inaccurate models. Models in machine learning are inaccurate because they both simplify reality and are built from imperfect data. Robust optimization has emerged as a powerful methodology for reducing a solution’s sensitivity to model errors. In the first part of the talk, I will describe how robust optimization can mitigate data limitations in planning a large-scale disaster recovery operation. In the second part of the talk, I will discuss a novel use of robustness to substantially reduce the error due to model simplification in reinforcement learning and large-scale regression.
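The core idea behind robust optimization can be sketched in a few lines. The example below is my own illustration, not code from the talk: given several plausible models of how candidate plans will perform, a robust planner picks the plan whose worst-case outcome is best, rather than the plan that looks best under a single point estimate. The plan names and payoff numbers are invented.

```python
# Illustrative sketch: robust selection picks the action with the best
# worst-case value over a set of plausible models, instead of the best
# value under one (possibly wrong) point-estimate model.

def nominal_choice(actions, point_estimate):
    """Pick the action that looks best under a single estimated model."""
    return max(actions, key=lambda a: point_estimate[a])

def robust_choice(actions, models):
    """Pick the action whose worst-case value across all models is best."""
    return max(actions, key=lambda a: min(m[a] for m in models))

# Two candidate plans; three plausible models of their outcomes (invented).
actions = ["plan_a", "plan_b"]
models = [
    {"plan_a": 10.0,  "plan_b": 6.0},   # optimistic model favors plan_a
    {"plan_a": 9.0,   "plan_b": 5.5},
    {"plan_a": -20.0, "plan_b": 5.0},   # under this model plan_a fails badly
]

print(nominal_choice(actions, models[0]))  # plan_a: best under one model
print(robust_choice(actions, models))      # plan_b: best worst case
```

The nominal choice is brittle: it commits to the plan favored by one estimated model, which collapses if that model is wrong. The robust choice trades some nominal value for a guaranteed floor across all plausible models.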
Bishan Yang, a PhD candidate from Cornell University, will be speaking at the Machine Learning and Friends lunch on Tuesday, March 10, at 12:00 pm in CS 150. Her talk is titled “Exploiting Relational Knowledge for Extraction of Opinions and Events in Text” (abstract below).
Exploiting Relational Knowledge for Extraction of Opinions and Events in Text
The richness and diversity of natural language make automatic extraction of opinions and events from text difficult. An automatic system designed for this task would need to identify complex linguistic expressions, interpret their meanings in context, and integrate information that is often distributed over long distances. While machine learning techniques have been widely applied to information extraction, they often make strong independence assumptions about linguistic structure and make decisions myopically, based on local and partial information in the text. In this talk, I argue that accurate information extraction needs machine learning algorithms that can exploit relationships within and across multiple levels — between words, phrases and sentences — facilitating globally-informed decisions.
In the first part of my talk, I will introduce the task of fine-grained opinion extraction — discovering opinions, their sources, targets and sentiment from text. I will present a joint inference approach that can account for the dependencies among different opinion entities and relations, and a context-aware learning approach that is capable of exploiting intra- and inter-sentential discourse relations for improving sentiment prediction. In the second part of my talk, I will present my recent work on event extraction and event coreference resolution — the task of extracting event mentions and integrating them within and across documents by exploiting context. I propose a novel Bayesian model that allows generative modeling of event mentions, while simultaneously accounting for event-specific similarity.
Laurence Thomas of Syracuse University will be presenting Fitting-In: Autonomy vs Evolutionary Biology as part of the Forry and Micken Lecture Series in Amherst College’s Pruyne Lecture Hall, Fayerweather Hall 115, at 5:00 p.m. Thursday, March 5. Everyone is welcome.
Alvin Goldman of Rutgers University will be presenting Gettier and the Epistemic Appraisal of Philosophical Intuitions in the UMass Philosophy Colloquium in Bartlett 206 at 3:30 p.m. Friday, March 6. Everyone is welcome.
David Ross (postdoc in Rosie Cowell’s Computational Memory and Perception Lab) will be presenting Norm-Based versus Exemplar-Based Models of Face Recognition in the Cognitive Brown Bag series in Tobin 521B at noon Wednesday, March 4. Everyone is welcome – the abstract is below.
Abstract: Face space models have successfully explained a range of findings in the face recognition literature. Debate has centered on whether the faces in face space are represented with respect to norms or exemplars. Findings from a number of face adaptation studies have been taken as conclusive support for a norm-based model, wherein faces are represented with respect to an abstracted norm/prototype, over an exemplar-based model, wherein faces are represented with respect to other exemplars. Here I will summarize work from my PhD that tested and refuted these claims using computational formalizations of norm and exemplar models. I will also present the results of some new simulations that indicate that it is actually norm-based models that are unable to account for the data.
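The contrast between the two model classes can be made concrete with a toy sketch. The code below is my own illustration, not the formalizations from the talk: faces are points in a two-dimensional “face space”; a norm-based model codes a face by its vector relative to an average face (the norm), while an exemplar-based model codes it by its summed similarity to stored exemplar faces. The coordinates and the similarity parameter are invented.

```python
import math

def norm_code(face, norm):
    """Norm-based sketch: code a face by its distance and direction
    from the abstracted norm/prototype."""
    dx, dy = face[0] - norm[0], face[1] - norm[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def exemplar_activation(face, exemplars, c=1.0):
    """Exemplar-based sketch: summed exponential similarity of a face
    to all stored exemplars (as in generalized context models)."""
    return sum(math.exp(-c * math.dist(face, e)) for e in exemplars)

# Four stored exemplar faces; the norm is their average.
exemplars = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
norm = (0.0, 0.0)

probe = (0.5, 0.5)
print(norm_code(probe, norm))              # (distance, angle) from the norm
print(exemplar_activation(probe, exemplars))
```

The point of formalizing both models, as in the work summarized above, is that predictions for adaptation and recognition experiments can then be derived and tested rather than argued informally.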
Eric Bakovic of UCSD will be giving a job talk titled “Ensuring the proper determination of identity: a model of possible constraints” (abstract below) this Friday, March 6 in the Linguistics department at 3:30 pm in ILC N400. All are welcome to attend.
“Ensuring the proper determination of identity: a model of possible constraints”
Some phonological patterns can be described as sufficient identity avoidance, where ‘sufficiently identical’ means ‘necessarily identical with respect to all but some specific feature(s)’. The first part of the talk addresses this question: why are specific features ignored for the purposes of determining sufficient identity? In previous work (Bakovic 2005, Bakovic & Kilpatrick 2006, Pajak & Bakovic 2010, Brooks et al. 2013ab), we have found that patterns of sufficient identity avoidance where a specific feature F is ignored also involve F-assimilation in the same contexts. Direct reference to sufficient identity is thus unnecessary: sufficient identity is indirectly avoided because F-assimilation would otherwise be expected, resulting in total identity. Avoiding sufficient identity without assimilation is the better option, as predicted by the minimal violation property of Optimality Theory. This analysis predicts rather than stipulates the features that will be ignored for the purposes of determining sufficient identity. (Several corollary consequences of the analysis will also be discussed in the talk.)

The explanatory value of the analysis, however, is predicated on the absolute non-existence of constraints directly penalizing all-but-F identity, which could be active independently of F-assimilation. The second part of the talk addresses this question: how can such constraints be ruled out formally? I propose a deterministic model of constraint construction and evaluation that results in just the types of constraints necessary for the analysis above. More broadly, the proposed model is intended as a contribution to our formal understanding of what a ‘possible constraint’ is.
Karthik Raman, a PhD student at Cornell University working with Prof. Thorsten Joachims, will be speaking at the Machine Learning and Friends lunch this Thursday, March 5 at 12:30 pm in CS 150. His talk is titled “Man + Machine: Machine Learning with Humans-in-the-loop” (abstract below).
“Man + Machine: Machine Learning with Humans-in-the-loop”
Intelligent systems, ranging from internet search engines and online retailers to personal robots and MOOCs, live in a symbiotic relationship with their users – or at least they should. On the one hand, users greatly benefit from the services provided by these systems. On the other hand, these systems can greatly benefit from the world knowledge that users communicate through their interactions with the system. These interactions — queries, clicks, votes, purchases, answers, demonstrations, etc. — provide enormous potential for economically and autonomously optimizing these systems and for gaining unprecedented amounts of world knowledge required to solve some of the hardest AI problems.
In this talk, I will discuss the challenges of learning from data that results from human behavior. I will present new machine learning models and algorithms that explicitly account for the human decision-making process and the factors underlying it, such as human expertise, skills and needs. The talk will also explore how we can optimize human interactions to build robust learning systems with provable performance guarantees. Finally, I will present examples from the domains of search, recommendation and educational analytics where we have successfully deployed systems for cost-effectively learning with humans in the loop.
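One flavor of learning from interactions can be sketched very simply. The toy simulation below is my own illustration, inspired by interleaving-style evaluation of rankers from click data; it is not code from the talk, and the document names, relevance scores and noise rate are invented. A simulated user clicks the result they find more relevant (with occasional noise), and the system credits whichever result won the click; over many interactions the better result accumulates clear evidence despite noisy individual clicks.

```python
import random

random.seed(0)  # reproducible simulation

def simulated_user_click(result_a, result_b, true_relevance, noise=0.1):
    """The user clicks the result they find more relevant,
    but clicks randomly a small fraction of the time."""
    if random.random() < noise:
        return random.choice(["a", "b"])
    better = true_relevance[result_a] >= true_relevance[result_b]
    return "a" if better else "b"

# Hypothetical hidden relevance the system does not observe directly.
true_relevance = {"doc1": 0.9, "doc2": 0.4}

wins = {"a": 0, "b": 0}
for _ in range(200):
    wins[simulated_user_click("doc1", "doc2", true_relevance)] += 1

print(wins)  # side "a" (which showed doc1) collects most of the clicks
```

The design point this illustrates is the one in the abstract: individual human signals are noisy and shaped by the user’s own behavior, but aggregated interactions still carry enough information to drive the system toward the better alternative.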