Monthly Archives: December 2015

Wong in CS MLFL 12/3 at 1 pm

Lawson L.S. Wong of MIT will present “Learning the State of the World: Object-based World Modeling for Mobile-Manipulation Robots” Thursday Dec. 3rd from 1:00pm to 2:00pm (arrive at 12:45 to get pizza) in CS150. An abstract and bio follow.

Abstract:

Mobile-manipulation robots performing service tasks in human-centric indoor environments need to know about relevant aspects of their spatial surroundings. However, service robots rarely know the exact state of the world, unlike industrial robots in structured environments. Additionally, as the world is shared with humans, uncertainty in the complete state of the world is inevitable over time. Mobile-manipulation robots therefore need to perform state estimation continuously, using perceptual information to maintain a representation of the task-relevant state of the world, together with its uncertainty. Because indoor tasks frequently require interacting with objects, objects should be given critical emphasis in spatial representations for service robots. In my Ph.D. work, I propose a world model based on objects, their semantic attributes (task-relevant properties such as type and pose), and their geometric realizations in the physical world.
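To make the idea of an object-based world model concrete, here is a minimal sketch of what one entry in such a model might look like. This is purely illustrative; the class and field names are assumptions for this example, not the representation used in the talk.

```python
# A hypothetical sketch of one entry in an object-based world model:
# each object carries semantic attributes (e.g. its type) and an
# uncertain geometric realization (a pose estimate with uncertainty).
from dataclasses import dataclass, field

@dataclass
class ObjectEstimate:
    obj_type: str      # semantic attribute, e.g. "mug"
    pose: tuple        # (x, y, theta) estimate in the map frame
    pose_sd: tuple     # per-coordinate standard deviation (uncertainty)
    attributes: dict = field(default_factory=dict)  # other task-relevant properties

# A toy world model is then just a collection of such estimates.
world_model = [
    ObjectEstimate("mug", (1.2, 0.4, 0.0), (0.05, 0.05, 0.10)),
    ObjectEstimate("plate", (1.5, 0.4, 0.0), (0.04, 0.06, 0.20), {"color": "white"}),
]
print(len(world_model), world_model[0].obj_type)
```

The point of the sketch is that uncertainty is stored alongside each estimate, so downstream planning can reason about how confident the robot is in each object's state.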

Objects are challenging to keep track of because there is significant uncertainty in their states. Object detection and recognition using robotic vision is still error-prone. Objects can also be inherently ambiguous because they have similar attributes. Besides detection noise, other agents may change the state of the world. Compounded over multitudes of objects and long temporal horizons, the above sources of uncertainty give rise to a challenging estimation problem. Fortunately, most objects do not change quickly, and sensing is relatively cheap, so we can leverage information from multiple diverse snapshots of similar world states. However, putting the information together introduces a data association problem, which I tackle with constrained Bayesian nonparametric models. By carefully aggregating information across different viewpoints, times, and sensors, I show that robots can reduce their uncertainty in the state of the world and maintain more accurate object-based world models.
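The data-association idea above can be illustrated with a toy 1-D example. The sketch below uses a greedy, gated nearest-cluster assignment, which is a much simpler stand-in for the constrained Bayesian nonparametric models mentioned in the abstract; all numbers and thresholds are assumptions chosen for illustration.

```python
# Toy illustration of data association across snapshots (NOT the talk's
# actual model): noisy detections of unknown objects are grouped greedily,
# and each group's mean is a lower-variance estimate of an object's position.
import random
import statistics

random.seed(0)

TRUE_POSITIONS = [0.0, 5.0, 10.0]   # hypothetical 1-D object locations
NOISE_SD = 0.3                      # assumed sensor noise
GATE = 4 * NOISE_SD                 # max distance to join an existing cluster

# Simulate 30 snapshots, each yielding one noisy detection per object.
observations = [random.gauss(p, NOISE_SD)
                for _ in range(30) for p in TRUE_POSITIONS]
random.shuffle(observations)

clusters = []  # each cluster: observations attributed to one object
for z in observations:
    if clusters:
        nearest = min(clusters, key=lambda c: abs(z - statistics.mean(c)))
        if abs(z - statistics.mean(nearest)) < GATE:
            nearest.append(z)   # associate with an existing object
            continue
    clusters.append([z])        # nothing close enough: hypothesize a new object

# Aggregating n associated observations shrinks the standard error of each
# position estimate to roughly NOISE_SD / sqrt(n).
estimates = sorted(statistics.mean(c) for c in clusters)
print(len(clusters), [round(e, 2) for e in estimates])
```

Even this crude association rule recovers position estimates far tighter than any single detection, which is the core benefit of aggregating diverse snapshots of similar world states.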

Bio:

Lawson L.S. Wong is a Ph.D. candidate at the Massachusetts Institute of Technology, working in the Learning and Intelligent Systems Group under the supervision of Leslie Pack Kaelbling and Tomás Lozano-Pérez. Previously, he received his B.S. (with Honors) and M.S. in Computer Science at Stanford University, both in 2009. His current research focuses on acquiring, representing, and estimating knowledge about the world that an autonomous robot may find useful. More broadly, Lawson is interested in, and follows many topics within, the fields of robotics, machine learning, and artificial intelligence. He was recently awarded an AAAI Robotics Student Fellowship and a Croucher Foundation Fellowship for Postdoctoral Research. He will begin his postdoctoral appointment at Brown University in 2016, working with Stefanie Tellex.

Antony in Cognitive Brown Bag Weds. 12/2 at noon

Louise Antony of UMass Philosophy will present “In Praise of Loose Talk: on the notion of ‘rule-following’ in cognitive science” in the Cognitive Brown Bag series Wednesday, Dec. 2nd, at noon in Tobin 521B. The abstract follows.

Abstract: Philosophers often challenge cognitive scientists to clarify their foundational assumptions, and in particular, to clarify their use of intentionalistic terms like “knowledge,” “representation,” and “inference.”  Cognitive scientists often disparage these challenges, saying that any needed clarifications will come as empirical work progresses.  This is a conciliatory paper.  On the one hand, I’ll argue, through a case study from vision science, that empirical inquiry need not wait for the clarification of its foundations, and that empirical scientists should not be pressed to define terms that are serving their needs perfectly well.  On the other hand, I’ll show that some philosophical clarification is both possible and salutary.  I’ll offer a taxonomy of “rule-following” cognitive systems – “rational-causal,” “intelligible-causal,” and “brute-causal” systems – which shows how differences in cognitive architecture might align with distinctions philosophers deem important.