Yearly Archives: 2015

Freeman CS Distinguished Lecture, Weds. 10/21 at 4 pm

Bill Freeman of MIT will present “A Big World of Tiny Motions” in the Distinguished Lecture series on Wednesday, October 21, 2015, in the Computer Science Building, Room 151, from 4:00 to 5:00 p.m. A reception will be held in the Atrium at 3:40 p.m.

Abstract. We have developed a “motion microscope” to visualize small motions by synthesizing a video with the desired motions amplified. The project began as an algorithm to amplify small color changes in videos, allowing color changes from blood flow to be visualized. Modifications to this algorithm allow small motions to be amplified in a video. I’ll describe the algorithms, and show color-magnified videos of adults and babies, and motion-magnified videos of throats, pipes, cars, smoke, and pregnant bellies. The motion microscope lets us see the world of tiny motions, and it may be useful in areas of science and engineering.

Having this tool led us to explore other vision problems involving tiny motions. I’ll describe recent work in analyzing fluid flow and depth by exploiting small motions, caused by the refraction of turbulent air flow, in video or stereo video sequences (joint work with the collaborators listed below and Tianfan Xue, Anat Levin, and Hossein Mobahi). We have also developed a “visual microphone” to record sounds by watching objects, like a bag of chips, vibrate (joint work with the collaborators listed below and Abe Davis and Gautam Mysore).

Collaborators: Michael Rubinstein, Neal Wadhwa, and co-PI Fredo Durand.
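
For readers curious how the magnification described in the abstract works in principle, here is a minimal sketch of the Eulerian idea behind color magnification: band-pass filter each pixel’s time series around the frequency band of interest, amplify the filtered signal, and add it back. This is a toy illustration with assumed array shapes, band edges, and amplification factor, not the MIT group’s implementation, which also decomposes the video over a spatial pyramid.

```python
# Toy sketch of Eulerian color magnification: band-pass each pixel's time
# series, amplify the filtered signal, and add it back to the video.
# Array shapes, band edges, and the amplification factor are assumptions
# for illustration; this is not the MIT group's implementation.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_color(frames, fps, low_hz=0.8, high_hz=3.0, alpha=50.0):
    """frames: float array of shape (T, H, W, C) with values in [0, 1]."""
    nyquist = fps / 2.0
    b, a = butter(2, [low_hz / nyquist, high_hz / nyquist], btype="band")
    # Filter along the time axis independently for every pixel and channel.
    filtered = filtfilt(b, a, frames, axis=0)
    return np.clip(frames + alpha * filtered, 0.0, 1.0)

if __name__ == "__main__":
    # Synthetic stand-in for a video: a faint 1.2 Hz brightness oscillation.
    fps, seconds = 30, 4
    t = np.arange(fps * seconds) / fps
    frames = 0.5 + 0.001 * np.sin(2 * np.pi * 1.2 * t)[:, None, None, None]
    frames = np.repeat(np.repeat(np.repeat(frames, 8, 1), 8, 2), 3, 3)
    out = magnify_color(frames, fps)
    print("input peak-to-peak: ", np.ptp(frames[:, 0, 0, 0]))
    print("output peak-to-peak:", np.ptp(out[:, 0, 0, 0]))
```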

For project web pages and radio segments, visit the events page.

TED or TEDx talks by students:

See invisible motion, hear silent sounds
How a silent video can reveal sound: Abe Davis’ knockout tech demo at TED2015

Bio: William T. Freeman is the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science at MIT, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) there. He is currently on a partial leave from MIT, starting a computer vision group at Google in Cambridge, MA.

His current research interests include machine learning applied to computer vision, Bayesian models of visual perception, and computational photography. He received outstanding paper awards at computer vision or machine learning conferences in 1997, 2006, 2009 and 2012, and test-of-time awards for papers from 1990 and 1995. Previous research topics include steerable filters and pyramids, orientation histograms, the generic viewpoint assumption, color constancy, computer vision for computer games, and belief propagation in networks with loops.

He is active in the program or organizing committees of computer vision, graphics, and machine learning conferences. He was the program co-chair for ICCV 2005, and for CVPR 2013.

Cognitive Science Grant Writing Group?

Could the Cognitive Science Initiative help you to get (more) external funding? Would bi-monthly meetings with your colleagues over lunch help you to meet grant deadlines? Would you like assistance finding successful grant writers to provide comments on your proposals? Do you need help identifying the collaborators who would make your proposal truly interdisciplinary? Would you like support to invite a potential mentor from another institution? If your answer to any or all of those questions is Yes, or Maybe, or Perhaps if … please send a quick email to Lisa Sanders (lsanders@psych.umass.edu) indicating your interest and the type of grant writing help that you would find most useful. With sufficient interest, we’ll make it happen.

Mahadevan Spring 2016 Course: Building a Deep Mind in the 21st century

Sridhar Mahadevan of Computer Science will be offering a graduate seminar in Artificial Intelligence this spring.

SPRING 2016: COMPSCI 791DM: Building a Deep Mind in the 21st century

Many cognitive abilities once solely the province of biological systems are now routinely achievable with machines. Once no more than a dream, abilities such as complex 3D perception, natural language, machine translation, speech recognition, learning, and reasoning can now be achieved even on low-cost hardware, such as cellphones, tablets, or similar devices, which can access larger-scale servers for offline computation. This seminar explores the next frontiers for AI, presuming that many human cognitive abilities (vision, language, learning, reasoning, speech recognition) will be largely achieved in the next decade, given the massive computing and data resources likely to be available to individuals and to corporations like Google, Baidu, Facebook, and IBM. To put it another way, core cognitive abilities are no longer “mysteries,” but matters of mundane, albeit challenging, engineering.

What’s left for AI? Will “artificial intelligence” be achieved when machines reach human-level performance in these core cognitive abilities, or, as one philosopher put it in the early heady days of AI, have we merely climbed a tall tree on the way to getting to the moon? Our thesis is that the really interesting aspects of AI are only now beginning to be possible, given that the “operating system”-level functionalities listed above are achievable. This seminar will discuss how AI research in the next decade or two can begin to tackle each of the following problems.

1. Emotion: current AI systems can learn effectively from reinforcement, but feel no emotion. Is emotion necessary to build truly intelligent cyborgs? We explore the current theories of emotion, and its connections to rational decision making.

2. Consciousness: AI systems can carry out inference and reason about choices, but are not conscious of their own self, in the sense that most mammals are. What’s missing? We review the current theories of consciousness.

3. Curiosity: AI systems can be programmed to carry out tasks using high-level rewards and goals, but seem incapable of setting their own tasks and goals. How can we build AI systems that are curious in the sense that children are? What are the essential elements of curiosity?

4. Mortality: AI systems have no fear of “death” and do not concern themselves with their “mortality”, in the sense that humans (and presumably other mammals) do. However, as AI systems learn from experience and compile massive knowledge bases, they may face similar challenges in being “unplugged”, faced with the loss of everything they know and have learned. (HAL in 2001: A Space Odyssey certainly feared its own death). Why is mortality important to incorporate in future AI systems?

Agreement Workshop Oct. 29th and 30th

From Brian Dillon:

The Department of Linguistics is hosting a workshop on agreement in natural language, aimed at bringing together researchers investigating agreement from theoretical and experimental points of view. The goal of this workshop is to promote cross-talk between researchers investigating agreement from grammatical and psycholinguistic perspectives. This workshop will particularly focus on the ways in which linear order can affect agreement processes, as well as the question of how best to model linear order effects on agreement. Presenters include:

Rajesh Bhatt (UMass Amherst)
Jonathan Bobaljik (UConn)
Brian Dillon (UMass Amherst)
Julie Franck (Université de Genève)
Maureen Gillespie (University of New Hampshire)
Laura Kalin (UConn)
Lap-Ching Keung (UMass Amherst)
Andrew Nevins (UCL)
Adrian Staub (UMass Amherst)
Francesco Vespignani (Università di Trento)
Martin Walkow (MIT)
Jana Willer-Gold (UCL)
Ellen Woolford (UMass Amherst)

The workshop takes place on October 29th and 30th at the University of Massachusetts Amherst. Registration is free and anyone who is interested is welcome to attend, but if you plan to come, please visit our website and RSVP:

http://people.umass.edu/bwdillon/AgreementWorkshop/

Altman in Cognitive Brown Bag Weds. 10/7 at noon

Gerry Altmann of UConn Psychology will be giving the Cognitive Brown Bag on Wednesday, 10/7, at 12:00 in Tobin 521B.  His abstract (I don’t have a title) is as follows:

Abstract: Language is often used to describe the changes that occur around us – changes in either state (“I cracked the glass…”) or location (“I moved the glass onto the table…”). To fully comprehend such events requires that we represent the ‘before’ and ‘after’ states of any object that undergoes change. But how do we represent these mutually exclusive states of a single object at the same time? I shall summarize a series of studies, primarily from fMRI, which show that we do represent such alternative states, and that these alternative states compete with one another in much the same way as alternative interpretations of an ambiguous word might compete. This interference, or competition, manifests in a part of the brain that has been implicated in resolving competition. Furthermore, activity in this area is predicted by the dissimilarity, elsewhere in the brain, between sensorimotor instantiations of the described object’s distinct states. I shall end with the beginnings of a new account of event representation which does away with the traditional distinctions between actions, participants, time, and space. [Prior knowledge of the brain is neither presumed, required, nor advantageous!].

UMass Rising campaign underway

The UMass Rising campaign provides faculty with the opportunity to make targeted gifts that are matched dollar for dollar by the campus. The current operation of the Initiative in Cognitive Science has been made possible in part by a recent faculty gift targeted at interdisciplinary language research. Please contact Joe Pater or Lisa Sanders if you would like to discuss the impact that your gift could have.

Rotello Cognitive Brown Bag, Weds. 9/30 at 12 p.m.

Caren Rotello of UMass Cognitive Psychology will present in the Cognitive Brown Bag series on Wednesday, September 30, at 12 p.m. The rest of this semester’s schedule follows:

10/7 –  Gerry Altmann (UConn)

10/14 –  David Ross

10/21 – Molly Potter (MIT)

10/28 – Jim Magnuson (UConn)

11/4 –  Dave Huber

11/11 – No meeting, Veterans Day

11/18 – No meeting, Psychonomics

11/25 – No meeting, Thanksgiving

12/2 – Louise Antony (UMass Philosophy)

12/9 – James Haxby (Dartmouth)

Narasimhan in MLFL, Thursday 10/1 at 1 p.m.

Karthik Narasimhan of MIT will present “Language Understanding For Text-based Games Using Deep Reinforcement Learning” in the Machine Learning and Friends Lunch on Thursday, October 1, at 1 p.m. in CS150 (arrive at 12:45 for pizza).

Abstract:
In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-of-words and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds, demonstrating the importance of learning expressive representations.
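
To make the setup concrete, below is a minimal PyTorch sketch of the general recipe described in the abstract: embed the textual state description into a vector, predict a Q-value for each action, and update from game rewards. The bag-of-words encoder, network sizes, and the toy transition are illustrative assumptions; the paper’s actual system learns an LSTM-based representation jointly with the policy from interaction with the game engine.

```python
# Minimal sketch of the recipe in the abstract: embed the textual state
# description, predict a Q-value per action, and train from game rewards.
# The bag-of-words encoder, sizes, and the toy transition below are
# assumptions for illustration; the paper itself uses an LSTM encoder.
import torch
import torch.nn as nn

class TextQNetwork(nn.Module):
    def __init__(self, vocab_size, num_actions, embed_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)  # mean of word vectors
        self.q_head = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_actions)
        )

    def forward(self, token_ids):
        # token_ids: (batch, max_len) word indices for the state description
        return self.q_head(self.embed(token_ids))  # (batch, num_actions)

def q_learning_step(net, optimizer, state, action, reward, next_state, gamma=0.99):
    """One temporal-difference update from a single (s, a, r, s') transition."""
    q_sa = net(state)[0, action]
    with torch.no_grad():
        target = reward + gamma * net(next_state).max()
    loss = (q_sa - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    net = TextQNetwork(vocab_size=1000, num_actions=4)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    # Toy transition: random token ids stand in for game text; a real agent
    # would get these from the text-game engine.
    s = torch.randint(0, 1000, (1, 12))
    s_next = torch.randint(0, 1000, (1, 12))
    print("loss:", q_learning_step(net, opt, s, action=2, reward=1.0, next_state=s_next))
```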

Bio:
I’m a fourth-year PhD student at CSAIL, working with Prof. Regina Barzilay. I work primarily in the area of Computational Semantics, specifically in language understanding, grounding, and machine comprehension. My goal is to develop richer representations of meaning that can capture its variable nature and context sensitivity, while keeping learning tractable. Previously, I worked on computational morphology, applied to Keyword Spotting and unsupervised analysis using Morphological Chains. I have a B.Tech in Computer Science from IIT Madras (2012) and an SM in Computer Science from MIT (2014).

Deo in Linguistics, Fri. 9/25 at 3:30

Ashwini Deo (Yale) will give a talk on “The Semantic and Pragmatic Underpinnings of Grammaticalization Paths” in the Linguistics department on Friday, September 25, at 3:30 in ILC N400.

Abstract: It is a well-established fact that meanings associated with functional linguistic expressions evolve in systematic ways across time. But we have little precise understanding of why and how this happens. We know even less about how formal approaches to the meanings of functional categories like tense, aspect, and negation can be reconciled with the typologically robust findings of grammaticalization research. In this talk, I will take a first step towards such an understanding by analyzing a robustly attested semantic change in natural languages — the progressive-to-imperfective shift.

The facts can be described as follows: At Stage 0, a linguistic system L possesses a single imperfective or neutral aspectual marker X that is used to express two contextually disambiguable meanings: a progressive reading and a non-progressive imperfective reading. At Stage 1, a progressive marker Y arises spontaneously in L in order to express the progressive reading in some contexts. At Stage 2, Y becomes entrenched as an obligatory grammatical element for expressing the progressive reading, while X is restricted in use to expressing the non-progressive reading. At Stage 3, Y generalizes and is used to express both readings. X is gradually driven out of L. Stage 3 (structurally identical to Stage 0) is often followed by another instantiation of Stage 1, with the innovation of a new progressive marker Z. The trajectory to be explained is thus cyclic.

The analysis I provide has a semantic component that characterizes the logical relation between the progressive and imperfective operators in terms of asymmetric entailment. Its dynamic component rests on the proposal that imperfective and progressive sentences crucially distinguish between two kinds of inquiries: phenomenal and structural inquiries (Goldsmith and Woisetschlaeger 1982). The innovation and entrenchment of progressive marking in languages is shown to be underpinned by optimal ways of resolving both kinds of inquiries in discourse, given considerations of successful and economical communication. Generalization is analyzed as the result of imperfect learning. The trajectory — consisting of the recruitment of a progressive form, its categorical use in phenomenal inquiries, and its generalization to imperfective meaning — is modeled within the framework of Evolutionary Game Theory.
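
As a rough illustration of the kind of dynamics involved (and only that), the sketch below runs textbook replicator dynamics for two competing forms, showing how a variant with even a modest payoff advantage can spread from rarity toward fixation, as in the spread of the progressive marker Y at the expense of X. The payoff matrix, rates, and time scale are invented for the illustration; Deo’s actual evolutionary game model is richer and grounded in the phenomenal/structural distinction described above.

```python
# Toy replicator-dynamics sketch of one step of the cycle: a new form Y
# spreads at the expense of the older form X once using Y yields a small
# communicative payoff advantage. The payoff matrix and rates are invented
# for illustration; this is not Deo's actual evolutionary game model.
import numpy as np

def replicator_step(x, payoff, dt=0.1):
    """x: frequencies of the forms; payoff: square payoff matrix."""
    fitness = payoff @ x          # expected payoff of each form
    avg = x @ fitness             # population-average payoff
    return x + dt * x * (fitness - avg)

if __name__ == "__main__":
    # Row/column order: [X (older imperfective marker), Y (new progressive marker)].
    # Y earns a slightly higher payoff against either form (hypothetical numbers).
    payoff = np.array([[1.0, 1.0],
                       [1.2, 1.2]])
    x = np.array([0.99, 0.01])    # Y starts as a rare innovation
    for step in range(500):
        x = replicator_step(x, payoff)
    print("frequencies after 500 steps (X, Y):", np.round(x, 3))
```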