Karthik Narasimhan of MIT will present “Language Understanding For Text-based Games Using Deep Reinforcement Learning” in the Machine Learning and Friends Lunch at 1 p.m. in CS150 (arrive at 12:45 for pizza).
In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text, and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies, using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines that use bag-of-words and bag-of-bigrams state representations. Our algorithm outperforms the baselines on both worlds, demonstrating the importance of learning expressive representations.
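To make the core idea concrete, here is a minimal sketch of a reinforcement-learning agent for a text game: a text description is encoded into a fixed-size vector (here a simple mean of learned word embeddings, not the talk's actual architecture), Q-values over actions are computed from that vector, and both are adjusted using only the game reward as feedback. The vocabulary, actions, and reward scheme below are invented toy examples for illustration.

```python
import numpy as np

# Hypothetical toy setup; the real model learns far richer representations.
rng = np.random.default_rng(0)

VOCAB = {"you": 0, "see": 1, "a": 2, "door": 3, "key": 4, "wall": 5}
ACTIONS = ["open door", "take key"]
EMBED_DIM = 8

# Learned parameters: word embeddings and a linear Q-value head.
embeddings = rng.normal(scale=0.1, size=(len(VOCAB), EMBED_DIM))
q_weights = rng.normal(scale=0.1, size=(EMBED_DIM, len(ACTIONS)))

def encode(text):
    """Map a text description to a state vector (mean of word embeddings)."""
    ids = [VOCAB[w] for w in text.lower().split() if w in VOCAB]
    return embeddings[ids].mean(axis=0)

def q_values(state_vec):
    """Estimate the value of each action in the given state."""
    return state_vec @ q_weights

def td_update(text, action_idx, reward, lr=0.5):
    """One Q-learning step on a terminal transition: push Q(s, a) toward reward."""
    s = encode(text)
    td_error = reward - (s @ q_weights)[action_idx]
    q_weights[:, action_idx] += lr * td_error * s  # gradient step on squared TD error

# Toy interaction loop: reward taking the key when the text mentions one.
for _ in range(200):
    td_update("you see a key", ACTIONS.index("take key"), reward=1.0)
    td_update("you see a key", ACTIONS.index("open door"), reward=0.0)

best = ACTIONS[int(np.argmax(q_values(encode("you see a key"))))]
print(best)  # prints "take key"
```

The only supervision here is the scalar reward, yet the word embeddings and action values end up encoding which textual observations call for which actions; the work being presented scales this idea up with deep networks trained on full game worlds.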
I’m a fourth-year PhD student at CSAIL, working with Prof. Regina Barzilay. I work primarily in the area of Computational Semantics, specifically language understanding, grounding, and machine comprehension. My goal is to develop richer representations for meaning that can capture its variable nature and context sensitivity, while keeping learning tractable. Previously, I have worked on computational morphology, applied to keyword spotting and unsupervised analysis using Morphological Chains. I have a B.Tech in Computer Science from IIT Madras (2012) and an SM in Computer Science from MIT (2014).