Cho in MLFL Thurs. 10/20 at 10:00

Kyunghyun Cho (NYU) will present “Deep Learning, Where are you going?” at the Machine Learning and Friends Lunch on Thursday, Oct. 20, at 10:00 a.m. in CS 150. Abstract and bio follow.

Abstract:

There are three axes along which advances in machine learning and deep learning happen: (1) network architectures, (2) learning algorithms, and (3) spatio-temporal abstraction. In this talk, I will describe a set of research topics I have pursued along each of these axes. For network architectures, I will describe how recurrent neural networks, which were largely forgotten during the 90s and early 2000s, have evolved over time and have finally become a de facto standard in machine translation. I will then discuss various learning paradigms, how they relate to each other, and how they can be combined to build a strong learning system. Along this line, I will briefly discuss my latest research on designing a query-efficient imitation learning algorithm for autonomous driving. Lastly, I will present my view on what it means to be a higher-level learning system. Under this view, every end-to-end trainable neural network, regardless of how it was trained, serves as a module and interacts with other modules to solve a higher-level task. I will describe my latest research on a trainable decoding algorithm as a first step toward building such a framework.

Bio:

Kyunghyun Cho is an assistant professor of computer science and data science at New York University. He was a postdoctoral fellow at the University of Montreal until the summer of 2015, and received his PhD and MSc degrees from Aalto University in early 2014. He tries his best to find a balance among machine learning, natural language processing, and life, but often fails to do so.