Neubig on “What Can Neural Networks Teach us about Language?” Thurs. Feb 1 at 11:45

Machine Learning and Friends

who: Graham Neubig (CMU)
when: 11:45a – 1:15p, Feb 1st
where: Computer Science Building Rm 150
food: Antonios Pizza
generous sponsor: ORACLE LABS

What Can Neural Networks Teach us about Language?

Abstract:

Neural networks have led to large improvements in the accuracy of natural language processing systems. These have mainly been based on supervised learning: we create linguistic annotations for a large amount of training data, and train networks to faithfully reproduce these annotations. But what if we didn't explicitly tell the neural net about language, but instead *asked it what it thought* about language
without injecting our prior biases? Would the neural network be able
to learn from large amounts of data and confirm or discredit our
existing linguistic hypotheses? Would we be able to learn linguistic
information from lower-resourced languages where this information has not been annotated? In this talk, I will discuss methods for
unsupervised learning of linguistic information using neural networks
that attempt to answer these questions. I will also briefly describe
automatic mini-batching, a computational method (implemented in the DyNet neural network toolkit) that greatly speeds up large-scale
experiments with complicated network structures needed for this type
of unsupervised learning.
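To give a flavor of the idea behind automatic mini-batching, here is a minimal, purely illustrative Python sketch (not DyNet's actual implementation): operations are queued lazily across independent examples, and operations of the same type are then grouped so they can be executed together as one batch. The `LazyGraph` class and its methods are hypothetical names invented for this sketch.

```python
# Illustrative sketch of automatic mini-batching: queue ops lazily,
# then group same-typed ops across examples into one batched pass.
# (A hypothetical toy, not DyNet's real API or implementation.)
from collections import defaultdict

class LazyGraph:
    def __init__(self):
        self.pending = []   # queued (op_name, args, slot) tuples
        self.results = {}   # slot -> computed value

    def matmul(self, W, x):
        # Don't compute yet; just record the operation and hand back
        # a "slot" that will hold the result after run().
        slot = len(self.pending) + len(self.results)
        self.pending.append(("matmul", (W, x), slot))
        return slot

    def run(self):
        # Group queued ops by type; each group is executed together.
        groups = defaultdict(list)
        for op, args, slot in self.pending:
            groups[op].append((args, slot))
        for op, items in groups.items():
            if op == "matmul":
                # A real toolkit would issue a single batched tensor
                # operation here instead of this Python loop.
                for (W, x), slot in items:
                    self.results[slot] = [
                        sum(w * v for w, v in zip(row, x)) for row in W
                    ]
        self.pending.clear()
        return self.results

g = LazyGraph()
a = g.matmul([[1, 0], [0, 1]], [3, 4])  # independent example 1
b = g.matmul([[2, 0], [0, 2]], [1, 1])  # independent example 2
out = g.run()  # both matmuls executed in one batched group
```

The point is that the user writes per-example code, while the batching across examples happens automatically at execution time; this is what makes complicated, example-dependent network structures practical at scale.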

Bio:

Graham Neubig is an assistant professor at the Language Technologies Institute of Carnegie Mellon University. His work focuses on natural language processing, specifically multi-lingual models that work in many different languages, and natural language interfaces that allow humans to communicate with computers in their own language. Much of this work relies on machine learning to create these systems from data, and he is also active in developing methods and algorithms for machine learning over natural language data. He publishes regularly in the top venues in natural language processing, machine learning, and speech, and his work occasionally wins awards such as best papers at EMNLP, EACL, and WNMT. He is also active in developing open-source software and is the main developer of the DyNet neural network toolkit.
