Tom Griffiths, April 5, 2013

Tom Griffiths, Associate Professor at the University of California, Berkeley, will give a talk co-sponsored with the Computational Social Science Initiative from 12:30–2:00 PM, with lunch provided, in the Campus Center, Room 917.

Abstract. People are remarkably good at acquiring complex knowledge from limited data, as is required in learning causal relationships, categories, or aspects of language. Successfully solving inductive problems of this kind requires having good “inductive biases” – constraints that guide inductive inference. Viewed abstractly, understanding human learning requires identifying these inductive biases and exploring their origins. I will argue that probabilistic models of cognition provide a framework that can facilitate this project, giving a transparent characterization of the inductive biases of ideal learners. I will outline how probabilistic models are traditionally used to solve this problem, and then present a new approach that uses Markov chain Monte Carlo algorithms as the basis for an experimental method that magnifies the effects of inductive biases. This approach provides some surprising insights into how information changes through cultural transmission (relevant to understanding processes like language evolution) and shows how ideas from computer science and statistics can lead to new empirical paradigms for cognitive science research.
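As a rough illustration (not from the talk itself), the sketch below simulates a chain of idealized Bayesian learners in which each learner sees data produced by the previous learner and samples a hypothesis from the resulting posterior. Under these assumptions the chain of hypotheses is a Markov chain whose stationary distribution is the learners' shared prior, which is one way such a procedure can "magnify" inductive biases. The toy setup, names, and numbers (HYPOTHESES, PRIOR, generate_data) are hypothetical and chosen only for illustration.

```python
# Minimal sketch: iterated learning with Bayesian learners as a Markov chain.
# Assumption: each learner samples a hypothesis from its posterior; the chain's
# stationary distribution over hypotheses is then the shared prior.
import random
from collections import Counter

# Hypothetical toy problem: two hypotheses about a coin's bias.
HYPOTHESES = {"fair": 0.5, "biased": 0.8}   # P(heads | hypothesis)
PRIOR = {"fair": 0.7, "biased": 0.3}        # the learners' inductive bias

def generate_data(h, n=5):
    """A learner holding hypothesis h produces n coin flips for the next learner."""
    p = HYPOTHESES[h]
    return [random.random() < p for _ in range(n)]

def posterior_sample(data):
    """Sample a hypothesis in proportion to likelihood * prior given the data."""
    weights = {}
    for h, p in HYPOTHESES.items():
        like = 1.0
        for heads in data:
            like *= p if heads else (1 - p)
        weights[h] = like * PRIOR[h]
    r = random.random() * sum(weights.values())
    for h, w in weights.items():
        r -= w
        if r <= 0:
            return h
    return h  # guard against floating-point underflow

def iterated_learning(generations=20000):
    """Run a chain of learners and count how often each hypothesis is held."""
    h = "biased"  # arbitrary starting hypothesis
    counts = Counter()
    for _ in range(generations):
        data = generate_data(h)
        h = posterior_sample(data)
        counts[h] += 1
    return counts

if __name__ == "__main__":
    counts = iterated_learning()
    total = sum(counts.values())
    for h in HYPOTHESES:
        print(f"{h}: {counts[h] / total:.3f}  (prior = {PRIOR[h]})")
```

Running the sketch, the long-run frequencies of the two hypotheses should approach the prior probabilities (about 0.7 and 0.3), regardless of the starting hypothesis, which is the sense in which the chain reveals the learners' inductive biases rather than the data they were initially given.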