No Free Lunch in Linguistics or Machine Learning
Jonathan Rawski, Jeffrey Nicholas Heinz
direct link: http://ling.auf.net/lingbuzz/004251
Pater’s target article proposes that neural networks will provide theories of learning that generative grammar lacks. We argue that his enthusiasm is premature since the biases of neural networks are largely unknown, and he disregards decades of work on machine learning and learnability. Learning biases form a two-way street: all learners have biases, and those biases constrain the space of learnable grammars in mathematically measurable ways. Analytical methods from the related fields of computational learning theory and grammatical inference allow one to study language learning, neural networks, and linguistics at an appropriate level of abstraction. The only way to satisfy our hunger and to make progress on the science of language learning is to confront these core issues directly.
Format: pdf
Keywords: learnability, computational learning theory, neural networks, poverty of the stimulus, language acquisition, syntax, phonology, semantics, morphology