The University of Massachusetts Amherst

1957: The birth of cognitive science

1957 is the publication date of Chomsky’s Syntactic Structures and of Rosenblatt’s “The perceptron: a perceiving and recognizing automaton”. This, I’d like to claim, is a good year to pick as the birthdate of modern cognitive science. These two pieces of work were watershed moments in the development of the two approaches to the modeling of cognition whose conflicts and integration are the theme of the book I’d like to write. Since 2017 is only a year away, a reasonable pre-book goal for me is to write an article, “1957: The birth of cognitive science”, in celebration of their 60th anniversary. (Feb. 23, 2016)
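Since the perceptron is the technical centerpiece of that anniversary, here is a minimal sketch of Rosenblatt’s learning rule in modern vector notation; the toy data, learning rate, and function name are illustrative, not taken from the 1957 report:

```python
import numpy as np

def train_perceptron(X, y, epochs=10, lr=1.0):
    """Learn w and b so that sign(w.x + b) matches labels y in {-1, +1}.
    A sketch of the classic perceptron update, not Rosenblatt's original code."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update only on a misclassified example: nudge the
            # decision boundary toward that example.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy linearly separable data: logical OR, labels in {-1, +1}.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, 1, 1, 1])
w, b = train_perceptron(X, y)
print(w, b)  # weights and bias of a separating hyperplane for OR
```

The rule converges on any linearly separable data like this; its failure on non-separable cases such as XOR is, of course, part of the later history of the neural network tradition.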

The broader context: the “Cognitive Revolution” of the 1950s, as described by George Miller. He picks Sept. 11, 1956 (p. 142) as the birthdate of Cognitive Science – who am I to argue? Chomsky presented the Syntactic Structures material that day, and IBM presented a Hebbian neural net simulation. Miller also gives a great firsthand account of the Sloan era. Here’s his definition of Cognitive Science (p. 144) (June 5, 2017):

[A] unified science that would discover the representational and computational capacities of the human mind and their structural and functional realization in the human brain.

Bara (1995) gives 1977 as the birthdate of cognitive science, citing the first issue of the journal Cognitive Science and the completion of the Sloan Foundation report, the basis for seven years and $20M of funding.

Chomsky mentions neural nets in the Atlantic piece “Where Artificial Intelligence Went Wrong”, and talks about Gallistel’s work. (There seems, however, to have been no contact between Chomsky and Rosenblatt, nor any engagement with each other’s work.)

Update, May 2017: I am now writing “Cognitive Science at Sixty” as a Perspectives piece for Language, which I plan to submit this summer. Here’s the abstract:

Chomsky’s (1957) Syntactic Structures and Rosenblatt’s (1957) “The perceptron: a perceiving and recognizing automaton” are watershed publications in the development of two very different approaches to cognitive science: generative linguistics and neural network modeling. These two traditions clashed 30 years later in the “Past Tense Debate” (Rumelhart and McClelland 1986, Pinker and Prince 1988), which “created a degree of controversy extraordinary even in the adversarial culture of modern science” (Seidenberg and Plaut 2014). Now at sixty, the emergence of “Deep Learning” in Artificial Intelligence (Hinton et al. 2015) has yielded new successes in neural network modeling of language, and renewed interest in its merits relative to approaches using explicit symbolic representations, including generative syntax.

This paper elaborates this history, focusing on the ways in which generative grammars and neural nets provide competing, complementary, or orthogonal models of the human mind, as well as on their relationship to other symbolic and other probabilistic models. It argues that standard divisions, such as generative versus probabilistic theories, and standard equations, such as that between neural nets and associative models, create false dichotomies and impede both integration and deeper theory comparison.