Author: Joe Pater

Society for Computation in Linguistics

The Society for Computation in Linguistics has been launched with a call for papers at its inaugural meeting in January 2018. The deadline is August 1. Join the mailing list to stay informed on this and future events.

Posted in Uncategorized

Did Frank Rosenblatt invent deep learning in 1962?

Deep learning (LeCun et al. 2015, Nature) involves training neural networks with hidden layers, sometimes many levels deep. Frank Rosenblatt (1928-1971) is widely acknowledged as a pioneer in the training of neural networks, especially for his development of the

Posted in Learning

Conference on Computational Approaches to Linguistics?

A group of us has recently been discussing the possibility of a new conference on computational approaches to linguistics (the group: Rajesh Bhatt, Brian Dillon, Gaja Jarosz, Giorgio Magri, Claire Moore-Cantwell, Joe Pater, Brian Smith, and Kristine Yu). We’ll provide some of the

Posted in Uncategorized

What’s Harmony?

From an e-mail from Paul Smolensky, March 28, 2015. Even though he wasn’t doing phonology in the mid-1980s when he coined the term “Harmony Theory”, Paul had apparently taken a course on phonology with Jorge Hankamer and found vowel harmony

Posted in Phonological theory

Worst abstract review ever

“No data, yet combines two or more of the worst phonological theories, resulting in an account that is far more complicated and assumption-laden than the simple if typologically odd pseudo-example given.” I received this review on an abstract I submitted

Posted in Uncategorized

Moreton, Pater and Pertsova in Cognitive Science

The nearly final version of our Phonological Concept Learning paper, to appear in Cognitive Science, is now available here. The abstract is below, and we very much welcome further discussion, either by e-mail to the authors (addresses on the first

Posted in Learning

Calamaro and Jarosz on Synthetic Learner blog

On the Synthetic Learner blog, Emmanuel Dupoux recently posted some comments on a paper co-authored by Gaja Jarosz and Shira Calamaro that recently appeared in Cognitive Science. Gaja has also written a reply. While you are there, take a peek

Posted in Learning

Wellformedness = probability?

There are some old arguments against probabilistic models as models of language, but these do not seem to have much force anymore, especially because we now have models that can compute probabilities over the same representations that we use in

Posted in Learning, Phonological theory

Representations in OT

I’ve recently had some useful discussion with people about the nature of representations in OT, and how they did or did not (or should or should not) change from a theory with inviolable constraints (= principles and parameters theory). I’d

Posted in Phonological theory

Data in generative phonology

I’d like to raise as a discussion topic the question of what the data are that we are trying to explain in generative phonology. In my view, the lack of clarity about this issue is a bigger foundational issue in

Posted in Phonological theory