Author Archives: Joe Pater

Kristine Yu presents invited talk at SIGMORPHON

Kristine Yu presented an invited talk on “Building Phonological Trees” at the Eighteenth SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology on August 5th, 2021. The abstract is below.

Computational perspectives from string grammars have richly informed our understanding of phonological patterns in natural language in the past decade. However, a prevailing theoretical assumption of phonologists since the 1980s has been that phonological patterns and processes are computed on trees built with prosodic constituents such as syllables, feet, and prosodic words. This talk explores how perspectives from tree grammars can provide insight into our understanding of prosodic representations, including different ways in which tones can enter the grammar.

Franklin Institute Symposium for Barbara Partee

The Franklin Institute Symposium “The Past, Present and Future of Formal Semantics”, in honor of Barbara Partee being awarded the 2021 Benjamin Franklin Medal in Computer and Cognitive Science, was held on April 19th. Videos of the entire symposium are now available, including talks by Barbara, Gennaro Chierchia and Pauline Jacobson in the Part 1 video, and Florian Schwarz, Seth Cable and Christopher Potts in Part 2. (Thank you to Charles Yang for sharing the videos.) Abstracts for the talks are available here.

Part 1
Part 2

NSF research grant awarded to Yu, Green, Armstrong-Abrami and O’Connor

Kristine Yu (PI) and Lisa Green, Meghan Armstrong-Abrami and Brendan O’Connor (co-PIs) have been awarded a 3-year research grant of $434,027 by the NSF. The grant, entitled Understanding variation in African American Language: Corpus and prosodic fieldwork perspectives, “will pioneer inclusive tools and methods capable of reaching a wide range of AAL speakers and communities, combining community-based prosodic fieldwork and large-scale, web-based corpus analysis.”

Congratulations Kristine, Lisa, Meghan and Brendan!

SCiL is meeting this week!

The Society for Computation in Linguistics is meeting this week. The conference got started today with a plenary talk by Naomi Feldman; the recording is now available to registered participants. The schedule is here:
To register, go here (free for students, $20 for others). Once registered, you can get the Zoom and GatherTown links here (you can also find details there on how the conference is being run).

The SCiL proceedings are now available here:

GLSA publications now available in ScholarWorks!

The Graduate Linguistics Students Association is now making many of its older publications available through the UMass Amherst library’s open access ScholarWorks platform. This is a great resource – NELS proceedings up to 2002, University of Massachusetts Occasional Papers up to 2007, and Semantics of Under-Represented Languages in the Americas up to 2003. Huge thanks to Andrew Lamont and Tom Maxfield for their work on this project, as well as to Erin Jerome of the UMass library.

Newer publications are available for sale on the GLSA website. One highlight of the open access release is the appearance of UMOP 37: Semantics and processing, which had remained unpublished until now.

Nelson, Pater and Prickett UCLA colloquium

Max Nelson, Joe Pater and Brandon Prickett presented “Representations in neural network learning of phonology” in the UCLA colloquium series on Friday, October 9th. The abstract is below, and the slides can be found here.

Abstract. The question of what representations are needed for learning of phonological generalizations in neural networks (NNs) was a central issue in the applications of NNs to learning of English past tense morphophonology in Rumelhart and McClelland (1986) and in following work of that era. It can be addressed anew given subsequent developments in NN technology. In this talk we will present computational experiments bearing on three specific questions:

1. Are variables needed for phonological assimilation and dissimilation?

2. Are variables needed to model learning experiments involving reduplication (e.g. Marcus et al. 1999)?

3. What kind of architecture is necessary for the full range of natural language reduplication?