Max Nelson, Joe Pater and Brandon Prickett presented “Representations in neural network learning of phonology” in the UCLA colloquium series on Friday, October 9th. The abstract is below, and the slides can be found here.
Abstract. The question of what representations are needed for neural network (NN) learning of phonological generalizations was a central issue in the application of NNs to the learning of English past-tense morphophonology in Rumelhart and McClelland (1986) and in related work of that era. It can be addressed anew given subsequent developments in NN technology. In this talk we will present computational experiments bearing on three specific questions:
Are variables needed for phonological assimilation and dissimilation?
Are variables needed to model learning experiments involving reduplication (e.g. Marcus et al. 1999)?
What kind of architecture is necessary for the full range of natural language reduplication?
Brandon Prickett successfully defended his PhD dissertation “Learning Phonology with Sequence-to-Sequence Neural Networks” on Wednesday October 7th. Gaja Jarosz and Joe Pater were the co-chairs, and John Kingston and Mohit Iyyer (CICS) were the other committee members. Congratulations Brandon!
Huge congratulations are due to both Sakshi and Ivy, who arrived at UMass in the same PhD cohort. So it’s a special joy to get to make this double announcement. We’re proud of you both: best of luck in the next phase of your careers!
Adults between 45 and 60 years of age are needed for a paid study being conducted by researchers in the Language, Intersensory Perception, and Speech (LIPS) lab in the Department of Psychological and Brain Sciences.
The Audiovisual Synchrony (AVSYNC) study tests how seeing a speaker talk can help aging listeners to continue to effectively understand spoken language.
Participants must be native speakers of American English. They will be paid $10 per hour for participating. The study will require a 2-hour visit and a 3-hour visit to the lab on campus.
The first lab visit includes various computerized tasks that assess hearing, vision, and judgments about what a speaker says. The second visit involves recording the participants’ brain waves with an EEG cap on their head while they watch and listen to a speaker.
Anne-Michelle Tessier, University of British Columbia, will present “Learning morpho-phonology with Gradient Symbolic Representations: Stages and errors in the acquisition of French liaison” at 2:30pm on Tuesday, February 18, 2020, in N458. Abstract
Things will turn decidedly more festive at 3:15, when we will celebrate Anne-Michelle’s book “Phonological Acquisition: Child Language and Constraint-Based Grammar”. Light refreshments will be served, to be followed by dinner at Michael Becker’s house.
This study uses an artificial language learning experiment and computational modelling to test Kiparsky’s claims about Maximal Utilisation and Transparency biases in phonological acquisition. A Maximal Utilisation bias would prefer phonological patterns in which all rules are maximally utilised, and a Transparency bias would prefer patterns that are not opaque. Results from the experiment suggest that these biases affect the learnability of specific parts of a language, with Maximal Utilisation affecting the acquisition of individual rules, and Transparency affecting the acquisition of rule orderings. Two models were used to simulate the experiment: an expectation-driven Harmonic Serialism learner and a sequence-to-sequence neural network. The results from these simulations show that both models’ learning is affected by these biases, suggesting that the biases emerge from the learning process rather than from any explicit structure built into the models.
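For readers unfamiliar with the transparent/opaque contrast the abstract invokes, here is a minimal Python sketch. The two rules and the form `tai` are made up for illustration and are not drawn from the study; the point is only that the same pair of rules yields a transparent or an opaque surface form depending on ordering:

```python
import re

# Two hypothetical rules (illustrative only):
#   RAISING:  a -> e when immediately followed by i
#   DELETION: word-final i is deleted
def raising(form):
    return re.sub(r'a(?=i)', 'e', form)

def deletion(form):
    return re.sub(r'i$', '', form)

def derive(underlying, rules):
    """Apply each rule once, in order, to the underlying form."""
    form = underlying
    for rule in rules:
        form = rule(form)
    return form

# Transparent order: deletion removes the trigger first, so raising
# simply fails to apply, and the surface form is self-explanatory.
print(derive('tai', [deletion, raising]))  # 'tai' -> 'ta' -> 'ta'

# Opaque (counterbleeding) order: raising applies, then its trigger
# is deleted; the surface 'te' shows raising without its conditioning
# context, which is the kind of pattern a Transparency bias disfavors.
print(derive('tai', [raising, deletion]))  # 'tai' -> 'tei' -> 'te'
```

In an experiment like the one described, learners exposed to pairs such as `tai`→`ta` versus `tai`→`te` would be learning the transparent versus the opaque ordering of the same two rules.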