Second-year graduate student Alessa Farinella presented joint work with faculty members Kristine Yu and Lisa Green and collaborator Alejna Brugos (Boston University) at the annual sociolinguistics conference New Ways of Analyzing Variation 49 on October 21, 2021, hosted by UT Austin. Alessa pre-recorded the talk, entitled “Biases from MAE-ToBI intonational transcription conventions in the intonational analysis of African American English,” which you can watch on the departmental YouTube channel or from the embedded link directly below.
This material was based upon work supported by the National Science Foundation under grant BCS-2042939. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Huge congratulations are due to both Sakshi and Ivy, who arrived at UMass in the same PhD cohort, so it’s a special joy to get to make this double announcement. We’re proud of you both: best of luck in the next phase of your careers!
Adults between 45 and 60 years of age are needed for a paid study being conducted by researchers in the Language, Intersensory Perception, and Speech (LIPS) Lab in the Department of Psychological and Brain Sciences.
The Audiovisual Synchrony (AVSYNC) study tests how seeing a speaker talk can help aging listeners to continue to effectively understand spoken language.
Participants must be native speakers of American English. They will be paid $10 per hour for participating. The study will require a 2-hour visit and a 3-hour visit to the lab on campus.
The first lab visit includes various computerized tasks that assess hearing, vision, and judgments about what a speaker says. The second visit involves recording participants’ brain waves with an EEG cap on their heads while they watch and listen to a speaker.
Faculty member Kristine Yu will be on sabbatical and conducting research supported by a grant from Taiwan’s Ministry of Science and Technology during the Spring 2020 semester. She’ll be hosted by and collaborating with faculty at the Department of Foreign Languages and Literatures at National Chiao Tung University in Hsinchu, including Sang-Im Lee-Kim (Lecturer at UMass, 2014-2015) and Ho-hsien Pan.
Laura McPherson (Dartmouth College) will present “Decoding surrogate speech: Phonetic and phonemic levels in musical surrogate languages” in the Linguistics colloquium series at 3:30 on Friday, November 1. An abstract follows. All are welcome!
Many cultures around the world have traditions of musical surrogate speech, i.e., communication using a musical instrument to encode linguistic structure. Stern (1957) identifies two major types of systems, so-called “abridging” systems that represent elements of phonemic structure and “lexical ideogram” systems that represent concepts directly. This talk focuses on the former. Drawing on case studies from the literature and original fieldwork, I demonstrate that decoding an abridging system means determining not only what contrasts are encoded but also at what level. Remarkably, in many systems, phonemic structure is not encoded uniformly. In the West African Sambla balafon system, for instance, tone is encoded at a morphophonemic level, eschewing postlexical processes common in the spoken language, but rhythmic encoding shows evidence of surface phonetic gradience. A similar situation holds for the Amazonian Bora drumming system, with phonemic encoding of the two tone levels but a tight correlation between interstrike duration and spoken V-to-V intervals. Languages with surrogate systems on more than one instrument offer an opportunity to determine which factors influence how a contrast will be encoded. Yorùbá, for instance, can be encoded on at least two types of drums, the tension drum (“talking drum”) dùndún and a double-headed barrel drum ensemble known as bàtá. The dùndún encodes tone at a surface level, while the bàtá encodes less phonetic detail for tone but encodes more information about vowel quality. In this talk, I show how musical surrogate languages reflect the practitioner’s nuanced understanding of their language’s sound system and offer a preliminary account of how linguistic, instrumental, and cultural constraints shape surrogate encoding.
This week (October 21-25) we will have a special visitor in the department, a Phonology/Phonetics/Psycholinguistics Guru, Matt Goldrick! Matt will be visiting the department all week. He will be giving two tutorials and a general talk (see below for schedule). Everyone in the department and beyond is welcome to attend all of these events. The schedule is rather complicated so please read it carefully – all events are scheduled to take place in N400 on Monday, Tuesday, and Wednesday of next week. Both tutorials are about Gradient Symbolic Representations and involve some hands-on software applications – one is focused on Phonology and the other on Processing. The talk is intended to be a general talk for the whole department. Matt is also available for individual meetings while he is here – please contact him directly about that.
Talk – “The acoustic effects of blended representations: co-production”
Gradient Harmonic Grammar (gradient underlying representations and learning models for them)
Instructions: Bring a laptop that can access the internet; you’ll be using Google Sheets to aid in calculations of harmony for candidate sets.
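To give a sense of the kind of calculation the spreadsheet will be used for: in Harmonic Grammar, each candidate’s harmony is the negative weighted sum of its constraint violations, and the candidate with the highest harmony wins. A minimal Python sketch of that computation (the constraint weights and candidates below are hypothetical, for illustration only, not taken from the tutorial materials):

```python
# Illustrative sketch of a Harmonic Grammar evaluation: harmony is the
# negative weighted sum of a candidate's constraint violation counts.

def harmony(violations, weights):
    """Return -sum(weight * violation count) for one candidate."""
    return -sum(w * v for w, v in zip(weights, violations))

# Hypothetical tableau: two constraints with weights 3.0 and 2.0,
# and two candidates with their violation counts per constraint.
weights = [3.0, 2.0]
candidates = {
    "ka":  [0, 1],   # one violation of the second (weight 2.0) constraint
    "kap": [1, 0],   # one violation of the first (weight 3.0) constraint
}

scores = {cand: harmony(v, weights) for cand, v in candidates.items()}
winner = max(scores, key=scores.get)  # candidate with the highest harmony
```

Here “ka” wins with harmony -2.0 over “kap” at -3.0; the tutorial’s Google Sheets exercise extends this idea to gradient (partially activated) input representations.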
Gradient Symbolic Processing (connectionist implementations of GSR and software for generation, learning, and parsing of CFGs)
Instructions: Bring a laptop with Jupyter installed (https://www.anaconda.com/distribution/). You’ll need a Python 3 environment with numpy and matplotlib installed (pickle and re are part of Python’s standard library, so they require no separate installation).