Rysling in Cognitive Brown Bag Weds. Nov. 30

Amanda Rysling (UMass Linguistics) will be presenting in this week's cognitive brown bag – all are welcome!

Title: Preferential early attribution in incremental segmental parsing

Time: 12:00pm to 1:15pm, Wednesday, Nov. 30. Location: Tobin 521B.

Abstract: Recognizing the speech we hear as the sounds of the languages we speak requires solving a parsing problem: mapping from the acoustic input we receive to the phonemes and words we recognize as our language. The literature on segmental perception has focused on cases in which we as listeners seem to successfully undo the blending of speech sounds that naturally occurs in production. The field has identified many cases in which listeners seem to completely attribute the acoustic products of articulation to the sounds whose articulation created them, and so seem to solve the parsing problem efficiently and with few errors. Only a handful of studies have examined cases in which listeners seem to systematically "mis-parse," attributing the acoustic products of one sound's articulation to another sound, and failing to disentangle the blend of their production. In this talk, I review the results of six phoneme categorization studies demonstrating that such failure to completely undo acoustic blending arises when listeners must judge one sound in a string relative to the sound that follows it, and the acoustic transitions between the two sounds are gradual. I then report the results of studies demonstrating that listeners persist in attributing the acoustic products of a second sound's articulation to a first sound even when the signal conveys early explicit evidence about the identity of that second sound – evidence that could have been leveraged to begin disentangling the first sound from the second before the second was fully realized. I go on to argue for a shift in our perspective on segmental parsing. Attributing the product of a later sound's articulation to an earlier sound seems inefficient or undesirable when we take the goal of segmental parsing to be the complete attribution of acoustic products to exactly the sounds whose articulations gave rise to them. But when we consider that listeners necessarily perceive the evidence for events in the world at a delay from when those events occurred, it is adaptive to prefer attributing later-incoming acoustic signal to earlier speech sounds.