Current grant: Representing and learning stress: Grammatical constraints and neural networks

Joe Pater (PI) and Gaja Jarosz (co-PI) are leading an NSF research grant on “Representing and learning stress: Grammatical constraints and neural networks”. This three-year grant, which began in April 2022, studies the learnability of a wide range of word stress patterns using two general approaches. In the first, general-purpose learning algorithms are employed with representational hypotheses developed in linguistics. The goal is to develop grammar+learning systems that can cope with a broader range of typological data than current models, and that can also handle more of the details of individual languages, learning from more realistic data. In the second approach, neural networks, which lack prespecified linguistic structure, are being tested on their ability to learn these same patterns and to generalize appropriately.

The public summary of this grant is available from the NSF. Grant meetings are being held on Zoom: contact Joe Pater if you wish to participate.