Max Nelson, Joe Pater, and Brandon Prickett presented “Representations in neural network learning of phonology” in the UCLA colloquium series on Friday, October 9th. The abstract is below, and the slides can be found here.
Abstract. The question of what representations are needed for neural network (NN) learning of phonological generalizations was a central issue in the application of NNs to the learning of English past-tense morphophonology in Rumelhart and McClelland (1986) and in subsequent work of that era. It can be addressed anew given more recent developments in NN technology. In this talk we will present computational experiments bearing on three specific questions:
Are variables needed for phonological assimilation and dissimilation?
Are variables needed to model learning experiments involving reduplication (e.g. Marcus et al. 1999)?
What kind of architecture is necessary to model the full range of natural language reduplication?