Fusion is great, and interpretable fusion could be exciting for theory generation
direct link: http://ling.auf.net/lingbuzz/004142
Response to “Generative linguistics and neural networks at 60: foundation, friction, and fusion” by Joe Pater.

From my perspective, Pater’s (2018) target article does a great service to both researchers who work in generative linguistics and researchers who utilize neural networks – and especially to researchers who might find themselves wanting to do both by harnessing the insights of each tradition. The article does three very useful things. First, it provides primers with historical overviews of each tradition. Second, it highlights what’s been achieved by the fusion of the generative linguistics theoretical framework and the neural networks modeling technique. Third, it notes the increasing interpretability of neural network models, which I think suggests a very exciting path forward for generating linguistic theories of representation.
(please use the direct link above when citing this article)
Published in: (submitted to Perspectives subsection of Language)
Keywords: generative linguistics, neural networks, probabilistic learning, language acquisition, theory generation, bayesian inference, learnability, syntax, phonology, semantics, morphology