Data in generative phonology

I’d like to raise as a discussion topic the question of what the data are that we are trying to explain in generative phonology. In my view, the lack of clarity about this question is a bigger foundational problem for our field than the lack of clarity about the goals we are pursuing, one of the discussion points I raised in my mfm fringe workshop presentation. It’s also a foundational issue not only for what I called Classical Universal Phonology in that presentation, but for just about any approach to phonology one can imagine. I should be clear that I don’t think that there needs to be a uniform set of data or goals. Rather, I think we’d be making quicker progress towards our broader shared goals of understanding the formal structure of phonologies, and explaining learning and typology, if we made our commitments in these respects more explicit in our work.

To get the discussion going, let me repeat the worry I expressed in the mfm fringe discussion, and mention some other data-related points that came up. When Marc van Oostendorp pressed me on my assertion that data issues were foundational issues, I brought up the lack of a definition of productivity as an example. It’s unfortunately too common that when an analysis or theory fails to capture some data pattern, the claim is made that the pattern is unproductive (e.g. that there are exceptions, that there are no alternations or that they are limited in some way, etc.), without applying the same scrutiny and criteria to the data that the theory is capturing. Probably even more common is that exceptions or variation are abstracted from, again without any clear criteria on when that can be done. My own belief is that productivity is gradient (see Hayes’ textbook ch. 9), and that we need theories that capture that gradience. But whether we are working with theories that are categorical or gradient in this respect, we need to define productivity if we are going to use it as a criterion for what data we need to explain.

In the question period at the fringe, Michael Becker pressed his interlocutors to provide evidence that the generalizations they saw in existing alternations were in fact encoded as generalizations in speakers’ minds. Becker’s approach, like that of a lot of other current work, is to test productivity experimentally. I’m on board with that program, but I’m also on board with good old analysis of corpus data (where ‘corpus’ includes the data from grammars and dictionaries that phonologists typically study), and I’m starting to get worried about what to do when the two sets of data point in very different directions. For example, the ‘stress the penult if heavy’ part of the Latin stress rule is a nearly exception-free pattern in unsuffixed nouns in English. But as Claire Moore-Cantwell (p.c.) reports, it seems that it’s not particularly productive in nonce word productions/judgments. Claire has some good ideas about how to relate the corpus data to the judgments via learning, but it’s clear that the grammar is going to look very different from those posited for English from Chomsky and Halle (1968) onwards.
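For readers less familiar with the rule, here is a minimal sketch of the classical Latin stress rule over syllable weights. This is my own toy encoding, purely illustrative (the representation of words as weight strings, and the function itself, are my assumptions, not anything from the discussion above):

```python
def latin_stress(weights):
    """Return the index of the stressed syllable, given a list of
    syllable weights ('H' = heavy, 'L' = light).

    Classical Latin stress rule: stress the penult if it is heavy;
    otherwise stress the antepenult. Monosyllables and disyllables
    receive initial stress.
    """
    n = len(weights)
    if n <= 2:
        return 0          # initial stress in short words
    if weights[-2] == 'H':
        return n - 2      # heavy penult attracts stress
    return n - 3          # light penult: stress retracts to the antepenult

# e.g. a three-syllable word with a heavy penult is stressed on the penult
print(latin_stress(['L', 'H', 'L']))  # -> 1
```

The productivity question above is then whether English speakers apply anything like this function to nonce words, or merely store its effects word by word.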

Wendell Kimper mentioned in his talk the issue that the set of attested human languages appears to be a small sample from the space of possible human languages. There are various kinds of statistical measures and data controls that we can use to determine how robust the typological generalizations are that we observe. But Kimper also reports that vowel harmony looked at that way may provide relatively little information, since many of the patterns of each type of harmony come from the same language families. My gut feeling, like Wendell’s I think, is that in those circumstances we should still keep going with the usual practice of just making an attested/unattested cut, and hoping that we are modeling signal rather than noise. But it is a worry, and probably one of the reasons that it’s good that we’re not putting all of our eggs in the typology modeling basket. A possible strategy is to focus on typological claims with a relatively large scope, for example, the size of stress windows, or the absence of sour grapes-style harmony (and the presence of spreading up to a blocker).
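One simple control of the kind alluded to above is to count each pattern at most once per language family before tallying a typology, so that counts reflect independent lineages rather than raw numbers of (possibly related) languages. The sketch below is my own illustration with hypothetical toy data, not a method from Kimper’s talk:

```python
from collections import defaultdict

def family_controlled_counts(languages):
    """Tally patterns counting each (family, pattern) pair only once,
    approximating the number of independent genetic lineages in which
    each pattern is attested."""
    seen = set()
    counts = defaultdict(int)
    for family, pattern in languages:
        if (family, pattern) not in seen:
            seen.add((family, pattern))
            counts[pattern] += 1
    return dict(counts)

# Hypothetical toy data: (language family, harmony pattern)
sample = [('Turkic', 'backness'), ('Turkic', 'backness'),
          ('Uralic', 'backness'), ('Bantu', 'height')]
print(family_controlled_counts(sample))  # {'backness': 2, 'height': 1}
```

On counts like these, a pattern attested in many languages of one family contributes no more evidence than a pattern attested once, which is exactly why harmony typologies can end up carrying relatively little information.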

4 comments on “Data in generative phonology”
  1. Joe raises very important concerns. I think they are widely shared. This is reflected in the fact that, over the years, several phonologists have produced consumer guides to evidence, from John Ohala to Marc van Oostendorp. My own view is that we need to be prudent and strategic when dealing with the gaping asymmetry between the small size of the phonological community and the vastness of the task, which is no less than understanding the sound component of human language. Mindful of this, as well as of the discussions at this year’s mfm fringe meeting, I would like to propose a few recommendations.

    1) Assume that a generalization is significant unless you have empirical evidence to show that it is accidental.

    I think this is a sound methodological rule, based on considerations similar to those underlying the Subset Principle in language acquisition. There are a variety of tests that you can run to see whether you are wrong in treating a generalization as significant. In contrast, if you dismiss a significant generalization as accidental, it is far harder to recover from that error.

    2) If there is a mismatch between the patterns in a corpus and the patterns in a wug-test, it is not enough just to model the latter; the mismatch itself constitutes an explanandum.

    I think that most people agree with this rule. Notably, it underpins the research strategy that posits learning biases as explanations for ‘surfeit of the stimulus’ results. But it should also be noted that sometimes mismatches occur because the results of elicitation tasks, including wug-tests, are misleading. In a paper on Spanish diminutives, for example, I document a situation where novel creations attested online reveal a clear systematic pattern that previous elicitation attempts had missed, probably because they failed to provide the pragmatic context where the relevant diminutives are felicitous.

    3) Whole-language description remains important.

    At the mfm fringe meeting I noted with sorrow that attempts at describing a substantial fragment of the phonology of a language, like Rubach’s monographs on Polish and Slovak, are becoming increasingly rare. This is problematic. For example, optimality-theoretic work in the Classical Universal Phonology paradigm often came to grief over the fact that a constraint-based analysis of one process in one language often failed to scale up to the whole language (see §32-§33 in my OCP4 paper for the same point). Peter Staroverov’s paper on Ajyíninka Apurucayali at this year’s mfm provided a salient example of this issue.

  2. Joe Pater says:

    Thanks Ricardo – it looks like we’re in agreement about much, if not all of this. I’d like to emphasize, though, that the choice of whether to concentrate on whole language analysis or typological work seems largely independent to me of the choice between OT and some other framework. For example, one of the nicest pieces of whole language work that I know (if I’m understanding the term correctly) is a paper that just came out in Phonology on Samoan word-level prosody by Zuraw, Yu and Orfitelli that uses OT. This paper also came to mind when I was listening to Paul de Lacy’s OCP talk, since it addresses the quality of data issues that he raises – the fieldwork is also excellent.

  3. I agree that this is not an issue of principle, but rather of practice. There is nothing about OT that precludes whole-language description, as the admirable paper you mention shows. OT just makes the task harder because the goals become more ambitious: the aim is not just to provide a descriptively adequate analysis, but one that relies on constraints that make correct typological predictions. The question is whether enough of this hard work is being done, or, in other words, whether the community is pursuing a balanced research strategy. My point was merely that a good dollop of whole-language analysis is an essential part of a healthy diet.

  4. Ricardo: thanks for making your point that it is important to explain a mismatch between corpus and wug data. One thing that I think is really great about this strategy is that we can actually get evidence about what’s *impossible* in human phonologies, or at least about what’s difficult. If a language just doesn’t exist, that could be for a variety of reasons including accident; if there is evidence for a pattern in a language’s lexicon but learners ignore that evidence, that tells us that that pattern is either not representable in adult grammars or not learnable by the phonological acquisition system.
