The University of Massachusetts Amherst

Week 1 discussion

Please add comments to discuss material from week 1: HG Basics, comparison with OT and OT-LC, OT-Help intro, etc.

7 replies on “Week 1 discussion”

It seems that the probabilistic answer to the problem in Japanese loanword phonology doesn’t necessarily hinge on gang effects per se. If we just want some significant probability to be given to each possibility, MaxEnt allows this without the voiced candidate actually being worse than the voiceless one. I wonder if there’s an interesting way to characterize the probabilistic situations we gain analyses for under MaxEnt…
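That point can be made concrete with a small sketch. The weights below are hypothetical, not fitted to the Japanese data; the point is only that under MaxEnt the devoiced candidate can receive substantial probability even when its harmony is lower (i.e. it is worse) than the faithful voiced candidate's:

```python
import math

def maxent_probs(harmonies):
    """Softmax over harmony scores: each candidate's probability is
    proportional to exp(harmony)."""
    exps = [math.exp(h) for h in harmonies]
    z = sum(exps)
    return [e / z for e in exps]

# Illustrative weights (hypothetical values, not fitted to data):
w = {"*VoicedGem": 2.0, "OCP-Voice": 2.0, "Ident-Voice": 5.0}

# The faithful candidate violates both markedness constraints;
# the devoiced candidate violates only faithfulness.
h_faithful = -(w["*VoicedGem"] + w["OCP-Voice"])  # -4.0
h_devoiced = -w["Ident-Voice"]                    # -5.0

p_faithful, p_devoiced = maxent_probs([h_faithful, h_devoiced])
# The faithful (voiced) candidate has higher harmony (-4 > -5), yet the
# devoiced candidate still receives substantial probability.
print(round(p_faithful, 3), round(p_devoiced, 3))  # 0.731 0.269
```

So the voiceless candidate gets a non-trivial share of the probability mass without ever being the better candidate, which is the situation described above.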

Comment from Joe Pater: Karen Jesney has some relevant discussion of the differences between MaxEnt and Noisy HG in this handout, and also in her forthcoming dissertation. We’ll come back to this when we talk about probabilistic versions of HG.

An attractive aspect of HG is that it can offer graded rankings of candidates, in contrast to OT’s categorical winner-take-all decisions. Beyond explaining variability, I think that could be useful in implementing models of acquisition. A concern I have, though, is with GEN. In OT, constructing tableaux seemed to center on enumerating a diminishing number of candidates that could potentially affect the rankings. With HG, it seems to me that having a single sum of violation-constraint products means that weight values will have to increase whenever longer inputs are considered (and I’m assuming that the unbounded length of candidates is not a problem). I wonder whether the sums should be rescaled in some places, such as across prosodic domains.
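The single weighted sum in question can be sketched as follows. The constraint names, weights, and candidate forms are illustrative only; the second half shows that when a longer input scales every violation count uniformly, both harmonies scale by the same factor and the winner is unchanged:

```python
def harmony(weights, violations):
    """HG harmony: the negated sum of weight x violation-count products."""
    return -sum(weights[c] * v for c, v in violations.items())

# Hypothetical weights and violation profiles for a coda-deletion choice.
weights = {"NoCoda": 2.0, "Max": 3.0}
candidates = {
    "pat": {"NoCoda": 1, "Max": 0},  # keep the coda
    "pa":  {"NoCoda": 0, "Max": 1},  # delete it
}

winner = max(candidates, key=lambda c: harmony(weights, candidates[c]))
print(winner)  # pat: harmony -2.0 beats -3.0

# Doubling the input doubles every violation count, which scales both
# harmonies by the same factor and leaves their ordering intact.
doubled = {c: {k: 2 * v for k, v in viols.items()}
           for c, viols in candidates.items()}
winner2 = max(doubled, key=lambda c: harmony(weights, doubled[c]))
print(winner2)  # still pat
```

Of course, this only addresses the case where violations scale uniformly across candidates; the rescaling question across prosodic domains is a separate issue.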

Comment from Joe Pater: We had a nice class discussion of the differences and *similarities* between learning rankings and learning weights as word size increases. We’ll come back to this when we talk about Perceptron learning of HG grammars.

Questions from Xu Zhao: I have a problem when playing with OT-Help. I put both of the Java files in the same folder and restarted the computer; however, the “calculate typology (HG & OT)” function doesn’t work, and neither does “HG solution”. The same goes for both of the Serial OT/HG calculations. When I click one, a window pops up asking me to choose whether to display unique languages only, with a submit button, but whether or not I check the option, it won’t let me submit. Also, if I close the pop-up window, the whole program terminates.

Answers from me: The or124.jar file needs to be in its own folder named “OTH-lib”, which itself resides in the same folder as OT-Help2.jar. This is the most common problem people have with OT-Help; Jim White suggested distributing a .zip file with the correct directory structure, which we will probably start doing. The last issue is apparently unexpected behavior that only Mac users see, so you’ll just have to remember not to close the window if you don’t want to quit.

You mentioned that we were to do 1 model in OT-Help and 1 in R each week. I wasn’t sure if there was a specific set of data that we should work with on this or if we should come up with our own, and if so, what should our data include?
Additionally, I’ve never actually worked in R before. What exactly do you want us to do in it?
Thank you.

I’m only asking you to do one simulation in OT-Help, and one in R over the course of the whole “semester”. I’ll clarify in class Monday.

One (informal) generalization I’ve noticed about the “textbook” analyses showcasing additive effects in constraint interaction is that there’s often a close fit between descriptive and formal cumulativity. Japanese loanword devoicing is a nice example: descriptively, we can say with confidence that both voiced geminates and OCP-violating words are independently marked, but it’s only when these two elements occur together that we see any kind of effect (optional devoicing). The situation is mirrored formally in HG, where the two constraints representing this markedness gang up to overcome faithfulness only when both are violated.
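The gang effect described here can be worked through with a toy categorical HG evaluation. The weights and candidate forms are illustrative (a sketch, not an analysis of the actual Japanese data): each markedness constraint alone is weaker than faithfulness, but their sum exceeds it, so devoicing wins only when both are violated:

```python
def hg_winner(weights, candidates):
    """Return the candidate with the highest harmony
    (least negative weighted violation sum)."""
    def harmony(viols):
        return -sum(weights[c] * n for c, n in viols.items())
    return max(candidates, key=lambda name: harmony(candidates[name]))

# Hypothetical weights: 2 + 2 > 3, but 2 < 3 -- the gang effect.
w = {"*VoicedGem": 2.0, "OCP-Voice": 2.0, "Ident-Voice": 3.0}

# Input with a voiced geminate AND another voiced obstruent: the
# faithful candidate violates both markedness constraints.
both = {
    "doggu": {"*VoicedGem": 1, "OCP-Voice": 1, "Ident-Voice": 0},
    "dokku": {"*VoicedGem": 0, "OCP-Voice": 0, "Ident-Voice": 1},
}
# Input with only the voiced geminate: faithfulness wins on its own.
one = {
    "eggu": {"*VoicedGem": 1, "OCP-Voice": 0, "Ident-Voice": 0},
    "ekku": {"*VoicedGem": 0, "OCP-Voice": 0, "Ident-Voice": 1},
}

print(hg_winner(w, both))  # dokku: devoicing, since 2 + 2 > 3
print(hg_winner(w, one))   # eggu: faithful, since 2 < 3
```

The formal cumulativity (the summed weights overcoming faithfulness) lines up exactly with the descriptive cumulativity (devoicing only in the doubly marked configuration).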

Is this observation even worth mentioning? Should this sort of descriptive-formal correspondence be valued? Are there any good examples of “non-obvious cumulativity,” where things look additive descriptively but not formally (or vice versa)? How do these analyses compare to those in which there is closer correspondence? On page 11 of the week 1 handout, there are a few cases where OT-LC does some weird things (e.g. onset devoicing, long-distance coda devoicing). One thing that strikes me about these LC constraints is that we have no HG- or OT-external evidence for them; that is, we can’t simply look at a typology of surface forms and come to the conclusion that combinations of *Voice-Obs and NoCoda are marked in the domain of the word (like we can for the Japanese example and its relevant markedness). We can’t call this an additive effect from a descriptive standpoint because things like long-distance coda devoicing don’t exist. Maybe this is reason enough to ban constraints like *Voice-Obs&NoCoda conjoined over the word. Similar reasoning holds for the dismissal of NoCoda&Ident-Voice: we can’t come to a conclusion about correspondence constraints by looking at surface forms only.

For the examples you mention here, cumulativity as formalized in HG yields just the descriptively attested cases of cumulativity, while cumulativity as formalized in OT-LC yields descriptively unattested cases. To me, that sounds like a straightforward argument that HG is a better formalization of cumulativity.

I guess the clearest case of non-obvious cumulativity that we’ve looked at would be the Makkan Arabic “Grandfather effect” from my 2006 slides. This is a case where formal cumulativity is getting something that might not be described as a cumulative effect at all. The Canadian raising analysis here uses similar markedness/faithfulness gang effects in producing something that is even less obviously descriptively a cumulative interaction. Going in the other direction, I guess any standard OT analysis of something that looks like cumulativity is an instance of descriptive cumulativity formalized without cumulativity.
