The explanation of a given implicature--particularly a given scalar implicature--often goes as follows. We form what is called a Horn Scale of lexical items, such as: [some, most, all].
Each entails the others to its left (assuming a nonempty domain): l3 -> l2 -> l1. We then use a scalar implicature to determine that an assertion of one of the lexical items li implicates ~lj for any j > i. This last step goes by way of the unassertability of lj: from the assumption that the speaker is opinionated about the truth-value of lj, we conclude that lj is unassertable because false (rather than unassertable because unknown).
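The derivation just described can be sketched concretely. Everything below--the world model, the use of "most" as a middle scale item--is my own illustration, not anything from the texts under discussion:

```python
# Toy model: propositions are conditions on how many of three students came.
WORLDS = range(4)  # 0, 1, 2, or 3 students came (nonempty-domain assumption built in)

# Horn scale, weakest (l1) to strongest (l3).
SCALE = [
    ("some", lambda n: n >= 1),
    ("most", lambda n: n >= 2),
    ("all",  lambda n: n == 3),
]

def entails(p, q):
    """p entails q iff q holds in every world where p holds."""
    return all(q(w) for w in WORLDS if p(w))

# Each item entails the ones to its left: l3 -> l2 -> l1.
assert entails(SCALE[2][1], SCALE[1][1]) and entails(SCALE[1][1], SCALE[0][1])

def implicatures(asserted):
    """Asserting li implicates ~lj for every stronger scalemate (j > i)."""
    i = [name for name, _ in SCALE].index(asserted)
    return ["not " + name for name, _ in SCALE[i + 1:]]

print(implicatures("some"))  # ['not most', 'not all']
```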
This seems a bit...formal to me, but it seems right. Geurts calls a similar procedure (for "some" and "all") a "classical Gricean account" of scalar implicature.
The point of the formality of the Horn scale might be this. (At least: here's a problem that the Horn Scale might be an attempted solution to.) The puzzle of generating implicatures is a puzzle of generating relevant alternatives: things p such that it's actually pragmatically significant that the speaker didn't say p. But putting it this way is likely to make the problem seem intractable: how on earth are we to know, on the basis of our knowledge of the language alone, what these relevant alternatives are? Isn't what might have been said, but wasn't, a hopelessly open-ended and context-sensitive matter? The Horn scale helps us cut down on our alternatives. In this way, it should probably be seen as a way of generating some relevant alternatives--surely not all, for that depends on what other words appear in the sentence.
Looked at this way, the Sauerland algorithm seems very silly. The desideratum is a good one: we want to generate, for a given disjunction "A v B," the relevant alternatives: "A," "B," and "A and B." But do we really need to shoehorn this observation into the mold of the Horn Scale by coming up with "lexical items" that provide alternatives?--these would be Sauerland's "silent binary connectives" L and R.
Why do we need, as Alonso-Ovalle reports, to generate "a [partially ordered] scale of lexical items"? Why not simply say that the relevant alternatives generated for "or" are what we think they are?--if some principled reason needs to be given for this, we could say: "whenever we have a sentence with sentential proper parts, those proper parts are relevant alternatives to the original sentence, to which scalar implicature reasoning may apply."
(This is not of course a necessary condition for being an alternative, but it is a sufficient one, and in combination with the Horn mini-scale, it will do the job.)
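The proper-parts rule floated above is easy to state mechanically. Here is a minimal sketch, with a toy tuple syntax for sentences; the representation is mine, not Sauerland's or Alonso-Ovalle's:

```python
# Toy syntax: a sentence is either an atom (a string) or a tuple
# (connective, left, right). The rule: the sentential proper parts of a
# disjunction are alternatives, and the Horn mini-scale <or, and> adds the
# conjunctive competitor.

def alternatives(sentence):
    """Relevant alternatives for 'A or B': each disjunct, plus 'A and B'."""
    if isinstance(sentence, tuple) and sentence[0] == "or":
        _, a, b = sentence
        return [a, b, ("and", a, b)]
    return []  # only a sufficient condition; other sentence types left open

print(alternatives(("or", "A", "B")))  # ['A', 'B', ('and', 'A', 'B')]
```

No silent binary connectives are needed: the disjuncts are read straight off the syntax.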
Alonso-Ovalle then reports that
"Fox (2006) points out that considering all maximal consistent subsets of the set of negated Sauerland competitors of a disjunction allows for the derivation of the exclusive component of disjunctions (84)."
[The story is this: take all the maximally consistent subsets of the set of negated Sauerland competitors, then generate a set containing only those propositions which are in *all* the maximally consistent subsets. Voila! We get the strengthened meaning, which is the lexical meaning of "or" plus the exclusive component.]
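The bracketed recipe can be checked mechanically in a toy model where propositions are sets of worlds (two atoms, A and B); the encoding below is mine, not Fox's own setup:

```python
from itertools import product, combinations

# Worlds are truth-value pairs for the atoms A and B; propositions are
# (frozen)sets of worlds.
WORLDS = frozenset(product([True, False], repeat=2))
A   = frozenset(w for w in WORLDS if w[0])
B   = frozenset(w for w in WORLDS if w[1])
OR  = A | B
AND = A & B

# Negated Sauerland competitors of "A or B": ~A, ~B, ~(A and B).
negated = [WORLDS - A, WORLDS - B, WORLDS - AND]

def consistent(props):
    """Consistent with the assertion: some world satisfies OR and every prop."""
    worlds = OR
    for p in props:
        worlds = worlds & p
    return bool(worlds)

# Every consistent subset of the negated competitors, then the maximal ones.
candidates = [frozenset(s)
              for r in range(1, len(negated) + 1)
              for s in combinations(negated, r)
              if consistent(s)]
maximal = [s for s in candidates if not any(s < t for t in candidates)]

# Keep only the propositions common to *all* maximal consistent subsets,
# and conjoin them with the plain meaning of the disjunction.
common = frozenset.intersection(*maximal)  # here: just ~(A and B)
strengthened = OR
for p in common:
    strengthened = strengthened & p

print(strengthened == (A ^ B))  # the exclusive reading: True
```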
This seems like a (very complicated, unjustified) step in the wrong direction, though. What's doing the work in the generation of implicatures is not the truth of the competitors, but their known truth. While it's not possible for all the Sauerland competitors to be false while the disjunction is true, it is possible for all the Sauerland competitors to be unknown while the disjunction is known.
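The contrast is easy to exhibit in the same sets-of-worlds style; the information state below is invented for the example:

```python
from itertools import product

# A speaker's information state (a set of epistemically possible worlds) can
# settle "A or B" while leaving A, B, and "A and B" all unsettled.
WORLDS = frozenset(product([True, False], repeat=2))
A = frozenset(w for w in WORLDS if w[0])
B = frozenset(w for w in WORLDS if w[1])

# The speaker knows A-or-B (in fact, exclusively), and nothing more specific.
state = {(True, False), (False, True)}

def known(p):
    """A proposition is known iff it holds throughout the information state."""
    return state <= p

print(known(A | B), known(A), known(B), known(A & B))  # True False False False
```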
Here is Alonso-Ovalle on Sauerland's own disclaimer:
"So far, we have assumed that the atomic disjuncts are made visible to the pragmatics via the Sauerland algorithm. The visibility of the disjuncts depends on the assumption that "or" forms a lexical scale with two silent operators (L and R). But this assumption still needs to be justified. To quote Sauerland himself: 'Evidently, the adoption of [L and R] is more of a technical trick than a real solution for the problem just discussed. However, the intuition underlying it, that the use of the word "or" drives the computation of scalar implicatures, also underlies Horn's quantitative scales and seems sound. Therefore I hope future research will show that the apparent clumsiness here is due to my technical execution, not the idea.' " (92)
Alonso-Ovalle concludes, "to the extent that there is an alternative way to make the atomic disjuncts visible to the pragmatics, L and R become superfluous."
I guess I don't see why we need to make the disjuncts visible to the pragmatics; they just are visible! I can make sense of a need to "make the disjuncts visible" to the semantics; what this means is that you want a semantics on which e.g. [[left hand or right hand]] /= [[hand]], in such a way that you may quantify over the disjuncts in your semantic entries.
Alonso-Ovalle's personal beef with the Sauerland algorithm is that it gives the wrong result when one disjunct entails another:
John ate two or three bagels.
Sandy is reading Huck Finn, Great Expectations, or both.
To this it must be added that one disjunct's entailing another is usually verboten:
Ann is wearing a dress or a red dress.
Joe drives a car or a Cadillac.
Perhaps Alonso-Ovalle's beef could be solved if we took the disjunction to be metalinguistic? Although note that
Smith is meeting a woman or Mrs Smith.
...does sound a bit odd.
***
"The observation that the interpretation mechanism needs access to each individual disjunct to capture the exclusive component of disjunctions can be traced back to Reichenbach."