Wednesday, June 30, 2010

Field, "Tarski's Theory of Truth"

The thesis of the paper is that, while Tarski is thought to have analyzed `true' using no undefined semantic terms, this is not, in fact, what he did. Instead, "Tarski succeeded in reducing the notion of truth to certain other semantic notions; but...he did not in any way explicate these other notions, so that his results ought to make the word `true' acceptable only to someone who already regarded these other semantic notions as acceptable." (347) Field will give two versions of Tarski's theory, T1 and T2. T1 will lay bare Tarski's use of a kind of `primitive denotation'---or basic semantic value---in defining truth-at-an-assignment. T2 is sneakier (the use isn't made explicit), but it will be shown that T1 and T2 are equivalent in their explanatory power and significance. (In fact, in some minor ways T1 is better than T2.)

Section I: T1
We begin with an interpreted toy language L with names (c1, c2...), one-place function symbols (f1, f2...) [I believe these are supposed to be like "father of", interpreted as functions of type ⟨e,e⟩], and one-place predicates (p1, p2...). Initially we "will follow Tarski in supposing that in L the sense of every expression is unambiguously determined by its form." We define:

1) singular terms: names, variables [wait, where are the variables?], and function symbols followed by singular terms [father(x), father(m)]
2) formulas: predicate^singular term, ~^formula, formula^`&'^formula, Axk(formula).
3) sentences: closed formulas.

Variables and assignments are introduced. An assignment can be thought of as a sequence s = ⟨s1, s2, ...⟩, where the si's are objects, which assigns objects to the variables `x1', `x2'... We will abbreviate ``true on assignment s" as ``true_s". We also employ ``denotes_s", e.g. `x1' denotes_s s1 on assignment s. Annoyingly, we use ``fulfills" for function symbols (a function symbol is fulfilled by an ordered pair of objects) and ``applies to" for one-place predicates.

T1: an inductive characterization of denotes_s and true_s (using # for Quine corners):
(A)
1. `xk' denotes_s sk.
2. `ck' denotes_s what it denotes.
3. `fk(e)' denotes_s an object a iff (i) there is an object b that `e' denotes_s and (ii) `fk' is fulfilled by ⟨a, b⟩.
(B)
1. #pk(e)# is true_s iff (i) there is an object a that e denotes_s, and (ii) `pk' applies to a.
2. #~e# is true_s iff e is not true_s.
3. #e1 & e2# is true_s iff e1 is true_s and e2 is true_s.
4. #Axk e# is true_s iff for each sequence s* that differs from s at the kth place at most, e is true_s*.

For the truth of sentences, we have:
(C)
A sentence is true iff it is true_s for some (equivalently, every) s.

This is the Truth Characterization (TC) of T1. TC reduces one semantic notion, truth, to three others: (1) denotation for names, (2) predicate denotation (or "application"), (3) function symbol fulfillment. We introduce "primitive denotation" for these three things; T1 explains truth in terms of primitive denotation. We can also explain denotation for arbitrary closed singular terms, like `f1(c1)', in terms of the primitive denotations of its parts. Hence Tarski's T1 is compositional.
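To make the shape of the recursion concrete, here is a minimal sketch in Python (my own illustration, not Field's notation): terms and formulas are encoded as nested tuples, and the three kinds of primitive denotation enter only as the parameters names, functions, and predicates, which the characterization simply takes as given.

```python
# A toy rendering of T1 (illustrative only; all identifiers are mine).
# `names`, `functions`, and `predicates` stand in for the three kinds of
# primitive denotation; T1 uses them without saying anything about them.

def denotes(term, s, names, functions):
    """Denotation of a singular term relative to assignment s (a list of objects)."""
    if term[0] == 'var':                     # ('var', k): the kth variable denotes_s s[k]
        return s[term[1]]
    if term[0] == 'name':                    # ('name', 'c1'): denotes_s what it denotes
        return names[term[1]]
    if term[0] == 'func':                    # ('func', 'f1', e): denotes_s a iff `f1' is
        b = denotes(term[2], s, names, functions)   # fulfilled by <a, b>, where e denotes_s b
        return functions[term[1]](b)

def true_s(phi, s, names, functions, predicates, domain):
    """Truth of a formula relative to assignment s."""
    if phi[0] == 'pred':                     # (B)1: `p1' applies to what e denotes_s
        return denotes(phi[2], s, names, functions) in predicates[phi[1]]
    if phi[0] == 'not':                      # (B)2
        return not true_s(phi[1], s, names, functions, predicates, domain)
    if phi[0] == 'and':                      # (B)3
        return (true_s(phi[1], s, names, functions, predicates, domain) and
                true_s(phi[2], s, names, functions, predicates, domain))
    if phi[0] == 'all':                      # (B)4: every s* differing from s at the kth place at most
        k, body = phi[1], phi[2]
        return all(true_s(body, s[:k] + [a] + s[k+1:],
                          names, functions, predicates, domain)
                   for a in domain)
```

A sentence (closed formula) is then true iff it is true_s for some, equivalently every, s. The thing to notice is that the recursion bottoms out in the three parameters: the sketch characterizes truth in terms of primitive denotation and says nothing at all about what primitive denotation is.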

To have explained truth in terms of primitive denotation is, Field concedes, an important achievement. It is good for the purposes of model theory:

"...[I]n model theory we are interested in such questions as: given a set Gamma of sentences, is there any way to choose the denotations of the primitives of the language so that every sentence of Gamma will come out true given the usual semantics for the logical connectives? For questions such as this, what we need to know is how the truth-value of a whole sentence depends on the denotations of its primitive nonlogical parts, and that is precisely what T1 tells us. So at least for model-theoretic purposes, [T1] is precisely the kind of explication we need. (351)"

Languages where sense is determined by form.
What should we make of this stipulation of Tarski's? It is not fulfilled by natural languages: I can use the word ``John" to refer to different people on different occasions, and I can use phrases like ``takes grass" to mean different things on different occasions as well. Field writes that it ``seems clear that...there is no remotely palatable way" of extending the theory of truth Tarski actually gave (which is a bit different from T1) to sentences like `John takes grass.'

If we use T1 as the model, though, "there is no difficulty...The only point about languages containing `John' or `grass' or `I' is that for such languages `true', `denotes', and other semantic terms make no sense as applied to expression types, they make sense only as applied to tokens." It is suggested that we would need to reinterpret e.g. B(2) of T1 as:

(B) 2. A token of #~e# is true_s iff the token of #e# that it contains is not true_s.

[Something dangerous is going on here, related to not having cleanly separated contexts and indices. We shouldn't speak of tokens occurring within tokens, because at some index values there will be no tokens. For example, if the operator were metaphysical necessity, we couldn't really speak of `the token contained in it'.]

Field notes that "this analysis leaves entirely out of account the ways in which `I' and `John' differ: it leaves out of account, for instance, [the character of `I'.] But that is no objection to the analysis, for the analysis purports merely to explain truth in terms of primitive denotation; it does not purport to say anything about primitive denotation, and the differences between `I' and `John'...are purely differences in how they denote." (352).

If we want to re-write T1-(A) [the characterization of denotation_s] so that it is flexible enough to allow for the introduction of new names into the language and so ``does not rely on the actual vocabulary that the language contains at a given time," it is easy to do so:

T1(A)
1. The kth variable denotes_s sk.
2. If e1 is a name, it denotes_s what it denotes.
3. If e1 is a singular term and e2 a function symbol, then #e2(e1)# denotes_s a iff (i) there is an object b that e1 denotes_s, (ii) e2 is fulfilled by the ordered pair ⟨a, b⟩.

We "can generalize the definition of truth_s in a similar manner. This shows that, in giving a TC, there is no need to utilize the particular vocabulary used at one temporal stage of a language, for we can instead give a more general TC which can be incorporated into a diachronic theory of the language."

Section II: T2.
T2 is more closely modeled on the theory Tarski actually offered: it does not use any semantic concepts (no "primitive denotation") in its definition of truth_s (and hence truth simpliciter.)

``How did Tarski achieve this result? Very simply: first, he translated every name, predicate, and function symbol of L into English, then he utilized these translations in order to reformulate clauses 2 and 3(ii) of (A) and clause 1 of (B). For simplicity, let's use c1, c2, etc., as abbreviations for the English expressions that are translations of the words `c1', `c2'...of L: e.g.: if L is...German and `c1' is `Deutschland,' then c1 is an abbreviation for `Germany.'"

Hence the formulation of T2:

T2: an inductive characterization of denotes_s and true_s (using # for Quine corners):
(A)
1. `xk' denotes_s sk.
2. `ck' denotes_s ck.
3. `fk(e)' denotes_s an object a iff (i) there is an object b that `e' denotes_s and (ii) a is fk(b).
(B)
1. #pk(e)# is true_s iff (i) there is an object a that e denotes_s, and (ii) pk(a).
2. #~e# is true_s iff e is not true_s.
3. #e1 & e2# is true_s iff e1 is true_s and e2 is true_s.
4. #Axk e# is true_s iff for each sequence s* that differs from s at the kth place at most, e is true_s*.
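To see what has changed relative to T1, here is a parallel sketch (again my own illustration, and using an arithmetic toy interpretation of L rather than Field's German example): the metalanguage translation of each name, function symbol, and predicate is written into the clauses and used there, rather than being mentioned and looked up in a table of primitive denotation.

```python
# A toy rendering of the T2-style clauses (illustrative only; reading `c1' as `two',
# `f1' as `the successor of', and `p1' as `is even' is my own placeholder translation).
# Note that no denotation, fulfillment, or application tables are passed in.

def denotes_T2(term, s):
    if term[0] == 'var':
        return s[term[1]]
    if term[0] == 'name':
        if term[1] == 'c1': return 2          # (A)2: `c1' denotes_s two
        if term[1] == 'c2': return 3          #       `c2' denotes_s three -- one clause per name
    if term[0] == 'func':
        b = denotes_T2(term[2], s)
        if term[1] == 'f1': return b + 1      # (A)3: `f1(e)' denotes_s the successor of what e denotes_s

def atomic_true_s_T2(pred, term, s):
    a = denotes_T2(term, s)
    if pred == 'p1': return a % 2 == 0        # (B)1: #p1(e)# is true_s iff what e denotes_s is even
# clauses (B)2-(B)4 are exactly as in the T1 sketch
```

Notice that for the names the T2 clauses are nothing but an enumeration, one clause per name of L; that enumerative residue is what Section IV will isolate as "T2 minus T1" and compare to the valence definition (3).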

How do we get the translations right, though, in order to ensure that e.g. `ck' denotes_s ck? We need a requirement of coreferentiality to define an `adequate translation': ⟨e1, e2⟩ is an adequate translation pair iff (i) e1 and e2 are coreferential, (ii) e2 contains no semantic terms [this is to avoid translating e.g. `Deutschland' as `the referent of `Deutschland' in German.'] (355). Of course, Tarski did not reduce the notion of an adequate translation to nonsemantic terms. This is not, by itself, a devastating objection to T2; Tarski was merely relying on the resources of an interpreted metalanguage:

"[This] is no objection to...T2, for the notion of an adequate translation is never built into the truth characterization and is not, properly speaking, part of a theory of truth. On Tarski's view we need to adequately translate the object language into the metalanguage in order to give an adequate theory of truth for the object language; this means that the notion of giving an adequate translation is employed in the methodology of giving truth theories, but it is not employed in the truth theories themselves." (355).

We are left with T1 and T2, both perfectly good on their own terms. It is still true that T1 employs semantic vocabulary while T2 does not. Now we are better suited to address the question of whether this means T2 is more philosophically significant than T1.

Philosophy.
We ask: for what purpose do we want a definition of truth with no semantic terminology? The first desideratum that might come to mind is that we want to explain the meaning of the word 'true.' But that cannot be what we are after here. For example, the kind of definition of truth we will wind up with differs for different languages; yet presumably `true' means the same thing as applied to different languages, and that sameness is something important about truth that such definitions miss.

What Tarski hints at in his writings is that he is on a quest to make semantic terminology, like ``true", compatible with physicalism. Field quotes Tarski writing that without the project he is embarking on, "it would be difficult to bring [semantics--HF] into harmony with the postulates of the unity of science and of physicalism."

Section III. Physicalism and Ontological Reduction.
We could call the view that Tarski seems opposed to---one that holds that there is no reduction of semantic terms to physicalist ones---``semanticalism." (Compare this to ``Cartesianism", the view that there are irreducibly mental facts.) When we confront the terms of some special science, like "gene" in biology, from the point of view of a reducing science, we have two options: we can either try to account for them, or reject the need to account for them, in the way a physicalist should reject a call to explain ghosts or vital essences.

Now consider the semantic project: someone says "Schnee ist weiss" to me, and I wish to classify the utterance as true. Part of the explanation is that snow is white: that is a perfectly physicalistically acceptable fact with a physical explanation. But another part is left unaccounted for: the relationship between the (physicalistically acceptable) whiteness of snow and ``the German utterance being true...It is this connection that seems so difficult to explicate in a way that would satisfy a physicalist, i.e., in a way that does not involve the use of semantic terms."

Section IV. T2 is not superior to T1 for the purposes of ontological reduction.
Tarski also set himself a condition of formal adequacy on theories of truth, which seems, in his writings, not to be straightforwardly connected to considerations of reductionism.

(M) Any condition of the form
(2) Ae [e is true <-> B(e)]
should be accepted as an adequate definition of truth iff it is correct and `B(e)' is a well-formed formula containing no semantic terms.

As noted, T1 is a partial reduction. But T2, which meets this criterion, is, Field argues, still inadequate, because (M) is inadequate. ``Correctness", if this is glossed merely as extensional equivalence, is too weak for ontological reduction.
[Objection: we don't need to conclude that 'true' is a natural kind, only that all instances of it can be physicalistically accounted for: cf. 'poison.' There may not be any kind of extensional unity in terms of which ontological reduction can be explained.]

Field's case study in "why extensional equivalence is not a sufficient standard of reduction" (362) is the concept of valence in chemistry. We can give a purely extensional definition of valence, which would have been useful to working chemists even in an era before chemistry was reduced to physics:

(3) (AE) (An) (E has valence n iff: E is potassium and n is +1, or E is sulphur and n is -2, or...etc., etc.)

If we had not reduced chemistry to physics, valence would have had to be occammed, according to the physicalist. In other words, (3) by itself would not save valence from Occam's Razor, despite the fact that it is extensionally adequate and the word `valence' does not appear on the other side of the biconditional in (3).

Field wants the analogy to be taken in a certain way. He does not claim that T2 is as trivial as (3); he does claim, though, that "roughly...T2 minus T1 is as trivial as (3) is" (363).

Note, first, that the notion of valence can be used to give a recursive definition of valence for chemical compounds. This extended notion of valence is, well, compositional: the valence of the compound depends on the valences of its constituents and how they're stuck together. This is like the reduction of truth to primitive denotation. However, in both cases, something basic remains unexplained (primitive denotation/primitive valence), and this is the very thing that must be explained if reduction is to succeed.

We can get at what Field means by "T2 minus T1" by isolating a component of T2 that is like (3), the enumerative definition of valence:

(DE) To say that the name N denotes a given object a is the same as to stipulate that either a is France and N is `France', or a is Germany and N is `Germany,' or...

This is Tarski's account of proper names in English. "It seems clear," Field writes, "that DE and DG (an analogous definition of denotation for German) do not really reduce truth [primitive denotation] to nonsemantic terms, any more than (3) reduces valence to nonchemical terms." (365).
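The parallel can be made concrete. Here is a minimal sketch (my own framing, entries abbreviated from the text): DE and (3) share the logical form of a finite list, and such a list can be written down without the listed relation being explained at all.

```python
# DE and (3) have the same shape: a finite, pair-by-pair enumeration of a relation.
# (The trailing ellipses are in the originals; nothing here explains denoting or valence.)

DENOTES_IN_ENGLISH = {     # DE: N denotes a iff N is `France' and a is France, or ...
    "France": "France (the country)",
    "Germany": "Germany (the country)",
    # ...
}

VALENCE = {                # (3): E has valence n iff E is potassium and n is +1, or ...
    "potassium": +1,
    "sulphur": -2,
    # ...
}
```

Either table can be extensionally correct while leaving the relation it tabulates entirely unreduced; that is the sense in which extensional adequacy falls short of reduction.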

We might ask what a reduction of the naming relation (N denotes a) to nonsemantic terms would look like, if there were one. For that, we can reference Russell's theory of logically proper names, on which naming is reduced using the notions of sense-data and acquaintance. Russell's theory is not successful, but it has the form of a theory that, if successful, would be non-circular--it would be a true reduction. To this we can add that a more successful account of naming, Kripke's causal-chain theory, is still in the works: while Kripke's theory doesn't (by Kripke's own lights) give a "purely causal" account of naming, it is possible that some such theory will succeed. [For example, it seems that Kripke gets right certain counterfactuals that Russell gets wrong; and establishing the right kind of counterfactual dependence is one way of getting beyond merely extensional equivalence.]

What value did Tarski attach to clauses like DE, which we have claimed to be the difference between T1 and T2? It is hard to say. We do know that T2 was the basis for some extravagant claims on Tarski's behalf, such as his claim that "the problem of establishing semantics on a scientific basis is completely solved." [qtd. by Field, 369]. In other places Tarski suggested that such a clause could explain the meaning of "denote." But this isn't right, since once again, DE/DG are merely extensional and don't capture the fact that "denote" should be given the same meaning for different languages, since different languages denote in the same way. "In fact, it seems pretty clear that denotation definitions like DE and DG have no philosophical interest whatever" (369).

Section V. Why do we want a notion of Truth?
A notion of truth might serve any number of purposes, but a primary or original purpose is "to aid us in utilizing the utterances of others in drawing conclusions about the world." We need truth, both semantically and pragmatically conceived, to understand "(i) the circumstances under which what another says is likely to be true, and (ii) how to get from a belief in the truth of what he says to a belief about the extralinguistic world" (371). We need accounts of trust and truthfulness, as well as assertability (the connection between the assertability conditions of "p" and those of "`p' is true".)

This seems right to Field, and "it gives more insight than was given in Sections II and IV into why it is that neither T1 nor T2 can reasonably be said to explain the meaning of the term `true'--even when a theory of primitive reference is added to them." The reader is referred to Dummett's article "Truth" and what Dummett therein says about "Frege-style truth definitions".

Field also takes a swipe at a kind of complacentist position that takes its inspiration from Neurath's boat analogy:

" 'Why [in light of an explication of truth in terms of assertability norms] do we need causal (etc.) theories of reference? The words `true' and `denotes' are made perfectly clear by schemas like (T). To ask for more than these schemas--to ask for causal theories of reference to nail language to reality--is to fail to recognize that we are at sea on Neurath's boat: we have to work within our conceptual scheme, we can't glue it [what?] to reality from the outside.' " (372)

Field responds in italics: "The reason why accounts of truth and primitive reference are needed is not to tack our conceptual scheme onto reality from the outside; the reason, rather, is that without such accounts our conceptual scheme breaks down from the inside." Here, I take Field to be using "our conceptual scheme" to mean "physicalism." [I have no better understanding of "inside" and "outside" metaphors in this context than I usually do.]

Tuesday, June 15, 2010

Refl-Heck-tions

To the slogan that meaning is use, we can pose a McDowellian dilemma: does the slogan employ a meaning-laden notion of use, or a non-meaning-laden one? If meaning-laden, the charge goes, we will get sentences like

Speakers use "snow is white" to express the thought that snow is white.

While true, this is hardly "likely to have far-reaching philosophical consequences." It is a triviality. If the notion of use employed is non-meaning-laden, on the other hand, what we will get is "a behavioristic reduction of meaning" in the Quinean mold, which is widely reviled. And so the slogan is either trivial or false. Heck's mission in the paper will be to carve a way between the horns of this dilemma: to find a role for the use-meaning thesis that is neither trivial nor false.

First, he wants to make a distinction between epistemological and metaphysical motivations for the thesis. McDowell takes the Quinean project (which rejects a meaning-laden notion of use) to rest on a mistaken epistemology: we perceive meaning in utterances directly (just as, in perception, we perceive facts directly), so the question of how we construct meaning from behavioristically-conceived noises rests on a falsehood. (The misconception is comparable to sense-datum conceptions of perception; it is a "sense-datum conception of understanding", in McD's terms.)

To this, Heck replies that the rejection of meaning-laden notions of use need not (and should not) be epistemological, but metaphysical. Use-meaning theories aim to "answer...the question [of] what it is for expressions to mean what they do, what determines what they mean, in the metaphysical sense."

Now the terms of the debate shift: is there a reasonable project in the vicinity of characterizing meaning (metaphysically) in terms of use? McDowell's objection will also shift: he will maintain that there is no metaphysically interesting project, either. Heck responds:

"Although I believe there is more to be said to motivate the metaphysical project, one part of me wants simply to say that it is obvious that there is a real problem about the nature of meaning." (3)

...in other words, this might be a difference in perspective about what is, at root, metaphysically mysterious.

But we proceed to a more concrete way in which McDowell formulates his denial of the existence of a metaphysical project: namely, his claim that homophonic theories of truth, properly understood, are metaphysically deflationary in just the way that is needed here, to dispel mysteries surrounding the question of giving a metaphysical analysis of meaning (possibly, in terms of use.) Heck's response is that McDowell doesn't correctly understand homophonic theories of truth. In particular, McD fails to appreciate the difference between (i) semantic questions, having to do with LF and [[]], and (ii) meta-semantic questions, e.g., "what it is for a particular expression to have the reference that it does." Heck proposes, as a regimentation, to call answers to (i) a "theory of truth" and answers to (ii) a "theory of meaning."

Homophony is where semantics ends; it is the end-product of semantic machinery, hence the machinery of a "theory of truth." Take the output of such a module, such as

(1) "Snow is white" is true iff snow is white.

McD's claim is then that there can be no question about "what it might be for [(1)] to be correct: it wears its correctness on its face" (5).

Heck replies that there is a sense in which (1) is obvious upon reflection, but that we should be very careful in appreciating exactly why and how this is so. First, to say that it is obvious upon reflection is not to say that it is metaphysically necessary. (It is contingent that our words mean what they do.) Even in non-disquotational cases, such as within a semantics of indexicals, there are two ways of taking the understanding of such claims. Heck's example, borrowed from McGee, is one of a speaker who asks, "why am I here?" To say that "I am here" is true iff I am here is not helpful; nor even is a Kaplanian explanation of why such sentences express "logical truths." Certainly it is not an explanation of in what it consists that she is here. (I'm not sure what would count as such an explanation!) Secondly, no "serious" semantic theory will actually be homophonic, since ambiguity in the object-language will be reduced/eliminated and indexicals will be analyzed away. Moreover, the feeling of obviousness-upon-reflection will evaporate whenever the object and meta-languages are distinct.

So, in what does the truth of (1) consist? A use-meaning thesis is supposed to be the beginning of an answer. Davidson was interested in this question as well as in the formal semantics question, which he called "giving a theory of truth." For Davidson, a way of putting the crucial question (again, to which the use-meaning thesis is supposed to be the beginning of an answer) is: "What is it for a theory of truth to be correct?" (8) Davidson's answer, which is a version of a use-meaning theory, is developed via claiming that what meaning must consist in is data available to the radical interpreter. A radical interpreter has (of course) her own mental and perceptual states--her own intentionality--but no semantic knowledge of the language she is interpreting. Davidson rejects meaning-laden notions of use (denying them to the radical interpreter) for just the reason we met at the introduction of the McDowellian dilemma: to assume a meaning-laden notion of use would be to trivialize the question we are trying to answer.

Dummett, who also rejects meaning-laden notions of use, wants metaphysical explanations to bottom out in semantic competence. A speaker's semantic competence consists in her knowledge of a theory of truth for her language. It is by exploring the nature of semantic competence that (I gather?) Dummett wants to connect meaning and use.

Do we, as speakers, really know a theory of truth for our (native) language? Surely this knowledge would be tacit, knowledge-how. "On [Dummett's] view," Heck writes, "the structure [of a theory of truth for LFs] has little purpose other than to articulate this complex ability into component sub-abilities." To know a language is simply to be able to speak it.

Enter McD, the eternal antagonist, once again. McD now argues that Dummett's notion of a speaker's knowledge of her language is nothing better than a Quinean one--" 'no more than a mere description of outward behavior, with the mental...aspect of language use left out.' " (11). Chomsky similarly criticizes Dummett's practical, ability-based conception of what it is to speak a language, since it leaves out the speaker's knowledge of her language. (Knowledge of the language, like knowledge of swimming or walking, becomes a "facon de parler.") "The problem...concern[s] general facts about how our use of language is integrated with our conscious mental life." (12). (Consider the host of plausible-sounding arguments that only rational creatures can use language; that rationality and language-use are intimately connected.)

We are led to the slogan that thought is prior to language. Does the use-meaning thesis face a similar dilemma when we consider whether we are entitled to explain meaning in terms of a mental-content-laden notion of use? A mental-content-laden notion of use might be called a Gricean one. NB that our explanation of meaning in terms of use, on such a notion of use, would not answer, but presuppose an answer to, the problem of intentionality.

What about arguments that purport to show that we need language to have sophisticated mental states? Children certainly have prelinguistic mental states, yet it is also undeniable that their capacity to entertain sophisticated thoughts grows with, and is facilitated by, their language ability.

We may, in some cases, be able to bootstrap from our knowledge of simple sentences (and therefore, our simple thoughts) to more complicated thoughts and sentences: an understanding of past tense and past tense sentences in terms of present-tense ones may be a good example of this. What the example suggests is a hybrid approach: part Dummettian, part Gricean. A substantial challenge remains, however, in "developing the Dummettian approach at the fundamental level."

In closing, we revisit Dummett's Martians. Dummett's Martians are taken to drive a rejection of content-laden notions of use in explaining meaning, since they present a skeptical scenario in which behavior is the same and mental contents are different. However, Dummett's argument here is epistemological, and we should take a page from McD and be as suspicious of it as we are suspicious of arguments from illusion to sense-datum theories of perception.

****
Heck, "Use and Meaning."

Thursday, June 3, 2010

Re-re-reconsidering the pragmatic analysis of choice inferences

Suppose that the textbook semantics for "or" were right. Now suppose I said

(1) You may have coffee or tea.

when, in fact, tea is forbidden. The sentence is true. Question: how can we derive via Gricean reasoning that I implicated that you may have tea?

Answer: we generate some alternatives to (1). Among them is the shorter, true

(2) You may have coffee.

Since (2) is shorter and obviously salient, why didn't I say it?
A way of capturing the thought is to argue that each disjunct must "be doing some work." Since (1) contains one more disjunct than (2), that extra disjunct must be doing some work. What is the nature of this work? My goal, if I am opposed to the Gricean analysis, is to give a plausible account of what this work consists in, without assigning it the role of conveying choice (i.e., that each disjunct is permitted).

That doesn't seem too hard, actually. It could simply be ignorance of what is permitted that causes me to cast my net wide over the disjuncts. So I'm not sure we've got a direct route from textbook semantics to the derivation of choice as an implicature. (Unless we do crazy things that amount to assuming I'm deontically omniscient.)

I think, though, that the following is a bad argument: "I want to say that each of coffee and tea is permissible, so I put them both in!" That begs the question, since it assumes that a sentence of the form "you may A or B or...Z" gives one a list of options all of which are permissible. But that's just what we were trying to prove.

Maybe the following twist is a bit better?: Suppose you wanted to say that the following things, A, B...Z, were permissible. (This seems like a not uncommon situation to find oneself in.*) How would you do it? Well, there's a good reason for not putting each of A-Z in its own sentence "You may...," since that would be very long-winded. There's also a good reason for not saying, "You may A and B and ...Z" since that implies that you can do all of them (at once?) and they may be mutually exclusive. That leaves "or."

*A pipe-dream: a semantics for choice that can really be described as a "logic of agency."

There are no stone lions in my garden, either.

The puzzle posed by local pragmatics is how it comes to be that a word is interpreted in such a way that it departs from its literal or lexical meaning:

There are no stone lions in my garden.

Geurts argues that this must be a "local pragmatic process."

Metalinguistic negation negates an utterance, complete with its conventional and conversational implicatures in tow. It has been observed that we need recourse to a similar mechanism to explain cases where two sentences (non-finite clauses--type t, as far as I can tell by looking at Wikipedia) are joined by connectives like

___ is better than ___

for example,

Drinking warm coffee is better than drinking hot coffee.

(looks like good data for the application of contrastivist semantics!...) The idea is that "drinking warm coffee," while subsentential, must be subject to a scalar implicature that turns "warm" into "warm but not hot," so that it can be sensibly contrasted with "drinking hot coffee."

Hence we will have to take our implicatures at the subsentential level. Intuitively, this doesn't seem so bad. After all, metalinguistic negation, even though it is (by hypothesis) targeted at a full-blown speech act, is usually accompanied by stress on a particular part of the asserted sentence:

She didn't make *a* mistake, she made *many* mistakes.

The target of the metalinguistic negation--assuming that's what it is--is, first, an assertion of the form "she made a mistake"; but it is clear that within this sentence, the target is the lexical item "a".

Tractatus 2.0211, two ways

The original says:

"whether a proposition has sense cannot depend on whether another proposition is true."

But this is contradicted by, for example, the H & K semantic approach. For them, [[both]] = \lambda f \in D_et such that |f| = 2 . \lambda g \in D_et . f \subseteq g (identifying f and g with the sets they characterize); any sentence of the form "Both Fs are G" will have a truth-value only if there are (exactly) two Fs. But that, of course, is true iff the proposition "there are two Fs" is true. The semantic entry for "both" is presupposition-laden; if what Wittgenstein is envisioning is a semantics without presupposition, then it is certainly a semantics that is very far from the way formal semantics is currently used to model natural language.
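Here is a sketch of how such a presupposition-laden entry looks if implemented (my own rendering, with one-place predicates modelled as the sets they characterize): the definedness condition |F| = 2 is a precondition on the first argument, so the entry is a partial function.

```python
# A rendering of the Heim & Kratzer-style entry for "both" (my own sketch).
# Predicates of type <e,t> are modelled as sets of individuals; the cardinality
# condition is a presupposition, so the function is undefined when it fails.

class PresuppositionFailure(Exception):
    pass

def BOTH(F):
    """[[both]](F): defined only if F has exactly two members."""
    if len(F) != 2:
        raise PresuppositionFailure("'both' presupposes that there are exactly two Fs")
    return lambda G: F <= G          # both Fs are G iff F is a subset of G

# "Both doors are open" has a truth-value only if there are exactly two doors,
# i.e., only if the proposition "there are two doors" is true.
doors, open_things = {"d1", "d2"}, {"d1"}
print(BOTH(doors)(open_things))      # False (defined, since |doors| == 2)
```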

The use Stalnaker makes in "Assertion" of Tractatus 2.0211 is importantly different:

"The same proposition is expressed relative to each possible world in the context set." (88)

Now we are ok! Perhaps the way to gloss Wittgenstein's remark should be: "which proposition is expressed by an assertion cannot be different in different worlds in the context."

Tuesday, June 1, 2010

The Sauerland Algorithm, exclusive "or", and compositionality

The explanation of a given implicature--particularly a given scalar implicature--often goes as follows. We form what is called a Horn scale of lexical items, such as: [some, most, all].
Each entails the others to its left (assuming a nonempty domain): l3 -> l2 -> l1. We then use a scalar implicature to determine that an assertion of one of the lexical items li implicates ~lj for any j > i. This last step goes by way of the unassertability of lj, together with an assumption that the speaker is opinionated about the truth-value of lj, to the conclusion that lj is unassertable because false (rather than unassertable because unknown).

This seems a bit...formal to me, but it seems right. Geurts calls a similar procedure (for "some" and "all") a "classical Gricean account" of scalar implicature.
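As a toy rendering of that recipe (entirely my own formalization, with propositions modelled as sets of worlds and the scale listed weakest item first, so that later items entail earlier ones):

```python
# A toy version of the classical Gricean recipe for scalar implicature (my own sketch).
# Asserting a scale item implicates the negation of each stronger item, provided the
# speaker is assumed opinionated about the stronger items; without that assumption we
# only get the weaker, epistemic conclusion ("unassertable because unknown").

def scalar_implicatures(scale, asserted_index, worlds, opinionated=True):
    implicatures = []
    for j in range(asserted_index + 1, len(scale)):
        label, stronger_prop = scale[j]
        if opinionated:
            implicatures.append(("not " + label, worlds - stronger_prop))
        else:
            implicatures.append(("speaker does not know " + label, None))
    return implicatures

# Toy model: worlds record how many of three relevant students came (0-3).
worlds = {0, 1, 2, 3}
SOME = {w for w in worlds if w >= 1}
MOST = {w for w in worlds if w >= 2}
ALL_ = {w for w in worlds if w == 3}
scale = [("some", SOME), ("most", MOST), ("all", ALL_)]

print(scalar_implicatures(scale, 0, worlds))   # asserting "some": implicates not-most and not-all
```

The opinionated flag marks exactly the step noted above: drop it and the recipe delivers only the weaker conclusion that the stronger items are unknown, not that they are false.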

The point of the formality of the Horn scale might be this. (At least: here's a problem that the Horn Scale might be an attempted solution to.) The puzzle of generating implicatures is a puzzle of generating relevant alternatives: things p such that it's actually pragmatically significant that the speaker didn't say p. But putting it this way is likely to make the problem seem intractable: how on earth are we to know, on the basis of our knowledge of the language alone, what these relevant alternatives are? Isn't what might have been said, but wasn't, a hopelessly open-ended and context-sensitive matter? The Horn scale helps us cut down on our alternatives. In this way, it should probably be seen as a way of generating some relevant alternatives--surely not all, for that depends on what other words appear in the sentence.

Looked at this way, the Sauerland algorithm seems very silly. The desideratum is a good one: we want to generate, for a given disjunction "A v B," the relevant alternatives: "A," "B," and "A and B." But do we really need to shoehorn this observation into the mold of the Horn Scale by coming up with "lexical items" that provide alternatives?--these would be Sauerland's "silent binary connectives" L and R.

Why do we need, as Alonso-Ovalle reports, to generate "a [partially ordered] scale of lexical items"? Why not simply say that the relevant alternatives generated for "or" are what we think they are?--if some principled reason needs to be given for this, we could say: "whenever we have a sentence with sentential proper parts, those proper parts are relevant alternatives to the original sentence, to which scalar implicature reasoning may apply."
(This is not of course a necessary condition for being an alternative, but it is a sufficient one, and in combination with the Horn mini-scale [or, and], it will do the job.)

Alonso-Ovalle then reports that

"Fox (2006) points out that considering all maximal consistent subsets of the set of negated Sauerland competitors of a disjunction allows for the derivation of the exclusive component of disjunctions (84)."

[the story is this: take all the maximal subsets of the set of negated Sauerland competitors that are jointly consistent (together with the asserted disjunction), then generate a set containing only those propositions which are in *all* of these maximal subsets. Voila! We get the strengthened meaning, which is the lexical meaning of the "or" plus the exclusive component.]
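Here is a small implementation of that computation (my own sketch; following Fox's notion of innocent exclusion, I require each subset of negated competitors to be consistent together with the asserted disjunction, which is what makes the derivation come out right):

```python
# Deriving the exclusive component of "A or B" by intersecting the maximal sets of
# negated Sauerland competitors that are consistent with the assertion (my own sketch).
from itertools import combinations

worlds = [(a, b) for a in (0, 1) for b in (0, 1)]   # the four valuations of the atoms A, B
A       = {w for w in worlds if w[0]}
B       = {w for w in worlds if w[1]}
A_and_B = A & B
A_or_B  = A | B

def neg(p):
    return set(worlds) - p

competitors = [A, B, A_and_B]                # the Sauerland competitors of "A or B"
negated = [neg(p) for p in competitors]

def consistent(props, prejacent):
    """Are these propositions jointly consistent with the asserted disjunction?"""
    out = set(prejacent)
    for p in props:
        out &= p
    return bool(out)

# Step 1: all maximal subsets of the negated competitors consistent with the assertion.
maximal = []
for r in range(len(negated), 0, -1):
    for combo in combinations(range(len(negated)), r):
        if consistent([negated[i] for i in combo], A_or_B):
            if not any(set(combo) <= set(m) for m in maximal):
                maximal.append(combo)

# Step 2: keep only what is negated in *every* maximal subset, then conjoin with the assertion.
in_all = set(maximal[0]).intersection(*map(set, maximal))
strengthened = set(A_or_B)
for i in in_all:
    strengthened &= negated[i]

print(strengthened == (A | B) - (A & B))     # True: the exclusive reading of "A or B"
```

(Dropping the requirement of consistency with the assertion would let all three negated competitors stand together, and the "strengthened meaning" would collapse into a contradiction; that is why the parenthetical qualification above matters.)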

This seems like a (very complicated, unjustified) step in the wrong direction, though. What's doing the work in the generation of implicatures is not the truth of the competitors, but their known truth. While it's not possible for all the Sauerland competitors to be false while the disjunction is true, it is possible for all the Sauerland competitors to be unknown while the disjunction is known.

Here is Alonso-Ovalle on Sauerland's own disclaimer:

"So far, we have assumed that the atomic disjuncts are made visible to the pragmatics via the Sauerland algorithm. The visibility of the disjuncts depends on the assumption that "or" forms a lexical scale with two silent operators (L and R). But this assumption still needs to be justified. To quote Sauerland himself: 'Evidently, the adoption of [L and R] is more of a technical trick than a real solution for the problem just discussed. However, the intuition underlying it, that the use of the word "or" drives the computation of scalar implicatures, also underlies Horn's quantitative scales and seems sound. Therefore I hope future research will show that the apparent clumsiness here is due to my technical execution, not the idea.' " (92)

Alonso-Ovalle concludes, "to the extent that there is an alternative way to make the atomic disjuncts visible to the pragmatics, L and R become superfluous."

I guess I don't see why we need to make the disjuncts visible to the pragmatics; they just are visible! I can make sense of a need to "make the disjuncts visible" to the semantics; what this means is that you want a semantics on which e.g. [[left hand or right hand]] /= [[hand]], in such a way that you may quantify over the disjuncts in your semantic entries.

Alonso-Ovalle's personal beef with the Sauerland algorithm is that it gives the wrong result when one disjunct entails another:

John ate two or three bagels.

Sandy is reading Huck Finn, Great Expectations, or both.

To this, it must be added that one disjunct's entailing another is usually verboten:

Ann is wearing a dress or a red dress.

Joe drives a car or a Cadillac.

Perhaps Alonso-Ovalle's beef could be solved if we took the disjunction to be metalinguistic? Although note that

Smith is meeting a woman or Mrs Smith.

...does sound a bit odd.

***
"The observation that the interpretation mechanism needs access to each individual disjunct to capture the exclusive component of disjunctions can be traced back to Reichenbach."