In this chapter, we prescind from higher-order vagueness and its grand unification with the Tarski hierarchy. Instead, we confront simple truth-value gaps, the kind that arise for words like ‘dommal’ and Carnap's version of ‘soluble’:
(Dommal) Being a dog is sufficient for being a dommal, and being a mammal is necessary.
(Soluble 1) (∀x)(∀t): x is placed in water at time t -> (x is soluble <-> x dissolves at t)
(Soluble 2) (∀x)(∀y): x and y are the same chemical substance -> (x is soluble <-> y is soluble)
We should note that on Carnap's intended reading of these postulates, there is no answer to the question, "is x soluble?", if x (or anything of the same chemical substance as x) has never been placed in water. The rule for the use of "soluble" appears to presuppose both (i) that x has been placed into water, and (ii) that solubility does not cut across chemical-substance-kind lines.
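To fix ideas, here is a minimal sketch in Python of the resulting partial predicate (the three-valued return type, the sample names, and the toy data are my own illustration, not anything in Carnap or McG/McL):

from typing import Optional

# Toy record of the relevant facts: which samples have been placed in water
# (and whether they dissolved), and which samples are of which chemical substance.
dissolved_when_placed = {"salt_sample_1": True, "wax_sample_1": False}
substance_of = {"salt_sample_1": "NaCl", "salt_sample_2": "NaCl",
                "wax_sample_1": "paraffin", "mystery_sample": "unknown_compound"}

def soluble_c(x: str) -> Optional[bool]:
    # (Soluble 1): if x itself has been placed in water, its behavior settles the matter.
    if x in dissolved_when_placed:
        return dissolved_when_placed[x]
    # (Soluble 2): otherwise, defer to a tested sample of the same chemical substance.
    for y, dissolved in dissolved_when_placed.items():
        if substance_of.get(y) == substance_of.get(x):
            return dissolved
    # Neither postulate applies: the question "is x soluble_c?" has no answer.
    return None

print(soluble_c("salt_sample_2"))   # True, settled indirectly by (Soluble 2)
print(soluble_c("mystery_sample"))  # None: a genuine truth-value gap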
McG/McL contrast these terms, with their artificially well-defined gaps, with more "penumbral" vague terms like "heavy":
"We know exactly when and why the term [Carnap's "soluble"] was introduced into the language, and we know exactly how it is used. More important, we can say exactly which facts are relevant to the term's application...relevant considerations are neatly localized." (1)
It seems that what is important about these terms is that epistemicism's case is weak for them. I can stipulate into existence a word like "soluble_c" [_c for 'Carnap'] or "dommal", and simply *refuse* to give conditions for their application which cover all cases. What shall we say about such terms? It is unlikely that other factors, like contextual ones, will fill in the gap that I have left open; the usage patterns of other speakers cannot do so either, since I have just introduced the term. Yet such a term *could* be so introduced and adopted into the language. (Indeed, it seems like many of our terms *do* come with very substantive presuppositions---what is less clear to me is whether it's right to understand e.g. "dommal" this way.) The authors conclude that "'soluble_c' gives us [an] unmistakable example of [a] truth-value gap" (2). This is important to the dialectic because the existence of a (nonempty) gap is the supervaluationist's entering wedge.
The authors then consider what we might call "Williamson's gambit" in response to the Dommal problem. This is that "x is a dommal" is false of any non-dog, since the rules have not done enough to make it *true* of a non-dog. For Williamson, the allocation of truth is stingy; this is how we may adjudicate the truly gap-happy cases. An analogous ruling in the soluble_c case would make "x is soluble_c" false for anything that had never been placed in water. McG/McL respond that such a ruling would be seriously out of tune with the use speakers would make of the term once they had adopted it---since "dommal" is not *used* like "dog", it seems artificial to give it the same truth-conditions as "dog" as a result of applying the arbitrary "truth is stingy" rule.
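To see the contrast concretely, here is a toy rendering of the two candidate truth-assignments for "dommal" (the function names and the cat example are mine, purely illustrative):

from typing import Optional

def dommal_gappy(is_dog: bool, is_mammal: bool) -> Optional[bool]:
    # The stipulated rule: dogs are dommals, non-mammals are not, and nothing else is settled.
    if is_dog:
        return True        # being a dog is sufficient
    if not is_mammal:
        return False       # being a mammal is necessary
    return None            # non-dog mammals fall in the gap

def dommal_stingy(is_dog: bool, is_mammal: bool) -> bool:
    # Williamson's gambit: truth is stingy, so every unsettled case defaults to falsity.
    return is_dog          # "x is a dommal" comes out false of every non-dog

# A cat is a non-dog mammal: no verdict on the first rule, plain falsehood on the second.
print(dommal_gappy(is_dog=False, is_mammal=True))   # None
print(dommal_stingy(is_dog=False, is_mammal=True))  # False

The gappy rule issues no verdict on non-dog mammals; the stingy ruling collapses the gap into falsehood, which is just what McG/McL find artificial.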
We should be cautious in reflecting on the "dommal" and "soluble_c" rules. Are they rules for *truth*, or are they rules for *usage*? Rules for usage---especially unembedded usage---will systematically underdetermine truth-conditions. While I might be able to stipulate usage rules (in effect I am doing this all the time, simply by using my words as I do), it is not so clear that I can stipulate truth-conditions, except in artificial, short-lived contexts; I have wide discretion over my *use* of sentences, and far narrower control over whether what I say with those sentences is *true*. Meditating on this distinction does seem to tell in Williamson's favor; after all, usage patterns do not rule out the discovery of informative identities, and it is unclear whether we ever stipulate gappy truth-conditions, even if gappy usage is common.
McG/McL go on to argue that the ‘(T)-for-utterances’ schema:
If u says that p, u is true iff p.
...is falsified by gappy usage, e.g. by the pattern of usage that would arise if "dommal" were adopted by the linguistic community. The authors reintroduce the idea of a tension between two different kinds of truth here, when they consider the question of whether we should embrace the schema.
Horn 1: "We embrace the schema as an a priori maxim that we intend to hold onto whether or not it reflects the facts of usage." We want "true" as a logical device and the schema's status is axiomatic.
Horn 2: We reject the schema for "dommal" and "soluble_c". In particular, we reject it because it entails bivalence via Williamson's argument from LEM (roughly: if u says that p, then by the schema and its companion schema for falsity, u is true iff p and u is false iff not-p; by LEM, p or not-p; so u is true or u is false). [?...not sure this is right; the text is a bit unclear here, since it actually seems to argue *from* Bivalence *to* LEM!] (...And we accept LEM, because we accept classical logic.)
****
Then there is a bit of meditation on the epistemicist's take on things: the way he understands the terms "precise", "vague" and "determinately." The contrast here is between epistemicism and semanticism.
We rehearse the semanticist's diagnosis of the fallacious inference from "there is a red tile adjacent to a nonred tile" to "the word 'red' (or the concept *red*) has a sharp boundary" (16); again, the diagnosis rests on the intuitive pull of two competing, but distinct, notions of truth: one which supervenes on usage [where usage can be and is gappy] and another which is disquotational-classical and therefore leaves no gap.
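A toy supervaluational model may make the diagnosis vivid. Nothing below is McG/McL's own formalism; the tile numbering and the range of admissible cutoffs are invented for illustration:

tiles = list(range(10))            # tile 0 is clearly red, tile 9 clearly not
precisifications = range(3, 8)     # admissible cutoffs: on each, "red" = tiles below the cutoff

def red(tile, cutoff):
    return tile < cutoff

def supertrue(claim):
    # Supertrue = true on every admissible precisification.
    return all(claim(c) for c in precisifications)

# "There is a red tile adjacent to a nonred tile" comes out supertrue:
boundary_somewhere = lambda c: any(red(t, c) and not red(t + 1, c) for t in tiles[:-1])
print(supertrue(boundary_somewhere))    # True

# But for no particular tile t is "t is red and t+1 is not" supertrue:
print([t for t in tiles[:-1] if supertrue(lambda c, t=t: red(t, c) and not red(t + 1, c))])  # []

Each precisification draws a boundary somewhere, so the existential claim is supertrue; but no particular tile is determinately the last red one, so no sharp boundary for "red" follows. That is the gap between the two notions of truth.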
***
We proceed now to the indictment that the supervaluationist cannot really respect classical logic because she cannot reason classically (even if she can get all the classical tautologies). In a nutshell, the response will be this: she can reason classically all she likes with "plue". The restrictions on reductio, proof by cases, etc. that come with "determinately" should be seen as limitations on these proof schemas for an operator which *enriches* the classical language:
"The valid inferences are the ones sanctioned by the classical predicate calculus, as described in any standard logic text. The semanticist isn't proposing a nonclassical definition of validity; she's proposing a nonclassical definition of truth. She regards the classically valid modes of inference as truth preserving, and she asks whether there are any other modes of reasoning, in addition to those identified by classical logic, that we can also count on to enable us to derive true conclusions from true premises. This quest should not be understood as the search for an expanded notion of validity, because the semanticist is perfectly content with the notion of validity as we have it already." (23-24)
So the semanticist McG/McL have in mind should be seen as distinguishing between *valid* forms of inference on the one hand, and *truth-preserving* forms of inference on the other. Validity is classical validity. Truth preservation is *supertruth*-preservation. Not every truth-preserving argument is valid.
A helpful way to picture this: Williamson's view is that supervaluationist logic, which rejects reductio, DS, etc., imposes a severe *restriction* on classical logic; that is because the supertruth-preserving inference forms are a *subset* of the valid ones. McG/McL respond by denying that; for them, the valid arguments are a subset of the supertruth-preserving ones. That's because they are keeping all of classical logic---with its reductios, DS's, and all the rest---and augmenting it with an operator, 'D'---whose associated forms of inference, it must be granted, do not include reductio and DS. It is possible for McG/McL to hold that they have included all of classical logic, contra Williamson, and that their picture is in harmony with the inferential practices of e.g. mathematicians who reason by reductio, because their system does countenance unrestricted reasoning by reductio in any context which does not include vague terms. Mathematical languages are precise*, so the mathematicians have committed no oversight. Moreover, we account for classically valid inferences in the general population by postulating that the popular standard for validity is the preservation of supertruth. This has no effect on our domain of classical logic, by Stone's theorem [see below]. And it prevents the "collapse" of the interderivability of "Tr 'p' " and "p".
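Here is a small Python illustration of why the restrictions bite on 'D' rather than on classical logic; the three-precisification model is my own toy, and it just rehearses the standard point about global (supertruth-preserving) consequence:

from itertools import product

def supertrue(values):
    # A sentence is supertrue iff it is true on every admissible precisification.
    return all(values)

def determinately(p_values):
    # 'Dp' is true at a precisification iff p is true at every precisification,
    # so it takes the same classical value everywhere.
    return [supertrue(p_values)] * len(p_values)

def conditional(antecedents, consequents):
    # The material conditional, computed precisification by precisification.
    return [(not a) or c for a, c in zip(antecedents, consequents)]

# Over every assignment of classical values to p at three precisifications,
# the inference from p to Dp preserves supertruth...
for p_values in product([True, False], repeat=3):
    assert (not supertrue(p_values)) or supertrue(determinately(p_values))

# ...but when p is a borderline case, the conditional p -> Dp is not supertrue,
# which is why conditional proof and reductio cannot be applied blindly once 'D' is in play.
borderline = [True, True, False]
print(supertrue(conditional(borderline, determinately(borderline))))   # False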
...I think I understand this picture, but it still seems to me that the populace is faulted by it. After all, it means that when the populace reasons classically by cases, they leave a third case out. It seems like what might be required here is an argument that the populace *does* reason by three cases rather than two (the populace "includes the excluded middle") when the premises involve vague terms, which is, if we take Russell's argument, always; but I am not sure that the populace does this. I believe Keefe thinks the folk do this...
***
Finally, we move on to some many-valued metalogic to flesh out the contrasting picture. The distinction is made between (i) having many truth-values [real numbers between 0 and 1], on the one hand, and (ii) having many *designated* truth-values, on the other. If there is a Boolean algebra defined on these many values, we have Stone's theorem:
[Stone's Theorem] For any Boolean algebra B and any proposed inference, the following are equivalent:
(1) The inference is valid in classical sentential calculus.
(2) The inference is strongly B-valid (it preserves the top truth-value, 1, from premises to conclusion).
(3) The inference is weakly B-valid (it preserves truth-values >= b from premises to conclusion, where b is the threshold for the 'designated' values).
...as far as I can tell, the Boolean algebra bit boils down to this: the many-valued logic is truth-functional. So Stone's theorem simply tells us that if we keep classical consequence, the choice between strong and weak B-consequence is a non-choice. To wit: the expansion of classical logic undertaken by McG/McL's supervaluationist system is free to choose between the preservation of truth at a point (preservation of an intermediate degree of truth) and the preservation of global truth (preservation of the highest degree of truth--supertruth).
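As a quick sanity check, here is a toy verification of the equivalence in Python for one small Boolean algebra, the four-element algebra of subsets of {1, 2} (the choice of algebra, the designation threshold, and the example inferences are mine, purely illustrative):

from itertools import product

universe = frozenset({1, 2})
elements = [frozenset(s) for s in ([], [1], [2], [1, 2])]   # the four truth-values

def neg(a): return universe - a
def disj(a, b): return a | b

TOP = universe
threshold = frozenset({1})                     # designated values: those >= this threshold
def designated(a): return threshold <= a

def strongly_valid(premises, conclusion):
    # Preserves the top value under every B-valuation of the letters p, q.
    return all(conclusion(p, q) == TOP
               for p, q in product(elements, repeat=2)
               if all(prem(p, q) == TOP for prem in premises))

def weakly_valid(premises, conclusion):
    # Preserves designated (>= threshold) values under every B-valuation.
    return all(designated(conclusion(p, q))
               for p, q in product(elements, repeat=2)
               if all(designated(prem(p, q)) for prem in premises))

# Disjunctive syllogism, p v q, ~p |- q (classically valid):
ds_prem = [lambda p, q: disj(p, q), lambda p, q: neg(p)]
print(strongly_valid(ds_prem, lambda p, q: q), weakly_valid(ds_prem, lambda p, q: q))    # True True

# p v q |- p (classically invalid):
bad_prem = [lambda p, q: disj(p, q)]
print(strongly_valid(bad_prem, lambda p, q: p), weakly_valid(bad_prem, lambda p, q: p))  # False False

Disjunctive syllogism comes out both strongly and weakly B-valid, and the classically invalid inference fails both, just as the theorem predicts for this algebra.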
It could be the case that good inferences, the kind that appear in math journals, are cases of strongly B-valid inferences. Strongly B-valid inferences preserve supertruth. So we shouldn't think that, just because we are introducing a logic which restricts e.g. *reductio*, we are contravening the math journals; it's just that *they* were assuming the premises were supertrue, while we are giving a logic in which assumptions may have a lower degree of truth than that. McG/McL's suggestion is that this take on the data of ordinary (and extraordinary) inferential practices isn't ruled out by what we know about how people reason. What I have suggested is that, so far, we seem farther along the road to vindicating the mathematicians than the folk.
Of course, the key here is that the Boolean operators are truth-functional! It is quite striking that McG/McL's form of supervaluationism is truth-functional, since of course this gives up the analysis of conditionals that seemed to be such a strength of the supervaluationist account...or perhaps it does not. The concern here is first and foremost with classical logic, which makes use of the material conditional. Nothing yet said prevents McG/McL from endorsing, e.g., a metalinguistic analysis of ordinary-language conditionals that could capture their non-truth-functional behavior in a supervaluationist framework.
****
*...are they? It seems rather common for there to be gaps in mathematical terminology of the "dommal" kind, even if not of the more regular "heavy" kind.