Friday, July 24, 2009

Restricting MP vs. restricting discharging*

Does this difference make a difference?

The inference in question is: p -> D(p)

Contraposition: ~D(p) -> ~p [looks bad!]

Imp: ~p v D(p) [looks bad!]

R-Imp: D(p) v ~p

Evans's revised proof:
~D(a=b) Assume for reductio.
F(b) Pred. Abst. (F = property of being indef. = to a)
D(a=a) ? [``identity does not admit of borderline cases"]
~F(a) Pred. Abst.
~(a=b) Leibniz' Law
D~(a=b) ? [many strikes: this is a hyp. context, not clear what justifies the rule anyway, esp. in presence of higher-order vagueness.]
[we have no assumptions, so the conclusion of the proof should be an axiom when the assumption is discharged.]

Heck's way of putting the equivocal result:

``Whether Evans's argument shows that there can be no vague objects may now seem to be but a terminological question...[It] does show that `~D(a=b)' is unsatisfiable; it does not show that `D(a=b)' is valid. If we identify the view that there are vague objects with the view that there are (or might be) true sentences of the form ~D(a=b), Evans has shown there are no vague objects. If, on the other hand, we identify the view that there are vague objects with the view that `D(a=b)' is not valid, then he has not."

A much simpler way of making the same point appears to be suggested by McF & K. If MP is not valid, then there's no way to even get close to a contradiction. (By ``close", I mean: to get a contradiction even inside the scope of an assumption that, as it turns out, can't be discharged.)

The McF & K way is to refer us to Restricted MP, which is (always) valid when the antecedent is either world- or info-invariant and the consequent is info-invariant.

The method of evaluating the conditional (``if phi, then psi") is: (i) contract the original information state until (i.a) it is the unique maximal phi-subset, or (i.b) you have one of n maximal phi-subsets, in which case step (ii) must be carried out for all of them; (ii) check that psi is true throughout the remaining information state.
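A toy sketch of this evaluation method (my own construction, not McF & K's official semantics): take an information state to be a set of worlds and a proposition to be the set of worlds at which it is true. With propositions so construed, the maximal phi-subset is unique, so only case (i.a) arises here.

```python
# Toy model (my simplification): an information state is a set of worlds;
# a proposition is the set of worlds at which it is true. In the general
# case phi may be info-sensitive and step (i.b)'s multiple maximal
# phi-subsets can arise; with plain world-sets the contraction is unique.

def contract(info, phi):
    """Step (i): contract the state to its (here unique) maximal phi-subset."""
    return info & phi

def conditional_true(info, phi, psi):
    """Step (ii): 'if phi, then psi' holds at `info` iff psi is true
    throughout the contracted state."""
    return contract(info, phi) <= psi

# Worlds 1-4; phi true at {1, 2}; psi true at {1, 2, 3}.
info = {1, 2, 3, 4}
print(conditional_true(info, {1, 2}, {1, 2, 3}))  # True
print(conditional_true(info, {1, 2}, {1}))        # False: psi fails at world 2
```

The second call fails because psi does not hold throughout the contracted state, which is just the check step (ii) describes.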

Now: let's say p -> D(p) is a rule of inference, in Heck's sense (as opposed to a rule of proof). It does seem to be exactly the kind of conditional which is NOT VALID in hypothetical contexts.
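To illustrate in the same toy world-set model (again my own construction): read D(p) as p's holding at every world in the information state. Evaluated by the contraction method just described, ``if p, then D(p)" comes out true; but detaching D(p) fails, because D(p) is not info-invariant.

```python
# Toy illustration (my own construction): 'D(p)' read as p's holding at
# every world in the information state. The conditional p -> D(p) is
# evaluated by contracting to the p-worlds; Modus Ponens then fails to
# detach the consequent.

def definitely(info, p):
    return info <= p  # D(p): p settled throughout the state

info = {1, 2}   # two open worlds
p = {1}         # p true at world 1 only: a borderline case

# Evaluate 'if p, then D(p)': contract to the p-worlds, check D(p) there.
contracted = info & p                    # {1}
print(definitely(contracted, p))         # True: the conditional holds

# But detaching fails: p is true at world 1, yet D(p) is false relative
# to the uncontracted state -- MP is not valid here, because the
# consequent D(p) is not info-invariant (cf. Restricted MP above).
print(1 in p)                            # True
print(definitely(info, p))               # False
```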

Compare Quine's proof, with Nec() and Pos():

Pos(~(9 = num planets)) Assume for reductio.
F(num pl.) Pred. Abst. (F = property of possibly being non-identical to 9)
9 = num planets Astronomy
Nec(9=9) premise [axiom, iron law, etc.]
~Pos(~(9=9)) Modal Shift
~F(9) Pred. Abst. (note: both OK because 9 is a rigid designator)
~F(num planets) Sub. of identicals
F(num pl.) & ~F(num pl.) & Intro
[Contradiction] ~Intro

Do we have an instance of the inference form p -> D(p) here? We do have Nec(9=9), which COULD be gotten from 9=9 and the suspect inference. But the intuition underlying Nec(9=9) isn't that at all. Also, the proof WOULD go through if we had gone from step 3, ``9 = num pl.", to something stronger, namely ``Nec(9 = num pl.)". But this wouldn't have been a truth-preserving step. ``9 = num pl." is only a contingent identity (if it is an identity at all; perhaps it's more perspicuous to say that it isn't. Instead, number-of-planets-hood is predicated of 9 in the actual world. It would be off-the-wall to give ``identity doesn't admit of borderline cases" as a justification for such a step.)

Conclusion: the indefinitist case (Heck's terminology) is really stronger than he makes out. Re-interpreting the restriction on rules of inference as a restriction on Modus Ponens (although the two are equivalent in terms of what can be proved) makes Evans's argument look much less significant.

****

Heck, ``That there might be vague objects (so far as concerns logic)"

MacFarlane and Kolodny, ``Ifs and Oughts"
