1Q) Where did we leave off on the question of whether perceptual experiences have propositional contents?
1A) It seems uncontroversial to say that experiences have contents, in the sense that they tell us things. (And in the sense that verbs like "see" take direct objects.) But this is only neutral if we don't take "content" to simply be synonymous with "propositional content"; for example, Ming vases can tell us things about their owners, but Ming vases don't have propositional content. For sentences with perceptual verbs, we often have bare infinitive complements, as in
John saw Mary cry. ["bare infinitive complement" for the verb "see"]
Semanticists have proposed that the semantic value [[Mary cry]] could be: an event (Davidson, Higginbotham '93) or a situation (Moltmann). Analyzing such locutions was a project of situation semantics. Ordinary English usage across the variety of verbs of experience is very wide here; the landscape of ordinary language is not nearly as uniform as it is in e.g. the case of "believes" and the related family of doxastic verbs. Also, it is worth keeping in mind that our project is not first and foremost to give a correct semantics for e.g. "see"; it is to get at what "sees"- and "looks"-statements are getting at.
2Q) LOT. When I tried to articulate a "contextualist" response to Hellie's claim that exact-ers (people who think we perceive colors exactly but "shiftily"--e.g. Jackson and Pinkerton) are saddled with "slight nonveridicality" in experiences, MM asked me what in the story was supposed to be the analog of e.g. "red_29", the predicate that applies to patch b in context 1 but not in context 2. He suggested that the answer might be the syntactic token "red_29" in a LOT.
Why would one endorse a LOT? (Other than wanting to be a contextualist on the pattern of contextualists about natural language predicates?) I asked:
2.5Q) Could perceptual constancy be marshalled as data supporting such a view? ...Siegel suggests as much in the SEP. A view would go something like this: the experience of a sunlight-dappled gray cabinet presents us with the same nominatum (the grayness of the cabinet) under different modes of presentation (the lighter or darker 'color' patches in our visual field, which help communicate to us that there is a uniformly colored gray cabinet before us).
2A) Yes, that is how LOT could function here.
2.5A) Nonono! This is gravely confused. First of all, a LOT would most naturally be used to get *rid* of the need to make a Fregean sense-reference distinction amongst the contents of perception. The purpose of a Fregean sense-reference distinction is to explain how `a=a' is different from `a=b' *not in virtue of the sentences' syntactic features*:
"But this relation would hold between the names or signs only in so far as they named or designated something. It would be mediated by the connexion of each of the two signs with the same designated thing. But this is arbitrary. Nobody can be forbidden to use any arbitrarily producible event or object as a sign for something. In that case the sentence a = b would no longer refer to the subject matter, but only to its mode of designation; we would express no proper knowledge by its means. But in many cases this is just what we want to do." (Frege, Sinn Bedeutung Paragraph I)
But if we avail ourselves of a LOT, the syntactic feature explanation becomes viable again, in which case the postulation of senses is threatened with redundancy.
Moreover: perceptual constancy is not the right kind of phenomenon to be doing the work of motivating a sense-reference distinction for perceptual content. What we need is: (i) an informative identity and an uninformative identity; (ii) a reason to think the informativeness of the informative identity cannot be explained in terms of sameness of denotation [for then there would be no difference between `a=a' and `a=b']; (iii) a reason to think the informativeness of the identity cannot be explained in terms of sameness of sign [Frege's gloss: signs may be used however we like; however, in the sense in which this is true, knowledge is not, in general, expressed by such usages...it is not as if we may create whatever knowledge we like by using signs however we like.] [my gloss: in addition, such an account could not explain why some distinct (though synonymous) strings--e.g. 'my cat', 'the cat that belongs to me'--*don't* express something cognitively significant when conjoined by '='. We need something that cuts finer than denotation but not so fine as signs or symbols.]
Perceptual constancy is a phenomenon whereby the color of the cabinet is perceived as unchanging. So it isn't clear that we do have an informative identity here. To wit: it's not clear we have an analog of the cognitively significant "a=b" (would it be "region 1 is the same color as region 2"?) as well as the cognitively insignificant "a=a" (would it be "region 1 is the same color as region 1"?)
Monday, November 1, 2010
Friday, October 29, 2010
Notes on Chierchia's Dynamic Binding
What keeps tripping me up about Chierchia's system is all the going back and forth between w's and p's. w's are assignment functions (to dynamic variables...the kind that can be co-bound across clause boundaries). p's are sets of assignment functions. At any point in a discourse, there is a current w--a current assignment function. But also at any point in the discourse, there is a current p: this is a set of admissible continuations of the discourse. A sentence affects both: it can e.g. open up new 'cards' in w, and it can also affect which ways the discourse can continue. A sentence may do each without the other. A plausible example of the first kind of sentence is "a thing_xi is self-identical". A plausible example of the second kind is "Bill smokes", or "I don't have any children."
We should ask ourselves, though, in what sense w is an assignment function. Typically a file (i) only records partial information about discourse referents (certainly, for example, not information that is uniquely identifying) and (ii) is defined only for some discourse referents (its domain is partial).
To connect our ordinary notion of an assignment with the more intuitive notion of a file (as described above), we could proceed as follows. A file like
1 = cat, black
2 = woman, tall, owner of 1
...could be associated with a *set* of assignment functions which satisfy it. If we called this a proposition p, then, for example, an assignment sending 1 to one black cat and 2 to a tall woman who owns it, and another assignment sending them to a different such cat-owner pair, would both be in p. But I'm not sure whether this is done! Investigation into this preliminary point is required!!
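To make this concrete for myself, here is a toy Python sketch of the file/assignment picture. This is my own gloss, not Chierchia's formalism, and the domain, individuals, and predicates are all invented: a file records partial conditions on discourse referents, and p is the set of assignment functions w that satisfy it.

```python
# Toy model of the file/assignment distinction: a file records partial conditions
# on discourse referents; p is the set of assignments satisfying it.
# The domain, individuals, and predicates are invented for illustration.

from itertools import product

DOMAIN = {
    "felix":    {"cat", "black"},
    "whiskers": {"cat", "black"},
    "tom":      {"cat", "orange"},
    "mary":     {"woman", "tall"},
    "jane":     {"woman", "tall"},
    "sue":      {"woman", "short"},
}
OWNS = {("mary", "felix"), ("jane", "whiskers"), ("sue", "tom")}  # (owner, owned)

# The file from the example: 1 = cat, black; 2 = woman, tall, owner of 1.
FILE = {1: {"cat", "black"}, 2: {"woman", "tall"}}
FILE_RELATIONS = [("owns", 2, 1)]

def satisfies(w):
    """Does assignment w (dict: discourse referent -> individual) satisfy the file?"""
    if not all(conds <= DOMAIN[w[ref]] for ref, conds in FILE.items()):
        return False
    return all((w[a], w[b]) in OWNS for rel, a, b in FILE_RELATIONS if rel == "owns")

# w's are individual assignment functions; p collects every w that satisfies the file.
assignments = [dict(zip(FILE, combo)) for combo in product(DOMAIN, repeat=len(FILE))]
p = [w for w in assignments if satisfies(w)]
print(p)   # [{1: 'felix', 2: 'mary'}, {1: 'whiskers', 2: 'jane'}]
```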
Wednesday, October 27, 2010
Byrne and Hilbert on color (in BBS)
...wherein we get a taste of what it is like to philosophize about color. In particular, we consider physicalism about color, as opposed to Eliminativism or Conventionalism. Our guiding background theory is representationalism:
What must colors be for them to be represented in our color experiences? We assume that propositions--bearers of truth/falsity--are extractable from perceptual experiences in general: that "the proposition that *there is a red bulgy object on the table* is part of a subject's experience" when he looks at a red tomato (5). We assume that colors are represented in experience so conceived, and ask what colors must be for this to be possible.
What are the alternatives to Physicalism about color? Eliminativism: there ain't no colors. Dispositionalism: colors are dispositions to produce certain experiences in perceivers. To dispositionalism we pose "Berkeley's Challenge": why be dispositionalist about color and not dispositionalist about every other property (shape, size, etc.) that we can detect through the senses? (or more generally, that we can detect at all?) There are, of course, some things to say here---colors are only detectable via one sense-modality, physicalist explanations of phenomena that don't involve humans/animals rarely mention color, etc., but we set those aside for now. Primitivism: colors are not physical properties, not dispositions, and not nothing. Finally, Physicalism, which B&H endorse, identifies color with some physical property of (colored) objects. As they see it there are two main (related) objections to physicalism: physicalism cannot account for "the structure of phenomenal space" (they cite Boghossian and Velleman for this point), and, more particularly, physicalism cannot account for the "opponent-process theory of vision", which presents several generalizations about which colors humans perceive in terms of the relative degrees of stimulation of their short-, medium-, and long-wavelength photoreceptors.
The first incarnation of physicalism is that colors are reflectance properties. The reflectance of a given (uniformly colored) object can be given as a graph with percentages from 0-100 on the y-axis and wavelengths on the x-axis; the height of the graph at a particular value of x indicates what proportion of incident light of wavelength x is reflected by the object. Reflectance appears to be the best candidate for something which is both a physical property and a property to which we are sensitive in our color vision.
Three objections are raised: the first has to do with "metamers." These are pairs of objects whose reflectance graphs look very different but whose perceived colors are the same (the objects are indistinguishable under normal light). B&H note that they will have to bundle different reflectance properties together to define *determinable* (as opposed to determinate) colors anyway, so there is no objection in principle to identifying colors with sets of reflectance properties. These sets might not be, in any sense, "natural":
"Surfaces with grossly different reflectances can perceptually match even under fairly normal illuminants....so the reflectance-types that we identify with the colors will be quite uninteresting from the point of view of physics or any other branch of science unconcerned with the reactions of human perceivers. This fact does not, however, imply that these categories are unreal or somehow subjective. (11)"
...I take it this is a somewhat significant cost, since it seems that an account of colors in these terms would suggest that our color-concepts are highly gerrymandered. But perhaps this is the best we can do.
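Since a reflectance is just a function from wavelength to proportion of light reflected, the metamer point can be illustrated numerically. Below is a toy Python sketch with made-up cone sensitivities, a made-up illuminant, and made-up reflectance curves (nothing here is real colorimetric data); the only point is that two visibly different reflectance curves can yield identical (L, M, S) cone responses under a fixed illuminant, which is all metamerism comes to.

```python
# Toy illustration of metamerism: two different reflectance curves with identical
# cone responses under one illuminant. All curves here are invented, not real data.

import numpy as np

wl = np.linspace(400, 700, 301)                       # wavelengths in nm
dx = wl[1] - wl[0]
gauss = lambda mu, sig: np.exp(-((wl - mu) ** 2) / (2 * sig ** 2))

# Made-up long-, medium-, short-wavelength sensitivities and a flat illuminant.
L, M, S = gauss(570, 50), gauss(540, 50), gauss(440, 30)
illuminant = np.ones_like(wl)
kernels = np.stack([illuminant * k for k in (L, M, S)])

def cone_response(reflectance):
    """(L, M, S) responses: integrate reflectance * illuminant * sensitivity."""
    return kernels @ reflectance * dx

refl1 = 0.2 + 0.5 * gauss(600, 60)                    # a smooth "reddish" reflectance

# Build a metamer: add a perturbation projected to be invisible to all three cones.
bump = gauss(500, 15) - gauss(650, 15)
coeffs = np.linalg.lstsq(kernels.T, bump, rcond=None)[0]
invisible = bump - kernels.T @ coeffs                 # orthogonal to every kernel
refl2 = refl1 + 0.2 * invisible                       # stays within [0, 1] for these curves

print(cone_response(refl1))                           # same triple of responses...
print(cone_response(refl2))
print(np.max(np.abs(refl1 - refl2)))                  # ...from clearly different curves
```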
We then have a brief (quarantined) digression on transparent objects and colored lights. The authors suggest we shift to what they call "productance" (a sum of light reflected and light emitted) to account for these. They stress that although productance is relative to illuminants, the property productance enters into (the candidate property to be identified with color) is independent of any particular illuminant. There is also a somewhat shocking aside about so-called "related" and "unrelated" colors: related colors are only perceived when certain other colors are in the scene. Apparently brown is such a color. I don't really understand B&H's reply, but it's got something to do with the fact that the perception of color constancy apparently relies a lot on the colors of other perceived objects in the environment.
On to the objection from phenomenal structure. Among the things to be explained are the distinction between binary hues (hues experienced as proportions of different colors--e.g. orange) and unique hues (e.g. red), as well as the opponent structure of the colors (green-red, yellow-blue, etc.) To respond to this challenge, B&H invoke the representational content of color-experiences:
"such heroism [the attempt to reject the explanatory demand from phenomenal structure] is not required. In our view, the phenomena of color similarity and opponency show us something important about the *representational content* of color experience--about the way the color properties are encoded by our visual systems And once we have a basic account of the content of color experience on the table, it will be apparent that there is no problem here for physicalism. (13)"
We complicate the picture--not the picture of color, as far as I can tell, but the picture of the content of color experiences. Before, experiences with "color content", given that color x is the physical property F, were simply of the form "a is F." Revised thesis: experiences represent "objects as having proportions of hue-magnitudes." [NB here hue = color, in the physicalist sense defined and defended above.] So, for example, where F and G are hue-magnitudes, our experience tells us something like "a is both F and G, and it is twice as F as it is G."
Is there a cheat here? We started out wanting to account for e.g. the opponency of colors in terms of features of the colors themselves; instead, we wound up giving an account of the opponency of colors in terms of the content of a color-experience. Comparison: we could account for the "additive opponency" of certain pairs of integers (like 2 and -2) in terms of features of the numbers themselves---presumably this is what we do. Or we could give an account of the additive opponency of the pairs in terms of our experience. (I'm not sure what this would come to in the case of numbers.) Nothing can appear both squat and thin, since squatness consists in being wider than one is tall and thinness consists in being taller than one is wide; given this, we can account for the "opponency" of experiences of squat objects and experiences of thin objects by saying that squatness is the property of being wider than one is tall, and thinness is the property of being taller than one is wide. Moreover, we seem to have backed away from suggesting that we can actually detect reflectance properties---rather, we can merely detect relative ratios of reflectance properties. (Compare: we can actually perceive width and height and calculate squatness from these two things, or: we can only perceive the presence or absence of squatness.) This thought seems to be behind B&H's discussion of the lengths of sticks (14).
We now revisit several other objections to physicalism. The first comes from variation amongst color-perceiving subjects: which objects are perceived as "unique green" or balanced orange (a hue exactly as red as it is yellow) varies from subject to subject, often by a margin which is quite large relative to the variation within any individual subject. This fact leads some philosophers (notably C. L. Hardin) to espouse a kind of "conventionalism" about green: in the absence of consensus about which hue chip is unique green, "the question...can be answered only by convention (17)." Hardin is also an eliminativist, and his two thoughts seem connected here (although at first blush they are inconsistent--how can we just pick one when the real answer is "none"?). If the suggestion is that we must espouse an error theory about unique green (given the variation amongst color-perceivers, any choice of a hue chip as the real unique green will make a majority of perceivers wrong), and this error theory is unacceptable (it would be better to espouse eliminativism about colors), then the authors' response is simply that an error theory is not really so disastrous. They remind us that we are speaking of determinate rather than determinable properties. (We are not espousing an error theory for green---only for unique green.) They also remind us that we are not ready to espouse eliminativism about a host of other properties which are often perceived inaccurately (they use the example of spatial properties which are commonly misperceived by people with slightly mismatched retinal images across their two eyes.)
There are, however, responses that make reference to the peculiarity of the case of color. First, color is not detectable via other sense-modalities (if we are in error visually we cannot use other senses as independent checks--this distinguishes color from spatial properties). Secondly, color properties, if they exist, do not enter into any "data or theories of any sciences other than those concerned with animal behavior" (e.g., they only enter into intentional explanations--again, unlike spatial properties). So there seems to be a *more* significant cost to espousing an error theory for a color-property (and if physicalists are right, unique green *is* a color-property) than for one of these other properties. Unique green will *not* enter into very many intentional explanations--in the usual way, at least--if most perceivers are in error about which objects are unique green. (We could substitute *a belief that something is unique green* in our explanations of their behavior, but then, since the belief that A is F does not in general entail the existence of a real property F, we could do just as well without it.) (17). B&H's response seems to be this: in order to save the phenomena of ordinary perceivers, we need green to be a really existing property. But if green is a really existing property, then unique green exists.
(The response is a sort of "supertruth" response: on each acceptable adjudication of the boundaries of green [corresponding to each slightly different perceiver], some hue chip is unique green. So it is supertrue that some hue chip is unique green even if it isn't supertrue *of* any particular hue-chip that *it* is unique green. The weaker thing is all we need to be realists about green and unique green.)
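The supervaluation shape of that response can be made explicit with a toy sketch (the chips, perceivers, and verdicts are all invented): the existential "some chip is unique green" comes out true on every admissible precisification, while no instance does.

```python
# Toy supervaluation: "some chip is unique green" is true on every admissible
# precisification (one per perceiver), though no single chip counts as unique
# green on all of them. Chips and verdicts are invented for illustration.

CHIPS = ["chip_488", "chip_495", "chip_503", "chip_510"]

# Each admissible precisification picks out which chip counts as unique green.
PRECISIFICATIONS = {
    "perceiver_A": {"chip_495"},
    "perceiver_B": {"chip_503"},
    "perceiver_C": {"chip_510"},
}

def supertrue(sentence):
    """A sentence is supertrue iff true on every admissible precisification."""
    return all(sentence(ug_set) for ug_set in PRECISIFICATIONS.values())

# "Some chip is unique green": supertrue.
print(supertrue(lambda ug: any(chip in ug for chip in CHIPS)))        # True

# But for no particular chip is "this chip is unique green" supertrue.
print({chip: supertrue(lambda ug, c=chip: c in ug) for chip in CHIPS})
# {'chip_488': False, 'chip_495': False, 'chip_503': False, 'chip_510': False}
```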
Finally, we revisit the inverted spectrum. B&H suggest that the inverted spectrum thought-experiment is basically irrelevant to the thesis of physicalism about color as they have presented it. Their presentation relied on representationalism about color-experience. This can be true even if (as proponents of inverted spectrum scenarios often think) "what it's like" to have an experience is not exhausted by the representational content of that experience. B&H do point out, though, that to the extent that a "phenomenist" (an opponent of representationalism/intentionalism qua thesis that representational content *exhausts* phenomenal character) believes that features of color like opponency and the binary/unique distinction are features of "what it's like", and that's not representational, he will give a different account of these features than the representationalist does.
Tuesday, September 21, 2010
Nonindexical Contextualism and content
Today, we look askance at the word "content". What is this term, from philosophy of mind, doing in our semantic theorizing? We are not sure; our suspicions are roused.
Yet Prof. MacFarlane talks at length about content in "Nonindexical contextualism," where he argues for the existence and viability of a view on which epistemic operators are context-sensitive without being indexical. Translation into our suspect terminology: an epistemic standard parameter may play a circumstance-determinative role without playing a content-determinative role in our semantics.
From his discussion, the following, at least, are clear about content:
*the content of a sentence-at-a-context is intuitively identified with a proposition
*sentences with indexicals express different propositions (hence, have different contents) at different contexts.
*the content of a sentence determines its truth-value at a context of use.
...From this, it seems that the right thing to conclude is that the content of a sentence-at-a-context is just its semantic intension. All the indexicals are, so to speak, "filled in", but the resulting intension has not yet been evaluated at the circumstance of the context, so the intension has not yet been reduced to an extension (either T or F).
Another way to get a bead on content, suggested by MacFarlane's discussion, is to look at the dispute between Temporalism and Eternalism. For Eternalists, the time of the context gets into the content of tensed sentences like "Socrates is sitting":
"On the Eternalist's view, the sentence ["Socrates is sitting"] varies in truth-value across times because it expresses different propositions at different times." (4)
This suggests the following gloss on the difference between Eternalism and Temporalism in terms of semantic values:
(Temp) [[Socrates is sitting]]^c_{\varnothing}* = \lambda w. \lambda t. Socrates is sitting in w at t
(Eter) [[Socrates is sitting]]^c_{\varnothing} = \lambda w . \lambda t . Socrates is sitting in w at t_c.
...hence on this view what it means for the Eternalists and Temporalists to disagree about what proposition "Socrates is sitting" expresses is for them to disagree on the intension of the sentence. Behold the "t_c" in the Eternalist's semantic entry; this is an indexical, just as "speaker_c" would be the meaning of "I" or "loc_c" would be the meaning of "here." So, on this entry, the Eternalist just thinks that "Socrates is sitting" is synonymous with "Socrates is sitting now."
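To see what rides on the "t_c" in (Eter), here is a toy Python rendering of the two entries; the worlds, times, contexts, and the facts about Socrates are all invented, and this is just my own way of picturing the contrast. It also previews the worry of the next paragraph: a temporal operator that shifts t gets no grip on the eternalist content.

```python
# Toy rendering of (Temp) vs. (Eter). Worlds, times, contexts, and the facts
# about Socrates are invented for illustration.

SITTING = {("w1", "noon"), ("w1", "dusk")}      # world/time pairs at which Socrates sits
TIMES = ["dawn", "noon", "dusk"]

def content_temp(c):
    """(Temp): the time coordinate is left open, hence shiftable."""
    return lambda w, t: (w, t) in SITTING

def content_eter(c):
    """(Eter): the time of the context, t_c, is baked in, as with 'now'."""
    w_c, t_c = c
    return lambda w, t: (w, t_c) in SITTING     # the shifted t is simply ignored

def ALWAYS(content, c):
    """'It will always be the case that ...': shift the time coordinate of the index."""
    w_c, t_c = c
    return all(content(w_c, t) for t in TIMES)

c = ("w1", "noon")
print(ALWAYS(content_temp(c), c))   # False: Socrates is not sitting at dawn in w1
print(ALWAYS(content_eter(c), c))   # True: with t_c plugged in, ALWAYS has nothing to shift
```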
What I find confusing about this is that I am unable to give a proper semantic entry for temporal operators on the eternalist's view. "Socrates is sitting" does NOT behave like "Socrates is sitting now" in that, of course, [[It will always be the case that Socrates is sitting]] is not the same as [[It will always be the case that Socrates is sitting now.]] Any eternalist view must account for this difference; the Eternalist cannot be so easily refuted as that!
...He must account for it in the same way that a possible worlds theorist accounts for the intuitive truth conditions of "Snow is white" as opposed to "Snow is actually white." He must hold that unembedded assertions like "Snow is white" are by default evaluated at the world of the context, but that they are still shiftable in the scope of modal operators. I can see two ways to do this. One is to mimic, for our intuitive notion of content, the semanticist's intensional type-lift: hold that content is usually extension, unless it is *forced* to be intension by the presence of a modal operator. Hence: an unembedded "snow is white" utterance is true if snow is white in w_c, yet the contribution "snow is white" makes in the scope of modal operators does not refer us back to w_c.
The problem with this is that it makes nonsense of the other things MacFarlane says about content. For example, he says of the sentence "tomorrow comes after today" that it expresses different contents at different contexts, while having the same truth-value at every context. However, if the content of an unembedded expression is its EX-tension, then it cannot vary in this way.
The only other way I can see is to hold that content is what Prof. Yalcin calls "centered diagonal content", where this is lambda-abstraction over the c parameter. Hence:
(Temp) [[Socrates is sitting]]^wt_{\varnothing}** = \lambda c. Socrates is sitting at w_c
(Eter) [[Socrates is sitting]]^wt_{\varnothing} = \lambda c . Socrates is sitting at w_c and t_c.
Note once again the absence of t_c from (Temp). What, now, would the difference between (Temp) and (Eter) come to? Almost nothing, which is, perhaps, the point...It seems only to support the following intuition:
"If I had said "Socrates is sitting" at another time, it would have expressed a different content."
...this is true for (Eter) and not for (Temp). These creatures, (Eter) and (Temp), are NOT the arguments of temporal operators. What *are* the intensions on this view? They must be...
(Temp) [[Socrates is sitting]]^c_{\varnothing} = \lambda w . Socrates is sitting in w
(Eter) [[Socrates is sitting]] ^c_{\varnothing} = \lambda w . \lambda t . Socrates is sitting in w at t
Now this is pretty odd. On this version of temporalism, sentence truth looks like this:
A sentence S is true at a context c iff [[S]] is true at ⟨w_c, t_c⟩. [hence there need be no temporal content "in" S.]
(A sentence s is true at a context c iff the proposition expressed by S is true when evaluated at the circumstance of C. = (25), pg. 21.)
And temporal operators look...metalinguistic, I guess, like this:
[[ALWAYS \phi]]^c is true at ⟨w_c, t_c⟩ iff for every context c' such that w_{c'} = w_c, [[\phi]] expresses a truth at the circumstance determined by c'; that is, [[\phi]]^{c'} is true at ⟨w_{c'}, t_{c'}⟩.
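For my own clarity, here is a toy rendering of that metalinguistic operator (contexts and facts invented, a sketch rather than anyone's official semantics): ALWAYS quantifies over contexts sharing the world of the original context, re-evaluating the embedded sentence at each context's own circumstance.

```python
# Toy version of the "metalinguistic" ALWAYS above: quantify over contexts that
# share the world of the original context, and re-evaluate the embedded sentence
# at each such context's own circumstance. Contexts and facts are invented.

SITTING = {("w1", "noon"), ("w1", "dusk")}
CONTEXTS = [("w1", "dawn"), ("w1", "noon"), ("w1", "dusk"), ("w2", "noon")]

def socrates_is_sitting(c):
    """Truth of the unembedded sentence at the circumstance determined by c."""
    w_c, t_c = c
    return (w_c, t_c) in SITTING

def ALWAYS_metalinguistic(phi, c):
    """True at c iff phi expresses a truth at every c' with the same world as c."""
    w_c, _ = c
    return all(phi(c2) for c2 in CONTEXTS if c2[0] == w_c)

print(ALWAYS_metalinguistic(socrates_is_sitting, ("w1", "noon")))   # False (fails at dawn)
```

Footnote *** below is the reason the same trick can't be run for the world parameter and "necessarily."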
...this view looks very strange to me, though we should note that, for the analogous move with epistemic standard parameters, it is widely conceded that there *are* no shifters. The arguments in favor of such a view must mostly be content-based ones in the phil-mind sense, because semantically speaking, it's ugly.*** (Perhaps the right thing to say about this ugliness is that we are simply no longer speaking about any kind of semantic value, or anything straightforwardly derivable from a semantic value.)
Here is such a consideration: the propositions we believe (= the contents of sentences at contexts) intuitively don't determine truth "all by themselves." Sam believes it is 0 degrees--is that belief true or false? Well, we don't know until we know the time and place his belief 'concerns'. We also need to know what world he's in, and intuitively his belief isn't about worlds:
"One might respond to these considerations by bringing the world of the context of use into the *content* of Sam's thought. But intuitively, Sam could have had a thought with the same content even if the world had been very different." (16)
According to Temporalism, both worlds and times play a circumstance- but not content-determining role. For Eternalism, times play a content-as-well-as-circumstance-determining role. Yet I am puzzled about how to make this work in the formal semantics, because I am puzzled by how content is supposed to interact with intensions. We have a good argument that intensions must have an *open* (hence shiftable) time-parameter, but whether this legislates that time does not play a content-determining role depends on what the relationship between intensions and contents is. In the epistemic standards case, once it is conceded that there are no shifters, we don't need an open (hence shiftable) epistemic-standards parameter in the intension. Perhaps we don't need one *at all*, and it is this that nonindexical contextualism amounts to. The difference then would not be between a *free* t-parameter and a [contextually] *bound* t-parameter, but rather a difference between a [contextually] *bound* e-parameter and...no e-parameter at all.
******
*i.e., the intension of the sentence, rather than the extension. (Following Heim and von Fintel.) Note the index here comprises world-time ordered pairs.
**Generalizing the pattern to mean abstraction over c but not w and t? Probably this is abuse of notation.
***Note that this won't work for the world parameter anyway, for familiar reasons: it is not sufficient for the truth of the sentence "necessarily p" that p express a truth at every context.
*******
MacFarlane, J. "Nonindexical Contextualism." Synthese 166, 2009.
Yalcin, S. "Notes on semantics, context, and content". Handout at UC Berkeley for Phil 290-5, 9/16/2010.
Thursday, September 16, 2010
Writing about...
Instead of blindly trying to edit my paper by asking myself if I agree with every individual sentence, I've decided to step back and remind myself what I'm trying to accomplish. This could get rocky!...Hang on!...
When we talk about the contents of a perceptual experience, we assign to it a kind of abstract entity, a function from situations to truth-values, which is very much like the kind of abstract entity we assign to a meaningful, well-formed sentence (occurring at a context). When we do this, we assign to perceptual experiences Lewis [1980]'s "first job" for semantic values: determining a truth-value at a context.
However, a big difference from the case of giving semantics for sentences is immediately apparent: perceptual experiences don't have meaningful *parts.* So we cannot give ourselves the compositional project of accounting for the content of the whole experience in terms of the contents of the parts. There is no analogue, for theorizing about perceptual content, of the constraints that compositionality imposes on our semantic theorizing "from below": we just don't know what it would *mean* to give a theory of content for the whole of a perceptual experience by giving a theory of content for the experience's parts, because, er, experiences don't *have* parts in the same way that sentences do. Or at least, *if* they do, this would come as a big fat surprise to me.
We can say that the need for a compositional theory of language is grounded in the bald fact about our language that it consists of sentences which are s.t. the meaning of the whole supervenes on the meanings of the parts. Yet it is worth noting that this is not the way compositionality is usually argued for. The argument is usually put in terms of our *knowledge* of our language: we say that we need a compositional theory for our language because we observe that knowledge of the parts of a well-formed sentence is sufficient for knowledge of the meaning of a novel combination of those elements. (This is what Prof. Yalcin calls the "productivity" of our linguistic knowledge.) I am somewhat confused here as to whether there is a genuine difference in views; presumably our knowledge is productive because our language is productive, and our language is productive because our knowledge of it is productive. Perhaps there is no genuine difference here, just two different ways of putting the point. The first way is to make the compositionality of language sound like a *metaphysical* truth: the meaning of our language just *is* s.t. the meanings of whole sentences are determined by the meanings of their parts. Hence the meanings of the parts are metaphysically sufficient for determining the meanings of new wholes constructed out of those parts. The second way to look at it is as an *epistemic* truth: it turns out to be true of us that we can understand novel sentences whenever we have prior "acquaintance" with their parts. Hence, if we are giving a theory which is supposed to capture (not just the nature of our language but) the nature of OUR KNOWLEDGE of our language, the theory should account for this fact. It would be compatible with such a theory, and such an outlook on the constraint of compositionality, that other creatures with other epistemic habits could speak the SAME language as we do, yet not be "productive" consumers of the language in the same sense that we are. Is this really possible?...I am inclined to feel rather skeptical that this is possible. So to the extent that the two views of compositionality come apart, I am more sympathetic towards the "metaphysical" view.
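For contrast with the perceptual case, here is the simplest toy I can think of for what compositionality "from below" buys us in the sentential case (the fragment, its lexicon, and its "world" are invented): the value of a novel sentence is computed from the values of its parts, which is precisely the resource we lack for experiences.

```python
# A minimal toy of compositionality "from below": the value of a whole sentence is
# computed from the values of its parts. The fragment and its "world" are invented.

WORLD = {"snow": {"white", "cold"}, "grass": {"green"}}

LEXICON = {
    "snow":     lambda w: "snow",
    "grass":    lambda w: "grass",
    "is white": lambda w: (lambda x: "white" in w[x]),
    "is green": lambda w: (lambda x: "green" in w[x]),
}

def value(sentence, w):
    """[[Subject Predicate]](w) = [[Predicate]](w)([[Subject]](w))."""
    subject, predicate = sentence
    return LEXICON[predicate](w)(LEXICON[subject](w))

# Novel combinations are interpretable from knowledge of the parts alone:
print(value(("snow", "is white"), WORLD))    # True
print(value(("grass", "is white"), WORLD))   # False -- a "new" sentence, no new learning needed
```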
We return to the nature of perceptual experiences, and the project of assigning them content. We have just come from the observation that we cannot go, as it were, below the level of the whole representational experience, to assign content to its proper parts (upon which the content of the whole will then metaphysically-cum-epistemically supervene).* But that's okay...after all, that part of compositionality is not even what Lewis is interested in. He is interested in compositionality "from above": in the constraint imposed by the fact that well-formed sentences compose with sentential operators. The question becomes: what (how complex a) semantic value must we assign to the argument-sentence so that this semantic value is sufficient for determining the semantic value of the whole sentence? It turns out--famously...--that the semantic value must be sensitive to things that are not in (and not recoverable from) the set of all contexts: these further coordinates make up the set of possible indices.
Now, the analogy I am pursuing in the paper, which is only a partial analogy, is this: although there are ALSO no "experience operators" around for philosophers of perception to talk about, there *is* something quite similar: there are distinct phenomenal states which the content of perceptual experience can interface with. One of these is imagination. Another is memory. Perhaps yet a further one is premonition...who knows?
What is this like, in the philosophy of language?
Well, it's really more like giving a theory of content for representational mental states, rather than sentences. One and the same content can be both believed and supposed (maybe entertained as possible, even, where this is a separate state.) We want to seek this thing, the content, which can be both believed and supposed, so that we intuitively get right what belief and supposition have in common and how they differ.
One oft-proposed connection between the content of representational mental states and philosophy of language is the project of giving a semantic account of *what is said* (which might or might not be identical with giving a semantics for the verb "said that"). Roughly, the idea is that what you say expresses what you believe, and that is just the semantic value of your sentence evaluated at the context of your believing/uttering. The context will play two roles: it will initialize the index (to wit, it is the index of the context that will determine whether the sentence is true or false) and it will fix the extension of indexical terms.
Upshot: maybe our project is like giving an account of what is said/what is believed. What this means is that we may not need an index as well as a context.
Here are two arguments meant to push us to the "index too" conception:
1) Indexicals have semantic values which are sensitive to context. But indexicals don't shift in the scope of modal operators.
I don't think this consideration is relevant either to the language case OR (a fortiori...) to the representational state case. We could just say that indexicals are rigid designators. So while they're sensitive to context for their semantic values, the semantic values don't shift. This could be implemented in several ways. We could restrict the accessibility relation on other contexts. We could define an intermediate notion of 'contextual proposition', where all the values for the indexicals were fixed, and then quantify over whatever remained in the scope of the operator (this would be the equivalent of giving the indexicals widest scope.)
2) There are cases in which the compound sentence you get from intensional operator + sentence just doesn't seem to have the truth-conditions you would get if the intensional operator quantified over contexts. For example, "It might have been that I am not here now." That is true, even though "I am here now" is true in every context. This, it seems to me, is what genuinely makes the case for independently shiftable indices. Note that the argument will go through with an indexical-ridden sentence like "I am here now" even if we took the indexicals to be rigid designators across contexts: all this would mean is that in the scope of the modal operator, "I" always refers to the speaker of the original context, "here" always refers to the location of the original context, and "now" always refers to the time of the original context. In such a language, for example, "I might have been male" isn't made true because a man might have said it; the only contexts quantified over are contexts where the original speaker is still the speaker of the context. So "I might have been male" is false. Nonetheless, "I might not have been here" is still true.
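Here is a toy rendering of that argument (contexts, worlds, and locations invented; a sketch of the standard double-indexing picture rather than any particular author's system): "I am here now" evaluated at its own context is always true, but embedding it under a modal requires a world coordinate that shifts while the indexicals stay anchored to the context.

```python
# Toy double-indexing: "I am here now" is true at every context (evaluated at that
# context's own index), yet "It might have been that I am not here now" is also
# true, because the modal shifts the world of the index while the indexicals stay
# anchored to the context. Contexts, worlds, and locations are invented.

from collections import namedtuple

Context = namedtuple("Context", "speaker world time")
WORLDS = ["w_actual", "w_stayed_home"]

def location(speaker, world, time):
    """Where the speaker is, at a world, at a time (a made-up fact table)."""
    return {"w_actual": "Dennes Room", "w_stayed_home": "home"}[world]

def i_am_here_now(c, w):
    """'I', 'here', 'now' are fixed by the context c; only the index world w shifts."""
    here_c = location(c.speaker, c.world, c.time)     # 'here' = location at the context
    return location(c.speaker, w, c.time) == here_c

def might(sentence, c):
    """'It might have been that ...': shift only the world coordinate of the index."""
    return any(sentence(c, w) for w in WORLDS)

c = Context("melissa", "w_actual", "noon")
print(i_am_here_now(c, c.world))                         # True: true at its own context
print(might(lambda c, w: not i_am_here_now(c, w), c))    # True: needs a shiftable index world
```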
While I think this is the right argument for the language case, this argument does not generalize to the perception case, because as we said, there is no analogue of an intensional operator. So my verdict is that so far the "we must have index as well as context!" claim does not go through.
Could an argument from supposing, considered as an attitude taking a propositional complement, be sufficient to overthrow this conception?
...It might be, depending on what we take supposing to be. It seems that supposing can shift individual coordinates of the index (= individual features of context). We can suppose that I am not here. (I do this all the time, by way of supposing I am elsewhere.) We can suppose it's raining when we don't believe it is. We can suppose that Hesperus is not Phosphorus, for that matter. We can suppose many things which are (i) not true in any context, and furthermore (ii) not true at any index (though we'll leave that one aside for now). Perhaps this is identical with the attitude of pretense.
An argument that we need more than just contexts to represent the contents of our attitudes of supposing would have to proceed from a variation on the second type of argument. If it just seems like an appeal to intuition, that might be ok with me...it seems like a pretty *strong* appeal to intuition! It would just be to say: can you suppose you're not here? And the answer is just: yes. What is a bit unsatisfying about the dialectic of my paper is that this is precisely what my interlocutors deny. How to respond?...well, I guess my intuitive response is that they don't really believe this on the basis of intuition; they are in the grip of a theory. So the task shifts to explaining why the theory is wrong, which is sort of complicated. In that discussion, it seems like my initial advantage is somehow lost, for want of emphasis.
If we want to put non truth-apt supposings in the form of sentences with different truth values, the nearest analogue seems to be the antecedent of a conditional. Suppose p; is q true? This is just: is it true that p->q? So a way to show that you needed indices as well as contexts here would be to find two sentences p and q that did not differ in their truth-at-a-context profile: whenever p is true at a context, q is true at that context, and vice-versa. Yet we could try to find a sentence r that followed from the former but not the latter. That would show that p was contributing more to the evaluation of the conditional than just a set of contexts where it is true.
Suppose p; is r true? (yes)
Suppose q; is r true? (no)
For this strategy, we could use p = "I exist", and q = "I am here."
If I am here, then I am not there. (True)
If I exist, then I am not there. (False)
My method was another common one: just take p to be "I am not here." Since this is true in no context, every conditional with p as an antecedent should sound the same: either all trivially true or all uninterpretable (or whatever, given our theory of conditionals). The point is that there shouldn't be possible to find an A and a B such that "if p, A" strikes us as true while "if p, B" strikes us as false.
Let's try to see exactly how Lewis makes the point. Is it just a bald appeal to intuition? He considers sentences like
"If someone is speaking here then I exist." (True)
"Forevermore, if someone is speaking here then I will exist." (False)
NB The argument works as an appeal to intuition, but only because we are actually assuming that "If someone is speaking here then I exist" is true, and this is only because we haven't taken the more restricted view of indexicals. Clearly, "If someone is speaking in the Dennes Room then Melissa exists" is not true; my existence does not depend on whether people continue to talk in the Dennes Room. However, other cases could be constructed, as above.
****
It has just occurred to me that this might in fact be possible, and the way of explaining e.g. Escher staircase experiences of the type that interest Susanna Siegel...
Siegel, Susanna (2004). Indiscriminability and the Phenomenal. Phil. Studies.
When we talk about the contents of a perceptual experience, we assign to it a kind of abstract entity, a function from situations to truth-values, which is very much like the kind of abstract entity we assign to a meaningful, well-formed sentence (occurring at a context). When we do this, we assign to perceptual experiences Lewis [1980]'s "first job" for semantic values: determining a truth-value at a context.
However, a big difference with giving semantics for sentences is immediately apparent: perceptual experiences don't have meaningful *parts.* So we cannot give ourselves the compositional project of accounting for the content of the whole experience in terms of the contents of the parts. There is no analogue, for theorizing about perceptual content, of the constraints that compositionality imposes on our semantic theorizing "from below": we just don't know what it would *mean* to give a theory of content for the whole of a perceptual experience by giving a theory of content for the experience's parts, because, er, experiences don't *have* parts in the same way that sentences do. Or at least, *if* they do, this would come as a big fat surprise to me.
We can say that the need for a compositional theory of language is grounded in the bald fact about our language that it consists of sentences which are s.t. the meaning of the whole supervenes on the meanings of the parts. Yet it is worth noting that this is not the way compositionality is usually argued for. The argument is usually put in terms of our *knowledge* of our language: we say that we need a compositional theory for our language because we observe that knowledge of the meanings of the parts of a well-formed sentence is sufficient for knowledge of the meaning of a novel combination of those elements. (This is what Prof. Yalcin calls the "productivity" of our linguistic knowledge.) I am somewhat confused here as to whether there is a genuine difference in views; presumably our knowledge is productive because our language is productive, and our language is productive because our knowledge of it is productive. Perhaps there is no genuine difference here, just two different ways of putting the point. The first way is to make the compositionality of language sound like a *metaphysical* truth: our language just *is* s.t. the meanings of whole sentences are determined by the meanings of their parts. Hence the meanings of the parts are metaphysically sufficient for determining the meanings of new wholes constructed out of those parts. The second way to look at it is as an *epistemic* truth: it turns out to be true of us that we can understand novel sentences whenever we have prior "acquaintance" with their parts. Hence, if we are giving a theory which is supposed to capture (not just the nature of our language but) the nature of OUR KNOWLEDGE of our language, the theory should account for this fact. It would be compatible with such a theory, and such an outlook on the constraint of compositionality, that other creatures with other epistemic habits could speak the SAME language as we do, yet not be "productive" consumers of the language in the same sense that we are. Is this really possible?...I am inclined to feel rather skeptical that this is possible. So to the extent that the two views of compositionality come apart, I am more sympathetic towards the "metaphysical" view.
We return to the nature of perceptual experiences, and the project of assigning them content. We have just come from the observation that we cannot go, as it were, below the level of the whole representational experience, to assign content to its proper parts (upon which the content of the whole will then metaphysically-cum-epistemically supervene.)* But that's okay...after all, that part of compositionality is not even what Lewis is interested in. He is interested in compositionality "from above": in the constraint imposed by the fact that well-formed sentences compose with sentential operators. The question becomes: what (how complex a) semantic value must we assign to the argument-sentence so that this semantic value is sufficient for determining the semantic value of the whole sentence? It turns out--famously...--that the semantic value must include things that are not in (and not recoverable from) the set of all contexts. This is the set of possible indices.
Now, the analogy I am pursuing in the paper, which is only a partial analogy, is this: although there are ALSO no "experience operators" around for philosophers of perception to talk about, there *is* something quite similar: there are distinct phenomenal states which the content of perceptual experience can interface with. One of these is imagination. Another is memory. Perhaps yet a further one is premonition...who knows?
What is this like, in the philosophy of language?
Well, it's really more like giving a theory of content for representational mental states, rather than sentences. One and the same content can be both believed and supposed (maybe entertained as possible, even, where this is a separate state.) We want to seek this thing, the content, which can be both believed and supposed, so that we intuitively get right what belief and supposition have in common and how they differ.
One oft-proposed connection between content qua representational mental states and philosophy of language is giving a semantic account of *what is said* (which might or might not be identical with giving a semantics for the verb "said that.") Roughly, the idea is that what you say expresses what you believe, and that is just the semantic value of your sentence evaluated at the context of your believing/uttering. The context will play two roles: it will initialize the index (to wit, it is the index of the context that will determine whether the sentence is true or false) and it will fix the extension of indexical terms.
Upshot: maybe our project is like giving an account of what is said/what is believed. What this means is that we may not need an index as well as a context.
Here are two arguments meant to push us to the "index too" conception:
1) Indexicals have semantic values which are sensitive to context. But indexicals don't shift in the scope of modal operators.
I don't think this consideration is relevant either to the language case OR (a fortiori...) to the representational state case. We could just say that indexicals are rigid designators. So while they're sensitive to context for their semantic values, the semantic values don't shift. This could be implemented in several ways. We could restrict the accessibility relation on other contexts. We could define an intermediate notion of 'contextual proposition', where all the values for the indexicals were fixed, and then quantify over whatever remained in the scope of the operator (this would be the equivalent of giving the indexicals widest scope.)
2) There are cases in which the compound sentence you get from intensional operator + sentence just doesn't seem to have the truth-conditions you would get if the intensional operator quantified over contexts. For example, "It might have been that I am not here now." That is true, even though "I am here now" is true in every context. This, it seems to me, is what genuinely makes the case for independently shiftable indices. Note that the argument will go through with an indexical-ridden sentence like "I am here now" even if we took the indexicals to be rigid designators across contexts: all this would mean is that in the scope of the modal operator, "I" always refers to the speaker of the original context, "here" always refers to the location of the original context, and "now" always refers to the time of the original context. In such a language, for example, "I might have been male" isn't made true because a man might have said it; the only contexts quantified over are contexts where the original speaker is still the speaker of the context. So "I might have been male" is false. Nonetheless, "I might not have been here" is still true.
While I think this is the right argument for the language case, this argument does not generalize to the perception case, because as we said, there is no analogue of an intensional operator. So my verdict is that so far the "we must have index as well as context!" claim does not go through.
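To keep the language-side point in argument 2) vivid for myself, here is a minimal toy model (my own sketch, not Kaplan's or Lewis's official machinery; the agents, worlds, and "facts" are all made up): contexts are agent/location/time/world tuples, indices are shiftable world/time/location coordinates, and "I am here now" comes out true at every context taken with its own index, yet false at a shifted index.

```python
from typing import NamedTuple

class Context(NamedTuple):
    agent: str
    location: str
    time: int
    world: str

class Index(NamedTuple):
    world: str
    time: int
    location: str

# Toy facts: located[(world, time)] maps each existing agent to her location.
# An agent missing from the mapping does not exist at that world-time.
located = {
    ("w1", 0): {"mel": "dennes"},
    ("w2", 0): {"mel": "library"},
    ("w3", 0): {},                  # a world-time at which mel does not exist
}

def proper_contexts():
    """Contexts are 'proper': the agent is at the location in the world at the time."""
    for (world, time), occupants in located.items():
        for agent, loc in occupants.items():
            yield Context(agent, loc, time, world)

def index_of(c):
    """The index initialized by a context."""
    return Index(c.world, c.time, c.location)

def i_am_here_now(c, i):
    """True iff the context's agent is at the index's location at the index's world-time."""
    return located.get((i.world, i.time), {}).get(c.agent) == i.location

# True at every context, evaluated at that context's own index...
assert all(i_am_here_now(c, index_of(c)) for c in proper_contexts())

# ...but false at some (context, shifted index) pairs -- which is what
# "It might have been that I am not here now" needs in order to come out true.
c = Context("mel", "dennes", 0, "w1")
assert not i_am_here_now(c, Index("w2", 0, "dennes"))   # shift only the world
print("true at every context; false at a shifted index")
```

The design point the sketch is meant to display: quantifying over contexts alone could never make the modal claim true, because every context verifies "I am here now"; only shiftable indices give the operator something to operate on.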
Could an argument from supposing, considered as an attitude taking a propositional complement, be sufficient to overthrow this conception?
...It might be, depending on what we take supposing to be. It seems that supposing can shift individual indices (= individual features of context). We can suppose that I am not here. (I do this all the time, by way of supposing I am elsewhere.) We can suppose it's raining but we don't believe it is. We can suppose that Hesperus is not Phosphorus, for that matter. We can suppose many things which are (i) not true in any context, and furthermore (ii) not true at any index (though we'll leave that one aside for now.) Perhaps this is identical with the attitude of pretense.
An argument that we need more than just contexts to represent the contents of our attitudes of supposing would have to proceed from a variation on the second type of argument. If it just seems like an appeal to intuition, that might be ok with me...it seems like a pretty *strong* appeal to intuition! It would just be to say: can you suppose you're not here? And the answer is just: yes. What is a bit unsatisfying about the dialectic of my paper is that this is precisely what my interlocutors deny. How to respond?...well, I guess my intuitive response is that they don't really believe this on the basis of intuition; they are in the grip of a theory. So the task shifts to explaining why the theory is wrong, which is sort of complicated. In that discussion, it seems like my initial advantage is somehow lost, for want of emphasis.
If we want to put non-truth-apt supposings in the form of sentences with different truth values, the nearest analogue seems to be the antecedent of a conditional. Suppose p; is q true? This is just: is it true that p->q? So a way to show that you needed indices as well as contexts here would be to find two sentences p and q that did not differ in their truth-at-a-context profile: whenever p is true at a context, q is true at that context, and vice-versa. Then we could try to find a sentence r that follows from the former but not the latter. That would show that p was contributing more to the evaluation of the conditional than just a set of contexts where it is true.
Suppose p; is r true? (yes)
Suppose q; is r true? (no)
For this strategy, we could use p = "I am here", and q = "I exist."
If I am here, then I am not there. (True)
If I exist, then I am not there. (False)
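Put semi-formally (a sketch, on the simplifying assumptions that each context c comes with an agent a_c, location l_c, time t_c, and world w_c, that a_c exists at and occupies l_c in w_c at t_c, and that i_c is the index c initializes), with p = "I am here" and q = "I exist":

```latex
% Truth at a context (each context c evaluated at its own index i_c):
\[
\{\, c : [\![\, \mbox{I am here} \,]\!]^{c,\, i_c} = 1 \,\}
\;=\;
\{\, c : [\![\, \mbox{I exist} \,]\!]^{c,\, i_c} = 1 \,\}
\;=\; C.
\]
% Truth at a context with a shifted index <w, t, l>:
\[
[\![\, \mbox{I am here} \,]\!]^{c,\, \langle w,t,l \rangle} = 1
\;\Longleftrightarrow\; a_c \mbox{ occupies } l \mbox{ in } w \mbox{ at } t,
\qquad
[\![\, \mbox{I exist} \,]\!]^{c,\, \langle w,t,l \rangle} = 1
\;\Longleftrightarrow\; a_c \mbox{ exists in } w \mbox{ at } t.
\]
```

So the two antecedents coincide as sets of contexts (the first display) but come apart across indices (the second), which is what lets r = "I am not there" be secured by the one and not the other: the conditional is sensitive to the antecedents' profiles across indices, not just across contexts.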
My method was another common one: just take p to be "I am not here." Since this is true in no context, every conditional with p as an antecedent should sound the same: either all trivially true or all uninterpretable (or whatever, given our theory of conditionals). The point is that it shouldn't be possible to find an A and a B such that "if p, A" strikes us as true while "if p, B" strikes us as false.
Let's try to see exactly how Lewis makes the point. Is it just a bald appeal to intuition? He considers sentences like
"If someone is speaking here then I exist." (True)
"Forevermore, if someone is speaking here then I will exist." (False)
NB The argument works as an appeal to intuition, but only because we are actually assuming that "If someone is speaking here then I exist" is true, and this is only because we haven't taken the more restricted view of indexicals. Clearly, "If someone is speaking in the Dennes Room then Melissa exists" is not true; my existence does not depend on whether people continue to talk in the Dennes Room. However, other cases could be constructed, as above.
****
It has just occurred to me that this might in fact be possible, and that it might be the way of explaining e.g. Escher staircase experiences of the type that interest Susanna Siegel...
Siegel, Susanna (2004). "Indiscriminability and the Phenomenal." Philosophical Studies.
Wednesday, September 8, 2010
Fregean contents again
Frege has been much in the air this week. Let's make sure we really understand what it's all about. What we have to puzzle out is (i) what it would mean for perceptual states to have "Fregean" (as opposed to/in addition to "Russellian") contents, (ii) what kind of challenge indexicality, as we find it in natural language, poses for Frege (we do this in the absence of acquaintance with Frege's own work on the subject), and (iii) what Frege can tell us about the distinction, recently discussed in Crimmins's "Hesperus and Phosphorus: Sense, Pretense, and Reference", between a sentence's truth-conditions and its modal contents.
I. Fregean and Russellian contents
Fregeans make a sense-reference distinction. Each meaningful term has two sorts of semantic values, a sense and a reference. The referent of a sentence is a truth-value. Senses determine reference; senses of whole sentences, therefore, are truth-conditions. One and the same referent, for a single term or for a whole sentence, may be picked out by different senses. Frege had two goals in doing this. One was to explain why identity statements could be contingently true. Another was to explain why they could be cognitively significant; different modes of presentation can play distinct inferential roles in our mental lives.
At a first pass, to hold that a perceptual state exhibits Fregean contents is to hold that it contains objects under modes of presentation. We could want these modes of presentation to explain (a) why it is that we do not always recognize the sameness of objects we encounter on different occasions (thus leaving space for cognitively significant breakthroughs when we do come to make informative identity statements on the basis of our perceptions), and (b) a point more intimately related to the phenomenology of perception, one which would remain even if we were omniscient and always knew when we were looking at the same thing twice. This is simply that when we see things, we see them in a certain phenomenal way. Perhaps it is just completely obvious that the things we see are presented to us in a certain way. The way in which things are presented to us is, for example, what naturalistic painters are good at capturing.
Siegel writes that
"one of the two roles of m.o.p.'s is to determine reference...another is to reflect cognitive significance."
What about the determination of reference? The intuition is that when we are perceptually in contact with particular objects, we can perceive them incorrectly yet still have contents involving them. So it seems that mode of presentation does not determine reference; reference is determined by context and causal contact. Note that whether or not this makes perceptual content different from linguistic content depends on whether we take the objects represented in experience to be represented in a "name-like" (or demonstrative-like) way, or in a "complex description"-like way. So what to do?...do we abandon Fregean contents, or endorse Russellian contents as well, to get the objects we are in causal contact with into the contents of our experience? Moreover, if we go the double-content route (both Fregean and Russellian), do we do so for objects, for properties, or for both? A "double-double" content of a red cube (called 'o') would then contain: (1) o, (2) a mode of presentation of o, (3) o's redness, (4) a mode of presentation of o's redness.
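Just to fix the shape of the proposal in my own head (a toy sketch; the class and field names are mine, purely illustrative), a "double-double" content is a four-slot record, and the puzzle in the next paragraph is about whether slot (3) is always occupied:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DoubleDoubleContent:
    obj: str              # (1) the object itself, e.g. the cube o
    obj_mop: str          # (2) a mode of presentation of o
    prop: Optional[str]   # (3) the property instance, e.g. o's redness
    prop_mop: str         # (4) a mode of presentation of that property

# Veridical case: all four slots filled.
seen_in_daylight = DoubleDoubleContent("o", "that cube", "o's redness", "looking red")

# Funny-lighting case, on one reading discussed below: the property slot goes empty,
# which is why insisting on filling it anyway makes the content look overstuffed.
seen_in_funny_light = DoubleDoubleContent("o", "that cube", None, "looking orange")
```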
A puzzle: suppose we incorrectly perceive object o as orange (due to funny lighting). It is certainly o which we are (mis)perceiving, but are we (i) misperceiving its redness (presumably, as orange-ness), or (ii) failing to perceive its redness at all? If we go for (ii) then the double-double-content looks overstuffed; we aren't really getting (3) at all. This seems like the right intuition for this case. The difference between this case and the regular inverted spectrum cases (which inspired Shoemaker, for example, to endorse doubled content for color properties) is that the subject is mistaken by her own criteria; if she were to perceive the cube in regular lighting, she would call it "red", not "orange." On the other hand, the lawlike ways in which funny lighting gives rise to misperceptions of color might convince someone that (3) really gets into the content of the perceptual state after all; if the cube weren't red, it wouldn't have been perceived *as orange* in the funny light.
The upshot seems to be this. Frege's original purpose in making the sense-reference distinction was to get a "meaning" (semantic value) for lexical constituents that was fine-grained enough to explain the informativeness of identity statements, while still making it the case that the fine-grained meaning determined coarse-grained meaning. Now we see arguments that fine-grained (cognitive) content cannot determine coarse-grained (extensional) content. We could just go for both; perhaps not much is lost (according to "two-dimensionalists" about perceptual content) if all that needs to be added to Frege's story is that fine-grained cognitive content *in combination with context* determines extensional content. This is the kind of "narrow content" view (once) endorsed by Fodor. In formal semantics, when it is the context (rather than index) that determines the extension of [phi], then [phi] is an indexical. [phi] is a word, and a constituent of a larger sentence.
Now, I do NOT know what it would mean to say that a constituent of the content of a perceptual experience is an indexical, where 'constituent' is used in the same way.
We will continue this discussion in II below.
II. Frege and Indexicality.
We break from perceptual experience altogether to consider Frege and Indexicality. Consider what a proto-Fregean, equipped only with the sense-reference distinction and the two goals it intends to achieve (cognitive significance and the determination of reference), would have to say about the indexical "I". First, he would face a puzzle about just what the reference of "I" is, since it can be used by different people to refer to different people.
Rather than a sameness of referent [Venus] underlying a proliferation of senses [morning star, evening star], this case gives us a proliferation of different referents underlying a sameness of sense!
The appropriate thing to do, it seems, is abandon the idea that sense determines reference, and argue instead that the sense of "I" only determines reference on an occasion of use. We will have to individuate "occasions of use" finely, however, to capture the fact that only I can secure reference to me using the "I"-mode of presentation. The crucial feature of an "occasion of use," here, is who the user is; you and I cannot occupy the same "occasion of use".
The cognitive significance point is preserved, since it explains why it is informative to learn (on some occasions) that I am the tallest person in the room, while at other times, it is not informative (or indeed even true.) What about a rigid designator statement like "I am Melissa"? We can account for what is cognitively significant about this by characterizing ignorance as ignorance of features of context (aka "occasions of use.")
The new knowledge can be glossed metalinguistically:
I am the referent of "Melissa".
...in which case we can characterize it as an ignorance of features of the world after all. But I suspect that this gloss is a violation of what---now that I can take the phrase from another authority!---I would call "semantic phenomenology." It seems that when I learn that e.g. *I'm* the one making the mess, what I learn is a fact about the world, not a fact about my language. But a deeper understanding of what is at stake in making this point would be facilitated by a better understanding of the two kinds of disquotational and referential principles we use (as discussed for example by McG/McL).*
III. Crimmins's distinction between truth-conditions and modal contents
The simple move here appears to be equating truth-conditions with communicative content (what Dummett calls "Assertoric content") and modal contents with, well, modal contents (what Dummett calls "Ingredient sense"). We can think of what Crimmins says about compositionality in this light. It is ingredient sense--modal content--which is directly constrained by compositionality, because it is in this mode that we consider what a well-formed sentence needs to contribute to larger sentences of which it is a constituent.
We do not assert the sub-sentences of our asserted sentences (except perhaps in the very special case of sentences conjoined by "and") so there is no need to assign assertoric contents directly to sub-sentences. Thus we have some wiggle room to characterize the communicative (= assertoric) content differently than we characterize the modal content of the same sentence...and perhaps we need to do so by explaining mechanisms of pretense, irony, metaphor, etc.
The distinction is particularly important for a pretense account because it is truth-conditions and not modal contents which are globally affected by pretense. It is no part of the pretense account that there are e.g. pretense-referencing or pretense-triggering lexemes within identity statements (or nonexistence statements, etc.) themselves. Rather, there are classifications of whole chunks of discourse, well above the sentential level.
The account can be usefully compared with discourse representation theory in this way...*Except that*, in discourse representation theory, some lexemes *do* operate directly on a representation which is built up above the sentential level!
****
Siegel, S. "The contents of perception", Stanford Encyclopedia of Philosophy
Crimmins, M. "Hesperus and Phosphorus: Sense, Pretense, and Reference." In Martinich, ed., The Philosophy of Language (5th edition), Oxford.
Friday, August 20, 2010
McG/McL, Ch 5: Indeterminate Truth
In this chapter, we prescind from higher-order vagueness and its grand unification with the Tarski hierarchy. Instead, we confront simple truth-value gaps, the kind that arise for words like ‘dommal’ and Carnap's version of ‘soluble’:
(Dommal) Being a dog is sufficient for being a dommal, and being a mammal is necessary.
(Soluble 1) ∀x ∀t [x is placed in water at time t → (x is soluble ↔ x dissolves at t)]
(Soluble 2) ∀x ∀y [x and y are the same chemical substance → (x is soluble ↔ y is soluble)]
We should note that on Carnap's intended reading of these postulates, there is no answer to the question, "is x soluble?" if x hasn't ever been placed into water. The rule for the use of "soluble" appears to presuppose both (i) that x has been placed into water, and (ii) that solubility does not cut across chemical-substance-kind lines.
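As a toy illustration of the kind of gap at issue (my own sketch; "soluble_c" as a function name and the sample objects are made up, and I leave out the second, same-substance postulate), a partial predicate can be modeled as a three-valued function that simply issues no verdict when its presupposition fails:

```python
from typing import Optional

# Toy records: whether a thing was ever placed in water, and whether it dissolved.
history = {
    "sugar_cube":   {"placed_in_water": True,  "dissolved": True},
    "pebble":       {"placed_in_water": True,  "dissolved": False},
    "mystery_lump": {"placed_in_water": False, "dissolved": None},
}

def soluble_c(x: str) -> Optional[bool]:
    """Carnap-style partial predicate: defined only for things placed in water."""
    record = history[x]
    if not record["placed_in_water"]:
        return None          # truth-value gap: the introduced rule says nothing
    return record["dissolved"]

print(soluble_c("sugar_cube"))    # True
print(soluble_c("pebble"))        # False
print(soluble_c("mystery_lump"))  # None -- neither true nor false by the rule
```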
McG/McL contrast these terms, with their artificially well-defined gaps, with more "penumbral" vague terms like "heavy":
"We know exactly when and why the term [Carnap's "soluble"] was introduced into the language, and we know exactly how it is used. More important, we can say exactly which facts are relevant to the term's application...relevant considerations are neatly localized." (1)
It seems that what is important about these terms is that epistemicism's case is weak for them. I can stipulate into existence a word like "soluble_c" [_c for 'Carnap'] or "dommal", and simply *refuse* to give conditions for their application which cover all cases. What shall we say about such terms? It is unlikely that other factors, like contextual ones, will fill in the gap that I have left open; the usage patterns of other speakers cannot do so either, since I have just introduced the term. Yet such a term *could* be so introduced and adopted into the language. (Indeed, it seems like many of our terms *do* come with very substantive presuppositions---what is less clear to me is whether it's right to understand e.g. "dommal" this way.) The authors conclude that "'soluble_c' gives us [an] unmistakable example of [a] truth-value gap" (2). This is important to the dialectic because the existence of a (nonempty) gap is the supervaluationist's entering wedge.
The authors then consider what we might call "Williamson's gambit" in response to the Dommal problem. This is that "x is a dommal" is false of any non-dog, since the rules have not done enough to make it *true* of a non-dog. For Williamson, the allocation of truth is stingy; this is how we may adjudicate the truly gap-happy cases. An analogous ruling in the soluble_c case would make "x is soluble_c" false for anything that had never been placed in water. McG/McL respond that such a ruling would be seriously out of tune with the use speakers would make of the term once they had adopted it---since "dommal" is not *used* like "dog", it seems artificial to give it the same truth-conditions as "dog" as a result of applying the arbitrary "truth is stingy" rule.
We should be cautious in reflecting on the "dommal" and "soluble_c" rules. Are they rules for *truth*, or are they rules for *usage*? Rules for usage--especially unembedded usage---will systematically underdetermine truth-conditions. While I might be able to stipulate usage rules (in effect I am doing this all the time, simply by using my words as I do) it is not so clear that I can stipulate truth-conditions, except in artificial, short-lived contexts; I have wide discretion over my *use* of sentences, and far narrower control over whether what I say with those sentences is *true*. Meditating on this distinction does seem to tell in Williamson's favor; after all, usage patterns do not rule out the discovery of informative identities, and it is unclear whether we ever stipulate gappy truth-conditions, even if gappy usage is common.
McG/McL go on to argue that the "(T)-for-utterances" schema:
If u says that p, u is true iff p.
...is falsified by gappy usage, e.g. by the pattern of usage that would arise if "dommal" were adopted by the linguistic community. The authors reintroduce the idea of a tension between two different kinds of truth here, when they consider the question of whether we should embrace the schema.
Horn 1: "We embrace the schema as an a priori maxim that we intend to hold onto whether or not it reflects the facts of usage." We want "true" as a logical device and the schema's status is axiomatic.
Horn 2: We reject the schema for "dommal" and "soluble_c". In particular, we reject it because it entails bivalence via Williamson's argument from LEM. [?...not sure this is right; the text is a bit unclear here, since it actually seems to argue *from* Bivalence *to* LEM!] (...And we accept LEM, because we accept classical logic.)
****
Then there is a bit of meditation on the epistemicist's take on things: the way he understands the terms "precise", "vague" and "determinately." The contrast here is between epistemicism and semanticism.
We rehearse the semanticist's diagnosis of the fallacious inference from "there is a red tile adjacent to a nonred tile" to "the word 'red' (or the concept *red*) has a sharp boundary" (16); again, the diagnosis rests on the intuitive pull of two competing, but distinct, notions of truth: one which supervenes on usage [where usage can be and is gappy] and another which is disquotational-classical and therefore leaves no gap.
***
We proceed now to the indictment that the supervaluationist cannot really respect classical logic because she cannot reason classically (even if she can get all the classical tautologies.) In a nutshell, the response will be this: she can reason classically all she likes with "plue". The restrictions on reductio, proof by cases, etc. that come with "determinately" should be seen as limitations on these proof schemas for an operator which *enriches* the classical language:
"The valid inferences are the ones sanctioned by the classical predicate calculus, as described in any standard logic text. The semanticist isn't proposing a nonclassical definition of validity; she's proposing a nonclassical definition of truth. She regards the classically valid modes of inference as truth preserving, and she asks whether there are any other modes of reasoning, in addition to those identified by classical logic, that we can also count on to enable us to derive true conclusions from true premises. This quest should not be understood as the search for an expanded notion of validity, because the semanticist is perfectly content with the notion of validity as we have it already." (23-24)
So the semanticist McG/McL have in mind should be seen as distinguishing between *valid* forms of inference on the one hand, and *truth-preserving* forms of inference on the other. Validity is classical validity. Truth preservation is *supertruth*-preservation. Not every truth-preserving argument is valid.
A helpful diagram would be this: Williamson's picture is that supervaluationist logic, which rejects reductio, DS, etc., imposes a severe *restriction* on classical logic: that is because the supertruth-preserving inference forms are a *subset* of the valid ones. McG/McL respond by denying that; for them, the valid arguments are a subset of the supertruth-preserving ones. That's because they are keeping all of classical logic--with its reductios, DS's, and all the rest--and augmenting it with an operator, 'D'---whose associated forms of inference, it must be granted, do not include reductio and DS. It is possible for McG/McL to hold that they have included all of classical logic, contra Williamson, and that their picture is in harmony with the inferential practices of e.g. mathematicians who reason by reductio, because their system does countenance unrestricted reasoning by reductio in any context which does not include vague terms. Mathematical languages are precise*, so the mathematicians have committed no oversight. Moreover, we account for classically valid inferences in the general population by postulating that the popular standard for validity is the preservation of supertruth. This has no effect on our domain of classical logic, by Stone's theorem [see below]. And it prevents the "collapse" of the interderivability of "Tr 'p' " and "p".
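A toy version of the picture (my own sketch, not McG/McL's formalism): take one borderline sentence p, let the precisifications be the two classical valuations that settle it each way, call a sentence supertrue iff true on every precisification, and read 'D' as determinate truth (truth on all precisifications). Then on my rendering p globally entails Dp and not-p globally entails D(not-p), yet "p or not-p" is supertrue while "Dp or D(not-p)" is not, which is the sense in which argument by cases is not supertruth-preserving once 'D' is in the language:

```python
# Formulas: atoms are strings; compounds are tuples ("not", f), ("or", f, g),
# ("and", f, g), ("D", f).  'D f' holds at a precisification iff f holds at ALL of them.

precisifications = [
    {"p": True},    # one admissible sharpening of the borderline predicate
    {"p": False},   # the other
]

def true_at(formula, v, all_v=precisifications):
    if isinstance(formula, str):
        return v[formula]
    op = formula[0]
    if op == "not":
        return not true_at(formula[1], v, all_v)
    if op == "or":
        return true_at(formula[1], v, all_v) or true_at(formula[2], v, all_v)
    if op == "and":
        return true_at(formula[1], v, all_v) and true_at(formula[2], v, all_v)
    if op == "D":
        return all(true_at(formula[1], u, all_v) for u in all_v)
    raise ValueError(op)

def supertrue(formula):
    return all(true_at(formula, v) for v in precisifications)

lem   = ("or", "p", ("not", "p"))
cases = ("or", ("D", "p"), ("D", ("not", "p")))

print(supertrue(lem))    # True: excluded middle is supertrue
print(supertrue(cases))  # False: neither disjunct is determinate, so cases-reasoning
                         # from LEM to this disjunction fails to preserve supertruth
```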
...I think I understand this picture, but it still seems to me that the populace is faulted by it. After all, it means that when the populace reasons classically by cases, they leave a third case out. It seems like what might be required here is an argument that the populace *does* reason by three cases rather than two (the populace "includes the excluded middle") when the premises involve vague terms, which is, if we take Russell's argument, always; but I am not sure that the populace does this. I believe Keefe thinks the folk do this...
***
Finally, we move on to some many-valued metalogic to flesh out the contrasting picture. The distinction is made between (i) having many truth-values [real numbers between 0 and 1], on the one hand, and (ii) having many *designated* truth-values, on the other. If there is a boolean algebra defined on these many values, we have Stone's theorem:
[Stone's Theorem] For any Boolean algebra B and any proposed inference, the following are equivalent:
(1) The inference is valid in classical sentential calculus.
(2) The inference is strongly B-valid (preserves a truth-value of 1 from premises to conclusion)
(3) The inference is weakly B-valid (preserves a truth-value of at least b from premises to conclusion, where b is the threshold "designated" value)
...as far as I can tell, the Boolean algebra bit boils down to this: the many-valued logic is truth-functional. So Stone's theorem simply tells us that if we keep classical consequence, the choice between strong and weak B-consequence is a non-choice. To wit: the expansion of classical logic undertaken by McG/McL's supervaluationist system is free to choose between the preservation of truth at a point (preservation of an intermediate degree of truth), and the preservation of global truth (preservation of the highest degree of truth--supertruth.)
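A brute-force sanity check of the equivalence on a tiny Boolean algebra (my own sketch; the algebra, threshold, and test inferences are arbitrary choices, and it only spot-checks two inferences rather than proving anything): take the four-element algebra of subsets of {1, 2}, let the threshold b be {1}, and compare strong validity (premises at the top element force the conclusion to the top) with weak validity (premises at or above b force the conclusion at or above b).

```python
from itertools import product

W = frozenset({1, 2})
VALUES = [frozenset(), frozenset({1}), frozenset({2}), W]   # the Boolean algebra P({1,2})
B = frozenset({1})                                          # threshold 'designated' value

def neg(a):
    return W - a

def impl(a, b):
    return neg(a) | b            # material conditional: complement(a) join b

def valuations():
    for p_val, q_val in product(VALUES, repeat=2):
        yield {"p": p_val, "q": q_val}

def strongly_valid(premises, conclusion):
    """Whenever every premise takes the top value W, so does the conclusion."""
    return all(conclusion(v) == W
               for v in valuations()
               if all(prem(v) == W for prem in premises))

def weakly_valid(premises, conclusion):
    """Whenever every premise takes a value at or above B, so does the conclusion."""
    return all(B <= conclusion(v)
               for v in valuations()
               if all(B <= prem(v) for prem in premises))

p = lambda v: v["p"]
q = lambda v: v["q"]
p_implies_q = lambda v: impl(v["p"], v["q"])

# Modus ponens (classically valid): both strongly and weakly B-valid here.
print(strongly_valid([p, p_implies_q], q), weakly_valid([p, p_implies_q], q))  # True True

# Affirming the consequent (classically invalid): neither.
print(strongly_valid([q, p_implies_q], p), weakly_valid([q, p_implies_q], p))  # False False
```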
It could be the case that good inferences, the kind that appear in math journals, are cases of strongly B-valid inferences. Strongly B-valid inferences preserve supertruth. So we shouldn't think that, just because we are introducing a logic which restricts e.g. *reductio*, we are contravening the math journals; it's just that *they* were assuming the premises were supertrue, while we are giving a logic in which assumptions may have a lower degree of truth than that. McG/McL's suggestion is that this take on the data of ordinary (and extraordinary) inferential practices isn't ruled out by what we know about how people reason. What I have suggested is that, so far, we seem farther along the road to vindicating the mathematicians than the folk.
Of course, the key here is that the boolean operators are truth-functional! It is quite striking that McG/McL's form of supervaluationism is truth-functional, since of course this gives up the analysis of conditionals that seemed to be such a strength of the supervaluationist account...or perhaps it does not. The concern here is first and foremost with classical logic, which makes use of the material conditional. Nothing yet said prevents McG/McL from endorsing an e.g. metalinguistic analysis of ordinary language conditionals that could capture their non-truth-functional behavior in a supervaluationist framework.
****
*...are they? It seems rather common for there to be gaps in mathematical terminology of the "dommal" kind, even if not of the more regular "heavy" kind.
Tuesday, August 10, 2010
Refl-Heck-tions II: Demonstrata and nonconceptual content
Heck (2000)'s delicate point about demonstrative phenomenal concepts, put two ways:
by Siegel in the SEP:
"Another point of debate raised by McDowell's concerns whether it is possible to form demonstrative concepts of the shade represented in experience in cases of illusion, when the shade represented in experience differs from the shade of the thing seen. If demonstrative concepts of color shades can pick out only shades actually had by the thing demonstrated (as Heck 2000 contends), then again McDowell's argument fails. However, it is again a matter of controversy whether demonstrative concepts are limited in this way. Yet another point of debate in this area is whether experience itself would be needed to anchor demonstrative concepts in the first place — in which case, it is said, they could not already be constituted by them (Heck defends this view)."
...and by Tye in his (2005):
"The conceptualist might respond that, whatever may be the case for the demonstrative expression`that shade', the demonstrative concept exercised in the experience is a concept of the shade the given surface appears to have. But, now, in the case of misperception, there is no sample of the color in the world. So, how is the referent of the concept fixed? The obvious reply is that it is fixed by the content of the subject's experience: the concept refers to the shade the given experience represents the surface as having. However, this reply is not available to the conceptualist about the content of visual experience; for the content of the demonstrative concept is supposed to be part of the content of the experience and so the concept cannot have its referent fixed by that content (Heck 2000, 496)."
The putative tension here is between "anchoring" and "constituting" ["being part of"]. I will take "anchoring" to mean "serving as the referent of" [as opposed to, say, having some epistemological meaning a la "serving as grounds for"].
The nonconceptualist claims that the fineness of grain of experience shows that there are nonconceptual contents, viz. contents of our experience which are not denizens of our voluntary, thinking-and-imagining conceptual repertoire. The conceptualist reply is that we do have a concept for each shade Red(n), where n ranges across the many, many (infinitely many?) values corresponding to lines on the spectrum. We don't have, say, individual proper names for them all, but instead, we can represent each Red(n) as "that shade (of red)", where "that" is a demonstrative. Hence our concepts of the different Red(n)'s are demonstrative concepts.
Now consider the case of an illusion of Red(29). The content of the illusory experience is obviously "o is Red(29)." Do we have a concept of Red(29)? We should ask: what does the "that" in "that shade"---which must be the concept we are deploying if we are deploying one at all---refer to? It cannot refer to anything in the real world, since by hypothesis nothing in the real world that we are seeing has that shade. Hence it must be (some constituent of) the experience itself which serves as the referent of "that". But then we do need the experience to supply the referent of the demonstrative---we do not already have a concept for each color we experience.
Clearly, something would be missing if the content of experience were something like
"object o has this color"
...if there is, ahem, no accompanying demonstrandum. Yet the sentence above is precisely what the accompanying conceptual counterpart of the experience is taken to be, containing only 'deployed concepts' of the agent's conceptual repertoire. Upshot: if we take the relevant conceptual state (probably belief) to rely on experience to supply referents for its demonstratives, then it cannot be that very concept [the demonstrative one] which is part of the content of the experience. This would make experience self-referential; moreover, there just wouldn't be anything (else) for the demonstrative to refer to.
Tuesday, August 3, 2010
Tappenden on Unifying The Liar and the Sorites Paradoxes
Tappenden gives what might be called a pragmatic analysis of the assertability of the conditional premise of the sorites paradox:
(P2) If a man with c cents is poor, a man with c+1 cents is poor.
Analysis: semantically, P2 is gap. Why? For borderline cases of "poor", we adopt strong Kleene tables. Therefore P2 has some instances which are neither true nor false: to wit, any instance of c in which having c cents makes one a borderline case of "poor".
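To fix ideas, here is a minimal sketch of the strong Kleene calculation (the three values T, N, F, the definition of the conditional as max(~A, B), and the toy cutoffs for "poor" are all my illustration, not Tappenden's):

# Strong Kleene values, ordered F < N < T, with A -> B defined as max(not A, B).
ORDER = {'F': 0, 'N': 1, 'T': 2}

def k_not(a): return {'T': 'F', 'N': 'N', 'F': 'T'}[a]
def k_cond(a, b): return max(k_not(a), b, key=ORDER.get)

def poor(cents):
    """Toy valuation: clearly poor below 100 cents, clearly not poor above 10**6, gap in between."""
    if cents < 100: return 'T'
    if cents > 10**6: return 'F'
    return 'N'

# Instances of P2: "if a man with c cents is poor, a man with c+1 cents is poor"
for c in (50, 99, 500_000, 10**6, 2 * 10**6):
    print(c, poor(c), poor(c + 1), '=>', k_cond(poor(c), poor(c + 1)))

The printout shows exactly the claimed pattern: instances of P2 whose antecedent or consequent falls in the borderline region come out N, i.e. gap, while the clear cases come out T.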
Pragmatic upshot: you can't assert P2, but you can articulate it, where articulation is understood as a speech act distinct from assertion, with a different norm. It is a necessary condition for successful assertion of p that p express a true proposition [Tappenden footnotes Dummett here]; but truth is not a necessary condition for successful articulations. To articulate S is to claim that ~S is not assertable; articulation is for correcting (or preempting?) improper use by others. The relationship to semantic values of associated propositions is this: to correctly articulate S, it need not be the case that S is true; it need only be the case that S is not false. Hence P2's "articulability", and the strong "semantically positive" intuition we have towards such utterances, is explained without needing to postulate that P2 is true. [Note: I am not sure why we must say that S is not false--that it has this weaker semantic status--at all. Perhaps it could be false? He does note that articulation's perlocutionary effect does not, like irony's, depend on recognition of its falsity.]
Along the way, Tappenden makes some interesting, but not heavily supported, claims about the differences between different syntactic forms for (classically) logically equivalent sentences, when considered as the LFs of speech acts. For example, LEM sentences ("All the tiles are either red or orange") are claimed to function as "`sharp boundary' conditions" (565) and hence to be assertable only in the absence of hard (i.e. borderline) cases: to say that all the tiles are either red or orange is to say that, in our context, we should be able to sort them into two piles with nothing left unclassified. Likewise, to assert an existential ("some man is tall while his neighbor is short") is to implicate that a truthmaking last tall man can be identified.
On the other hand, logically equivalent sentences of the form
(Ax)~(Rx & ~Rx)
enforce weaker "no overlap" conditions: they function as claims that complementaries are exclusive, but not necessarily exhaustive. All this struck me as odd--particularly the claims about or-LEM sentences--because it simply didn't gel with my intuitions. Perhaps that is all that can be said about that.
One quite odd thing about Tappenden's discussion is that he categorizes Fine-ian penumbral sentences with the corresponding tolerance sentences, where tolerance sentences are the ones that have the form of P2:
(Penumbral) if a man with c cents is poor, a man with c-1 cents is poor.
(Tolerance--P2) if a man with c cents is poor, a man with c+1 cents is poor.
Both of these sentences are, in Tappenden's taxonomy, "pre-analytic," and they both have the status that they are articulable without being assertable. This lumping-together is partially explained by noting that on 3-valued tables, both types of conditionals are gaps. But given that there is extensive discussion of the assertability and psychological import of these sentences, it is surely worth noting that (Tolerance) is a good deal less acceptable than (Penumbral), and that it is (Tolerance) alone which leads by sorites reasoning to a contradiction.
A final note on the vagueness portion: in the course of the paper, Tappenden makes an intriguing distinction between "essentially" and "inessentially" vague predicates, where it appears that the only essentially vague predicates are observational (e.g. "looks red.") [although he doesn't use enough examples to really confirm this hypothesis].
...and the Liar?
I wasn't sure I understood how the analysis was supposed to apply to the Liar--thereby unifying the two paradoxes--since I'm simply unsure of a very preliminary point: how do you assert the Liar? The Liar is a sentence that refers to itself. I can refer to myself (with "I"), but I am not a sentence. An utterance can refer to itself (with "this utterance"), but an utterance is not a sentence either. I am not sure whether we can assert a sentence that refers to itself. (Do we thereby have to assert that it refers to itself? If we don't, how will we get the point across?)
If someone said to me:
(*) "This utterance is false"
I feel that my first reaction would be to say "...which utterance?" That is, I wouldn't at all feel confident that I knew which sentence had been asserted, because my instinct would be that the utterance contained an empty (or unidentified) demonstrative term.
I don't mean to be pedantic--but there is a need for some preliminary discussion of asserting the Liar here. Kripke, for example, thinks that you haven't asserted a proposition with (*), even though you did make an utterance.
In the absence of this discussion, what can be said? Tappenden is surely absolutely right that the analogue of bivalence for assertability does not hold:
(Bivalence-Assertability) For all sentences s, either s is assertable or *~s* is assertable. [*'s for Quine-corners].
But we did not need the Liar to show us this! Our lack of omniscience is sufficient. (Perhaps this is not discussed because Tappenden is using Dummett's truth-norm rather than a knowledge norm, a justified belief norm, or a Brandomian, reason-offering norm.) We wanted to know whether the Liar was true, and all we wound up with was the weaker observation that the Liar is not assertable. Tappenden does, however, make a bid to turn this into a solution to the Liar by offering the following observation: since we can explain the non-assertability of the Liar without recourse to its truth-value, we are free to hold that both it and its negation are gap. That's good because holding that it has either non-gap truth-value leads to a contradiction.
Finally, we get an explanation of the 'semantically positive' status of these two:
(3*) The liar is true iff the liar is not true.
(C) (As)(True(s) v ~True(s))
...in terms of articulation. I am a bit confused about this, since I don't know exactly how Kripke's semantics for Truth (which Tappenden is taking on board) generates the truth-value Gap for sentences. However, the picture must be that they are gap, and their illusion of truth is explained by conversational norms.
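For my own benefit, here is a toy version of how I understand Kripke's least-fixed-point construction to leave the Liar gappy. The tiny finite "language" (the labels "snow", "T_snow", "liar", "truthteller") and the encoding are my own devising, evaluated by strong Kleene; this is only a sketch of the idea, not Kripke's or Tappenden's actual apparatus.

# Sentences, referred to by name:
#   "snow"        - a non-semantic truth
#   "T_snow"      - Tr("snow")
#   "liar"        - not Tr("liar")
#   "truthteller" - Tr("truthteller")
SENTENCES = {
    "snow":        ("atom", True),
    "T_snow":      ("Tr", "snow"),
    "liar":        ("not", ("Tr", "liar")),
    "truthteller": ("Tr", "truthteller"),
}

def evaluate(form, ext, antiext):
    """Strong Kleene evaluation relative to a partial extension/anti-extension for Tr."""
    kind = form[0]
    if kind == "atom":
        return 'T' if form[1] else 'F'
    if kind == "not":
        return {'T': 'F', 'F': 'T', 'N': 'N'}[evaluate(form[1], ext, antiext)]
    if kind == "Tr":
        name = form[1]
        if name in ext: return 'T'
        if name in antiext: return 'F'
        return 'N'

def least_fixed_point():
    ext, antiext = set(), set()
    while True:
        new_ext = {n for n, f in SENTENCES.items() if evaluate(f, ext, antiext) == 'T'}
        new_antiext = {n for n, f in SENTENCES.items() if evaluate(f, ext, antiext) == 'F'}
        if new_ext == ext and new_antiext == antiext:
            return ext, antiext
        ext, antiext = new_ext, new_antiext

ext, antiext = least_fixed_point()
for name, form in SENTENCES.items():
    print(name, evaluate(form, ext, antiext))

The grounded sentences ("snow", and the attribution of truth to it) settle into the extension of Tr after a couple of stages; the Liar (and the Truth-teller) never get in or out, and so remain gap at the fixed point.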
***
Tappenden, Jamie. "The Liar and Sorites Paradoxes: Toward a Unified Treatment." Journal of Philosophy XC, no. 11, 1993.
with references to:
Kripke, Saul. "Outline of a Theory of Truth." Journal of Philosophy LXXII, no. 19, 1975.
Friday, July 23, 2010
Soames on Higher-Order Vagueness
Soames argues that higher-order vagueness isn't as mysterious as it is commonly taken to be. What is the best way to express this?...What is interesting from my point of view is the suggestion that higher-order vagueness really isn't the same kind of thing as first-order vagueness. The picture is this. Take first-order vagueness to consist of a truth-value gap which trifurcates the sorites continuum. The extension of F consists in those things to which the application of "F" is mandated by world + linguistic tradition. The anti-extension of F consists in those things to which the non-application of "F" [= application of "not F"] is similarly mandated by world + linguistic tradition. In between the extension and the antiextension is the undefined region.
Why, now, should we expect the border between e.g. the extension and the undefined region to be vague? The undefined region is a region where we may apply "F", but we don't have to---it is the zone where we exercise our discretion as competent language-users. The exercise of this discretion is something akin to the exercise of a right in the linguistic community. [The exercise of this right is identified with contextualism, though I'm not sure why.] It is reasonable that we are not highly sensitive to the place where the external mandate peters out and our internal discretion kicks in---after all, as the first-order analysis shows, this is not the difference between applying the predicate truly and applying the predicate falsely. To the latter difference, the difference in truth-values, we must be highly sensitive. To the difference as Soames conceives it, we need not be highly sensitive.
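A trivial way to fix the trifurcated picture in one's head (the predicate and the numerical cutoffs are invented for illustration; nothing here is Soames's own formalism):

# Toy partially defined predicate "bald", parameterized by hair count.
MANDATE_F_BELOW = 500         # world + linguistic tradition mandate applying "bald"
MANDATE_NOT_F_ABOVE = 50_000  # world + linguistic tradition mandate applying "not bald"

def status(hairs):
    if hairs < MANDATE_F_BELOW:
        return "extension (applying 'bald' is mandated)"
    if hairs > MANDATE_NOT_F_ABOVE:
        return "anti-extension (applying 'not bald' is mandated)"
    return "undefined (speaker discretion)"

for n in (100, 500, 20_000, 80_000):
    print(n, "->", status(n))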
If we accept Soames's argument that higher-order vagueness isn't the same kind of thing as first-order vagueness, the inevitable appearance higher-order vagueness makes as we theorize about first-order vagueness is not devastating---it is not a true "revenge paradox" in the sense of something that exposes our theorizing as un-explanatory.
Soames's analysis is a soup of different approaches to vagueness; it seems like his account of higher-order vagueness is a kind of epistemicism. This raises a general question about the usefulness of mixed approaches. It would be unsatisfying to veer wildly between different approaches to vagueness to deal with first-, second- and etc.-order vagueness, since there is a feeling that we are dealing with the same thing; it will look terribly much like trying everything because nothing works.
We can impose a bit of order, though, by attending to the phenomena. And it does seem like epistemicism is a useful way to tackle higher-order vagueness: this reflects the fact that "determinately" and "definitely" are much more terms of art than "blue" and "bald." MacFarlane points this out in the course of advocating fuzzy epistemicism--he applies degrees to order-0 predicates like "bald" and epistemicism to "determinately". Soames appears to be doing the same thing, except with good old gappy logic at the order-0 level. To round out the trio, Heck ("Semantic Accounts of Vagueness") makes the same point in defending supervaluations*.
So should we deny that first-order and higher-order vagueness are the same kind of thing--or do mixed approaches merely generate an illusion of progress? McGee and McLaughlin seem to assume it is more or less the same kind of thing when they relate the paradoxes generated by "determinately" to the paradoxes generated by truth-predicates:
"It is hardly surprising that there should be paradoxes here [in considering the iterated 'determinately' operator.] 'Determinately' is an ordinary English word, but it is being employed here in a specialized technical usage. Questions of higher-order vagueness arise when we try to use the technical term 'determinately' to characterize the technical term 'determinately.' Whatever we say to describe the usage of the word will be part of the usage we are trying to describe. We get the semantic paradoxes when we try to apply the predicate 'true' to sentences containing the predicate 'true.' We evade the liar paradox and Montague's paradox by replacing the adjective 'true' by the adverbial operator 'determinately', but now we get paradoxes of higher-order vagueness, which arise when we apply the 'determinately' operator to sentences that contain the 'determinately' operator."
(`Determinate Truth,' 26-27)
*"The more important point is that this argument [revenge on the supervaluationist]--like most of the discussions of higher-order vagueness in the extant literature, including my own previous discussions--assumes that the boundary between the heaps and the things on the borderline between the heaps and the non-heaps is not just seemingly vague but really vague, that is, vague in the same sense that the [original boundary] is vague...That can be denied, and I hereby deny it...Of course, not being in possession of the [Philosophers'] Grail, we have little idea where the boundary lies. But that isn't vagueness. It's just ignorance." (124)
****
S. Soames, "Higher-order Vagueness for Partially Defined Predicates" in Beall, ed., Liars and Heaps.
R. G. Heck Jr., "Semantic Accounts of Vagueness" in Beall, ed., Liars and Heaps.
J. MacFarlane, "Fuzzy Epistemicism", johnmacfarlane.net. [also in Dietz and Moruzzi, eds., Cuts and Clouds.]
McGee and McLaughlin, "Determinate Truth", forthcoming.
Wednesday, July 21, 2010
Rayo on Vague Representation
I found "Vague Representation" to be a little unsatisfying, and it is worthwhile to wonder why. Perhaps most theorists of vagueness are trying to do something impossible, like trying to find out why a physical constant has the value it has (Fine 1975). It is what it is; explanation should just end there. I am not sure if I would characterize someone who tries to go further as being "in the grip of" some misguided "picture" (probably, to the effect that semantic competence involves "rules"), but perhaps whether the description is accurate will emerge in time.
The goal of the paper is to give a theory of assertoric content for a vague language. Since theories of assertoric content situate themselves at the semantics-pragmatics border, the paper opens with some clearly correct observations about how vague language is felicitously used. (The question will be, of course, what import these observations have for vagueness conceived as a semantic puzzle, and whether they could possibly constitute a complete answer to that puzzle.) The observations are: first, the use of a vague utterance presupposes that it will be interpreted so as to impart some information. Hence, for example, we can expect a speaker to intend her utterance to be interpreted in a way that excludes some but not all open possibilities from the context set---this is just another way of saying that we expect the speaker to be informative relative to the context. Because of this, a borderline case can sometimes be excluded (relative to competitors) and sometimes included (relative to different competitors) on the basis of the same vague utterance (e.g., "Susan lives in a blue house."). Another observation to add to this is that the context relative to which the speaker seeks to be informative may not be the context of utterance---for example, Susan may say "I live in a blue house", intending only that her utterance be sufficiently informative to me a few hours from now, when I arrive on her street looking for her house. So it is not necessary (indeed, probably not even possible) for her utterance to uniquely pick out the color of her house relative to the entire color-wheel: it only needs to pick it out relative to the other houses on the block. This restricted spectrum already excludes most possibilities; it features "gaps." Rayo describes several maxims of conversation in terms of their relationship to gappy contexts [gappy in the parameter of application]: utterances should make nontrivial partitions on gappy contexts. (This will be an utterance's "essential effect".)
These maxims, of course, are used not only to determine whether an utterance is felicitous but to calculate what was said (and what the words meant) in the first place.
The presentation of all this in the paper is semantically conservative, and close to supervaluationism, in the following way. The maxims are presented as doing their work--the work of informing a "semi-principled" [that is, semi-constrained] decision [deciding what to throw out of the context set] against a background spectrum of classical semantic theories. There is a spectrum for the same reason there's one in supervaluations: there is a range of classical theories compatible with the extension of "blue" as use determines it. [This is not explicitly explained, but there must be some reason for it!] What we must choose, on an occasion of utterance-processing, is not which classical theory to use but which partition to make; and we can often do the latter without having to do the former. We are aided by (i) certain features all the classical theories respect (the supervaluationist's supertruths); (ii) the maxims about gaps and informative partitions; and (iii) information about which classical theories are rendered most plausible in light of past uses of the term--which ones are admissible in the supervaluationist's sense.
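Here is a toy rendering of the picture as I understand it. The hue numbers, the candidate cutoffs for "blue", and the scoring rule are all my own inventions, meant only to gloss the "semi-principled decision" and the spectrum of classical theories; none of it reproduces Rayo's machinery.

# Context sets: candidate locations for Susan's house, with hue values (0 = clearly green, 100 = clearly blue).
easy_context = {"house 1": 90, "house 2": 15}      # clearly blue vs clearly green
tricky_context = {"house 1": 55, "house 2": 10}    # borderline blue vs clearly green
crowded_context = {"house 1": 90, "house 2": 55}   # clearly blue vs borderline blue

# A spectrum of admissible classical theories: sharp cutoffs for "blue" compatible with past usage.
admissible_cutoffs = [40, 50, 60, 70]

def blue_on(hue, cutoff):
    return hue >= cutoff

def partition(context):
    """Keep a candidate iff some admissible theory counts it as blue AND
    no rival is counted as blue by strictly more of the theories."""
    score = {h: sum(blue_on(hue, c) for c in admissible_cutoffs) for h, hue in context.items()}
    best = max(score.values())
    return {h for h, s in score.items() if s == best and s > 0}

for name, ctx in [("easy", easy_context), ("tricky", tricky_context), ("crowded", crowded_context)]:
    print(name, "-> 'Susan lives in a blue house' leaves:", partition(ctx))

The same borderline house (hue 55) is kept when its competitor is clearly green and discarded when its competitor is clearly blue, which is just the "sometimes excluded, sometimes included" behavior noted above.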
It is (iii) that is likely to give rise to the most protest, at least as regards whether "localism" about vagueness is the solution to the puzzles of vagueness. After all, a standard objection to supervaluations is that they do not account for higher-order vagueness. The observation here is that the same puzzle afflicts this account, though more indirectly: an account has been given of the way in which admissibility, in the supervaluationist's sense, needs to do only part of the job [it is aided by maxims and contextual information], and it is clear that the job is not as big as it is often taken to be [it only needs to make a partition, for the nonce, in a gappy context].
In response to the complaint about higher-order vagueness, we get instrumentalism as a metaphilosophical response. The idea is that what we have is good enough. I think it might be good enough for a lot of purposes, but then again, for a lot of purposes, vagueness isn't a problem in the first place. Rayo seems to suggest, by way of saying that this is good enough, that it is as good as anything can be, once we take an instrumentalist picture of...is it language, or explanation?...It is explanations of language use:
"[Higher order vagueness] might well constitute a source for concern if one thinks of theories of assertoric content non-instrumentally. But here we are thinking of a theory of assertoric content as a tool for predicting the evolution of the context set [I guess this means we are still thinking of the context set non-instrumentally.] Accordingly, there is no sense to be made of the question whether a partition is really salient, over and above the question whether treating the partition as salient is a good way of making sense of our linguistic practice (and therefore no sense to be made of the question whether an assertion really has a particular local content, over and above the question whether ascribing that content to that assertion is a good way of making sense of linguistic practice)." (363)
The original suggestion, in contrasting localism with globalism, is that globalism flatfootedly plugs the semantic theory into the theory of assertoric content; the relationship between the two is a big fat "=". Localism is supposed to be less flatfooted, though I'm not sure exactly how this is supposed to help us with concerns about the semantics. It is good to point out that there could be something more complicated there than a big fat "=." But if that's where the suggestion ends, then I don't see how localism even informs any of the semantic puzzles about vagueness. Since we have declined to state any sort of equivalence between semantic and assertoric content, being instrumentalist or anti-realist about (local) assertoric content doesn't determine that one is instrumentalist about semantic content.
[It should be noted that the picture ascribed to the globalist isn't really that semantic content = assertoric content, rather, it is, I suppose, that semantic content = a function from contexts to assertoric contents, i.e. a function from contexts to propositions. Rayo idealizes away from indexicality for the sake of simplifying the discussion. This seems like the right way to put the orthodox view: semantic content determines assertoric content.]
Perhaps in order to address these concerns, Rayo does return to the sorites paradox itself at the end of the paper. He concedes that the localist response to the conditional premise is quite a lot like the supervaluationist's. However, the localist declines to assign a special status ("supertruth") to sentences which are true in light of every semantic theory: he does not quantify over semantic theories in this way.
"the role of semantic theories in characterizing the notion of truth is not that of articulating possible completions of the language. Their role is to supply the information about past linguistic usage that is used to characterize salience [i.e. the salience of certain partitions of the context set]. " (366)
It does indeed seem like truth is being left out altogether here: Rayo does not characterize an assertion by saying that it is true or untrue, or even that it expresses a truth or an untruth. Instead of expressing anything of the kind, it enables the hearer to partition the context set, and that is that.
...Or is it? We still need truth in a derivative way, since truth is what makes the candidate semantic theories what they are. It seems like little more than an exercise in re-labeling to say that (i) Gricean reasoning demands a context-contingent partition of the context set; (ii) such a partition will be a proposition; (iii) which proposition will do the job is a question with a range of answers; (iv) for reasons listed above, there is a subset of the range of answers we can therefore decline to choose between. We recover the supervaluationist's jargon by saying e.g. that a certain partition---{house 1} out of {house 1, house 2}, for example---is recommended
because "~house 2" is supertrue with respect to context {house 1, house 2} by way of being recommended by every classical semantic theory compatible with the Gricean constraints on semantic theory choice. This seems like supervalutionism sensibly updated so as to take explicit account of conversational pragmatics in its accounting of "admissible valuations" and "what is left open by speaker usage." Perhaps this presentation of localism also gives us a way to explain what is pathological about a sorites series: it lacks gaps, and vague language is not felicitous in the absence of gaps.
Here, then, are two old worries raised anew. On a Stalnakerian picture, content interacts with context, and vice-versa. In the metaphysical order of things, semantic content determines assertoric content, by way of being a function from contexts to propositions. We rely on our knowledge of context to determine assertoric content (sometimes because this is part of the compositional semantics and sometimes because it is part of the pragmatics); one way this is done is with our systematic knowledge of indexicals; another is via general Gricean mechanisms we employ to disambiguate between homophonic or ambiguous sentences; another may be by way of metalinguistic reinterpretation a la "Assertion"; and finally, there might be some particular forms that Gricean maxims take in vague cases, such as Rayo's "Principle of Clarity." But certainly the first three items on this list rely on a classical picture of truth-conditional contents; why should we think that the last does not? And if it does, why don't the puzzles of vagueness simply reintroduce themselves? One reason, after all, for not being satisfied with pragmatics instead of semantics is that a sentence's felicity-at-a-context profile is not sufficient to determine the felicity of embedded occurrences. Put another way: knowing all this about vague language isn't enough to tell us what vague predicates mean. But this was what we wanted to know in the first place. Or, at least, this is one thing worth knowing.
The second worry is really a constellation: the regular objections to supervaluations. Let us remind ourselves what they are. The first is that supervaluationist logic, while it preserves classical validities, doesn't preserve classical inference patterns. Williamson identifies the root of the trouble as the Supervaluationist's identification of truth with supertruth: this identification conflicts with the role truth plays in the classical inference patterns, as well as the disquotational schema (T(`p') <->p, with `<->' in the object language). This family of objections is distinct from the objections from higher-order vagueness, and I do not see how instrumentalism provides any kind of relevant response. Supervaluationists like McL/McG have responded by acknowledging that there are two kinds of truth for the Supervaluationist, supertruth and pluth (that is, global truth and local truth), the former of which is truth on all admissible valuations and the latter of which is disquotational and classical-inference-preserving. Keefe goes further and argues that in fact the failure of classical inference patterns is appropriate for deductions involving vague terms: when we reason with vague terms, we actually do/should use definite truth rather than pluth.
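For concreteness, here is the standard sort of counterexample rendered as a toy model (the encoding is mine; "supertrue" is truth on all admissible valuations, and "D" quantifies over them). The argument from p to Dp is globally valid, yet the conditional p -> Dp need not be supertrue, so conditional proof fails:

def D(phi, model):
    """Dphi is true (at every point of the model) iff phi is true on all admissible valuations."""
    return all(phi(v) for v in model)

def supertrue(phi, model):
    """Supertruth: truth on every admissible valuation in the model."""
    return all(phi(v) for v in model)

p = lambda v: v["p"]

valuations = [{"p": True}, {"p": False}]
# All models over the single atom p: non-empty sets of admissible valuations.
models = [[valuations[0]], [valuations[1]], valuations]

# Global validity of p |= Dp: in every model where p is supertrue, Dp is supertrue.
print(all(supertrue(lambda v, m=m: D(p, m), m)
          for m in models if supertrue(p, m)))           # True

# Failure of conditional proof: in the borderline model, p -> Dp is not supertrue.
borderline = valuations
p_implies_Dp = lambda v: (not p(v)) or D(p, borderline)  # material conditional, Dp read off the model
print(supertrue(p_implies_Dp, borderline))               # False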
What Rayo's considerations suggest to me is another way of thinking of truth vs. pluth. McG/McL argue that supertruth is the aim of assertion, because we aim to assert what we know. Another reason we might aim for supertruth is that we wish to restrict ourselves to assertions which our audience will accept; since we don't know exactly which classical theory, or set of classical theories, our audience accepts, we aim for truth on all of them. Perhaps iterated knowledge plays a role here: I assume that my audience has a range of classical theories because he cannot know what classical theory I use; he knows this, hence...etc.
****
We begin by contrasting "localism" and "globalism," which are both theories of assertoric content:
Globalism: "[It is] tempting to suppose that the various instances of local usage fall into patterns that are usefully described in terms of global contents: contents that determine a definite partition of the entire space of possibilities. Accordingly, it is tempting to think of the task of constructing a theory of assertoric content as the task of finding an assignment of global contents that fits local usage as neatly as possible...
[Two steps.] The first is to choose a context-insensitive semantic theory (I ignore indexicals to keep things simple.) The second step is to identify the global content of an assertion with the proposition assigned to the sentence asserted by one's preferred semantic theory. The task of constructing a theory of assertoric content therefore boils down to the task of finding an assignment of semantic values to basic lexical items which yields the result that actual instances of usage [are] reliably correct." (348)
****
Finally, it should be noted that the tantalizing suggestion is raised (367, with a hat tip to "Assertion") that the right way of taking the conditional premise might be a metalinguistic one---thus the premise records a fact about admissible usage and not truth. Rayo isn't sure what to make of this and neither am I; but it is quite possible that this is the most interesting thing a classical approach to vagueness should take from the consideration of assertoric content.
Tuesday, July 20, 2010
Tweaking Tolerance
Wright's version of tolerance, in its simplest form, leads to a contradiction:
~(Fn & ~Fn+1)
Fn -> Fn+1.
In order to fix it, we could weaken it (or its consequences) by (i) weakening the force of the negation on the outside by using many-valued logic, metalinguistic negation, etc.; (ii) adding operators like "Knowably", "Determinately", etc. in various places. Both of these things have been done; one could try [with "D" for box and "C" for diamond]:
~(Fn & ~Fn+1)
Fn -> Fn+1.
In order to fix it, we could weaken it (or its consequences) by (i) weakening the force of the negation on the outside by using many-valued logic, metalinguistic negation, etc.; (ii) adding operators like "Knowably", "Determinately", etc. in various places. Both of these things have been done; one could try [with "D" for box and "C" for diamond]:
(0) DFn -> Fn+1 [Williamson's Margin for Error Principle--taken as constitutive of vagueness?...Not quite; taken as constitutive of Inexact Knowledge, of which vagueness is a species.]
(1) DFn -> CFn+1
(2) C(DFn -> DFn+1)
...I don't know whether any entailment relations hold between these. (iii) We could question the truth-preservingness of the many, many applications of MP (an enormous, though finite, number) required to get us from the plausible premises to the absurd conclusion. (iv) We could say that the premise is true or valid in some way, but not in the same way as the other premises that get us to a contradiction: for example, that it is true in some metalinguistic sense, or that it expresses a truth about use, rather than a truth about meaning. [A model for this sort of suggestion: some of the premises are logical truths in the logic of indexicals, while others are logical truths in the standard sense.]
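To make (iii) vivid, here is the mechanical version of the march from the plausible premises to the absurd conclusion (the predicate and the cutoff are toy choices of mine, shrunk so the loop runs instantly; the point is only that the number of modus ponens steps is large but finite):

# Tolerance as a rule: from "poor(c)" and the instance "poor(c) -> poor(c+1)", infer "poor(c+1)".
CLEARLY_NOT_POOR = 1000   # having this many cents is (stipulated to be) clearly not being poor

c = 0                     # premise: a man with 0 cents is poor
steps = 0
while c < CLEARLY_NOT_POOR:
    c += 1                # one tolerance instance plus one application of modus ponens
    steps += 1

print(f"committed to poor({c}) after {steps} applications of MP (finitely many)")
print(f"but having {CLEARLY_NOT_POOR} cents was stipulated to be a clear case of not being poor: contradiction")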
But here is another way we could try to fix it: what is wrong with all of these principles is that they appeal to a "next" item in the sorites series---a next tile, a next grain of sand, etc. Clearly, for heaps and rows of tiles, there will *be* a next grain, and a next tile. However, the suggestion is that what we have in mind when we have tolerance in mind is something like:
Each shade of red is next to another shade of red.
or
Each point on the spectrum, if it is red, is next to another red point.
We naturally wish to say that this other red shade, or red point, is very close to the original red one, and so we seek to express this thought with the original tolerance principle, (An)Fn -> Fn+1. The thought seems right; it is its expression in terms of "next" which is wrong, for there is no well-defined way of getting to the next point or the next shade. We go further wrong when we substitute "next tile" for "next point"; since there is only a finite number of tiles, repeated applications of the principle will get us all the way to the end of the spectrum.
Can we better understand the sorites---originally, a paradox of heaps, where heaps are made of discrete grains of sand---by this route---by arguing that we mistakenly transform a principle which is valid for nondiscrete quantities into one which could apply to discrete quantities, and then so apply it? Furthermore, how should we fix our reasoning?---can some modified version of Tolerance, or the Margin of Error principle, be found, and if so, what use would it be? (Note, of course, that it wouldn't be of use in getting us to the contradiction again.)
Tolerance was originally offered as a criterion for vagueness, one which replaced the older description in terms of indecision in light of all the nonsemantic facts. If a newer version could be found, we could avoid the despairing conclusion (which has been aired) that what makes a term vague is that we use it incoherently as a matter of semantic competence.
What, finally, would success in this endeavor suggest---should we conclude that there is a continuum of operators, truth-predicates, or colors themselves? Since it appears that the most plausible starting assumption is about red, and not truth, or the "determinately" operator, can we say that vagueness originates here?
****Wright quotes
"In these examples we encounter the feature of a certain tolerance in the concepts respectively involved, a notion of a degree of change too small to make any difference, as it were. There are degrees of change in point of size, maturity and colour which are insufficient to alter the justice with which some specific predicate of size, maturity or color is applied. This is quite palpably an incoherent feature since, granted that any case to which such a predicate applies may be linked by a series of 'sufficiently small' changes with a case where it is not, it is inconsistent with the exclusivity of the predicate." (333-334)
****Related quote by Rayo 2008 ["Vague Representation"]
In addressing the Sorites Paradox, it is not enough to tell a story whereby [the conditional premise] fails to be true. One must also explain our inclination to accept [it]. It seems to me that the localist is unusually well-placed to supply the necessary explanation. We are tempted to think that [the conditional premise] is true because we make a certain kind of mistake. We think of tolerance---which is a feature of our ability to use linguistic representations---as a semantic principle governing the correctness of our assertions. This leads us from the unobjectionable observation that we are unable to use 'bald' to discriminate between man n and man n+1 to the mistaken conclusion that 'bald' can only be correctly applied to man n if it is also correctly applied to man n+1. It is easy to make this mistake if one is under the grip of a certain conception of language: the idea that there are semantic rules corresponding to sentences, and that language-mastery is a matter of gaining cognitive access to the relevant rules and learning to apply them in the right sorts of ways. For---as emphasized in Wright 1976 ["Language-Mastery and the Sorites Paradox"]---this picture makes it natural to suppose that one can uncover the semantic rules governing our language by straightforward reflection on our usage. But the localist thinks of matters very differently. Language-mastery is not a matter of applying semantic rules; it is a matter of making sensible semi-principled decisions about how to partition the context set in light of past linguistic usage and the location of the gap.