Monday, April 12, 2010

Cheat-Sheeting on Simons et al.

1a) What is Ross's puzzle? Exactly which modal operators does it arise for?

Ross's puzzle is the inference from

You ought to post the letter

to

You ought to post the letter or burn it.

What I am interested in is chiefly epistemic operators, so I am abstracting away from some of the peculiarities of deontic operators. My example will be the inference from

Joe might be in the kitchen

to

Joe might be in the kitchen or the attic.

In particular, the latter seems to entail "Joe might be in the kitchen AND Joe might be in the attic." It is this felt entailment which we are trying to explain. Thus it arises for operators which are given truth-conditions in terms of existential quantification over some set of possible worlds. (I wish to leave to others the question of whether we really ought to analyze deontic "ought" and "may" this way.)

The question of the scope of the "or" relative to the modal is under debate here. One thing to note, though, is that the felt entailment does arise for both narrow and wide scope SS's: both

Joe might be in the kitchen or the attic.

and

Joe might be in the kitchen or he might be in the attic.

...appear to generate the felt entailment.

2) What are the prospects for a simple Gricean, or pragmatic, explanation of the puzzle, according to which the inference is not strictly valid but nevertheless "reasonable in context"?

The prospects are not bad, but they appear to rely on one particular choice of LF. This is the wide-scope LF. We employ the Gricean-inspired principle that a disjunction is appropriate in context iff both disjuncts are epistemically possible for the speaker. We then extrapolate, in the epistemic case, to the possibility of both disjuncts with regard to the truthmaking domain of quantification at the context. We will need some account of truth for unembedded epistemic modal clauses at a context for this, and we will need to employ some kind of S5 axiom.
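Spelled out, the pragmatic derivation runs roughly like this (my reconstruction, writing Might_s for "epistemically possible for the speaker"):

1. Asserted (wide-scope LF): (Might p) v (Might q).
2. Gricean principle: both disjuncts are epistemically possible for the speaker, so Might_s(Might p) & Might_s(Might q).
3. Extrapolating from speaker possibility to the truthmaking domain of quantification at the context: Might(Might p) & Might(Might q).
4. S5 (Might Might r -> Might r): Might p & Might q.

Step 3 is where the account of truth for unembedded epistemic modal clauses is needed; step 4 is where the S5 axiom does its work.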

In a nutshell, I think this often works. But it has two weaknesses. The first is that it relies on an undefended syntactic assumption (the wide-scoping of the "or".) The only defense offered for this is that it (pragmatically) validates the free choice inference...but a semantic account might be able to get the inference semantically when the scope is narrow. The second weakness is that it does not generalize well to the deontic cases. So it is both (i) not fully general and (ii) not even a solution to Ross's puzzle in its original form (since the modality involved there was deontic.) The kind of conditions on deontic ideality which would need to hold in order for the inference to be generated pragmatically do not seem reasonable to me: that is, it should not in general follow that when something is possibly permissible, it is permissible.

However, it is worth noting that a major point in favor of the pragmatic analysis is the apparent cancellability of the "free choice inference":

You may have coffee or tea--I don't remember which.

Joe might be in the kitchen or the attic--I'm not telling you which.

3a) What are the prospects for a semantic explanation of the puzzle, according to which the patterns are valid?

I think the prospects are pretty good. But I appear to be in the minority here. The basic idea of the semantic proposal I'm in favor of comes from Mandy Simons. It employs a Hamblin semantics on which an or-coordination denotes a set whose members are its disjuncts, i.e.

[[Larry, Moe or Curly]] = [[Larry or Moe or Curly]] = {Larry, Moe, Curly}

The semantic entry for "or" on such a theory would be:

[[or]] = \lambda x \subseteq D_{\tau} . \lambda y \subseteq D_{\tau} . x \union y

This is a "flexible type" entry: "or" can join nodes of any semantic type, outputting a node of the same semantic type. However, the entry requires that its inputs be sets. The easiest way to accomplish this is to adopt a Hamblin type-shift, according to which nodes denote singletons of their (old) extensional denotations. So, for example, [[Joe]] = {Joe}. We might call our "old" extensions "atoms" or "ur-elements" of the new interpretation function. (Thus one way of looking at this semantics is as "mereological," with atoms, and "or" as the "fusion" operation.)
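As a sanity check, the lift and the union entry can be mocked up in a few lines of Python (the names `hamblin_lift` and `OR` are mine, purely for illustration):

```python
def hamblin_lift(x):
    """The Hamblin type-shift: an 'old' extension denotes its singleton,
    e.g. [[Joe]] = {Joe}."""
    return frozenset([x])

def OR(x, y):
    """[[or]]: join two Hamblin sets of the same type by union."""
    return x | y

# [[Larry or Moe or Curly]] = {Larry, Moe, Curly}
stooges = OR(OR(hamblin_lift("Larry"), hamblin_lift("Moe")), hamblin_lift("Curly"))
print(stooges)  # the three-membered set {Larry, Moe, Curly}
```

Note that OR leaves the disjuncts "undigested": the output is a set whose members are the disjuncts, not any kind of merged object.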

The question now arises: how does composition proceed? We should use

Hamblin Functional Application: Let \alpha be a branching node with \beta and \gamma its daughters. WLOG, assume [[\beta]] \subseteq D_{\tau} and [[\gamma]] \subseteq D_{<\tau,\pi>}. Then [[\alpha]] = {a : \exists b \in [[\beta]], \exists g \in [[\gamma]] s.t. a = g(b)}.

While this looks a bit complicated, it's actually quite easy to see how it relates to regular FA.

NB, though, that we will need Regular FA to account for the semantic operation engendered by the "or" itself. (This is by far the most elegant way to account for it, although it expands our repertoire of composition rules.) Otherwise we will get what looks like a type-mismatch when we try to compose the function denoted by "or" with the first of its disjuncts.
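The pointwise character of HFA is easy to see in a toy implementation (my illustration; the predicate denotation is lifted to a singleton set, as above):

```python
def HFA(beta, gamma):
    """Hamblin Functional Application: pointwise application,
    {a : there are b in [[beta]], g in [[gamma]] with a = g(b)}."""
    return frozenset(g(b) for b in beta for g in gamma)

# [[Joe or Jane]] = {Joe, Jane}; [[sang]] lifts to a singleton set of one function.
subj = frozenset(["Joe", "Jane"])
pred = frozenset([lambda x: x + " sang"])
print(HFA(subj, pred))  # the two alternatives 'Joe sang' and 'Jane sang'
```

When both daughters are singletons, HFA({b}, {g}) = {g(b)}, which is just the lift of regular FA; that is the sense in which the rule relates to ordinary Functional Application.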

3b) What is the intuitive idea behind this semantics?

The intuitive idea behind the semantics, for Hamblin, was that a question denoted a set of possible answers. (For him, the semantic value of a question word like "who" is D_e.) This puts a partition on logical space. The result is that when a question is posed in a context, it partitions the common ground for the audience. The audience's job is then to reduce the common ground by cutting along one of the dotted lines.

For Simons, the thought is similar. When an "or" is asserted in context, its function is to "divide up" (put a partition or cover on) some set. This presumably marks a "dotted line" along which our investigation is to proceed, with the eventual goal of eliminating or choosing one of the disjuncts. However, she can then give a definition for modal operators according to which the modal operators can do something else with the disjuncts: for example, they can universally quantify over them. Here's her entry for the epistemic modal operator "might_e":

[[might]] = \lambda {p1, ..., pn} (each pi \in D_{st}) . ACC_e \subseteq Union(p1, ..., pn) [Coverage] &
\forall pi, ACC_e \intersect pi \neq \emptyset [Genuineness]
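Modeling propositions as sets of worlds, the two conditions can be sketched directly (variable names are mine; `ACC` stands in for the epistemically accessible set ACC_e at the context):

```python
def might(alternatives, ACC):
    """Simons-style entry for epistemic 'might'.
    alternatives: a Hamblin set {p1, ..., pn} of propositions (sets of worlds).
    ACC: the set of epistemically accessible worlds at the context."""
    union = frozenset().union(*alternatives)
    coverage = ACC <= union                           # ACC_e subset of p1 U ... U pn
    genuineness = all(ACC & p for p in alternatives)  # each pi overlaps ACC_e
    return coverage and genuineness

kitchen = frozenset({"w1"})
attic = frozenset({"w2"})
# Both conditions hold: "Joe might be in the kitchen or the attic" is true.
print(might({kitchen, attic}, frozenset({"w1", "w2"})))  # True
# Genuineness fails when the attic is not a live possibility:
print(might({kitchen, attic}, frozenset({"w1"})))  # False
```

Genuineness is what delivers the felt entailment: the sentence cannot be true unless each disjunct is compatible with the accessible set, i.e. unless Joe might be in the kitchen AND Joe might be in the attic.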

There are two separate issues here--which, I think, is important for the next step I want to make. FIRST, we want to keep the disjuncts of an or-coordination "separate." By some means, then, we shall give a semantics whereby [[p1 v p2]] \neq [[p]], where [[p]] is the possible-worlds-semantics union of [[p1]] and [[p2]]. (Intuitive examples: [[right hand or left hand]] \neq [[hand]]; [[platypus or echidna]] \neq [[monotreme]]; [[nuclear war or nonnuclear war]] \neq [[war]].) There is a large and varied semantics literature on reasons for and ways of doing this. [cf. Zimmermann, Rooth, the association-with-focus literature, and the closure-under-entailment problems we looked at in seminar--these problems having to do with giving a semantics for "believes" statements etc.]

4) Can you compare this to other semantic solutions in the literature?

I don't know much about other semantic solutions in the literature (but I'll look at Luis Alonso-Ovalle.) Zimmermann conceives of his solution as a semantic one, but I'd prefer to classify it as a pragmatic one, for a number of reasons. First, it is a wide-scoping view, and second, it is very far from general. So roughly it seems to have all the disadvantages of the wide-scope view with none of the advantages (since there's no account of cancellability). Frankly, Zimmermann's view is quite odd.

What I would like to do is try to sketch out an alternative semantic account, similar in spirit to Simons's, but one that takes up Partee and Rooth (1982)'s suggestion that the semantics of "or" should be assimilated to the semantics of indefinite noun phrases [the suggestion is 27 years old, but to my knowledge nobody has made good on it]. This would be a way of making good on a certain burden imposed on Simons by her account, which is to explain how unembedded disjunctions get interpreted. (To begin drawing the connection, note that (i) there is a deep logical connection between disjunction and existential quantification, and that (ii) indefinites give rise to free choice effects.)

Let me first sketch the problem for Simons, give her response, and then try to sketch the parallel with indefinite noun phrases.

The problem for Simons is that e.g. [[John sang or Jane danced]] = {John sang, Jane danced}. How to assign truth-conditions to this? She writes:

"Recall that the truth conditions for the modal/or sentences require the existence of a set which has two properties: it is related in a specified way to some other semantic object; and it is supercovered by the denotation of the embedded or coordination. Let's suppose that sentences containing or coordinations always have truth conditions of this form. We can achieve the intuitively correct results for [[John sang or Jane danced]] by [identifying the set to be supercovered with a factive common ground]. (18-19)"

What this means is that either (i) the common ground gets into the semantics, as in dynamic semantics; or (ii) the association of a disjunctive sentence with truth conditions occurs in the pragmatics, rather than the semantics. I'm not sure how best to gloss this, or whether this natural suggestion should really be neutral between the two glosses.

Now I need to make the jump to indefinites. According to Heim, indefinites (just like disjuncts) have no quantificational force of their own. How, then, to account for scope?



Friday, April 9, 2010

Peacocke on Berkeley: Outline

First: claim that Berkeley was right about our inability to perceive an unseen tree. Then:
3 questions.

1) What is the nature of this distinction between what is in the image and what, in the same imaginative project, is imagined: "the question of the image/imagination distinction" (20)

2) Wittgenstein and King's College (or a clone?) being imagined to be on fire. "We will want to know what it is that makes one singular content rather than another a component of what is being imagined; and we will want to know why it seems that there is a sense in which it is absurd to suppose the imaginer might be mistaken about the identity of that content." [Both (1) and (2) are "the question of content"--(pg 20).]

The third question is why Berkeley is right about the unperceived tree (it is assumed that he is.)

General Hypothesis: to imagine something is always at least to imagine, from the inside, being in some conscious state. (21)

"the sense in which your imaginings always involve yourself is rather this: imagining always involves imagining from the inside a certain (type of) viewpoint, and someone with that viewpoint could, in the imagined world, knowledgeably judge `I'm thus-and-so,' where the thus-and-so gives details of the viewpoint." (21) [IEM]

..."the condition [of the imaginer] seems to be a conceptual truth. It is not just a reflection of each person's egocentricity...it is a consequence of two conceptual truths: one of them is the General Hypothesis, and the other is that for each thinker, the content `I am not the person with *these* conscious states' is not epistemically possible." (21)

From this we derive the following more specific "constitutive hypothesis", the Experiential Hypothesis:

To imagine being [phi] in these cases is always at least to imagine from the inside an experience as of being [phi]. (22)

Peacocke writes that this "may seem uncontroversial: but I shall, in developing from it answers to our three questions, argue that it can be used in defense of Berkeley's doctrine about unperceived trees and in criticism of some received philosophical views on imagination." (23)

Imagining and supposing: "I shall say that these are differences [between e.g. suitcase and suitcase with a cat behind it] in which conditions are S-imagined to hold. 'S' is for 'suppose'--although S-imagining is not literally supposing, it shares with supposition the property that what is S-imagined is not determined by the subject's images, his imagined experiences." (25)

Back to the tree:
"In defending Berkeley's claim, I am not denying that one can imagine an array of physical objects and then make-believe that it is unperceived, or then conceive of it as existing unperceived, or make the supposition that it was, will, or might be unperceived. one may even imagine a tree and then, in a second imaginative project, imagine a world in which no one sees *that* tree. What I am asserting is only that if what is imagined is a physical object, then the imagined experience of the object is, in the imagined world, a perception." (30)

Imagination

What do we philosophize about imagination for?
1) Berkeley: to show that idealism is true
2) Martin: to show that disjunctivism is true
3) Peacocke: ? [to show that Berkeley is right]
4) (me?) to investigate whether and how imagination is a guide to (some kind of) possibility
(Peacocke too: to rebut argument about our knowledge of other minds and the possibility of inverted qualia).

Or, we could come at imagination directly from the following puzzles:

The form of "imagines" statements: Compare
(1) Joe imagines flying above San Francisco.
(2) Joe imagines a brown banana.
(3) Imagine flying above San Francisco!
(4) Imagine a brown banana.
(Compare: "Joe wants...")

****
Berkeley, Martin, Peacocke
****
Hypothesis: To imagine is to imagine perceiving.


*****My view?****
Hypothesis: the content of (visual) imagining is the content of experience, in a hypothetical mode.

A worry: imagination is too subjective to serve as a guide for possibility; it leads us to psychologism and solipsism.

Reply: maybe yes, maybe no. We need an account of the objectivity of the content of imaginings. Indicative conditionals shall be our guide!

A tour of the philosophy of indicative conditionals.
"One standard way of approaching the problem....begins with the assumption that a sentence of this kind expresses a proposition that is a function of the propositions expressed by its component parts...[a conditional assertion is a standard kind of speech act with a distinctive kind of content--a conditional proposition.] But there is also a long tradition according to which conditional sentences..are used to perform a special kind of speech act." (Stalnaker 1)
*the content/force distinction--used, for example, in philosophy of memory. (Call the distinctive force "conjecture," perhaps.)

*a feature shared by both camps: the creation of a ``derived context" by the antecedent.

*Do indicative conditionals have highly context-sensitive truth-conditions (Stalnaker), or no truth-conditions at all?
"What must be granted is that in some cases, indicative conditionals are implicitly about the speaker's beliefs. We must allow that what I say when I say something of the form 'if A, then B' may not be the same as what you would have said, uttering the same words." (Stalnaker, 12, emphasis added)

What pushes Stalnaker to this conclusion? Consider cases like the infamous case of Sly Pete!

[...]

Another case: The miners!

Lesson: we must restrict reasoning in the scope of an assumption. A few ways to do this: limit the application of certain rules in the scope of an assumption (Byrne, MacF and K), forbid discharging of an assumption (Heck?), or create more cases (me?)...

Monday, March 29, 2010

Post-meeting with Maziar 3/29

Q and A!

Q1) What are AP, IP, and [dot] as node-labels?
A1) AP = Adjective Phrase, IP = Inflectional Phrase, aka "S"...in the 90's and 00's, IPs changed to "TP"s ("T" for "tense"). As for the [dot], it appears to be something idiosyncratic to H&K.

Q2) What is "exportation"?
A2) no good answer/depends on context.

Q3) What's an "intervention effect"?
A3) This also depends, but think of "intervention" as "in between"; for example, when someone talks about negation and intervention, the question is whether the negation can take scope between two other scope-taking items in the sentence.

Q4) What about the semantics of "everything you like, I hate"?
A4) "everything you like" has been topicalized in this sentence; the underlying form should be "I hate everything (wh) you like." From that we can get "everything wh1 you like t1 is s.t. I hate it."

Friday, March 12, 2010

Simons Revisited

In particular, today I'm revisiting the type-lift. Should we just say that x in D_e are singletons of individuals, p in D_st are singletons of propositions, and be done with it?

Perhaps we should. Here are the benefits:
--solves the problem of OR-coordinations for nodes in D_e
--unifies account with DPs?
--solves the problem of independent composing for wide-scope OR
--limits the use of FA to OR/AND, EVERY (?), MIGHT/MUST: ie, operators on sentences. (this doesn't work for "every". Still worried about "every"...)
--if we, following H & vF, take the i variable to be tacit in the syntax--in fact, even if we don't but we like Yalcin's Hamblin-inspired modal resolution semantics--we have a nice result: MIGHT relates a pair of sets-of-sets.
[another way of putting it: we have independent evidence from *each* argument MIGHT takes that the types of the arguments should be sets of propositions rather than just propositions.]

Note though that:
--we still need an exception for the logical connectives and the modal operators; that is, we appear to need both HFA and FA.
--what does this do to the pragmatics?
--we have the "every" problem.

Running with the "every" problem...
this seems to suggest that when we have an operator that coordinates sets, we gotta type-lift the sets. So [[every]] takes a pair of sets of properties. We can give an entry for this:

...does this entry clash with our nice system up above???
Here's how it might be phrased as an objection. It seems like the truth conditions of

Every guest ate or drank

require early unioning: Ax[guest(x) -> (ate(x) or drank(x))]

But if we require this, shouldn't we reject our semantic entry for "or", which passes up undigested disjuncts? There are several solutions here. One could give an entry for [[every]] that achieves this effect:

[[every]] = \lambda f \subseteq D_et . \lambda g \subseteq D_et . \forall x[x \subseteq Union(f_n in f) -> x \subseteq Union(g_n in g)]

...but I'm not sure this is the way to go. It is theoretically expensive in our new system because it requires FA rather than HFA, and we had previously restricted FA to operators on propositions.
[LIGHTBULB! If we move the quantifier, maybe we actually DO have an operator on a proposition: Every guest 1 t1 ate or drank. "t1 ate or drank" could be type-lifted to a proposition. Must investigate this.]
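The early-unioning entry floated above can be mocked up as follows (a sketch only: `f` and `g` are Hamblin sets of properties, each property modeled as a set of individuals, and the explicit `domain` argument plus all names are my own scaffolding, not part of the entry):

```python
def every(f, g, domain):
    """Early-unioning [[every]]: union the members of each Hamblin set of
    properties *before* quantifying, then check ordinary subsethood."""
    F = frozenset().union(*f)   # Union(f_n in f)
    G = frozenset().union(*g)   # Union(g_n in g)
    return all(x in G for x in domain if x in F)

guests = frozenset({"a", "b"})
ate = frozenset({"a"})
drank = frozenset({"b"})

# "Every guest ate or drank" comes out true: each guest is in ate U drank,
# even though neither disjunct alone covers every guest.
print(every({guests}, {ate, drank}, {"a", "b"}))  # True
```

Note that this entry digests the disjuncts: it validates "Every guest ate or drank" without predicting the inference to "some guest ate and some guest drank," which fits the resistance voiced below.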

Here's how resistance might go, though. The inference

(P) Every guest ate or drank.
(C) Therefore, some guest ate and some guest drank.

Sounds good to me, but not so strong I'd want to call it semantic. I might just resist here.

What does Simons have to say about this?

"the examples suggest that the presence of an operator--perhaps simply of a quantifier--can, and in some cases perhaps must, put a halt to the independent composition." (17)

"I...tentatively suggest...[that] we simply have a choice in these cases, i.e. the compositional possibilities are not completely determined by the syntactic structure. Independent composition can halt at any node where there is an alternative option for composition, ie whenever composition arrives at a head, such as a modal, which can combine with a set argument directly." (20)

Sunday, March 7, 2010

Post-Meeting 3/5/10

Desiderata:
1) become better-versed in possible Gricean maneuvers w.r.t. Ross's puzzle. In particular, for the wide-scope version, a relevant alternative to the asserted

(Might p) v (Might q)

is

(Might p) & (Might q)

....in other words, why not just say the implicature? It's not any more complicated!
[initial response: ooh, damn. True dat.]

2) Understand Kratzer's semantics for modals. This has a conversational background parameter f in the index. Let's understand indices and contexts! Woo!

3) I never got to ask about propositions. Still not sure what they are. Definitely the case that in a tensed language there are tensed propositions: functions from world-time pairs to truth-values (in other words: you need both a world and a time to get to a truth-value.)

4) Regarding tensed propositions: consider the philosopher who thinks we don't believe tensed things. Therefore we need to believe eternal things. (Therefore there has to be hidden syntactic material in sentences like "Joe is fat"?...maybe.) Then the things we believe are not propositions? At least, according to the eternalist, they are not. "Nixon is president" does not have a truth-value/is not a proposition; it is a function from times to propositions.

Wednesday, March 3, 2010

Post-Mock with Richard

Thanks, Richard!!

So, I began by making a strong case for the pragmatic implicature reading. I then felt a little stuck: I had made *such* a good case for the Ross entailment's being pragmatic that another, semantic version seemed unnecessary. I'll now try to say (more clearly than I did then) just what needs to be said next.

First, the LF in which disjunction takes narrow scope has independent interest. It seems to be allowed by our syntax, so there is a question of what the semantics is for it.

Secondly, it is hard to claim that all claims with Surface Form MIGHT(A v B) are really wide-scope disjunctions: Zimmermann doesn't even try. (He just claims that it is the syntactic "sine qua non" of his solution. Presumably his evidence for the syntactic claim is only as good as the claim that the LF must be a wide-scope disjunction in order to validate the free choice inference. But this is exactly what Simons and I dispute; if we CAN give a semantics for the narrow LF on which the entailment still holds, the evidence for taking the surface form to really be a wide-scope disjunction *because the Ross entailment is felt* evaporates.)

Thirdly, there is a good set-centric generalization of our current semantics for MIGHT and MUST which suggests the interpretation for narrow-scope disjunction I'd like to give....since it is intuitively the function of MIGHT and MUST to characterize SETS, it is prima facie plausible that they could predicate irreducibly setwise properties of these sets. (We lack an argument, at least, that they cannot or shouldn't.)

Fourthly, the generalization I have suggested is supported on independent grounds found at the semantics-pragmatics interface. There is good reason to believe, for example, that what is merely a good inference at the level of unembedded assertion is a semantic entailment at the level of belief ascription. Why is *this*? Well, it relates to the idea that belief contexts are intensional (or even hyperintensional).

Consider, for example, a philosopher who (like nearly everyone) accepts that
"Hesperus is Phosphorus is nec. true" is true iff "Phosphorus is Phosphorus is nec. true" is true.
but nonetheless denies that
"Joe believes Hesperus is Phosphorus is nec. true" is true iff "Joe believes Phosphorus is Phosphorus is nec. true" is true. (Compare: "Joe believes Hesperus might not be Phosphorus" is true iff "Joe believes that Phosphorus might not be Phosphorus" is true.)
As for belief ascription, so for epistemic modal statements (which characterize information).

Compare, also the general maxim that belief ascriptions attribute diagonalized propositions, which comes directly from the maxim, "don't say "Joe believes p" unless Joe would assert "p."" This enables a (hyperintensional) account of belief ascription to piggyback on Stalnaker's diagonalization strategy in "Assertion." [which was originally a theory of communicative content.] If we do this, then we can get an account of why you shouldn't say e.g. "Joe believes that p or q" unless Joe would assert "p or q"--which, for Gricean reasons, he probably wouldn't assert unless both disjuncts were epistemically possible for him. Depending on the semantics we give for belief assertion [and the formalization diagonalization provides], it could be positively FALSE to ascribe to Joe the belief that p or q unless both disjuncts are epistemically possible for him.

Another route to (possibly?) the same point: we can associate with a sentence at a context both (a) a semantic value and (b) the characteristic effect the assertion of the sentence has on the context. When we get into belief contexts, we are characterizing an agent's state of mind; when we say "a believes p," we ascribe to him a state of mind which is the state of the context brought about by a felicitous assertion of p. Thus, we can connect the pragmatic effects of unmodalized disjunction with the semantic effects of modalized disjunction.

Lastly, and relatedly, the "mixed info-state" semantics I suggest for embedded disjunctions can help us account for puzzles like MacFarlane and Kolodny's miners puzzle.