Friday, April 9, 2010

Imagination

What do we philosophize about imagination for?
1) Berkeley: to show that idealism is true
2) Martin: to show that disjunctivism is true
3) Peacocke: ? [to show that Berkeley is right]
4) (me?) to investigate whether and how imagination is a guide to (some kind of) possibility
(Peacocke too: to rebut an argument about our knowledge of other minds and the possibility of inverted qualia).

Or, we could come at imagination directly from the following puzzles:

The form of "imagines" statements: Compare
(1) Joe imagines flying above San Francisco.
(2) Joe imagines a brown banana.
(3) Imagine flying above San Francisco!
(4) Imagine a brown banana.
(Compare: "Joe wants...")

****
Berkeley, Martin, Peacocke
****
Hypothesis: To imagine is to imagine perceiving.


****My view?****
Hypothesis: the content of (visual) imagining is the content of experience, in a hypothetical mode.

A worry: imagination is too subjective to serve as a guide for possibility; it leads us to psychologism and solipsism.

Reply: maybe yes, maybe no. We need an account of the objectivity of the content of imaginings. Indicative conditionals shall be our guide!

A tour of the philosophy of indicative conditionals.
"One standard way of approaching the problem....begins with the assumption that a sentence of this kind expresses a proposition that is a function of the propositions expressed by its component parts...[a conditional assertion is a standard kind of speech act with a distinctive kind of content--a conditional proposition.] But there is also a long tradition according to which conditional sentences..are used to perform a special kind of speech act." (Stalnaker 1)
*the content/force distinction--used, for example, in philosophy of memory. (Call the distinctive force "conjecture," perhaps.)

*a feature shared by both camps: the creation of a "derived context" by the antecedent.

*Do indicative conditionals have highly context-sensitive truth-conditions (Stalnaker), or no truth-conditions at all?
"What must be granted is that in some cases, indicative conditionals are implicitly about the speaker's beliefs. We must allow that what I say when I say something of the form 'if A, then B'may not be the same as what you would have said, uttering the same words." (Stalnaker, 12, emphasis added)

What pushes Stalnaker to this conclusion? Consider the infamous case of Sly Pete!

[...]

Another case: The miners!

Lesson: we must restrict reasoning in the scope of an assumption. Three ways to do this: limit the application of certain rules in the scope of an assumption (Byrne, MacF and K), forbid discharging of the assumption (Heck?), or create more cases (me?)...
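For my own reference, the rough shape of the miners argument (my reconstruction of the Kolodny/MacFarlane case, not a quotation):
(1) The miners are in shaft A or in shaft B.
(2) If they are in A, we ought to block A.
(3) If they are in B, we ought to block B.
(4) So, reasoning by cases from (1) with (2) and (3): we ought to block A or we ought to block B.
But intuitively we ought to block neither shaft (blocking the wrong shaft drowns all ten miners; blocking neither loses only one). Something in the case-reasoning carried out under the assumptions in (4) has to be restricted.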

Monday, March 29, 2010

Post-meeting with Maziar 3/29

Q and A!

Q1) What are AP, IP, and [dot] as node-labels?
A1) AP = Adjective Phrase, IP = Inflectional Phrase, aka "S"...in the '90s and '00s, IPs became "TP"s ("T" for "tense"). As for the [dot], it appears to be something idiosyncratic to H&K.

Q2) What is "exportation"?
A2) No good general answer; it depends on context.

Q3) What's an "intervention effect"?
A3) This also depends, but think of "intervention" as "in between"; for example, when someone talks about negation and intervention, the question is whether the negation can take scope between two other scope-taking items in the sentence.

Q4) What about the semantics of "everything you like, I hate"?
A4) "everything you like" has been topicalized in this sentence; the underlying form should be "I hate everything (wh) you like." From that we can get "everything wh1 you like t1 is s.t. I hate it."

Friday, March 12, 2010

Simons Revisited

In particular, today I'm revisiting the type-lift. Should we just say that what would have been an x in D_e is now the singleton {x}, what would have been a p in D_st is now the singleton {p}, and be done with it?

Perhaps we should. Here are the benefits:
--solves the problem of OR-coordinations for nodes in D_e
--unifies account with DPs?
--solves the problem of independent composition for wide-scope OR
--limits the use of FA to OR/AND, EVERY (?), MIGHT/MUST: i.e., operators on sentences. (This doesn't work for "every". Still worried about "every"...)
--if we, following H & vF, take the i variable to be tacit in the syntax--in fact, even if we don't but we like Yalcin's Hamblin-inspired modal resolution semantics--we have a nice result: MIGHT relates a pair of sets-of-sets.
[another way of putting it: we have independent evidence from *each* argument MIGHT takes that the types of the arguments should be sets of propositions rather than just propositions.]
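To make the singleton-lift picture concrete for myself, here is a toy model. Everything in it (the worlds, the info state, the exact entry for MIGHT) is my own illustrative choice, not Simons's or Yalcin's official semantics, and I've flattened MIGHT's first argument to a plain set of worlds to keep the toy small:

def lift(p):
    """Singleton lift: a bare proposition becomes the set {p}."""
    return {p}

def OR(P, Q):
    """Disjunction just collects alternatives: it passes up undigested disjuncts."""
    return P | Q

def MIGHT(info, P):
    """MIGHT relates an info state (a set of worlds) and a SET of propositions;
    here it requires every alternative in P to be live in the state."""
    return all(info & p for p in P)

# Propositions as frozensets of worlds.
rain = frozenset({"w1", "w2"})
snow = frozenset({"w2", "w3"})
info = {"w1", "w3"}   # compatible with rain and with snow, but not with w4

print(MIGHT(info, OR(lift(rain), lift(snow))))   # True: both disjuncts are live
print(MIGHT(info, lift(frozenset({"w4"}))))      # False: w4 has been ruled out

The point of the toy is just that "or" never has to digest its disjuncts: MIGHT receives the whole set and checks each alternative.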

Note though that:
--we still need an exception for the logical connectives and the modal operators; that is, we appear to need both HFA and FA.
--what does this do to the pragmatics?
--we have the "every" problem.

Running with the "every" problem...
this seems to suggest that when we have an operator that coordinates sets, we gotta type-lift the sets. So [[every]] takes a pair of sets of properties. We can give an entry for this (see below).

...does this entry clash with our nice system up above???
Here's how it might be phrased as an objection. It seems like the truth conditions of

Every guest ate or drank

require early unioning: Ax[guest(x) -> (ate(x) or drank(x))]

But if we require this, shouldn't we reject our semantic entry for "or", which passes up undigested disjuncts? There are several solutions here. One could give an entry for [[every]] that achieves this effect:

[[every]] = \lambda f \subseteq D_et . \lambda g \subseteq D_et . \forall x \in D_e [x \in \bigcup_{f_n \in f} f_n -> x \in \bigcup_{g_n \in g} g_n]   (reading each f_n, g_n of type et as the set of individuals it maps to true)
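Quick sanity check (my own instantiation): let "or" pass up its undigested disjuncts, so that f = {[[guest]]} and g = {[[ate]], [[drank]]}. The entry then delivers \forall x[x \in [[guest]] -> x \in ([[ate]] \cup [[drank]])], i.e. \forall x[guest(x) -> (ate(x) or drank(x))], which is exactly the early-unioned truth conditions above.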

...but I'm not sure this is the way to go. It is theoretically expensive in our new system because it requires FA rather than HFA, and we had previously restricted FA to operators on propositions.
[LIGHTBULB! If we move the quantifier, maybe we actually DO have an operator on a proposition: Every guest 1 t1 ate or drank. "t1 ate or drank" could be type-lifted to a proposition. Must investigate this.]

Here's how resistance might go, though. The inference

(P) Every guest ate or drank.
(C) Therefore, some guest ate and some guest drank.

Sounds good to me, but not so strong I'd want to call it semantic. I might just resist here.

What does Simons have to say about this?

"the examples suggest that the presence of an operator--perhaps simply of a quantifier--can, and in some cases perhaps must, put a halt to the independent composition." (17)

"I...tentatively suggest...[that] we simply have a choice in these cases, i.e. the compositional possibilities are not completely determined by the syntactic structure. Independent composition can halt at any node where there is an alternative option for composition, ie whenever composition arrives at a head, such as a modal, which can combine with a set argument directly." (20)

Sunday, March 7, 2010

Post-Meeting 3/5/10

Desiderata:
1) become better-versed in possible Gricean maneuvers w.r.t. Ross's puzzle. In particular, for the wide-scope version, a relevant alternative to the asserted

(Might p) v (Might q)

is

(Might p) & (Might q)

....in other words, why not just say the implicature? It's not any more complicated!
[initial response: ooh, damn. True dat.]

2) Understand Kratzer's semantics for modals. This has a conversational background parameter f in the index. Let's understand indices and contexts! Woo!

3) I never got to ask about propositions. Still not sure what they are. Definitely the case that in a tensed language there are tensed propositions: functions from world-time pairs to truth-values (in other words: you need both a world and a time to get to a truth-value.)

4) Regarding tensed propositions: consider the philosopher who thinks we don't believe tensed things, and therefore that we must believe eternal things. (Therefore there has to be hidden syntactic material in sentences like "Joe is fat"?...maybe.) Then the things we believe are not propositions? At least, according to the eternalist, they are not: "Nixon is president" does not have a truth-value/is not a proposition; it is a function from times to propositions.
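In type terms (my own gloss on the above): a tensed proposition is a function p : W \times T \to \{0,1\}, which by currying carries the same information as a function T \to (W \to \{0,1\}), i.e. a function from times to eternal propositions. The question is which of these (if either) is the object of belief and the semantic value of "Nixon is president".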

Wednesday, March 3, 2010

Post-Mock with Richard

Thanks, Richard!!

So, I began by making a strong case for the pragmatic implicature reading. I then felt a little stuck: I had made *such* a good case for the Ross entailment's being pragmatic that another, semantic version seemed unnecessary. I'll now try to say (more clearly than I did then) just what needs to be said next.

First, the LF in which disjunction takes narrow scope has independent interest. It seems to be allowed by our syntax, so there is a question of what the semantics is for it.

Secondly, it is hard to claim that all claims with surface form MIGHT(A v B) are really wide-scope disjunctions: Zimmermann doesn't even try. (He just claims that it is the syntactic "sine qua non" of his solution. Presumably his evidence for the syntactic claim is only as good as the claim that the LF must be a wide-scope disjunction in order to validate the free choice inference. But this is exactly what Simons and I dispute; if we CAN give a semantics for the narrow LF on which the entailment still holds, the evidence for taking the surface form to really be a wide-scope disjunction *because Ross's entailment is felt* evaporates.)

Thirdly, there is a good set-centric generalization of our current semantics for MIGHT and MUST which suggests the interpretation for narrow-scope disjunction I'd like to give. Since it is intuitively the function of MIGHT and MUST to characterize SETS, it is prima facie plausible that they could predicate irreducibly setwise properties of those sets. (We lack an argument, at least, that they cannot or shouldn't.)

Fourthly, the generalization I have suggested is supported on independent grounds found at the semantics-pragmatics interface. There is good reason to believe, for example, that what is merely a good inference at the level of unembedded assertion is a semantic entailment at the level of belief ascription. Why is *this*? Well, it relates to the idea that belief contexts are intensional (or even hyperintensional).

Consider, for example, a philosopher who (like nearly everyone) accepts that
"Hesperus is Phosphorus is nec. true" is true iff "Phosphorus is Phosphorus is nec. true" is true.
but nonetheless denies that
"Joe believes Hesperus is Phosphorus is nec. true" is true iff "Joe believes Phosphorus is Phosphorus is nec. true" is true. (Compare: "Joe believes Hesperus might not be Phosphorus" is true iff "Joe believes that Phosphorus might not be Phosphorus" is true.)
As for belief ascription, so for epistemic modal statements (which characterize information).

Compare also the general maxim that belief ascriptions attribute diagonalized propositions, which comes directly from the maxim "don't say "Joe believes p" unless Joe would assert "p"." This enables a (hyperintensional) account of belief ascription to piggyback on Stalnaker's diagonalization strategy in "Assertion" [which was originally a theory of communicative content]. If we do this, then we can get an account of why you shouldn't say e.g. "Joe believes that p or q" unless Joe would assert "p or q"--which, for Gricean reasons, he probably wouldn't assert unless both disjuncts were epistemically possible for him. Depending on the semantics we give for belief ascription [and the formalization diagonalization provides], it could be positively FALSE to ascribe to Joe the belief that p or q unless both disjuncts are epistemically possible for him.
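For my own reference, the diagonalization move in "Assertion" (standard Stalnaker, stated in my notation): associate with an utterance a propositional concept A, a function from possible contexts of utterance to propositions, so that A(c)(w) is a truth-value. The diagonal proposition is \dagger A = \lambda w . A(w)(w): true at a world w just in case what the sentence says, as uttered in w, is true in w. The maxim above then says, roughly, that "Joe believes p" attributes the diagonal of the propositional concept Joe would express by asserting "p".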

Another route to (possibly?) the same point: we can associate with a sentence at a context both (a) a semantic value and (b) the characteristic effect the assertion of the sentence has on the context. When we get into belief contexts, we are characterizing an agent's state of mind; when we say "a believes p," we ascribe to him a state of mind which is the state of the context brought about by a felicitous assertion of p. Thus, we can connect the pragmatic effects of unmodalized disjunction with the semantic effects of modalized disjunction.

Lastly, and relatedly, the "mixed info-state" semantics I suggest for embedded disjunctions can help us account for puzzles like MacFarlane and Kolodny's miners puzzle.

Friday, February 26, 2010

Zimmermann basics

Zimmermann is interested in analyzing (unembedded) disjunctions as "lists of epistemic possibilities." Here, what "list" crucially involves is closure: a list of items is closed when nothing else can be added to the list (intuitively, the list items union to and *cover* the contextually determined set over which we are quantifying--so nothing can be added to the list except a set which is a union of subsets of the sets that are already in it). Various "closure operators" which may be added to "open" lists are discussed (and many of them involve Montague type-lifting the items in the list).

Syntactically speaking, what this means is that all choice sentences are analyzed as wide-scope disjunctions (the contrast here with Simons couldn't be greater). So the following inference does follow for Zimmermann:

Might(A)
Therefore, Might (A v B)

...it's just that the conclusion is not a parse that occurs in natural language. As far as I can tell, this gives [[or]] quite a complicated semantic entry. It is going to be a recursive, raised-type thing, but the basis for the recursion will be:

[[or]] = \lambda p . \lambda q . might p and might q.

In order to figure out how the recursion goes, I'll need to learn more about Montague lifts.
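For reference, the basic Montague lift, which I take to be the building block here (how Zimmermann iterates it into his closure operators is what I still need to work out): an expression of type \tau denoting \alpha lifts to \lambda X_{<\tau,t>} . X(\alpha), of type <<\tau,t>,t>. So an individual a becomes the generalized quantifier \lambda P_{et} . P(a), and a proposition p becomes \lambda Q_{<st,t>} . Q(p).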

Friday, February 19, 2010

Post-Meeting 2/19/10

--Challenge: Consider two felt entailments:
(1) (Might A) v (Might B) => (Might A) & (Might B)
(2) Might (A v B) => (Might A) & (Might B)
Considering that we will, according to Simons, need recourse to pragmatic mechanisms to explain the first felt entailment, isn't a semantics that gives us a semantic entailment in (2) redundant?

--Answer: I'm not sure. It depends on:
1) whether the first felt entailment is really that strong. I guess I don't think it is--especially not on the assumption that epistemic modal operators work in the same way that deontic and other modals do (but perhaps this is not a good assumption to make...). Even without relying on the analogy with other forms of modality, it seems like the felt entailment is NOT CANCELLED by the rider "but I don't know which"; rather, the sentence is forced to a reading on which the entailment was never felt in the first place!

There is Zimmermann's argument that A v B => Might A & Might B. This is a good one in most circumstances. But it is definitely pragmatic.

2) the status and viability of my/Simons's claim that epistemic modal operators by default take type-lifted arguments--that is, sets of propositions rather than bare propositions. Reply: but the Hamblin Type-shift will make it the case that bare propositions are also (singleton) sets of propositions. Counter-reply: ok, well, I was considering an alternative semantics in which the type-lift doesn't occur until the derivation hits an operator that demands it: that means that, when [[Might]] hits a set {p1, p2} of propositions, it will try to compose with the whole unit before trying to compose with the individual disjuncts. Since it CAN compose with the whole unit, that's what it will do--it will never go to the fallback step. (Simons thinks that it does, sometimes, but I don't think so.)
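A toy rendering of that alternative (mine, not Simons's formulation), just to pin down the order of operations: distribute only when an operator demands it, and an operator that CAN take the whole set of alternatives never falls back to composing with the individual members.

def compose(op, alternatives):
    """alternatives: a set of propositions (propositions = frozensets of worlds)."""
    if getattr(op, "takes_sets", False):
        return op(alternatives)              # compose with the whole unit
    return {op(p) for p in alternatives}     # pointwise (Hamblin-style) fallback

info = {"w1", "w3"}                          # a made-up information state

def might(alts):
    """Set-taking MIGHT relative to info: every alternative must be live."""
    return all(info & p for p in alts)
might.takes_sets = True                      # so compose() never distributes here

p1, p2 = frozenset({"w1", "w2"}), frozenset({"w2", "w3"})
print(compose(might, {p1, p2}))              # True, and MIGHT saw {p1, p2} whole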

Q) How devastating is it, for Simons, that on her semantics [[must]] and [[might]] aren't duals?
A) It's not so good, but note that in her favor they do come out duals in the single-proposition case: that's the case about which we have the strongest prima facie intuitions.
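To spell it out (using the standard single-proposition entries, which I'm assuming are what her semantics reduces to in that case): relative to an information state s, must p is true iff s \subseteq p, and might p is true iff s \cap p \neq \emptyset. Since s \cap p \neq \emptyset iff it is not the case that s \subseteq (W \setminus p), we get might p iff not-must not-p, as duality requires.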

Counter-answer) Yeah, but those negations of the second type ("You can't take French or Spanish, you HAVE to take Spanish") sound awfully metalinguistic!
[what is the status of metalinguistic negation...?]