Week 2 – Weatherson on pragmatics

The paper we discussed this week is here, and the handout is here.

Cian had a lot to say about this paper. Most of our discussion stemmed from certain kinds of counterexample that he suggested. I hope I’m not misrepresenting him in what follows:

The first kind of case is meant as a counterexample to the left-right direction of principle (1), which says roughly that you believe P if and only if conditionalizing on P doesn’t change any of your conditional preferences over things that matter. Weatherson says that the L-R direction ‘seems trivial’.
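
As a rough formalization (our gloss on the handout’s principle, not Weatherson’s exact wording): let A and B range over the options that matter, let r range over the relevant conditions, and read ‘A ≽ B given r’ as preferring A to B conditional on r. Then

$$\mathrm{Bel}(p) \iff \forall A, B, r:\ \big(A \succeq B \text{ given } r \;\leftrightarrow\; A \succeq B \text{ given } r \wedge p\big)$$

The left-right direction says that if you believe p, conditionalizing on p disturbs none of these conditional preferences. But what about the following simple case: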

I’m considering whether to buy insurance against meteor strikes. I believe my house won’t be struck by a meteor tomorrow; but I still buy the insurance just in case. Conditional on my house not being struck by a meteor tomorrow, however, I prefer not to buy the insurance.

I guess what Weatherson would have to say is that your choosing to buy the insurance is incompatible with your believing that your house won’t be struck. Is this really that plausible? Perhaps it could be motivated by the following line of thought: a) if I don’t believe that it might be struck, I won’t buy the insurance. But b) if I believe that it won’t be struck, I don’t believe it might be struck. So c) if I believe it won’t be struck, I won’t buy the insurance.

The weak point here seems to be b). If we can’t believe that something won’t happen while still believing that it might, then it’s hard to maintain that we have many beliefs at all.

(It’s not essential to the case that the striking is in the future. Consider instead a kind of insurance which pays out if, unbeknownst to you, the house has already been struck.)

The second kind of case is also directed against the left-right direction of (1). Let q be a conspiracy theory which says that the real president of the US is chosen randomly by aliens, and the election is just a sham. Let p be that Obama is president. It seems coherent that we could prefer armed rebellion to acquiescence conditional on q, but prefer acquiescence conditional on p & q (reasoning that in this case, since the aliens by chance picked the right man for the job, we don’t need to bother rebelling). Then the left-right direction of (1) tells us that we don’t believe p. But this seems silly – of course we believe that Obama is president.

Can Weatherson appeal to his restriction to live and salient options or to relevant and salient propositions to deflect these counterexamples? It seems not – presumably there is no problem with describing cases where armed rebellion and insurance-buying are possible for us, and we’re seriously considering them while still retaining our beliefs that the house won’t be struck by a meteor and that Obama is president. Similarly, presumably we can retain these beliefs even while taking seriously the conspiracy theory and meteor-striking possibilities, and/or currently considering them.

One way out would be to deny that we can retain these beliefs while taking the conspiracy theory or the meteor-striking possibility seriously – that is, to hold that taking a possibility seriously precludes believing its negation. But what justifies this restriction? We worried that any sense of ‘taking seriously’ that would do the job would end up being parasitic on belief, and therefore unavailable for use in characterising it. For example, the proposal that a necessary condition for taking something seriously is not believing that it’s false seems to fall prey to exactly this circularity.

The third kind of case Cian suggested was meant as a counterexample to the set of principles on page 10. In particular, he didn’t like the consequence that believing p entails that conditionalizing on p can’t change what you believe about q. Suppose you have credence 0.5 in a coin landing heads, and credence 0.5 in it landing tails. Let q be ‘the coin lands heads’, and p be ‘there is no goblin that will make the coin land tails’. Now conditional on there being no goblin which will make the coin land tails, your credence in q should go up a tiny bit (after all, you’re still leaving open the possibility that there’s a goblin who will make it land heads). But if the threshold for belief is 1/2, as these principles suggest, then conditionalizing on p is enough to shift you from not believing q to believing q. So in normal cases of coin tossing we don’t believe that there is no goblin which will make the coin land tails. This isn’t good.
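
To make the arithmetic explicit, here is a minimal sketch in Python; the tiny goblin credence EPS is our illustrative assumption, not a figure from the discussion:

```python
# Goblin case: q = "the coin lands heads",
# p = "there is no goblin that will force tails".
EPS = 1e-8  # illustrative tiny credence in each kind of goblin (assumption)

# Three exclusive, exhaustive hypotheses about interference:
p_heads_goblin = EPS          # a goblin will force heads
p_tails_goblin = EPS          # a goblin will force tails
p_no_goblin = 1 - 2 * EPS     # no goblin: the coin is fair

# Unconditional credence in q: exactly 0.5 by symmetry.
cr_q = p_heads_goblin + 0.5 * p_no_goblin
print(cr_q)                   # 0.5 (up to float rounding)

# Conditionalize on p, i.e. rule out the tails-forcing goblin:
cr_q_given_p = (p_heads_goblin + 0.5 * p_no_goblin) / (1 - p_tails_goblin)
print(cr_q_given_p)           # ~0.500000005, just past the 1/2 threshold
```

So if believing q just takes credence above 1/2, conditionalizing on p shifts you from not believing q to believing q, exactly as the case requires.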

We had plenty of other thoughts, but that’s probably enough to be going on with!

2 thoughts on “Week 2 – Weatherson on pragmatics”

  1. Lots of good comments here. Here are a couple of thoughts.

    I think I want to just bite the bullet on the goblin case. If you don’t give credence 0 to the goblins messing with the coin, then I don’t think you have any reason to have credence 0.5 in it landing heads. (Unless you have reason to think the goblins are exactly as likely to futz with it in a pro-heads way as a pro-tails way, but that would be an even odder case than the one described.) So given credence 0.5 in heads, you should have credence 0 in goblins messing with the coin. And if that’s right, conditionalisation won’t change the credence in heads. So I think if we’re modelling rational agents here, there isn’t a problem. Irrational agents, who assign credence 0.5 to heads, and positive credence to goblins, end up having odd views on my theory. But that’s right – those views are odd.
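
    (Spelling out the arithmetic behind that parenthetical, modelling the goblins as guaranteeing their preferred outcome: with credence g_H in a heads-forcing goblin and g_T in a tails-forcing one, credence in heads is g_H + (1 − g_H − g_T)/2 = 1/2 + (g_H − g_T)/2, which equals 0.5 exactly when g_H = g_T. So positive goblin credence plus credence 0.5 in heads forces the balanced, ‘even odder’ case.)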

    The bigger issue concerns what it is I’m trying to analyse. I think there are a lot of cases where the English sentence “S believes that p” is intuitively correct, but I deny that S believes that p. That’s because the concept I’m trying to analyse isn’t the concept uniquely, or even I suspect usually, picked out by the English word ‘believe’. Well, what concept is it, then?

    That’s something I could have been clearer on. But what I was interested in is something in the vicinity of the following three concepts.

    1) The attitude an agent should have towards p when we can take p to be one of the propositions that structures the game the agent takes themselves to be playing.

    When we’re doing game theory/decision theory, we often write down rules, payout tables etc, and say the agent is playing a particular game with those rules, payout tables etc. But what must things be like from the agent’s perspective if that’s true? Not just that they’re fairly confident that those rules apply, that those are the payout tables etc. If that were true, several game-theoretic arguments would trivially fail. One possibility would be to say the agent must have credence 1 that those things are true. But that would mean game theory was never applicable. My idea is that we need an interest-relative concept here, namely the one I articulate.

    And I think this matches up with intuitions about these cases. I don’t think when running through the decision tree in the aliens case, we simply take it as given that Obama is President. There will be nodes in the tree where he is, and nodes where he isn’t. That means we aren’t taking it as given. That means we don’t, in my preferred sense, believe it. The same is true for the insurance case I think. Whenever you’re making a decision, and the best representation of the decision includes some ~p cells, then you don’t believe p.

    2) The states we must justifiably be in in order to have a justified belief, in the sense that epistemologists usually use the term ‘justified belief’.

    One can justifiably have high credence in a lottery proposition without, according to many theories, justifiably believing it. But I don’t think justified belief requires justified credence 1. That would be a sceptical theory. So we need something else; I think what we need is a theory like mine.

    3) The mental state component of knowledge.

    If you think knowledge is a mental state, or you think it isn’t ‘factorisable’, this won’t make a whole lot of sense.

    My hope, and this wasn’t really argued at all, was that (1), (2) and (3) would lock in on the same concept. And, moreover, that this would be an interest-relative concept. And, though this is insanely optimistic, that it would be the concept I described.

    A less optimistic project was that the kind of theory I described, based around invariance under salient conditionalisation, would be the right analysis of the kind of thing that (1) is getting at. That is, what I’m really doing is drawing conditions for proper use of game/decision-theoretic models. I think that matches an epistemologically significant notion of belief, but that could well be a major mistake on my part.

  2. Thanks for that, Brian – it really clarifies what you were trying to do in the paper.

    About the goblins – the worry is that we have basic symmetry reasons to assign 0.5 credence to heads and 0.5 credence to tails, given that the coin will land either heads or tails, regardless of what goblins are doing. This presupposes that we assign equal credence to goblins fiddling in a pro-tails way and in a pro-heads way. But that again seems justified by the symmetry of the case – goblins seem just as likely to favour heads as to favour tails. And if you don’t find these latter symmetry considerations compelling, we can simply stipulate the equality (i.e. switch to the case you call ‘even odder’).

    So I don’t see why you think an agent is irrational who assigns (conditional on the coin landing either heads or tails) 0.5 credence to heads, 0.5 to tails, 0.00000001 to tails-favouring goblins, and 0.00000001 to heads-favouring goblins. And such an agent doesn’t believe there are no goblins of either sort according to the view in the paper. But given what you go on to say, perhaps that’s a bullet you want to bite, as a case of divergence between a folk notion of belief and a useful philosophical notion.
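
    To put numbers on this: conditional on there being no tails-favouring goblin, credence in heads goes from 0.5 to 0.5/(1 − 0.00000001) ≈ 0.500000005, and by symmetry, conditionalizing on there being no heads-favouring goblin gives tails the same nudge. Either way a credence crosses the 1/2 threshold, which is why this agent counts as not believing either ‘no goblin’ proposition on the paper’s principles.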
