Week 7 – White on the principle of indifference

Handout here; original paper here. Unfortunately we ended up discussing a slightly out-of-date version of the paper – sorry not to have checked on whether a more recent version existed.

Summary – White makes a fairly compelling attack on ‘mushy credence views’ with the simple but ingenious example of a coin which has been painted p on one side and not-p on the other, with whichever is correct going on the ‘heads’ side. If your credence in p starts out mushy, mushy credence views seem to predict either that your credence in ‘heads came up’ should go mushy, or that your credence in p should go precise. But either option seems to conflict with some fairly obvious premises. I was convinced by this side of the argument – and it does give us good reason to go back to the principle of indifference and see what was wrong with it.
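To make the structure of the example vivid, here is a quick Monte Carlo sketch (my own illustration, not from the paper or the handout): it treats p as true with a stand-in probability q, tosses a fair coin, and checks how often heads comes up among the runs where the ‘p’-labelled side faces up.

```python
import random

def p_heads_given_p_side_up(q, trials=100_000):
    """Estimate P(heads | the 'p'-labelled side faces up), with P(p) = q.

    The true proposition is painted on the heads side, so the 'p' label
    faces up exactly when the toss outcome matches p's truth value.
    """
    heads_count = 0
    p_up_count = 0
    for _ in range(trials):
        p = random.random() < q        # whether p is true on this run
        heads = random.random() < 0.5  # a fair toss
        p_side_up = (heads == p)       # 'p' shows iff heads matches p's truth value
        if p_side_up:
            p_up_count += 1
            heads_count += heads
    return heads_count / p_up_count

for q in (0.2, 0.5, 0.8):
    print(q, round(p_heads_given_p_side_up(q), 3))
```

Unconditionally, heads comes up half the time whatever q is; but conditional on seeing ‘p’ face up, the frequency of heads just is q. That is the source of the tension: a mushy attitude to p threatens to infect your attitude to heads, or else a sharp 1/2 attitude to heads threatens to sharpen your attitude to p.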

We wondered whether the coin argument would work for unknowable p – as White states the argument, it relies on the person running the coin toss knowing whether p. But it seems we could replace ‘knows whether p’ in the example with ‘has credence 1 either in p or in not-p’, and the same sort of objection recurs for the mushy credence view.

The problems for the mushy credence responses take us back to the multiple partitions problem, and to cases like van Fraassen’s cube factory. Given that a mystery cube is less than 2 feet wide, what should our credence be that it’s less than 1 foot wide? The answer given by the principle of indifference depends on whether we partition its state space by surface area, by volume, or by side length.
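For concreteness, here is a small sketch of the partition-dependence (my own illustration, not from van Fraassen or White): spreading credence uniformly over side length, over surface area (A = 6s²), and over volume (V = s³) gives three different answers to the same question.

```python
import random

N = 100_000

# Three ways of being 'indifferent' over cubes whose side length is under 2 ft.
side_uniform   = [random.uniform(0, 2) for _ in range(N)]                # uniform side length
area_uniform   = [(random.uniform(0, 24) / 6) ** 0.5 for _ in range(N)]  # uniform surface area (A = 6*s**2, max 24)
volume_uniform = [random.uniform(0, 8) ** (1 / 3) for _ in range(N)]     # uniform volume (V = s**3, max 8)

for name, sides in [("side length", side_uniform),
                    ("surface area", area_uniform),
                    ("volume", volume_uniform)]:
    prob = sum(s < 1 for s in sides) / N
    print(f"P(side < 1 ft), indifferent over {name}: {prob:.3f}")
# Prints roughly 0.5, 0.25 and 0.125 – three incompatible verdicts from one principle.
```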

White says he ‘doesn’t really have an answer’ as to what to say about cases like this. But an answer can be extracted from what he says next, and I think it’s a plausible one. This is that our evidence does in fact tell on the question of which partition to use (or which weighted combination of partitions, perhaps) – it’s just that we’re not generally in a position to know what partition our evidence supports. This seems a good response to me, and also a promising direction for further enquiry. Principles governing rational choice of state-space partition for simple chance set-ups seem like viable topics of study – it seems at least plausible that we incorporate some such principles into folk theory, even if we don’t know precisely which ones they are.

The obvious question to ask in this connection is ‘what determines the appropriateness of a choice of state space?’. Different answers to this question look like they yield norms of different strengths. For example, assume an indeterministic world – the ideal choice of state-space partition for some system consists in the measure given by the laws over the state space determined by the laws and the whole past history. But the norm ‘match credences to the probabilities given by this ideal partition’ is very demanding to satisfy. To meet it, we’d have to know at least the true laws and the whole past history, and be able to do the number-crunching. No existing rational being can get close. Compare this with the norm ‘believe only the truth’ for rational belief in general.

A less demanding norm would say that the rational choice of state-space partition is the one based on the measure given by the laws over the state space determined by the laws and our evidence about the past history. That’s a lot less restrictive, so the state space would be a lot larger. This seems to correspond better to our epistemic state. But it’s still absurdly demanding – we can weaken the norm further, and say that the rational choice of state-space partition is the one given by current best scientific theory over the state space determined by current best scientific theory and our evidence about the past history.

This final norm – appealing to evidence about the past plus current best scientific theory – seems to me like an appropriate candidate for the rational norm governing state-space partition choices in applications of the principle of indifference. It still, in a sense, requires logical omniscience to know what exactly is rational according to it – because we don’t in general know how the factors determining the partition determine it in detail. That requires accurate modelling of the system in question. But we can say something basic about which factors have an influence.

One thing that stood out for me is that White uses a non-Lewisian notion of objective chance throughout. For him chances seem to be (roughly) probabilities conditionalized on our actual evidence, where our priors are the ones a rational agent who knew the laws of nature would have. This means that the chance of heads for a coin already tossed but not yet revealed can be 1/2 – Lewisian chances, which are conditionalized on the whole history up to the present, don’t allow for this. So White may have been thinking of one of the less demanding norms, perhaps the second of the three mentioned above.
