The MLE blog returns!

Yep, that’s right: this blog is now back in business, in order to provide an online accompaniment to the MLE seminar series, now being run by Natalia Hickman and Neil Dewar. Although we can’t promise reports on the seminars in the manner of Al, we will post announcements of upcoming sessions and links to (Weblearn-held) copies of the reading.

We only have one more session this term: that will be on Wednesday of 8th week (4th December), from 4:30pm-6:00pm in the Colin Matthew Room; the paper will be “The Last Dogma of Type Confusions” by Ofra Magidor (presenter to be announced shortly). Other than that, the sessions from this term have been as follows:

Session 1 (Wednesday 23rd October)

Jonathan Schaffer, “Spacetime the One Substance” (presenter: Neil Dewar)

Session 2 (Wednesday 6th November)

Paul Hovda, “What is Classical Mereology?” (presenter: Josh Parsons)

Session 3 (Wednesday 20th November)

Daniel Greco, “Iteration and Fragmentation” (presenter: Natalia Hickman)

The future of this blog…

… is uncertain. Next term we’re passing on the running of the MLE seminar to James Studd and Andrew Bacon – it’ll be up to them if they want to continue blogging here about our discussions. It’s been fun – thanks everyone/anyone for reading!


Week 8 – Bennett on metametaphysics

In my and Natalia’s final session convening MLE we discussed Karen Bennett’s ‘Composition, colocation, and meta-ontology’, available here. The handout is here.

In this paper, Bennett distinguishes three different versions of the ‘dismissive’ attitude towards metaphysical questions, and asks whether any of them are appropriate in the case of the debates over composition and colocation. She (rightly, we thought) argues that we shouldn’t automatically put all metaphysical debates in the same category – dismissivism might be appropriate for some debates, but inappropriate for others.

The three kinds of dismissivism discussed are ‘anti-realism’, which claims that there is no fact of the matter about the answer to some metaphysical question; ‘semanticism’, which claims that some metaphysical question is ‘merely verbal’, and that the answer to it is analytic in our language (whichever language that is); and ‘epistemicism’, which claims that while a metaphysical question does have a non-analytic answer, we are not currently in a position to judge either way on it. Bennett goes on to argue that, for the debates she considers, semanticism is implausible and epistemicism is a live option. But because she doesn’t say much about anti-realism, the positive arguments for epistemicism seem pretty weak. (I wasn’t really convinced by the negative argument against semanticism either – it boils down to the claim that we can’t define things into existence, which will be denied by anyone with neo-Fregean sympathies.)

The argument for the disjunctive conclusion that either anti-realism or epistemicism is true about the debates considered goes via the claim that these debates are ‘difference-minimizing’. I wasn’t entirely sure what this was meant to mean – does whether a debate is difference-minimizing depend on the intrinsic properties of the issue being debated, or on the participants in the debate, or both? For the argument to lead to any substantive conclusions, I think it must be that the issue is intrinsically such that rational philosophers debating it will tend to difference-minimize – but Bennett on various occasions mentions philosophers (Burke, Rea, Cameron, Parsons) who don’t difference-minimize. Couldn’t this form the basis for a counter-argument? I suppose Bennett has to rely on the claim that these philosophers are just badly mistaken and have misjudged the intrinsic properties of the issue the debate is about. Either way, I thought the notion of ‘difference-minimizing’ was too vague and weak to have a strong metametaphysical conclusion founded on it.

Part of the argument that these debates are intrinsically difference-minimizing seems to be that structurally symmetrical problems arise for both sides of the debates. This feature of the dialectic, if genuine, does seem to be of real metametaphysical interest – someone who wanted to defend a form of structuralism about metaphysics might argue that the different sides agree on the structure of the correct view, which is all there really is to a view, so that they’re not really disagreeing at all (I take it this would amount to a form of anti-realism). But it’s not clear how this feature gives us much motivation for epistemicism – if the debate really is symmetrical in nature, then the claim that there is an unknowable fact of the matter about which side is right seems dubious. Such a fact of the matter would be ‘metaphysically arbitrary’.

In any case, I wasn’t convinced that the debates are totally symmetric. Bennett argues by induction from four cases where a ‘twin’ argument can be given against one of the arguments used by one side, but that’s a pretty weak inductive base. Moreover, one of the examples looks flawed. Bennett argues that the ‘causal exclusion’ or ‘overdetermination’ argument used by the nihilist against the believer in composite objects has a twin argument which works against the nihilist – where a believer would say that a ball broke the window, even though the simples arranged ballwise were causally sufficient for the breaking, the nihilist must accept many pluralities of simples, all of which are causally sufficient to break the window. It doesn’t matter exactly which plurality we settle on. But this doesn’t look like a twin for the causal exclusion problem; it looks like a twin for the problem of the many.

Consider the following case – two simples travelling together jointly break a window. Neither of the simples by itself would have been sufficient for the breaking. The believer, who says that the pair which the simples composed was the object which broke the window, seems vulnerable to the causal exclusion argument; the simples were jointly sufficient, so why postulate the ball as a cause? (I’m assuming the simples aren’t many-one identical to the ball.) But the nihilist seems vulnerable to no analogous argument. There’s only one plurality of simples sufficient for the breaking – both of them. Thus, no causal overdetermination. And the reason there’s no argument against the nihilist here is just that, as I’ve set the case up, the problem of the many can’t get a grip. Hence my suggestion that while the nihilist does face an analogue of the problem of the many, he faces no analogue of the causal exclusion argument.

Week 5 – Schroeder on negation

The paper we discussed this week is here and my (very short) handout is here.

Schroeder is offering more of a general structure for an expressivist account than a fully worked-out one, and one of the points he’s fairly vague on is what descriptive predicate should typically follow the ‘is for’ attitude. For the purposes of the paper, he adopts a proposal of Gibbard’s, which analyses disapproval (a technical term for the expressivist) in terms of being for blaming for; so the idea is that ‘Jon thinks murder is wrong’ should be rendered as ‘Jon is for blaming for murdering’.

(Note that we can’t just adopt the ‘is for’ proposal with no descriptive predicate beyond ‘is for the non-occurrence of’, because this collapses two readings we want to keep distinct: the non-occurrence of not-murdering is the same as the occurrence of murdering, while not blaming for not murdering is not the same as blaming for murdering.)
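The collapse can be set out schematically (the notation here is mine, not Schroeder’s), writing W(x) for ‘thinks x is wrong’ and m for murdering:

```latex
% Bare proposal: 'x is wrong' expresses being for the non-occurrence of x
W(x) \;\mapsto\; \mathrm{For}(\mathrm{non\mbox{-}occurrence}(x))
% Since the non-occurrence of not-murdering just is the occurrence of murdering:
W(\neg m) \;\mapsto\; \mathrm{For}(\mathrm{occurrence}(m))
% Gibbard's proposal keeps the readings distinct, since blaming does not
% commute with negation: blaming for not murdering is not blaming for murdering.
W(\neg m) \;\mapsto\; \mathrm{For}(\mathrm{blame}(\neg m)) \;\neq\; \mathrm{For}(\mathrm{blame}(m))
```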

Taken literally, it looks like there are counterexamples to the analysis in terms of blaming. There are surely cases where we think something is wrong, but are against blaming anyone for it, perhaps because we think that apportioning blame at all would be unhelpful. Similarly, the suggestion that we should use ‘avoiding’ falls foul of cases where we think something is wrong, but are not for avoiding it, because all of the alternatives are worse.

Of course, these observations rest on ordinary usage of ‘blame’ and ‘avoid’. If ‘blaming for’ is a technical term with a stipulative meaning, like ‘disapproval’ and ‘tolerance’ have traditionally been for the expressivist, then perhaps the problem can be nullified. So I’d suggest resurrecting the old Blackburnian terminology of ‘booing’ and ‘hooraying’, and saying that we have tacit knowledge of the meaning of these terms in virtue of our competence with moral discourse.

We can take either of these as primitive, and define the other in terms of it; for example, booing x is equivalent to hooraying not-x, and hooraying y is equivalent to booing not-y. The advantage of this is that it doesn’t seem to be vulnerable to the same kinds of intuitive counterexamples as any of the candidate descriptive predicates that Schroeder mentions. The disadvantage is that we then require two primitive notions in our expressivist semantics, rather than one (the being for relation).

Another thought we had was that the ‘being for’ proposal seems to lose some of the distinctive thought behind expressivism, that moral judgments consist in some attitude to the act whose morality is called into question. On Schroeder’s proposal given in terms of blaming for, thinking murdering is wrong doesn’t involve having some attitude to murdering; rather, it involves having some attitude to blaming for murdering. This allows us to ask for an explanation of why someone has their particular attitude to blaming for murdering. For Schroeder’s kind of expressivist, no explanation is possible; but for a moral realist, an explanation is easily available – it’s because murdering is wrong.

The worry, then, is that the demand on Schroeder’s expressivist for explanation of why someone has some particular attitude to blaming for x seems rather more pressing than the demand on the traditional expressivist to explain why they have some attitude to x itself. This can be thought of as a dilemma for the expressivists – either they have a working semantic theory without sufficient motivation, or they have a well-motivated theory which cannot explain logical validity for moral arguments. Schroeder, who is no expressivist himself, would presumably be happy with this dilemma.

One nice thing we noticed about the proposed analysis is that it disambiguates apparently distinct claims which the normal expressivist view runs together. Where a traditional expressivist would say ‘Jon strongly disapproves of murdering’, Schroeder’s expressivist can disambiguate this as either ‘Jon is strongly for blaming for murder’ or ‘Jon is for strongly blaming for murder’.

Week 4 – Eagle on ‘might’ counterfactuals

This week we discussed some unpublished material by Antony on ‘might’ counterfactuals. The handout is here, and the paper is here.

We thought a bit about cases in which ‘could’ and ‘might’ come apart. In the paper, Antony discussed sentences like

33b)  If we’d left the gate open, the dog could have got out; yet if we’d left the gate open, it isn’t the case that the dog might have got out.

The felicity of such sentences seems to show that at least some ‘might’ counterfactuals shouldn’t be analysed in terms of ‘could’, but instead should be given an epistemic reading. Antony isn’t averse to this idea – in fact, his final view is that ‘might’ is ambiguous in counterfactual contexts between the epistemic reading and the ability reading. However, this does invite the further question of what determines the appropriate reading for some given ‘might’ counterfactual.

From 33b we naturally conclude that though the dog has the ability to get out, it is so disposed as not to exercise this ability. The only way to interpret someone who expresses the conjunction as not contradicting themselves is to give the ‘might’ and the ‘could’ different readings, and the ‘could’ tends to snaffle the ability reading, leaving the epistemic reading for the ‘might’. 33a, on the other hand, is intuitively clashing:

33a) If we’d left the gate open, the dog might have got out; yet if we’d left the gate open, the dog couldn’t have got out.

An explanation for this would be that the ‘might’ in the first conjunct naturally takes an ability reading, which the second conjunct then contradicts. If this is right, then it looks like ‘the dog couldn’t have got out’ always takes an ability reading, while ‘the dog might have got out’ can take both the ability and epistemic readings.

This suggests the following difference between ‘might’ and ‘could’ in counterfactual contexts. When used as a predicate, as in the examples above, can/could always takes the ability reading. It only takes the epistemic reading when used as a sentence modifier, as in ‘it could be that the dog escaped’. May/might, on the other hand, can take either the epistemic reading or the ability reading when used as a predicate. Like can/could, may/might always takes an epistemic reading when used as a sentence modifier.

On a separate issue, I wondered how the use of ‘might’ outside of counterfactual contexts fits with the duality account of the relationship between ‘might’ and ‘would’. Assume Goldbach’s conjecture is necessarily true. On a Lewisian account, there are no worlds in which Goldbach’s conjecture is false, so any counterfactual of the form ‘If x were the case, Goldbach’s conjecture might be false’ comes out false. As Williamson has argued, it seems a plausible principle of counterfactual logic that if p would be the case whatever were the case, then necessarily p. Putting this together, the Lewisian account of ‘might’ in counterfactual contexts entails that ‘Goldbach’s conjecture might be false’ is necessarily false. This is a bad result; we want to say that lots of speakers speak truly when they assert ‘Goldbach’s conjecture might be false’, if their epistemic state is such as not to rule out its falsity. I’m sure this has been observed before; I’d be glad of any references.
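Set out schematically (this reconstruction is mine), with □→ and ◇→ for ‘would’ and ‘might’ counterfactuals and GC for Goldbach’s conjecture:

```latex
% Lewisian duality for 'might' counterfactuals:
A \mathbin{\Diamond\!\!\to} B \;:=\; \neg(A \mathbin{\Box\!\!\to} \neg B)
% If GC is necessarily true, no A-world is a (not-GC)-world, so for every A:
A \mathbin{\Box\!\!\to} \mathrm{GC} \quad\text{(vacuously)}, \qquad
\text{hence}\quad \neg(A \mathbin{\Diamond\!\!\to} \neg\mathrm{GC}).
% Williamson's principle: if (A would-counterfactually imply p) for every A,
% then necessarily p. Since GC is necessary at every world, 'GC might be
% false' comes out false at every world on the duality reading -- i.e.
% necessarily false, whatever the speaker's epistemic state.
```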

Week 3 – Weatherson on counterexamples

Handout here, original paper here.

In this paper Brian Weatherson argues that we can in principle make substantive discoveries in theoretical philosophy which correct mistakes in our pre-theoretic beliefs about some subject matter. The crux of the argument is that, according to the right (e.g. the Lewisian) theory about meaning, the referents of our theoretical terms are often stable over small variations in use. In some domain where there are few very natural candidate referents to which we might plausibly be interpreted as referring, even relatively systematic false beliefs can be tolerated before use is changed enough for reference to change. Thus it can be the case that the correct response to an intuitive counterexample is to reject certain kinds of intuitions in order to preserve overall theoretical unity and simplicity.

The example looked at in detail is the Gettier counterexamples to the JTB theory of knowledge. Weatherson isn’t committed to the JTB theory – he just thinks that it isn’t straightaway refuted by the Gettier examples. Maybe the theoretical benefits obtained from a simple and elegant epistemological theory outweigh the need to capture intuitions in certain types of case.

The part of the argument I found most puzzling was the part where Weatherson responds to the objection that the JTB theorist must mean something other than we do by ‘knows’. First he argues for the Lewisian theory of meaning – all well and good. But the Lewisian theory of meaning per se is quite compatible with the view that a tribe speaking a language in which the meaning of ‘knows’ was given by the JTB theory would be speaking a different language from ours. This would depend on the details of how use is balanced against simplicity. Indeed, this view seems intuitively plausible – surely such a wide difference in use would constitute a change in meaning?

In explaining away this intuition, Weatherson goes on to argue that the natural properties in the vicinity of our use of ‘knows’ are extremely scarce. But talk of the distribution of natural properties here presents a puzzle. What does it mean to say that there are just no reasonably natural properties in the vicinity of our disposition to use ‘knows’? It seems like to make sense of this sort of talk, we need some kind of measure over global-disposition-to-use space with which to compare distances between properties. But nobody has a clue how to explicate such a measure; until we do, talk about scarceness or abundance of natural properties in certain domains has to be taken as merely suggestive and metaphorical.

Luckily, the dialectic does not require talk of distribution of properties to be made precise. The resources needed to stave off the ‘meaning-change’ objection to the JTB theory are more straightforward. We require at the very least that there be no property which is a) at least as natural as the property of having a JTB and b) at least as well matched as the property of having a JTB to our use of the term ‘knows’. But we also need that there be no property which, while doing less well on one criterion than JTB, does so much better on the other criterion that it is the best referent for ‘knows’.

Weatherson’s argument that there is no such property goes via ‘the failure of the ‘analysis of knowledge’ merry-go-round to stop.’ I take it the thought is that despite our best efforts, we have failed to find any really good candidate analysis. Maybe this does lend support to the idea that there is no extremely simple and natural property which corresponds exactly to our use of the term ‘knows’. But nonetheless there are several properties, like the one picked out by a causal theory of knowledge, which seem to do better at capturing our intuitions about Gettier cases. They go wrong elsewhere; but nothing Weatherson says gives us any reason to think that these post-Gettier ramified theories of knowledge don’t do better overall than the JTB account.

Weatherson’s defence of the JTB theory would be in more trouble still if we could appeal to the Lewisian thought that conjoining natural properties leads to no loss of naturalness. Then theories which consist of the JTB account plus some other necessary conditions given in terms of very natural properties would turn out at least as natural as the JTB account. Such theories also tend to match usage better than the JTB theory (that was the point of introducing the extra necessary conditions, after all) so it looks like they should be strictly preferable. Then we’d have to argue that the reduced simplicity of the more complex theory is such a cost that it counteracts these considerations. We’re then a long way from the ‘failure of the merry-go-round to stop’ line of argument.

However, the closing section of the paper is relevant here. For reasons which seem to me good ones, Weatherson argues against the idea that naturalness is always conferred on a conjunctive property by its conjuncts. If this is right, then we can take the ‘merry-go-round’ idea in a different direction. All of the proposed additions to the JTB theory involve conjoining extra properties onto the property of being a JTB; if conjoining extra properties can make a property less natural (for example in cases where there are multiple ways of satisfying the conjunctive property), then maybe all of the proposed additions to the JTB theory do score significantly lower on naturalness, enough to offset their better match with use. However, it’s unclear if Weatherson would want to go this way, as he doubts that the failure of naturalness to distribute across conjunction generalizes from the JTB property to all the proposed analyses of knowledge.

In this connection, I wasn’t convinced by the thought that the failure of naturalness to be transferred to conjunctions from their conjuncts told in favour of the metaphysical thesis that naturalness is primitive. It seems quite compatible with the universals account that the properties of being F and being G are co-intensive with genuine universals, but the property of being-F-and-G is not co-intensive with a genuine universal. All we need to do is to reject conjunctive universals (or maybe just reject their genuineness). And this is something we might want to do in any case. A similar response is available in the case of the primitive resemblance view. There can easily be cases in which the F’s resemble one another, and the G’s resemble one another, but the F-and-G’s do not all bear primitive resemblance relations to one another. All we need is that there is no single resemblance relation which holds among all the F-and-G’s, only two different resemblance relations, one holding among all the F’s and the other holding among all the G’s. Given that the resemblance relation is primitive, I can’t see any objection to stipulating that it works this way.

Week 2 – Kment on counterfactuals

A really interesting paper this week – it can be found here, and the presentation is here.

Kment’s main proposal is that match of matters of particular fact should be relevant to closeness of two worlds for the purposes of evaluating counterfactuals if and only if the matters of fact have the same explanation in both worlds. Secondarily, he proposes that we should allow for laws to have exceptions, and hence that all worlds which share the same laws as ours should be closer to actuality than any world with different laws.

We quite liked the main proposal, but worried about the individuation of explanations it relies upon. What are the conditions for two events to have the same explanation? For example, consider the counterfactual ‘if I had tossed the coin five minutes earlier, it would still have come up heads’. This seems false, but perhaps Kment can account for this falsity by saying that the coin’s coming up heads in the various A-worlds would have a different explanation from its explanation in the actual world, because it would have been caused by a different event of tossing.

However, what about ‘if I had tossed the coin one nanosecond earlier, it would still have come up heads’? Here we were much more inclined to take the counterfactual as true. Perhaps this difference goes along with the intuition that the actual tossing and the one-nanosecond-earlier tossing count as the same event (or as counterparts according to some very natural counterpart relation), while the actual tossing and the five-minute-earlier tossing count as different events. But if this is the line Kment would want to take, we’d need to hear more about how it is to work.

Finessing the individuation criteria for explanations might also afford a solution to the problem case (25) which Kment mentions inconclusively. If the explanation for the lottery’s having the result it did does not include that phone A was used to make the call, but just includes that some phone of such-and-such qualitative character was used to make a call, then we would get the right result that even if phone B had been used, the result of the lottery would have been the same. This requires that the explanation of the lottery’s result should only include qualitative features of certain early-enough explanatory factors, rather than the whole fully-detailed causal story. That is, explanations should comprise roughly the minimal information required to determine their explanandum.

This solution involves dropping the transitivity of explanation which Kment explicitly assumes – because it is plausible that a call being made explains the outcome of the lottery, and that the use of phone A explains that a call was made. However, perhaps dropping transitivity of explanation is in any case desirable. Consider the well-known counterexample to transitivity of causation – the boulder’s rolling down the mountain is the cause of the hiker’s ducking, and the ducking is the cause of his survival, but the boulder’s rolling is not the cause of survival. The same counterexample seems to work against transitivity of explanation – the rolling explains the ducking, and the ducking explains survival, but the rolling does not explain survival.

Another issue we thought about was the degree to which a Humean could adopt the notion of laws as having exceptions. Clearly it’s incompatible with Lewis’ own theory of laws, according to which the laws are those true universal generalizations which provide the best balance of simplicity and strength, but perhaps (as Antony suggested) a Humean view which took laws to be more like habitual statements would work. Habituals tolerate exceptions, but they still explain their instances.

Maria had a potential objection to this approach for the Humean (and to any view according to which there are restrictions on how many exceptions are possible before the laws have to be different) – suppose the number of exceptions in a world is right on the borderline for its having some particular laws. Then the extra small exception needed to accommodate some antecedent would involve consideration of an A-world with too many exceptions to have the same laws as the original world. Then the A-world which, intuitively, is the right one for evaluating the counterfactual would not come out as closest according to Kment’s criteria. So it looks like the view might only in fact be compatible with strong ‘immanent’ views of laws where any arbitrary number of exceptions are possible while the laws remain the same.

One final thought: it would be possible to hold that exceptions are possible to all special-scientific laws, but not to the fundamental laws, if such there be. This seems to fit well with usage: we talk of the ‘laws’ of statistical mechanics, even though they only hold with high probability, but we are much less willing to admit that the laws of fundamental physics might have exceptions. Someone who took this view of laws could carry over everything Kment says about ordinary counterfactuals, though might have to say something a little more counterintuitive about counterfactuals concerning fundamental physics (perhaps in a deterministic world we would have to count as true ‘if this electron had been over here and not over there, the matter distribution at the big bang would have been different’). However, this consequence might be ameliorated by the indeterminism of fundamental physics.