Thursday, April 27, 2006

Against 'Against Vague Existence'

In 'Against Vague Existence', Ted Sider argues that our quantifiers cannot be vague, because it is impossible to characterize semantic vagueness in our quantifiers in the usual way; that is, in terms of multiple admissible precisifications. Sider considers as a test case whether it could be indeterminate, due to vagueness in the existential quantifier, whether the following was true:

(E): Ex (x is composed of the F and the G).

Sider claims that the ‘familiar model’ for spelling out how such indeterminacy comes about would apply to this case as follows:

(P1): ‘E’ has at least two precisifications, call them E1 and E2. There is an object, x, that is in E1’s domain but not in E2’s domain, and which is composed of the F and the G. Thus, (E) is neither definitely true nor definitely false.

But, as Sider points out, ‘the defender of vague existence thinks that it is not definitely true that there is something composed of the F and the G ... She will therefore not make this speech’ (p. 139).

Sider proceeds to offer three options to the defender of vague existence: rejecting the need to non-vaguely describe the precisifications, using vague quantifiers to non-vaguely describe them, and finding non-quantificational non-vague language to describe them. None of these is my preferred way of resisting Sider’s argument. Nor do I wish to resist (here) his two presuppositions: that the indeterminacy of (E) would have to be semantic (as opposed to ontic), and that this means we need to explain it in terms of multiple admissible precisifications of the existential quantifier.

I propose instead that we investigate how much can be achieved by paying careful attention to scope when describing the relevant precisifications. In place of (P1) above, perhaps we can appeal to:

(P2): ‘E’ has at least two precisifications. On the first precisification, there is an object, x, which is composed of the F and the G. But on the second precisification, there is no such object. Thus, (E) is neither definitely true nor definitely false.

Note that, now, the existential quantifiers appearing in the account of why (E) is neither definitely true nor definitely false occur within the scope of the operators ‘on the first precisification’ and ‘on the second precisification’. Hence there is no need for any metalanguage commitment to an object which is composed of the F and the G.
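To make the contrast in scope explicit, here is one way of schematising the two accounts (this is just my gloss, using the post's 'Ex' notation, with 'On E1:' and 'On E2:' abbreviating 'on the first precisification' and 'on the second precisification'):

(P1)-style, wide scope: Ex (x is in E1's domain & x is not in E2's domain & x is composed of the F and the G).

(P2)-style, narrow scope: On E1: Ex (x is composed of the F and the G); and On E2: ¬Ex (x is composed of the F and the G).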

(P2) seems to do what Sider requires: it talks about two precisifications for ‘E’, and explains what it is about these two precisifications that results in the indeterminacy of (E). So what’s wrong with (P2)?

Wednesday, April 26, 2006

Field Motivates Concept Grounding

In his paper 'Recent Debates About the A Priori', Hartry Field seems to me to be spot on in the following passage (pp. 13-14 of the online paper):

[T]he key issue is ... : why should the fact, if it is one, that certain beliefs or inferences are integral to the meaning of a concept show that those principles are correct? Why should the fact, if it is one, that abandoning those beliefs or inferences would require a change in meaning show that we shouldn't abandon those beliefs or inferences? Maybe the meaning we've attached to these terms is a bad one that is irredeemably bound up with error, and truth can only be achieved by abandoning those meanings in favor of different ones ...

However, I don't agree with Field that this point is a step on the way towards his conclusion, namely that to claim we have an entitlement, at least in the case of 'basic' beliefs or logical rules, is merely to express an attitude of approval toward those beliefs or rules.

Rather, what Field's questions rightly draw attention to is the need for some account of why we should trust the meanings we have attached to our words (or, as I would prefer to say, why we should trust the concepts we express by them) to serve as epistemic guides to the way the world is. That is, we need a story about concept grounding to explain why these meanings or concepts are rightly taken as encoding information about the world which we can recover through introspection.

Friday, April 21, 2006

Beall and the Paranormal

This week I've been thinking about JC Beall's fun paper 'True, False and Paranormal'. Essentially, the idea is to model our truth-talk with a disquotational predicate 'T', and have a language which can exhaustively characterize all its sentences as 'true', 'false' or 'other', without thereby admitting true contradictions. Sentences of the model can take any one of five values, 0, .25, .5, .75 and 1, of which .75 and 1 are designated. A new term 'π' is introduced to mean 'paranormal' (or 'other'), and πA takes the value .75 when v(A) = .25, .5 or .75, and the value 0 otherwise.

It seems that the view requires that there is no predicate in the model language like D ('D' for 'designated'), where:
v(D[A]) = 1 or .75 when v(A) = 1 or .75
and v(D[A]) = 0 or .25 when v(A) = 0, .25 or .5

Otherwise the sentence
A*: ¬D[A*]
presents a problem. (It is designated iff it isn't.)
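One way to see the problem concretely: here is a minimal sketch in Python, assuming the usual negation clause v(¬A) = 1 - v(A); the function d below is just one candidate way of filling in the constraints on D stated above, not anything taken from Beall's paper.

VALUES = [0, 0.25, 0.5, 0.75, 1]
DESIGNATED = {0.75, 1}

def neg(v):
    # the assumed clause: v(not-A) = 1 - v(A)
    return 1 - v

def d(v):
    # one candidate clause for D: designated values go to 1,
    # undesignated values go to 0 (any choice within the
    # constraints stated above runs into the same trouble)
    return 1 if v in DESIGNATED else 0

# A* says not-D[A*], so a consistent valuation needs v(A*) = neg(d(v(A*)))
print([v for v in VALUES if neg(d(v)) == v])  # prints []: no value works

Whichever admissible values we pick for D, a designated input gets an undesignated output and vice versa, so A* can never receive a stable value: it is designated iff it isn't.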

But it seems to me that such a predicate is needed in order for the model language to be capable of expressing claims about its own semantic machinery (that is, capable of expressing the designation stuff that we can talk about in the metalanguage). Nothing else will do; in particular, any predicate D' such that the value of D'[A] is .5 when v(A) = .5 does not express designation; any designation claim must take value .25 or less when v(A) = .5.

Why? Because .5 is not a designated value. Of course, that only means it's true-in-real-life that .5 is not designated, which you might want to say does not imply that the value of D[A] should be an undesignated value. Truth-in-real-life (as JC has helpfully stressed to me in conversation) is not supposed to be modelled by designation but by truth-in-the-model (i.e. the behaviour of 'true' in the model).

But I think that in order for our D to express designation we need D[A] to be false-in-the-model when v(A) = .5. Unless I'm missing something, that means we want F[D[A]] to be designated when v(A) = .5, i.e. we want T[¬D[A]] to be designated, which can only happen if ¬D[A] is designated, which in turn requires that v(D[A]) be at most .25 (because in this model negation toggles designated values with values of at most .25).
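Spelling that chain out in the v-notation (assuming, as seems standard for this kind of model, that T is transparent, so v(T[A]) = v(A), and that v(¬A) = 1 - v(A)):

Suppose v(A) = .5, and suppose we want D[A] to be false-in-the-model, i.e. we want v(T[¬D[A]]) to be designated, i.e. at least .75.
But v(T[¬D[A]]) = v(¬D[A]) = 1 - v(D[A]).
So 1 - v(D[A]) must be at least .75, i.e. v(D[A]) can be at most .25.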

If Beall is saying that no model is capable of expressing these things, he seems to be forced to say either that no model is adequate as a model of English, which can talk about its own semantic machinery, or that, like the models, English cannot express claims about its own semantic machinery.

If one says the first thing, then one has to admit that these models can't help us understand the Liar - after all, the Liar arises because English can express claims about its own semantic machinery, and if the models don't model this feature of English, they are irrelevant to the Liar. (This point relates to Beall's claim to have preserved exhaustive characterization: the exhaustive characterization we wanted was an exhaustive characterization of all the sentences in the model in terms of their semantic values, but we didn't get that.)

And familiarly, if one says the second thing, one places an implausible limitation on the expressive powers of English.

Friday, April 14, 2006

UConn

I’ve just got back from the University of Connecticut, where I gave my paper on Modal Knowledge at the Philosophy Department Colloquium and got lots of helpful feedback. The UConn grad students have just started a blog, What Is It Like To Be a Blog?

I also attended a conference on Conditionals. On Saturday, Dorothy Edgington told us what she thinks about subjunctive conditionals, namely that they do not have truth values but rather express the speaker’s belief that the conditional probability of the consequent on the antecedent is high (i.e., basically, they function pretty much the way she thinks indicatives do, but in a different tense).

I was worried that there seem to be cases where the consequent is unlikely to be true given that the antecedent is, yet still would be true if the antecedent was. The existence of such cases suggests that subjunctives and conditional probabilities are not correlated in the way Edgington claims. We discussed the following example (one suggested by Edgington when I raised my worry in discussion time). Suppose you decide at the last minute not to get on a plane which is very unlikely to crash, and the plane in fact crashes, killing all its passengers. It’s tempting to think that the probability of your dying conditional on having got on the plane is low, because the plane’s chances of crashing were low, yet clearly if you had got on the plane you would have died.

Edgington replied that we simply have to find the right conditional probability, in much the same way that (say) someone who likes closest-world semantics for these conditionals has to find the right similarity relation. In the above case, she said, the right conditional probability is that of your dying given that you got on the plane and it crashed. To save the proposal, you just need to find the right pieces of further information about the world to take into account, besides what is mentioned in the antecedent. This information will make the conditional probability of the consequent high.
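To put some made-up numbers on the reply (purely illustrative): suppose the plane's chance of crashing was 1 in 10,000, and that you would have died if and only if you had boarded and it had crashed. Then the probability of your dying given only that you boarded is about 1/10,000, which is low; but the probability of your dying given that you boarded and the plane crashed is 1. The reply is that it is the second of these conditional probabilities that the subjunctive answers to.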

This might be thought to be a bit ad hoc, given that the crashing is not mentioned in the antecedent of the conditional we’re actually interested in. But regardless of that, I think it is a problem that the response won’t always work if there is indeterminacy in the world. For in that case, there will – or at least, could – be some unlikely events that just happen, and are not rendered any more likely however much further information about the world we take into account.

Suppose C is an event that would have happened, without being determined, if A had happened, although it was very unlikely to do so. Then the subjunctive:
If A had happened then C would have happened
looks true, although the conditional probability of C on A is low. And in this case there is no further information about the world that we can take into account which will make the conditional probability of C high – that is, there is nothing which can play the role played by the fact that the plane crashed in Edgington’s response to the first example.

So Edgington’s account of counterfactuals seems at risk of giving the wrong results unless we assume there is no indeterminacy. And this seems to be a distinctive problem for the account, not analogous to any problem faced by closest-world approaches.

Tuesday, April 04, 2006

Time Travel and Backwards Causation

It seems to be quite widely assumed (I am informed, by sources who read more about this sort of thing than I do) that backwards time travel requires backwards causation. It wouldn't suffice for time travel if in 2015 I climbed into a time machine and someone looking and acting exactly as I do in 2015 came into existence for no apparent reason in 1926. Rather, that person's turning up in 1926 would have to be caused by my actions in 2015.

I agree with the insufficiency claim. But I'm not sure whether the causal requirement is motivated by it. (I'm also hung up on whether it might be a conceptual truth that causation happens forwards, not backwards, but let's not worry about that.) Wouldn't it be enough if there were the right kind of non-causal explanation of this person's turning up in 1926, in terms of my actions in 2015? If not, why not?

(I am concerned that I will be trussed up in a sack and sent back to Cambridge for posting on this, but I'm going to do it anyway ...)