This week I've been thinking about JC Beall's fun paper 'True, False and Paranormal'. Essentially, the idea is to model our truth-talk with a disquotational predicate 'T', and have a language which can exhaustively characterize all its sentences as 'true', 'false' or 'other', without thereby admitting true contradictions. Sentences of the model can take any one of five values, 0, .25, .5, .75 and 1, of which .75 and 1 are designated. A new term 'pi' is introduced (sorry I don't have a pi-symbol here!) to mean 'paranormal' (or 'other'), and pi-A takes the value .75 when v(A) = .25, .5 or .75, and takes the value 0 otherwise.
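For concreteness, here's the model in a few lines of Python. The clauses for 'T' and 'pi' are as just described; the negation clause v(¬A) = 1 - v(A) is my assumption (it fits the toggling behaviour mentioned below), so treat this as a sketch rather than Beall's official semantics:

```python
# Sketch of the five-valued model; negation as 1 - v is assumed here.
VALUES = [0, 0.25, 0.5, 0.75, 1]
DESIGNATED = {0.75, 1}

def neg(v):
    # Toggles 1 with 0 and .75 with .25; .5 is a fixed point.
    return 1 - v

def T(v):
    # Disquotational truth: v(T[A]) = v(A).
    return v

def pi(v):
    # 'Paranormal': .75 when v(A) is .25, .5 or .75; 0 otherwise.
    return 0.75 if v in (0.25, 0.5, 0.75) else 0
```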
It seems that the view requires that there is no predicate in the model language like D ('D' for 'designated'), where:
v(D[A]) = 1 or .75 when v(A) = 1 or .75
and v(D[A]) = 0 or .25 when v(A) = 0, .25 or .5
Otherwise the sentence
A*: ¬D[A*]
presents a problem. (It is designated iff it isn't.)
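To see the clash concretely, continuing the Python sketch above: the constraints only pin v(D[A]) down to one of two values, so I'll pick one admissible representative (the other admissible choices fail in exactly the same way):

```python
def D(v):
    # One admissible designation predicate: a designated value iff v is
    # designated. (Returning .75/.25 instead of 1/0 changes nothing.)
    return 1 if v in DESIGNATED else 0

# A*: ~D[A*]. A coherent assignment would need v(A*) = v(~D[A*]).
print([v for v in VALUES if v == neg(D(v))])  # [] -- no value works
```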
But it seems to me that such a predicate is needed in order for the model language to be capable of expressing claims about its own semantic machinery (that is, capable of expressing the designation stuff that we can talk about in the metalanguage). Nothing else will do; in particular, any predicate D' such that the value of D'[A] is .5 when v(A) = .5 does not express designation; any designation claim must take value .25 or less when v(A) = .5.
Why? Because .5 is not a designated value. Of course, that only means it's true-in-real-life that .5 is not designated, which you might want to say does not imply that the value of D[A] should be an undesignated value. Truth-in-real-life (as JC has helpfully stressed to me in conversation) is not supposed to be modelled by designation but by truth-in-the-model (i.e. the behaviour of 'true' in the model).
But I think in order for our D to express designation we need D[A] to be false-in-the-model when v(A) = .5, which (unless I'm missing something) means we want F[D[A]] to be designated when v(A) = .5, i.e. we want T[¬D[A]] to be designated, which can only happen if ¬D[A] is designated, which in turn requires that v(D[A]) be at most .25 (because in this model negation toggles designated values with values of at most .25).
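In numbers, continuing the sketch (and reading F[A] as T[¬A], as in the 'i.e.' above):

```python
# When v(A) = .5 we want F[D[A]], i.e. T[~D[A]], to be designated.
# Which candidate values d = v(D[A]) deliver that?
print([d for d in VALUES if T(neg(d)) in DESIGNATED])  # [0, 0.25]
```

So the only candidates for v(D[A]) are 0 and .25, as claimed.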
If Beall is saying that no model is capable of expressing these things, he seems to be forced to say either that no model is adequate as a model of English, which can talk about its own semantic machinery, or that, like the models, English cannot express claims about its own semantic machinery.
If one says the first thing, then one has to admit that these models can't help us understand the Liar - after all, the Liar arises because English can express claims about its own semantic machinery, and if the models don't model this feature of English, they are irrelevant to the Liar. (This point relates to Beall's claim to have preserved exhaustive characterization: the exhaustive characterization we wanted was an exhaustive characterization of all the sentences in the model in terms of their semantic values, but we didn't get that.)
And familiarly, if one says the second thing, one places an implausible limitation on the expressive powers of English.
Friday, April 21, 2006
1 comment:
Carrie, a few thoughts. You say:
If one says [that no model is adequate as a model of English, which can talk about its own semantic machinery], then one has to admit that these models can't help us understand the Liar - after all, the Liar arises because English can express claims about its own semantic machinery, and if the models don't model this feature of English, they are irrelevant to the Liar.
This doesn't seem right to me. Let's suppose for the sake of argument that no model can capture that aspect of the English language which is its ability to talk about all of its own semantic machinery. Does it follow from this that no model is an adequate model of English, or that no model can help us understand the Liar? Certainly not.
Let me take the second question first. The simple Liar sentence "this sentence is false" can be modeled in Beall's paranormal language and can be characterized in the language. It is a paranormal sentence, and it is true to say that it is paranormal. So even if this language cannot talk about all of its own semantic machinery, it can still help us understand the Liar.
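To make that concrete, here is my reconstruction in the five-valued model sketched in the post (same assumed negation clause; not anything Beall himself writes):

```python
VALUES = [0, 0.25, 0.5, 0.75, 1]

def neg(v): return 1 - v   # assumed negation
def T(v): return v         # disquotational truth
def pi(v): return 0.75 if v in (0.25, 0.5, 0.75) else 0  # 'paranormal'

# The Liar L: 'this sentence is false', i.e. F[L] read as T[~L],
# so a stable assignment needs v(L) = 1 - v(L).
print([v for v in VALUES if v == neg(T(v))])  # [0.5]
print(pi(0.5))  # 0.75 -- designated: 'L is paranormal' comes out true
```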
The first question is more interesting and gets to the heart of your concern. Why think that no model is an adequate model of English? Because, as we supposed, no model can talk about all of its own semantic machinery. But surely English can! Or so the objection goes. Question: is it true that English can talk about all of its own semantic machinery?
There are two routes one might take to an answer. One route is the brute intuition that it can. The second route is to offer a particular model (paranormal, or otherwise) that seems to explain all of the otherwise pressing concerns about the semantics of our real language, but cannot talk about all of its own semantic machinery. On the second route we have a good reason to think that the same holds true in our real language, so we might just reject the brute intuition.
Nothing seems to bar the second route, and it is probably the sort of thing Beall should say. Furthermore, one can take the second route without being forced to concede that formal models fail to be explanatory.