Here's something I was wondering about during Tim Williamson's talk at the Arché Vagueness Workshop yesterday.
Suppose you want to keep bivalence, and you also think that the only way for something to be an aspect of a predicate's meaning is for our use of the predicate to determine that it is (both Williamsonian thoughts, I gather). But you also think it is at least prima facie implausible that usage determines an exact cut-off point for all the vague predicates (a thought pressed against Williamson by McGee and McLaughlin).
We seem to have at least two options.
1: Accept the implausible-sounding claim (with Williamson), or
2: Argue that many of the things we think are vague predicates ('red', 'bald', etc.) are in fact not predicates at all. By bivalence, in order for them to be predicates we would need there to be cut-off points for their application. But it is implausible that our use of these terms determines such cut-off points. And, by hypothesis, nothing other than usage could do this sort of meaning-constituting work.
It's initially implausible, sure, that words like 'red', 'bald', and so on are not predicates. But my question is: by what sorts of methodological considerations do we weigh this implausibility against that of the claim that usage determines cut-off points for all such terms?
(Incidentally, Williamson mentioned that he discusses Option 2 in his book on vagueness, which I haven't had a chance to look at yet. He obviously has reasons for preferring Option 1, which I will be interested to read.)