Sunday, November 18, 2007

Arché Basic Knowledge Workshop

Yesterday I attended the first day of the 1st Arché Basic Knowledge Workshop. (Unfortunately I’m missing the second day.) There were papers by Ernie Sosa, Martin Smith and José Zalabardo.

Sosa presented some work from his new book project. Among other things, he suggested what he called a ‘transcendental argument’, intended to bolster the (anti-sceptical) thought that we are entitled to the position that our faculties are not completely unreliable. The main thrust of this was that the alternatives – rejection or even suspension – are cognitively unstable. But I'm not convinced; even granting that they are, and that this makes them epistemically unacceptable, in order for this to lend support to the remaining option it must be assumed that there is always some epistemically acceptable option available to us. If we have no reason to assume that, the epistemic unacceptability of two of our three options does not entail the epistemic acceptability of the third.

Smith presented an intriguing paper in which he suggested that we rethink our notion of justification so that it is not linked to ideas of probability-raising or probabilification. Instead, we should think that justification is a matter of what he called ‘normic support’. A normically supports B just in case the most normal worlds where A and B hold are more normal than the most normal worlds where A holds and B does not. This suggestion raises a number of very interesting issues, but what caught my attention were Smith’s further claims about the interactions between randomness and normality. His notion of normality has it that any outcome of a genuinely random process is as normal as any other. One worry this raises is that if, as some results in quantum physics suggest, most or all of what happens in the physical world involves quantum-level random processes which can have macro-level effects, it will be difficult to order worlds for normality so as to generate the kind of results Smith has in mind. But it would be interesting to consider whether notions of normality without this link to randomness might serve in this context, even if they may not get quite the results Smith is after (such as the result that one is not justified in believing one's lottery ticket will lose).
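(To fix ideas, here is a minimal toy model of the definition as I've stated it. The numeric normality ranks, the example worlds, and the `normic_support` predicate are my own illustrative inventions, not Smith's formalism; the second example is meant to show why, on his view of randomness, losing the lottery isn't normically supported.)

```python
# Toy model of normic support. Each world gets a normality rank:
# 0 = maximally normal; higher numbers = less normal.
# A normically supports B iff the most normal A-and-B worlds are strictly
# more normal than the most normal A-and-not-B worlds.

def normic_support(worlds, a, b):
    """worlds: list of (rank, facts) pairs; a, b: predicates on facts."""
    ab_ranks = [rank for rank, w in worlds if a(w) and b(w)]
    a_not_b_ranks = [rank for rank, w in worlds if a(w) and not b(w)]
    if not ab_ranks:
        return False  # no A-and-B worlds at all: no support
    if not a_not_b_ranks:
        return True   # A-and-B worlds exist and there are no A-and-not-B worlds
    return min(ab_ranks) < min(a_not_b_ranks)

# 'The gauge reads full' normically supports 'the tank is full': worlds where
# the gauge reads full but the tank isn't (broken gauge) are less normal.
worlds = [
    (0, {"gauge_full": True,  "tank_full": True}),   # normal case
    (2, {"gauge_full": True,  "tank_full": False}),  # abnormal: faulty gauge
    (0, {"gauge_full": False, "tank_full": False}),
]
print(normic_support(worlds,
                     lambda w: w["gauge_full"],
                     lambda w: w["tank_full"]))       # True

# A genuinely random lottery: winning worlds and losing worlds are equally
# normal, so 'I bought a ticket' does not normically support 'my ticket loses'.
lottery = [
    (0, {"ticket": True, "loses": True}),
    (0, {"ticket": True, "loses": False}),
]
print(normic_support(lottery,
                     lambda w: w["ticket"],
                     lambda w: w["loses"]))           # False
```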

6 comments:

Kenny said...

Do you know if there's a written version of the Smith talk? That sounds like an idea I've been having recently. In a lot of discussions in our formal epistemology reading group here at Berkeley, people have gotten some funny results using (among others) a premise that evidence E justifies proposition P only if E raises the probability of P. But I don't see why this has to be the case, especially since a Bayesian is going to say that people sometimes have very high priors for things in the complete absence of evidence. (This is bound to happen sometimes just because of the infinities involved.) So they might get evidence that slightly lowers their probability in the proposition, but nonetheless justifies them in believing it, where they weren't justified before.

Of course, this requires a notion of belief that isn't completely reducible to degree of belief.
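(Kenny's scenario can be put numerically. The figures below are made up purely for illustration: a high prior on P, and evidence E that is slightly likelier given not-P, so that conditionalising lowers the probability of P while the posterior remains very high.)

```python
# Illustrative numbers only: a Bayesian agent with a high prior on P
# receives evidence E that slightly *lowers* the probability of P,
# yet the posterior remains very high.
prior_p = 0.99            # high prior, held in the absence of evidence
p_e_given_p = 0.90        # P(E | P)
p_e_given_not_p = 0.95    # E is a bit likelier if P is false

# Total probability of E, then Bayes' theorem.
p_e = p_e_given_p * prior_p + p_e_given_not_p * (1 - prior_p)
posterior_p = p_e_given_p * prior_p / p_e

print(posterior_p < prior_p)   # True: E lowers the probability of P
print(posterior_p > 0.95)      # True: the posterior is still very high
```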

Carrie Jenkins said...

Hi Kenny,

I don't know of a written version I'm afraid. I think it is work in progress, but you could always ask Martin himself!

I don't see why you say that the lowering of something's probability by acquiring evidence which justifies it (in the way you envisage) requires a notion of belief that isn't reducible to degree of belief.

Kenny said...

Perhaps it doesn't require it, but it might seem more natural that way. As I've described it, the agent can be perfectly reasonable in assigning a high degree of belief at first despite not being justified in believing, and then later assign a lower degree of belief while being justified in believing. If it's plausible that the agent has these reasonable assignments of degree of belief, while also believing when and only when she is justified, then she must go from not believing when her degree of belief is high, to believing when her degree of belief is lower. So belief can't just be a matter of having a high enough degree of belief.

Maybe it just comes down to having a high enough degree of belief for the relevant context, and the context has changed, but that's a more complicated position. All I want to say here is just that belief doesn't come down to a threshold of degree of belief (whether that threshold is at 1 or at some lower number).

Carrie Jenkins said...

What's wrong with saying the agent starts out with an unjustified belief then moves to a justified belief although her credence is lowered by the justifying evidence?

Kenny said...

The problem with that is just that it means someone with a perfectly reasonable prior has an unjustified belief. Maybe that's right, but it sounds a little odd.

Anonymous said...

Schrödinger said that the reason our bodies, or any material bodies, hold together is that the aggregate of quantum fluctuations in these bodies has a statistical average in favor of those elements that hold it together – versus the elements that favor its dissolution. He said that the balance of the two may be very close. In other words, we hold together only because the statistical probability favors that, but just barely.

If norms are an extension of our bodies holding together, then in this case you could express norms – and one place being more normal than another – as some sort of statistical phenomenon.