Saturday, April 28, 2007

Epistemic Conservatism

Daniel and I have been talking a lot about conservatism lately (Daniel's been writing a book chapter on it), and we're considering writing a joint paper on the topic. Here's one of the things we've noticed that we'd like to write about.

A few importantly different kinds of epistemic conservatism seem to be floating around in the literature, neither remarked upon nor clearly separated from one another, and it is far from obvious how they are related.

Some versions are about how to update your beliefs (e.g. Quineans, Bayesians); others are about how to evaluate beliefs at a time. Let's call these 'update-evaluating conservatism' and 'state-evaluating conservatism' respectively. Within the latter category, some versions say that what matters is your belief state at a time earlier than the time being evaluated (e.g. Sklar), while others say that what matters is your belief state at that very time (e.g. Chisholm). Let's call these 'diachronic state-evaluating' and 'synchronic state-evaluating' conservatism respectively. Here are examples of each category:

Update-evaluating (always diachronic):
The best updating strategy involves minimal change to your belief and credence structure.

Synchronic and state-evaluating:
The fact that you believe p at t1 gives a positive boost to the epistemic valuation of your belief in p at t1.

Diachronic and state-evaluating:
The fact that you believe p at t1 gives a positive boost to the epistemic valuation of your belief in p at t2.

Now, the interesting question: does believing one of these principles commit you to any or all of the others? In this paper by McGrath – one of the few I know of that discusses this stuff – it is assumed that the core of conservatism is of the update-evaluating kind, but that this is equivalent in truth-value to a corresponding synchronic state-evaluating kind of conservatism.

But here's one reason to doubt things are that simple. Suppose I have a belief at t1 that is so epistemically bad that there is nothing to be said in its favour. Suppose I retain that belief at t2, with no new evidence, purely through inertia. One might wish to approve of the update qua update-evaluating conservative, but not wish to proffer any corresponding (diachronic or synchronic) state-evaluating approval of the belief at t2 – which, after all, is still held for really bad reasons.

2 comments:

bloggin the Question said...

Any system of rationality must entitle you to make inferences from what you believe. For example: I believe p, I discover that p entails q, I now believe q with justification. Since p entails p in all cases, this axiom of rationality just leads to conservatism. If you are not entitled to make inferences from your beliefs, then there can be no justification for believing anything.
In the example you gave, you have an unjustified belief that seems to become slightly justified just because you believe it. Is that right? It is hard to imagine an actual case. Here is one: I believe that I am looking at a zebra. This entails that I am not looking at a painted mule, so I tacitly believe that I am not looking at a painted mule. But I have no justification for this belief; my evidence is equally consistent with my looking at a painted mule. Do any of the three forms of conservatism mean that I am in fact justified in believing I am not looking at a painted mule?

Let's say the next day someone asks whether I believe that the zebra was not a painted mule. Should I respond yes on conservative grounds? I take it you are saying that I should only say yes qua update-evaluating conservative, but should not "approve" of the belief diachronically or synchronically. That seems just right. In actual cases one would continue tacitly believing that the zebra was not a painted mule until one actually evaluated the belief, and only then discover that there is no reason to hold it, presently or in the future. Still, update-evaluating conservatism is just enough to stop you rejecting the belief without good reason.

Clayton Littlejohn said...

BTQ,

You wrote:
Any system of rationality must entitle you to make inferences from what you believe. For example: I believe p, I discover that p entails q, I now believe q with justification. Since p entails p in all cases, this axiom of rationality just leads to conservatism.

It can't be that simple. Suppose you're crazy to believe p, something you'll only realize once you appreciate the connection between p and q. If the justification you've just offered for conservatism were right, could modus tollens _ever_ lead you in the right direction?