1. Bias defined. A bad thermometer is one that often gets the temperature wrong. In one case, the errors are random: sometimes too high (say, it reads 74 when the temp is 72), sometimes too low. In another, the errors are systematic—it tends, say, to read too high. In the second case, the thermometer is not just unreliable; it is biased. In this book Thomas Kelly defends a definition of bias that fits this case:
Bias is the systematic deviation from a norm, or standard of correctness.
For the thermometer, the norm is accuracy: getting the temperature right. Other norms abound, and so do other kinds of bias. Judges, for example, are subject to the norm proportion the punishment to the crime. A judge, therefore, whose sentencing is more lenient in the afternoon than in the morning, or while full than while hungry, is biased.
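Kelly doesn't put it this way, but the definition has a neat statistical gloss: bias is error with a nonzero mean. A toy simulation (mine, with made-up numbers) of the two thermometers:

```python
import random

random.seed(0)
true_temp = 72.0

# Thermometer A: unreliable but unbiased -- its errors are random,
# centered on zero, so it deviates from accuracy, but not systematically.
readings_a = [true_temp + random.gauss(0, 2) for _ in range(10_000)]

# Thermometer B: unreliable AND biased -- its errors are systematic,
# tending to read too high.
readings_b = [true_temp + random.gauss(3, 2) for _ in range(10_000)]

def mean_error(readings):
    return sum(r - true_temp for r in readings) / len(readings)

print(f"A: mean error = {mean_error(readings_a):+.2f}")  # close to 0
print(f"B: mean error = {mean_error(readings_b):+.2f}")  # close to +3
```

Both thermometers are often wrong; but only B's errors deviate systematically from the norm of accuracy, so only B is biased in Kelly's sense.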
While some philosophers earn fame and glory for defending crazy and obviously false ideas, this theory of bias seems right at first glance: a more precise articulation of what everyone already hazily believes. Kelly agrees, but warns that it has “far-reaching and radical implications.”
2. The limits of charity. You’re reasonable and open-minded, of course. But also opinionated: your political beliefs fall somewhere on the “Left-Right” spectrum.
Mine do too; let us suppose they fall somewhat to the Right of yours.
While you disagree with me, and so (necessarily) think I am wrong, you don’t think I’m stupid, idiotic, or even irrational. Reasonable people can disagree!
You may also want to think that I am unbiased. It would be mean to think otherwise, right? But, Kelly argues, to be that open-minded would be irrational.
From your perspective, your view is true. Therefore, from your perspective, my political beliefs do not just deviate from the truth; they deviate in a systematic direction, toward the Right. So you must think that I systematically deviate from the norm believe the truth; you must think I am biased. And I have to think the same of you.
So, while reasonable people can disagree, they cannot always regard each other as unbiased.
3. This isn’t that bad, though. While you may have to regard me as biased, because (you have to think) I violate the norm believe the truth, in another respect you can still think me unbiased: namely, with respect to the norm believe in accord with your evidence. My beliefs might be too far to the Right only because I have misleading evidence that favors right-ish opinions. I ended up with biased beliefs because I was unbiased in my assessment of my (unbeknownst to me) biased evidence.
4. When bias is better. Suppose you’ll have to rely on someone else’s judgments on some topic. In the ideal, you want someone who is reliable: they usually get things right. If that’s not on offer, which is second-best—someone who is unreliable and biased, or someone who is unreliable and unbiased? Since bias is bad, you might prefer the second choice; but only if your diet of examples is limited. Kelly describes an unreliable and biased basketball referee, who
might either be biased in favor of overly physical play (plays that should have drawn foul calls do not), or alternatively, biased against physical play (plays that shouldn’t have drawn foul calls do).
Players might prefer either of these biases to a referee who calls fouls unreliably but without bias, because the biased ref
has a sort of consistency over time, and this allows for the formation of rational expectations on the part of both players and coaches, who can adjust their playing styles and strategies accordingly.
Kelly doesn’t put the point this way, but I would: call fouls correctly is not the only norm of refereeing. Another is be consistent in your foul-calls (treat like cases alike). Meeting the first norm, one automatically meets the second. But an unreliable ref (who does not meet the first) can only meet the second norm by being biased. The unreliable and unbiased ref meets neither norm. That’s why, while the calls of the biased ref are bad in a way, they are still better than the calls of the unbiased but unreliable ref.
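A toy model (again mine, with made-up error rates) makes the point about rational expectations concrete: the biased ref's calls are predictable, while the merely noisy ref's are not.

```python
import random

random.seed(1)

def biased_ref(is_foul):
    # Systematically favors physical play: misses 90% of real fouls.
    return is_foul and random.random() < 0.10

def noisy_ref(is_foul):
    # Unbiased but unreliable: errs in either direction at random.
    return (not is_foul) if random.random() < 0.45 else is_foul

plays = [True] * 10_000  # plays that should draw a foul call

p_biased = sum(biased_ref(p) for p in plays) / len(plays)
p_noisy = sum(noisy_ref(p) for p in plays) / len(plays)

print(f"P(call | real foul), biased ref: {p_biased:.2f}")  # ~0.10
print(f"P(call | real foul), noisy ref:  {p_noisy:.2f}")   # ~0.55
```

Facing the biased ref, players can rationally expect physical play to go uncalled and adjust their style accordingly; facing the noisy ref, each call is close to a coin flip, and no stable expectation is available.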
5. Bias and knowledge. You might well think that, if my thinking about a certain topic is biased, then my opinions on that topic are mere opinions; they do not amount to knowledge. Kelly argues otherwise: one could be a biased knower. His case: a parent is watching her child play, and anyone watching the scene could plainly see that the child is alive and well. But
If the child wasn’t alive and well, the parent would still believe this, because the parent is so deeply invested in it being true, that her desires would ensure that she believed accordingly. If evidence emerged that the child wasn’t alive and well, psychological mechanisms would be triggered that would lead her to dismiss the evidence.
This mother is biased in favor of believing that her child is alive and well. She still, however, knows that her child is alive and well: she’s got a clear view of her child, is watching carefully, and that’s what produces her belief. So biased knowers are possible—but only in strange cases. Here the “biasing mechanism,” as it were, does not operate to produce the belief (that her child is alive and well) that constitutes knowledge. It just waits in the wings, activating only if the belief isn’t produced by ordinary perceptual processes. Kelly suggests that the beliefs of biased believers that are produced by a biasing mechanism do not constitute knowledge.
6. The true power of bias attributions. Suppose I am considering whether abortion rights are protected by the US Constitution. What should I make of the fact that Antonin Scalia, the late Supreme Court Justice, denied that abortion rights were so protected? Scalia was an expert in law, a very smart man; in those respects, better positioned to judge the issue than I. “Ah,” I might say, “but Scalia believed abortion was deeply immoral. And while being deeply immoral and being unprotected by the Constitution are different things, Scalia was biased against thinking that deeply immoral acts were Constitutionally protected!”
But what actually follows from this? Somehow, it’s supposed to mean that I can ignore Scalia’s opinions when inquiring into whether the Constitution protects abortion. But why? The answer isn’t obvious, given point 5 (bias and knowledge): even if Scalia is biased, he might still know whether the Constitution protects abortion. And if he might know, shouldn’t I to some extent defer to his opinion?
Kelly says no. Even if Scalia knows the answer, his belief about what that answer is, if biased, is “insensitive”: he would have believed the same, even if he’d been wrong. And
The fact that the expert [here, Scalia] believes insensitively means that his believing as he does is evidentially worthless to you...his believing H isn’t a piece of evidence (even a tiny piece of evidence) that speaks in favor of your believing H.
This is one of those far-reaching and radical implications. You might have thought, if one person knows something, they can spread that knowledge around, until more and more people know it; and the knowledge of the community as a whole can thereby improve. But that’s false: if the original knower is a biased knower, then he cannot “serve as an epistemic resource to others.”
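Kelly doesn’t frame it this way, but a quick Bayesian gloss (mine, with made-up numbers) shows why insensitive belief is evidentially worthless: if the expert would believe H whether or not H is true, the likelihood ratio of his testimony is 1, and your posterior equals your prior.

```python
def posterior(prior, p_believe_given_h, p_believe_given_not_h):
    """P(H | expert believes H), by Bayes' theorem."""
    num = p_believe_given_h * prior
    return num / (num + p_believe_given_not_h * (1 - prior))

prior = 0.5

# A sensitive expert: far more likely to believe H if H is true.
print(f"{posterior(prior, 0.9, 0.1):.2f}")    # 0.90 -- strong evidence

# An insensitive (biased) expert: would believe H either way.
print(f"{posterior(prior, 0.99, 0.99):.2f}")  # 0.50 -- no evidence at all
```

On this gloss, the insensitive expert’s belief doesn’t budge you, not because he lacks knowledge, but because his believing carries no information about whether H is true.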
7. Bias in action. This theory of bias is meant to be general. There is biased believing, when one’s beliefs systematically depart from the truth; but there is also biased action, for example, when one’s acts systematically depart from what is right. I might, for example, pick my friends for my softball team more often than I pick my colleagues, even when I know they’ll be equally good at the game. My choices are biased toward my friends.
Kelly mostly focuses on bias in the production of beliefs; he’s a professional epistemologist, and only dabbles in ethics. Maybe that’s why the theory’s application to ethics left me with more questions. Aristotle thought virtues were means between two extremes. The courageous man feels the right amount of fear, not too much or too little. He who feels too much fear will exhibit the vice of cowardice; he who feels too little, recklessness. If this is right, and insofar as be virtuous is a norm of action (and surely it is?), then Kelly should—right?—regard cowardice and recklessness, and other vices, as biases. They depart from a norm of action in one direction or the other, just as a biased thermometer departs from the norm of accuracy in one direction or the other. But wouldn’t “cowardice is a bias” be, as the Oxford philosophers might have put the point back in the 1950s, quite an odd thing to say?
If Kelly is right about biased knowledge (and I think he is), then the “common intuitions” behind many of the thought experiments used to motivate reliabilism about knowledge over the past few decades are simply mistaken (and I think they are).