Monday, January 28, 2013

More of the politics of science

In introduction to linguistics classes, students are told that linguistics is a science because it takes a descriptive, rather than a prescriptive, approach to language. In introduction to psychology classes, a different approach is taken. First, the concept of “confirmation bias” is introduced: people tend to look for evidence that supports their views rather than evidence that contradicts them, and they are often dismissive of evidence that contradicts their views. Therefore, to be a science, psychology must have research methodologies that help overcome confirmation bias.

Both approaches have their value. My preference for the linguistics approach has to do with my view of the is-ought problem, combined with my experience that using strongly ought-laden categories for descriptive purposes tends to produce skewed pictures of things. (I realize that this itself closely relates to my normative judgment that researchers should avoid such analytic categories when possible. But I am not arguing for value-free research [which would make no sense at all, given that feeling a need for research involves epistemic values]. Rather, my position is that scholars’ epistemic values should take precedence over political values [with the exception of ethical obligations to research participants].)

Dealing with confirmation bias is also very important for science. It has often been said that science is not a process for always getting things right, but an inherently self-correcting process. I think this view of science is basically correct (provided that social values/taboos do not make some topic essentially unresearchable, or researchable only from a very limited range of perspectives).

In my last post, I discussed approaches to “objectivity” in research and indicated my own preference for a) striving for (limited) objectivity, and b) putting epistemic values over political values in our research. Confirmation bias is a major part of why (limited) objectivity is so difficult. In my view, there are three main means of overcoming it.

1) Be aware of the problem. If we are aware of confirmation bias and strive for (limited) objectivity, then recognizing it in ourselves (and trying to counter it) is an important first step. But this alone is not nearly enough.

2) Hypothesis testing. I firmly believe that both quantitative and qualitative methodologies are important for studying humans. Among those studying asexuality, no one thinks we should use only quantitative methods, but there seem to be some who are committed to using only qualitative methods. I am wary of this because I distrust our impressions of trends and tendencies (salience causes us to overestimate frequency). Careful methodologies, including quantification (if done well), allow for more rigorous testing of our hypotheses and beliefs.

3) Science is a social enterprise conducted by people with a variety of biases. Everyone has their biases and their prejudices. I’ve got mine, and you’ve got yours. If everyone working in a certain area has the same biases, then you’ve got a serious problem for any hope of (limited) objectivity or scientific advancement. If we have different biases and different prejudices, and these are combined with shared epistemic commitments, then there is hope. If I do a study and those with similar views find my analysis convincing, but those with rather different ideologies remain skeptical, this can be helpful, but it depends on how they respond. If they attack my research on purely moral/ideological grounds, that is a serious problem. If they criticize it on methodological grounds (as is popular in academic arguments), that’s fine, but what would really advance the field is if they suspect that my findings result from some problem or limitation in my methodology, and then respond by trying to do a better study with improved methods. Perhaps they discover that, even with those improvements, my findings largely replicate. Perhaps they find that my findings replicate without the improvements, but that the effect goes away when this or that is controlled for. Either way, as long as the research is well conducted, the field has been advanced.

For this to work, there must be a shared epistemic commitment. Personally, I read things primarily because I want to learn (or because I have to read them for a class or a reading group, or because they pertain to something I’m working on, in which case I very much hope to learn from them). If some researcher has an obvious political agenda, their work is empirically weak, it is clear that they allow their political commitments to trump their epistemic commitments, and I am not especially interested in working on their particular political cause, why should I waste my time reading their stuff? If academic obligations make me read it anyway, it is painful: it can be annoying to read, and I have wasted time on crap that could have been spent reading something more informative.

By contrast, if I have a number of political/ideological disagreements with someone, but they research issues closely related to mine, their work is methodologically rigorous, and we share epistemic commitments, then my epistemic commitments require that I take their work seriously. This is precisely the situation we need in order to fulfill the third criterion necessary for achieving (limited) scientific objectivity.

If a field puts its political commitments above its epistemic commitments, the consequence is that it tends to be ignored by people who do not already agree with it, and it runs the risk of becoming an echo chamber. This especially concerns me because, in my experience, a dangerous temptation to which many activists are prone is to consider complex issues from a narrow range of concerns, and consequently to have a high tolerance for collateral damage and a low regard for the law of unintended consequences: pain that they cause others is OK because it is done in the name of justice.
