Research Methods And Bias In Science

I've always been a little skeptical about the scientific method.

Science isn't one thing, after all. Just as sports isn't one thing. There isn't one way to win, or one way to get the gold. And, so, there isn't one way to conduct research in fields as different as chemistry, economics, climate science and ethology.

Nevertheless, just as there are shared values in sports — the value of fair play might be a good candidate — so there are shared values in science. Some of these are: keep an open mind in the pursuit of truth; question one's own certainties; be on guard against hidden biases, especially one's own.

This topic is very much in the news lately as science — especially social science — confronts a crisis about the replicability of findings. It turns out, according to an essay by science writer Regina Nuzzo in last week's Nature, that bias — or shoddy practices — can easily find their way into the laboratory of even the most well-meaning researchers. For example, you might be so convinced of your hypothesis that you throw out data that don't support it on the very reasonable grounds that, well, there must have been some mistake. You keep probing until you get a hit — and then you publish. In such a case, do your data really support the finding? Not really.
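
To see how fast this corrupts a result, here is a toy simulation of my own devising (the numbers are illustrative assumptions, not anything from Nuzzo's essay). There is no real effect in the data, yet peeking after every few subjects and stopping at the first hit produces "significant" findings far more often than the advertised 5 percent of the time.

```python
# A toy simulation of "probing until you get a hit" (optional stopping).
# Both groups come from the same distribution, so the null is true and an
# honest test at the 5% level should only succeed about 5% of the time.
# All numbers here are illustrative assumptions, not taken from the essay.
import math
import random
import statistics

def p_value(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def one_study(max_n=100, peek_every=5):
    """Add subjects in batches and stop at the first 'significant' peek."""
    a, b = [], []
    while len(a) < max_n:
        a += [random.gauss(0, 1) for _ in range(peek_every)]
        b += [random.gauss(0, 1) for _ in range(peek_every)]  # no real effect
        if p_value(a, b) < 0.05:
            return True   # a "hit" -- in real life, time to publish
    return False          # honest null result

random.seed(0)
rate = sum(one_study() for _ in range(2000)) / 2000
print(f"False-positive rate with peeking: {rate:.1%}")  # well above 5%
# Committing to the sample size in advance restores the nominal 5% rate.
```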

Or you might tailor your theory to the data. That's a good thing, in general. After all, you want your theory to conform to the facts. But, at the same time, you don't want to pretend that the data were predicted when they weren't. I'm not a good dart thrower just because I can throw darts at a wall and then put a circle around a big clump of them and claim that I've hit the target (to use an example from the article).
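
The dart trick can be run as a toy experiment, too. In this sketch (again my own construction, with made-up numbers), the darts land uniformly at random and the bullseye is drawn afterwards around the densest clump, so the thrower always ends up looking like a marksman.

```python
# The dart board, staged: scatter darts at random, then draw the target
# around whatever clump turns up. The circle is chosen after seeing the
# data, so the thrower always looks accurate. Purely illustrative numbers.
import math
import random

random.seed(1)
darts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(50)]

def hits(center, points, radius=1.0):
    """Count darts within `radius` of a proposed target center."""
    cx, cy = center
    return sum(math.hypot(x - cx, y - cy) <= radius for x, y in points)

# Post-hoc targeting: center the circle on the dart with the most neighbors.
bullseye = max(darts, key=lambda d: hits(d, darts))
print(hits(bullseye, darts), "of 50 darts in a target drawn after the fact")
# A same-sized circle drawn *before* throwing covers about 3% of the wall,
# so it would catch roughly 1 or 2 of the 50 darts on average.
```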

Physics, I learned from the article, has developed a technique to combat fake bullseyes (or fake significant findings). We are familiar with the concept of the double-blind study: In such studies, neither the researchers nor the subjects of an experiment (a drug study, for example) know who is getting a placebo. This way, the subjects can't let their knowledge influence what they report. And researchers can't let what they expect to find shape what they notice.
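
For the curious, here is a hedged sketch of how such blinding might be administered in code (the names and kit codes are hypothetical, not any particular trial's protocol): a third party holds the randomization key, and everyone else works from opaque labels.

```python
# A minimal sketch of how double blinding is administered (hypothetical
# names and kit codes, not any particular trial's protocol). A third
# party randomizes subjects to drug or placebo; everyone else sees only
# opaque codes until the outcomes have been recorded.
import random

def blind_assign(subject_ids, seed=42):
    rng = random.Random(seed)
    key = {}    # code -> treatment arm, held only by the third party
    codes = {}  # subject -> code, the only thing researchers ever see
    for sid in subject_ids:
        code = f"KIT-{rng.randrange(10**6):06d}"
        key[code] = rng.choice(["drug", "placebo"])
        codes[sid] = code
    return codes, key

codes, key = blind_assign(["s01", "s02", "s03", "s04"])
print(codes)  # the codes reveal nothing about who got the placebo
# `key` stays sealed until data collection ends; only then are arms unblinded.
```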

The new technique of data-blind studies takes this to a whole new level. In data-blind studies, you collect data and design a computer program that generates alternate, divergent data sets, all of which are possible but only one of which is actual. You, the researcher, don't know which data set is the genuine one. This means you have to figure out what to say — what your principles of analysis are — before you even know whether the findings support your theory. It's a little like the principle that the child slicing the pie in half shouldn't know in advance which portion she'll get. You slice, or conduct research and analysis, behind what philosophers call the veil of ignorance. This lets you decide what your standards for assessing data will be independently of what you actually discover. You only lift the veil, or raise the blind, once you've agreed on criteria of evaluation that are neutral.
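
Here is a minimal sketch of that idea (a toy construction under my own assumptions, not the physicists' actual pipeline): the genuine data set is hidden among computer-generated decoys, the analysis rule is fixed while the analyst is still blind, and only then is the real set revealed.

```python
# A minimal sketch of the data-blind idea (my own toy construction, not
# the physicists' actual pipeline). The genuine data set is shuffled in
# among computer-generated decoys; the analyst commits to a rule while
# still blind, and only then learns which set was real.
import math
import random
import statistics

rng = random.Random(7)
real_data = [rng.gauss(0.3, 1.0) for _ in range(200)]  # the actual measurements

# Plausible but divergent decoy data sets, mixed in with the real one.
decoys = [[rng.gauss(rng.uniform(-1, 1), 1.0) for _ in range(200)]
          for _ in range(4)]
blinded = decoys + [real_data]
rng.shuffle(blinded)
secret_index = blinded.index(real_data)  # known only to the blinding referee

def committed_rule(data):
    """The criterion fixed behind the veil: |mean| > 2 standard errors."""
    se = statistics.stdev(data) / math.sqrt(len(data))
    return abs(statistics.mean(data)) > 2 * se

verdicts = [committed_rule(d) for d in blinded]  # applied to every set alike

# Lifting the blind: the verdict on the real data was reached by a rule
# chosen without knowing which answer it would give.
print("Verdict on the genuine data:", verdicts[secret_index])
```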

It is a remarkable example of the scientific disunity with which I began that, as a matter of fact, the data-blind method isn't more widely used outside of physics.


Alva Noë is a philosopher at the University of California, Berkeley, where he writes and teaches about perception, consciousness and art. He is the author of several books, including his latest, Strange Tools: Art and Human Nature (Farrar, Straus and Giroux, 2015). You can keep up with more of what Alva is thinking on Facebook and on Twitter: @alvanoe

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Alva Noë is a contributor to the NPR blog 13.7: Cosmos and Culture. He is a writer and a philosopher who works on the nature of mind and human experience.