Most journalism programs train students in journalism, not the subject they're covering. Therefore, it's not surprising that journalistic descriptions of the sciences can sometimes be misleading. Unfortunate, but not surprising. The most common mistake I see is also the most important: the confusion of correlation with causation. Many people reading this blog are no doubt familiar with the concept, but it's not as elementary as it might seem. Despite an economics undergraduate degree, a stint TA-ing statistics, and my consulting gig, it took me until my economics PhD coursework to really understand this in an intuitive, visceral way: We can't learn anything from studies where we don't have a good reason to believe the control group is like the treatment group.

That's jargon-y, I know, so I'll break it down a little. The language around statistical studies comes from medical evaluations, where one group is treated (e.g., with a drug), another group is given nothing or a placebo, and their outcomes are tracked. The first group is called the "treatment" group, the second the "control." Commonly in medical studies, these groups are assigned randomly, so that the researchers can be sure it's the drug causing one group's improvement or deterioration, not some other source. You can see the problem with doing it a different way: if we compared people who naturally took some medication with people who did not take it, we would likely find that the people who took it were sicker, and we would risk concluding the medicine was harmful, because being sick is what made them seek out the medication. The tie between being sick and taking the medication is a correlation: naturally, people who are sick take medication. But the causation runs from the sickness to the medication, not the other way around.
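If you like seeing this in numbers, here's a quick simulation sketch of exactly that trap. The setup and the numbers are entirely made up for illustration: the drug does nothing at all, but because sicker people are the ones who take it, a naive comparison makes it look harmful, while random assignment recovers the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Made-up setup: sickness drives both medication use and poor health;
# the drug itself does nothing in this simulation.
sickness = rng.normal(size=n)                                  # unobserved severity
takes_drug = rng.random(n) < 1 / (1 + np.exp(-2 * sickness))   # sicker people seek the drug
health = -sickness + rng.normal(size=n)                        # true drug effect: zero

# Naive comparison: drug-takers look much worse off.
print(health[takes_drug].mean() - health[~takes_drug].mean())  # strongly negative

# Randomized assignment: the same comparison now shows (correctly) no effect.
assigned = rng.random(n) < 0.5
print(health[assigned].mean() - health[~assigned].mean())      # roughly zero
```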
The same applies to people who eat different types of foods, get different amounts of exercise, weigh more or less, or have different levels of education. They simply aren't very similar to people who fall on the opposite side of that scale. The problem is the same as with the person who took the medication because they were sick--whether somebody does/doesn't do or is/isn't something we're interested in studying is tied to who that person is, which means we can never get an accurate measurement of the impact of that behavior or characteristic. This is why I'm so tired of newspapers telling us that certain foods are miracle foods, that drinking will keep us from gaining weight, and any number of other silly claims. As one nutrition expert--the inventor of the term "orthorexia" for a disordered obsession with eating healthy foods--put it, "it's probably impossible to tell what's healthy and what's not healthy in terms of lifestyle from the studies that we have."
The straight correlation studies are certainly no good, but I don't think another class of studies, which controls for observable characteristics, is much better. This is another jargon term--an observable characteristic is something that's measurable about a person, like their age or socio-economic status. These types of characteristics cover many important things, but exclude other equally important things we can't quantify, like an individual's propensity to take care of themselves. @EbertChicago recently tweeted that the case was closed on whether soy was good for us, citing an article in the Huffington Post. I don't blame Mr. Ebert for being taken in by the research presented in the article; it certainly sounds scientific and convincing. The author cites numerous peer-reviewed studies showing that people who consume soy suffer no adverse effects and are often healthier than their soy-avoiding peers. But when I looked at the studies, I found they were what is called "case-control," a medical research term for "not randomized." In case-control studies, people with a certain condition or who exhibit a certain behavior (in this case, eating soy) are matched to similar people without this condition or behavior, and their health outcomes compared. Things like race, socioeconomic status, occupation, and age are controlled for. Unfortunately, it's impossible to control for all the essential elements that might predict health outcomes: people who eat soy are different from people who don't in important, often unobservable ways. One obvious culprit is that people who eat a lot of soy are most likely getting far fewer of their calories from meat. Meat may be bad for health in large quantities, and therefore the apparent benefits of soy could really be benefits from avoiding meat.
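To make that concrete, here's another small sketch with invented numbers (the regression uses statsmodels simply as a convenient tool, not because any of the studies did): soy eating and good health are both driven by something we never get to measure, so even after controlling for what we can observe, the estimated "soy effect" comes out positive although the true effect in this toy world is zero.

```python
import numpy as np
import statsmodels.api as sm  # just one convenient way to run the regression

rng = np.random.default_rng(1)
n = 50_000

# Invented setup: one characteristic we can observe and control for (income),
# one we can't (overall health-consciousness, which also shapes meat intake, etc.).
income = rng.normal(size=n)
health_consciousness = rng.normal(size=n)          # never shows up in the data

# Eating soy depends on both; the true causal effect of soy on health is zero here.
eats_soy = (0.5 * income + 1.5 * health_consciousness + rng.normal(size=n) > 0).astype(float)
health = 1.0 * health_consciousness + 0.3 * income + rng.normal(size=n)

# Case-control-style estimate, controlling only for what we can observe:
X = sm.add_constant(np.column_stack([eats_soy, income]))
soy_coefficient = sm.OLS(health, X).fit().params[1]
print(soy_coefficient)   # clearly positive, even though soy does nothing in this world
```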
It is only in the last decade that social scientists have begun to understand the profound difficulties in finding the right things to control for. The problem is that there is so much information contained in what someone chooses to do--so many factors that lead up to that choice--that we are at a loss to enumerate them. This is where the correlation-is-not-causation thing gets tricky even for the very scientific-minded. We've already established that, to see whether being overweight is bad for someone's health, we can't just compare the health of heavy people to that of less heavy people (something that is still done all the time, by the way). However, many journalists and researchers still seem to think we can understand the health effects of weight by looking at people who lose weight. This simply isn't true. The behaviors involved in losing weight (changing eating and exercise practices), as well as the predisposition to take on this task, are just as likely as the weight loss itself to have health impacts. In fact, one study that actually was randomized found that changing health behaviors and information did change health outcomes, whereas losing weight did not. People who lose weight are different from those who don't. People who eat soy are different from those who don't. People who drink, go to college, eat low-fat foods, shower only on Wednesdays, and dye their hair are different from those who don't. There is little hope of finding a suitable control group, or of statistically controlling for all the important confounding factors.
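One last toy simulation, with the same caveat that every number is made up purely for illustration: here, changing habits improves health directly and also happens to cause weight loss, so people who lose weight look healthier, even though the pounds themselves do nothing.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Invented setup: new diet/exercise habits improve health directly and also
# cause weight loss; the weight loss itself has no effect on health.
changed_habits = rng.random(n) < 0.3
pounds_lost = 5.0 * changed_habits + rng.normal(size=n)
health = 1.0 * changed_habits + rng.normal(size=n)   # habits, not pounds, do the work

lost_weight = pounds_lost > 2.5
# Comparing weight-losers to everyone else makes weight loss look beneficial...
print(health[lost_weight].mean() - health[~lost_weight].mean())                # clearly positive
# ...but among people who changed their habits, losing more weight buys nothing extra.
print(np.corrcoef(pounds_lost[changed_habits], health[changed_habits])[0, 1])  # roughly zero
```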
I'm not saying every study needs to be randomized; that frequently isn't feasible. But I do believe we either need research with a credible control group (often this comes from quasi-randomization--policy or environmental quirks that result in otherwise similar people undertaking different behaviors) or we need to be very clear about what our research can say and what it can't. E.g., a finding that people who drink wine live longer is just that--a correlation between drinking wine and living longer. So no cutesy lead-ins about how that extra glass of red might buy you an extra year. No one has any idea if that's true, and pretending it is distracts from the actually interesting fact that the same people who tend to drink wine also tend to live longer lives. And I would be remiss if I laid all the blame on journalism. Social scientists themselves are too often sloppy in the way they describe their research. They might be very careful in the body of the paper, but in the abstract or the seminar presentation or the media quote they slip into colloquialisms about what their research means, even if it doesn't mean that at all. Hence the endless soundbites about how a study that finds better education is correlated with higher-paying jobs means that if you stay in school for five extra years, you could make ten thousand extra dollars! Does that mean if I stay in school for a hundred years I could clear $200,000? And if I lose infinite pounds, will I be infinitely healthy? No, and no. Perhaps the most depressing thing about correlation not being causation is that we actually understand very little about how the world works. But at least we know the right way to find out.
If I had to choose the single most important thing I learned in college, this would be it. Totally worth the four years if only this sinks in, though - imagine how different political debates would be if the public had a basic level of statistical literacy.
This comic seems appropriate