By Mark Blaxill, Editor-at-Large
In the Orwellian language of industry-funded research, the world has only two kinds of science: "sound science" and "junk science." So whenever you hear someone attack a piece of work as "junk science," it's time to put your bullshit filters on. Like political campaigners, financially involved partisans in a scientific controversy have learned that negative campaigning is an effective tactic.
Instead of dealing with facts, analysis and data, the tried and true partisan approach is to launch an ad hominem attack on the character, motivations, sanity or even the home décor of the targeted analyst. Facts and data are awfully boring, after all, and most journalists have been conditioned not to actually read the science they cover. Instead, they kowtow to "experts" who filter research for them. And if a fancy-sounding expert denounces troubling work as "junk," or is willing to don his white coat and intone for the camera that "science has spoken" on a controversial topic, well then that's how it is, right?
Wrong. The scientific process has little to do with class distinctions between respectable gentlemen and dangerous junkmeisters. It's far less socially aware: it's about evidence. And evidence is a funny thing. It's democratic and unpredictable. It has a way of not cooperating with comfortable orthodoxies and appealing theories. And when problems are complex, the evidence has a funny way of hiding behind bad design, methodological errors and investigator bias. In our world of new environmental diseases and, of course, an epidemic of chronic disease, the great scientific issue of our time is not the struggle between "junk science" and respectable science, it's the simple question, why are so many children sick?
And in seeking the answer to that question, there is really only one enemy. Bad analysis.
Bad analysis can happen many ways. Biased study designs can lead to bias in the evidence base. Useful data can be processed poorly or with errors in interpretation. Important insights can be lost or suppressed in favor of the preferred insights. Or perfectly good evidence and data can be misrepresented. In short, smart scientists (because most of the people who submit scientific papers for publication are undeniably smart) make mistakes. Sometimes these mistakes are innocent. Sometimes they're a result of someone stretching their findings to defend their financial or ideological interest.
But sometimes smart people just get things wrong. And because the peer review process is an imperfect filter, sometimes mistakes get into print that are just plain stupid.
In this month's Journal of Child Neurology, there's a fascinating new report on autism and mercury that exposes a really large and really stupid mistake. Two psychologists from the University of Northern Iowa, Catherine DeSoto and Robert Hitlan, examined a paper published in 2004 in the same journal by a group from the University of Hong Kong looking at mercury exposure in children. The Hong Kong study was led by Patrick Ip and looked at mercury levels in blood and hair for 7-year-olds with autism and compared the results to normal controls. In their paper, Ip et al reported findings that went against the "poor excretor" hypothesis of autism and mercury, claiming that there was no difference in any measure of mercury exposure and excretion between their autistic children and control children. The last line of their abstract (the only part of the study most scientists ever read) was surprisingly categorical: "the results from our cohort study…indicate that there is no causal relationship between mercury as an environmental neurotoxin and autism."
DeSoto and Hitlan took a preliminary look at Ip's analysis and were concerned. "While attempting to estimate the effect size based on the Ip et al statistics, we realized that the numbers reported by Ip et al could not be correct." So they went back and looked at the data again. What they concluded, looking at exactly the same data, was that a better analysis led to the opposite conclusion. [Full disclosure: I've neither met nor corresponded with DeSoto and Hitlan, but they described a study on which I was a co-author as the "most direct test of the hypothesis that autistic children may be deficient in terms of the ability to remove mercury from circulation." I agree with them.]
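The sanity check DeSoto and Hitlan describe is routine: from a paper's reported group means, standard deviations and sample sizes, you can compute an effect size (Cohen's d) and ask whether it is even compatible with the p-value the authors reported. A minimal sketch of that calculation, using made-up placeholder numbers rather than the actual Ip et al statistics:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: standardized mean difference using the pooled SD."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical summary statistics, NOT the values from Ip et al:
d = cohens_d(19.5, 6.0, 82, 17.7, 5.9, 55)
# Given the sample sizes, a reported p-value pins d to a narrow range;
# if the d implied by the means and SDs falls outside that range, the
# numbers printed in the paper cannot all be correct.
```

That is essentially what happened here: the published statistics could not all be true at once, which is what sent DeSoto and Hitlan back to the data.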
I recommend that you read DeSoto and Hitlan's paper. It's quite short and, despite a necessary discussion of the statistical details, quite accessibly written. It's also wise. In summary, their analysis revealed the key flaws in the Hong Kong team's study design as well as a number of important effects that Ip et al overlooked.
The Ip et al study never attracted much attention in the first place because their result was unimportant. They tested blood and hair mercury levels in 7-year-olds. No one has ever claimed that mercury levels in 7-year-olds would have anything to do with the kinds of mercury exposures required to induce autism in infant brains. More than anything else, the mere publication of this article demonstrated the kind of sloppy logic that pervades the autism debate. All that Ip et al demonstrated was that 7-year-old Chinese children had relatively high levels of mercury in their hair. Their strongly worded conclusion that there was no link between mercury and autism was overly aggressive and, frankly, just plain silly.
But DeSoto and Hitlan, their attention attracted by a clear statistical error, started asking some smart questions. First, and most surprisingly, they used the basic statistics reported by Ip et al to calculate that there was a significant elevation in blood mercury levels (higher by more than 10%) in the Chinese 7-year-old autism group. [What this means is anyone's guess. Perhaps these older children remain closer to the environmental sources of mercury that placed them at greater risk of autism to begin with.] Second, and more importantly, they deduced that the differences between the blood and hair mercury excretion patterns in the sample also supported the idea that the children in the autistic group were poor excretors. Using the expected observation that blood and hair mercury levels were highly correlated in the sample (blood mercury levels explained 75% of the differences in hair mercury), they discovered that "the relationship between blood levels of mercury and mercury excreted in the hair is reduced for those with autism compared to non-autistic persons; furthermore, the difference between autistic persons and nonautistic persons is most pronounced at high levels of mercury."
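Statistically, the pattern described above is a moderation effect: the slope relating blood mercury to hair mercury is flatter in the autistic group, which a regression with a blood × group interaction term can detect. A minimal sketch on simulated data (the data and slope values are illustrative assumptions, not the Ip et al sample):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sample: 80 controls and 80 autistic children (illustrative only).
n = 80
blood = rng.uniform(5, 40, size=2 * n)   # blood mercury level
group = np.repeat([0, 1], n)             # 0 = control, 1 = autism
# Controls excrete proportionally into hair; the autism group has a
# flatter slope, i.e. less mercury reaching the hair per unit in blood.
true_slope = np.where(group == 0, 0.25, 0.10)
hair = true_slope * blood + rng.normal(0, 0.5, size=2 * n)

# Ordinary least squares with a blood x group interaction term:
X = np.column_stack([np.ones(2 * n), blood, group, blood * group])
beta, *_ = np.linalg.lstsq(X, hair, rcond=None)
# A clearly negative beta[3] means the blood-to-hair relationship is
# weaker in the autism group, with the gap widening at higher blood
# levels -- the pattern DeSoto and Hitlan report.
```

The interaction coefficient is the whole story here: a simple difference in group means could miss an excretion deficit entirely, while the interaction term captures how the two groups diverge as exposure rises.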
This is an important and unexpected finding. It supports one of the central hypotheses at the heart of the autism-mercury controversy and suggests that the excretion deficit in autistic children might persist longer than anyone had guessed. Why did Patrick Ip and his team miss this point? Well, first of all, they made a simple calculation error, an error that their biases led them to believe was a valid result. Second, they had a failure of imagination. It simply never occurred to them that their data contained evidence supporting the hypothesis they were so eager to refute. If it weren't for the sharp eyes and diligent work of DeSoto and Hitlan, this analytical opportunity would have been lost forever.
Do these errors make Patrick Ip, Virginia Wong, Marco Ho, Joseph Lee and Wilfred Wong of the University of Hong Kong junk scientists? Certainly not. They are without doubt well-educated, well-meaning and smart scientists. But in this case, they did bad work, bad on many levels: testing a hypothesis that no one ever offered; drawing inappropriate conclusions from their sloppy design; making calculation errors that the initial peer reviewers missed; and failing to notice a surprising relationship that should have been obvious to anyone with knowledge of the poor excretor hypothesis. DeSoto and Hitlan deserve praise for confronting the errors of the Hong Kong team, who should be held accountable for doing bad work. Confronting error is a difficult and risky exercise, especially when you're on the unpopular side of a controversy. But in their conclusion, DeSoto and Hitlan state with great eloquence why this is so important.
"Of utmost importance (which outweighs the discomfort of writing about an error made by colleagues whom we know are generally competent researchers) is that potential researchers who are trying to understand what is and is not behind the rise in autism are not misled by even the slightest misinformation."
Amen to that.
Here's a LINK to their report.