Wednesday, August 25, 2010

Data vs. Interpretation

I don't remember if I've made this point here before -- if I have, I suspect it's buried in some lengthy block of text, so I'm going to go ahead and give it its own post.

Basically, one thing I've noticed in whatever passes for "science journalism" these days (not that ALL of it calls for scare quotes, but enough of it does to merit them here) is that articles are often written and headlined in ways that blur the distinction between what researchers observed and/or recorded (i.e., data) and what that data means (to either the researchers or the authors of articles covering the research).

For instance, take the cat-cognition study I referenced in two recent posts. The study itself had some flaws (which I won't relate again here, as I covered them in detail in those posts), but by far the most bizarre thing I saw in response to it was the vast number of popular articles announcing "Study Proves Dogs Smarter Than Cats" and similar sentiments.

As far as I could tell, there was no data whatsoever to support this notion of dogs being categorically "smarter" -- all the data really revealed was a difference in performance on a particular task between the tested (rather small) samples of dogs and cats. The implications of this performance difference were discussed in the applicable paper(s), with the experimenters suggesting various interpretations of their own (some of which could stand some rigorous criticism). Then the media had a kind of frivolous field day making their own further interpretations, while acting as if those interpretations had actually been objectively observed during the experiments.

Which is, you know, kind of a major category error. An interpretation isn't directly observable at all, and someone with a decent grasp of scientific methodology (and you don't need to be a professional scientist to acquire one) will pretty much always maintain awareness of this. If you read the paper for a well-designed study, you will probably find it very heavy on data and very light on firmly-stated conclusions. But a lot of people don't understand, or don't care, that being "tentative" in this manner isn't a weakness of science, but an essential strength and a source of both flexibility and responsiveness to incoming information.

Anyway, though, this isn't some screed in defense of cat cognition (though I do think many cat cognition studies suffer from terribly poor design). I am just using that subject as something I can easily point to as a concrete example. Really my concern here has to do with far too many people, whether they be researchers, journalists, or simply curious laypersons, failing to distinguish between "what was measured/recorded" and "what can reasonably be concluded based on what was measured/recorded".

Too often it seems that conclusions based on stereotypes, unexamined assumptions, or sheer unmitigated ignorance get taken as somehow tantamount to Really Significant Data That Means Something Important.

This is not only an intellectual integrity/rigor problem, in my opinion, but an ethical one as well -- e.g., I've encountered a truly stunning amount of "interpretation/data blurring" in the realm of autism research, which of course has the potential to impact actual living autistic people in serious ways.

Phrases like "lack of Theory of Mind", "lack of empathy", etc., are pulled out of who-knows-where, defined poorly if at all, and then, astoundingly, offered up as objectively existing phenomena on the basis of observations that could very well mean something else entirely (which is totally aside from the problem of the wrong observations being counted as significant or insignificant in the first place).

Of course I do not mean to say that interpretation is always bad and ought to be avoided -- rather, I just think interpretations are too often put forth too firmly and too prematurely, to the detriment of the subjects they seek to explore or describe. And as I've repeated several times here already, interpretations can get muddled with data to the point where questions that could really benefit from a lot more data do not receive this benefit. In other words, when people presume they already know everything there is to know about something, they may be less inclined to bother obtaining further information on it.

(Moreover, when this muddling becomes habitual, I suspect it also becomes really difficult for people to know when an interpretation is valid. But that's a whole other post!)

So in any case I will stop now, hopefully keeping this post at a more readable length than I am usually capable of (writing "long" posts is often the only way I can ever write anything at all), because this is something I think about a lot, have experienced direct consequences from, and also see as a concern for other sorts of humans and non-humans whose well-being all too often can hinge upon the interpretive whims of others.

3 comments:

jimf said...

> Basically one thing I've noticed in
> whatever passes for "science journalism"...
> blurs the distinction between what
> researchers observed and/or recorded (i.e.,
> data), and what this data means. . .

Yes, well, that is a problem, isn't it? Journalism being journalism, and not scholarship -- a branch of the entertainment industry, basically. So there's one source of bias right off the bat -- articles in the mass media need to attract casual readers (to make the advertisers, who are keeping the journalists and their publishers employed, happy), and that means primarily that they need to be gee-whiz-worthy, with perhaps the irritatingly supercilious -- "so you were dumb enough to buy the glossy rag you're holding for **this**, huh?" -- cautionary paragraph at the end to take some of the gee-whiz out of the cover blurb. And they've gotta be short and simple (middle-school-level English), and upbeat, upbeat, upbeat. Optimistic. Life, particularly life here in the U.S. of A., must be shown to be getting better and better, every day in every way. Then there are the political and ideological biases to squint through. Nature (if you're a Republican) vs. Nurture (if you're a Democrat), that sort of thing (reversed, of course, when it comes to the causes of homosexuality ;-> ).

When I was a kid, I **loved** newsstand popular science stuff. Now, it makes me feel vaguely ill. Though I'll still occasionally buy some of it (Scientific American Mind -- What Makes You **You**?) if I'm absolutely desperate for something undemanding to read on the bus home, I feel somewhat the same way about it as I'd feel about eating a Reese's peanut butter cup on the way. Junk nutrition, junk reading.

And, of course, sometimes what passes for serious science turns out in the end to be completely off-the-wall. I was browsing at Barnes & Noble last weekend through a book entitled _The Masters of Sex: The Life and Times of William Masters and Virginia Johnson, the Couple Who Taught America How to Love_
http://www.amazon.com/Masters-Sex-William-Virginia-Johnson/dp/0465020402
which contains a chapter about Bill Masters' 1979 _Homosexuality in Perspective_ (and it was really Masters' book, the author of the biography claims; Virginia Johnson considered insisting that her name be removed as co-author, but in the end she didn't want to provoke the boss). That book made claims about what is today known as "reparative therapy" that are still trotted out by conservative Christians and reparative therapists themselves, but which are not taken seriously by most of the psychiatric and psychological community. The "observations" in it were pretty scarce, while the "interpretation" came entirely out of Bill Masters' imagination. Or so the bio's author claims; I certainly haven't read _Homosexuality in Perspective_, nor do I intend to. So who the hell knows? (I think I can guess, based on the Zeitgeist of Masters' generation and even of the time the book was written and published, but that's meta-meta-meta interpretation. Which is pretty much the best most people can do, most of the time.)

Anne Corwin said...

Hi Jim, your comment triple-posted somehow so I deleted two of the duplicates. Figured you'd be okay with that.

I agree with your points on "journalism," but at the same time I've seen plenty of examples of "over-interpretation" at, say, the "press release" level (which is journalism, I guess, but more telegram-ish) and even at the study paper/abstract level itself. A particularly egregious example of this is the "autism = broken mirror neurons" phenomenon... it was just sort of decided, somehow, for a while, that "mirror neurons" objectively existed as some sort of specialized brain structure, when really "mirror neuron" itself was kind of an interpretation. So it's a pervasive problem, with popular journalism just showcasing the worst/most obvious examples.

That said, I don't agree that over-interpretation is generally optimistic (though maybe I just haven't seen enough examples). It seems to depend on the subject matter. Like I've seen plenty of articles spouting gloom-and-doom about vaccines.

I can also relate a lot to this: "When I was a kid, I **loved** newsstand popular science stuff. Now, it makes me feel vaguely ill. Though I'll still occasionally buy some of it (Scientific American Mind -- What Makes You **You**?) if I'm absolutely desperate for something undemanding to read on the bus home, I feel somewhat the same way about it as I'd feel about eating a Reese's peanut butter cup on the way. Junk nutrition, junk reading."

I remember poring over old issues of "Discover" and "Omni" (which I thought had the coolest covers...) as a kid growing up in the 1980s, and part of me still wonders vaguely if articles were "better" back then. But I kind of doubt it at this point.

G-nome said...

"It is the mark of an educated mind to be able to entertain a thought without accepting it."
-Aristotle

http://rationalwiki.org/wiki/Science_woo

"The only thing that interferes with my learning is my education."
-Einstein