I don't remember if I've made this point here before; if I have, I suspect it's buried in some lengthy block of text, so I'm going to go ahead and give it its own post.
Basically, one thing I've noticed in whatever passes for "science journalism" these days (not that ALL of it calls for scare quotes, but enough of it does to merit them here) is that articles are often written and headlined in a way that blurs the distinction between what researchers actually observed and/or recorded (i.e., the data) and what that data means (to either the researchers or the authors of the articles covering the research).
For instance, take the cat-cognition study I referenced in two recent posts. The study itself had some flaws (which I won't rehash here, as I covered them in detail in those posts), but by far the most bizarre thing I saw in response to it was the flood of popular articles announcing "Study Proves Dogs Smarter Than Cats" and sentiments along those lines.
As far as I could tell, there was no data whatsoever to support the notion of dogs being categorically "smarter". All the data really revealed was a difference in performance on a particular task between the (rather small) samples of dogs and cats tested. The implications of this performance difference were discussed in the applicable paper(s), with the experimenters suggesting various interpretations of their own (some of which could stand some rigorous criticism), and then the media had a kind of frivolous field day making their own further interpretations, all while acting as if those interpretations had actually been objectively observed during the experiments.
Which is, you know, kind of a major category error. An interpretation isn't directly observable at all, and someone with a decent grasp of scientific methodology (and you don't need to be a professional scientist to acquire this) will pretty much always maintain awareness of that fact. If you read the paper for a well-designed study, you will probably find it very heavy on data and very light on firmly-stated conclusions. But a lot of people don't understand, or don't care, that being "tentative" in this manner isn't a weakness of science; it's an essential strength, and the source of science's flexibility and responsiveness to incoming information.
Anyway, this isn't some screed in defense of cat cognition (though I do think many cat-cognition studies suffer from terribly poor design); I'm just using that subject as a concrete example I can easily point to. My real concern here is that far too many people, whether they be researchers, journalists, or simply curious laypersons, fail to distinguish between "what was measured/recorded" and "what can reasonably be concluded based on what was measured/recorded".
Too often it seems that conclusions based on stereotypes, unexamined assumptions, or sheer unmitigated ignorance get taken as somehow tantamount to Really Significant Data That Means Something Important.
This is not only an intellectual integrity/rigor problem, in my opinion, but an ethical one as well. For example, I've encountered a truly stunning amount of "interpretation/data blurring" in the realm of autism research, which of course has the potential to impact actual living autistic people in serious ways.
Phrases like "lack of Theory of Mind", "lack of empathy", etc., are pulled out of who-knows-where and defined poorly if at all, yet then astoundingly offered up as objectively real on the basis of observations that could very well mean something else entirely (and that's entirely aside from the problem of which observations get counted as significant or insignificant in the first place).
Of course I do not mean to say that interpretation is always bad and ought to be avoided; rather, I think interpretations are too often put forth too firmly and too prematurely, to the detriment of the subjects they seek to describe. And as I've repeated several times here already, interpretations can get muddled with data to the point where questions that could really benefit from more data never receive it. In other words, when people presume they already know everything there is to know about something, they may be less inclined to bother obtaining further information about it.
(Moreover, when this muddling becomes habitual, I suspect it also becomes really difficult for people to know when an interpretation is valid. But that's a whole other post!)
So in any case I will stop now, hopefully keeping this post at a more generally readable length than I am usually capable of (writing "long" posts is often the only way I can write anything at all). This is something I think about a lot, have experienced direct consequences of, and see as a concern for other sorts of humans and non-humans whose well-being all too often hinges on the interpretive whims of others.