Hard News: Shonky scepticism
-
Reece:
After all, learning has to start somewhere; we're not all born genii.
Nor am I; I think sometimes the learning I did came at the expense of valuable skills in social etiquette and diplomacy ;-)
Bart:
While I agree with much of what you said regarding the influence of prior knowledge on what questions get asked and the interpretation of results –
And may I say before I go further that I place great value on the very well informed comments you have made in the past.
On two points I differ.
First, in most cases the data is good. It is the exception that data is distorted, not the rule.
We may end up having to agree to disagree on how “good” data is at times. Issues such as intention-to-treat analysis in MedSci highlight that, at best, good data is a moving target. Regardless of whether data is deemed good or not, that still doesn't mitigate science's habit of emphasising some data at the expense of others. Contemporary science is, I believe, more vulnerable to this problem now than ever before.
http://en.wikipedia.org/wiki/Intention_to_treat_analysis
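To make the intention-to-treat point concrete, here is a minimal hypothetical sketch (all numbers invented): the apparent treatment effect depends entirely on which analysis is chosen.

```python
# Hypothetical trial: 100 patients per arm; 20 treatment patients drop out
# (say, from side effects) and do not recover. All numbers invented.

assigned_treatment = 100
assigned_control = 100
dropouts = 20

recovered_on_treatment = 60   # recoveries among the 80 who completed treatment
recovered_on_control = 50

# Per-protocol: analyse only those who completed the protocol.
per_protocol = recovered_on_treatment / (assigned_treatment - dropouts)  # 60/80 = 75%

# Intention-to-treat: analyse everyone as randomised; dropouts count as not recovered.
itt = recovered_on_treatment / assigned_treatment                        # 60/100 = 60%

control = recovered_on_control / assigned_control                        # 50%

print(f"per-protocol:       {per_protocol:.0%} vs control {control:.0%}")
print(f"intention-to-treat: {itt:.0%} vs control {control:.0%}")
```

The same trial shows a 25-point advantage or a 10-point one depending purely on how the dropouts are counted; neither number is “wrong”, which is what I mean by a moving target.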
The second area I disagree with is the quote above. I really really dislike it when folks in and out of science intimate that the general public cannot understand the science. It just isn't true.
It really wasn't my intention to suggest/intimate that lay reading of research should be regarded as too hard. I do think it is fair to say that at times it isn't easy. As below:
Nope, there is no easy way to grasp this debate or evaluate the evidence within it. But the usual rules apply: multiple sources, cross-reference, check authority and agenda.
My intention was to encourage lay readers to read widely and interpret with caution. This is presented as a prelude to a plea on behalf of many who work in science like yourself.
Please be patient and persist with science and scientists; it is tough turning squiggles and graphs into yes/no, good/bad, right/wrong, and sometimes common language isn't good enough.
Your analogy is false: reading journal papers without truly understanding methods and stats is NOT like kicking tyres on a car. It is what it is and isn't really "like" anything else.
Like you, I dislike what I perceive to be poor analogies*. I might have made my point better if I had said "runs the risk of being like kicking the tyres on a car". Nonetheless I am confused when you say science “is what it is”. This seems to risk suggesting that science is an enigma. It doesn't reveal the falsehood (perceived or otherwise) in the analogy; rather, it appeals to personal authority as a point of argument.
Reading the background and conclusions presented in refereed journals allows the lay reader (most times) to get a reasonable idea of what the author and the reviewers believe is the current state of knowledge.
I would always encourage lay reading of research. The gap between lay knowledge and science is too large, and quite frequently it is filled by the Durkins of this world or, dare I say it, militant anti-science creationists. The key issue here is the belief that reading the backgrounds and conclusions of research leads to reasonable conclusions most of the time. I think we can agree that these conclusions are most likely to be based on incomplete understanding. What is perhaps more interesting is that by addressing only the background and conclusion sections, “reason/reasoning” is led by the interpretation of authors, reviewers and context. In this situation the data, however good, cannot speak for itself. Call me an old cynic if you must, but I am not convinced of the robustness of this approach. As a reviewer myself I am well aware of how imperfect my own reading and judgement can be.
To suggest that folks shouldn't try because they don't have the years of training needed to understand the methods in detail is just the kind of arrogance that I display (and am ashamed of) all too often.
As pointed out earlier, not really my intention. However, I don't think it is arrogant to suggest that sick people go and see GPs, that cars are taken to the garage to be fixed, or that people listen carefully to what scientists have to say. These activities tend to bring about reliable outcomes, which is surely why such professions exist.
Apologies for the long post; I'm trying not to flame, but perhaps I've had a little too much coffee today.
* I particularly dislike the way the butterfly analogy is over-used and over-generalised - yes, small changes can precipitate much bigger ones, but this doesn't dictate that there should be a search for a single cause.
-
i have to agree with the 81st on this one. having lay readers examine journal articles is great, but even assuming the inferences can be made reasonable sense of, there still remains the issue of primary data appraisal.
without some understanding of methodology (and it doesn't have to be that advanced), invalid or contestable data goes unnoticed. and once you've got data to make a point with, albeit a fallacious one, well the sky's the limit isn't it? worse still, dubious data can then be employed to make wonderfully 'clear' findings that any lay reader can 'understand' and act on. this technique is pretty common, especially with those who are pushing spurious arguments in the first place. Maxim comes to mind (i hope that's not being too mean).
one of the best authors i know of on the issues of science reporting is Daniel Yankelovich - well worth a google.
-
I do not want this to become a flame war, so I guess we are just going to disagree on some things. A couple of comments, though.
81st asked
Nonetheless I am confused when you say science “is what it is”.
That isn't what I said; I said reading science is what it is. I don't believe reading science is "like" anything else. That's not saying science is an enigma. I think the point of your analogy is that the expert is always going to understand the science better than the lay reader. But my point is that the lay reader can get a good understanding from the science too. You might think that isn't "good enough", but there we disagree, and that's fine.
Riddley rightly commented that without assessing the methodology you can't judge the quality of the data. But the whole point of peer-reviewed journals is that the peer reviewers, who are experts, should already have assessed the quality of the data. So a lay reader should not need to repeat that assessment and should be able to trust the reviewers and editors.
Sometimes that trust is misplaced - when that occurs you sometimes even hear about it in the mainstream media. But the vast majority of published data is good quality.
cheers
Bart
-
Another pitfall with science articles is the extent to which the data supports the discussion and conclusion. It's not uncommon for papers to gently overstep the strength of their findings. Like where a survey is used to measure behaviour, and the self-reported survey data ends up being treated as if it actually measures behaviour. Somewhat inevitably, as it distils down it gets a little more black and white: the results section is often a touch grey, the discussion a little less so, the conclusions less again, and what makes it into the abstract least of all. That said, more power to people reading science.
-
Bart, i didn't realise your comments pertained specifically to journals. in that case i would generally agree, although papers can be published for all sorts of reasons, and sometimes it seems the (expert) reader is intended to make what they will of the data's validity.
without wanting to labour a point too much, there is another problem with journals, however: they never publish null results. so when a rigorous study finds no significant relationship between variables (which in itself can be useful, at the very least in establishing the context of other findings), the information is not presented.
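a toy simulation makes the point concrete (a purely illustrative sketch, assuming a true effect of zero, 30 subjects per arm, and that only results significant at p < 0.05 get 'published'):

```python
# Toy simulation of the file-drawer problem: the true effect is zero, but if
# only "significant" results get published, the record looks like evidence.
import random
import statistics

random.seed(1)

def run_study(n=30, true_effect=0.0):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(true_effect, 1) for _ in range(n)]
    diff = statistics.mean(b) - statistics.mean(a)
    se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
    return diff, abs(diff / se) > 1.96   # crude z-test, roughly p < 0.05

results = [run_study() for _ in range(2000)]
published = [diff for diff, significant in results if significant]

print(f"'published': {len(published)} of {len(results)} studies")
print(f"mean |effect|, all studies:  {statistics.mean(abs(d) for d, _ in results):.3f}")
print(f"mean |effect|, 'published':  {statistics.mean(abs(d) for d in published):.3f}")
```

even with nothing really there, the handful of studies that clear the significance bar show healthy-looking effects, and those are the only ones a reader ever sees.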
-
That said, more power to people reading science
yeah definitely. as reece said, you gots to start somewhere, and i'd hazard to say a shaky primary reading of a journal article is likely to still be less misinformed than relying on secondary msm readings, which step even further over the boundaries of valid extrapolation and inference than even anthro journals do ;)
-
yeah definitely. as reece said, you gots to start somewhere, and i'd hazard to say a shaky primary reading of a journal article is likely to still be less misinformed than relying on secondary msm readings, which step even further over the boundaries of valid extrapolation and inference than even anthro journals do ;)
Yeah, I think that's the point actually. With the GE thing it frustrated me a great deal that most media tended to treat the science as basically unknowable and inscrutable, and would just line up opposing talking heads without making any attempt to assess relative credibility.
They wouldn't report economic stories that way. Why science, then?
-
Good question Russell.
Although I suspect that economists believe they are practicing a science - they even get a nobel for it!
cheers
Bart
-
Oh, one more thing to watch for... I think there is increasingly a trend for some of the top journals to publish more controversial, provocative, salacious and/or quirky papers than some of the lower ranked journals. That's not to say that they're bad science per se, but it's important to be a little cautious, even if something is published in Nature or the Lancet or BMJ or whatever. Sometimes (and I stress only sometimes) the editorial will highlight this (and potentially provide a good critique), but again, not always.
-
Although I suspect that economists believe they are practicing a science - they even get a nobel for it!
and the nobel prize for voodooism masquerading as rationality goes to... political science... er, economics!
-
i didn't think they handed out nobels for political studies?
-
thereby aggravating the ire of political 'scientists' world-wide.
it's a blinking conspiracy by the economists...
-
i just can't seem to help being inflammatory
-
Bart:
By all means, let’s agree to disagree on our faith in the review process…
Russell:
The reporting thing is bang on. It is strange to see pseudo-balanced reporting of what in some cases is quite an unbalanced process. The uncertainty it creates is perplexing to say the least.
But this is the issue that worries me most.
I think there is increasingly a trend for some of the top journals to publish more controversial, provocative, salacious and/or quirky papers than some of the lower ranked journals.
I agree, and I think it is because academic publishing is now in many areas more likely to be subject to market forces and answerable to shareholder interests. I know for certain that journal editors are frequently “advised” by marketing execs. Marketing departments appear to have really got stuck into the issue of impact factor and its power to keep journals on subscription lists. This stretches as far as putting pressure on editorial boards to increase the frequency of publication, which inflates a journal's impact factor through increased citations. Increasing frequency means more papers to review and more pressure on reviewers, sometimes at the expense of overall quality. Clearly, being controversial would be another tactic employed to increase citations.
I think there is a difference between increased readership and increased market share.
-
I think the other strategy used to influence impact factors is pre-publication. Some journals are now putting accepted manuscripts (pre-proof stage) on the web. This means that people can effectively read and cite them before the impact factor's two-year window starts.
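Neither tactic is mysterious once the arithmetic is written down. A minimal sketch of the standard two-year impact factor (all figures invented for illustration):

```python
# Two-year impact factor: citations received in year Y to items published
# in years Y-1 and Y-2, divided by the citable items published in those
# two years. All figures below are invented.

def impact_factor(citations_in_window, citable_items):
    return citations_in_window / citable_items

# A journal that published 200 papers over 2005-2006, cited 500 times in 2007:
print(impact_factor(500, 200))   # 2.5

# Posting accepted manuscripts online early lets citations start accumulating
# before the official publication year, so more of them land inside the
# two-year window: same papers, higher numerator.
print(impact_factor(650, 200))   # 3.25
```

Earlier citations push the numerator up without touching the denominator, which is exactly what makes the metric worth gaming.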
-
James, a partial answer is to have Open Access Repositories so there are reputable sources on the Internet for academic research and publication.
OARiNZ is the NZ version, based on open source systems and being customised here.
Disclaimer/plug alert - our company is involved in the customisation work.
-
As a bookend, the following articles recently appeared in New Scientist: