Suggested listening: [Morcheeba – Who Can You Trust?]
How do you sort the "good" scientists from the "bad" without actually going to check out their labs? This was part of a question asked by Daily Mail science editor Michael Hanlon at the Goldacre vs Lord Drayson debate at the Royal Institution last week.
This is an important question in light of the overblown MRSA scares of a few years ago. Ben Goldacre has written a multitude of articles and blog posts on the subject, exposing Dr Christopher Malyszewicz, the so-called "expert" used by newspapers, as a fraudster lacking proper qualifications who "got false positive results from his garden shed laboratory".
Actually checking out people's labs is something that Dr James Logan says is impractical and can be misleading:
"A trained scientist could even find it hard to tell what quality of research is going on in a lab just by having a look around – after all, in my area we get some high quality research coming out of labs in Africa and most of their labs are very run down and poorly resourced, yet the quality of the science that they are able to do there is not necessarily compromised."
So what can journalists do to ensure that their sources are bona fide scientists? Of course, the quality of the scientific method of an individual research paper and its peer-reviewed context are the most important things to look at, rather than its author (more on this if you read on).
"Obviously, though, reputation and previous form can influence opinion, but in an ideal world a reporter should only be assessing the quality of the work," says Adam Rutherford, an editor at Nature. "Does it affect the quality of Goodfellas that The Age of Innocence was pony?"
However, chatting to Vaughan Bell, blogger at Mindhacks.com, in the aftermath of the Goldacre vs Drayson debate (which was a re-enactment of their previous widely publicised stances – read more here) raised many questions in my mind about the scientists themselves.
Especially in the light of research Goldacre highlighted in his talk about how people, and indeed scientists, are influenced by the popularity of a science story in the MSM. The scientists, it turned out, tended to cite studies more frequently if they were covered in the media. See here for details of the study, in which academics were influenced by coverage in the NY Times. (Phillips DP et al. N Engl J Med. 1991;325:1180-3.)
Also worrying is this recent article from the Guardian about scientists selling their signatures to big pharma research.
Another reason that the number of citations is a poor indicator of a good scientist or a good paper is that studies can be cited as examples of controversial or bad research that the citing paper wishes to contradict.
Bell says using Google Scholar to see how many times a study has been cited can be a fairly good indicator of relevance – but whether that is for good or bad reasons is another matter, and we must bear in mind that its popularity is also tempered by the influence of the media.
All in the rep
So how else can we assess if a scientist is “good” or “bad”? A lot rides on reputation. Dr Logan says that “Reputation is probably a good indicator, and would be assessed by talking to peers.” This kind of snooping around can prevent another Dr Christopher Malyszewicz-style disaster.
Of course, reputation becomes a lot more relevant when you are not looking at a research paper, but something else as a basis of a report. This could be notes from a conference plus an abstract of the research, the result of a meeting, press conference or other less- or un-scrutinised channels.
"Then assessing credibility becomes much more relevant," says Rutherford. "At this point I think looking at publication record can be a useful way of assessing the quality, impact and import of a researcher."
Of course, we have to be prepared that we might be surprised by a scientist deviating from their track record – "As happens with reputation in every other walk of life," says Bell. "How do you know Tom Cruise isn't going to make a good Hamlet?"
Even if a scientist’s reputation is good, it doesn’t necessarily follow that all of their claims are founded in good science. Mark Henderson, science editor at The Times, nudges me in the direction of his blog on Tracy Alloway’s controversial statements about Twitter dulling the memory – which she then admitted wasn’t substantiated with any research whatsoever. Just a hypothesis, then.
Would now be an appropriate time to mention Ida?
Rutherford comments: “Some of the members of the team behind the world’s greatest ever scientific discovery in the world ever that changed the universe for ever aka Ida the lemur, made claims that were not supported by the peer reviewed evidence. Any further publications from those team members should be treated with the same levels of scrutiny as any other academic peer reviewed paper.”
Tricky stuff; at least with a peer-reviewed paper you know that claims about it are based on research that has been given the stamp of approval by reviewers. But can the hallowed tradition of peer review also be susceptible to bias?
In theory, no. But in reality, almost certainly. Occasionally "bits of 'science junk' slip through [peer review] – even in journals like Nature, Science and PNAS," says Dr Logan. See this analysis by Nature of retractions of research papers.
Dr Logan also says that, in his experience, this is not down to a lack of rigour in the peer review process. That leaves the shifty-looking candidates of unscrupulous editing, or nepotism amongst reviewers. Of course, the opposite can happen too: "Competing scientists can slam a manuscript simply because they don't like the authors or because they want to be the first to publish that work," says Dr Logan.
One scientist I chat to over Twitter, biological physicist Dr Ian Hopkinson, agrees that there is politics operating within the scientific community.
@SmallCasserole Ooh thanks for that. So majority rule?
Outside the box
Aubrey de Grey claims that "The first 1,000-year-old is 20 years younger than the first 150-year-old." And guess what? He thinks they're alive now.
His idea for increasing longevity is to combat disease in the body by creating therapies to treat the onset of aging, such as de-fuzzing arteries of cholesterol. He proposes this can be done by isolating the genes in certain strains of bacteria used to break down cholesterol, for example, to formulate as part of an injected panacea.
Not having read any of his research papers, I couldn’t comment on the quality of the science. However, I was fascinated by De Grey as a character, and it seems he has both his opponents and, er, non-opponents in the scientific community. I was curious whether he was too marginal and outlandish to be important or whether he was in fact some kind of genius visionary. Only the rigorous testing and verification of his theories will prove this one way or another.
Rutherford makes the point that with someone like De Grey, even a sidelined position can come to appear more mainstream than it is via the media. This is probably because it is interesting, in the sense of unorthodox, and also because of the hunger of the press to regurgitate pithy soundbites. Rutherford says: "A good journalist would only reiterate this... if the argument was sound or the evidence compelling."
Mavericks are shown over time and testing to belong to one of two camps. One: their "heretical" ideas are supported by evidence and embraced by conventional science, such as Lynn Margulis on endosymbiosis (the theory that parts of cells originally came from bacteria) or James Lovelock's Gaia theory.
Two: their maverick status becomes "self-selecting" – in other words, their work is not proven and they therefore choose to be regarded as anti-establishment. For an example, see this piece by Rutherford on Rupert Sheldrake.
Ahead of such a time when a scientist's theories are proven or dismissed, how should the media handle mavericks?
Henderson tells me it is fine to write up a maverick’s untested views as long as the caveats about lack of proof and peer criticisms about their work are high up in the article. He warns: “Ask yourself, though, if the hypothesis is plausible, and whether it contravenes things that are thought to be well-established. If so, it's worth applying the maxim of extraordinary claims needing extraordinary evidence.”
Contacts with scientists are worth forging as a journalist. Time to get connected.
Henderson suggests that beyond ringing the authors and independent scientists for their views on the research, it can be useful to put papers you are reading through "a kind of informal peer review" by running them past scientists you know in the relevant field to get their comments.
Dr Logan agrees, and would like to see more scientists' views represented in articles. He says: "It is rare that the journalist will change the piece to be more balanced when they hear skepticism from another scientist – after all, that would ruin a perfectly good and interesting story! It's usually tagged on the end." (BTW, Dr Logan big-ups the New Scientist for "giving a fair and balanced review of controversial research".)
From Dr Logan’s experience working with journalists on a skills exchange initiative run by the Wellcome Trust and Documentary Filmmakers Group, he would like to see more media training on “how science works and how scientists think”.
But this is a two-way street. Dr Logan also reckons that scientists need to buck up their ideas and engage more in communication with the media. He says: “A lot of scientists are too scared to communicate in case their reputation is compromised, which is a shame.” Perhaps understanding this, as a journalist, is one small teeny tiny step in the right direction for scientist-journalist relations.
NB Had to interrupt my science literacy series to blog this, part two coming v soon, promise.