There is something very rotten in the scientific Nomenklatura in the land of Uncle Sam

Thursday, August 19, 2010

The Politics of Ideas: On Journals
The Editor
First, a note about journal editors. At top-notch, high-impact journals, the power that editors have in shaping which theoretical work gets reported – or not – is massive. In psychology, where there are any number of competing theories to explain the same phenomena, the particular leanings of the editor can make a considerable difference.
Indeed, if an editor has a vested interest in the outcome of the review process and wants to exercise (undue) influence, she has a number of options. She can (a) send the paper to reviewers she knows will strongly favor acceptance, (b) reject it out of hand, or (c) significantly hold up the publication process, either by sitting on the paper as long as possible or by sending it to unfavorable reviewers.
These are not the actions of ethical editors. But not all editors act ethically. Here are a handful of horror stories that illustrate how editorial politicking can interfere with science.
The Delayer
A couple of years ago, an acquaintance of mine – S. – submitted his thesis work to a well-regarded psychology journal.  Although S. didn’t know this at the time, the action editor (AE) who received the paper had a vested interest in not seeing it published.  Indeed, S.’s work threatened some of the theoretical claims AE was preparing to make in several upcoming papers.  But instead of simply rejecting the paper – which had obvious merits and would clearly be high impact – AE decided to do something altogether more clever: substantially delay the publication process.
First, AE solicited reviews. The two reviews he received on the paper were uniformly positive, but both asked for revision and for more experiments to shore up some of the claims that were made. This could have been dealt with fairly easily in revision, had S. received the reviews in a timely fashion. But instead of sending back the reviews, AE claimed that he could not make a decision based on those he had received and that he was still trying to find a third reviewer for the paper. This went on for a year or more, and S. became increasingly unsure about how to proceed. On the one hand, AE – who is rather famous in our field – appeared to be trustworthy, and sent letters assuring S. that he was still valiantly searching for a third reviewer. On the other hand, time continued to pass, and no reviews were returned.
But then it got worse. At the end of that year, AE ended his term with the journal, but explained to the other editors that he would specifically like to see this paper through the rest of review. When S. tried to follow up with the journal about what had happened to his paper, no one was sure. AE had long since stopped responding to emails. In retrospect, S. probably should have pulled the paper from the journal and resubmitted elsewhere, but he was a relatively young researcher and was loath to start the process over after so much time had already passed.
So what was the outcome? Some two years after the paper was first received, the new head of the journal intervened and – with sincere apologies and some embarrassment – returned the paper and the positive reviews. But by this stage, years had elapsed, and the work – which contained sophisticated and innovative modeling techniques – was no longer considered groundbreaking. Not only that, but AE had rather liberally borrowed from its literature reviews in his own work. In the meantime, S.'s promising young career and his confidence had been destroyed.
The paper – which is, to my mind, still a brilliant piece of research – remains unpublished. S. no longer works in academia.
Whither the reviewer?
The story of S. is tragic. But you may be wondering — what’s it got to do with Hauser?  Ah — but see, the politics of the review process don’t always work against the author.  Sometimes, they work against the reviewer, and against the interests of science, more generally.  Here are two such stories I’ve been told.
#1 In the first, a friend – J. – was asked to review a paper that criticized some rather provocative work she had done early in her career, work which had challenged the theoretical status quo. When she reviewed the paper, she found substantial flaws in both the researchers' methods and their statistics. Redoing their stats, she found that none of the claims they made in the paper were substantiated by the numbers they were reporting. In fact, far from disproving her earlier work, the researchers had replicated it. On the basis of J.'s review, the journal formally rejected the paper. But then – under pressure, presumably – the head of the journal reversed the decision and accepted it, under a new title and a different action editor. It was published soon thereafter, faulty stats in place. It was only years later that J. received a formal apology from the new head of the journal, who told her what had happened. Not surprisingly, one of the authors on the paper had called in a favor with the head editor.
#2 In another, similar tale, a researcher, R., was looking over a much-discussed conference paper that was then in press as a journal article. R. quickly realized that the data the authors were reporting did not support their statistics. In fact, the researchers had made a serious error in analyzing their data that – when corrected – led to a null result. R. contacted the lead author to notify him. When the article was published several weeks later, the reported data had mysteriously “changed” to match the statistics.
…I know a virtual laundry list of similar stories. It seems clear to me that, even at top journals, the editorial process is not without missteps, oversights, and the occasional ‘inside job.’ In some ways, given how overworked and underpaid editors are, how could it not be? Of course, I strongly doubt that this is business as usual at these journals. But it is undeniable that these incidents do happen. And it is why I find the Hauser ‘scandal’ relatively unsurprising (or surprising only insofar as he was eventually caught). Because, think of it this way: who in psychology was better placed than Marc Hauser to call in the occasional favor? To ensure editors chose chummy reviewers? To fax over a last-minute ‘revision’ to his data set? The man is undoubtedly charismatic – he is a well-known public intellectual, wildly popular as a professor, and he has one of the most enviable lists of collaborators and co-investigators in the field. Regardless of the quality of his output – or the reality of it – those three factors – charm, popularity, and friends in high places – would have made him an immensely powerful figure in the field. (If you think academia is so different from the Baltimore City Police Dept., think again!)
In other ways, though – regardless of corrupting influences – it is not wholly surprising that rotten statistics go to press unvetted. From what I can make out, it is the rare conscientious reviewer who double-checks a paper's data against its stats. (Which is just bad news for science, frankly.)
Each year, the professor I work with, Michael, teaches his students a valuable lesson in this, showing them firsthand why it pays to keep abreast of the numbers. Michael opens the day's lecture by having the students debate the merits of a famous paper on sex differences and their apparent relationship to autism. The debate rages fast and furious, until Michael stops to ask whether anyone has looked closely at the statistics reported in the paper. Almost invariably, no one has. Over the next five minutes, Michael illustrates why the statistics in the paper are impossible given the data that the authors report. The debate ends there. Without any real results, what is there to argue about?
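For the curious, here is a minimal sketch of that kind of check, in Python. It assumes a simple two-group design summarized by means, standard deviations, and group sizes; the function and every number in it are hypothetical illustrations of mine, not figures from the paper Michael uses in class.

```python
# A sketch of the sanity check described above: recompute a two-sample
# (Welch's) t-test from the summary statistics a paper reports, then
# compare the result with the t and p values the paper claims.
# All numbers here are hypothetical, purely for illustration.

import math
from scipy import stats

def welch_t_from_summary(m1, sd1, n1, m2, sd2, n2):
    """Welch's t and degrees of freedom from reported means, SDs, and ns."""
    v1, v2 = sd1**2 / n1, sd2**2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Hypothetical summary statistics, as they might appear in a paper's table:
t, df = welch_t_from_summary(m1=10.4, sd1=2.1, n1=24, m2=9.1, sd2=2.3, n2=26)
p = 2 * stats.t.sf(abs(t), df)  # two-tailed p-value

print(f"recomputed: t({df:.1f}) = {t:.2f}, p = {p:.3f}")
# If the paper claims something like t(48) = 4.90, p < .001, the mismatch
# with the recomputed value is exactly the red flag to look for.
```

The same logic applies to any paper: if the reported test statistics cannot be reproduced from the reported descriptive statistics, something has gone wrong between the data and the write-up.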
I've thought about this a lot in the years since I first sat through that class, and particularly after I heard some of the stories recounted above. And it has been surprising to me how many times I have come upon papers – famous and well-cited ones, at that – that have made a royal mess of what they're reporting. Either the statistics don't match the data, or the claims don't match the statistics. This is more than a little unnerving. When foundational papers, which make important and widely believed claims, aren't showing what they purport to, then what can we trust? Who can we cite? What should we believe?
And furthermore: how does this happen? Isn't the review process in place to ensure methodological rigor, statistical accuracy, and supportable claims?
Absolutely – in theory. With a good, not-too-overworked editor and ethical, earnest reviewers, that will be the outcome. But from what I can gather, that is unfortunately not how the review process in psychology always works. Far too often, the politics of ideas disrupts the honorable pursuit of science.
[...There's more where that came from.  This is the first in an occasional series on the politics of ideas in psychology.]
[Having had my say, I would recommend you also read a very different perspective written by Art Markman: sometime limericist, consummate gentleman and head editor of Cognitive Science.]
[And finally, as an obvious addendum, I want to make clear that there are many, many ethical and hard-working editors and reviewers in this field, some of whom I have had the great pleasure of working with. That there are politics at play is evident. That there are many scientists far more interested in the pursuit of ideas than in the pursuit of power is doubly so.]
...
+++++
NOTE FROM THIS BLOGGER:
Apparently the phenomenon is worldwide, and that is not good for science, which is losing credibility with the public.
+++++