Some People Hate Open Science. This Is What They Think.

James Heathers
12 min read · Sep 6, 2017


The stories with the most meat on their bones are written in someone’s diary, whispered in confessionals, and — these days — typed into the Google window at 2:40am, after more gin than is sensible on a Wednesday.

I was expecting it to say ‘stepchildren’ or ‘job’. Turns out, the world is bleaker than my imagination.

If you want a line straight into someone’s brain, your conduit should be candid, fluid, private thoughts. The best place to find them is behind a strong veil of anonymity.

This is why the tweetstorm we’re about to see is the most consequential thing I’ve ever read about the present replication crisis in science.

(It just had the misfortune to occur during the resurgence of the putative Fourth Reich, which managed to dominate all media, everywhere, and then the continuing omni-drowning of large parts of the US south-east.)

But first, background.

REPLICATION CRISIS

What is the ‘replication crisis’?

The replication crisis (or replicability crisis) refers to a methodological crisis in science in which scientists have found that the results of many scientific studies are difficult or impossible to replicate on subsequent investigation, either by independent researchers or by the original researchers themselves.[1] While the crisis has long-standing roots, the phrase was coined in the early 2010s as part of a growing awareness of the problem.

Since the reproducibility of experiments is an essential part of the scientific method, the inability to replicate the studies of others has potentially grave consequences for many fields of science in which significant theories are grounded on unreproducible experimental work.

Basically, when we wish to do science, we base experiments or observations upon results other scientists have already established. It is therefore pants-wettingly important that we can trust what has already been published.

Fairly recently, a loosely defined movement has arisen from a number of quarters, where people directly challenge that trust by evaluating the quality of previously established work: testing the same results all over again. As implied by ‘replication’.

We replicated a bunch of preclinical drug trials.

Finding: most of the previous results couldn’t be reproduced.

Then, similarly, some important results in cancer biology.

Finding: most of the previous results couldn’t be reproduced.

Then, there was a huge replication study in psychology.

Finding: most of the previous results couldn’t be reproduced.

(Note: it looks even worse under Bayesian re-analysis.)

This is very obviously a problem. It is an environment that allows us to advance a thesis such as “EVERYTHING IS FUCKED”.

And, even while I was drafting this, another toe is planted firmly in the rear end of scientific quality:

When the press coverage of the “replicability crisis” in psychological science first began a few years ago, reporters generally broached the topic in a respectful and delicate fashion, hinting at problems but not trumpeting them.

That has changed noticeably in the past year or two. Science reporters who used to assume that peer review was a hallmark of “scientific literacy” are now openly stating that peer review is unreliable — not just in psychological science but across the scientific disciplines.

In short: science is allowed to be wrong or mistaken, but it isn’t allowed to be generally untrustworthy. The state of affairs is sufficiently bad that at this point we’re not debating whether or not we have large and extremely serious structural problems (we do), or whether we should be exploring solutions (we are), but rather how should we frame those problems — for instance, should this drab state of affairs be classified as ‘self-correcting’… or ‘broken’? Are recent attempts to fix scientific processes part of the scientific process, or separate from it?

(Or is that question itself just sophistry?)

But!

Into this dim nightmare, ascendant and glorious under hastily strung fluorescent lights, riding backwards on a cardboard horse and carrying a tin sword, with one bollock poking out near the saddle pommel, come people who think everything in science is actually just fine, thanks very much.

And that brings us to this:

[Embedded tweetstorm from Alex Holcombe, reproducing the anonymous comments dissected below.]

Now, there is a real danger here, and it is that we dismiss the person who wrote this. Who is this anonymous table-banging martinet? An old fossil or a young idiot. Probably some senior ivory-tower gink. Tell them to cram their opinion. It doesn’t matter what they think.

They may be wrong, but this is a considered opinion by a senior scientist, confessed in private, in language that they feel is appropriate. It isn’t a well-crafted public statement, it’s a screed. It’s honest. Dismissing it because we don’t agree with it is short-sighted.

More importantly, it’s also potentially representative of a ‘silent majority’ of scientists who think everything is just fine, thanks very much, and that all that fuss they read about in the Guardian is just agitprop from the pesky young scientists riding their damn pogo sticks on their damn science-lawn.

So.

This statement needs to have its legs pulled off, the way a toddler treats a captured stick insect. We must rock our miner’s helmets and spelunk around in its guts.

Here we go:

This is a very cynical person.

It’s really hard to use the phrase ‘fad for quality’ seriously. Go on, try it. Or try to rephrase it. “A passing flirtation with the silly idea of making things better.” Just as bad. A fad for quality? It sounds absolutely batty no matter who says it or how.

If you see attempts to make scientific research better, in light of the failures of broad field-wide replications and increased retraction rates and colossal funding disparities, as some kind of short-lived and pointless exercise, then it is likely you distrust the motives of the people who are doing it.

And, having met many of these people, it is very, very difficult to avoid the conclusion that they are sincere. They may not always be right, and some of them are as annoying as nettle pajamas, but I say with great confidence that they are [a] passionate, [b] extremely hard-working in an extra-curricular or voluntary capacity, like scientists don’t already have enough to do, [c] committed to using evidence-based scientific observations themselves to improve science, and [d] deeply critical of each other when it comes to broad prescriptions for scientific policy, data management, statistics, behaviour, and (gulp) tone. This is a loud, detailed, occasionally aggressive, sincere conversation.

For these actions to be a fad, you must regard modern attempts to address the quality of science as a kind of rearguard action to upset the established order, as a sneaky upstart behaviour cloaked in righteousness, as a convenient wedge used by also-rans to promote themselves and slander / destroy others.

To SEE this level of cynicism, you must possess it yourself. You must be a withered little person who sees scientific enterprise as a transcontinental game of Go Fish rather than an attempt to extract meaning from the universe.

They confuse producing scholarship with producing knowledge

“The contribution to knowledge of large replication projects is zero.”

This is an amazing sentence.

To cut a very long story very short, we understand well the basic statistical backbone of asking scientific questions from scientific data. (Note: the less-basic parts are the cause for horrendously detailed and unending fights).

We also understand well that when we recheck existing scientific results initially reported in small experiments using much larger experiments, these replications (a) are appropriately powered in a way the originals were not, and (b) frequently suggest those initial observations were nothing more than noise… or something more sinister.

(Like bad research practice. Or confirmation bias. Or dishonesty.)

The above is the well-recognised process of ensuring that published scientific results are in fact what they say they are — and that is what is being dismissed as having ‘no contribution to knowledge’.

For this to be the case, we have to conflate scientific knowledge with scientific novelty. In this sense, replicating someone else’s work (especially when replications show the original work as overblown or fanciful) is kind of breaking the rules.
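To make the power point above concrete, here is a minimal sketch of my own (not from the post) using Python’s statsmodels: a modest true effect, a typically small original sample, and the per-group sample size a replication would actually need to detect that effect reliably. The effect size of d = 0.4, n = 20 per group, and alpha = 0.05 are illustrative assumptions, not figures from the article.

```python
# A rough sketch, not from the original post: how underpowered a typical
# "small" original study is, and how large a replication needs to be.
# Assumes statsmodels is installed; d = 0.4, n = 20 per group, and
# alpha = 0.05 are illustrative choices only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a small original study: two groups of 20, true effect d = 0.4
small_power = analysis.solve_power(effect_size=0.4, nobs1=20, alpha=0.05)

# Per-group sample size needed to detect the same effect with 80% power
n_needed = analysis.solve_power(effect_size=0.4, power=0.8, alpha=0.05)

print(f"Power with n = 20 per group: {small_power:.2f}")  # roughly 0.23
print(f"n per group for 80% power:  {n_needed:.0f}")      # roughly 99
```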

And speaking of which…

They think requests for further information about science are a hostile or invasive act by definition.

Consider the phrase “reviewers who demand, on no grounds, raw data”.

The key words are on no grounds. The logical conclusion of this, of course, is that if someone writes a scientific paper there should be grounds before any scientist asks another for data. Those grounds are invariably some kind of suspicion of a mistake or wrongdoing. This perspective portrays collected data as a personal asset rather than a collective good. From this, we get the strong sense that asking someone for their data simply because you are entitled to is violating some kind of social contract.

I’ve referred to this attitude before as a kind of Academic Prisoner’s Dilemma — imagine Researcher A and Researcher B write a lot of papers in the same area. People within their personal networks review each other’s papers, review each other’s grants, and have a mutual interdependence.

If they are both silent with regard to strong criticism of each other’s errors, they both have the freedom to publish what they want. Direct criticism would quickly devolve into a mutual loss of trust, interfering with the ability to publish papers or receive grant money.

Note the papers may disagree in their conclusions, sometimes violently, but this is good for business. Sets of dueling ideas, established creatively, can carry on for decades. A lot of senior scientists have, like Oscar Wilde, a series of close personal enemies. But direct criticism, even of work which is egregiously bad, is destructive to our ability to build empires. Disagreement is good for business, but criticism is not.
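If it helps, here is a toy version of that incentive structure as a literal payoff table — entirely my own framing, not the author’s. The numbers are arbitrary units of career benefit; only their ordering matters: criticising a silent colleague pays best for the critic, but mutual silence beats mutual criticism, so silence becomes the stable habit.

```python
# A toy model (my framing, not the author's) of the "Academic Prisoner's
# Dilemma": payoffs are arbitrary units of career benefit, chosen only so
# that the classic dilemma ordering holds (4 > 3 > 1 > 0).
payoffs = {
    # (A's action, B's action): (payoff to A, payoff to B)
    ("stay silent", "stay silent"): (3, 3),  # both publish what they want
    ("criticise",   "stay silent"): (4, 0),  # A scores points, B loses face
    ("stay silent", "criticise"):   (0, 4),
    ("criticise",   "criticise"):   (1, 1),  # mutual loss of trust and grants
}

for (a_move, b_move), (a_pay, b_pay) in payoffs.items():
    print(f"A {a_move:11s} / B {b_move:11s} -> A: {a_pay}, B: {b_pay}")
```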

The attitude that this reveals is one where if you show your data to other people, then it will be co-opted by ‘parasites’ and published elsewhere. Again, we are playing Go Fish — if I give someone access to my data, they might potentially publish it in some form which will deprive me of future publications, or publish it without even citing me!

(Note: there is not a lot of evidence that the above is likely. Often journals will require a description of where your data is from, and you can’t say ‘I produced this analysis of a clinical trial on some numbers I found somewhere on the internet from a dude called Kevin’. Or you will be required to state you have ethical approval specific to the project (if you’re using data you found /‘borrowed’, you might have to lie about that). Also, never underestimate the enormous drag factor at work when trying to understand other people’s datasets (especially if they’re large). They may be poorly annotated or confusing. Anyone picking a dataset up second-hand is often significantly less agile than the people who are familiar with it. Finally, datasets are often shared *with the condition that you cite the source of the dataset if you use it*, and researchers who do not do so stand a solid chance of coming into close personal contact with both tar and feather.)

In short, when I see the phrase on no grounds, I hear ‘I will sacrifice absolutely zero potential personal advantage.’ Again, this is the kind of hypercompetitive careerist mentality that is obsessed with personal advancement, no matter what form it takes. It’s Cersei Lannister science.

The fear of a descending cloud of ‘bullies’ is a fear of public discussion and a bemoaning of the loss of narrative control

People who care about science can be good and goddamn vocal about it. Our anonymous martinet here is worried that these vocal people run an echo-chamber which silences dissent, one where people who don’t agree with various open science initiatives are mercilessly curb-stomped in the court of public opinion.

Now, so far, I have tried to be understanding. I have refrained from billingsgate. I have even tried to be something approaching sympathetic. It’s possible that someone can scoop you with your own open data. It’s possible that the various bits and pieces of the open science movement could be used cynically for personal advantage. It’s possible to make a vexatious data request.

But this argument, this one in particular, this is shit and bilge. It is a kind of intellectual cowardice running nitrous.

My reasoning for this distaste is simple: it decries openness, and it mistakes the fact that many people happen to agree for compulsory agreement, for the enforcement of orthodoxy. What is portrayed as a sulphurous cloud of self-righteous head-kickers, the Replication Bullies That Descend Like Harpies, is in reality a remarkably heterogeneous set of opinions on how to improve science. There are no blanket diktats, there is no orthodoxy to defend. Hell’s teeth, many of these people actively dislike and oppose each other. They are part of a conversation, not some intellectual monolith.

Also, if a large crowd of clever, vocal, intellectually aggressive people ‘descend’ like Virgil’s harpies on an idea you’ve made public, you should at least consider that it might be because whatever case you’re making is weak, not just because they’re all bin-dwellers and skull-splitters who resent your fat effect sizes and good looks.

Thus: if you have a better idea on how to improve science, float it, justify it, and promote it. People will care. But you can’t say “I can’t float problems with the replication movement publicly because a lot of people will disagree with me” and expect the most anemic shred of sympathy.

In any case, contrast this to the alternative — an academic old boys’ network, full of closed doors, whispers, backchannels. Ingroups and horse-trading. The ‘right kind of people’ and their winks and smiles. Even the most virulent public discussion is better than the rigid 19th-century glad-handing misery we’re sailing away from.

My conclusion is simply that our voice in the darkness here is afraid of scrutiny.

This person is *pissed*

There is a point where ‘critical’ graduates to ‘hostile’. To me, the line is crossed when we graduate from criticising ideas to criticising the mindset, intellectual tradition, or level of thinking that produced them. It makes the leap into predicting the (bad) motivations of the people you disagree with. This is related to, without quite being, ad hominem.

So:

  • “there is substantial evidence against the thesis presented which the author seems to be in ignorance of” has knuckles, but it might well be fair.
  • “there is substantial evidence against the thesis which the author, like most members of intellectual tradition XYZ, blithely ignores because it is inconvenient to her central thesis — such dishonesty is typical for people like her” is now hostile. It assumes motives and sets up necessary opposition.
  • “author is a cock and hence incapable of rational thought” is ad to the hom.

There may be finer-grained distinctions to be made, but the central point here is that our anonymous author very much sees this as an ‘us-and-them’ situation. The other side — and there are sides, not a community of people with a common goal — is made up of people trying to ruin science.

CONCLUSION

We could go on — “symbols of ethical purity” could even tempt me to dust off my Freudian hat — but I’m already pushing out 3k, horrifying copy-editors, and mixing metaphors.

This is the point: we’re winning.

‘We’ here means the various people behind the loose agglomeration of ideas like open data, open methods, replication, registered reports, and so on. The accuracy fetishists, which I think will be my new favourite epithet (sorry, ‘methodological terrorist’).

Broad changes are being implemented right now to make science more trustworthy, replicable, transparent, accurate, and procedurally fair. These will inevitably thrive in the long term — how could they not? A scientific tradition that works better will eventually dominate. It will be better at doing facts.

The only problem with the above?

Actually, it isn’t a problem. It’s the frame I chose — we’re winning.

That’s a crap frame. There’s no we, no them and us. There is just us. Science working better is in literally everyone’s best interests … except bad scientists and those who find facts inconvenient.

And, on the small possibility that our anonymous reviewer is reading this, here’s a message:

I get that you’re angry. You probably feel destabilized, and frustrated with the people you see doing all this. They’re making bold declarations about intellectual traditions you want to defend. They feel that a fortress you defend should be sacked like Carthage. You may even have been attacked personally in one of the many internecine conflicts about God-knows-what over the last decade or so.

It probably all feels personal.

It probably all feels like a huge step backwards to start questioning basic parameters of how investigation should occur.

But.

Boys’ club science is going to die. Closed science is going to die. Everything you’ve ever represented about ‘how to get ahead’ or ‘how to be a successful scientist’ was a way of navigating a difficult system which quickly became an end unto itself. That will die too. Maybe not in your lifetime, maybe not even soon, but eventually. These things will die because we will kill them.

And you are on the wrong side of history. You are on the wrong side of the evidence about how to get evidence.

I don’t know how to convince you of the sincerity of all this, that it isn’t a movement to try to make other people look bad. Nothing comes across weaker than shouting, ‘no, seriously, I *really* mean it!’

So, don’t worry about that. Look at the evidence. Science has Stage II methodological cancer. As in, it’s serious, but it’s treatable with modern methods.

It isn’t too late to change your mind. There’s always time to read your Ioannidis, hoist the black flag, and start laying bricks in a better wall.
