The GRIM test — further points, follow-ups, and future directions

  • Nick Brown and I published a pre-print of our paper on the GRIM test, a very simple technique for determining whether a reported mean is numerically possible given a small sample size, and what it means when scientific papers report ‘impossible’ data, which they often do.
  • (Obviously we are now in the process of publishing a non-pre-print version, so that’s happening too.)
  • The pre-print has been downloaded 400-odd times, the article has received around 2750 views, my Medium article explaining the test has had around 10K views, and my first Facebook post has around 17.5K views. This is all without university support, or a press release, or any official promo. We just lobbed the paper into the public consciousness like a damp firework with a lit fuse, and hoped it went off.
  • An excellent online calculator appeared courtesy of Jordan Anaya. It is wonderfully straightforward, and a lot neater than my code or our spreadsheet. I’d recommend using that if you need to GRIM test some data.
  • The first mainstream article has been written about the technique, which is good news. I hope there’s more.
  • A firehose of comments, links, and tweets (Twits? Twerts? Insert appropriate noun here; I still think Twitter is Satan Incarnate) has been left / swapped / conferred. Heartfelt thank-yous to everyone who left them. I’ve seen some of them, but I’ve also been on holiday for Memorial Day weekend, so I’m still catching up.

This is not a fraud test; it is an inconsistency test…

…and yes, the framing is important.

  • Inconsistency in numbers doesn’t mean anything more or less than that; it’s not a euphemism. It means “not compatible or in keeping with its description”, full stop.
  • The reasons for “honest mistakes” are many.
  • Most researchers who bothered to engage with the process of identifying data errors were reasonable and straightforward about any inconsistencies.

“If this test is used for fraud detection, won’t this only catch people who are crap at fraud? Won’t it fail to detect people who make up *data*, and only find people who make up *means*?”

Making up data is a pretty silly crime. It’s like plagiarism in that for you to derive any benefit from it, the deed must be done in full public view. And the more successful your crime is, the more attention your work will garner… that could well make you more likely to get caught.

“What are the dangers of ‘data vigilantism’?”

Glib answer: I don’t know. What are the dangers of inconsistent research?

“It’s a bit simple, isn’t it?”

Deathly simple.
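How simple? The whole check fits in a few lines. Here is a minimal sketch in Python (my own illustration, not Jordan Anaya’s calculator or our spreadsheet; the function name and structure are assumptions for this example):

```python
import math

def grim_consistent(mean, n, decimals=2):
    """Check whether a mean reported to `decimals` places is consistent
    with n integer-valued observations (e.g. Likert responses).

    A mean of n integers must equal some integer total divided by n,
    so we test whether any such fraction rounds to the reported mean.
    """
    target = round(mean, decimals)
    # For the small samples where GRIM has any power, the only integer
    # totals whose mean could round to `target` sit immediately
    # around mean * n.
    for total in (math.floor(mean * n), math.ceil(mean * n)):
        if round(total / n, decimals) == target:
            return True
    return False

# A mean of 3.57 is possible with n = 7 (25 / 7 = 3.5714...),
# but no total of 10 integers divides to anything that rounds to 3.57.
```

That’s it: multiply the mean by the sample size, and see whether any nearby integer total divides back to the reported value.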

“I did the GRIM test on a paper, and something is wrong — what should I do?”

Hard to say out of context. Here’s what I’d do. Mileage may vary.

“GRIM test? Really? Who came up with that? Is one of you into heavy metal or something?”

Answers respectively: Yes. Yes. Me. Yes, me.

“What are you doing next?”

Don’t know yet. Still finishing this, technically. The time period including “next” starts post-publication.

--

James Heathers

I write about science. We can probably be friends.