Stop Treating Your Experimental Participants Like Cattle

James Heathers
7 min read · Apr 6, 2018

Seriously. Stop it.

Social scientists are presently concerned about the use (and misuse) of statistics. It ain’t new. They’re just doing it again for the first time.

This concern takes a few forms — using the wrong methods, using the wrong tests, using the wrong framework for understanding hypotheses, using the method that ‘works’, using the wrong significance criterion (or using one at all) and so on.

But I don’t need to explain it like that, and I’m not a statistician. At least, not in the way that matters.

Let’s do it my way.

Some researchers use numbers the same way we recycle e-waste. Basically, they crush garbage into crumbs, feed it into a blast furnace of statistics, and cast a copper ingot. Only, because it’s shiny, they often say it’s gold. After its initial valuation, it is locked in Fort Knox, and the safe door is welded shut.

Some researchers would use multi-level modelling to open a packet of Tunnock’s Tea Cakes.

NOT THAT COMPLICATED

Some researchers think ‘Confirmation Bias’ is an indie band.

We’re Confirmation Bias, and this next song is off our second EP, Staring Sadly

Oh, and some researchers can’t add.

If you don’t know who I mean, this might be your first time here.

Conceptual mistakes, procedural mistakes, over-confidence, and flat-out old-school incompetence. This has all been discussed at length.

At great length, actually.

At long, tedious, brutal, unending, expand-the-80-intervening-tweets length.

If you’ve ever watched an argument about statistics either in person or online, you’ll realise that it’s often people shouting at cross purposes across different time zones until one of them has to go to bed, with enough unpacking and assumption-questioning and re-defining and point-of-ordering to make a barrister give up the bar to become a scuba instructor.

More recently, a similar series of discussions has sprung up around theory. Do your ideas represent what you think they should? How strong are your hypotheses? Are your experimental operationalisations sufficient? Have you considered the induction problem? If your results were ‘significant but inverted’ (i.e. you found a significant OPPOSITE effect to what you imagined) would that kill your theory but let your paper live?

Or, to put it another way, does your experiment represent a useful analogue of any broader context than it was conducted in? Or is it just an incredibly specific construction of an incredibly lovely house on incredibly weak stilts?

You are one tidal wave away from being on the evening news there, superchief

And while I’m here, remember the fun doesn’t end when the study’s over, and all our results have been spit-polished nice-like. Then we meet our old friend, overextrapolation!

“Well, this intervention worked in seventeen women from Virginia who were all called Qcindee (the Q is silent) or Chadleen so I see no reason not to roll it out in South Sudanese farm workers.”

These are complicated issues, and to describe them fast is to not describe them well. If you’re involved with any of the above, resist the urge to correct me too stridently. Arguments about both statistics and theory are difficult to understand because of the level of detail and prior knowledge required, and difficult to hear, because they often require you to question the assumptions of your own work.

But.

What I’m about to say isn’t complicated, or difficult, or hard to fix. It’s a combination of bog-standard common sense and a sniff of human decency.

However, it IS a problem in social, psychological, medical, behavioural, etc. research, maybe even one of equivalent magnitude, and it goes like this:

Stop treating your study participants like shit.

I cannot tell you how many times I’ve seen someone shuffle an experimental subject into The Glorious Study Of Something-or-other, grunt at them, ladle them into a squeaky corner seat in a lab with all the home comfort of the Balkans conflict, push the chair square up to a computer terminal, mutter some instructions about how to slop the buttons around, and then shuffle them out again.

Or some giddy Hooray Henrietta hand out a thick stack of forms to an incredulous room of people.

We have to fill out all of this?
Yes, it’s mandatory, hop to it.
What, all six of them? There must be 40 pages here.
Yes, it’s mandatory, hop to it.
Some of the questions are the same.
DID I STUTTER, JEREMY.

Basically, a lot of data collection in the social sciences treats fully-grown adults like cattle, as if the data they will produce is an unfortunate necessity endured to get to the main job of … well, abusing that data. Unnamed bodies are herded in, given a series of incomprehensible tasks in which they could not possibly be less engaged, then herded out.

Well, dismissive, boring, context-free experiments earn you dismissive, boring, context-free answers. Without any investment, your participants will lie, dissemble, or fill out what they think is realistic. Or more likely than all of the above, they’ll just put what amounts to ‘whatever’.

You will not be able to catch them through patterns in their answers (and almost no-one will think to look anyway).
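(If you did want to look, the crudest screen, flagging ‘straight-lining’, where someone gives the same answer over and over, is only a few lines of code. Here’s a minimal sketch with made-up data and an arbitrary threshold; the variable names and numbers are mine, not anyone’s real dataset.)

```python
import numpy as np

def longest_identical_run(row):
    """Length of the longest run of identical consecutive answers."""
    longest = current = 1
    for prev, nxt in zip(row, row[1:]):
        current = current + 1 if nxt == prev else 1
        longest = max(longest, current)
    return longest

# Made-up data: 200 participants x 40 Likert items, answered 1-5 at random.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 40))
responses[0, :] = 3  # plant one bored soul who answered '3' to everything

runs = np.array([longest_identical_run(r) for r in responses])

# Arbitrary threshold: flag anyone whose longest run spans half the form.
flagged = np.where(runs >= responses.shape[1] // 2)[0]
print(f"{len(flagged)} participant(s) flagged for straight-lining")
```

Note what this catches: only the very laziest offenders. The participant who answers 3, 2, 3, 2, 3, 2 out of spite sails straight through, which is rather the point.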

A solid theory and appropriate statistics will not stop your participants from being bored, curious, ruinous, or outright lying their arses off to get to the pub before happy hour stops. It won’t stop them from trying out the Curious George routine: trying to guess what you’re doing, break your theory to bits, make you happy… or make you miserable.

I know this because that’s what I did when I was 17. God, I was bad at doing experiments. I punished redundancies (where’s your Cronbach’s alpha now?), put down confusing responses, and occasionally stuck my tongue in my cheek and did something goofy. I tried to crack response time tasks. I tried to reverse engineer experimental aims. Oh, and if anything was really disrespectful or outlandish, I occasionally put something awful. I think I got an IAT once to tell me I was “strongly implicitly biased against white people”, and considering I’m the whitest person in the world who isn’t a Finnish albino, that took some doing.

Basically, I fiddled, something I’ve always done when bored.

And forget about ‘manipulation checks’. If you bore people rigid and then ask them whether or not your experiment is any good, they only might tell you. They probably won’t tell you if they feel straight-up disrespected by your awful procedures. They will say the words necessary to get out of the room.

I’m flummoxed that people who do psychological research can have only the most cursory understanding of how normal people behave. Newsflash: people resent bullshit.

So, TALK to your participants. Don’t give away the aims of your study, or fill them full of expectations that will bias their answers. But! For the love of everything that chants the black hymns of a failed republic, give them some kind of buy-in. Try to explain the research area. Use only the words that are necessary when you do it. Don’t stack scales on top of scales on top of scales if you don’t have to. Look people in the eye. Find out what their names are, if you can.

And if you’re committed to long psychometric instruments — I know some short-form questionnaires have all the reliability of a crocheted wetsuit — give people breaks. And pay ’em. And all of the above goes double.

Oh, and if you have assistants of any form running your studies, occasionally help them do all of the above. Try to remember that THEY are doing YOU the favour.

Sorry if that’s tiring. You know what else is tiring? Doing the experiment.

I have no sympathy whatsoever for the argument “We don’t have the time to invest in these people.” You’re going to invest plenty of time in the paper you write about whatever numbers they’ll put in order for you. Don’t you want that to go as well as possible?

Obviously, we accept the fact that there’s some tedium involved in questionnaire-based research. You can’t make everything exciting. (Remember that nauseating fad from a few years ago where we tried to ‘gamify’ everything? Turns out a shit game is as boring as a shit non-game experience. Who ever would have seen that coming.)

Basically, sometimes someone’s going to have to get a bit bored, particularly if we’re constructing scales from scratch, or doing almost any psychometrics at all. So consider that even more reason to think about your design, your manner, and conferring some (any!) kind of respect on the people sitting through your BS.

I don’t know if this qualifies as a skill in its own right. Maybe it does. Or, as I said before, maybe just human decency / common sense. But whatever it is, it could stand to be improved.
