This article is Part 3 of the Science Critique 101 Series. 

There are four common critiques that people often resort to when they reject a piece of research. These relate to: 1) sample size, 2) research bias, 3) the peer-review process and 4) funding.

In Part 1 of the series, which appeared in the May-June 2021 issue, I made the point that no research is perfect. Trade-offs are always made when designing a research project. That’s why the identification of a flaw or a shortcoming does not mean we should completely dismiss the findings. If we do, we risk depriving ourselves of useful knowledge.

In Part 2 in the July-August magazine, I tackled the issue of sample size and the popular belief that the bigger the sample, the better the research. This belief is based on the important need for a sample to adequately represent a population. However, the exact numbers behind ideas of ‘big’ and ‘small’ are relative to the research aims, the size of the total population, the type and quantity of data being considered for each research participant/subject, and the implications that are being drawn from the findings.

In this, Part 3 of the series, I consider research bias.

Bias and transparency

I have discussed bias before in relation to cognitive biases in human decision-making, but research bias is different. (You can find my ‘Bias Beware’ article on the Horses and People website or by using this link: https://bit.ly/3A03bfY).

As one of the ‘big four’ science critiques, an accusation of research bias is usually a personal attack on one or more of the authors of a research publication, suggesting that they only found what they wanted to find.

In these circumstances, rather than considering the research findings on their own merits, people reject the study as little more than an expression of the researcher’s personal and subjective agenda.

In other words, accusations of biased research seek to undermine the scientific legitimacy not only of the research but of the authors.

If we dismiss a whole piece of research based on one critique, we risk throwing the proverbial baby out with the bathwater. Image: Shutterstock.

A researcher accused of being biased might be said to have used a ‘biased sample’ that would give them the findings they wanted, or perhaps to have deliberately chosen a method of analysis that would play down the cases that didn’t suit their ends, or overemphasise those that did.

Maybe a few instances arose in their data set that didn’t support their theory, so they imposed a new parameter to erase those cases. They might, for example, delete everyone in the upper age bracket and then retrospectively change the aim of the study to focus on younger groups. Or they might have unintentionally created a data collection tool incapable of finding any counter-evidence to their assumptions about what they would find.

Should it happen? No.

Has it ever happened? Yes.

The reasons why are many and varied; not all are deceptive, and not all the impacts are consequential.

The need to minimise the chance of fatally biased research being published is precisely why the peer-review process is so important – a topic I will discuss in the next part of this series.

I think what matters most is the issue of transparency. Someone seeking to be deliberately misleading or self-serving by falsifying or misrepresenting their research will normally go to great lengths to conceal their bias.

In the name of transparency, I can reveal that this series of articles on Science Critique 101 was inspired by public commentary about an article I published with some colleagues on whip-use in Thoroughbred flat horse racing.

In fact, most of my articles are inspired by challenges in my personal life as a horse rider and my professional life as an equestrian social scientist.

Am I biased towards certain topics? Absolutely! Does that mean my treatment of those topics is biased? Not at all!

The article in question about whip-use was published five days before the 2020 Melbourne Cup – an emotionally charged time for horse racing supporters and protestors. In the article, we reported that there was no statistically significant difference in the speed, straightness and safety of horses ridden in races where whip-use was permitted when compared with races where whip-use was prohibited. In other words, we found no evidence that whip-use makes horses run faster, straighter or safer.
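For readers who haven’t met the phrase before, ‘no statistically significant difference’ means the gap between two groups is small enough to be plausibly explained by chance alone. As a purely illustrative sketch – using invented numbers and a generic significance test, not the actual data or analysis from our paper – the idea looks something like this in Python:

    # Illustrative only: these counts are invented, NOT data from the study.
    from scipy import stats

    # Hypothetical 2x2 table: incident-free races vs races with a reported
    # incident, under each whip rule.
    whip_permitted = [98, 2]   # 98 clean races, 2 with an incident
    whip_free = [97, 3]        # 97 clean races, 3 with an incident

    # Fisher's exact test asks: how surprising would this gap be if the
    # whip rule made no difference at all?
    odds_ratio, p_value = stats.fisher_exact([whip_permitted, whip_free])

    print(f"p-value: {p_value:.3f}")
    if p_value > 0.05:
        # By convention, p > 0.05 means the observed difference is not
        # statistically significant: chance alone is a plausible explanation.
        print("No statistically significant difference between the rules.")

Here the two hypothetical incident rates do differ slightly, but the test returns a p-value well above the conventional 0.05 threshold, so we would not call the difference statistically significant.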

It is hardly surprising that comments on social media critiqued our findings and rejected our study – using the basics of Science Critique 101. Some of the claims that our research was biased related to a belief that one of the authors – Paul McGreevy – only publishes research “against” whip-use.

Indeed, McGreevy – together with various co-authors – has published over 20 peer-reviewed articles related to whip-use in racing. If the only research of his you had ever read was about whip-use, you might get the impression of an unusually narrow focus on the topic. However, McGreevy has published over 300 academic papers, which makes his work on whip-use around 6.5% of his total research output.

Still, even if a researcher had dedicated 100% of their research to whip-use in horse racing, that does not indicate biased research. What it might indicate is that the researcher is interested in whip-use, that their research department specialises in whip-use, that the topic deserves significant attention, or that they have become known for their expertise on a subject to the extent that other people have invited them to collaborate or provided funding. None of those reasons for conducting research should be confused with biased research.

Accusations of biased research more often reveal an uncritical (and ironically self-serving and biased) use of science critique by people for whom the findings are unwelcome.

In the example of whip-use in racing, they also reveal the fears and anxieties of horse racing enthusiasts, which is hardly surprising.

In both cases, people often fall prey to confirmation bias – cherry-picking only the information or facts that suit what they want to believe – which is ironic, given that they are accusing researchers of the same.

Confirmation bias can be clearly seen in social and mainstream media debates about vaccination, climate change and gun control, as well as rollkur, tight nosebands, and whip-use in horse racing.


Many researchers dedicate their time and attention to a topic they are personally interested in, passionate about, or even invested in – but this does not mean they conduct biased research. Many of the topics of my own research were not a direct part of my academic remit or required workload. If I had not been driven by my personal interest in horse riding, they would remain under-researched (rider risk perception, helmet use, bushfire response and preparedness, for example).

Science is not a popularity contest

When research findings are popular, researchers are unlikely to be accused of conducting biased research. When their findings are unpopular, however, accusations of biased research often follow.

As I discussed in my article on sample size (Part 2 of this series), the popularity of certain findings is a prime illustration of the socio-political climate in which research is always conducted and interpreted. You might be able to take the politics out of research, but it is almost impossible to take research out of politics.

So what does this discussion of the difference between research bias and biased research tell us about the focus of this series – the problems with Science Critique 101?

Can a little bit of science critique be dangerous?

I believe a little bit of science critique can be dangerous – especially when it is used to reject research findings outright.

When we throw the proverbial baby out with the bathwater, there is so much more at stake than a researcher’s credibility.

Someone who believes that whip-use is essential to the very existence of racing may well feel threatened by research that finds no valid justification for whip-use. Such a person might dismiss the research based on one or more of the ‘big four’ Science Critique 101 arguments. However, taking such an extreme position based on one concern about research bias does a great disservice to the very thing they value: horse racing.

If we all agree that racehorses should be run on their merits without compromising horse or jockey safety, shouldn’t we all be concerned that there is no evidence that the whip does what we thought it did?

One response would be to double down on our beliefs because we can’t face the thought that we might have been wrong, misguided or simply subject to a common association bias, where we assumed that horses who won whilst being whipped won because they were whipped! Part of this doubling down is rejecting any evidence to the contrary by hunting for its flaws.

Another set of responses would be to entertain the possibility that we were wrong, to accept the discomfort that might come from that realisation, to reassess our old beliefs in light of new information and to reconsider how a new perspective can still serve our old values.

If we don’t, we fail to see the bigger picture: that we need to find effective ways to keep horses running straight, ensure that horses are being ridden to their full merits, and reduce the frequency and seriousness of accidents like horse and jockey falls.

The more familiar you get with the layout of academic articles, the easier it is to know which bits you need to understand and which bits you can skim through and still answer your particular question about the research. Image: Shutterstock.

So what can we do?

If you want to develop your ability to critique research, and to develop a feel for what is ‘good’ or ‘bad’ research, then you can do two things.

First, check your own bias:

  • If you really want to believe the findings, you probably won’t recognise any flaws.
  • If you don’t want to believe the findings of the research, chances are you will find a justification for rejecting the study.

Importantly, if you are experiencing a strong emotional response against the findings, resist the urge to write off the entire study based on one basic critique.

Remember – research is by its very nature incremental. Each study is a small step. As part of a much larger body of work and scientific debate, each study leads somewhere. It might be a step in the same direction, coming closer to where we thought we were headed. Yet, with each study comes the chance that we will find ourselves at a fork in the road.

For example, research on whip-use was carving out a route towards different types of whips or more stringent regulation of where the horse can be struck and how often.

Our study comparing whipping-permitted and whipping-free races showed us that we were actually headed towards a dead end. The whip doesn’t even work! It’s time to make a U-turn.

The second way to develop your ability to critique research is to read the article! Many articles are open-access, which means you can read the whole piece without hitting a paywall. If the full text is not available or requires a fee, you can email the corresponding author and ask them for a private copy. Their email address is displayed on the first page of the article.

If you can’t face reading the entire article, just start with the abstract. If that’s bearable, try reading the conclusion, and if you’ve made it that far, why not check out the discussion – which should include the limitations that the authors have themselves acknowledged. These can often give you a sneak peek of what research might be coming next.

It doesn’t matter if you don’t understand every sentence in a journal article. Most research is specialised. The intricacies of the tests for statistical significance or the steps taken to determine codes, themes and sub-themes might be difficult to comprehend – but that’s OK.

Researchers from different fields often face the same difficulties reading each other’s work. If you feel overwhelmed, you might want to skip or skim-read the introduction/background and the methods, especially in articles where there is lots of jargon.

Still, the more familiar you get with academic articles, the better you will get at reading through the relevant parts to find out for yourself:

What were the research aims or questions? These details are usually found in the abstract, at the end of the introduction and/or at the start of the results, and are often restated at the start of the discussion and/or conclusion.

Who participated in the research (human or animal) and how were they selected? These details are usually found in the abstract and the methods section, or not at all if the article is theoretical/conceptual and not the report of an experimental study.

If an article is behind a paywall, you can contact the corresponding author and request a private copy. Their email address is usually listed under the title or among the first footnotes. Image: Shutterstock.

What methods were used to address the research aims? These details are usually found in the abstract and the methods section, or not at all if the article is theoretical/conceptual and not the report of a study.

What did the researchers find? This will be in the results section and may be in the abstract but with less detail and specificity.

What are the researchers saying/arguing or claiming? This should be included in the abstract and usually in the discussion section and/or the conclusion. If there is no distinct conclusion, the end of the discussion should summarise what was found.

What were the limitations of the research, the trade-offs and the shortcomings? These details are usually stated in the discussion, but can appear under a separate ‘Limitations’ heading. Limitations are often paired with a few sentences on future research, which can also reveal the shortcomings of the current study, since the suggested next steps are about confirming, re-testing or extending what has just been reported.

Did the researchers have any conflicts of interest? Most journals require the authors to declare all their conflicts of interest and even have a heading with exactly that wording (usually a subsection at the end of the article, just before or after the reference list).

Was the research funded, and by whom? Journal editors request this information from authors. It is usually found under a separate ‘Funding’ heading, but may be included in the statement on conflicts of interest.

You can then decide for yourself if the research was biased, by considering if the researchers made:

  • sensible choices about how to collect their data,
  • reasonable interpretations of what they found, and
  • thoughtful conclusions that are based on what they reported to have found.

Finally, if you still want to reject the whole study, ask yourself: what might be at stake if you do? What incremental step might you still be able to take?

Next time…

In a future article, I will discuss a form of Science Critique 101 that relates to the peer-review process. I will address the popular misconception that peer-review involves academics banding together, patting each other on the back and turning a blind eye to low-quality research (the opposite of what really happens).

By explaining how the peer-review system works, I will demonstrate why pointing to the peer-review process is possibly one of the weakest arguments for rejecting research findings.

Reference:

Thompson, K.; McManus, P.; Stansall, D.; Wilson, B.J.; McGreevy, P.D. ‘Is Whip Use Important to Thoroughbred Racing Integrity? What Stewards’ Reports Reveal about Fairness to Punters, Jockeys and Horses’. Animals 2020, 10, 1985.

To find journal articles by topic:

Google Scholar – for searching for academic papers

Directory of Open Access Journals – for searching for open-access academic papers 

This article was first published in the November-December 2021 issue of Horses and People magazine. 
