It's not like Twitter to be quiet, especially around hotly debated issues like peer review and diversity. However, this week when I tried to start a debate around diversity within peer review, my Twittersphere was completely silent.
After a day and over 1000 views, my tweet had exactly three interactions. A comment from myself (trying to stoke the fire - you know how it is), a like from my mum (not a joke) and a retweet from a friend from my PhD.
I wouldn't call it a triumph, and it got me wondering: why? I've seen my Twitter network explode at the slightest mention of diversity or peer review, so why was it silent when I combined them? I don't have a definitive answer. It could be the timing, my Twitter following or the way I worded the tweet, but I'm worried. I knew it was Peer Review Week this week and I knew diversity was an issue that would be raised, yet I had nothing to say. Why? Simply put, ignorance. I care about the peer review process, and I care even more about diversity in science, but I have no idea how the two are connected. I have limited experience and some anecdotal evidence, nothing more. I could assume that diversity metrics in peer review follow the known trends in academia, but I didn't know for sure.
I simply didn't know. Embarrassingly, that was the extent of my knowledge of diversity in peer review. I started to wonder whether I was the only person ignorant about this, or whether the anonymity of peer review made it impossible to truly know if biases exist in the process.
Luckily, at this point, my friend from my PhD (remember him, the person who engaged with my tweet who wasn't my mum) threw me a bone.
There's certainly evidence of gender bias in peer review https://t.co/vIEEsq3sjR
— Mike Cox (@MikeyJ) September 12, 2018
There is evidence for gender bias in peer review. A study published in eLife examined the Frontiers series of journals and found clear biases: women are under-represented in peer review, and editors in general tend to favour reviewers of their own gender. However, perhaps one of the most important lines in the article is this:
In addition, we found that women contribute to the system-relevant peer-reviewing chain even less than expected by their numerical underrepresentation, revealing novel and subtler forms of bias than numeric disproportion alone.
This means that, as well as being under-represented in the peer-review process, women suffer additional biases that reduce their participation in peer review even further.
As frustratingly familiar as these findings seem, reading them made me feel a little relief. At least now we know what the numbers are and where the problem lies. It feels like a tangible problem that can be addressed, measured and eventually fixed. Monitoring and fixing the problem relies on peer-review data being freely available, which, for the Frontiers journals, it is. But, as the paper asks, how representative are the Frontiers journals? Not only in terms of their peer-review demographics, but also in making the identities of their peer reviewers public?
Anonymity in peer review does have some benefits, but trying to find information about diversity made me acutely aware of a significant cost. If we hide information about reviewers, it becomes almost impossible to measure bias and discrimination in peer review, and that can prevent even a basic discussion of the topic on a platform like Twitter. Is it time to do away with anonymity in peer review?