The strengths and weaknesses of one of science's most important safeguards
- Travis M. and Spencer Greenberg

Key Takeaways
💡 Understanding peer review is important for everyone, not just academics. We all depend on science to be reliable, and peer review is supposedly the system that helps enforce this.
📄 Peer review means evaluation by experts. Before publication, articles are assessed by other researchers in the field, who scrutinize methods and claims, though (sometimes shocking) mistakes still slip through.
🧠 Peer review is flawed. Understanding its limits helps you interpret research more wisely, without rejecting it altogether.
🚧 Better models exist. Triple-blind review, open critique, and preprints offer promising ways to improve how research gets evaluated and shared.
👉 Think you can spot solid science? Try our Guess Which Experiments Replicate tool and put your judgment to the test.
Part of thinking clearly is having reliable methods for gaining knowledge. Some methods are clearly terrible: for instance, if you based all your beliefs on flipping a coin (“Heads I believe it, tails I don’t!”), you would struggle to form an accurate view of anything. Similarly, you probably know someone whose word doesn’t count for very much; trusting what they say won’t reliably lead you to the truth.
However, other methods for gaining knowledge can be much better, and the methods of science are plausibly among the best of the best. Nevertheless, science isn’t flawless – there are ways it could be improved – and one component of science that has lots of room for improvement is the peer review process (the process by which articles submitted to academic journals are reviewed by other academics who give feedback and advise editors whether or not to publish). This week, we’re going to explore some of what’s good and some of what’s disturbingly bad about peer review. We’re also going to outline some of the solutions that have been proposed. The aim is to have a better understanding of science - how it is practiced and how it can be improved.
We believe understanding the flaws and strengths of peer review is important for everyone, even if you're not an academic, because it shapes the reliability of information in daily life - from health advice to financial decisions to technology and beyond. We all depend on science to be reliable, and peer review is supposedly the system that helps enforce this, so let’s understand it better!
What is peer review?
Most of the knowledge gained by scientists and other academics reaches the world via academic journals. Scientists conduct experiments, write up their findings, and send them to journals for publication. Journals then have to decide whether the articles submitted are good enough to warrant publication, but the editorial team at any given journal usually won’t have people with enough expertise in specific subdisciplines to assess the quality of a given article they receive. So they send the articles to experts (aka peers) to look at (aka review). Experts will give comments, suggest changes, and indicate to the editor whether they think the article they’re reviewing should be published or not.
Because it is now so closely identified with the scientific method, it’s easy to forget that peer review is a very recent convention. Although early forms of peer review can be traced back to the first scientific journal (Philosophical Transactions, c. 1665), the practice did not become the standard way of doing things until after the Second World War. Two of the top journals in the world today (Nature and The Lancet) didn’t adopt the process for all papers until the 1970s!
Four good things about peer review
First, let’s discuss the good things about peer review. Many argue that peer review is essential for science. For instance:
“An article in a reputable journal does not merely represent the opinions of its author; it bears the imprimatur of scientific authenticity, as given to it by the editor and the referees [they] may have consulted. The referee is the lynchpin about which the whole business of Science is pivoted.” (John Ziman, 1968)
And:
“The product of peer review is said to be public confidence that high-quality academic work that makes a contribution to the accumulation of knowledge has been done. Again, equals active in the same field are said to be in the best position to know whether quality standards have been met and a contribution to knowledge made.” (Margaret Eisenhart, 2002)
Indeed, when peer review works well, it can:
- Successfully filter out lots of very bad work
- Enforce some good practices
- Provide great suggestions to improve papers
- Provide grounds for confidence in the claims made in published papers
But peer review often does not work well. Let’s talk about that.
Six bad things about peer review
Here are six significant shortcomings of the peer review process:
1️⃣ It does not reliably catch errors or stop nonsense from being published
This is perhaps the most egregious shortcoming of peer review, since catching errors and nonsense is peer review’s most fundamental reason for existing. It is supposed to be a safeguard against bad research. Often it is, but there are many worrying exceptions.
This isn’t the fault of the peer reviewers themselves. They are working within a system that has bad incentives: peer reviewers are experts who work hard for little or no reward (including no pay), typically squeezing reviews into schedules that already demand far too much of them. Under those time constraints, they may not have enough time to thoroughly check data, verify claims, or deeply engage with complex methodologies. The result is a system that lets through more errors than it should.
Here are just two absurd examples of things that have made it through peer review and into published journal articles:
Wait, those aren’t error bars!
Error bars are little lines you’ll often see on charts and graphs in scientific articles. Roughly, they show how much uncertainty or imprecision is involved in a measurement. More precisely, they typically depict one standard deviation of uncertainty, one standard error, or a particular confidence interval (most commonly, a 95% interval). Here’s an example:

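To make that concrete, here’s a minimal Python sketch (using numpy and matplotlib with made-up numbers, not data from any paper discussed here) of how error bars are typically computed and drawn:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Three groups of hypothetical measurements (30 each)
groups = ["A", "B", "C"]
data = [rng.normal(10, 2, 30), rng.normal(12, 3, 30), rng.normal(9, 1.5, 30)]

means = [d.mean() for d in data]
# Standard error of the mean: sample standard deviation / sqrt(n)
sems = [d.std(ddof=1) / np.sqrt(len(d)) for d in data]

# yerr draws a genuine error bar (mean ± 1 SEM) above and below each bar
plt.bar(groups, means, yerr=sems, capsize=5)
plt.ylabel("Measured value (arbitrary units)")
plt.title("Group means with standard-error bars")
plt.show()
```

The key point is that the bars’ lengths are computed from the data, which is exactly what makes them informative.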
But, as Retraction Watch reported, a 2022 paper made it through peer review and into publication using capital ‘T’s instead of error bars! Unlike error bars, capital ‘T’s tell us literally nothing about the data. Here’s the offending graph:

The paper was eventually retracted, which is a good thing. But the fact that it made its way through the peer review process without anybody spotting this graph error (or the other errors in the paper) is a sign that peer review can fail spectacularly at doing its job.
Wait, that’s not a date!
Of course, one shocking example from one bad paper isn’t enough to justify criticism of the whole peer review system. For that, we’d need something more systematic. Something like the fact that a study of supplementary data files for published genetic studies found that roughly 20% (that’s one in five!) of the data files contained gene names that had been incorrectly converted to dates by Excel. The authors write:
“The spreadsheet software Microsoft Excel, when used with default settings, is known to convert gene names to dates and floating-point numbers. A programmatic scan of leading genomics journals reveals that approximately one-fifth of papers with supplementary Excel gene lists contain erroneous gene name conversions.”
In most of these cases, it is possible to tell what the original gene name was, so the data can be corrected, but these errors can cascade into further errors when, for example, datasets are compared. Even more worrying is what this implies: very few editors or reviewers (if any) look through the data submitted along with studies.
Given that it is often necessary to look at the data to have a shot at catching some of the worst academic practices (such as data fabrication, selective reporting, or statistical manipulation), this is evidence that the peer review system is systematically failing to scrutinize submissions to an appropriate standard.
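A check for this particular failure is also easy to automate. Here’s a minimal, hypothetical sketch (assuming a simple one-gene-per-row CSV; the regex covers only the most common Excel date formats, such as the gene SEPT2 becoming “2-Sep”):

```python
import csv
import re
import sys

# Excel (with default settings) converts gene symbols such as SEPT2 or
# MARCH1 into dates like "2-Sep" or "1-Mar". This pattern catches the
# most common converted forms; a real scan would need to cover more.
DATE_LIKE = re.compile(
    r"^\d{1,2}-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$"
    r"|^(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-\d{2}$",
    re.IGNORECASE,
)

def flag_date_like_genes(path):
    """Print any row whose first column looks like an Excel-mangled date."""
    with open(path, newline="") as f:
        for line_no, row in enumerate(csv.reader(f), start=1):
            if row and DATE_LIKE.match(row[0].strip()):
                print(f"row {line_no}: suspicious gene name {row[0]!r}")

if __name__ == "__main__":
    flag_date_like_genes(sys.argv[1])  # e.g. python scan_genes.py supp_table.csv
```

Something this simple would let a reviewer (or author) scan a supplementary file in seconds.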

We at Clearer Thinking also know this first-hand because we frequently find serious errors in data when we replicate new psychology papers for our Transparent Replications project. Despite all of the papers we examine being published in top journals in the field, we've caught a wide variety of statistical mistakes, as well as serious issues with experimental design and with how results are reported. Unlike most peer-reviewers (who have very little time to review), we are replicating the papers from scratch, which requires us to very carefully scrutinize the materials, statistics, and data for issues.
2️⃣ It's shockingly random whether a paper is accepted
It depends on which reviewers happen to be assigned. In one psychology study, 12 already published papers were resubmitted to the very same journals that had already published them. Only 3 of those journals noticed that they’d already published the paper, and 8 of the remaining 9 papers were rejected! It is a waste of time for everyone involved (those who wrote the paper as well as those who are involved in evaluating it) when peer review ends up being such an unpredictable process.
3️⃣ There are no accountability mechanisms for reviewers
This means they can provide overly harsh, vague, or unconstructive feedback without consequences, or they can miss egregious errors and recommend a paper for publication that should be thrown in the trash. While most peer reviewers are undoubtedly trying to do a good job (and doing it for no pay!), there are some seriously bad reviewers out there. And there’s nothing authors can do when reviews are clearly unfair or inaccurate, except appeal to the editor. On this topic, there is a Tumblr.com account to which people submit shocking comments they’ve received from peer reviewers. The submissions show just how cruel and negligent reviewers can be. For instance, posters report receiving comments like:
“If this was taken from a successfully defended thesis, as it appears to have been, then he should not have been awarded a PhD”
And:
“I thought that the author might be trying to “have it both ways.” To be clear, this was just a passing thought and frankly, I read the manuscript about two weeks ago and don’t remember the context, nor did I cross-walk one part of the essay with another to validate the thought.”
Comments and experiences like these can have severe effects on researchers, particularly early in their careers. With no accountability mechanisms, reviewers often just get away with it.
4️⃣ It takes a ludicrously long time to publish
While some journals prioritize speed, one study found that business and econ papers average 18 months from submission to publication, with chemistry taking 9 months. And this is the best-case scenario where you get no rejections! When academics leave the ivory tower and join the business or nonprofit world, it's not uncommon for them to remark on how much faster things seem to move in those realms!
5️⃣ Reviewers frequently demand unimportant (but time-consuming) changes
The reality is that there are many subjective aspects of evaluating a paper. There are lots of great reviewers who offer great suggestions or catch mistakes. But there are also reviewers whose feedback feels like pointless hoop-jumping that barely makes a paper better (if at all). It's also not uncommon for multiple reviewers to provide contradictory feedback, which can leave authors unsure how to revise their work or journals unsure how to proceed with a decision. Researchers studying the effects of peer review have pointed out that “To date, we do not know whether papers published with peer review are generally improved over those without,” while others have noted that “peer review remains critically poorly understood in its function and efficacy, yet almost universally highly regarded”.
6️⃣ Reviewers can be biased
There is evidence for a great many biases in the practices of reviewers, such as:
- Halo effects, such as biases in favor of authors affiliated with prestigious institutions
- Biases towards papers with conclusions that confirm the reviewer’s prior beliefs
- A conservative bias against innovation that reviewers are unfamiliar with
And so on.
In many fields, reviewers are anonymous, but submitters' names are not kept secret from reviewers and editors! This likely produces a number of biases: the evaluation of quality should not depend on name recognition or an appeal to authority.
Since we at Clearer Thinking are not directly affiliated with any universities (though some of us have PhDs and/or have taught at universities), we have first-hand experience of an apparent bias against ‘outsiders’. These days, when we want to get our findings published in academic journals, we find it much easier to team up with university-affiliated academics and co-author papers with them, using our study data.
This list is far from exhaustive, but it details some of the major problems with peer review.
How can peer review be improved?
These problems don’t necessarily mean we should abandon peer review. Some researchers have pointed to the COVID-19 pandemic as evidence that the current system of peer review may be better than no system at all. They point out that, during the height of the pandemic, research findings needed to be disseminated more quickly than traditional peer review would allow, so ‘preprint’ versions of articles (versions that haven’t undergone peer review) were shared widely online.
They argue that this came with significant downsides - such as non-scientists (including journalists) treating preprint articles as just as credible as peer-reviewed articles, which made discussions of this research prone to misinterpretation, exaggeration, misinformation, fake news, and conspiracy theories. Any way you cut it, peer review does reject a lot of low-quality work, and that shouldn't be dismissed. If peer review were removed without adding anything in its place, we could be left with a system that's substantially worse.
Whether or not you find that line of argument convincing, it doesn’t have to be the whole picture. Changing the way that peer review is done right now needn’t entail abandoning reviewing altogether. Instead, many different solutions and alternatives have been suggested. Things like:
- Opening up reviewing to wider communities (whether of academics or otherwise), as some platforms already do.
- De-biasing the process through means such as triple-blind reviewing (in which the submitter doesn’t know the identities of the reviewers, and neither the reviewers nor the editor knows the identity of the submitter) and reviewer training.
- Introducing market forces by, for example, paying reviewers and publicly reviewing them.
- Changing the point at which journals enter the process: instead of papers being submitted to journals, which then choose which ones to publish (while the rest toil in obscurity), all papers are released online (free), and journals merely collate those they deem best or most interesting. Some fields have taken a step in this direction, with preprints of papers appearing online (freely available) right away and journal refereeing occurring afterward.
At the end of the day, the goal isn’t simply to tear down peer review; it’s to build something better. Science thrives on iteration, self-correction, and improvement. It’s important that we apply those same principles to the very processes that determine what science gets published in the first place.
Want to take your thinking about academic research even further? Why not try our Guess Which Experiments Replicate quiz? You’ll be presented with examples of peer-reviewed research findings and asked to guess which ones failed to replicate when researchers tried them again. Test your judgment of experimental results!