The Guardian view on social science research: embracing uncertainty | Editorial

A new set of studies published this month suggests that nearly half of all findings published in reputable social science journals cannot be replicated by independent analysis. This is part of a long-standing problem across many areas of research – most visibly in the social sciences and psychology, although concerns have also been raised in biomedical research.
The latest work comes from Systematizing trust in open research and evidence (Score), a seven-year project that has now published three studies covering 3,900 social science articles. It found that more recent articles, and those published in journals requiring in-depth sharing of the underlying data, were more likely to be reproduced. Medical research faces its own constraints: varying patient numbers and limited sample sizes mean that in practice it can resemble the social sciences more than laboratory physics. Policymakers should therefore be wary of any claims not grounded in a broad and robust evidence base.
Language is key: reproducibility checks whether results can be recreated using the same data and methods; replication tests whether the results hold for new data in different contexts. Science rarely produces exactly the same results twice, and understanding why is part of how knowledge accumulates. But politicians increasingly seek to turn normal scientific uncertainty into evidence of failure, and doubt into denial. That is why a May 2025 White House executive order highlighted science’s “reproducibility crisis” – essentially a Trumpian call for doubt and inaction.
Unfortunately, large-scale verification projects like Score’s are rare: most academic researchers prefer to spend their time on work more likely to advance their careers. Score reanalyzed existing data and, in separate work, replicated more than 100 studies from scratch. Around 49% failed to reproduce the initial result. This points to a deeper problem. Reanalyzing data is relatively simple; performing an identical experiment is not. Social and medical research experiments are difficult to recreate because their results depend on complex human systems. AI can help decide what to test, but it cannot reduce the cost and time required to duplicate a piece of research.
Not all replication failures signal a crisis. Some findings matter little; replication studies may themselves be flawed. Results that do not replicate consistently should be weighed against a broader evidence base when directing policy. Treating non-replication as a disqualification confuses uncertainty with ignorance, and risks paralyzing decision-making where judgment matters most. Greater transparency, meanwhile, makes outright fraud more difficult and helps identify errors. Major funders such as the UK’s Economic and Social Research Council already require data sharing, and the approach should become universal.
Some are optimistic, arguing that research “eventually self-corrects.” The long-term solution – changing incentives so that testing existing findings is rewarded – would build trust, but it requires a restructuring of research culture and funding, and for now remains largely theoretical. These studies should strengthen the case for change and serve as a warning. The social sciences are a powerful tool for understanding the world – and confidence in them will be built by recognizing uncertainty, not by repudiating it.