Since John Ioannidis’ article was published, many attempts to repeat published scientific results have run into significant problems. Perhaps the best known is Amgen’s study from 2013, in which the results of 90% of 53 preclinical cancer studies could not be reproduced. The problem is not limited to biomedicine: all research areas dealing with complex questions and large quantities of data suffer from the same issues.
Steps towards more reliable results
Ioannidis stresses that he believes in science and the scientific community, and that many people want to tackle the problem of poor reproducibility.
“A lot of very good research is carried out and there are many proposals for how to change the situation.”
Examples of such initiatives include scientific journals tightening their publication requirements, clinical trials more often being pre-registered, and the organisation Reproducibility Initiative receiving funds to validate more preclinical cancer studies.
Ioannidis has also seen improvements in the area of research on laboratory animals and in research fields such as psychology.
Even so, he thinks too few measures are being taken to raise the level. In particular, he wants to see a reward system for critical review. It is difficult to obtain funds for studies that try to repeat what has already been done, and it is difficult to publish negative results, such as when a study shows a hypothesis to be wrong.
“Many people think he complains too much, but I think Ioannidis has put his finger on important issues,” says Hans Wigzell, professor of immunology and former Vice-Chancellor of Karolinska Institutet, who has previously commented on these developments in Curie.
“It is a serious problem when results are not reproducible. Health care depends on studies being reliable. The phase 2 studies of drug candidates by the pharmaceutical companies Amgen and Boehringer Ingelheim went totally pear-shaped. That was why they invested one hundred million in trying to repeat the studies which had produced the candidates.”
Reviewers as authors
Those who peer-review scientific manuscripts receive no reward for doing it carefully and helping to improve an article’s quality before publication. John Ioannidis believes that reviewers who do a thorough peer review should be credited by name in an appendix to the article, or, in cases where their contribution is as important as the authors’, even be named in the article itself in the same way as the original authors.
Nor is there any reward system for post-publication review. The opportunity to comment on articles in PubMed did not lead to the rigorous peer review that had been hoped for, says Ioannidis, just short, meaningless comments in most cases. Those who comment are not rewarded for doing a thorough review.
John Ioannidis believes that repeating a study should be part of the article itself, at least in cases where it is cheap enough to do so, for example when identifying genes associated with diseases.
Not every study can be repeated, of course, in particular when it involves people. However, it should always be theoretically possible to re-analyse the data by publishing the entire documentation: all data, protocols, algorithms and methods.
“The method in the article should not have any restrictions regarding space.”
That requires resources such as databases, and it is an open question how much can realistically be demanded.
Another problem is methodological errors. John Ioannidis points out that many of those who start out in research are well educated in their subject, but not in how the measurement methods work.
Researchers often have a poor knowledge of statistics and are unaware of how they may unconsciously influence measurements and the interpretation of results if studies are not sufficiently blinded. Ioannidis’ arguments from his 2005 article are summarised in the fact box.
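A rough sketch of the central calculation in that 2005 article may help here: Ioannidis derives the positive predictive value (PPV), the probability that a statistically significant finding is actually true, from the pre-study odds R that a tested relationship is real, the study’s power (1 − β) and the significance threshold α. The function names and example numbers below are illustrative, not taken from the article:

```python
def positive_predictive_value(r, power=0.8, alpha=0.05):
    """Probability that a claimed (statistically significant) finding is true.

    r     : pre-study odds that the tested relationship is real
    power : 1 - beta, the chance of detecting a true effect
    alpha : significance threshold (type I error rate)

    Ioannidis (2005): PPV = (1 - beta) * R / (R - beta * R + alpha),
    where the denominator simplifies to power * R + alpha.
    """
    return power * r / (power * r + alpha)


# A well-powered study of a plausible hypothesis (50/50 odds, r = 1):
print(round(positive_predictive_value(1.0), 3))               # ≈ 0.941

# An underpowered study in an exploratory field (r = 0.1, power = 0.2):
print(round(positive_predictive_value(0.1, power=0.2), 3))    # ≈ 0.286
```

The second case illustrates the article’s headline claim: when pre-study odds and power are both low, the PPV falls below one half, so most claimed findings in such a field would be false even before any bias is taken into account.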
It is also increasingly important to be able to analyse large data sets.
“Twenty years ago, it was possible to make calculations in epidemiology on the back of an envelope. Now it often involves complex algorithms.”
But you can only take into consideration factors that you know about.
“We don’t know everything about the body and how different diseases occur. A few years ago it was not known that mice in different laboratories often have lab-specific intestinal flora which affect study results,” says Hans Wigzell.
Researching in networks
John Ioannidis proposes that researchers should collaborate in large networks to a far greater degree. Isolated teams remain important for creative ideas and breaking new ground.
“But after you have created a picture of the idea, you need to validate the results through large collaboration processes,” he claims.
One example of this is high-energy physics, where tens of thousands of researchers work together to identify new elementary particles.
“If they were split up into thousands of groups, thousands of articles would be published, but no real particles would have been found. Quite possibly many researchers would claim, through suboptimal analyses, to have detected particles that do not really exist.”
It is not possible to escape the fact that research, by definition, is about making mistakes and trying to find a better method, says Ioannidis. It is not possible to be perfect.
“There is a myth about how research is carried out, as in the story of Newton’s apple, but prominent science is about hard work to improve methods.”
Also read in Curie: Competition attracts short cuts