34 When should you correct or retract your paper?
Once your Version of Record (VoR) is published, any changes that you wish to make will result in a separate correction publication. In extreme cases, you may even need to retract the paper. The different options available in many journals to correct your paper are shown in Table 34.1. If you are in doubt about exactly what kind of correction you need, read the guidelines from the Council of Science Editors and the Committee on Publication Ethics (COPE) (Barbour et al., 2009).
Table 34.1: Options for correcting or retracting a published paper.

Action | Example | Conclusions impacted | Issued by |
---|---|---|---|
Corrigendum / erratum / correction | Important typos / incorrect figure legends or tables / author name or address issues | No | Author |
Expression of concern | Data appear unreliable / misconduct suspected | Undetermined | Editor (perhaps acting on information received) |
Partial retraction | One aspect of the study is corrupted / inappropriate analysis | No (overall findings stand) | Author or Editor |
Full retraction | Majority of the study is corrupted / evidence of misconduct / work is invalidated | Yes | Author or Editor |
34.1 Making a correction to a published paper
It is very unlikely that you will ever be in a position where you need to think about retracting your paper. If you notice a mistake, especially one that changes how the results are presented, then you should approach the editor about publishing a correction (also termed a corrigendum, conventionally for an author’s error, or an erratum - plural errata - conventionally for a publisher’s error).
Because a corrigendum is a change to the VoR, it results in what is in effect an additional separate publication (with its own DOI). This will succinctly point out the error and how it should be rectified; in the journal, this will only be a few lines. In addition, on the page of the original publication, the journal will place a notice that there has been a correction, with a permalink to the correction. However, it results in a lot of extra administrative work for everyone, so it’s best avoided if at all possible. This is another reason why it’s worth taking your time when checking your proofs.
Another way to avoid having to make a corrigendum is to ensure that all co-authors are happy with the original submission, the resubmission and the proofs (i.e. get it right before you submit it).
Mistakes do occur, and you may only discover an error some time after publication. Errors such as typos, or mistakes in the introduction or discussion, are unlikely to warrant a corrigendum. However, if the error lies in the way the results were calculated, or changes their significance, then you should consider making a corrigendum. If you feel that it is necessary, do consult your co-authors before taking it to the editor. It is well worth having someone else check your new calculation, as the last thing you want is a compounded error.
If the mistake is systemic, and changes all of the results, their significance and/or the validity of the conclusion, then you need to consider a full retraction.
34.2 Expression of concern
An expression of concern lies between a correction and a retraction. It can precede a retraction, and signals that the editors are seriously worried that something is wrong with the publication. For example, a paper published in Proceedings of the National Academy of Sciences that used an unusual mutant strain of Chlamydomonas (a genus of green algae) was placed under an official ‘Editorial Expression of Concern’ when the editors learned that the authors would not share their strain with any other researchers (Berenbaum, 2021). Clearly, this notice could be removed if, for example, the authors agreed to share their strain and their findings were replicated. However, if they continue to refuse, the editors could also fully retract the paper.
This is an unusual case, but it shows that editors are prepared to take their journal’s policy seriously: a requirement of submission is that authors must be prepared to share any unique reagents described in their papers, or declare restrictions up front. If the authors fail to follow through on the transparency they declared at submission, they could have their paper retracted. Finding out this sort of information takes time, and so there is often a lag before a retraction is issued (Figure 34.1).
34.3 A retraction is unusual
“A person who has committed a mistake, and doesn’t correct it, is committing another mistake.”
— Confucius
A retraction is when your paper is effectively ‘unpublished’. This happens at the discretion of the editor (and often the entire editorial board), and is a very serious issue. Retractions are rare, and the reasons for them vary. A piece of equipment may later be found to have malfunctioned or been calibrated incorrectly (Anon, 2018), or a cell line may have been misidentified. Retractions can also occur through no fault of the authors: Toro et al. (2019) had their manuscript rejected by Journal of Biosciences, but due to an administrative error the article was printed in an issue, and later retracted. However, the top reason for retraction is now misconduct (Fang, Steen & Casadevall, 2012; Brainard & You, 2018), which is hardly surprising given the perverse incentives that many scientists have to publish in journals with top impact factors. Retractions also appear to be more common in journals with higher impact factors (Brembs, Button & Munafò, 2013), and this should not surprise us either, as these journals are prone to publishing studies with confirmation bias (Forstmeier, Wagenmakers & Parker, 2017; Measey, 2021).
Although retractions are rare in the life sciences (0.06% of all papers published between 1990 and 2020), they have been increasing over the last 30 years: from around 0.5 to over 15 papers per 10 000 published (see Figure 34.1). The mean time between notification of problems with a paper and the issuing of a correction or retraction is nearly 2 years (Grey, Avenell & Bolland, 2021), but this conceals a bimodal distribution in retraction times: the first hump, within months of the original publication date, comes from self-correcting authors or clerical errors by journals; the second hump, usually associated with fraud, comes after several years of institutional investigations, often with added legal delays.
Retractions showed a steep uptick in 2011, when a number of laboratories made multiple retractions (see Fang & Casadevall, 2011; Brainard & You, 2018). A single publisher was responsible for pulling a great many of the retracted papers in 2011, and this spike is not as pronounced in the life sciences as in other subject areas (Oransky, 2011; Brainard & You, 2018). What is clear from Figure 34.1 is that the rising trend in retractions (solid green line) appears to have been unaffected by the 2011 spike. Claims that retractions are levelling off (Brainard & You, 2018) are not matched by the data. Indeed, in early 2024 the steep rise in retractions was said to be reaching a crisis point (McKie, 2024). Figure 34.1 shows no sign of reaching a plateau in 2023, but whether the ‘tip of the iceberg’ suggestion (McKie, 2024) holds true for the Biological Sciences, we have yet to see. Certainly, the four-fold increase in retractions of original articles in the Biological Sciences since 2010 (to 26 retractions per 10 000 articles published) is very worrying.
Misconduct can lead to large numbers of retractions from a single journal. In 2023, the journal Genetika retracted 31 papers after it was exposed for fraudulent citations (Retraction Watch, 2023). As a result of the fraud allegations, the journal was investigated by Clarivate and removed from its Science Citation Index, meaning that it also lost its Impact Factor. The journal was accused of citation stacking, in which citations are traded between authors or journals. Most of the retractions appear to be associated with a single author. Whether or not this was done with the knowledge of the editor is unknown, but the punishment will fall hard on the scholarly society to which the journal is connected.
34.3.1 How do you know if a paper you cited is later retracted?
Citations to retracted papers are not uncommon, and they often cite the paper positively even when the retraction was made for misconduct (Bar-Ilan & Halevi, 2017). This suggests that most authors are simply not aware of the retracted status of many publications. Of course, if you visit the publisher’s website, you should see a clear notification on the Version of Record (e.g. Figure 34.2) that points to the retraction notice (see Figure 34.3), but this is not always the case. In general, publishers seem very shy about their retractions, and it can be difficult to track down retractions that should be clear for everyone to see. Indeed, if publishers did their due diligence on retraction notifications, and these were entered into Crossref, we wouldn’t need an independent database like the Retraction Watch database.
If you downloaded the article before it was retracted, then you will not be aware of what has happened unless you are following that particular publication. Similarly, if you get your search results from Google Scholar, there is no indication that a paper has been retracted. Scopus and Web of Science clearly indicate some articles that have been retracted, but the vast majority go unrecorded even in these databases. Perhaps this is why even highly publicised retractions continue to be cited by subsequent articles (Piller, 2021). Clearly, the community is still responsible for watching what happens to the literature, even once a paper is published. Of course, the publishers could be using items such as DOIs to track retracted papers and query their citations. So why don’t they?
Some literature databases will notify you if a paper in your library is retracted - but don’t count on this unless you use Zotero. Zotero takes DOIs from the Retraction Watch database, and uses them to notify users of any retractions of articles in their libraries. A large red bar (that’s hard to ignore) is shown across the top of the citation window. This is a strong and very positive reason for using Zotero, leaving you to get on with your research.
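To see what such a check involves, here is a minimal sketch in R of the same idea: comparing the DOIs in your reference list against a local copy of the Retraction Watch database. The file name, the example DOIs and the `OriginalPaperDOI` column name are assumptions for illustration (check them against your own download); this is not how Zotero itself is implemented.

```r
# Sketch: flag cited DOIs that appear in the Retraction Watch database.
# Assumes a local copy of the database saved as 'retraction_watch.csv'
# with a column 'OriginalPaperDOI' (verify against your download).
rw <- read.csv("retraction_watch.csv", stringsAsFactors = FALSE)

# The DOIs from your own reference list (placeholders shown here)
my_dois <- c("10.1234/example.one", "10.5678/example.two")

retracted <- my_dois[tolower(my_dois) %in% tolower(rw$OriginalPaperDOI)]
if (length(retracted) > 0) {
  warning("Retracted items in your reference list: ",
          paste(retracted, collapse = ", "))
}
```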
Publishers are bad at sending through the correct metadata with their content: for the same 30-year period as Figure 34.1, Web of Science lists only 133 retractions. In practice, this means that if you aren’t using Zotero, and you want to be sure that there hasn’t been a retraction in any of your source material, you need to run a search on the Retraction Watch database - “better to use Zotero”, says Ivan Oransky. Right now, the chance that any paper published in the last 30 years that you have cited will be retracted is still low (one in 1750), but if it was published in 2020 this rises to one in 650.
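As a quick sanity check, these odds can be converted back to the per-10 000 rates quoted earlier in the chapter; the figures below are the chapter’s own numbers, not new data.

```r
# Convert the quoted odds into retractions per 10 000 papers
overall_odds <- 1 / 1750   # any paper from the last 30 years
recent_odds  <- 1 / 650    # a paper published in 2020

overall_odds * 10000       # ~5.7 per 10 000, roughly the 0.06% overall rate
recent_odds  * 10000       # ~15.4 per 10 000, matching 'over 15' per 10 000
```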
34.3.2 Notification of retraction
The notification of retraction is supposed to explain exactly why a paper has been retracted: if you have cited or used the work, you should know whether it was because data were fabricated or, more innocently, because there was a mistake with the equipment or another aspect of the investigation. However, it seems that some journals issue retraction notices that fall short of the guidelines required by COPE (Barbour et al., 2009), and delays in issuing them are not in the interest of anyone involved (see Grey, Avenell & Bolland, 2021; Teixeira da Silva, 2021). Clearly, there is need for improvement here on the part of the journals.
But we must be cautious about playing a blame game when it comes to journal retractions (Smith, 2021). We already know that peer review has shortcomings (see Part IV), and even the best peer reviewers and editors cannot be expected to spot potentially fatal errors, especially when these come about through deliberate deceit on the part of the authors. Retraction will remain a necessary part of the scientific publishing process, and as retractions become more commonplace among journals, we can hope that guidelines will be followed in a timely manner (Grey, Avenell & Bolland, 2021).
34.4 Retraction Watch
To learn more about retractions in science, I encourage you to read the blog at Retraction Watch (https://retractionwatch.com/). This will give you an idea of the reasons why retractions are made, and give you some perspectives about the practices (and malpractices) that go on in the scientific environment.
34.5 Fabrication of data
The fabrication of sensational results that might land you a publication in a high-ranking journal, and consequently a job or a promotion, is an important incentive for individuals to commit fraud. However, there are also a large number of people who need a publication in any international journal but lack the scientific or linguistic skills to make it happen. These people are more likely to pay third parties - organised fraudsters - to have their name appear on an author line. The resulting papers, in which the text as well as the data are fabricated, represent another source of large numbers of retractions in lower-ranking journals.
34.5.1 Fraud by individuals or laboratories
The fabrication of data does happen. An anonymous survey on research integrity in the Netherlands suggested that the prevalence of fabrication was 4.3% and of falsification 4.2%, while nearly half of those surveyed admitted to questionable research practices, which were most prevalent among PhD candidates and junior researchers (Gopalakrishna et al., 2021). A growing body of retractions and alleged evidence of tampering with data in spreadsheets led to the suspension, resignation and ultimately the publication of the misdeeds of a top researcher, Jonathan Pruitt. Fraudulent (usually duplicated) data in spreadsheets are not too difficult to spot, for example by terminal-digit analysis: the last digits of naturally occurring measurements should approach uniformity (a complement to Benford’s law, which describes the expected distribution of leading digits). Detecting such fraud has become the specialism of some contract data scientists, and to some extent the automated assessment of fraudulent practices has been, or could be, implemented for many infringements (Bordewijk et al., 2021).
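To make the terminal-digit idea concrete, here is a minimal sketch in R, using simulated measurements rather than real data: a chi-squared test against a uniform distribution of last digits flags datasets worth a closer look (a significant result is a prompt for scrutiny, not proof of fraud).

```r
# Terminal-digit test: last digits of genuine measurements recorded to a
# fixed precision are expected to be roughly uniform on 0-9.
set.seed(42)
measurements <- round(rnorm(500, mean = 120, sd = 15), 1)  # simulated data

# Extract the final recorded digit (here, the first decimal place)
last_digit <- round(measurements * 10) %% 10

observed <- table(factor(last_digit, levels = 0:9))
chisq.test(observed, p = rep(1/10, 10))  # small p-value = non-uniform digits
```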
Pruitt’s case (see below) highlights a good reason for increased transparency in the publication process. A blog post from someone caught up in the Pruitt retractions makes the point that journals that insisted on full data deposits for publication were well ahead of those that didn’t (Bolnick, 2021). Of growing concern in many areas of the Biological Sciences is the potential to manipulate results that are presented as images, for example blots on a gel. However, it turns out that manipulated images are also not too hard to discern.
Images are increasingly used in journals to demonstrate results, and the manipulation of images in published papers appears to be rife. In a study of more than 20 000 papers from 40 journals (1995 to 2014), Bik et al. (2016) found that nearly 2% had features suggesting deliberate manipulation. These included simple duplication of an image from supposedly different experiments, duplication with manipulation (e.g. rotation, reversal, etc.) and duplication with alteration (including adding and subtracting parts of the copied image). The authors suggested that, as they only considered these types of manipulation in certain image types, the actual level of image fraud in scientific papers is likely much higher (Bik, Casadevall & Fang, 2016). An R package (FraudDetTools) is now available for checking manipulation of images (Koppers, Wormer & Ickstadt, 2017), but there are other steps that reviewers and editors can take themselves (Byrne & Christopher, 2020).
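As a first pass, the simplest of these cases - an identical file reused for two supposedly different experiments - can be caught with nothing more than checksums, as in this minimal R sketch. The directory path is a placeholder; rotated or otherwise altered duplicates need specialist tools such as the package cited above.

```r
# Flag byte-for-byte duplicate image files among a manuscript's figures
figure_files <- list.files("figures/", full.names = TRUE)  # placeholder path
checksums <- tools::md5sum(figure_files)

# Group file names by checksum; any group larger than one is a duplicate
dups <- split(names(checksums), checksums)
dups[lengths(dups) > 1]
```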
34.5.2 By third parties
The existence of paper mills should also be mentioned at this point. These papers, produced commercially and to order, are entirely fabricated by third parties to improve the CVs of real, paying scientists (Teixeira da Silva, 2021). Hundreds of such papers have been discovered that appear to come from the same source (Bik, 2020), but the true scale of these additions to the scientific literature is unknown. For example, in April 2023 Wiley admitted that its own publications (under Hindawi) included ~1700 papers that resulted from paper mills. This will undoubtedly produce another bump in the number of retractions as Wiley struggles to sniff out all of the faked content (Flynn, 2023).
The pressure to publish is widely acknowledged as driving questionable research practices, including fraud (Gopalakrishna et al., 2021). Some have suggested that the additional pressure to obtain a permanent academic position is enough to drive some scientists to commit fraud (Fanelli, Costas & Larivière, 2015; Husemann et al., 2017; Kun, 2018). The idiom ‘publish or perish’ and the importance of publishing are discussed elsewhere in this book. However, I hope that by shedding some light on unethical practices, this book equips you to avoid them, together with those who may espouse them, and shows you that there is a better path to success.
When people rely on publishing papers in journals for their careers, it seems that there are predators happy to make a publication happen at the expense of the integrity of the publishing process. Scammers posing as well-known academics use modified domain names to gain editorial control of special issues of journals. Once in the editorial seat, they quickly start accepting low-quality papers from clients who pay to get included (Else, 2021). Retractions of hundreds of papers in special issues indicate that editors need to pay special attention before handing over the role of gatekeeper.
34.6 Who is responsible?
In the case of fraud, retraction statements should indicate who the perpetrator is, in order to exonerate the other researchers. Some research into the likely source of fraud has been conducted. There are clearly serial fraudsters, and the presence of their names in an author list is a red flag for those investigating fraud. Data from papers that are known to be fraudulent suggest that the first author is the most likely to be responsible for the fraud committed, and middle authors the least (Hussinger & Pellens, 2019). This suggests that you should be very careful about who you collaborate with: while you might not be responsible, the discovery of (particularly large-scale) fraud might well harm your career.
A study looking for patterns in the types of authors involved in retractions (for all reasons) suggested that Early Career Researchers were particularly likely to be involved (Fanelli, Costas & Larivière, 2015), although exactly why remains obscure.
It is worth noting that while journals (and ultimately the journal editor) are responsible for retractions, this is not the same as punishing individuals who have committed fraud. As we have already seen, there can be many reasons for a retraction, and innocent researchers may also need to make retractions, so it is not up to editors or journals to punish researchers. Indeed, it should never be the role of the journal to take any punitive action against a researcher; there are other mechanisms for this through the employer and (where applicable) the academic society involved. Different institutions and governments have different rules when it comes to fraud committed by their employees, and these processes may involve legal proceedings that take some time to conclude. As we will see with the case of Pruitt, once lawyers get involved, the process may become mired in a lot more bureaucracy.
Whilst you may not think of your employer as a party you need to worry about, you should consider the possibility that any scientific fraud could penetrate deeper than you are aware. For example, if you used fraudulently obtained data to make a grant application look better, then this would be deemed very serious by the funder (usually a government agency) that you were applying to. Essentially, this becomes financial fraud, as you are using false data in order to obtain money. While your employer may simply remove you from your position, the government might prosecute, and you could find yourself in prison. This does happen in some countries, and so (obviously) it is a really bad idea to commit fraud.
34.7 What to do if you suspect others
If you suspect that someone in a lab in your department, faculty or university is fabricating data, find out whether your university has a research integrity officer (RIO); most universities in the US have one. Document your evidence if you can and approach the RIO or person in the equivalent position. If you can’t find such a person, then ask at your university library for the most relevant person. Libraries are usually neutral places where you can find out information without arousing suspicion. You do need to be careful that you do not place yourself in harm’s way when reporting, so be prudent about sharing until you are assured protection from any potential retaliation. It isn’t easy to be a whistleblower - but it is the right thing to do.
If the research is published, and you think it is fraudulent, approach the editor. If there is a conflict of interest (for example, the person is at your institution), then you can first try to sort it out internally (as described above). Otherwise, you can approach the editor directly yourself, anonymously, or by using a third party.
The Committee on Publication Ethics (COPE) has published some useful flowcharts to guide researchers who suspect fraud in manuscripts or published articles:
- ‘Image manipulation in a published article’ (COPE, 2018a)
- ‘How to recognise potential authorship problems’ (COPE, 2018b)
- ‘Systematic manipulation of the publication process’ (COPE, 2018c)
- ‘How to recognise potential manipulation of the peer review process’ (COPE, 2017)
- ‘If you suspect fabricated data in a submitted manuscript’ (Wager, 2006a)
- ‘Ghost, guest, or gift authorship in a submitted manuscript’ (Wager, 2006b)
- Undisclosed conflict of interest
- ‘Plagiarism in a submitted manuscript’ (Wager, 2006c)
34.8 Confirmation bias, scientific misconduct and the paradox of high-flying academic careers
Scientists are human, and humans are fallible. An early survey of 2000 students and 2000 faculty from 99 US institutions suggests that misconduct (data fabrication and plagiarism) is unusual but not rare: 6-9% of participants claimed to know of examples (Swazey, Anderson & Louis, 1993). Other examples of bad behaviour were much more prevalent among scientists, as they are in other workplaces. Goodstein (2010) studied many cases of fraud and built up a profile of the fraudulent scientist: a young male who works in a fast-moving and competitive area of the biological sciences, where publications result in financial reward through grant income.
It seems likely that most of those guilty of such misconduct escape any consequences. Perhaps this is why examples of high-flying academics persist and are well publicised. The poor responses of their institutions are also cause for a lot of concern.
34.8.1 Social spiders
Jonathan Pruitt had it all going for him. His studies of spider sociality were producing novel and significant results that opened the door to publications in high impact journals. This in turn brought prizes and funding, which allowed him to conduct more studies, and soon a prestigious chair in Canada with more funding to pursue his rocketing career. Things started to unravel for Pruitt when colleagues raised concerns about his data in some of their joint publications. Matters gathered pace very quickly, and doubt gathered around more and more of his publications. Although much has been written about the Pruitt debacle on the internet, the blog by American Naturalist editor and former Pruitt fan and friend, Dan Bolnick, is particularly enlightening (Bolnick, 2021). Pruitt’s position became increasingly untenable as more editors, backed by co-authors, retracted papers to which he had contributed data (see Marcus, 2020). For Pruitt, this became a threat to his career and livelihood (Pennisi, 2020), while his university faced the possibility that they had hired a fraud. Consequently, the whole debacle slipped into the legal world. After three and a half years, McMaster University (his former employer) finally issued its report on the affair, concluding that he had engaged in falsification and fabrication of data (Kincaid, 2023). Pruitt is thought to have left research and taken a teaching job at a high school in Florida (Kincaid, 2023).
The harrowing part of Bolnick’s account is when he, co-authors and other editors started to receive letters from Pruitt’s lawyer (see one example on Bolnick’s blog). Once the lawyer stepped in, the functioning academic community that had risen to meet the demands of the concerns began to be muted. Bolnick makes an important point that the legal threats from Pruitt’s lawyer were stifling the freedom of academics (in this case the co-authors and editors) to publish, and therefore their academic freedom. Although there are many lessons from the Pruitt debacle, the important points include the impact that it had on other people’s lives and careers, as well as the legal implications of raising grant monies on the basis of fabricated data. Although there might be some comfort in the way the academic community pulled together, and that the university inquiry ultimately reached the right decision, there are countless other cases of academic fraud and misrepresentation where the perpetrators are never brought to justice.
34.8.2 Monkey business researching monkey business
Marc D. Hauser, a professor of psychology at the prestigious Harvard University (Cambridge, MA, USA), worked on primate cognition and especially how it relates to the evolution of human mental capacities. His work claimed to show cognitive abilities in monkeys previously thought to be present only in great apes (including humans). Students and co-workers claimed that they could not reproduce Hauser’s results when they worked through the original video recordings. One of the major problems with his work was his failure to produce original data for several studies, as many original recordings had gone missing; in publications, Hauser used new datasets to back up original observations. This reminds us of the need to safeguard the data that we generate with a data management plan. The claims of scientific misconduct against him were persistent, and eventually prompted an investigation by his institution.
Harvard University investigated his case, problematically behind closed doors and over several years. They concluded that Hauser was guilty of scientific misconduct on eight different counts, but never revealed what these were or how they related to his publications. Were these articles already retracted, or never published? Without the results of the university’s enquiry being made public, more questions than answers hang over Hauser and exactly what his misconduct was. He resigned from his position at Harvard University in 2011.
34.8.3 Climate change and ocean systems
Another example of a rising star with high profile papers, allegedly making a habit of fabricating data, comes from the world of marine biology (Clark et al., 2020). The researchers in question, Danielle Dixson under the supervision of Philip Munday, made counter-claims that their detractors were unimaginative, or that they were attempting to make a career from criticism. Meanwhile, students from their own labs continued to raise concerns about the culture of fraud (see Enserink, 2021). In this case, the tide of evidence appears to have turned against the marine biologists, with forensic data specialists finding multiple examples of suspect data.
34.9 The conflicted position of institutions between cash cows and misconduct
It is clear from these reports that there are systemic problems when high profile scientists are accused of fraud. Journals say that it is the responsibility of the institutions, while the institutions have no impetus to find fraud, as that might lose them a very productive (think research income) and high profile scientist (see Pennisi, 2021). What university would want its name dragged through the mud and, on top of this, lose a large amount of grant income? Top researchers become untouchables in many institutions because they are essentially cash cows that no-one wants to disturb. Allegations against such individuals also include bullying and sexual misconduct. For those interested in reading about more high-profile misdemeanours in science, Stuart Ritchie (2020) has put together a popular book on the subject.
Another important issue that arises from (alleged) scientific fraud is that it creates a research culture that pushes towards extremely unlikely hypotheses, in the mistaken belief that such hypotheses are likely, given the nature of the published literature (see also Fanelli, Costas & Ioannidis, 2017). Indeed, this natural selection of ‘bad science’ has permeated the hiring system, so that researchers like this are more likely to be hired (Smaldino & McElreath, 2016). Forstmeier, Wagenmakers & Parker (2017) have an excellent review that outlines the problems with a culture that pushes towards increasingly unlikely hypotheses (see also Measey, 2021 on Type I errors).
At the heart of all of this is the cult of the Impact Factor and the research mentality that it generates.