Chapter 30 Are researchers writing more, and is more better?

The idiom ‘publish or perish’ suggests that researchers will increase their output in order to obtain positions and promotions. And if a researcher’s productivity is measured by their publication output, shouldn’t we all be writing more papers? Certainly, it appears that more papers are being published (see Figure 30.1). One estimate put the total number of scholarly articles in existence at 50 million in 2009, with more than half of these published since 1984 (Jinha, 2010).

Similarly, if we are all writing more, then wouldn’t some people start publishing two (or more) papers when one would be adequate? This idea of ‘salami slicing’ to inflate outputs would be an understandable strategy if researchers were all trying to increase their output. Alternatively, the names of authors might be added to papers to which they did not make a significant contribution, via gift authorship or hyperauthorship (see Cronin, 2001 for an interesting historical perspective, and Part I).

FIGURE 30.1: The growth in the number of papers published in Life Sciences over time. The number of papers published (blue line) compared to a standard growth rate (black line). The data come from www.scopus.com.

A study by Fanelli and Larivière (2016) offers a new take on the above questions by asking whether researchers are actually writing more papers now than they did 100 years ago. They used the Web of Science to identify unique authors (more than half a million of them) and tested whether total publication counts increased with the year in which an author first published. Fanelli and Larivière’s (2016) trend line for biology is very stable at around 5.5 publications, whether an author started publishing in 1900 or 2000 (note that earth science and chemistry both increase dramatically).

But it is possible that these figures could be explained by the fact that the publishing culture in the biological sciences has changed a great deal since then. One hundred years ago, it was very unlikely that any postgraduate students would publish articles in peer-reviewed journals. Moreover, it was also acceptable for advisers to take the thesis work of their students and write it up in monographs. This has certainly changed: the ranks of authors have swelled considerably, so that many more authors are likely to appear on only the single publication in which they participated (see Measey, 2011). I interpret this as the biological sciences becoming more democratic, with more of the people who contribute to the work receiving the credit.

30.0.1 At what rate is the literature increasing?

Using several databases (Web of Science, Scopus, Dimensions and Microsoft Academic) reaching back to the beginning of their collections at the start of scientific journals in the mid-1600s, Bornmann, Mutz & Haunschild (2020) calculated the inflation rate of the scientific literature to run at 4.02%, such that the literature doubles every 16.8 years. This means that roughly twice as many papers were published in 2020 as in 2003.

Although the early period of scientific publishing was notably slower than today, it is since the mid-1940s (following the end of World War II) that science has seen exponential growth in productivity, with annual growth of 5.1% and a doubling time of 13.8 years (Bornmann, Mutz & Haunschild, 2020).
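If you want to check figures like these for yourself, the doubling time implied by a constant annual growth rate r follows from simple compound-growth arithmetic: t = ln 2 / ln(1 + r). Here is a minimal sketch in Python, using the growth rates reported above; note that Bornmann, Mutz & Haunschild (2020) fit a segmented growth model, so their model-based doubling times can differ slightly from this naive formula.

```python
import math

def doubling_time(annual_growth: float) -> float:
    """Years for annual output to double at a constant growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

# Growth rates reported by Bornmann, Mutz & Haunschild (2020)
for label, rate in [("whole record", 0.0402), ("post-1945", 0.051)]:
    print(f"{label}: {rate:.1%} per year -> doubles in ~{doubling_time(rate):.1f} years")
```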

30.0.2 If more is being published, will Impact Factors increase?

Yes, there is inflation of Impact Factors. If the number of citations per paper remains constant, then as the literature grows at roughly 5% per year, so does the pool of citing papers, and the Impact Factor of journals should increase annually at around 5%. My impression (see Measey, 2011) is that the number of citations per paper is also increasing as the literature grows, which suggests that Impact Factors will grow at an even faster rate.

30.1 Are some authors unfeasibly prolific?

How many articles could you publish in a year before you could be described as ‘unfeasibly prolific’? This question was posed in a study that examined prolific authors in four fields of medicine (Wager, Singhvi & Kleinert, 2015). The publication piqued my interest because the authors decided that researchers with more than 25 publications in a year were unfeasibly prolific, as this would be the equivalent of “>1 publication per 10 working days.” Their angle was to suggest that publication fraud was likely, and that funders should be more circumspect when accepting researchers’ productivity as a metric. Looking back through the peer review of this article (which is a great aspect of many PeerJ articles), I’m astounded that only one reviewer questioned the premise that it’s infeasible to author that number of papers in a year.

I have published >25 papers and book chapters in a year, and I know other people who do this regularly. To me, there is no question that (a) it is possible and (b) that they really are the authors, with no question of fraud. Firstly, the idea that prolific authors constrain their activity to “working days” is naïve. Most will be working throughout a normal weekend, and working in the early morning and late evening, especially in China (see Barnett, Mewburn & Schroter, 2019). A hallmark of a prolific author is emails sent early in the morning and/or late at night. These give you an indication of their working hours, and of how they are struggling to keep up with correspondence on top of writing papers.

Authorship of a publication is often the result of several years of work, and it can come at many different levels of investment (see Part I). Thus, from my perspective, authoring a lot of publications reflects the accumulated activity behind each one: the initial concept for the work, raising the money, conducting the field work or experiment, analysing the data and then writing it up (with the subsequent submission and peer review time; see Part I). Often, the publications led by students are the culmination, many years later, of a long investment of both time and money. And in the biological sciences, good study systems keep giving.

30.2 Salami-slicing

‘Salami-slicing’ means different things to different people. While many people use it to mean the multiplication of research into as many papers as possible, or slicing research into ‘least publishable units,’ others use it to mean publishing the same paper more than once in different journals. The latter should be considered self-plagiarism at best and fraud at worst, and if you find examples of dual publications, these should result in retractions. Duplicates don’t need to be exact copies. I edited a submission where one of the referees alerted me to the fact that the same data, with a very similar question, had already been published two years previously. In this instance, I passed the submission to the ethics panel of the journal, who rejected it and flagged the authors for scrutiny in the case of future submissions. Note that having a conference abstract printed in a journal does not prevent you from publishing a full paper on the same work. I would maintain, however, that you should rewrite the abstract to prevent self-plagiarism (Measey, 2021).

During the production of any dataset, you are likely to find that you are able to answer more questions than you originally set out to ask when you first proposed the research (i.e. at preregistration; see Part I). The question you will be left with is whether you should add these post hoc questions (questions that arise after the study) to the manuscripts that you planned to write when you proposed the research, or whether they should be published as separate publications, clearly identified as post hoc questions. The realities of publishing in scientific journals mean that in many instances you will be restricted by the number of words a journal will accept. For certain outputs, then, it will not be possible to add post hoc questions, or potentially even to report all the questions in your preregistered plan.

There is nothing wrong with writing papers based purely on findings that you came across during the study: post hoc questions. There is a clear role in scientific publishing for natural history observations. But any publication (or part of a publication) that results must indicate that it stems from a post hoc study. To my way of thinking, it would be more useful if journals had separate sections for such studies, with other publications stemming only from those that can show a preregistered plan. This would clearly improve transparency in publishing, and avoid accusations of p-hacking or HARKing.

At what point does the separation of research questions into different papers become ‘salami slicing’? There is no simple answer to this question, and editors are likely to disagree (Tolsgaard et al., 2019). However, there are ways in which you as an author can make sure that your work is transparent, and therefore that you are not accused of ‘salami slicing.’ First is the preregistration of your research plan. Second is to preprint any unpublished papers that are referenced in your submission. There are also guidelines from COPE on the ‘systematic manipulation of the publication process’ (COPE, 2018c). And the last is to be transparent when you publish post hoc research.

A manuscript in which the authors cite another very similar study that is not available to reviewers or editors should raise a ‘salami slicing’ red flag. Obviously, when you produce a number of outputs from a research project, they are likely to be linked and therefore cited by each other. However, when these are not available to reviewers and editors (as preprints or as preregistration of the questions), authors should expect to be asked for these manuscripts to demonstrate that they are not salami-slicing. Perhaps worse, however, is when authors deliberately hide any citation to another very similar work. In the end, we have to rely on the integrity of the researchers not to be unethical or dishonest.

30.3 Is writing a lot of papers a good strategy?

This is a long-standing question, and one that you may find yourself asking at some point early in your career. I’d suggest that the answer will have more to do with the sort of person you are, or the lab culture you experience, than with any strategy that you might consciously decide on. If you tend toward perfectionism, this will likely result in fewer papers that (I hope) you’d consider to be of high quality. If, on the other hand, your desire is to finish projects and move on, you are more likely to tend toward more papers. It is clear that the current climate pushes towards the latter strategy, with increasing numbers of early career researchers bewildered by the pressure to increase their publication metrics (Helmer, Blumenthal & Paschen, 2020). But what should you do?

Given that the ‘best’ personality type lies somewhere in the middle, you can decide for yourself whether you identify with one side more than the other. But which is the better strategy? Vincent Larivière and Rodrigo Costas (2016) tried to answer this question by counting how many papers unique authors wrote and seeing how this related to their share of authorship on papers in the top 1% of cited papers. Their results showed clearly that for researchers in the life sciences, writing a lot of papers was a good strategy if you started back in the 1980s. However, for those starting after 2009, the trend was reversed, with authors writing more papers being less likely to have a ‘smash hit’ paper (one in the top 1% of cited papers). Maybe the time scale was too short to tell: after all, if you started publishing in 2009 and had >20 papers by 2013, you would have been incredibly (but not unfeasibly) prolific. Other studies continue to show that in the life sciences, writing more papers still provides returns in terms of citations: the more papers you author, the higher the chance of having a highly cited paper (Sandström & Besselaar, 2016).

One aspect not considered by Larivière and Costas (2016) is that becoming known as a researcher who finishes work (resulting in a publication) is likely to make you more attractive to collaborators. Thus, publishing work is likely to get you invited to participate in more work. Obviously, quality plays a part in invitations to collaborative work too, pulling the argument back to the centre ground.

There are other scenarios in which you might be encouraged to write more. In Denmark, for example, research funding is apportioned to universities based on the number of outputs their researchers generate in a points system, where higher-ranked journals earn more points. This resulted in researchers in the life sciences changing their publication strategy, with a notable increase in publications in the highest points bracket following this change (Deutz et al., 2021).

You may find yourself becoming preoccupied with which strategy is best for you, not because you want to, but because your institution is relying on you to pull your weight in its assessment exercise. University rankings are now very important, and big universities like to be ranked highly for research, which depends (in part) on the quantity and quality of their output.

30.3.1 Natural selection of bad science

In 2016, Smaldino and McElreath proposed that ever-increasing numbers of publications not only lead to bad science, but are currently selected for in an academic environment where publishing is treated as a currency. They argued that the most productive laboratories will be rewarded with more grant funding and larger numbers of students, and that these students will learn the methods and benefits of prolific publication. When these ‘offspring’ of the prolific lab look for jobs, they are more likely to be successful as they have more publications themselves. An academic environment that rewards increasing numbers of publications eventually selects for methodologies that produce the greatest number of publishable results.

To show that this leads to a culture of ‘bad science,’ Smaldino and McElreath (2016) analysed trends over time in the statistical power of behavioural science publications. Better science should be reflected in researchers increasing their statistical power over time, as this provides studies with lower error rates. However, increasing the statistical power of experiments takes more time and resources, resulting in fewer publications. Their results, from review papers in the social and behavioural sciences, suggested that between 1960 and 2011 there had been no trend toward increasing statistical power.

Biological systems, whether they be academics in a department or grass growing in experimental pots, will respond to the rewards generated in that system. When grant funding bodies and academic institutions reward publishing as a behaviour, it is inevitable that the behaviour of researchers inside that system will respond by increasing their publication output. Moreover, if those institutions maintain increasing numbers of researchers in temporary positions, those individuals are further incentivised to become more productive to justify their continued contracts, or the possibility of obtaining a (more permanent) position elsewhere. Eventually, this feedback loop, or gamification of publishing metrics, produces a dystopian and dysfunctional academic reality (Helmer, Blumenthal & Paschen, 2020).
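The core of Smaldino and McElreath’s argument is an evolutionary model. The heavily simplified Python toy below is my own sketch of that selection dynamic, not their published model: each lab has an ‘effort’ level (a stand-in for methodological rigour, such as statistical power), lower effort yields more publications, and new labs copy the methods of high-output labs with a little noise. Mean effort duly declines, even though no individual lab ever ‘decides’ to do bad science.

```python
import random

random.seed(1)

# Toy selection dynamic, loosely inspired by Smaldino & McElreath (2016).
# 'Effort' stands in for methodological rigour; the payoff and mutation
# scheme here are illustrative assumptions, not the published model.
N_LABS, GENERATIONS = 200, 60

labs = [random.uniform(0.2, 1.0) for _ in range(N_LABS)]  # initial efforts

for gen in range(GENERATIONS + 1):
    if gen % 20 == 0:
        print(f"generation {gen:2d}: mean effort = {sum(labs) / N_LABS:.2f}")
    # Publication output falls with effort: cutting corners is faster.
    output = [1.0 / effort for effort in labs]
    # Next generation: each new lab copies a 'parent' chosen in proportion
    # to its output, inheriting the parent's effort with a little mutation.
    labs = [
        min(1.0, max(0.05, random.choices(labs, weights=output)[0]
                     + random.gauss(0, 0.02)))
        for _ in range(N_LABS)
    ]
```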

An example of this kind of confirmation-bias-driven drift toward bad science can be found in the literature on fluctuating asymmetry, and in particular in studies on human faces (Van Dongen, 2011). Back in the 1990s, there was a flurry of high-profile articles purporting to show a preference for symmetry (and against asymmetry) in human faces. The studies were (relatively) cheap and fast to conduct, as the researchers had access to hundreds of students right on their doorsteps. The studies not only hit the top journals, but were very popular in the mainstream media, as scientists were apparently able to predict which faces were the most attractive. Stefan van Dongen (2011) hypothesised that if publication bias was leading to bad science in studies of fluctuating asymmetry in human faces, there would be a negative association between effect size and sample size in these studies. Effect sizes are expected to be smaller in studies with high sample sizes, as these come with less accurate measurements (see Jennions & Møller, 2002), but we should not expect this relationship to change depending on the stated focus of the study. However, van Dongen found that the relationship was diminished in studies where fluctuating asymmetry in human faces was not the main aim, suggesting that there was important publication bias.
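The signature van Dongen looked for is easy to screen for in any collection of studies. The sketch below is a generic small-study-effect check, not his actual analysis; the per-study numbers are hypothetical, and the test is simply a rank correlation between sample size and reported effect size.

```python
# Generic small-study-effect check (not van Dongen's actual analysis):
# under publication bias, small studies tend to be published only when
# their effects are large, producing a negative effect-size/sample-size trend.
from scipy.stats import spearmanr

# Hypothetical per-study data: (sample size, reported effect size)
studies = [(18, 0.62), (25, 0.55), (40, 0.41), (60, 0.30),
           (90, 0.24), (150, 0.18), (240, 0.12), (400, 0.10)]

n, effect = zip(*studies)
rho, p = spearmanr(n, effect)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
# A strongly negative rho is the red flag: effects shrink as samples grow.
```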

Where others have looked, publication bias has been found, and it is particularly associated with effect sizes that decline after initial publication in high Impact Factor journals: once a large effect is published in a big journal, the natural selection of bad science results in publication bias and diminishing effect sizes that ripple through lower Impact Factor journals (Munafò, Matheson & Flint, 2007; Brembs, Button & Munafò, 2013; Smaldino & McElreath, 2016), while negative results disappear almost entirely (Fanelli, 2012). One can presume that negative results exist, but that their authors either do not bother to write them up, or that journals won’t publish them.

The direct result of a system driven by Impact Factors and author publication metrics is that we will have a generation of scientists at the top institutions who are trained not to conduct the best science, but to generate publications that can be sold to the best journals. We should be deeply suspicious of any claimed link between top journals and quality (Brembs, Button & Munafò, 2013). Indeed, what we see increasingly is that the potential rewards of publishing in top Impact Factor journals lead not only to bad science, but increasingly to deliberate fraud. Continuing along this path threatens to undermine the entire scientific project, and reduces science and scientists to just another stakeholder in a system ruled by economic markets and their promotion of the fashion of the day (Casadevall & Fang, 2012; Brembs, Button & Munafò, 2013).

References

Barnett A, Mewburn I, Schroter S. 2019. Working 9 to 5, not the way to make an academic living: Observational analysis of manuscript and peer review submissions over time. BMJ 367:l6460. DOI: 10.1136/bmj.l6460.
Bornmann L, Mutz R, Haunschild R. 2020. Growth rates of modern science: A latent piecewise growth curve approach to model publication numbers from established and new literature databases. arXiv preprint arXiv:2012.07675.
Brembs B, Button K, Munafò MR. 2013. Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience 7:291. DOI: 10.3389/fnhum.2013.00291.
Casadevall A, Fang FC. 2012. Reforming Science: Methodological and Cultural Reforms. Infection and Immunity 80:891–896. DOI: 10.1128/IAI.06183-11.
COPE. 2018c. Systematic manipulation of the publication process. Committee on Publication Ethics; Springer Nature.
Cronin B. 2001. Hyperauthorship: A postmodern perversion or evidence of a structural shift in scholarly communication practices? Journal of the American Society for Information Science and Technology 52:558–569. DOI: 10.1002/asi.1097.
Deutz DB, Drachen TM, Drongstrup D, Opstrup N, Wien C. 2021. Quantitative quality: A study on how performance-based measures may change the publication patterns of Danish researchers. Scientometrics 126:3303–3320. DOI: 10.1007/s11192-021-03881-7.
Fanelli D. 2012. Negative results are disappearing from most disciplines and countries. Scientometrics 90:891–904. DOI: 10.1007/s11192-011-0494-7.
Fanelli D, Larivière V. 2016. Researchers’ Individual Publication Rate Has Not Increased in a Century. PLOS ONE 11:e0149504. DOI: 10.1371/journal.pone.0149504.
Helmer S, Blumenthal DB, Paschen K. 2020. What is meaningful research and how should we measure it? Scientometrics 125:153–169. DOI: 10.1007/s11192-020-03649-5.
Jennions MD, Møller AP. 2002. Publication bias in ecology and evolution: An empirical assessment using the ‘trim and fill’ method. Biological Reviews 77:211–222. DOI: 10.1017/S1464793101005875.
Jinha AE. 2010. Article 50 million: An estimate of the number of scholarly articles in existence. Learned Publishing 23:258–263. DOI: 10.1087/20100308.
Larivière V, Costas R. 2016. How Many Is Too Many? On the Relationship between Research Productivity and Impact. PLOS ONE 11:e0162709. DOI: 10.1371/journal.pone.0162709.
Measey J. 2011. The past, present and future of African herpetology. African Journal of Herpetology 60:89–100. DOI: 10.1080/21564574.2011.628413.
Measey J. 2021. How to write a PhD in biological sciences: A guide for the uninitiated. Boca Raton, Florida: CRC Press.
Munafò MR, Matheson IJ, Flint J. 2007. Association of the DRD2 gene Taq1A polymorphism and alcoholism: A meta-analysis of case–control studies and evidence of publication bias. Molecular Psychiatry 12:454–461. DOI: 10.1038/sj.mp.4001938.
Sandström U, Besselaar P van den. 2016. Quantity and/or Quality? The Importance of Publishing Many Papers. PLOS ONE 11:e0166149. DOI: 10.1371/journal.pone.0166149.
Smaldino PE, McElreath R. 2016. The natural selection of bad science. Royal Society Open Science 3:160384. DOI: 10.1098/rsos.160384.
Tolsgaard MG, Ellaway R, Woods N, Norman G. 2019. Salami-slicing and plagiarism: How should we respond? Advances in Health Sciences Education 24:3–14. DOI: 10.1007/s10459-019-09876-7.
Van Dongen S. 2011. Associations between asymmetry and human attractiveness: Possible direct effects of asymmetry and signatures of publication bias. Annals of Human Biology 38:317–323. DOI: 10.3109/03014460.2010.544676.
Wager E, Singhvi S, Kleinert S. 2015. Too much of a good thing? An observational study of prolific authors. PeerJ 3:e1154. DOI: 10.7717/peerj.1154.