How Covid-19 Can Help Us Build Better Scientific Communities


Immediate publicity for Covid-19 science has become imperative.

In April, just as California went into shelter-in-place, a team of researchers from Stanford University released the results of one of the first studies on Covid-19 antibody prevalence.1 Promising to extrapolate the responses of a small drive-through survey to rates of Covid-19 infection across the country, the results went instantly viral.2,3 They estimated that true rates of infection were 50 times greater than current statistics claimed. Critics of Covid-19 restrictions seized on these results: if infections were really that widespread and no one had noticed, the virus must be far less serious than people believed.4 The study’s authors themselves wrote op-eds based on their research, arguing that lockdowns weren’t necessary.5 Life could go back to normal.

But on Twitter6,7 and on personal websites, other scientists began to examine the study, which was publicly available on the platform medRxiv.8 Then the critiques started coming in. Researchers found basic mathematical errors in the team’s analysis: at one point, while validating their data, the team had taken the square root of 0.039 to be 0.0034 rather than 0.1975.9 Moreover, the study’s antibody kit had yet to be formally approved by the FDA,10 and the research team had adjusted their results to account for national population disparities using unproven and disputed weighting.11 The study claimed to have recruited a representative group of participants, but it had found them through targeted Facebook ads12 and emails.13
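That square-root slip mattered because the study’s raw positive rate sat close to the antibody test’s own false-positive rate, so the headline estimate hinged on small-number arithmetic. As a rough illustration only (using made-up placeholder figures rather than the study’s data, and the textbook Rogan-Gladen correction rather than whatever exact method the team applied), a few lines of Python show both the arithmetic and why it is so fragile:

```python
import math

# The arithmetic slip critics flagged: the square root of 0.039 is ~0.1975, not 0.0034.
print(round(math.sqrt(0.039), 4))  # -> 0.1975

def rogan_gladen(raw_rate, sensitivity, specificity):
    """Textbook Rogan-Gladen correction: estimate true prevalence from a raw
    positive rate, given the antibody test's sensitivity and specificity."""
    return (raw_rate + specificity - 1) / (sensitivity + specificity - 1)

# Illustrative placeholder numbers only, NOT the study's actual data.
# With near-perfect specificity, a 1.5% raw positive rate implies real prevalence:
print(rogan_gladen(0.015, sensitivity=0.80, specificity=0.995))  # -> ~0.0126
# But a specificity just one point lower erases the entire signal:
print(rogan_gladen(0.015, sensitivity=0.80, specificity=0.985))  # -> 0.0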

In the normal course of scientific research, these errors would have been corrected long before public release. Public release, in science, generally takes the form of publication in an academic journal. To get into a journal, a study has to be formatted and submitted; survive rounds of peer review; be accepted for publication; and then, finally, be released in an issue.

But the Stanford researchers hadn’t published their study in a journal. Instead, they had posted it on an increasingly common alternative: a preprint server, where studies are made publicly available before peer review. And so, long before any formal journal publication, the study captivated an international audience, as citizens clamoured for any bit of information on a confusing, contradictory virus.14

Covid-19 has turned everything on its head. Months of review are too long to wait for results with life-saving implications within days, if not hours. Immediate publicity for Covid-19 science has become imperative. Now researchers rely on preprint servers to make their preprints—drafts of studies that have not yet undergone peer review—instantly accessible.

Almost a year after the Stanford antibody study, the practice of releasing Covid-19 research as preprints has continued.15 Preprint servers, particularly medical ones, have reported 400% increases in activity compared to the same period last year.16 Some scientists and major online platforms predict that this trend may outlast the pandemic, signalling a widespread shift in academic publishing away from now-antiquated systems and towards a fast-paced, open-access, digital future.

But is this shift really new? Covid-19 may have accelerated it, but the internet has left conventional academic publishing in crisis for decades, as the demands of the information economy have clashed with a peer review process developed during the Cold War.

Peer review forms the heart of the much-praised scientific publication process. This is where the errors in the Stanford antibody study would have been caught: when the manuscript was sent to established scientists for review. Through multiple rounds of editing and feedback, these scientists would have critiqued the study’s methodology, writing, and arguments, and passed a final verdict on whether it met the journal’s publication standards. This peer review process is regarded as the gold standard for scientific knowledge.17 Peer review validates and corrects results, and ensures that, ideally at least, only objective truths enter the scientific canon.18 In this way, science is validated by other scientists, and so builds neatly upon itself.

The idea of having experts verify new science in their field stretches back to the 1700s, when Philosophical Transactions of the Royal Society mailed manuscripts to members “most versed” in the subject matter before publication.19 For the most part, though, journal editors took responsibility for decisions and revisions themselves. There weren’t many papers to choose from: editors often struggled to fill the pages of their journals and had to make repeated appeals for publishable work.20

This dynamic shifted after World War II, when science changed in two major ways. First, as the Cold War raged, scientific study expanded dramatically. People began to recognize the importance of theoretical advances for increasingly pressing issues—national security in particular—and more scientific research took place. Editors no longer struggled to fill the pages of their journals, as they once had; quite the opposite.21 Second, science began to specialize. As scientific techniques advanced and fields developed, research results became increasingly niche. Editors realized, in more and more cases, that they didn’t know enough to review the articles themselves.22 Outsourcing judgment to scholars with more specialized knowledge became necessary, especially for medical research, where advances were happening at accelerating rates.23

After the Cold War, the peer review process became institutionalized in the practices of most major journals.24,25 And as the internet developed, peer review and journals shifted online with little fanfare.26 Scientists received papers to review through their email inboxes instead of their post boxes. For the most part, journals moved to paid access on their websites. Some journals, however, took advantage of the dramatic reduction in printing costs and switched to open-access models, with papers free for anyone on the web to read—a model known as Gold Open Access. The journals that adopted Gold Open Access, however, make up only 10% of all online journals today, meaning 90% of publications remain hidden behind paywalls.27

But the internet made possible another form of publication, one that circumvents paywalls and conventional timelines. Scientists themselves gained the ability to publish and distribute papers widely at any point in the publication process. This kind of open-access publishing—called self-archiving—has proliferated through new online platforms, including ResearchGate and Academia.edu, the so-called “social media” for scientists.28,29 Preprint servers like arXiv, bioRxiv, and medRxiv emerged, dedicated to self-archiving preprints, subject to basic eligibility checks, before peer review.

These servers have thrived over the last few months, despite controversy surrounding studies like that of the Stanford antibody researchers.30 Some changes have come to pass: medRxiv has placed a large red notice on its homepage emphasizing that preprints are preliminary and should not be used to guide clinical practice.31 Other preprint servers have also raised their bar for acceptable preprints: bioRxiv, for example, no longer accepts studies based purely on computational modelling with no empirical foundation.32

Over the last few months, the rate of preprint publication on Covid-19 research has dropped33 as journals have accelerated peer review to a few days, instead of a few months, for pressing studies.34 Meanwhile, alternative academic structures have emerged to home in on Covid-19 research: one journal, Rapid Reviews: COVID-19, or RR:C19, applies machine-learning algorithms to well-known preprint servers to identify the most important, urgent papers.35 Then, RR:C19 draws on an existing network of scientists to source reviews and publish those papers, often in a matter of days.36 Johns Hopkins launched a similarly accelerated publishing process, amassing and reviewing a compendium of “original, high-quality” research and “papers receiving significant attention, regardless of quality.”37

These fixes, however, are hardly sustainable in the long term; machine-learning algorithms, which are far from infallible, are difficult to develop for all fields of research, especially because definitions of urgency vary hugely not just from topic to topic, but also from researcher to researcher.38 The broader questions about the scientific process remain. How can academic publishing adapt to the internet? Can knowledge be both accessible and robust? What does a ‘better’ scientific process mean?

Even as we live in isolation, Covid-19 has revealed how interdependent our societies are. The scientific process thrives on that interconnection. Knowledge is always produced in the context of a community: the peer review process acknowledges this by ensuring that no study is ever truly individual.39 But peer review, by the very definition of “peers,” can never become fully inclusive.40 A move towards preprint release would expand scientific communities beyond elite institutions and those able to pay for journal access.

Preprint servers also provide increased accountability for scientific practice. As we move forward, we can shift preprint servers from an accessory to an integral part of the scientific process. Covid-19 has already forced today’s preprint servers to set standards for new research. Notably, with the recent attention paid to preprint servers, the retraction rate for Covid-19 studies has also increased—a sign that scientists are raising the bar for their studies’ quality, often under public pressure.41 These corrections, like those to the Stanford researchers’ study, have been prompted by the most informal of discussions: a thread on Twitter, for example, where scientists talk in an open forum.

Because anyone can access a preprint server, anyone can participate in these discussions, not just those invited to comment in a conventional peer review. One of the primary criticisms of peer review is that it can perpetuate existing hierarchies: reviewers tend to come from elite institutions and in turn favor papers from such institutions.42,43 Preprint servers offer a more open and equal platform for distributing good papers, whatever their origin.

The flip side of this accessibility, as the Stanford researchers’ study exemplifies, is that mistaken data or flawed conclusions can receive undue publicity. Even preprints without mistakes can be taken out of context, with potentially harmful implications.44 But all open-access data faces these challenges. Here, they represent an opportunity to improve research, not a reason to limit access to it.

Because preprint servers can give a study such widespread reach, using them incentivizes scientists to consider their work’s impact on communities beyond their peers. That consideration can mean taking extra care with the quality of writing and analysis before public release. But it can also mean engaging in intentional, two-way dialogues with those broader communities about new knowledge and its implications. Used wisely, preprint servers can form part of the path to a broader, more diverse, and more efficient kind of research—exactly what we need as we build a stronger scientific future.
