Interested in getting involved or contributing in some way?
Check out our Contributing Documentation for more details about how to get involved.
This website is still in development; there is plenty we need to fix and clarify. So if something doesn’t make sense, let us know by contributing!
Science is one of the most fundamental drivers of human development. People are drawn to it by curiosity, a wish to understand the world, and a desire to help humanity solve its problems. Unfortunately, many of the current structures in science are arguably not fit for purpose. They are not only inefficient but can lead to dangerously wrong conclusions. Below we examine a few of these problems in order to understand why we need ‘a Science Collective’.
Science in the 21st century is institutionalised. It is carried out in academia, government-run organisations, or for-profit companies. This often leads to narrow, politically or economically driven agendas and to stifling bureaucracies. It also limits those outside these institutions from participating in and contributing to science in meaningful ways. Institutions on their own aren’t necessarily bad, but when top-level decisions are made by people unfamiliar with the scientific process and the needs of scientists, the institution becomes a barrier rather than a support.
Scientific publication is still organised around a for-profit model that imposes paywalls and other barriers while profiting from public funding of science and from unpaid academic input in the form of submissions and peer review. For-profit publishers actively work to maintain the status quo of the research cycle by perpetuating the focus on publication and citation metrics as indicators of “impact”.
Scientific careers and funding decisions are evaluated by metrics (publication count, citations, impact factors, prior funding, h-index) that are biased towards certain output types (journal papers rather than education, code, or curated data). Even though funding bodies claim to seek innovation, the incentives imposed by metrics-based assessment encourage a low-risk, incremental approach with a narrow focus. Innovation is often confused with novelty (unexpected results that get attention in the media). Researchers understand that it pays to follow the current hot topics and trends rather than to pursue true, though higher-risk, innovation.
Taken together, the incentives of the current system do not reward behaviours that are central to the conceptual foundations of the scientific method:
- Replication studies are undervalued: They are not seen as innovative, nor do they typically yield impactful findings, so they are difficult to publish in the current publication system. There is therefore little incentive or pressure to conduct them.
- Reproducibility is basically non-existent: Sharing of data, methods, and analysis code is barely recognised, nor can these be published independently of a result with ‘novelty’ value.
- Null or ‘negative’ findings are often discarded: As with replication studies, null findings are usually considered neither novel nor impactful.
- Methods development receives little regard or support: The emphasis on publication quantity, novelty, and impact leads to undervaluation of thorough work to understand the limitations of currently used methods and of the development and critical appraisal of new approaches.
- Non-traditional research activities are discouraged: Any activity that does not add to the institutionally accepted metrics, such as teaching, public outreach, or building software and datasets, is implicitly and explicitly discouraged.
- Collaboration is limited and difficult: The current metrics incentivise competition over collaboration, as sharing with, supporting, or training collaborators reduces the time and effort spent on adding to the metrics. Funders spend a lot of effort encouraging collaboration, but this is generally assessed only at the level of co-application and co-authorship, not at the level of attaining truly integrated workflows and building up shared resources.
- Funding for non-traditional activities is difficult to obtain: Making data and code follow FAIR principles, building modern IT infrastructure, creating formal training programmes for technical skills, and hiring highly skilled technical personnel are expensive and time-consuming; but because project budgets need to be focused on directly measurable outputs, researchers are generally discouraged from including them in research budgets.
These shortcomings of the current scientific/academic system not only lead to a general degradation in the quality of scientific output and to wasted resources; they also affect the people who were drawn to science by curiosity. Many become disillusioned and demotivated, often leaving academia and science. Those who stay and succeed are often the ones who understand how to thrive in the current incentive system and are best at ‘playing the game’.