This research suggests that open research publication platforms such as Octopus could be an important mechanism for achieving various reforms, but such platforms need to be supported by those who carry out assessments for funding or career advancement.
For example:
Outreach and education about Octopus (and other similar platforms) would alleviate a substantial barrier to sharing, as respondents to the survey and interviews often did not know how or where to publish non-paper-based research outputs.
Funder and institutional policies need to be checked to ensure they reward, rather than hinder, good research sharing.
Overtly recognising the value of good ideas and methodological innovations as much as data or results would help address concerns about “scooping”, whereby researchers feel that they only get credit for work if it involves a “finding” (and a “positive” finding at that).
The eight Octopus publication types can both allow recognition for forms of research other than “results”, and bring recognition to specialists – such as data scientists or data collectors – especially if research assessment can recognise these contributions.
Breaking the link between “results” and other parts of the research process, as registered reports and modular publication platforms such as Octopus do, can mitigate the pressure to produce “positive findings” that drives QRPs and publication bias.
Peer review (and the proposed ratings system) within Octopus is focussed on publications smaller than an entire “paper”. It could therefore help assess the intrinsic qualities of research without that assessment being influenced by the potential findings and implications, again focussing incentives on research quality and minimising publication bias and the pressures for QRPs.
The Octopus open peer review mechanism could satisfy the strong desire for early feedback when developing studies, particularly if the risk of retribution can be mitigated.
The removal of first names and institutions from the top of publications within Octopus (which could potentially be extended to a full replacement of names by ORCID iDs) removes some cues for unintentional gender and institutional bias, as a reader must follow a link to find out more about any individual author. Readers can therefore assess the quality of a publication more on its own merits, without being swayed by cues they might otherwise notice unintentionally.
The use of automatic language translation (with care, to avoid mistranslation incidents) could help reduce the biases faced by non-native English speakers.
During the interviews, participants struggled to break down research in their disciplines into an ordered, discrete set of steps. In addition, we received criticism during the survey that it was framed around the natural and applied sciences, without consideration for other fields such as the arts and humanities. This reflects the diversity in how research is done within and across disciplines, and raises the question of where Octopus should be placed within the wider open research publishing ecosystem. For example, while Octopus presents itself as the “global primary research record”, its eight ordered publication types might not be the one size that fits all forms of research. Indeed, the “research” Octopus currently claims to represent is unlikely to encompass fields such as the arts and humanities, and this should be clarified on its website. In contrast, some platforms allow publishing individual components of research without an overarching structure: GitHub, for instance, is commonly used to publish the code behind scientific software, though it was not originally conceived for that purpose.
Some of the issues revealed by our study – such as the culture of chasing novelty and a “good story” – might be tackled by using a publishing platform like Octopus (which offers itself as a venue where there is no need for a story) as a place to carry out research assessment, breaking the currently perceived link between “good story” and “good research”. Such platforms can sit alongside outlets where a story drives readership; those outlets could commission story-driven articles, potentially authored by, and with commensurate credit to, story-writing specialists such as science writers and journalists. This could be part of a broadening of what counts as a research “output” in assessments to include non-narrative publications. It could also improve recognition for specialist contributions – such as those from statisticians, methodological experts, or local “fixers” – and form part of a movement to include criteria for doing open research in assessments. These reforms should be sensitive to the fact that researchers, especially academics, are already overburdened: for open research to be prioritised and become the norm, other dis- or mis-incentives should also be tackled so that what is being assessed is “different” and not “extra”.
In summary, Octopus, while not sufficient in itself, could be seen as a necessary part of realising systemic reforms in research culture, especially with regard to research sharing and assessment. However, other players within the system (such as funders, institutions, and anyone carrying out research assessment) need to be aligned to create an environment that incentivises best practice, open sharing, and collaboration.