Analysis of qualitative interviews to identify whether Octopus might help researchers produce high quality, open research

Publication type: Analysis
Language: English
Licence: CC BY 4.0

Interview recordings were transcribed and analysed for underlying themes.

When asked about how they share research, all respondents indicated that the traditional peer-reviewed journal article is the primary and most important way of doing so, with events such as conferences being a secondary route. For some social scientists, events could also include workshops that gather input from stakeholders to shape their research. Sharing other aspects of research, such as data, methods, or ideas, is very rare compared to traditional papers, which present a complete "story".

Regardless of how sharing is done, almost all researchers we interviewed stressed the unmet need for more constructive feedback on research, especially in its early stages. Overwhelmingly, they considered this a key reason for sharing research because early critique can improve methods before work commences. They emphasised that the "process" of how research is done is more important than "output", because "...the process is something you can control".

Sharing research methods is not limited to describing them in the methods section of a traditional paper; it also extends to other formats in which they are documented, such as protocols or software code. The latter is seen as occasionally shared but rarely reviewed.

Some interviewees recognised other benefits of sharing research, noting that it:

  • Reduces duplication of precious time and effort;

  • Reduces friction in finding and accessing material, such as data;

  • Ensures honesty, transparency, and accountability, especially (but not exclusively) if the research is publicly funded;

  • Creates new opportunities for networking; and

  • Acts as a backup of a researcher's work, allowing them to find it again more easily.

The researchers agreed in principle that open research is desirable. One described an ideal situation where the entirety of their research is documented in an electronic notebook, because it would "...save so much time, people could use that data retrospectively, they wouldn't have to repeat things." Despite this ideal, the researchers spent most of their interview time describing barriers to its realisation.

Barriers to research sharing

Some of the barriers to sharing research are technical or personal, but most relate to political pressures experienced by almost all interviewees.

Technical barriers

Several researchers shared concerns over publishing sensitive data, the most common type of which is personally identifiable information. Other sensitive information could be safety-related, such as communication protocols for commanding spacecraft, which could be misused to redirect their trajectory. The interviewees generally recognised that this is not a binary issue, but rather a difficult set of trade-offs where one should strive to be "as open as possible, as closed as necessary." One of them mentioned the usefulness of publishing representative, "synthetic" datasets rather than none at all.

Other concerns include not knowing how to avoid predatory journals and the prohibitively high article processing charges (APCs) of some open access journals. The latter has wider political dimensions in terms of inequitable access to resources across the world. For example, one respondent suggested that high APCs have a gatekeeping role, forcing those with less funding to publish only in journals they can afford, rather than in those most appropriate for the subject matter and with the highest visibility.

Personal reasons

As described above, most of those we interviewed recognise the value of sharing research, especially in its early stages. However, many also expressed a fear of publishing intermediate, immature, or incorrect outputs. They anticipate personal embarrassment and being judged negatively by their peers. One respondent said: "I still find so many typos in my papers now", and even these relatively minor mistakes are a continuing source of unease.

Political factors

One interviewee praised the scientific value of pre-registered replication studies, but ultimately concluded that they will not be doing this kind of open research. Like most others who were interviewed, this researcher cited lack of time as a major barrier to sharing more of their work: "I'd be drowning... If you're a successful person, you don't have the time to do that... I get no recognition for being involved in those projects at all. There's no value to me to do that." While this researcher is based in industry, the sentiment is widely shared among academics. When questioned further, most participants revealed that this barrier to sharing is due to political considerations, especially with regards to how their careers would be impacted.


Top among these concerns is the perceived risk of being "scooped". Depending on the research discipline, the target of scooping could be data (such as in fields reliant on secondary data), research ideas (such as for theory-focused researchers), or methods (such as software code or hardware designs). There is very little trust among researchers that their work will not be scooped, partly because, as one interviewee described it, the current system "rewards scooping".

While most did not explicitly define scooping, it was most commonly described as a form of plagiarism. To mitigate this behaviour, some participants noted that a benefit of publishing early stages of research is the creation of a historical record of who first thought of and did what. For example, one physical scientist described how the GitHub version control platform maintains a detailed history of "commits", which tracks the who, when, and what of changes to software code on a fine scale. In principle, having such a paper trail allows proper attribution in cases of plagiarism. That said, even if a time-stamped record of work exists, the interviewees stressed that institutions must be reformed to recognise these non-traditional forms of publishing research.

Even without plagiarism, another form of scooping stems from a research culture that rewards those who are first. For example, one researcher feared that even if they publish their research idea and put their name on it, someone else could still beat them to winning a grant based on that idea. Therefore, according to another interviewee, there is considerable incentive for everyone to keep their work secret at least until a peer-reviewed, high-impact paper is published or a grant is awarded (and not necessarily even then).

In addition, there is substantial fear of negative career consequences from being "caught" making honest mistakes. A biologist we interviewed recalled that, during the peer review of a submitted paper, problems with a reagent used in their experiment suggested the results were not as high-impact as originally thought. This meant that their work could not be published in a high-profile journal, bringing far less benefit to their career. According to this biologist: "...if I have done something wrong, I want it to be found out... but it would be a horrible experience to go through". This quote is consistent with the widely expressed desire for early-stage feedback, and suggests the current system might be punishing those who are honest about their mistakes.

Causes of questionable research practices (QRPs)

The pressure to publish high-profile papers while avoiding – or not revealing – mistakes could lead to questionable research practices (QRPs). Examples of QRPs include inappropriate randomisation and blinding in studies and, most commonly, data manipulation and cherry-picking.

One quantitative social scientist described widespread data manipulation in their field, or "trying different methods to get a significant result". This is enabled, in part, by a focus on traditional papers that does not require the publication of data, code, or detailed and replicable methods. In fact, asking researchers to publish these other components of their research can "[sound] like you're killing them." This has become so normalised that, when confronted about "cleaning" data, a common justification for this QRP is that "no one would know about it".

Bias in research assessment

Overvaluing publication records

Of those interviewed, academic researchers overwhelmingly cite traditional peer-reviewed papers as the key consideration in research assessment for funding, career progression, or national-level university evaluations. Many lament that the content of research is not important as long as it is published in a high-profile journal: "...it kind of doesn't matter so much what you did because once your name [is] on the paper, that's like, you've got it. It's in the bank." Some job openings even require applicants to have published in a select list of the most “high impact” journals. The quantity of publications is just as influential, with early career researchers taught to break results down into "minimum publishable units" to maximise the number of papers. One interviewee also lamented that in many assessments, a paper in "[an open access journal] doesn't count as a publication."

Additionally, one participant voiced concern that some universities or research funders only consider papers above a certain number of citations. This way of doing assessments only values research that is currently popular. This interviewee works in a highly specialised field where they publish their work in a journal that is topic-appropriate and where they can receive the most useful review of their work. However, because the journal is so niche, it does not rank highly in the citation-derived metrics that assessors consider. This has hampered the development of this researcher's career and their job security, despite widespread praise from downstream practitioners for the value of their work.

Interviewees also noted that the importance of publication record in assessments gives an oversized role to those reviewing articles. Typically, only two peer reviewers are assigned to a submitted article, and their perspectives and biases could potentially derail the career of a researcher. One engineer we interviewed recalled how their manuscript on construction materials for buildings was rejected by a reviewer because it was not useful for aircraft. Another interviewee noted that it is unfair to place such a great responsibility and stress on peer reviewers, as their critique might have long-lasting implications for others beyond the content of the paper itself.

Another respondent noted that because funding agencies or universities rank people with “impressive publication records” higher than those without, they – in effect – conduct assessments "not from judging pieces of research, but from judging researchers".

The politics of attribution

The obsession with publication record that the interviewees perceive in assessment engenders a complicated set of politics and competition around authorship on papers.

One aspect is the very strong competition to be the first to put one's name on a piece of research, which can easily "make or break" careers. Personal connections and prestige are perceived to be key in this arena. Instead of sharing, this environment promotes "castle building", where research outputs (such as software code) are kept secret, and "...if you want [me to share this] capability, you need to have me on your team". The capabilities in question could also be tacit knowledge and skills, or components of research such as protocols and data. Data in particular can be traded as a currency for paper authorship. This practice is sometimes formalised: "If you want to use other people's data [sets], then you might need to sign the contract saying that if you use their data set to produce any work, then their names should be on the papers as well." This is also reflected in views on the goals of networking in research, which is defined as knowing the right people in order to obtain the data you need, rather than as a means of intellectual exchange.

When a traditional, peer-reviewed paper is being drafted, deciding authorship and its order can be complicated and stressful. A common symptom is an unclear division of labour that leads to the misappropriation of credit. For example, someone who provided guidance on the research might be placed in a more prestigious slot, like first or last author (depending on the discipline), while those who actually carried out the work receive less focus. There is also disproportionate recognition in authorship, such as when those who did 90% and 10% of the work are placed at positions in the author list that imply equivalent contributions.

Interviewees described their struggles defining what levels of contribution merit authorship, especially when the effort behind that contribution seems small. For instance, one author had trouble deciding whether to include someone who provided useful but brief comments that probably did not take a lot of time. In other situations, those who are considered to be "plumbers" – such as statisticians, software programmers, or local “fixers” – are often demoted on the author list. For example, a statistician whose feedback completely changed the focus of a paper and its target journal was only mentioned in its acknowledgements section.

In addition to these legitimate challenges in deciding authorship, there are several forms of political pressure. Sometimes junior researchers are left out of authorship to make room for those with more power. In other cases, some authors are added – possibly as a favour to them – even when no one who conducted the research knows them or what their contributions were. One interviewee was forced to add an author who was "[a senior researcher] literally just [because they] gave me a [sample] on dry ice". Also, the prestige of the author list is so important that "...you would want to have the Nobel Laureate at the top, just to make sure that you get picked up by a journal." According to another: "I think it's already become a norm now that people accept the fact that you don't need to do anything. You just need to know the [right] people, then you put those people together, you'll get the credit as well."

Some interviewees acknowledge that there are existing attempts to provide more equitable attribution in paper authorship, such as the CLEAR or CRediT guidelines. However, "nobody reads it, it has no impact."

The need for a "good story"

For a piece of research to be published in a paper, there is a heavy bias towards what would most likely be considered an impactful, "flashy" story. Regarding “flashiness”, one researcher described it as: "I think most research is just, you know, very small, incremental steps. But it's like you can't really get funded if you can't say that it has huge, like potentially huge, impact on something very downstream." In other words, interviewees generally agreed that assessments "...are heavily persuaded by writing quality, particularly in the idea of storytelling quality, and then making a sound-bitey type point." This is further complicated by the fact that what counts as "interesting" research is in the eye of the beholder. The flashiness of research is so important that, to the frustration of one quantitative social scientist: "...it almost feels like [...] I'm a novel writer instead of a researcher."

Another researcher was concerned that the impact of research is typically, and often solely, measured in "capitalist" terms (i.e. how much profit can this research generate?) or in terms of "colonialism". The latter could be "parachute research", where communities studied or affected by the research have little to no say in how it is done, shared, or assessed.

Some interviewees noted that the bias towards research with "impact" neglects the nature of doing science, which is often meandering and non-linear. Research is often built on mundane, boring "grunt work" that is valuable, and might eventually build up to impact that is not initially apparent. Discoveries often happen during this grunt work, and "...it's the practical stuff, really, that churns out the interesting stuff and then that's kind of where you work from." For one social scientist, traditional papers take too long to publish and are not useful for the stakeholders they work with. Instead, there is value in spontaneous and unplanned work, such as co-developing a survey with a community partner for whom the research can have direct benefit. In any case, researchers are sometimes pressured to retroactively come up with a good story to justify their work, which can be frustrating.

Other biases and discrimination

The interviewees raised other forms of bias and discrimination in research assessment, such as:

  • "Credentialing" where, for example, if a researcher with “only” an undergraduate degree is listed in a grant application, it will be discriminated against regardless of actual merit.

  • Personal characteristics such as gender or race affect assessment outcomes.

  • The personal geopolitical biases of referees or journal editors can inappropriately decide the outcome of peer reviews.

  • Funders sometimes define their remit too narrowly, missing out on valuable interdisciplinary research.

  • Support for research, especially financial support, is narrowly aimed at academic institutions, which excludes many non-institutional researchers.

Improvements to research assessment

Despite prevalent misgivings about the current state of research assessment, those we interviewed identified several ways that the process could be reformed.

While guidelines such as CLEAR or CRediT exist to provide better attribution for authors of traditional peer-reviewed articles, that information tends to be ignored by readers and by those performing assessments. Encouraging, or possibly requiring, these guidelines to be incorporated into, or to replace, author lists could be a useful first step.

Similarly, some interviewees wish that open research practices were valued in assessments. Paper trails, such as commits in a Git repository or hypotheses published on Octopus, could be used as a source of accurate attribution. This way, "...even if I announce my hypotheses, but I never get around to testing it [...] and someone else does. That's totally fine, because you've timestamped that hypothesis... and you can be much more open."

Several of those interviewed highlighted the need to reward researchers who openly share mistakes. One also pointed out the value of recognising limitations in a study, and suggested that discussion of limitations should be required in papers. Such discussions should recognise that research quality is not a binary issue, but the management of trade-offs resulting from practical constraints. Another researcher believed that positionality statements – which present a researcher's experiences and perspectives relevant to a study – should be required not just in the social sciences but for all research, because we all bring our perspectives to the work we do and should not pretend to be objective.

Most interviewees agreed that assessments are over-focused on outputs, whether that is papers in academia or patents in industry. Assessments should be based on the process of research, not its products. This could mean that in addition to reviewing methods, assessments should value the usefulness of negative or null results. Some noted this as a key difference between academia and industry where, for example, a null result could be viewed as valuable for a pharmaceutical company because it helps them avoid unproductive avenues for drug development. When assessing methods, one biologist observed that assessments are often done by senior researchers who do not perform any practical work, and can no longer effectively appraise it. Instead, "it should be grunts assessing grunts, right?"

Crucially, several researchers stressed that the method for assessment should itself be subject to critical scrutiny and research. In one large collaboration in the physical sciences, a social scientist was brought in for an ethnographic study on the collaboration itself. Insights from this study helped these researchers reflect on their collaboration, and potential ways to improve it. Another researcher noted that when assessing assessment, community stakeholders beyond the nominal, academic researchers should be involved.

Encouraging more openly collaborative ways of thinking and working

Unfortunately, as evidenced by the pervasiveness of structural problems that the interviewees described, most of them are not hopeful of positive changes. Some described a brain drain from research, especially academic research, because "there's people now who want a change, but they're not in a position for the change to happen. And by the time they are, everyone's giving up. They're leaving." Academics are especially overworked and underpaid, and as described in one sharp comment: "...anyone with half a brain cell now realises that the academic system is just not a level playing field and they just get the hell out of the dodge as quickly as possible."

In addition to what has already been described, interviewees highlighted issues preventing open critique of research, and how the division of labour and specialist skills are not recognised and rewarded, resulting in everyone becoming overburdened.

Open critique of the research process

Lack of time is a common barrier not just to doing open research, but also to providing effective critique. One social scientist described how they are stressed by not having sufficient time to provide quality peer reviews of papers or make fair editorial decisions, yet these activities are expected of academics. Another said that offering critique is difficult because traditional papers have a sense of finality that does not welcome further feedback.

In the context of open research, giving critique publicly can be intimidating, not just from a lack of confidence or fear of discrimination (such as based on gender), but also the possibility of retribution from those in positions of power: "...being vocal means that I often get in trouble.... they don't invite me to meetings, for example, because they don't want someone sitting there throwing a spanner in the works or something, right? They'd rather just try and get by without anyone mentioning anything."

There is a perceived lack of social structure for feedback outside of the peer review process for papers. Some find it hard to give unsolicited feedback, while others decry the absence of a safe way to communicate with more powerful or senior researchers.

That said, one researcher recounted feeling validated and encouraged by positive feedback, which meant that they were "on the right track".

Division of labour

Early career researchers tend to act as generalists, and have to take on practically all of the work, end-to-end. This is especially true for students from undergraduates to those pursuing a PhD. This might be expected, as many training degrees are designed to give people a generalist overview and experience across all parts of the research lifecycle. In some circumstances, the tasks include applying for research funding which, as mentioned above, could be difficult for those with only undergraduate credentials regardless of the merit of their proposed research. In any case, these early career researchers do identify gaps in their abilities, and bring on specialists as needed. They perceive that those in more senior positions do much less of the practical steps in research, such as data collection or analyses.

Regardless of career stage, there is widespread sentiment that specialist contributions to research are often "invisible" and unappreciated. For example, one physical scientist we interviewed performed a major overhaul of the analytical source code underpinning a major research project. This contribution required expert knowledge in both software engineering and the underlying science. However, other than receiving verbal appreciation, this effort was largely "thankless", and the researcher was pressured to "pivot towards publications", which is considered more productive. Similarly, work by statisticians or data scientists is often unappreciated: "Sometimes they wouldn't be put on a paper as a middle author, and maybe they would be put in the acknowledgement... the view of the statisticians is being like one of the plumbers or something like that, where '[they are] just calculating p values, right?'" A manifestation of this lack of understanding is that senior project managers, such as principal investigators, can hold unrealistic expectations of what junior researchers should produce with limited resources. This could, for example, take the form of a project manager setting an unrealistically short timeframe for a junior team member to complete certain research tasks.

The diverse forms of this crucial, but unrecognised labour also include "fixers" with expert local knowledge to facilitate social science research; interview transcribers or translators with tacit contextual knowledge; or various research assistants. Several interviewees also recognised that critical specialist contributions to research extend beyond those directly related to the subject matter. They could be administrators and finance staff in large projects, or professional writers and graphic designers.

One social scientist reminded us that the forms of reward and recognition for contributions can be just as diverse as specialisms. Consequently, researchers and institutions should be mindful of how contributors would like to be acknowledged and rewarded in addition to traditional paper authorship.

Importantly, several interviewees stressed that doing good open research is itself a professional skill. Sometimes, senior researchers may seem receptive to open research practices, but typically delegate the practicalities to junior team members. Like other specialisms, the "articulation work" of opening up research is not recognised. Another researcher suggested that rather than requiring yet another skill for overburdened academics to excel at, specialists should be employed within a project to ensure it is managed according to open research best practices.

One specialism that is considered almost universally important is networking, often for political – rather than intellectual – reasons. For example, assessments for tenure or promotion hinge not only on a "flashy" publication record, but also on the personal connections of the person being assessed. These acquaintances are asked to provide anonymous references for the tenure-seeking researcher. Even the speed at which the references are provided can be measured during assessment, with letters received earlier scored higher. Therefore, junior researchers are constantly stressed by the need to cultivate connections in anticipation that some might later be asked to review their performance. It was in this context that one interviewee said: "...who you work with matters, potentially more than almost anything else."

While early career scientists feel they have to be generalists, senior researchers also believe that this lack of a clear division of labour is inefficient and places an undue burden on everyone. To quote one exasperated researcher: "Why are you asking the rocket scientists to figure out the Zoom meeting?"

In academic settings, the pressure to be good at everything is a major source of stress, including for those who do not want to become generalists. The burdens are not limited to research, but extend to everything that is asked of an academic and that they are assessed on, such as teaching or administration. This pressure often leads to the QRPs discussed previously: "...[academics are] under so much pressure like to teach, to publish, to be at conferences, to do this, to do that... 'Do you have a [social media] account?' 'You have to engage with the students.' It's like you can't do all that stuff, and the only way that you'll meet those [assessment] criteria, like what they call metrics, is to cheat. There's no other way of doing it." One biologist we interviewed did not want to cheat, and decided to leave academia because of these untenable pressures. There is a fear that this brain drain will lead to a vicious cycle where only those with questionable ethics remain in academic institutions.

Funders

This Analysis has the following sources of funding:

Conflict of interest

This Analysis does not have any specified conflicts of interest.