Oxford press surveys researchers on AI, builds tech bridges

GLOBAL

In the age of AI, the scholarly process will break down without new standards and technologies that make sure intellectual work is identified and cited, Oxford University Press has warned. A new survey the press conducted worldwide shows that 76% of academic researchers use some form of generative AI in their work.

Key findings of the survey of 2,345 researchers are that 28% believe AI will revolutionise how academic research is conducted and disseminated, and 27% are excited about AI’s prospects for research. Machine translation and chatbots are the most popular AI tools, followed by AI-powered research engines or research tools.

More negatively, there are concerns about the erosion of critical thinking and about respect for intellectual property rights, with trust in AI companies very low. Interestingly, there are stark generational differences, with older researchers more likely to embrace AI technologies.

Publishers were as surprised as universities by the sudden appearance of generative AI at the end of 2022, first in the form of ChatGPT but soon followed by a giddy array of new AI tools. Some major academic publishers reacted by banning AI use for research articles, but as with universities, the focus has since turned to managing it.

David Clark, managing director (academic) for OUP, told University World News: “Overall, the high level of engagement from researchers at all stages of career, disciplines, and geographies highlights a growing need for publishers to work with research authors, the broader research community, and companies developing large language models [LLMs] to set clear standards for the future.

“We know that generative AI will gradually change the ways in which researchers both retrieve scholarly information and also use it to develop their ideas.

“Making sure the work that they use, and the ideas upon which they draw, can be discovered and properly credited is essential to a sustainable scholarly process in an AI world. We are all standing on the shoulders of giants, and we need those giants to be acknowledged.”

New standards and new technologies are needed to help ensure work is properly identified and new forms of AI-enabled reference and citation are developed. “The scholarly process will break down if these new tools, services and standards are not informed by the needs of the research community,” Clark said.

“Publishers are well positioned to understand researcher requirements and work with technology companies to build solutions with high scholarly integrity, ones that aid the development of future research rather than obscure it,” he said. OUP is working with tech companies that are developing LLMs, which are trained on huge sets of data and use a transformer neural network architecture to process and generate data such as text.

The survey

Oxford University Press is the world’s largest university press, publishing thousands of books a year and with offices in some 50 countries. It issued a survey in March and April 2024, and 2,345 academic researchers around the world responded.

Geographically, 46% of survey respondents were based in the United States, 11% in the United Kingdom, 17% in Europe, 9% in Asia Pacific, 8% in Canada, 3% in Latin America and the Caribbean, 3% in the Middle East and North Africa, 2% in Australia and New Zealand, and 1% in Sub-Saharan Africa.

Respondents were spread almost equally across the humanities, STM – medicine and health, science and mathematics, engineering and technology – and social sciences disciplines. The demographics collected were high-level discipline, career stage, country of residence, and age group, according to the survey report titled Researchers and AI: Survey findings.

Said Clark: “We wanted to share the survey results in their entirety to provide a really broad picture of how researchers are engaging with AI. For this reason, they’re open to interpretation by others in the research community. We hope this will encourage discussion and reflection, rather than dictate how people should be embracing the technologies.”

OUP is holding a series of webinars that will bring together academic researchers worldwide to explore topics highlighted in the research and to discuss how people in the research community can work together to build a sustainable AI future. On 20 June there will be a virtual roundtable titled “AI as a transformative force for academic research”.

Some key findings

Regarding the 76% of researchers who are already using AI in their research, Clark said: “Our research shows that AI tools are mostly being used to streamline existing processes, rather than revolutionising how researchers are conducting their practice at different stages.

“AI was cited as being most used for discovering existing research, analysing research data, and for editing write-ups and summarising existing research. This is supported by machine translation and AI chatbots, which we found to be the most commonly used types of tools.”

According to the survey report, 49% of researchers who use AI deploy machine translation tools and 43% use chatbot tools, followed by AI-powered search engines or research tools (25%). Just over a quarter (27%) report having a good understanding of AI tools in general.

More than a third of researchers who use AI believe it saves time, and among those who have used AI in their research, 67% feel it has benefited them to some degree.

The survey analysis divided researchers into generational groups: Millennials (up to 39 years old), Generation X (40 to 59 years), Baby Boomers (60 to 79 years), and the Silent Generation (80 years or older).

Millennials included a higher proportion of respondents who are completely against AI. A quarter of respondents in the early stages of their careers hold sceptical or challenging views of AI. Early career researchers are also more polarised in their opinions, with fewer expressing neutral views than later career researchers.

Baby Boomers and Gen X include a larger proportion of researchers who fully embrace AI, and researchers later in their careers are also more open to the use of the technologies in their work – just 19% of these respondents are averse to AI usage, says OUP.

A statistical cluster analysis identified eight different profiles of researchers ranging from ‘Challengers’ – researchers who are fundamentally against AI – through to ‘Pioneers’ who fully embrace the technology. Unsurprisingly, Clark said, all types of tools are used most by respondents who fell into the ‘Pioneers’ category, and they want more AI services.

The eight groups identified helped OUP to recognise interesting contrasts in how AI is perceived between different disciplines. “We found that there is the largest proportion of ‘Challengers’ in the humanities,” Clark told University World News.

“Our research also showed a larger proportion of ‘Neutrals’ (those still weighing up the positive and negative effects) in sciences, technology, and medicine (STM), compared to other disciplines, with a similarly large proportion of ‘Pioneers’ in the STM and social sciences disciplines, in comparison to humanities,” he said.

Down sides

Concerns surfaced around how AI will impact academic research in general. One in three respondents worry that researchers’ skills will be negatively impacted and, says the report, 25% of the researchers surveyed believe that AI reduces the need for critical thinking.

“Overall, trust in AI companies is very low, with only 8% trusting AI companies not to use their data without permission and 6% trusting them to meet data privacy and security requirements,” states the report.

Three in five researchers believe that the use of AI in research could undermine intellectual property, and result in authors not being recognised appropriately for use of their work. Consequently, 69% of respondents feel it is important to fully assess the implications of using AI tools before applying them to their own research.

The survey reveals inconsistencies in how institutions are responding to AI, raising questions around how university leadership is dealing with AI developments: 46% of the researchers say that the institution they work at has no AI policy, and 26% are unsure if a policy exists. Most look to academic societies as their main source of guidance, says OUP.

Some interpretation

Some important insights arose from interpreting the survey findings.

“The generational differences are striking but not shocking,” said Clark. “Those at different stages of their careers will typically face different challenges, have different priorities and have differing levels of exposure to technology,” he added.

“Concerningly, one of the key trends indicated that researchers feel there is a lack of available guidance about how they can engage with AI tools. We want research authors to feel that they can realise the benefits of new AI tools, while also ensuring that we are promoting the highest levels of academic integrity,” he explained.

Clark highlighted concerns about AI undermining intellectual property, which he said must be addressed so that researchers are credited for their work. “This is something I imagine to be at the forefront of most researcher thinking as AI evolves,” he said.

“The risk for publishers and, fundamentally, for research authors is the potential of AI technologies to absorb, retain and re-use knowledge. Publishers like OUP are uniquely positioned to advocate for the protection of researchers and their research within LLMs.

“And we are beginning to do so because AI providers are seeking trusted scholarly data to train their models, and the responsible providers appreciate that seeking permission is the most sustainable way to move forward. Above all, we want to preserve the ecosystem that supports academia and the intellectual property which sustains it,” he said.
