
Image – Getty iStockphoto: Andy
By Porter Anderson, Editor-in-Chief | @Porter_Anderson
See also:
Artificial Intelligence: Threat, Opportunity, and Shimmr
Italy’s Publishers and Artificial Intelligence in Europe
Artificial Intelligence: Issues in a Hot Summer’s Debate
The UK’s IPG Announces AI Training for Independent Publishers
New Version of a Big Database: ‘Scopus AI’
In Amsterdam, Elsevier rightly prides itself on Scopus, its database of scholarly publishing citations and abstracts, as the world’s leading such facility for researchers, whose efforts to report out their discoveries and analyses depend, of course, on speed.
As their promotional copy puts it, “Scopus uniquely combines a comprehensive, expertly curated abstract and citation database with enriched data and linked scholarly literature across a wide variety of disciplines.” It’s accessed by subscription. It’s already known to be very fast. Now, Elsevier says, it’s getting even faster.
Today (August 1), the research publishing giant has announced through its New York offices that it has released an alpha edition of “Scopus AI” for researcher testing. It’s described as “a next-generation tool that combines generative artificial intelligence with Scopus’ trusted content and data to help researchers get deeper insights faster, support collaboration and societal impact of research.”
And in an irony emblematic of the moment—when it’s not enough to simply announce that you’re using AI to make your systems better—almost the second thing out of Elsevier’s communications department is (emphasis ours): “For more than a decade, Elsevier has been using AI and machine-learning technologies responsibly in its products, combined with its unparalleled peer-reviewed content, extensive data sets, and sophisticated analytics to help researchers, clinicians, students, and educators discover, advance, and apply trusted knowledge.”
The outlines of the era start glimmering into view, don’t they? As artificial intelligence and its constantly debated efficacy and ethics mushroom, the next denomination of AI currency is to be able to say that this isn’t new, that you know your way around a large language model, and that your offer is now better than ever.
Elsevier says that what researchers now can explore in Scopus AI includes:
- “Summarized views based on Scopus abstracts: Researchers obtain a concise and trustworthy snapshot of any research topic, complete with academic references, reducing lengthy reading time and the risk of hallucinations.
- “Easy navigation to ‘Go Deeper Links’ for extended exploration: Scopus AI provides relevant queries for further exploration, leading to hidden insights in various research topics.
- “Natural language queries: Researchers can ask questions about a subject in a natural, conversational manner.
- “A soon-to-be-added graphical representation, offering new perspectives of interconnected research themes: Scopus AI visually maps search results, offering a comprehensive overview that allows researchers to navigate complex relationships easily.”
That reference to “hallucinations” invokes a term in the field for instances in which an artificial intelligence returns false results and even operates generatively on the basis of those results, without demonstrable “awareness” that the material isn’t viable.
Interesting parallels form here, in fact, between the research-publishing field in which Elsevier looms so large and the trade, in which memoir and biography are a major sector. On Monday, a story in Spain’s El País offered advice on how biographers “may need artificial intelligence technology” to sort through floods of “plausible” data about their subjects. Today, in Elsevier’s media messaging, we read, “Researchers, especially those early in their careers or working across disciplines, face significant challenges and complexity in their daily work, including an ever-growing volume of data, prevalent misinformation, and increasing workloads.”
In short, on both of the great sides of book publishing, scholarly and commercial, awareness, concern, and response are rising to the understanding that with the formidable content-gathering capabilities of certain AI models can come deepening levels of risk.
‘Trusted Content’
Researchers, Elsevier says, “need to understand and explore a particular topic quickly, recognize links across disciplines, and collaborate with others for greater research and societal impact. Large language models have captured the world’s imagination with their ability to generate content, but they also have shortcomings such as a lack of transparency and hallucinations—inaccuracies or misinterpretations often occurring in AI-based information generation—which can undermine trust in the results delivered.”
The brand promise of Scopus AI, the company is saying, has to do with summaries “based on trusted content from more than 27,000 academic journals from more than 7,000 publishers worldwide, and with more than 1.8 billion citations … Content is rigorously vetted and selected by an independent review board made up of 17 world-renowned scientists, researchers, and librarians who represent the major scientific disciplines.”
And that’s as good a précis as any of both the perils and potentials of so many elements of AI implementation. If anyone can maintain the right quality assurance, it would appear to be Elsevier or one of the major rival companies in the scholarly and research world of publishing. Clearly, however, the company is keenly aware that a certain proof of performance lies ahead for any application of various elements of these technologies.

Maxim Khan
In a prepared statement, Maxim Khan, the senior vice-president for analytics products and data platform, is quoted, saying, “Researchers need to understand unfamiliar topics, often with little time to do so. Greater collaboration between people in different research disciplines can also lead to greater academic and societal impact of research.
“We are applying generative AI on top of our data and trusted content to help researchers with these needs.
“Elsevier has been committed to working with the community and using AI responsibly for many years, from creating quality data to support decision making in research, to helping our customers assess the risks of potential new drug treatments. This is an important next step as we build more sophisticated solutions that will support our customers in the future.”
The news media today are guided to a document on Elsevier’s “Responsible AI Principles,” and you can review that page here.
Elsevier has issued this promotional video about Scopus AI:
More from Publishing Perspectives on artificial intelligence and its debate relative to publishing is here, more on digital publishing is here, more from Publishing Perspectives on scholarly publishing is here, and more on Elsevier is here.