Remember the days of endless PDF hunting? Scrolling through search results, clicking link after link, hoping to land on that one crucial paper? It felt like a digital treasure hunt, often more frustrating than rewarding. Well, buckle up, because the way we find and use information is undergoing a seismic shift, largely thanks to the rapid rise of Artificial Intelligence.
It’s hard to overstate just how quickly AI has woven itself into our daily digital lives. Tools like ChatGPT, which exploded onto the scene in late 2022, are now a go-to for millions, with user numbers soaring into the hundreds of millions. And it’s not just standalone chatbots; AI is increasingly integrated into search engines themselves. Think of Google’s AI Overviews or Gemini: these features are changing how we discover information online, with a significant share of users now getting their answers directly within the search interface, often bypassing traditional websites altogether.
This transformation is profoundly impacting scholarly platforms and how we measure research engagement. A key challenge arises from AI tools that can summarize or extract information without necessarily linking back to the original source. This makes it incredibly difficult to track how content is truly being used. Traditional metrics, like how many times a paper was downloaded or how long a user spent on a page, are becoming less reliable. As one expert pointed out, current metrics are essentially “blind to how research articles are being used with AI.”
It’s not that researchers have stopped engaging with valuable content; it’s just that their engagement looks different now. They might be using AI to distill complex papers, extract key findings, or even generate initial drafts of their own work. This means a paper could be absolutely central to a researcher’s breakthrough, yet its contribution might go uncounted in the traditional sense. This phenomenon is often referred to as “usage leakage” or “invisible use” – where content is vital but its impact isn't captured by existing measurement systems.
The scholarly community is actively grappling with this. Organizations like COUNTER, which sets the standards for library usage data, are working on new ways to account for AI-driven interactions. They’re exploring the idea of adding new attributes to usage reports, like marking an access method as “Agent” to distinguish AI-facilitated access from direct human interaction. This would ensure that libraries and publishers get credit for the value their resources provide, even when accessed indirectly through AI.
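To make the idea concrete, here is a minimal sketch of what counting usage by access method could look like. The field names and DOIs are purely illustrative, not COUNTER's actual report schema; "Agent" is the proposed value mentioned above, and "Regular" stands in for ordinary direct access.

```python
# Hypothetical sketch: tagging usage events with an access-method attribute,
# in the spirit of the proposed "Agent" value. Field names ("item",
# "access_method") and DOIs are illustrative, not the COUNTER schema.
from collections import Counter

events = [
    {"item": "doi:10.1234/example.1", "access_method": "Regular"},  # direct human download
    {"item": "doi:10.1234/example.1", "access_method": "Agent"},    # AI-assistant retrieval
    {"item": "doi:10.1234/example.2", "access_method": "Agent"},
]

# Aggregate counts per access method, so AI-mediated use is reported
# alongside, not instead of, direct human use.
by_method = Counter(e["access_method"] for e in events)
print(dict(by_method))  # {'Regular': 1, 'Agent': 2}
```

The point of the attribute is exactly what this toy aggregation shows: AI-facilitated access gets its own bucket in the report rather than vanishing from the totals.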
Beyond the technical adjustments, there’s a broader conversation happening about AI literacy and its role in higher education. Libraries and publishers are experimenting with new approaches, recognizing that the future of research discovery and consumption is inextricably linked with these powerful AI tools. The goal isn't to stop using AI, but to understand its impact and adapt our systems to reflect this new reality, ensuring that valuable research continues to be recognized and supported.
