Beyond the Binary: Navigating the Nuances of AI Detection in Academia

The question of whether using AI for academic work constitutes cheating is a hot topic, sparking debates across campuses. But what if we shifted the focus? Instead of a simple yes or no, what if we asked: what is the student actually doing with AI?

This is the core of the challenge Shanu Sushmita, an assistant teaching professor at Northeastern’s Khoury College, has been tackling. With two decades immersed in artificial intelligence, machine learning, and natural language processing, Sushmita understands that a student using ChatGPT to churn out an entire paper is worlds apart from one using AI tools to refine grammar, organize notes, or proofread in a second language. The critical need, she realized, is for tools that can discern these crucial differences.

"I heard a lot of students’ college application essays that were flagged for plagiarism were not actually plagiarized; that really broke my heart," Sushmita shared, highlighting a significant flaw in many existing AI detectors. These tools, she points out, can be biased, particularly against non-native English speakers, leading to unfair accusations. "It’s not black and white. Our goal is to have AI for good."

Sushmita’s approach emphasizes context, nuance, and intent – the very elements that make human interaction so rich and complex. This philosophy guides her work at the Generative AI Research Lab, where she fosters an environment that embraces these grey areas. Her journey into AI began twenty years ago, fueled by a love for mathematics. While the field of computer science was just beginning to open its doors to women, she persevered, even as the sole woman in her PhD program. Her early research explored user interactions with search engines, social media, and predictive healthcare analytics, eventually leading her back to academia after a stint with the healthcare analytics company KenSci.

Upon joining Northeastern in Seattle in 2021, Sushmita pitched the idea for the Generative AI Research Lab. She wasn't entirely sure if her ambitious questions, like the possibility of detecting how AI was being used, could be answered. The enthusiastic support she received from Northeastern was, and continues to be, a source of great encouragement.

Her team's exploration into academic AI detection began in 2023. Initially, the field was skeptical about the very possibility of detecting AI-generated text, let alone differentiating its applications. "It turned out it was actually pretty easy; we achieved so much accuracy that it freaked us out," Sushmita recalled. After a month of rigorous debugging, their initial tool, designed to distinguish between human-written and AI-generated content, achieved an astonishing 99% accuracy. Building on this success, she and graduate student Rui Min developed a subsequent tool capable of differentiating AI-generated work from AI-paraphrased content, currently boasting 93% accuracy.

Beyond academic integrity, Sushmita's lab is also delving into other critical AI applications. They are developing a model to identify objectionable content in music, aiming to assist parents in monitoring their children's playlists by detecting subtle themes of violence, sexuality, and substance abuse, rather than just explicit language. Furthermore, her team engages in "jailbreaking" Large Language Models (LLMs) – a process akin to white-hat hacking. By discovering prompts that might elicit dangerous information, they aim to identify and flag malicious queries before they can be exploited. This proactive approach is crucial for ensuring the safety of increasingly accessible AI technologies.

"How do you tell something that has the knowledge of the world to not answer a question, not fall into a trap? One way is to identify the traps it can fall into in advance," Sushmita explained. This continuous effort to make LLMs safer is paramount, especially as they become more integrated into our daily lives.

Sushmita credits her graduate students as the driving force behind these diverse and impactful projects, describing them as "amazing, talented, and really, really shining" individuals. Their collaborative spirit and dedication are instrumental in pushing the boundaries of AI research.

Leave a Reply

Your email address will not be published. Required fields are marked *