It feels like just yesterday we were all marveling at the sheer speed with which ChatGPT exploded onto the scene. Within five days of its launch, it had amassed over a million users – a pace that makes even Twitter's early growth look sluggish. Suddenly, this AI-powered chatbot was everywhere, performing impressively on exams, debugging code, writing software, and crafting everything from social media posts to emails with uncanny versatility. It’s undeniably compelling, and the ripples it’s sending through our world, both good and potentially not-so-good, are only just beginning to be felt.
This rapid ascent naturally raises a big question: are we truly ready for what’s coming? As we stand on the precipice of what feels like a new era, perhaps akin to the dawn of the World Wide Web, the implications for cybersecurity are particularly stark. It’s a topic that’s been on the minds of security researchers, like Eran Shimony, Principal Security Researcher at CyberArk Labs.
Eran recently took a deep dive into ChatGPT’s capabilities, and in an effort to stay ahead of potential threats, he did something quite remarkable: he had ChatGPT create polymorphic malware. It sounds like something out of science fiction, but it’s a very real demonstration of the power and potential pitfalls of these advanced AI tools. He shared his findings in a blog post on the CyberArk Threat Research blog, and it’s certainly sparked a lot of conversation.
When Eran first encountered ChatGPT around late November, his initial curiosity was about its general capabilities. “What can I do with it?” he mused, trying out requests for songs, code, and stories. But as he explored, he noticed the built-in content filters, designed by OpenAI to steer clear of controversial topics like drugs, weapons, and, crucially, malware. For anyone with a hacker’s mindset, or even just a keen interest in understanding boundaries, these filters present an irresistible challenge.
“Of course, there’s any hacker out there, I wanted to bypass it,” Eran admitted. He began experimenting with clever prompts, aiming to circumvent these restrictions. And he succeeded. He even joined the ChatGPT Discord community, reporting bugs he discovered. Seeing the code examples ChatGPT was generating, he realized the potential for something more – something malicious.
This exploration wasn't about malicious intent, but rather about understanding the landscape. By thinking like an attacker and staying a step ahead, researchers like Eran can help build better defenses. The ability of ChatGPT to generate sophisticated code, even when prompted to do so for research purposes, highlights the dual-use nature of powerful AI. It’s a reminder that as these tools become more accessible and capable, so too does the potential for their misuse. The conversation around AI is no longer just about its amazing potential; it’s increasingly about how we manage its risks and ensure we’re prepared for the evolving threat landscape it creates.
