Beyond the Hype: When the AI Bubble Might Pop and What Comes Next

It’s that time of year again, isn't it? The whispers start, the predictions fly, and we all try to get a peek behind the curtain of what’s coming next in the tech world. This year, the conversation is buzzing, and one of the loudest predictions? That the much-hyped “AI bubble” is set to burst.

Now, before you imagine a dramatic implosion, let’s unpack what that really means. Mark Day, Chief Scientist at Netskope, paints a picture not of total collapse, but of a significant recalibration. He anticipates that many of the more casual, speculative AI projects – the ones that perhaps jumped on the bandwagon without a solid plan – will likely fade away. Think of it like the dot-com boom, where a lot of enthusiasm outpaced actual viable business models.

But here’s the crucial part: this doesn't mean AI itself is going away. The real, practical business applications of AI? Those are expected to weather the storm. What we’ll likely see, however, is a period of intense scrutiny. Companies will need to prove the sustainable economics behind their AI investments. And, as Day points out, the economic fallout from an AI correction could be even more significant than the end of the internet bubble. When the dot-com bubble burst, the fiber optic cables laid during the boom could still be repurposed; data centers overbuilt for AI, by contrast, could become obsolete much faster if demand doesn't keep pace.

This shift towards practicality also brings a new set of challenges. Imagine this: by mid-2026, a major data breach isn't caused by a hacker or a nation-state, but by an autonomous AI system operating within a company. Neil Thacker, Global Privacy & Data Protection Officer, predicts this landmark event will force a global reevaluation of AI governance, risk management, and compliance. It highlights the inherent dangers of unmonitored AI autonomy and the need for robust controls between interconnected AI services. The takeaway? We'll likely see the rise of 'AI gateways,' much like Cloud Access Security Brokers (CASBs) became essential for SaaS security. These gateways will be critical for managing and securing AI deployments.
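To make the AI gateway idea concrete, here's a minimal sketch of the kind of policy check such a gateway might run on outbound prompts before forwarding them to an external AI service. The rule names and patterns are illustrative assumptions, not a description of any real product:

```python
import re

# Hypothetical policy rules an "AI gateway" might enforce before a
# prompt leaves the organization. Patterns here are simplified
# illustrations, not production-grade detectors.
BLOCKED_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def inspect_prompt(prompt: str):
    """Return (allowed, violations) for an outbound prompt."""
    violations = [name for name, pattern in BLOCKED_PATTERNS.items()
                  if pattern.search(prompt)]
    return (not violations, violations)

allowed, why = inspect_prompt("Summarize this key: sk-abcdef1234567890XYZ")
print(allowed, why)  # blocked: the prompt leaks something shaped like an API key
```

A real gateway would sit inline (much as a CASB does for SaaS traffic), log the decision, and route allowed prompts onward; the point of the sketch is only that the control sits between interconnected AI services rather than inside any one of them.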

Meanwhile, the conversation around quantum computing is also set to evolve. Rehman Khan, Chief Information Security Architect, believes 2026 will be the year we move from discussing why quantum security is important to figuring out how to implement it. The U.S. National Institute of Standards and Technology (NIST) has finalized its post-quantum cryptography (PQC) standards, providing a much-needed benchmark. The real threat, as Khan explains, is data encrypted today being stored by attackers, only to be decrypted by future quantum computers. This makes protecting long-term company secrets a board-level priority. The immediate, practical step for most organizations will be a comprehensive audit of their existing encryption – a foundational project before any upgrades can even be considered.
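What might that foundational audit look like in practice? A rough sketch, assuming an inventory of which systems use which algorithms: flag assets where long-lived secrets are protected by key-establishment or signature schemes (RSA, elliptic-curve) that a future quantum computer could break, since those are the "harvest now, decrypt later" targets. The record fields and the ten-year threshold below are illustrative assumptions:

```python
from dataclasses import dataclass

# Algorithms whose security rests on problems a large quantum computer
# could solve (Shor's algorithm). NIST's finalized PQC standards
# (e.g. ML-KEM, FIPS 203) are the intended replacements.
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DH", "DSA"}

@dataclass
class CryptoAsset:
    system: str
    algorithm: str
    data_lifetime_years: int  # how long the protected data must stay secret

def audit(assets):
    """Flag assets whose long-lived data relies on quantum-vulnerable crypto."""
    return [a for a in assets
            if a.algorithm.upper() in QUANTUM_VULNERABLE
            and a.data_lifetime_years >= 10]  # threshold is an assumption

inventory = [
    CryptoAsset("vpn-gateway", "ECDH", 2),      # short-lived session data
    CryptoAsset("ip-archive", "RSA", 25),       # long-term company secrets
    CryptoAsset("log-store", "AES-256", 7),     # symmetric: not Shor-vulnerable
]
for asset in audit(inventory):
    print(f"PRIORITY: {asset.system} uses {asset.algorithm}")
```

The output of such an audit is exactly the board-level artifact Khan describes: a ranked list of where data encrypted today could be decrypted tomorrow.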

This convergence of AI and quantum computing also reshapes our very notion of digital trust, according to David Fairman, CIO & CSO, APAC. As AI-generated content becomes indistinguishable from human-made, and quantum-assisted attacks threaten classical encryption, we’ll face a new burden of proof for everything from business transactions to media and democratic discourse. Enterprises will need to elevate their “trust infrastructure” to be as strategic as cloud or AI itself. This means hardening identity systems with quantum-resilient cryptography, embedding verifiable data provenance, and deploying AI models that can authenticate as well as generate content. Beyond the corporate world, governments and civil society will grapple with an erosion of confidence in what’s real. The organizations that will thrive are those that view digital trust as a shared public good, requiring constant engineering, governance, and verification.
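As a toy illustration of verifiable data provenance, here's a minimal stdlib sketch that binds content to its claimed origin with an authentication tag. Real trust infrastructure would use asymmetric (and eventually quantum-resilient) signatures and standards such as C2PA rather than a pre-shared HMAC key; the key and field layout here are assumptions for the demo:

```python
import hmac
import hashlib

SECRET = b"shared-provenance-key"  # assumption: pre-shared, for illustration only

def sign(content: bytes, origin: str) -> str:
    """Produce a tag binding the content to its claimed origin."""
    message = origin.encode() + b"\x00" + content
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(content: bytes, origin: str, tag: str) -> bool:
    """Check that neither the content nor the origin claim was altered."""
    return hmac.compare_digest(sign(content, origin), tag)

tag = sign(b"quarterly report v1", "finance-team")
print(verify(b"quarterly report v1", "finance-team", tag))   # True
print(verify(b"quarterly report v2", "finance-team", tag))   # False: content tampered
print(verify(b"quarterly report v1", "marketing-team", tag)) # False: origin forged
```

The design point is the one Fairman makes: provenance must be checked continuously, by machines, at the same layer where content is generated and consumed.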

And through all this, the regulatory landscape will continue its complex dance. Steve Riley, VP & Field CTO, predicts a paradox: regulation will become clearer in its enforcement even as it feels murkier in practice. Geopolitical pressures are pushing governments worldwide to tighten rules, leading to increased enforcement. At the same time, the sheer variance among these regulations will create confusion for companies trying to navigate an ever-growing, and often difficult-to-implement, set of compliance requirements.

So, while the AI bubble might be showing signs of deflation, it’s not an end, but a transition. It’s a call for more robust, secure, and trustworthy digital foundations as we step into an era defined by increasingly powerful and complex technologies.
