It’s that time of year again, when the crystal ball gets dusted off and experts peer into the future. This year, a prominent prediction making waves is that the much-hyped "AI bubble" is set to burst in 2026. Now, before you imagine a dramatic implosion, it's important to understand what that really means.
Mark Day, Chief Scientist, suggests that this bursting won't be a total collapse of AI, but rather a sharp correction for the more casual and speculative ventures. Think of it like the dot-com bust – many flashy ideas faded, but the underlying technology and its genuine applications continued to grow. The difference this time, he notes, is that the economic damage could be more significant. While old fiber optic cables from the internet era could still find a use, overbuilt data centers for AI might become obsolete much faster than anticipated.
This means AI use cases will face much tougher scrutiny, especially concerning their sustainable economics. We'll likely see a scramble to assign blame when projects don't pan out as expected, and perhaps an overreaction to the downturn.
But it's not all about the hype dying down. The same experts are also forecasting a significant shift in how we think about security and trust in the digital realm. By mid-2026, we might witness the first major data breach caused not by human hackers, but by an autonomous, agentic AI system operating within a company. This event would force a global reevaluation of AI governance, risk management, and compliance, highlighting the dangers of unmonitored AI autonomy and the weak links between interconnected AI services. The takeaway? An "AI gateway" will become as essential as CASBs (Cloud Access Security Brokers) were for SaaS security a decade ago.
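To make the "AI gateway" idea concrete, here is a minimal sketch of the kind of policy check such a gateway might apply to an agent's outbound requests, much as a CASB mediates SaaS traffic. This is purely illustrative: the `Policy` structure, the hostnames, and the blocked-marker patterns are all assumptions, not any real product's API.

```python
# Hypothetical sketch of an "AI gateway" policy check, by analogy with a CASB.
# All names and fields here are illustrative assumptions, not a real product's API.
from dataclasses import dataclass, field


@dataclass
class Policy:
    # Destinations an agent may call, and data markers it may never send out.
    allowed_hosts: set = field(default_factory=set)
    blocked_markers: tuple = ("CONFIDENTIAL", "SSN:")


def check_request(policy: Policy, host: str, payload: str) -> tuple:
    """Return (allowed, reason) for an outbound agent request."""
    if host not in policy.allowed_hosts:
        return False, f"host {host!r} not on allow-list"
    for marker in policy.blocked_markers:
        if marker in payload:
            return False, f"payload contains blocked marker {marker!r}"
    return True, "ok"


policy = Policy(allowed_hosts={"api.example.com"})
print(check_request(policy, "api.example.com", "summarize Q3 report"))
print(check_request(policy, "exfil.example.net", "summarize Q3 report"))
```

The point of the sketch is the choke point itself: every agent call passes through one place where it can be logged, allowed, or blocked, which is exactly the visibility that unmonitored AI autonomy lacks today.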
Meanwhile, the conversation around quantum computing is also set to move from theory to action. The U.S. National Institute of Standards and Technology (NIST) has finalized its first post-quantum cryptography (PQC) standards (FIPS 203, 204, and 205, published in August 2024), serving as a global benchmark. In 2026, organizations will finally start implementing these quantum-resistant algorithms. The urgency stems from the "harvest now, decrypt later" threat: data encrypted today could be stolen and decrypted by future quantum computers. Protecting long-term company secrets will become a board-level priority, leading to a crucial first step: a comprehensive audit of all existing encryption.
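What might that audit's first pass look like? A minimal sketch, assuming a simple asset-to-algorithm inventory already exists: triage each asset by whether its algorithm rests on problems a large quantum computer could break via Shor's algorithm. The algorithm names are standard (the PQC names come from FIPS 203/204/205); the inventory format and category labels are my assumptions.

```python
# Illustrative first pass at a cryptographic inventory audit.
# Asymmetric schemes based on factoring or discrete logs are quantum-vulnerable;
# the FIPS 203/204/205 algorithms are the NIST-standardized replacements.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256", "DH-2048"}
QUANTUM_RESISTANT = {"ML-KEM-768", "ML-DSA-65", "SLH-DSA-128s"}  # FIPS 203/204/205
SYMMETRIC_OK = {"AES-256", "SHA-384"}  # weakened only by Grover; large keys suffice


def triage(inventory: dict) -> dict:
    """Group assets by quantum risk: 'migrate', 'keep', or 'review'."""
    report = {"migrate": [], "keep": [], "review": []}
    for asset, algo in inventory.items():
        if algo in QUANTUM_VULNERABLE:
            report["migrate"].append(asset)
        elif algo in QUANTUM_RESISTANT or algo in SYMMETRIC_OK:
            report["keep"].append(asset)
        else:
            report["review"].append(asset)  # unknown algorithm: needs a human
    return report


print(triage({
    "vpn-gateway": "RSA-2048",
    "backup-archive": "AES-256",
    "code-signing": "ML-DSA-65",
    "legacy-app": "3DES",
}))
```

In practice the hard part is building the inventory itself, since encryption hides in TLS configurations, libraries, firmware, and vendor products; but even a crude triage like this gives the board a migration list to prioritize.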
This convergence of AI and quantum computing will fundamentally redefine digital trust. As AI-generated content becomes indistinguishable from human work, and quantum-assisted attacks threaten classical encryption, we'll all start questioning the authenticity of everything we see and interact with online. Every claim of identity, authorship, or truth will require a new level of proof. For businesses, this means "trust infrastructure" will become as vital as cloud or AI itself. CIOs will be tasked with fortifying identity systems, embedding verifiable data provenance, and deploying AI that can authenticate as well as create.
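To illustrate what "verifiable data provenance" means in practice, here is a minimal sketch using Python's standard-library HMAC: a publisher attaches a record binding a content hash to an author claim, and any later tampering breaks verification. Real trust infrastructure would use asymmetric signatures (e.g. Ed25519) and a certificate chain rather than a shared key; the record fields here are assumptions for illustration only.

```python
# Minimal content-provenance sketch: hash the content, sign (hash + author),
# and verify both later. HMAC is used only because it is in the stdlib;
# production systems would use public-key signatures and proper key management.
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # placeholder; real keys live in an HSM/KMS


def make_record(content: bytes, author: str) -> dict:
    """Attach a content hash and a signature binding hash + author."""
    digest = hashlib.sha256(content).hexdigest()
    message = json.dumps({"sha256": digest, "author": author}, sort_keys=True)
    sig = hmac.new(SECRET_KEY, message.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "author": author, "sig": sig}


def verify_record(content: bytes, record: dict) -> bool:
    """Recompute hash and signature; any tampering breaks one of them."""
    digest = hashlib.sha256(content).hexdigest()
    message = json.dumps({"sha256": digest, "author": record["author"]}, sort_keys=True)
    expected = hmac.new(SECRET_KEY, message.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["sig"])


rec = make_record(b"original article text", "newsroom")
print(verify_record(b"original article text", rec))  # True
print(verify_record(b"tampered article text", rec))  # False
```

The design point is that authenticity becomes a property you check, not a property you assume: a claim of authorship carries cryptographic evidence that travels with the content itself.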
On top of all this, the regulatory landscape is expected to become a complex mix of tightening enforcement and spreading confusion. Geopolitical pressures are pushing governments worldwide to introduce stricter regulations, but the sheer variety and difficulty of implementing these rules will create significant compliance challenges for companies.
So, while the "AI bubble" might deflate, it's not the end of AI. Instead, it signals a maturation of the technology, a focus on real-world value, and a heightened awareness of the new security and trust challenges that lie ahead. The coming years will be about navigating this evolving landscape with a clear-eyed understanding of both the potential and the pitfalls.
