As we approach the tail end of 2025, it's fascinating to see how far AI benchmarking has come, especially when we cast our minds back just a year or two.
It feels like just yesterday, in October 2024, that the Procyon AI Image Generation Benchmark started supporting NPUs. That was a significant step, bringing dedicated AI processing power into the testing arena. Before that, the focus was often on broader GPU or CPU capabilities. Then, in December 2024, we saw the addition of a DirectStorage test in 3DMark, hinting at the growing importance of fast data access for complex workloads, including AI.
Looking at the timeline, the months that followed were busy. December 2024 also brought the Procyon AI Text Generation Benchmark, finally allowing us to test Large Language Model (LLM) performance directly. This was a game-changer for anyone interested in the burgeoning field of generative AI. By May 2025, NVIDIA DLSS 4 was being integrated into 3DMark, showcasing advancements in AI-powered graphics rendering. And then, a big leap: new Inference Engines became available in Procyon in June 2025, offering more granular control over, and deeper insight into, AI model execution.
The summer and fall of 2025 continued this momentum. Even beyond pure AI, the Speed Way collaboration with PC Building Simulator 2 in July 2025 and the release of 3DMark Solar Bay Extreme in August 2025 point to a broader trend of integrating advanced graphics and simulation technologies, often with AI in a supporting or enhancing role. The launch of the FLUX.1 AI Image Generation Demo by Procyon Labs in November 2025 likewise signals new applications and testing methodologies emerging, and the Procyon AI Benchmarks look set to deliver increasingly comprehensive and actionable performance insights as we head into 2026.
What's particularly striking is the expansion across platforms. The arrival of a new Procyon AI Benchmark for Macs in July 2025, following the general 3DMark for macOS release in June, highlights the cross-platform nature of modern computing and the need for consistent AI performance metrics regardless of operating system. This push for broader accessibility and deeper insight into AI hardware capabilities has been a constant theme.
It’s clear that by late 2025, AI benchmarking isn't just about raw numbers; it's about understanding how AI integrates into our daily digital lives, from generating images and text to enhancing graphics and optimizing system performance. The tools and benchmarks available now offer a much richer picture than ever before, and it’s exciting to think about what the next year will bring.
