We are at an inflection point in the AI cycle.
Between 2023 and 2026, the market has been dominated by a race at the model and infrastructure layers: scaling foundation models, securing compute, and pushing raw capability forward at unprecedented speed. The priority has been clear: make AI real, usable, and fast. Large language models, multimodal systems, and hyperscale infrastructure investments defined this phase, with frontier releases such as GPT-5, Claude, and Gemini, alongside massive GPU buildouts, leading the way. As we move into the next phase, the question is shifting decisively: the frontier is no longer how powerful the model is, but how useful, reliable, and economically embedded AI can become. The centre of gravity is moving toward the application and system layer, where AI is integrated into workflows, products, and decision-making at scale.
Examples of the first phase include frontier model releases and cloud AI platforms; examples of the next include AI agents, copilots, autonomous workflows, and system-level products that can reason, act, and learn continuously.
To understand this transition, it is useful to revisit the four layers of the AI stack: Data, Models, Infrastructure, and Applications.
Each layer is becoming more strategically important in a different way. Data is re-emerging as a core moat, with emphasis on clean, proprietary, and sovereign datasets that enable differentiation and regulatory control. Models are evolving away from a pure “bigger is better” mindset toward small language models (SLMs), task-specific systems, and AI that can run efficiently on-device. Infrastructure remains critical, as hyperscalers continue to support explosive demand for training and inference while also optimising cost, latency, and energy efficiency. The application layer is where value ultimately crystallises: it is where AI products are built that can reason through neural networks, act through tools and software, and learn over time via memory. This is where defensible businesses are formed.
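To make the reason-act-learn pattern concrete, here is a minimal sketch of such a loop. Every name in it (the Agent class, its toy tools, the stubbed reasoning step) is a hypothetical stand-in rather than any real product's API; in a real system the reasoning step would be a model call.

```python
# Minimal sketch of the reason / act / learn loop described above.
# All names here are illustrative stand-ins, not a real product API.

from dataclasses import dataclass, field

@dataclass
class Agent:
    memory: list[str] = field(default_factory=list)  # "learn": persisted context

    def reason(self, task: str) -> str:
        # Stand-in for the model call: a real system would send the task
        # plus memory to an LLM and parse the chosen action from its reply.
        return "search" if "find" in task else "summarise"

    def act(self, action: str, task: str) -> str:
        # "Act": dispatch to tools and software. These tools are toys.
        tools = {
            "search": lambda t: f"results for: {t}",
            "summarise": lambda t: f"summary of: {t}",
        }
        return tools[action](task)

    def run(self, task: str) -> str:
        action = self.reason(task)                   # reason: decide what to do
        result = self.act(action, task)              # act: execute via a tool
        self.memory.append(f"{task} -> {result}")    # learn: store the outcome
        return result

agent = Agent()
print(agent.run("find recent AI infrastructure reports"))
```

The point of the sketch is structural: the model supplies the reasoning, tools supply the action, and memory is what lets the system improve across runs rather than starting cold each time.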
This moment fits into a broader historical progression that has defined every major AI wave.
In the past, the core question was: Can we build it? That era was driven by research breakthroughs and proof-of-concept systems. The present phase asks: Can we scale and deploy it? The focus is on performance, reliability, speed, and distribution. The emerging future reframes the challenge again: Is it dependable, integrated, and economically useful at scale? This final question is the hardest and the most valuable. It is where AI moves from impressive technology to indispensable infrastructure, embedded across enterprises, consumer products, and entire industries.
Anthropic provides a clear example of how this sequencing is unfolding in real time.
The company began at the model layer, investing deeply in the development of Claude as a frontier language model focused on safety and performance. From there, it moved into deployment and usability with products like Claude Code, enabling developers to integrate the model into real workflows. Most recently, Anthropic has expanded into system-level products such as Claude for collaborative work (“Co-Work”), signalling a shift toward AI that operates as an ongoing participant in knowledge work. Each step builds on the last. Model capability enables tools, tools enable workflows, and workflows evolve into systems. This iterative journey reflects the broader market transition from isolated models to integrated AI systems.
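As one illustration of the "models become tools" step, below is a minimal sketch of embedding Claude in an ordinary workflow via Anthropic's Python SDK (the Messages API). The model name, prompt, and helper function are illustrative assumptions and should be checked against current documentation.

```python
# Minimal sketch: embedding a model call inside a single workflow step.
# Requires the `anthropic` package and an ANTHROPIC_API_KEY in the environment.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarise_ticket(ticket_text: str) -> str:
    """Hypothetical workflow step: condense a raw support ticket."""
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name; may be outdated
        max_tokens=256,
        messages=[{
            "role": "user",
            "content": f"Summarise this support ticket in two sentences:\n{ticket_text}",
        }],
    )
    return message.content[0].text

if __name__ == "__main__":
    print(summarise_ticket("Customer reports login failures after the 2.3 release."))
```

A single call like this is the tool stage; chaining many such steps, with tool use and persistent memory, is what turns it into the system-level products the paragraph describes.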
Takeaway
We are entering the AI Systems Era: a phase defined not by raw intelligence alone, but by dependable, integrated, and economically meaningful AI at scale. As models commoditise, value is concentrating in data, system design, and applications that embed AI deeply into real-world processes. This transition creates a new investment landscape, one where the winners will be those who understand not just what AI can do, but how it becomes indispensable.