AI works in code. Everywhere else, it’s lost.
Why AI adoption works in coding but is limited elsewhere, why RAG isn’t enough, and why context can’t be resolved at runtime.
Written by Lorenz Hieber, Co-founder & CEO
Feb 25, 2026 · Thought pieces
The current state of artificial intelligence presents a weird paradox. On one hand, we are witnessing an exponential jump in model capabilities; “holy fuck” moments have become a weekly occurrence. On the other hand, the actual impact of AI inside most companies remains stubbornly marginal. While the intelligence and potential are clearly there, the ROI usually isn't. We’ve landed in a strange transitional space where the world’s most powerful reasoning engines are mostly used to summarize emails or fix the tone of a Slack message.
The coding exception
To understand why this impact is so uneven, we have to look at the one area where the promise is actually being kept: software engineering. Developers report real productivity gains. Features are scaffolded in minutes, and refactoring is trivial.
The difference isn't about the intelligence of the model; it’s about the foundations. Code itself is a structured foundation, as it's self-documenting by design. It lives in repositories with strict logic and clear version histories. When an LLM reads a codebase, it isn’t just looking at text; it’s reading the operating reality of a system. In the world of code, the “map” of the system is visible, allowing the agent to navigate with full orientation.
Outside of engineering, that map disappears. This is where the pattern reveals itself: AI doesn’t struggle with reasoning; it struggles with orientation. It works best where knowledge is explicit and centralized. Where context is fragmented or implicit, agents get lost.
And when agents get lost, humans step in to guide them.
The FDE hype is a symptom
This is where the rise of the “Forward Deployed Engineer” comes in. Outside of engineering, AI rollouts drag because someone has to manually reconstruct the “map” every time. FDEs and consultants act as human translators between the model and the messy reality of the business. They stitch context together, explain edge cases, and define rules that were never formally written down. But this boom in AI services isn’t a sign of technological maturity; it’s a sign that the underlying infrastructure is missing.
That gap is context fragmentation.
The structured state of a company (the web of customers, products, processes, decisions, and exceptions that defines how work gets done) is scattered across a dozen silos. The sales process lives in HubSpot, the pricing exception sits in a Slack thread, and the "why" behind a major decision exists only in someone's memory. Instead of building a centralized foundation, we pay an "FDE tax": hiring consultants to manually reconstruct this context for every single AI rollout, reinventing the wheel each time a new agent is deployed.
Why the existing stack isn't enough
For the past fifteen years, the Modern Data Stack (MDS) centralized an abstraction of context in data warehouses like Snowflake or Databricks. But that infrastructure was built for a different era. It was optimized for analytics and historical reporting, essentially designed for humans to look backward at tables (or fancy charts) to decide what to do next.
Now, LLMs need to make decisions. For that, they need the implicit understanding of the business that is currently locked away in various tools and in people's heads: what the business is doing right now and why, which processes are active this second, and how a similar task was resolved five minutes ago.
Many believe RAG (Retrieval-Augmented Generation) or the Model Context Protocol (MCP) are silver bullets. They aren't. RAG solves for lookup within fragmented systems, but it doesn't help an AI understand the overarching, living state of an organization.
Relying on “runtime retrieval” – asking the agent to fetch data from different tools on the fly – is also not the solution. Every time an agent fetches data at runtime, it has to decide from scratch how to interpret reality. Should it treat “John” and “John Doe” as the same person? Is a customer “closed” if the CRM says so, but the signed contract hasn't been uploaded to the drive yet? Deciding this at runtime leads to inconsistent logic, high latency, and massive costs. Therefore, core entity resolution and context normalization cannot be left to the moment of execution; they must exist as a pre-existing, coherent state.
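To make that contrast concrete, here is a minimal sketch (all record shapes, field names, and rules are hypothetical, not a real schema) of what pre-computed resolution looks like: records from two tools are merged into one canonical entity ahead of time, with the normalization rule decided once, so an agent at runtime reads a single coherent state instead of re-deciding on every call how to reconcile sources.

```python
from dataclasses import dataclass, field

# Hypothetical raw records, as two different tools might expose them.
crm_record = {"name": "John Doe", "email": "john@acme.com", "stage": "closed"}
drive_record = {"name": "John", "email": "john@acme.com", "contract_signed": False}

@dataclass
class Customer:
    """One canonical entity in the context layer."""
    email: str                      # resolution key: same email -> same person
    name: str = ""
    sources: dict = field(default_factory=dict)

    @property
    def status(self) -> str:
        # Normalization rule fixed ahead of time, not per agent call:
        # a deal counts as "closed" only if the CRM says so AND the
        # signed contract actually exists in the drive.
        crm = self.sources.get("crm", {})
        drive = self.sources.get("drive", {})
        if crm.get("stage") == "closed" and drive.get("contract_signed"):
            return "closed"
        return "pending"

def resolve(records: dict) -> Customer:
    """Merge per-tool records that share an email into one entity."""
    email = next(iter(records.values()))["email"]
    # Prefer the most complete name variant ("John Doe" over "John").
    name = max((r.get("name", "") for r in records.values()), key=len)
    return Customer(email=email, name=name, sources=records)

customer = resolve({"crm": crm_record, "drive": drive_record})
print(customer.name, customer.status)  # -> John Doe pending
```

The point of the sketch is where the logic lives: the merge key and the "closed" rule are part of the pre-existing state, so every agent that reads `customer.status` gets the same answer at zero marginal cost.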
The anatomy of the context layer
What organizations actually need is a “context layer”, as already outlined by Jaya Gupta (→ link) and Andy Triedman (→ link) in their recent blog posts. This layer isn't just another database; it’s a continuously synchronized, relational model of the business that serves as the first real company brain. It connects the “how” (the process) with the “what” (the data) and the “why” (the previous decisions).
Think about it: when an expert resolves a complex customer issue or approves a budget exception, the context layer captures more than just the result. It captures the inputs, the reasoning, and the specific rules they followed. It turns tribal knowledge into a structured, machine-readable map. Instead of a support rep digging through a static 100-page manual, the context layer gives the AI agent the logic from the most recent relevant decision.
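A decision record like the one described above can be sketched as a small data structure; the field names here are illustrative assumptions, not an actual product schema. The key property is that the inputs, the rule, and the reasoning are captured alongside the outcome, so the record is retrievable context for the next similar task rather than just an audit entry.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """A captured expert decision: not just the result, but how it was reached.
    Field names are hypothetical, chosen for illustration."""
    task: str            # the "what": the situation being resolved
    inputs: dict         # the facts the expert actually looked at
    rules_applied: list  # the "how": unwritten rules made explicit
    reasoning: str       # the "why": rationale behind the call
    outcome: str

record = DecisionRecord(
    task="budget_exception_approval",
    inputs={"requested": 12_000, "annual_budget_left": 9_000, "tier": "strategic"},
    rules_applied=["strategic accounts may exceed budget by up to 50%"],
    reasoning="Strategic customer; overrun is within the informal 50% tolerance.",
    outcome="approved",
)

# Serialized, the tribal knowledge becomes machine-readable context
# that an agent can retrieve for the next similar exception.
print(json.dumps(asdict(record), indent=2))
```

Stored this way, the support rep's 100-page manual collapses into a queryable history: an agent facing a new budget exception can pull the most recent matching record and apply the same explicit rule.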
Building this is a massive technical challenge. It requires the ability to turn messy, distributed data into a clear structure, keep it updated in real-time, and ensure access is strictly controlled. But once it's there, this layer becomes the space where humans validate the “unwritten rules” so agents can execute them with precision.
The strategic shift: Breaking the vertical silo
The reality we have to face is that intelligence has already become a commodity. When everyone has access to the same frontier models, the model itself is no longer a competitive advantage; it is a rented utility. Differentiation moves to the foundation: the quality and architecture of context. This is what determines whether intelligence can actually do work, and it makes the question of where that context lives the most important strategic decision for any company today.
That's where our views at Qontext differ from those of the industry. Many suggest that the future lies in AI-native, verticalized systems of record like an AI CRM. But building a better agentic silo is still building a silo. If we’ve learned anything from the last decade of SaaS, it’s that fragmentation is the enemy of execution. Why should the “how” and “why” of a customer escalation be trapped in a support tool, while the subsequent churn risk stays invisible to the sales AI?
True agency requires breaking the silos entirely. The goal isn't to build a new vertical system; it's to build a horizontal, company-wide state. In this architecture, vertical AI tools still exist. They will always be necessary for their specific interfaces and domain-specific logic, but they will probably not win based on a “data moat” anymore. Instead, they must sync with and contribute back to a shared, horizontal context layer.
This vision is ambitious, perhaps even idealistic, but it is driven by a simple conviction: AI only works when there is a unified, company-wide state, and ROI only compounds when that state is reusable across every department. When context becomes infrastructure, several things change. AI stops guessing and starts operating with clarity. Automation rates increase because agents no longer need to reconstruct reality from scratch. Governance becomes manageable because every decision traces back to a shared state. Switching models or tools becomes low-risk because the intelligence layer is replaceable while the company’s operating logic remains intact. And onboarding a new AI tool no longer requires months of setup: you plug it into an existing context base.
By decoupling “state” from “intelligence,” context becomes durable. Models will change. Tools will change. Vendors will change. But a company’s identity shouldn’t have to be reconstructed every time they do.
While model providers race for the smartest model, companies are racing for real adoption. The winners won’t be those with the most powerful model, but those with the most coherent, reusable company state.
In the end, context is all you need.
