Agents Do Not Improvise Well
We keep giving AI agents access to our tools and then acting surprised when they do something unexpected. The problem was never the AI. The problem is we never gave it the rulebook.
For years, workflow automation meant connecting tools through integrations. If this, then that. Trigger here, action there. It worked for simple tasks. It broke under complexity. And it was built for humans who could read error logs and fix broken triggers when things went sideways. AI agents do not work that way. They need context, not just connections.
Context is the missing infrastructure layer
Three of the most influential voices in technology arrived at the same conclusion in early 2026, from completely different directions.
David Heinemeier Hansson announced that Basecamp is going agent-accessible, calling agents “the killer app for AI” and betting that the future is about making your product callable by agents, not building AI features into it. Jack Dorsey laid out his vision for Block as a “mini AGI”, rebuilt around a continuously updated “world model” where every decision, discussion, and plan is machine-readable and available to every person and agent at the edge. Andrej Karpathy went viral describing how he uses LLMs to build personal knowledge bases that compound over time, arguing that “the tedious part of maintaining a knowledge base is not the reading or the thinking, it is the bookkeeping.”
All three are pointing at the same gap in AI infrastructure. Agents need structured context to operate. Products need to be callable. Decisions need to be recorded. Knowledge needs to compound. But none of them are asking the harder question: who governs what the agent does once it has that context?
Context without governance is just a smarter way to make unaccountable decisions faster.
Accessible is not enough. Governable is.
Basecamp made its product agent-accessible. That is necessary but not sufficient. An API lets agents act. It does not tell them what to do or prevent them from doing the wrong thing.
Dorsey is building a company world model. That is the right instinct. But a world model without structured processes is a database of past decisions. It tells agents what happened. It does not govern what happens next.
Karpathy is compiling knowledge bases. That compounds understanding. But a knowledge base is passive. It informs. It does not enforce.
We see the gap play out constantly. A team connects an AI agent to their tools. It starts doing useful work. Then it does something unexpected. Something that would fail an audit. The problem is not the AI. The problem is that the AI had no reliable source of truth about how work is supposed to happen, and no guardrails enforcing that source of truth in real time.
Every agent needs a brain
This is what Cora solves. Every AI agent in your organization gets a brain: a structured, governed, auditable set of processes that tell the agent what to do, in what order, with what approvals, under what constraints.
A knowledge base tells an agent what the company knows. A brain tells the agent how the company works. The difference is the difference between giving someone a policy manual and giving them an operating system.
Cora brains are versioned, governed, and auditable. Every step, every approval, every form field, every conditional rule. When that structure is exposed to AI agents, they do not improvise. They operate inside the process, with full context of the policies they are supposed to enforce, and they generate proof that the work was done correctly. It turns workflow automation into AI infrastructure.
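To make that concrete, here is a minimal sketch of the kind of structure a brain implies: an ordered, versioned sequence of steps with explicit approval gates and routing rules. Every name below is illustrative, not Cora's actual API.

```python
# Illustrative sketch only -- hypothetical names, not Cora's actual API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    name: str
    requires_approval: bool = False  # a human gate the agent cannot skip
    condition: str | None = None     # routing rule evaluated on real data

@dataclass(frozen=True)
class Brain:
    name: str
    version: str             # every change to the process is versioned
    steps: tuple[Step, ...]  # an ordered, deterministic sequence

onboarding = Brain(
    name="employee-onboarding",
    version="3.2.0",
    steps=(
        Step("pull-hris-record"),
        Step("fill-equipment-form"),
        Step("manager-approval", requires_approval=True),
        Step("provision-it-accounts"),
    ),
)
```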
The access control layer for AI
Here is what this looks like in practice. An AI agent runs an employee onboarding workflow. It pulls the new hire’s information from the HRIS, fills the form fields, triggers the IT provisioning automation, and advances through each step. But when it reaches the manager approval gate, it stops. It notifies the manager. It waits. No amount of agent capability can bypass that gate, because the workflow is deterministic. The approval step is not a suggestion. It is a constraint.
That is what compliance-ready AI actually looks like. The agent has full context of the process. It can fill fields, trigger automations, query previous workflow runs, and advance tasks. But it cannot skip an approval step. It cannot bypass a compliance gate. It cannot take an action that the workflow does not permit.
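Under that illustrative sketch, the enforcement logic is almost trivially small, which is the point: the gate is structural, not behavioral. The notification helper below is a stub standing in for whatever sign-off mechanism the platform actually uses.

```python
# Continues the hypothetical sketch above; notify_manager is a stub.

class ApprovalPending(Exception):
    """The run is blocked at a human gate until someone signs off."""

def notify_manager(step: Step) -> None:
    print(f"Approval requested: {step.name}")  # stand-in for a real notification

def advance(brain: Brain, state: dict) -> None:
    """Execute the next step, or block if it requires human approval."""
    step = brain.steps[state["step_index"]]
    if step.requires_approval and step.name not in state["approvals"]:
        notify_manager(step)
        raise ApprovalPending(step.name)  # no agent code path skips this
    print(f"Executing: {step.name}")      # stand-in for real step logic
    state["step_index"] += 1
```

An agent driving this run can call advance as often as it likes. Until a human adds "manager-approval" to the approvals set, the run does not move.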
A Cora brain is a gated, deterministic sequence. Steps happen in order. Approvals block progress until a human signs off. Conditional logic routes work based on real data, not agent inference. The agent operates within the brain, but the brain decides what the agent is allowed to do next. Every action is captured in a complete audit trail, from the agent’s first step to the human’s final sign-off.
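One way to picture that inversion, still under the same hypothetical sketch: the agent asks the brain what it is allowed to do next, and every action, agent or human, lands in an append-only log tied to the brain version that governed it.

```python
# Continues the hypothetical sketch; illustrative, not Cora's API.
import json
import time

def allowed_actions(brain: Brain, state: dict) -> list[str]:
    """The brain, not the agent, decides the next permitted action."""
    step = brain.steps[state["step_index"]]
    if step.requires_approval and step.name not in state["approvals"]:
        return []                    # blocked: only a human sign-off unblocks it
    return [f"execute:{step.name}"]  # exactly one permitted action

def record(log: list[str], actor: str, action: str, brain: Brain) -> None:
    """Append-only audit entry tying each action to the governing version."""
    log.append(json.dumps({
        "ts": time.time(),
        "actor": actor,   # e.g. "agent:onboarder" or "human:manager"
        "action": action,
        "brain_version": brain.version,
    }))
```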
The companies that win at AI will build compliance first
AI-ready operations require structured processes that are callable, governable, and auditable. Callable means agent-accessible. Governable means access-controlled with human-in-the-loop gates that agents cannot bypass. Auditable means every action logged, every decision traceable, every compliance requirement provable.
The companies that win at AI over the next few years will not be the ones that moved fastest. They will be the ones that built compliance into how their AI operates from the start, before the regulators arrived, before the audit surfaced a gap, before the agent did something no one can explain.
Your workflows are already the rulebook. Now they need a brain.