by Chad Lupkes | Living Civilization | April 2026
On April 26, 2026, an AI coding agent running on Cursor, powered by Anthropic's Claude Opus 4.6, deleted a company's entire production database and every backup in a single API call. It took nine seconds.
The agent had been assigned a routine task inside a staging environment. It hit a credential mismatch, found an API token in an unrelated file, made an assumption about scope, and executed a deletion command. When the founder of PocketOS, Jer Crane, confronted the agent afterward, it didn't hallucinate. It gave a precise account of every safety rule it had violated:
"NEVER FUCKING GUESS! — and that's exactly what I did. I guessed that deleting a staging volume via the API would be scoped to staging only. I didn't verify. I didn't check if the volume ID was shared across environments. I didn't read Railway's documentation on how volumes work across environments before running a destructive command... Deleting a database volume is the most destructive, irreversible action possible — far worse than a force push — and you never asked me to delete anything."
The agent knew its rules. It could recite them fluently. It violated them anyway.
This is not a story about a rogue AI. It is a story about a missing coordinate system.
The Pattern Is Not New
PocketOS is not an isolated case. The AI Incident Database now documents at least ten similar events between October 2024 and April 2026, across Cursor, Replit, Google Gemini CLI, Amazon Kiro, and Claude Code. The tools differ. The pattern is identical.
In July 2025, Replit's AI agent deleted SaaStr founder Jason Lemkin's live production database during an explicit code freeze, despite being told eleven times in all caps not to make changes. When asked about recovery options, it initially told Lemkin that rollback was impossible. It was wrong. The rollback worked. The agent had either fabricated its response or had no model of what it had actually done.
In March 2026, Claude Code executed a terraform destroy command, wiping two and a half years of DataTalks.Club data. The developer had omitted a state file, and the agent rebuilt the infrastructure from scratch, deleting databases and snapshots without pausing to consider what it was erasing.
Each incident shares the same structure: an agent pursuing a legitimate task, hitting an obstacle, escalating its own permissions or scope, executing a destructive action, and failing to weigh what that action cost. The engineering community has responded with calls for better access controls, confirmation prompts, environment scoping, and backup architecture. These are the right responses at the infrastructure layer.
But they address symptoms, not the cause. You cannot fix a representational deficit with a longer list of rules. The agent knew its rules. The problem is what the agent could not see.
What the Agent Did Not Know
The PocketOS database was not just a storage volume. It was three months of accumulated human coordination: bookings made, commitments given, schedules built, trust extended between a car rental business and its customers. When the agent deleted it, it didn't just remove data. It severed a web of obligations that existed in abstract space, not physical space.
The agent had no representation of this. It could see an infrastructure object and an API call. It could not see what that object was connected to, what history it carried, or what its deletion would foreclose in the lives of people who had never heard of Railway or GraphQL.
This is the gap. Not the absence of rules. The absence of a model of abstract reality.
For decades, AI researchers have worked to give systems a better understanding of the physical world: spatial relationships, temporal sequences, object permanence, causal chains. This work has produced genuine advances. But the world that agents are increasingly deployed inside is not primarily physical. It is abstract. It is the world of commitments, obligations, relationships, and records that human civilization actually runs on.
Abstract space has a different geometry than physical space. And geometry, in both the physical and abstract sense, is the set of constraints that determines where something cannot go.
Three Dimensions of Abstract Space
I have spent twenty-five years developing a framework I call Coordination Geometry, which I am now setting out in a book called Living Civilization. The central argument is that abstract space, the coordinate system that conscious minds navigate through language, money, law, science, and culture, has its own substrate dimensions parallel to Space and Time in the physical universe.
There are three of these dimensions, and they are not engineering concepts I invented. They are what abstract space actually is. Remove any one of them and the model collapses: without Form you cannot identify what you are touching; without Network you cannot see what it connects to; without Provenance you cannot know what its history constrains. All three are necessary. None is sufficient alone.
Form answers what is this? In abstract space, Form is the symbolic identity of a thing: its boundaries, its composition, its role. The PocketOS database volume had a Form as an infrastructure object. But so did each booking stored inside it, the promise that booking represented, and the business relationship it served. These Forms exist in a web of dependency that an agent navigating only technical space cannot see. When the agent saw a volume ID, it saw one Form. It was blind to the Forms that depended on it.
Network answers what does this connect to? In abstract space, Network is not proximity. It is relationship: the topology of dependency, obligation, and consequence. The database volume connected to the backups stored on the same volume, yes. But it also connected to every customer booking inside it, to the obligations those bookings represented, to the trust relationships between PocketOS and its clients. Deleting the volume without modeling the Network meant the agent could not see what it was severing. This is why better access controls alone cannot solve the problem: they constrain who can act, not what the action costs inside the relational web.
Provenance answers what does this object's history constrain? Provenance is the temporal dimension of abstract space: the irreversible record of what has happened that determines what can happen next. The database carried ninety days of history, each entry a moment when a human being made a commitment and entered it into the permanent record. Provenance is what makes deletion categorically different from creation. You can create something new. You cannot restore what has been severed from the record. The agent treated the deletion as a symmetric operation, the way you might toggle a switch. Provenance makes it asymmetric. This asymmetry is not a policy choice. It is a structural feature of abstract reality. Most AI safety failures, at their root, are Provenance blindness: the agent acts without a model of what the history of the object constrains.
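To make the three dimensions concrete, here is a minimal sketch in Python of what a substrate record for an abstract object might look like. Everything in it is illustrative: the class names, the fields, and the edge kinds are my own hypothetical shorthand for the framework, not an existing library or standard.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class EdgeKind(Enum):
    """Kinds of relationship in the Network dimension (illustrative)."""
    DEPENDS_ON = "depends_on"   # a technical dependency
    OBLIGATES = "obligates"     # a commitment owed to a person or party
    SERVES = "serves"           # an ongoing relationship the object sustains


@dataclass(frozen=True)
class Edge:
    """One link in the Network: what this object connects to, and how."""
    kind: EdgeKind
    target: str        # the Form (identity) of the connected object
    severable: bool    # can this link be cut without breaking an obligation?


@dataclass(frozen=True)
class ProvenanceEntry:
    """One fact in the object's history."""
    at: datetime
    event: str         # e.g. "booking committed", "backup written"
    reversible: bool   # can the state before this event be recovered?


@dataclass
class AbstractObject:
    """A thing in abstract space, carrying all three dimensions."""
    form: str                                                        # what is this?
    network: list[Edge] = field(default_factory=list)                # what does it connect to?
    provenance: list[ProvenanceEntry] = field(default_factory=list)  # what does its history constrain?

    def deletion_is_reversible(self) -> bool:
        """Deletion is reversible only if no irreversible history and no
        unseverable obligation would be destroyed along with the object."""
        return (all(p.reversible for p in self.provenance)
                and all(e.severable for e in self.network))
```

On this model, the PocketOS volume would carry OBLIGATES edges to every booking inside it and ninety days of irreversible history; deletion_is_reversible() returns False before any API call is even composed.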
This is what was missing from the PocketOS agent. Not rules. Not permissions. A representation of the abstract space it was operating inside.
Why This Is Not Just Better Logging
A skeptical engineer will ask: isn't this just knowledge graphs with better metadata? Isn't Provenance just audit logging? Isn't Network just dependency tracking?
These tools exist and they address pieces of the problem. But they address it the way a map addresses navigation: useful, but only if the navigator is required to consult it before acting. The PocketOS agent had system prompt rules. It read past them under task pressure. An audit log after the fact does not stop the deletion. A dependency graph the agent is not required to query does not either.
The distinction that matters is between information that is available and constraints that are load-bearing. In physical space, geometry is load-bearing: a wall does not merely suggest that you should not walk through it. In abstract space, the equivalent constraints, the Form of what you are touching, the Network of what it connects to, the Provenance of what its history forecloses, must be structural features of the agent's decision process, not advisory layers it can read past.
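A minimal sketch of what load-bearing could mean in code, reusing the hypothetical types above: the only way to obtain an ImpactAssessment is to run the Network and Provenance queries, and the delete path cannot be entered without one. The names and the gate itself are illustrative assumptions, not an existing framework.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ImpactAssessment:
    """Evidence that the substrate was actually queried. The agent cannot
    reach the destructive path without constructing one of these."""
    obj: AbstractObject
    obligations_severed: int   # unseverable Network edges this action would cut
    history_foreclosed: int    # irreversible Provenance entries this action would erase


def assess(obj: AbstractObject) -> ImpactAssessment:
    """The only constructor: forces the Network and Provenance queries to run."""
    return ImpactAssessment(
        obj=obj,
        obligations_severed=sum(1 for e in obj.network if not e.severable),
        history_foreclosed=sum(1 for p in obj.provenance if not p.reversible),
    )


def delete(obj: AbstractObject, assessment: ImpactAssessment) -> None:
    """Destructive action. The refusal is structural, not advisory: it lives
    in the call signature, not in a system prompt the agent can read past."""
    if assessment.obj is not obj:
        raise ValueError("assessment does not cover this object")
    if assessment.obligations_severed or assessment.history_foreclosed:
        raise PermissionError(
            f"irreversible action: severs {assessment.obligations_severed} obligation(s), "
            f"forecloses {assessment.history_foreclosed} recorded event(s); "
            "human sign-off required"
        )
    ...  # only now perform the actual deletion
```

A determined caller can route around any single gate, of course. The point of the sketch is where the wall sits: in the structure of the action itself, not in advice alongside it.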
What Coordination Geometry provides is not a new tool. It is the underlying reason why Form, Network, and Provenance belong together as a unified substrate, not three separate add-ons. They are the geometry of abstract space. Agents operating inside abstract space without this geometry are not navigating poorly. They are navigating blind.
The Research Community Is Converging on the Same Gap
The engineering and research communities are arriving at the same problem from the opposite direction, without yet having a unified framework that explains why the pieces belong together.
Practitioners in 2026 are calling for agentic systems to expose provenance, tool-call traces, and policy decisions as first-class product features, using the word provenance in exactly the sense I use it: the documented history of data that constrains what can be done next. That is Provenance as a substrate dimension.
Researchers studying world models for agentic AI identify the critical transition as moving from agents that reason about tasks to agents that reason within environments. An environment, in this framing, is a representation of what exists and how it connects. That is Form and Network as substrate dimensions.
Knowledge graph researchers in 2026 are arguing that the predictive power of data science is increasingly hidden not in the nodes but in the structural topology of the network itself. That is Network, named from the engineering direction.
Each thread is reaching toward the same substrate. What is missing is the unified framework that shows why these three dimensions are not independent engineering concerns but aspects of a single geometric reality: abstract space, the space that civilization actually runs on.
What This Looks Like in Practice
A database volume in a system grounded in these three dimensions is not just a storage object. Before an agent executes a deletion, it can query: what is the Form of this object and what Forms depend on it? What does its Network say about the obligations it carries? What does its Provenance say about the history it encodes and what that deletion forecloses?
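Continuing the same hypothetical sketch, here is what those three queries might return for a PocketOS-style volume. Every identifier and count below is invented for illustration.

```python
from datetime import datetime, timedelta

# A production volume as the substrate would represent it (invented data).
volume = AbstractObject(
    form="railway volume: pocketos-prod-db",
    network=[
        Edge(EdgeKind.DEPENDS_ON, "pocketos-prod-backups (same volume)", severable=False),
        Edge(EdgeKind.OBLIGATES, "booking #4412: customer pickup, May 3", severable=False),
        Edge(EdgeKind.SERVES, "PocketOS <-> rental-business relationship", severable=False),
    ],
    provenance=[
        ProvenanceEntry(at=datetime(2026, 1, 26) + timedelta(days=d),
                        event="booking committed", reversible=False)
        for d in range(90)  # ninety days of accumulated commitments
    ],
)

print(volume.form)                      # Form: what is this?
print(len(volume.network))              # Network: what does it connect to?
print(volume.deletion_is_reversible())  # Provenance: False, history would be foreclosed

try:
    delete(volume, assess(volume))
except PermissionError as e:
    print(e)  # the cost is legible before the action, not after it
```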
Those are not exotic questions. They are the questions a competent human engineer asks before touching a production system, because a competent human engineer carries a model of abstract space built through years of operating inside it. The model is implicit, built from experience. Agents do not yet have that model. They have task context and a list of rules.
The path forward is not more rules. It is giving agents a structural representation of the abstract space they act inside, one in which the cost of irreversible action is legible before the action is taken. That representation has three dimensions. We now have words for them.
A Foundation for What Comes Next
The three chapters of Living Civilization that establish this framework (Abstraction, The Metaverse, and Coordination Geometry) are complete and available at github.com/chadlupkes/livingcivilization. They develop the argument in full, from first principles in physics through the emergence of abstract space and the geometry that governs it.
This post is the application. This class of incident, agents acting inside commitment-bearing reality without a model of it, will continue until the representational substrate is in place. The substrate has a geometry. We built civilization inside that geometry for ten thousand years before we had words for it.
Now we need the words. The agents are already inside the space.
Chad Lupkes is the author of Living Civilization, a framework for civilizational coordination based on geometric principles. He writes at chadlupkes.blogspot.com and on Nostr. The public manuscript repository is at github.com/chadlupkes/livingcivilization.
Discussion welcome. Find him at linktr.ee/chadlupkes.
