I’ve spent more than a decade building data infrastructure, including observability systems at Twitter, streaming data processing systems at Google, and declarative data processing systems at Snowflake. Over that time, I kept running into the same problem: there is a big gap between the elegance of programming languages and databases in theory, and the messy reality of building and operating real systems with them. In practice, the work is often tedious, stressful, and fragile. The systems I’ve worked on have usually been hard to change and easy to break.
Infrastructure engineers often become overly cautious about change. We spend more time testing and deploying updates than actually creating them. We call that maturity, but it is worth questioning. The ideal would be to let tools handle the boring, repetitive work so engineers can focus on ideas, experimentation, and understanding results.
The deeper question is what is actually missing. Computing has already been shaped by decades of work from some of the brightest minds in the field. A lot of that effort has gone into reducing accidental complexity. So if a major foundation-level breakthrough were possible, why hasn’t it already been found?
Maybe it has. Maybe it hasn’t. The structure of modern abstractions suggests a very specific opportunity: today’s software stack often forces a choice between powerful tools and general-purpose tools. That feels like a false tradeoff. There should be a way to have both, if the right model exists.
After years of searching, I believe such a model may exist. Building it is bigger than one person, which is why my cofounders, Daniel Mills and Skylar Cook, and I are starting Cambra. We are building a new programming system that rethinks the traditional internet software stack around a new model. The goal is simple: make building internet software feel like working within one coherent system, instead of assembling a fragmented set of parts.
Models Give You Superpowers
Computers are powerful because they turn abstract ideas into real-world effects. A spreadsheet formula changes a budget decision. A routing algorithm determines the shortest path. A database records a transaction and moves money between accounts.
Every program works through a model, which is an abstract representation of some part of the world. Models let software ignore irrelevant complexity and focus on the parts that matter for the task at hand. At the simplest level, a program can be described as a loop that takes input, updates state, computes consequences, and produces output. But the choice of model has a huge effect on what can be built, understood, and maintained.
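The loop described above can be sketched in a few lines. This is a minimal illustrative sketch, not any particular system; all of the names (`run`, `update`, `render`) are invented for the example.

```python
# A minimal sketch of the "program as a loop" model: take input,
# update state, compute consequences, produce output.
# All names here are invented for illustration.

def run(inputs, update, render):
    """Drive the input -> state -> output loop over a sequence of inputs."""
    state = {}
    outputs = []
    for event in inputs:
        state = update(state, event)   # update state from input
        outputs.append(render(state))  # compute consequences, emit output
    return outputs

# A toy model of the world: a running account balance.
def update(state, amount):
    return {"balance": state.get("balance", 0) + amount}

def render(state):
    return f"balance is {state['balance']}"

print(run([10, 5, -3], update, render))
# each output reflects the accumulated state so far
```

The interesting part is not the loop itself but the choice of `state` and `update`: that choice is the model, and everything else in the essay is about how much that choice matters.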
Some models are better than others. Good models use intuitive concepts and give programmers useful rules for reasoning about behavior. Great models go further: they make programs easier to read, easier to verify, easier to optimize, and easier to refactor automatically.
So why not use the best models everywhere? The answer starts at the bottom. All modern software ultimately runs on the same foundation: bits in memory and instructions that manipulate them. But that foundation is so low-level that it is hard to connect directly to the concepts people care about. If a program is written in terms of bits and instructions, its purpose is hard to infer. If you start with an intuitive specification of real-world behavior, it is equally hard to map that specification down to raw machine terms.
To bridge that gap, we build higher-level models on top of the foundation, such as programming languages, operating systems, and databases. Working in a higher-level model always comes with a tradeoff: you give up control over how the system is lowered into simpler terms. But in return, you reduce complexity. Garbage collection is a good example: programmers do not have to manage deallocation manually, even though they lose direct control over memory management.
Models form a hierarchy. A model can be higher than the ones beneath it and lower than the ones built on top of it. But higher-level does not automatically mean better for a given task. The best model is the one whose concepts map cleanly to the problem domain. When a model is domain-aligned, it becomes much easier to move between requirements and implementation.
Tooling is where this matters most. Tools help us verify correctness, improve performance, and evolve systems over time. But tools only work within the model they understand. For example, an OS-level tool like top can show resource usage, uptime, and network throughput. It cannot do what a language-level tool like gdb can do, because gdb understands the programming model itself.
That limitation becomes painful when you have to keep dropping down to a lower level. The best higher-level models are sealed, meaning their abstractions do not leak often. Modern software already has many sealed models: people rarely write directly in assembly, implement their own operating systems, or manage state without a database. Once a model becomes sealed, development naturally splits into two tracks: people who build on top of it, and people who implement it.
That is the ideal: work inside a sealed, domain-aligned model and let tooling handle the boring parts. But what happens when the system you are building does not fit inside a single model?
Interoperability Causes Fragmentation
Modern software is usually assembled from components: databases, caches, queues, services, and frontends. In theory, this is empowering. You can take pieces off the shelf, connect them, and build something sophisticated.
In practice, it often turns into a frustrating process:
- It is tedious. A large share of software work has become configuration management and quality assurance instead of creativity and experimentation.
- It is inflexible. Once components are chosen and connected, changing the system is difficult because swapping pieces often requires major rewrites.
- It is error-prone. Developers are responsible for making everything fit together correctly, but the available tooling is limited.
- It is slow. Teams focus on minimizing development cost and deployment risk, so performance often gets less attention and degrades over time.
This is why so many systems become brittle. The reason is not just complexity in general. It is fragmentation.
Each component has an internal model, meaning the concepts it uses internally. But components also need to interact with each other, and those interactions often happen through a different, lower-level model. A library usually interacts in the same model as your code. A microservice exposing an API usually does not.
When a system is built from components, the model used to reason about the overall system is determined by these interaction models, not by the internal models. If the interaction model is lower-level, the whole system gets pushed down to that level. In internet software, that usually means a “networks and operating systems” model: machines, processes, memory, addresses, and packets.
Those abstractions are useful, but they are far removed from what most applications actually care about. They describe bytes and addresses, not objects, people, places, and actions.
Consider a program connected to a relational database. Both the application and the database may have clean internal semantics and may model the domain well. But the behavior of the combined system is not governed neatly by either model. Instead, problems that cross component boundaries have to be understood through the lens of networks and operating systems, such as server crashes, corrupted encoding, or dropped connections.
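The shape of that problem can be sketched directly. Here the `DbClient` class and its failure mode are invented stand-ins for a real networked database driver; the point is that the domain question ("what is this account's balance?") gets entangled with transport-level outcomes.

```python
# Hypothetical sketch: clean domain logic forced to handle failures in
# networks-and-OS terms. DbClient and its errors are invented for
# illustration, standing in for a real networked database driver.

class DbClient:
    """Stand-in for a networked database connection."""
    def __init__(self, healthy=True):
        self.healthy = healthy

    def execute(self, sql, params):
        if not self.healthy:
            # Failures surface as transport problems, not domain problems.
            raise ConnectionError("connection reset by peer")
        return [("alice", 100)]

def account_balance(db, owner):
    # The domain question is "what is alice's balance?", but the code
    # must reason about sockets and retries to answer it.
    try:
        rows = db.execute(
            "SELECT owner, balance FROM accounts WHERE owner = ?", (owner,))
    except ConnectionError:
        return None  # the caller now handles a network-level outcome
    return rows[0][1]

print(account_balance(DbClient(), "alice"))               # 100
print(account_balance(DbClient(healthy=False), "alice"))  # None
```

Neither the application's model nor the database's model contains "connection reset by peer"; it belongs to the interaction model underneath both.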
There is a reason components fall back to a different interaction model than their internal model: interoperability. Many useful models exist, but most of them are not compatible with each other. Some have incompatible concepts, such as programming languages and databases. Others simply do not express the concepts needed for distributed systems, where software runs across multiple machines instead of a single process.
When components cannot interact directly, they must fall back to a lower-level shared model. That is why the networks-and-OS model is everywhere: it is powerful, battle-tested, and low-level enough to support almost anything. But this compatibility comes at a cost. It sacrifices the advantages of reasoning within a domain-aligned model.
The Costs of Fragmentation
Let’s call this a fragmented system: a system assembled from many components with incompatible internal models. Fragmented systems are brittle. They are hard to change and easy to break.
That brittleness shows up in several ways:
Contract mismatches
- You change the meaning of an API field, but a downstream service still expects the old meaning, causing a runtime error.
- Microservice A deploys version 2 while Microservice B still expects version 1, causing another runtime error.
- These issues are not caught at compile time because no single artifact represents the overall system; it only comes together at runtime.
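A contract mismatch of this kind can be made concrete in a few lines. The services and the field semantics below are invented for illustration; each side is internally consistent, and the bug exists only in the combination.

```python
import json

# Hypothetical sketch of a contract mismatch across a service boundary.
# Both sides are correct in isolation; the error only exists at runtime,
# in the combination, which no compiler ever sees.

def service_a_v2_response():
    # v2 silently changed "amount" from integer cents to float dollars.
    return json.dumps({"amount": 12.50, "currency": "USD"})

def service_b_charge(payload):
    # Service B still assumes v1 semantics: "amount" is integer cents.
    data = json.loads(payload)
    cents = int(data["amount"])  # truncates 12.50 dollars to 12 "cents"
    return cents

charged = service_b_charge(service_a_v2_response())
print(charged)  # 12, not the intended 1250
```

Within a single language-level model, a type checker would flag the unit change; across a JSON boundary, both sides see only a number.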
Cross-component optimizations
- Push a filter down: you want to fetch less data, but that requires changing the API contract across multiple layers between the UI and the database.
- Reorder a join: changing lookup order can significantly improve performance, but may require awkward logic changes across components.
- Move logic between the app and the database: rewrite it in another language, retest it, and hope the semantics still match.
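The filter-pushdown case can be shown end to end with an in-memory database. This is a sketch with an invented `orders` table; in a real fragmented system the two functions below would live in different components, and moving the predicate between them would mean renegotiating an API contract.

```python
import sqlite3

# Hypothetical sketch of the filter-pushdown problem. In a fragmented
# system, the application often fetches everything and filters locally,
# because changing each layer's contract is expensive.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, "EU" if i % 2 else "US") for i in range(1000)])

def app_side_filter(conn):
    # Every row crosses the component boundary, then most are discarded.
    rows = conn.execute("SELECT id, region FROM orders").fetchall()
    return [r for r in rows if r[1] == "EU"]

def pushed_down_filter(conn):
    # The predicate runs where the data lives; far fewer rows move.
    return conn.execute(
        "SELECT id, region FROM orders WHERE region = ?", ("EU",)).fetchall()

# Same answer either way; the difference is how much data moved.
assert app_side_filter(conn) == pushed_down_filter(conn)
```

A tool that could see both components in one model could perform this rewrite automatically; a tool that sees only bytes on a wire cannot.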
Ceremony and risk around changes
- Database migrations require SQL, rollback SQL, deployment coordination, and failure handling.
- Changing a shared data model means updating the schema, updating every service, deploying in the right order, and often relying on extensive staging tests.
Impedance mismatches
- Database type systems and programming language type systems often do not line up cleanly, creating subtle edge cases that only appear with real data.
- An ORM may make relationships easy to traverse, but still generate N+1 queries because it does not fully understand the database.
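The N+1 pattern is easy to reproduce with a query counter. The schema and the `CountingConnection` wrapper below are invented for illustration; the naive traversal mimics what an ORM generates when relationships are walked object by object.

```python
import sqlite3

# Hypothetical sketch of the N+1 pattern an ORM can generate: one query
# for the parents, then one query per parent for its children, because
# the object model does not speak the relational model's joins.

class CountingConnection:
    """Wrap a connection and count queries, to make N+1 visible."""
    def __init__(self, conn):
        self.conn, self.queries = conn, 0
    def execute(self, sql, params=()):
        self.queries += 1
        return self.conn.execute(sql, params)

raw = sqlite3.connect(":memory:")
raw.execute("CREATE TABLE authors (id INTEGER, name TEXT)")
raw.execute("CREATE TABLE books (author_id INTEGER, title TEXT)")
raw.executemany("INSERT INTO authors VALUES (?, ?)",
                [(i, f"a{i}") for i in range(10)])
raw.executemany("INSERT INTO books VALUES (?, ?)",
                [(i, f"b{i}") for i in range(10)])
db = CountingConnection(raw)

def naive_titles(db):
    """One query for the authors, then one per author: N+1 in total."""
    titles = []
    for (author_id, _name) in db.execute(
            "SELECT id, name FROM authors").fetchall():
        rows = db.execute(
            "SELECT title FROM books WHERE author_id = ?",
            (author_id,)).fetchall()
        titles.extend(t for (t,) in rows)
    return titles

def joined_titles(db):
    """The relational model answers the same question in one query."""
    rows = db.execute(
        "SELECT b.title FROM authors a "
        "JOIN books b ON a.id = b.author_id").fetchall()
    return [t for (t,) in rows]

db.queries = 0
naive_titles(db)
print(db.queries)   # 11 queries for 10 authors

db.queries = 0
joined_titles(db)
print(db.queries)   # 1 query
```

The ORM's object model and the database's relational model each work on their own; the inefficiency lives in the seam between them.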
These are symptoms. The root cause is that, in a fragmented system, developers must reason about behavior through a low-level interaction model. Components are not truly composable. Every time one is added or changed, the impact on the rest of the system is not constrained by that component’s internal model. It is only constrained by the interaction model, which is usually poorly aligned with the domain and therefore difficult to match to requirements.
As a result, confidence usually requires extensive validation. Fragmented systems are inherently brittle. Good architecture can reduce the pain, but without a single domain-aligned model, tooling remains limited. The effort needed to build and maintain these systems grows steeply as complexity increases.
Coherence Has Limits
If fragmented systems are the problem, the alternative is a coherent system. A coherent system works entirely within a single, domain-aligned model. That allows tooling to operate across the whole system in one shared conceptual framework, which creates major opportunities for verification, optimization, and automation.
There are already strong examples of coherent models within specific domains:
- Type systems in programming languages catch many logic errors and interface mistakes.
- The relational model in databases gives programmers scale and performance with relatively little effort.
- Web frameworks such as Rails, Express, and Django, along with backend platforms like Firebase, Supabase, and Convex, remove a lot of the toil from web development.
- Actor systems such as Erlang or Ray make certain distributed systems easier to build.
- Durable execution systems such as Temporal and Restate make fault tolerance more accessible.
When you stay inside models like these, tooling becomes much more powerful. Developers become more productive, and the resulting systems often achieve better correctness and performance than fragmented alternatives.
So why not just use one of these models everywhere? The problem is that they are domain-specific. They do not generalize cleanly across all the things modern internet software needs to do. Real applications often span web serving, APIs, transaction processing, background jobs, analytics, and telemetry. As capabilities increase, the application is pushed beyond a single domain. At that point, the system starts fragmenting again.
The common response is to accept this as unavoidable: use the right tool for the job, wire the components together, and move on. That is practical advice, but it also assumes fragmentation is the best we can do. I do not accept that assumption.
Generality Without Fragmentation
Rejecting that assumption is not the same as disproving it. Is a general-purpose model for internet software actually possible? If it were, why would it not already exist?
It is easy to imagine a tradeoff between generality and domain alignment. In practice, there does seem to be a relationship there. But it does not appear to be a hard limit. Some models are both general-purpose and sealed. C is embedded in nearly all modern software. Linux is widely used, except in special cases such as hard real-time and safety-critical systems. Relational databases are nearly as universal, and most applications rarely need to go below that level. Sometimes the tradeoff gets pushed even further. Rust is more general-purpose than C, fits more domains, and offers better tooling without sacrificing performance.
Those kinds of advances are rare, but they do happen.
So imagine one more. If we could build a sealed model that is both general-purpose and aligned with the domains needed for internet software, we could build coherent systems on top of it without being trapped inside a narrow use case. That would open up major opportunities for tooling:
- Development could speed up dramatically because components would interact directly in terms of a shared domain-aligned model.
- System-wide verification would become practical for many applications.
- Performance tools could profile and optimize across the whole system automatically.
- Operational tools could instrument, monitor, and orchestrate services with very little setup.
If that works, it could change how internet software gets built.
That is what we are trying to build: a general-purpose, domain-aligned, sealed model for internet software. It goes against the usual advice to simply use the right tool for each job and accept fragmentation as the cost. We believe software should not have to trade coherence for generality, and that getting both would be extremely valuable.
We are not the first to try. Many people have attempted to build general-purpose, domain-aligned models for software. Most give up domain alignment to get generality. Some give up generality to get domain alignment, such as Erlang, Smalltalk, and Prolog. The rare systems that achieved both still failed to become sealed, so they leaked too often to replace lower-level alternatives. Notable attempts include object-oriented databases and J2EE.
Why do we think this time could be different? We will share more about our approach later. For now, the short answer is that advances in programming language theory and database systems have opened a path that was not available before.
Postscript: What About AI?
That is the main argument. But there is an obvious question here: does AI change all of this? If agents can handle complexity for us, why worry so much about models and coherence?
We have been talking about tooling in general, mostly using traditional tools as examples. But the most powerful new form of tooling is agentic AI. Agents are far more flexible than older tools, and they already have a major impact on developer productivity. They are a real shift in software development.
That has led to a popular idea: AI alone may be enough to unlock everything described above. In that view, code is just a by-product, a rough intermediate form of the programmer’s intent, while the real truth lives in prose prompts and documents given to the agents. AI will manage the complexity. We will no longer need to care about abstractions or maintainability. Those concerns supposedly belong to an outdated era. AI is the future, and traditional software engineering will fade away.
I think that view rests on several misunderstandings.
The first is confusing ambiguity with abstraction. People often describe code as “low level” when what they really mean is that it is domain-misaligned. That is not true of all code. It depends on the model and the problem. Some models are lower-level, some are higher-level. Code can live at either level. What code is really about is precision. It is unambiguous, even when it is abstract.
That matters because prose and code solve different problems. Prose is inherently ambiguous, and that is part of its usefulness. Meaning can remain fluid. Code is precise, and that precision is central to its usefulness. Whether a human or an AI writes it may change over time, but code will not be replaced by prose, because complex systems always need precision.
The second misunderstanding concerns the relationship between models and domains. Even if humans and AI communicate in prose, they still need shared concepts to reason about the system. Models provide that meaning. And those models still need to align with the domain of the problem. If they do not, the problem quickly becomes hard to manage. That is exactly why fragmented systems are so painful compared with coherent ones.
The third misunderstanding is the idea that AI will become so capable that programmers will no longer need good models. But that ignores what has to happen inside the AI for that to work. If you want AI to protect developers from domain misalignment, the best solution is still to invent a sealed, domain-aligned model and give it to the AI. An AI with such a model will outperform one without it. Good models are hard to create, but once they are found, they remain valuable. That will not change whether humans or AIs are doing the searching.
So the more careful conclusion is different from the popular one. AI is powerful, and it will likely keep getting more powerful. But no amount of intelligence removes the limits of brute force. The universe is too large to search blindly. Intelligence is about finding good models for understanding and shaping the world. Software already has many powerful models, but there is still room for better ones. New useful models will make AI more capable, just as they make people more capable.
There is already evidence for this in agentic coding. If you let even strong AI agents loose in a large, poorly structured codebase, they often make a mess. They do much better in narrow environments with clear rules, such as:
- Stateless single-page JavaScript apps
- Standard three-tier architectures built with a solid web framework
- Clearly defined analytics tasks inside a single well-documented data warehouse
Those are all cases where agents are operating inside coherent systems and domain-aligned models.
AI capabilities will keep improving, and the range of domains where AI can be useful will keep expanding. But there will still be room for innovation that makes AI more productive. AI helps us work at higher levels, but it is not a reason to stop improving the lower levels. What we need now is a programming model that makes coherent internet applications possible. AI can help, but it will not build that model for us. We still have to do it ourselves.