
Back in January I sat down and wrote about the idea that maybe we should look at building a programming language optimised for AI. Since writing that article I got a bit obsessed with the idea. In January I argued it might be solving the wrong problem, and I still think that’s partially true. But I also couldn’t stop thinking about what it would look like if you tried. So I went and did it.
Say “Hello” to Vera, a language designed for machines to write.
$ vera run examples/hello_world.vera
Hello, World!
The idea was that instead of making models write code in languages built for how we work, we should design a language that is built for how they work. Which leads to a question: what needs to change to make a programming language suited for models to use, rather than humans?
The evidence suggests the biggest problem models face isn’t syntax. It’s coherence over scale. Models struggle with maintaining invariants across a codebase, understanding the ripple effects of changes, and reasoning about state over time. They’re pattern matchers optimising for local plausibility, not architects holding the entire system in mind. So building a language for them means solving that problem. Every design decision in Vera is meant to be an answer.
Vera favours checkability over correctness. The design emphasises code that can be mechanically checked, and when there is an error, the compiler provides a natural-language explanation with a concrete fix. It doesn’t give the model an opaque status report; it feeds back instructions for fixing the problem.
Traditional compilers produce diagnostics for humans. Vera instead produces instructions for the model that wrote the code: what went wrong, why, how to fix it (with a concrete code example), and a reference back to the specification. The compiler’s output is designed to be fed directly back to the model as corrective context.
This matters because the natural workflow for AI code generation is a loop: write code, check, fix, repeat. If the check step produces output that the model can’t act on, the loop breaks. Vera’s diagnostics close that loop.
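As a sketch of what that loop looks like mechanically, here it is in Python with a stubbed-out model and compiler. None of these function names are real Vera tooling, and the diagnostic fields are my own assumptions; the point is only the shape of the loop.

```python
# Toy generate-check-fix loop. What matters is the shape of the
# diagnostic: structured, actionable output the model can consume on
# the next turn, rather than an opaque exit code.

def check(source):
    """Stand-in for the compiler: return a diagnostic dict, or None if OK."""
    if "requires(" not in source:
        return {
            "error": "missing contract",
            "why": "every Vera function must declare its contract",
            "fix": "add requires/ensures/effects clauses before the body",
            "spec": "see the contracts section of the language reference",
        }
    return None

def generate(prompt, feedback=None):
    """Stand-in for a model call; a real loop would prompt an LLM here,
    appending `feedback` (the previous diagnostic) as corrective context."""
    source = "public fn identity(@Int -> @Int) { @Int.0 }"
    if feedback is not None:
        # Apply the compiler's suggested fix on the retry.
        source = source.replace(
            ") {", ")\nrequires(true)\nensures(true)\neffects(pure)\n{"
        )
    return source

diagnostic = None
for attempt in range(5):
    source = generate("write an identity function", diagnostic)
    diagnostic = check(source)
    if diagnostic is None:
        break  # the code passes the check; exit the loop

print(attempt)  # 1: one round of compiler feedback was enough
```

A real agent harness would replace both stubs, but the control flow is the same: the diagnostic is the prompt for the next attempt.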
Vera favours explicitness over convenience. All state changes are declared, all effects are typed, all function contracts are mandatory. There are no implicit behaviours. Contracts are the source of truth: every function declares what it requires, what it guarantees, and what effects it performs. Even a one-liner has a full contract. The compiler then verifies this statically where possible.
Here’s what that looks like in practice. A function that divides two integers:
public fn safe_divide(@Int, @Int -> @Int)
requires(@Int.1 != 0)
ensures(@Int.result == @Int.0 / @Int.1)
effects(pure)
{
    @Int.0 / @Int.1
}
Here the requires clause means this function cannot be called with a zero divisor. The compiler checks every call site to prove the precondition holds; if it can’t prove it, the code doesn’t compile. Division by zero isn’t a runtime error, it’s a type error. The model doesn’t need to remember to check for zero, because the compiler enforces the obligation.
Effects work the same way. Vera is pure by default: if a function reads or writes state, that must be declared as an effect. The compiler tracks what effects each function performs, and if a function claims to be pure but calls something that isn’t, it won’t type-check. The model can’t accidentally introduce hidden mutation.
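To make the effect-tracking idea concrete, here is a toy checker in Python. The effect names and the call-graph representation are my inventions, and Vera’s real effect system is richer than a flat set union, but the failure mode it catches is the same one described above.

```python
# Toy effect checker: a function's declared effects must cover the
# effects of everything it calls. A function that declares itself pure
# but calls an effectful function fails the check.

declared = {
    "read_config": {"io.read"},     # hypothetical effect names
    "log":         {"io.write"},
    "pure_math":   set(),           # pure: declares no effects
    "load":        {"io.read"},     # calls read_config and pure_math
    "sneaky":      set(),           # claims purity but calls log
}
calls = {
    "load":   ["read_config", "pure_math"],
    "sneaky": ["log"],
}

def undeclared_effects(fn):
    """Effects a function performs via its callees but fails to declare."""
    used = set()
    for callee in calls.get(fn, []):
        used |= declared[callee]
    return used - declared[fn]

print(undeclared_effects("load"))    # set(): declaration is honest
print(undeclared_effects("sneaky"))  # {'io.write'}: hidden effect, rejected
```

The check is purely structural: no execution is needed to catch the hidden mutation, which is exactly why it can run at compile time.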
The remaining design choices attack the problem from the other direction: instead of making it easier for the model to be right, they make it harder for the model to be wrong.
Every construct has exactly one textual representation. There are no style choices, no formatting decisions, no equivalent ways to express the same thing. In theory, if you ask two models to write the same function, the code they produce should be identical. Expressiveness is constrained deliberately: fewer valid programs means fewer opportunities for the model to diverge from the intended solution.
The most visible consequence of this is that Vera has no variable names. The language uses typed De Bruijn indices: @Int.0 refers to the most recent integer binding, while @Int.1 refers to the one before that. This eliminates an entire class of model errors around naming coherence: choosing misleading names, reusing names across scopes, or losing track of which variable is which. The compiler resolves references structurally, not by matching strings.
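A toy Python model of how such references resolve, purely as an illustration of the indexing scheme (this is my own sketch, not the compiler’s actual representation):

```python
# Typed De Bruijn indices: @Int.0 is the most recent Int binding,
# @Int.1 the one before it, counting per type. Bindings of other
# types don't shift the index.

bindings = []  # (type, value) pairs, innermost binding last

def bind(ty, value):
    bindings.append((ty, value))

def resolve(ty, index):
    """Resolve @<ty>.<index> by walking bindings innermost-first,
    skipping bindings of any other type."""
    matching = [v for t, v in reversed(bindings) if t == ty]
    return matching[index]

bind("Int", 7)       # will become @Int.1 once another Int is bound
bind("Bool", True)   # @Bool.0; doesn't affect @Int numbering
bind("Int", 42)      # @Int.0, the most recent Int

print(resolve("Int", 0))   # 42
print(resolve("Int", 1))   # 7
print(resolve("Bool", 0))  # True
```

Because the index is relative and per-type, two models emitting the same bindings in the same order necessarily emit the same references, with no naming decisions to diverge on.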
Here’s a pure function that computes an absolute value.
public fn absolute_value(@Int -> @Nat)
requires(true)
ensures(@Nat.result >= 0)
ensures(@Nat.result == @Int.0 || @Nat.result == -@Int.0)
effects(pure)
{
    if @Int.0 >= 0 then {
        @Int.0
    } else {
        -@Int.0
    }
}
The ensures clauses are machine-checkable promises: the result is non-negative and equals the absolute value of the input. The compiler proves them with a Satisfiability Modulo Theories (SMT) solver.
$ vera run examples/absolute_value.vera --fn absolute_value -- -42
42
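Vera discharges those obligations with Z3 at compile time. As rough intuition for what the solver is being asked, here is the same pair of obligations checked exhaustively over a small range in Python. This is only a toy stand-in: the SMT solver proves the claims for all integers, not a sampled interval.

```python
# Check absolute_value's two ensures clauses by brute force over a
# small range. This only samples; Z3 proves the same claims for every
# integer, which is what turns the contract into a guarantee.

def absolute_value(x):
    # Mirrors the Vera body: if @Int.0 >= 0 then @Int.0 else -@Int.0
    return x if x >= 0 else -x

for x in range(-1000, 1001):
    result = absolute_value(x)
    assert result >= 0                    # ensures(@Nat.result >= 0)
    assert result == x or result == -x    # ensures(result equals the input
                                          # or its negation)

print("both obligations hold on [-1000, 1000]")
```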
This is also where the trade-off becomes sharpest. A language designed for models is a poor choice for humans. Vera’s syntax is optimised for unambiguous machine emission, not human readability. There is no syntactic sugar: no shorthand, no operator overloading, no implicit conversions, no default arguments. Reading Vera code is not a pleasant experience. That’s not a bug; it’s the thesis. Every feature that makes code easier for a human to scan is a feature that introduces ambiguity or optionality that a model can get wrong.
Vera is in active development. The full compiler pipeline works end-to-end: source code goes through parsing, type checking, contract verification via Z3, and WASM code generation. Programs compile to WebAssembly and execute via wasmtime. The compiler handles the full language surface: algebraic data types, pattern matching, closures, generics via monomorphisation, and algebraic effect handlers. The module system supports cross-file imports with public/private visibility and cross-module contract verification. You can write a function in one file, import it in another, and the compiler will verify that the caller satisfies the callee’s preconditions across the module boundary.
But I should be honest about the limitations. The decidable fragment that Z3 can automatically prove is still limited to linear arithmetic over integers, booleans, and array lengths; contracts involving more complex reasoning fall back to runtime checks. Termination measures are syntactically enforced but not yet verified by the SMT solver. The type system has gaps around type variable subtyping and effect row unification. There’s no garbage collector: memory is managed by a bump allocator, which means long-running programs will eventually exhaust the heap. String operations are minimal. There’s no package manager, no canonical formatter, and no contract-driven test runner, though all three are on the roadmap.
However, the biggest open question is that Vera hasn’t been stress-tested by its intended users. Models can write Vera programs (the compiler ships with agent-facing documentation designed for exactly that), but nobody has yet run the experiment at scale to measure whether models actually produce more reliable code in Vera than in existing languages. The thesis is plausible. The tooling exists to test it. The data doesn’t exist yet, and that matters.
If models become the primary authors of software, and the trajectory suggests they may well, then the entire toolchain needs to adapt. Not just the editors, not just the deployment pipelines, but the languages themselves. Programming languages have always co-evolved with their users: assembly emerged from hardware constraints, C from the needs of operating systems, Python from a drive for programmer productivity. The idea that the language should adapt when the author changes isn’t radical. It’s historically consistent.
Vera is one experiment in what that adaptation looks like. It’s MIT-licensed and on GitHub at github.com/aallan/vera, with a companion site at veralang.dev. It ships with agent-facing documentation, a complete language reference designed to be dropped into a model’s context, so if you want to point a model at it and see what happens, the infrastructure is there.
What I’d particularly welcome is exactly that. People pointing models at Vera and reporting what they find. The interesting experiment isn’t whether the compiler works: we know it does. The interesting experiment is whether models actually write better code in a language designed for them. That question hasn’t been answered yet.
The broader question for anyone thinking about where technology is heading is this: what happens to the developer tools market when the developer isn’t human? Vera is a small, very early answer. But the question is going to get very large, very quickly.