
No free lunch in vibe coding

November 24, 2025

Everything should be made as simple as possible, but not simpler.

— Einstein

In The Library of Babel, Jorge Luis Borges imagines a universe made up of endless hexagonal rooms. These rooms are full of books, and the totality of those books contains every possible combination of letters. Among the meaningless volumes must exist everything we know, everything we will ever know, and everything that could in theory be known. Knowledge turns into a matter of navigation: It's not about producing information, it's about locating the thing you want to know.

Consider an LLM output: It may represent actual knowledge, it may be slightly incorrect, or it may be utter nonsense. ChatGPT is essentially a 21st-century Library of Babel, where you navigate from one room to another by adjusting your prompts.

I think this is a key idea missing from mainstream AI speculation: Navigating an immense mass of information is a nontrivial task in itself, and it has important implications that are rarely discussed. That's why I would like to take this opportunity to zoom in on it, and speculate about what it means for the future of software development.

The oracle argument

Imagine an oracle agent: An LLM that produces any program upon request with zero mistakes. This seems like the logical endpoint of the ongoing development, and it's probably something Jensen Huang had in mind when he said that “kids shouldn't learn to code”.

At first glance, the inception of an oracle indeed seems to mark the end of software development. Everything is prompting from now on, as natural language is — obviously — easier than any formal language used by computer programmers.

I disagree. And this comes down to control: The existence of an oracle would not abolish the need for meticulous control over what the program actually does. This is an essential business requirement. There must be near-absolute certainty that mission-critical operations are not disturbed by a single surprising input or program state. Confidential data must remain confidential. Access to different resources needs to be regulated. No matter how the program was built, any serious engineer needs to keep these issues firmly in hand.

Hence, we enter The Library of Babel. Assuming an oracle agent, consider the following question: "What is the simplest prompt that produces the desired program?" And by "desired" I mean a program that doesn't just do something roughly along the lines of what was requested, but actually satisfies a well-defined specification, and respects all sorts of security constraints and integrity requirements.

A couple of information-theoretical ideas emerge. It’s a well-known fact that every program x has a Kolmogorov complexity C(x): the length, in bits, of the shortest description that produces x. In other words, C(x) is the minimal number of bits it takes to express x.

On the other hand, there is Shannon’s famous source coding theorem, which sets an ultimate limit on lossless data compression. When you mix these ingredients, you get two facts regarding any computer program x.

  • There is an optimal representation of x, whose length is C(x).
  • There is a limit on how much any representation of x can be compressed.

While I don’t have sufficient background in information theory to prove an actual theorem here, this seems to produce a no free lunch situation: On average, a program x of high Kolmogorov complexity requires a correspondingly lengthy prompt.
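Shannon's limit can be felt empirically: the entropy of a byte string bounds how far any lossless compressor can shrink it. The following Python sketch is my own illustration, not part of the original argument; it compares a highly repetitive "program" with random bytes of the same length, using zlib as a stand-in compressor.

```python
import math
import os
import zlib
from collections import Counter

def entropy_bits(data: bytes) -> float:
    """Empirical Shannon entropy of the data, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A highly regular "program": low entropy, so it compresses well.
regular = b"print('hello')\n" * 200

# Random bytes: near-maximal entropy, so essentially incompressible.
random_like = os.urandom(len(regular))

for name, data in [("regular", regular), ("random", random_like)]:
    h = entropy_bits(data)
    compressed = len(zlib.compress(data, 9))
    print(f"{name}: {h:.2f} bits/byte, {len(data)} bytes -> {compressed} bytes")
```

The repetitive input shrinks to a tiny fraction of its size, while the random input barely shrinks at all: no compressor, and by analogy no prompt, can express incompressible content in fewer bits than its entropy allows.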

Conservation of complexity

To me, the moral of the oracle argument is that complexity can be transformed, but not eradicated. In theory, you can produce software with a single prompt, yes. But at what point does a prompt become complex enough to qualify as a project in its own right? Does the whole scheme merely transform programming complexity into prompt-engineering complexity?

The way I see it is that LLMs are another step in the saga of machine-human interaction. It may even be a revolutionary step, something akin to the introduction of the graphical user interface. This development, however, is orthogonal to complexity. Complexity is not a matter of implementation, but an inherent mathematical property of the whole engagement.

Complexity can be managed, though, and guess what’s really great for managing complexity — a programming language. That would, ironically, complete the circle: First, we used our oracle to deprecate the existing programming languages. Then we introduced a new one to manage the resulting prompt-engineering complexity.

When it comes to AI and software engineering, people are laser-focused on removing the need to write code. Presumably, code is seen as something "difficult" or "cryptic". AI is seen as an enabler for "mere humans" to enter this domain that is (maybe even intentionally) intimidating.

Nothing could be further from the truth. Serious programming languages are not obfuscations; they are not intimidating or difficult on purpose. It’s quite the opposite: they are best efforts at making an extremely complex thing as simple as possible. And however you tackle this, that same complexity will be present in one form or another, and has to be managed in one way or another. Whatever tools you introduce — be they LLMs or something that does not even exist yet — as long as absolute control remains a requirement, there is no way around addressing the resulting complexity.

Perhaps we are still far from the optimal way for us humans to deal with complexity. Maybe LLMs turn out to be groundbreaking. Portraying technical skills as obsolete, though, doesn’t seem to stand up to scrutiny.


Article by

Matias Heikkilä