The Myth of Write-Once Code

There is a persistent fantasy in software development that you write code, it works, and you move on. It has never been true. All code becomes legacy code. The only question is whether it becomes legacy code that people can work with.

[Image: code editor showing the same file with multiple years of modification history layered across git blame annotations]

Every developer has experienced the moment. You open a file you wrote six months ago and you do not recognise it. Not because someone else changed it, but because the context you held in your head when you wrote it has evaporated. The variable names that seemed obvious now seem cryptic. The control flow that felt elegant now feels convoluted. The comment that says “temporary fix” is still there, and it is now load-bearing.

This is what happens to all code, written by anyone, in any language, on any project. Code is not a finished product. It is a living artefact that must be read, understood, modified, and extended by people who were not in the room when it was written – including, often enough, the person who wrote it.

The myth of write-once code – the idea that you write it, ship it, and never look at it again – is one of the most damaging assumptions in software development. It shapes how teams plan, how they allocate time, and how they think about quality. And it is completely wrong.


The read-to-write ratio

The commonly cited statistic is that code is read ten times more often than it is written. The actual ratio is probably higher. Every bug investigation starts with reading code. Every feature addition starts with reading the code it will integrate with. Every onboarding session involves reading code that someone else wrote. Every code review is an act of reading.

Despite this, most development practices optimise for writing. Sprints are planned around features to be implemented. Productivity is measured by code shipped. Developer tools focus on generating code faster. The implicit assumption is that the bottleneck is in the writing.

It is not. The bottleneck is in the understanding. The time a developer spends trying to understand existing code before they can modify it safely is, in most mature codebases, the majority of their working day. A function that took an hour to write might take three hours to understand when someone encounters it eighteen months later with no context.

If you optimise only for writing speed, you are optimising for the minority of the lifecycle while degrading the majority.


Every line of code is a liability

This sounds cynical, but it is mechanically true. Every line of code you write is a line that must be maintained. It must be kept compatible with the libraries it depends on. It must be kept consistent with the patterns used elsewhere in the codebase. It must be understood by the next person who encounters it. It must be tested, or at minimum, it must behave predictably when the surrounding code changes.

Code that is written with the assumption that nobody will ever need to change it is code that is written without error handling for edge cases the original author did not consider, without documentation of the assumptions it makes, without tests that verify its behaviour independently of the current system state, and without consideration for how it will interact with code that does not yet exist.
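As a hypothetical illustration (the function names and the price format are invented for this sketch), compare a parser written under the write-once assumption with one that states its assumptions and rejects malformed input at the boundary:

```python
def parse_price(raw):
    # Write-once version: silently assumes input is always "USD 12.34".
    # Any other shape fails later, far from the cause, with a cryptic error.
    return float(raw.split()[1])


def parse_price_defensive(raw: str) -> float:
    """Parse a price string of the form '<CURRENCY> <amount>', e.g. 'USD 12.34'.

    Assumes exactly one alphabetic currency code followed by a decimal
    amount. Raises ValueError with a descriptive message on any other
    shape, so failures surface here instead of deep in later code.
    """
    parts = raw.split()
    if len(parts) != 2:
        raise ValueError(f"expected '<currency> <amount>', got {raw!r}")
    currency, amount = parts
    if not currency.isalpha():
        raise ValueError(f"expected alphabetic currency code, got {currency!r}")
    try:
        return float(amount)
    except ValueError:
        raise ValueError(f"expected a numeric amount, got {amount!r}") from None
```

The second version costs a few extra lines today; in exchange, the next reader can see exactly what input shape the author assumed without reverse-engineering it from a stack trace.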

This is not hypothetical. It is the default state of most codebases. Features are shipped under deadline pressure with the intention of coming back to clean them up. The clean-up never happens because the next deadline arrives. The code calcifies. It becomes the foundation on which new features are built. And each new feature inherits the assumptions, the shortcuts, and the implicit contracts of the code beneath it.


The transition from new code to legacy code

There is no bright line between “new code” and “legacy code.” The transition is gradual, and it begins the moment the code is committed. The first step is context loss: the author moves on to the next task, and the detailed understanding of why specific decisions were made starts to fade. The second step is environment drift: the libraries, runtime, and deployment context evolve around the code, creating subtle incompatibilities. The third step is assumption divergence: other parts of the system change in ways that invalidate assumptions the original code makes.

By the time code is six months old, it is functionally legacy code. Not because it is bad, but because the context in which it was written no longer exists. The developer who wrote it has forgotten the details. The system around it has changed. The requirements it was designed to meet have evolved.

This is normal. This is how software works. The problem is not that code becomes legacy code. The problem is that teams treat code as if it will not, and therefore do not invest in the practices that make legacy code manageable.


What write-once thinking costs you

When a team operates under the write-once assumption, several predictable things happen. Documentation is treated as optional, because the people writing the code understand it right now. Tests are written to satisfy coverage requirements rather than to explain behaviour, because the current implementation is fresh in everyone's mind. Error handling is minimal, because the happy path is the one the team is focused on. Naming is expedient rather than descriptive, because the author knows what the variables mean today.

Six months later, a new developer is assigned to modify that code. They cannot find documentation. The tests do not explain what the code is supposed to do. The error handling does not cover the case they are trying to handle. The variable names do not communicate the domain concepts they represent. The developer spends two days understanding the code before they can spend one day changing it.

Multiply this across every module in the codebase. Multiply it across every developer who joins the team. Multiply it across every year the system remains in production. The cumulative cost of write-once thinking is staggering, and it is entirely avoidable.


Review as a maintainability practice

Code review is often positioned as a quality gate – a check that happens before code is merged to catch bugs. That is one function, but it is not the most important one. The most important function of code review is to ensure that code can be understood by someone other than its author.

When a reviewer reads your code, they are simulating the experience of every future developer who will encounter it. If the reviewer cannot follow the logic without asking questions, neither will anyone else. If the reviewer finds the naming confusing, so will the next person. If the reviewer does not understand why a particular approach was chosen, that context will be lost permanently unless it is documented.

This is why review matters beyond bug detection. It is the mechanism by which code transitions from being understood by one person to being understandable by anyone. It is the practice that counters the write-once assumption by explicitly verifying that the code communicates its intent.


Full-codebase review and the accumulated problem

PR review catches maintainability issues in new code. It does not catch the accumulated maintainability issues in code that was written before the team adopted rigorous review practices. Every codebase has a stratum of code that was written before the current standards, before the current team, before the current understanding of the domain. That code is still there, still running, and still being read by developers who need to work around it.

Full-codebase review addresses this accumulated problem. It examines the existing code – not just the latest changes – and identifies the places where intent has become obscured, where patterns have diverged, where assumptions have drifted. This kind of review was long impractical to perform manually on large codebases. AI-assisted review has made it feasible, because an AI system can read hundreds of thousands of lines and identify cross-cutting issues that would take a human reviewer weeks to surface.


Writing for the future reader

The antidote to write-once thinking is simple in principle and difficult in practice: write every line of code as if someone else will need to understand it without your help. Because they will.

This means choosing descriptive names over short ones. It means writing comments that explain why, not what. It means structuring code so that the control flow is obvious from the shape of the function, not from memorising the implementation details. It means writing tests that serve as documentation of expected behaviour. It means handling errors explicitly rather than hoping they do not occur.

None of this is free. Writing code for future readability takes more time than writing code for current functionality. But the investment pays for itself many times over, because the code will be read far more often than it will be written. Every minute spent making code clearer saves hours of future confusion.

Write-once code is a myth. All code is read-many code. The teams that accept this – and build their practices around it – are the teams whose systems remain workable as they grow, whose developers remain productive as the codebase ages, and whose technical debt stays manageable rather than compounding silently into a crisis.


Limits and tradeoffs

  • AI-assisted review can miss context. Treat findings as prompts for investigation, not verdicts.
  • False positives happen. Plan a quick triage pass before you schedule work.
  • Privacy depends on your model setup. If you use a cloud model, relevant code is sent to that provider; local models can keep inference on your own hardware.