You can learn more about an engineering team by reading their code than by reading their process documentation. Process documents describe how the team intends to work. The codebase reveals how they actually work.
Every codebase is the accumulated output of hundreds or thousands of decisions made by real people under real constraints. The patterns in that code – the consistencies and inconsistencies, the things that are tested and the things that are not, the abstractions that are clean and the abstractions that are tangled – are not random. They are artefacts of the team's culture, incentives, and organisational structure.
Reading a codebase as an organisational diagnosis is unconventional. But it is remarkably accurate. Here are the patterns and what they reveal.
Inconsistent error handling: no shared standards
Open a codebase and search for how errors are handled. In one module, exceptions are caught, logged, and re-thrown with context. In another, errors are caught and silently swallowed. In a third, they are not caught at all. In a fourth, there is a custom error class with a different structure from the custom error class in module two.
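In concrete terms, the inconsistency might look like this. A hypothetical sketch: the function names and the always-failing `fetch` stub are invented for illustration, with each function standing in for a different module's style.

```python
import logging

logger = logging.getLogger(__name__)

def fetch(path):
    # Stand-in for a network call; always fails so each style's behaviour is visible.
    raise ConnectionError(f"cannot reach {path}")

# Module one: catch, log, and re-raise with context.
def load_profile(user_id):
    try:
        return fetch(f"/profiles/{user_id}")
    except ConnectionError as exc:
        logger.error("failed to load profile %s", user_id)
        raise RuntimeError(f"profile lookup failed for {user_id}") from exc

# Module two: catch and silently swallow.
def load_settings(user_id):
    try:
        return fetch(f"/settings/{user_id}")
    except Exception:
        return None  # the caller never learns anything went wrong

# Module three: no handling at all; errors propagate to whoever is above.
def load_history(user_id):
    return fetch(f"/history/{user_id}")
```

Each function is defensible in isolation. Read side by side, a caller has no way to predict whether a failure will surface as an exception, a different exception, or a silent `None`.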
This pattern tells you that the team does not have shared coding standards – or has standards that are not enforced. It suggests a culture where individual developers make independent decisions about fundamental patterns, and nobody reviews those decisions for consistency.
It also suggests that the team has never done a cross-cutting review of the codebase. If someone had looked at error handling across all modules simultaneously, the inconsistency would be immediately obvious. The fact that it persists means nobody has looked. Each PR review checks the error handling within the diff. Nobody checks error handling across the system.
The organisational fix is not technical. It is cultural: agree on a pattern, document it, enforce it in reviews, and run periodic audits to catch drift. The codebase will reflect the change once the culture changes.
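A documented standard can be as small as one error type and one rule. A minimal sketch, assuming the team settles on "wrap with context and always chain the cause" — the names here are illustrative, not prescriptive:

```python
import functools

class AppError(Exception):
    """The one agreed-upon error type: wrap with context, always chain the cause."""

def with_error_context(context):
    # Decorator enforcing the documented pattern: catch, add context,
    # re-raise. Never swallow.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except AppError:
                raise  # already wrapped by a lower layer
            except Exception as exc:
                raise AppError(f"{context}: {exc}") from exc
        return wrapper
    return decorator

@with_error_context("loading profile")
def load_profile(user_id):
    raise ConnectionError("network unreachable")
```

The decorator is a convenience, not the point. The point is that once a single pattern is written down, review comments can cite it instead of relitigating the question on every PR.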
Dead code: no cleanup culture
Dead code is code that is no longer executed. Feature flags that will never be toggled back on. Functions that nothing calls. Imports that nothing uses. Entire modules that served a feature which was removed two years ago.
Every codebase has some dead code. But when dead code is widespread – when you find entire files that could be deleted with no effect – it reveals something about the team's relationship with their codebase. They do not feel ownership of it. They add to it but they do not curate it. Nobody takes responsibility for removing things that are no longer needed.
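A sketch of what accumulated dead code looks like in miniature — all names are hypothetical, but each line illustrates one of the categories above:

```python
import os    # dead import: nothing below uses os
import json

LEGACY_RECEIPTS = False  # feature flag that will never be toggled back on

def format_receipt_v1(order):
    # Only reachable from the dead branch below -- effectively dead code.
    return json.dumps(order)

def format_receipt_v2(order):
    return json.dumps(order, sort_keys=True)

def checkout(order):
    if LEGACY_RECEIPTS:  # dead branch: the flag is hard-coded to False
        return format_receipt_v1(order)
    return format_receipt_v2(order)
```

In a ten-line file the dead parts are obvious. In a hundred-thousand-line codebase, each one forces a reader to ask "does anything depend on this?" before touching it.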
This usually reflects an incentive problem. Teams are rewarded for shipping features, not for maintaining the codebase. Adding a feature is visible, celebrated, and trackable. Removing dead code is invisible. Nobody gets promoted for deleting code. So the dead code accumulates, and over time it becomes harder to distinguish from the living code, which makes the entire system harder to understand.
Dead code also indicates a lack of confidence in the test suite. If the team trusted their tests, they would be more willing to delete code that appears unused. When the tests are thin, every deletion carries the risk of breaking something nobody knew depended on it. So the safest course of action is to leave it there. The dead code is a symptom of insufficient test coverage, which is itself a symptom of a culture that does not prioritise testing.
Duplicated implementations: siloed teams
Duplication in a codebase comes in several forms. The obvious form is copied code – the same function repeated in multiple files. The more revealing form is independent implementations of the same concept.
Two different validation libraries. Three different approaches to date formatting. Four different patterns for API response structures. Each implementation works on its own. But they reveal that different parts of the team were solving the same problem without talking to each other.
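For example, two teams might independently ship the same date formatter. Both helpers below are invented for illustration; the file paths in the comments are hypothetical:

```python
from datetime import date

# Hypothetical frontend-owned helper, e.g. web/utils/dates.py
def format_display_date(d: date) -> str:
    return f"{d.day:02d}/{d.month:02d}/{d.year}"

# Hypothetical backend-owned helper, e.g. api/formatting.py,
# written independently months later.
def to_display_string(d: date) -> str:
    return d.strftime("%d/%m/%Y")
```

The two functions produce identical output. Neither author was wrong; they simply never knew the other existed.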
This pattern is a direct reflection of team structure. Conway's Law states that systems reflect the communication structures of the organisations that build them. When you see duplicated implementations in a codebase, you are seeing the boundaries between teams or individuals who do not communicate enough.
The frontend team built their own date formatting utility because they did not know the backend team had already built one. The payments team built their own validation library because the shared validation library did not cover their use cases, and requesting changes to it would require cross-team coordination that nobody wanted to initiate.
Reducing duplication requires improving communication, not just refactoring code. A shared library only stays shared if the teams using it have a process for contributing to it. Otherwise, the duplication will reappear after the refactoring, because the organisational structure that created it has not changed.
Missing tests: speed-over-quality pressure
When you find modules with comprehensive test coverage alongside modules with no tests at all, the boundary usually corresponds to a change in team priorities. The well-tested modules were likely built during a period when quality was emphasised. The untested modules were built when deadlines dominated.
Missing tests are the most common symptom of sustained pressure to ship faster. Writing tests takes time. When the team is under pressure, tests are the first thing to be cut, because cutting them saves time today while the cost is deferred to the future. The team knows they should write tests. They intend to come back and add them later. They rarely do.
This pattern reveals a leadership problem more than an engineering problem. The team is responding rationally to the incentives they face. If meeting deadlines is rewarded and code quality is not measured, the rational choice is to skip tests. Changing this requires changing the incentive structure: measuring code quality, setting coverage expectations, and protecting time for test writing in sprint planning.
The absence of tests also makes every other problem worse. Without tests, refactoring is risky. Without safe refactoring, technical debt accumulates. As debt accumulates, velocity decreases. As velocity decreases, deadline pressure increases. Under pressure, tests are the first thing cut. The cycle is self-reinforcing, and it is driven by culture, not technology.
Overly complex abstractions: premature optimisation culture
Some codebases are over-engineered. You find factory factories, six layers of abstraction for a CRUD operation, dependency injection frameworks wrapping simple function calls, and design patterns applied where a straightforward implementation would suffice.
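A miniature of the pattern: the layered version below does exactly what the one-line version does. All class names are invented for illustration:

```python
# Over-engineered: three classes and a factory-of-factories for a dict lookup.
class Repository:
    def __init__(self, store):
        self._store = store

    def find(self, key):
        return self._store.get(key)

class RepositoryFactory:
    def __init__(self, store):
        self._store = store

    def create(self):
        return Repository(self._store)

class RepositoryFactoryProvider:
    def get_factory(self, store):
        return RepositoryFactory(store)

def get_user_layered(store, user_id):
    return RepositoryFactoryProvider().get_factory(store).create().find(user_id)

# The straightforward implementation that would suffice.
def get_user(store, user_id):
    return store.get(user_id)
```

The layers might be justified if the repository needed to be swapped at runtime. When nothing varies, they are pure reading cost.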
This reveals a team culture that values cleverness over clarity. It often reflects a team with strong senior engineers who enjoy architectural work but are not disciplined about applying the right level of abstraction to each problem. The resulting code is technically sophisticated and practically incomprehensible to anyone who did not build it.
It can also indicate a team that has been burned by insufficient abstraction in the past and has overcorrected. If a previous system was a tangled mess because it lacked structure, the team might have resolved to never make that mistake again – and ended up making the opposite mistake instead.
The organisational signal is that the team does not have effective review norms for complexity. Code review should push back on unnecessary abstraction just as it pushes back on insufficient abstraction. If the review culture favours adding structure but not simplifying it, complexity will accumulate.
Inconsistent naming: no shared vocabulary
In one module, the concept is called a “user.” In another, it is a “customer.” In a third, it is an “account.” They all refer to the same entity. In one part of the codebase, the operation is “create.” In another, it is “register.” In a third, it is “provision.” They all do the same thing.
Inconsistent naming reveals that the team does not have a shared domain language. This is more than a cosmetic issue. When the same concept has three different names, developers must constantly translate between vocabularies. New team members do not know which name is canonical. Bug reports use one term while the code uses another.
This typically reflects a team that has not invested in domain modelling. The developers understand the implementation but not the domain in a shared, agreed-upon way. Fixing it requires more than a rename refactoring. It requires the team to sit down, agree on what the core concepts are, give them definitive names, and then enforce those names in code, documentation, and conversation.
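Enforcement can be lightweight. One option is a lint-style check over identifiers, sketched here with an invented banned-synonym map; the canonical terms are examples, not recommendations:

```python
# Agreed canonical terms, with banned synonyms mapped to them.
CANONICAL = {
    "customer": "user",
    "account": "user",
    "register": "create",
    "provision": "create",
}

def vocabulary_violations(identifier: str) -> list[str]:
    """Return 'use X, not Y' messages for any banned term in an identifier."""
    parts = identifier.lower().split("_")
    return [f"use '{CANONICAL[p]}', not '{p}'" for p in parts if p in CANONICAL]
```

Run over the symbol table in CI, a check like this turns the agreed vocabulary from a wiki page into something the build actually defends.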
Using the codebase as a diagnostic tool
Running a full codebase review produces a list of technical findings. But those findings, read collectively, also produce an organisational diagnosis. The patterns above – inconsistent error handling, dead code, duplication, missing tests, over-engineering, naming inconsistency – are not isolated problems. They are symptoms of how the team communicates, what it prioritises, and how it manages quality.
VibeRails generates these findings systematically. When you run a review, you get a catalogue of what is in your codebase. But you also get, implicitly, a catalogue of how your team works. The dead code tells you about cleanup habits. The duplication tells you about communication patterns. The test coverage tells you about deadline pressure.
The most valuable use of code review findings is not just to fix the code. It is to understand the organisational dynamics that created the patterns and to address those dynamics. Fix the code, and the symptoms recur. Fix the culture, and the code improves sustainably.
Your codebase is already telling you what your engineering culture looks like. The question is whether you are listening.
Limits and tradeoffs
- It can miss context. Treat findings as prompts for investigation, not verdicts.
- False positives happen. Plan a quick triage pass before you schedule work.
- Privacy depends on your model setup. If you use a cloud model, relevant code is sent to that provider; local models can keep inference on your own hardware.