Nobody Has Ever Reviewed Your Codebase

Your team reviews every PR. But the 412,000 lines that were there when you arrived? Nobody has ever looked at those.


Your team reviews every pull request. You have linting rules, CI checks, and a culture of careful code review. Good. That discipline matters.

But think about what that actually covers. Every PR review examines a diff: a handful of changed files, evaluated against the surrounding context. The reviewer checks that the new code makes sense, follows conventions, and doesn't introduce obvious problems.

Now think about what it doesn't cover. The 412,000 lines that were already there when you arrived. The code that was written by people who left three years ago. The modules that work but nobody fully understands. The authentication flow that was copied from a tutorial in 2019. The error handling that swallows exceptions in some files and crashes in others.

Nobody has ever reviewed that code. Not as a whole. Not with fresh eyes. Not with the question: does this codebase make sense?


This isn't a tooling problem. It's a category problem.

The tools you have are designed for specific scopes, and they're good at those scopes. But none of them are designed to read an entire codebase and reason about it.

  • PR review tools see diffs. They compare what changed to what was there before. They can't tell you that the thing that was there before was already broken.
  • Static analyzers check rules. They can find unused variables, unchecked nulls, and pattern-matched vulnerabilities. They can't tell you that your codebase has three different approaches to configuration management and none of them are documented.
  • IDE agents look forward, not back. They help you write new code. They don't look at the 200 files you aren't currently editing and ask whether those files should exist at all.

There's a gap between reviewing every change and reviewing the whole thing. That gap has existed since version control was invented, because the only way to fill it was to hire someone expensive to read through the entire codebase. And nobody does that.


So we built a tool that reads the whole book.

VibeRails is a desktop application that orchestrates frontier AI models to perform full-codebase code review. Not a diff. Not a rule check. A review of every file in your project, evaluated as a coherent whole.

The workflow is four steps:

  1. Point. Add your project directory. VibeRails scans the file tree locally and prepares a review scope.
  2. Review. VibeRails orchestrates Claude Code or Codex CLI to read every file, accumulate context, and identify issues across 17 detection categories – from security vulnerabilities and performance bottlenecks to dead code, complexity hotspots, and architectural inconsistencies.
  3. Triage. Findings are presented one at a time with full code context. Accept, reject, or defer each finding using keyboard shortcuts. Your engineering judgement shapes the remediation plan.
  4. Fix. For accepted findings, dispatch AI fix sessions that implement changes directly in your local repository. Review the diff, test, commit or revert. You stay in control.

The result is a structured, prioritised inventory of everything in the codebase that needs attention, produced by an AI that actually read every file.
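The triage and fix steps map naturally onto a small data model. As a minimal sketch (all names here are hypothetical illustrations, not VibeRails' actual internals), assume each finding carries a file, a detection category, a severity, and the verdict you assign during triage; the remediation plan is then just the accepted findings, ordered by severity:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Verdict(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    DEFER = "defer"

@dataclass
class Finding:
    file: str
    category: str            # e.g. "security", "dead-code", "consistency"
    severity: int            # 1 (low) .. 5 (critical)
    summary: str
    verdict: Optional[Verdict] = None   # set during triage

def remediation_plan(findings: list[Finding]) -> list[Finding]:
    """Accepted findings only, highest severity first."""
    accepted = [f for f in findings if f.verdict is Verdict.ACCEPT]
    return sorted(accepted, key=lambda f: f.severity, reverse=True)

findings = [
    Finding("auth/login.py", "security", 5, "token compared with ==", Verdict.ACCEPT),
    Finding("utils/old.py", "dead-code", 2, "module unreferenced", Verdict.DEFER),
    Finding("api/errors.py", "consistency", 3, "exceptions swallowed", Verdict.ACCEPT),
]

plan = remediation_plan(findings)
print([f.file for f in plan])  # most severe accepted finding first
```

Deferred and rejected findings stay in the inventory but out of the plan, which is why the triage pass before scheduling work matters.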


Two things that are different

Most AI developer tools work the same way: you sign up for a hosted vendor platform, connect a repo, send them your code, and pay a recurring fee. VibeRails takes a different approach: it runs as a desktop app and orchestrates the AI tooling you already use.

First: you bring your own AI. VibeRails orchestrates your existing Claude Code and Codex CLI installations. Your API keys stay on your machine, and your existing AI subscriptions pay for the compute. When you run sessions, relevant code is sent directly from your machine to the AI provider configured in those tools. VibeRails does not run a cloud backend or proxy your requests, and we don't receive or store your source code. This is the BYOK model – Bring Your Own Key.
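Mechanically, BYOK means the app is a thin orchestrator around a CLI you already have installed. A hypothetical sketch (the exact command shapes are assumptions for illustration, not VibeRails' actual invocations): the app builds a local command line per review task, and the installed tool supplies its own credentials — no key ever passes through the orchestrator.

```python
def build_review_command(backend: str, prompt: str) -> list[str]:
    """Build a local CLI invocation. No key handling here: the
    installed tool reads its own credentials from your machine."""
    if backend == "claude":
        # non-interactive "print" mode (illustrative usage)
        return ["claude", "-p", prompt]
    if backend == "codex":
        return ["codex", "exec", prompt]
    raise ValueError(f"unknown backend: {backend}")

cmd = build_review_command("claude", "Review src/auth/ for security issues.")
print(cmd)
# The orchestrator would then run this with subprocess.run(cmd, ...);
# code and responses flow directly between your machine and the
# provider configured in that tool.
```

The design consequence is the privacy claim in the paragraph above: because the orchestrator only shells out locally, there is no vendor backend to proxy or store your source.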

Second: the pricing reflects that. Because we don't pay for your AI usage, there's no AI markup baked into the licence. VibeRails is $299 per developer for a lifetime licence, or $19/mo per developer on the monthly plan. One licence per machine. There's also a free tier: up to 5 issues per review session, no signup, no credit card.


Read the whole book

You've been reviewing every chapter as it gets written. That's necessary. But it doesn't tell you whether the book makes sense.

VibeRails reads the whole book. Download it, point it at your codebase, and find out what's been hiding in plain sight.


Limits and tradeoffs

  • It can miss context. Treat findings as prompts for investigation, not verdicts.
  • False positives happen. Plan a quick triage pass before you schedule work.
  • Privacy depends on your model setup. If you use a cloud model, relevant code is sent to that provider; local models can keep inference on your own hardware.