AI Code Review for Django Codebases

Django's conventions make the first version fast to build, but they hide complexity that compounds over time. VibeRails reads your entire Django project and finds N+1 queries, ORM misuse, security gaps, migration conflicts, and architectural drift across every app.

How Django projects accumulate hidden debt

Django's "batteries included" philosophy provides a structured path from idea to deployed application. Models define the schema. Views handle requests. Templates render responses. The ORM abstracts SQL. But as projects grow beyond a handful of apps, the conventions that made Django productive start to obscure the real complexity underneath.

N+1 query problems are the most common performance issue in Django projects, and they are almost entirely invisible without deliberate profiling. A template that iterates over a queryset and accesses a related object on each iteration generates one query for the list and one additional query per item. The code reads cleanly – order.customer.name – but the database executes hundreds of queries where one with a select_related call would suffice. These patterns multiply across views, serialisers, and management commands.
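The pattern can be made concrete without a full Django project. The sketch below uses Python's standard sqlite3 module with an invented orders/customers schema: the loop issues one query per order, exactly what a template iteration over a lazy related object does, while the JOIN is the shape select_related produces.

```python
import sqlite3

# Minimal sketch of the N+1 pattern using sqlite3; the orders/customers
# schema is illustrative, not from any real project.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1), (11, 2), (12, 1);
""")

# N+1: one query for the list, then one extra query per order.
queries = 0
orders = conn.execute("SELECT id, customer_id FROM orders").fetchall()
queries += 1
names = []
for _, customer_id in orders:
    row = conn.execute(
        "SELECT name FROM customers WHERE id = ?", (customer_id,)
    ).fetchone()
    queries += 1
    names.append(row[0])
print(queries)  # 4 queries for just 3 orders

# The JOIN equivalent of select_related: one query total.
joined = conn.execute("""
    SELECT o.id, c.name FROM orders o
    JOIN customers c ON c.id = o.customer_id
""").fetchall()
print(len(joined))  # 3
```

In Django, Order.objects.select_related("customer") generates the JOIN form, so order.customer.name in the loop no longer touches the database.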

Model validation is another area where Django projects diverge from their own principles. Django provides model-level validators, form validation, and serialiser validation, but many projects apply them inconsistently. A model might enforce a constraint at the database level with unique_together but not at the form level, producing unhelpful error messages. Or a view might validate manually with if statements, bypassing the model's own validation entirely.
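The difference in user experience is easy to demonstrate. This sketch uses sqlite3 directly with an invented accounts table; it mirrors a Django model that declares unique_together with no matching form-level check, so the user sees a raw IntegrityError instead of an actionable message.

```python
import sqlite3

# Constraint enforced only at the database level; the accounts table
# is illustrative. Analogous to unique_together with no form check.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (email TEXT, tenant INTEGER, UNIQUE (email, tenant))"
)
conn.execute("INSERT INTO accounts VALUES ('a@example.com', 1)")

# Database-level only: the failure surfaces as a terse driver error.
try:
    conn.execute("INSERT INTO accounts VALUES ('a@example.com', 1)")
except sqlite3.IntegrityError as exc:
    db_error = str(exc)  # e.g. "UNIQUE constraint failed: ..."

# Form-level check first: same rule, but a message a user can act on.
def validate_unique(conn, email, tenant):
    row = conn.execute(
        "SELECT 1 FROM accounts WHERE email = ? AND tenant = ?",
        (email, tenant),
    ).fetchone()
    if row:
        return "An account with this email already exists for this tenant."
    return None

message = validate_unique(conn, "a@example.com", 1)
```

In Django terms, the second half is what a form's clean() or a serialiser's validate() provides; keeping the database constraint as well guards against race conditions the form check cannot catch.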

Settings drift is a less visible but equally damaging pattern. Django projects typically maintain separate settings files for development, staging, and production. Over time, these diverge: a middleware added to production but not development, a cache backend configured differently, a logging level that masks errors in staging. Migration conflicts compound the problem – when two developers create migrations on the same model simultaneously, the merge migration often papers over schema inconsistencies rather than resolving them.
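A rough way to see this drift is to diff the environments' settings as data. The sketch below uses standard Django setting names but invented values; a real comparison would import each settings module and compare its uppercase attributes.

```python
# Minimal sketch of detecting settings drift by diffing two
# environments' settings as dictionaries. Keys are standard Django
# setting names; the values are invented for illustration.
dev = {
    "MIDDLEWARE": ["security", "sessions", "common"],
    "CACHES_BACKEND": "locmem",
    "LOG_LEVEL": "DEBUG",
}
prod = {
    "MIDDLEWARE": ["security", "sessions", "common", "ratelimit"],
    "CACHES_BACKEND": "redis",
    "LOG_LEVEL": "WARNING",
}

def settings_drift(a, b):
    """Return the keys whose values differ between two environments."""
    return sorted(k for k in a.keys() | b.keys() if a.get(k) != b.get(k))

print(settings_drift(dev, prod))
# ['CACHES_BACKEND', 'LOG_LEVEL', 'MIDDLEWARE']
```

Some differences are intentional (DEBUG, cache backends), so a drift report is a starting point for review rather than a list of bugs.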

What Django-specific tools miss

Django has a strong ecosystem of analysis tools. django-debug-toolbar shows query counts per request during development. flake8 and pylint enforce coding style. bandit scans for common security patterns. mypy with django-stubs adds type checking. But each tool operates in isolation, and none can reason about the project's architecture as a whole.

Consider a Django project with fifteen apps. Each app has models, views, serialisers, and URL configurations. Some apps follow REST conventions with Django REST Framework. Others use traditional function-based views with template rendering. A few have custom middleware. No single tool can evaluate whether the boundaries between apps are correct, whether business logic has leaked from models into views or templates, or whether the authentication and permission patterns are consistent across all endpoints.

Raw SQL injection is another gap. bandit can flag obvious cursor.execute calls with string formatting, but it cannot trace a value from a request parameter through a service function and into a raw query two modules away. It also cannot distinguish between raw SQL that is parameterised safely and raw SQL that concatenates user input. That distinction requires reading the code in context, which is what code review does.
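The distinction is easy to show with the standard sqlite3 module and an invented users table: both calls below are raw cursor.execute statements, but only the concatenated one is injectable.

```python
import sqlite3

# Sketch of the distinction a pattern-matcher cannot make: both calls
# run raw SQL, but only one is injectable. Table and rows are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "staff")])

user_input = "nobody' OR '1'='1"  # hostile request parameter

# Unsafe: string concatenation lets the input rewrite the query.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(len(unsafe))  # 2 -- the OR clause matched every row

# Safe: a parameter placeholder treats the input as a literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(safe))  # 0 -- no user is literally named that
```

Django's database cursors take parameters the same DB-API way (most backends use %s placeholders rather than ?), so the safe form carries over directly.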

Template logic is a subtler concern. Django templates were designed to be deliberately limited, but projects frequently work around those limits with custom template tags, complex filter chains, and conditional blocks that encode business rules. When business logic lives in templates, it cannot be unit tested, and it is invisible to Python-level static analysis tools.

How VibeRails reviews Django projects

VibeRails performs a full-codebase scan using frontier large language models. Every Python file, template, migration, configuration file, and test suite is analysed. The AI reads each module and reasons about its purpose, data access patterns, security posture, and relationship to the rest of the project.

For Django codebases specifically, the review covers:

  • N+1 queries – querysets that access related objects without select_related or prefetch_related, nested loops over querysets, template iterations that trigger lazy loading, serialisers that cause additional queries per object
  • ORM misuse – raw SQL where ORM queries would suffice, .all() calls without filtering or pagination, aggregation done in Python instead of the database, queryset evaluation in loops instead of bulk operations
  • Security patterns – raw SQL with string interpolation, missing CSRF protection on state-changing views, overly permissive ALLOWED_HOSTS, exposed debug settings, hardcoded secrets in settings files, missing authentication on API endpoints
  • Model and validation consistency – constraints enforced at the database level but not at the form or serialiser level, inconsistent use of clean() methods, model fields without appropriate validators, blank=True and null=True used interchangeably on string fields
  • Migration and settings drift – conflicting migrations on the same model, settings that diverge across environments, middleware ordering inconsistencies, apps registered in one environment but not another

Each finding includes the file path, line range, severity, category, and a detailed description with suggested remediation. Findings are organised into 17 categories so teams can prioritise by the area of concern most relevant to their project.

Cross-app analysis for Django architecture

Django projects are structured as collections of apps, but the boundaries between apps are often arbitrary or poorly maintained. Models in one app import from another. Views bypass service layers and query models directly. URL routing spreads business rules across urls.py files in every app.

VibeRails supports dual-model verification – running reviews with both Claude Code and Codex CLI in sequence. The first model discovers issues. The second model verifies them using a different architecture. When both models independently flag the same finding, teams can triage with confidence. When they disagree, the finding warrants closer human review.

This is particularly valuable for Django because many architectural concerns are matters of degree. A model with fifteen fields might be well-designed if it represents a complex domain entity. A view that queries three models might be acceptable for a dashboard. Cross-validation helps distinguish genuine architectural issues from acceptable design choices that reflect the domain complexity.

From findings to fixes in your Django project

After triaging findings, VibeRails can dispatch AI agents to implement fixes directly in your local repository. For Django projects, this typically means adding select_related and prefetch_related to querysets, parameterising raw SQL, consolidating validation logic, fixing migration conflicts, and aligning settings across environments.

Each fix is generated as a local code change you can inspect, test, and commit or discard. The AI works within the conventions of your existing codebase, matching your project's app structure, naming patterns, and framework idioms.

VibeRails runs as a desktop app with a BYOK model – it orchestrates Claude Code or Codex CLI installations you already have. No code is uploaded to VibeRails servers. AI analysis is sent directly to the provider you configured, billed to your existing subscription. A license costs $299 per developer for lifetime access, or $19/mo on the monthly plan. The free tier includes 5 issues per session to evaluate the workflow.

Download for free  View pricing