FastAPI makes building async Python APIs deceptively simple. But async/await patterns that look correct can silently block the event loop, dependency injection graphs can become untraceable, and Pydantic models can drift from reality. VibeRails reads your entire FastAPI application and finds the framework-specific issues that accumulate as the API surface grows.
FastAPI's async-first design is its greatest strength and its most common source of
hidden bugs. An endpoint declared with async def runs on the event loop, which
means any blocking operation inside it – a synchronous database query, a file system
read, a CPU-intensive computation – blocks every other request being processed
concurrently. The application continues to accept connections, but response times degrade
across all endpoints because the event loop is stalled.
Python does not distinguish between async-safe and blocking calls at the type level.
Calling requests.get() inside an async def endpoint compiles and
runs without warning, but it blocks the event loop for the duration of the HTTP request.
The correct approach is httpx.AsyncClient or asyncio.to_thread.
In a codebase with dozens of endpoints, these blocking calls are scattered and difficult to
find by grepping alone, because the blocking function might be called through several layers
of indirection.
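The difference is easiest to see side by side. Below is a minimal sketch – fetch_sync and the URL are hypothetical stand-ins for any blocking call such as requests.get():

```python
import asyncio

def fetch_sync(url: str) -> str:
    # A synchronous function; calling it directly inside an async def
    # endpoint would stall the event loop for its full duration.
    return f"payload from {url}"

async def blocking_endpoint() -> str:
    # BAD: runs fetch_sync on the event loop thread, pausing every
    # other request currently in flight.
    return fetch_sync("https://example.com")

async def offloaded_endpoint() -> str:
    # OK: asyncio.to_thread moves the blocking call to a worker thread,
    # freeing the event loop to serve other requests meanwhile.
    return await asyncio.to_thread(fetch_sync, "https://example.com")

print(asyncio.run(offloaded_endpoint()))
```

For outbound HTTP specifically, a natively async client such as httpx.AsyncClient avoids the thread hop entirely; asyncio.to_thread is the general escape hatch for code that cannot be made async.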
Database access is where this pattern bites hardest. SQLAlchemy's async engine requires
AsyncSession and careful use of await on every query operation. A
codebase that migrated from synchronous SQLAlchemy often contains a mixture of sync and async
session usage, with some queries executing correctly on the async engine and others silently
falling back to blocking behaviour. These mixed patterns create performance problems that only
manifest under concurrent load, making them invisible during development and early testing.
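A small self-contained illustration of why the problem only appears under load – no real database here, just sleeps standing in for a blocking driver call versus an awaited AsyncSession query:

```python
import asyncio
import time

def sync_query() -> None:
    time.sleep(0.05)  # stands in for a blocking database driver call

async def async_query() -> None:
    await asyncio.sleep(0.05)  # stands in for an awaited async query

async def handle(use_async: bool) -> None:
    if use_async:
        await async_query()
    else:
        sync_query()  # blocks the event loop

async def measure(use_async: bool) -> float:
    # Ten "concurrent requests": the async variant overlaps its waits,
    # the sync variant serialises them on the event loop.
    start = time.perf_counter()
    await asyncio.gather(*(handle(use_async) for _ in range(10)))
    return time.perf_counter() - start

blocked = asyncio.run(measure(False))  # roughly 10 x 0.05s
awaited = asyncio.run(measure(True))   # roughly 0.05s total
print(f"sync {blocked:.2f}s vs async {awaited:.2f}s")
```

With a single request both versions take about 50 ms, which is why the mixed pattern passes development and early testing unnoticed.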
FastAPI's dependency injection system is elegant for small applications. An endpoint declares its dependencies as function parameters, and FastAPI resolves the dependency graph at request time. But as the application grows, the dependency graph becomes a hidden layer of architecture that no file explicitly defines. A database session depends on a connection pool, an authentication dependency depends on a token decoder that depends on a configuration object, and a rate limiter depends on a Redis client that depends on connection settings.
The problem is not that these dependencies exist, but that their relationships are implicit. There is no single file where the entire dependency graph is visible. A change to a low-level dependency – switching from an in-memory cache to Redis, for instance – requires tracing every endpoint that transitively depends on it. FastAPI does not provide tooling to visualise or validate the dependency graph, so developers must hold the entire structure in their heads or discover breakages at runtime.
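As an illustration (every name here is hypothetical), the same three-level chain can be written as plain functions, which makes the hidden graph explicit in a way Depends() wiring never is:

```python
from dataclasses import dataclass

@dataclass
class Settings:
    secret_key: str = "dev-secret"

def get_settings() -> Settings:
    # Lowest level: configuration object.
    return Settings()

def get_token_decoder(settings: Settings):
    # Middle level: depends on configuration.
    def decode(token: str) -> dict:
        return {"sub": token, "key": settings.secret_key}
    return decode

def get_current_user(decode) -> dict:
    # Top level: depends on the decoder, which depends on settings.
    return decode("user-123")

# Resolving the chain by hand exposes three levels that, in a real
# FastAPI app, would be scattered across Depends() declarations:
user = get_current_user(get_token_decoder(get_settings()))
print(user["sub"])
```

Changing Settings here visibly ripples through both downstream functions; in a Depends()-based app the same ripple exists but no single file shows it.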
Dependency overrides for testing introduce another layer of risk. FastAPI allows overriding
any dependency via app.dependency_overrides, which is essential for testing but
creates a parallel dependency graph that can diverge from production. A test that overrides
the database session dependency might pass even though the real session has been configured
differently. When the override map grows large, the test environment becomes a separate
application that happens to share routes with the production one.
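The divergence can be sketched with a plain dict standing in for app.dependency_overrides – the session shapes below are invented for illustration:

```python
def get_db_session() -> dict:
    # Production dependency: imagine an AsyncSession bound to a pooled
    # engine with autocommit disabled.
    return {"backend": "postgres", "autocommit": False}

def fake_db_session() -> dict:
    # Test override: subtly diverges from the production configuration.
    return {"backend": "sqlite", "autocommit": True}

dependency_overrides = {get_db_session: fake_db_session}

def resolve(dep):
    # FastAPI consults the override map before calling the real dependency.
    return dependency_overrides.get(dep, dep)()

session = resolve(get_db_session)
print(session["backend"])  # tests only ever exercise the sqlite shape
```

Every behavioural difference between the two session factories – here the autocommit flag – is invisible to any test that resolves through the override map.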
Pydantic models are the backbone of FastAPI's request validation and response serialisation.
When they are well maintained, they provide type safety and automatic documentation. When they
drift, they create a false sense of security. A response model that was defined when an endpoint
returned five fields might still be used after the endpoint was modified to return eight. The
additional fields are silently stripped from the response. A request model that uses
Optional fields with default values might accept payloads that the business logic
cannot actually handle, passing validation but failing downstream.
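The stripping behaviour can be reproduced without Pydantic at all – a fixed field set stands in for an outdated response_model, and the field names are illustrative:

```python
# The response model was defined when the endpoint returned five fields.
RESPONSE_FIELDS = {"id", "name", "email", "created_at", "status"}

def endpoint_payload() -> dict:
    # The endpoint later grew to return eight fields...
    return {
        "id": 1, "name": "Ada", "email": "ada@example.com",
        "created_at": "2024-01-01", "status": "active",
        "plan": "pro", "quota": 100, "region": "eu-west",  # silently lost
    }

def serialize(payload: dict) -> dict:
    # ...but serialisation still filters through the old field set,
    # as a stale response_model would.
    return {k: v for k, v in payload.items() if k in RESPONSE_FIELDS}

out = serialize(endpoint_payload())
print(sorted(endpoint_payload().keys() - out.keys()))  # the stripped fields
```

No error is raised at any point – the three newer fields simply never reach the client.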
Model inheritance and composition add further complexity. A base model shared across multiple
endpoints accumulates fields from every consumer, growing into a monolithic schema that no
single endpoint needs in its entirety. Pydantic v1 to v2 migration introduces another
dimension of drift – orm_mode becomes from_attributes under model_config,
validator becomes field_validator, and subtle behavioural changes
in coercion rules can alter how incoming data is parsed without producing errors.
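For reference, a minimal v2 model showing both renamed surfaces, with the v1 spellings in comments (assumes Pydantic v2 is installed; the model is illustrative):

```python
from pydantic import BaseModel, ConfigDict, field_validator

class User(BaseModel):
    # v1:  class Config: orm_mode = True
    model_config = ConfigDict(from_attributes=True)

    name: str

    # v1:  @validator("name")
    @field_validator("name")
    @classmethod
    def strip_name(cls, v: str) -> str:
        return v.strip()

print(User(name="  Ada  ").name)
```

A codebase mid-migration often contains both spellings, and Pydantic v2 raises on the v1 forms only in some configurations, so the drift can persist silently.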
CORS configuration is a persistent source of security misconfigurations. During development,
teams often set allow_origins=["*"] to avoid cross-origin errors, and this
wildcard configuration ships to production. Even when origins are restricted, combining
allow_credentials=True with a broad origin list creates a vulnerability. Middleware
ordering matters as well: a CORS middleware placed after an authentication middleware may not
handle preflight requests correctly, causing browsers to reject legitimate requests without
any server-side error log.
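A simplified sketch of origin reflection (not the real Starlette CORSMiddleware; the origins are hypothetical) shows why credentials plus a broad allowlist is dangerous:

```python
def cors_headers(origin: str, allowed: set, allow_credentials: bool) -> dict:
    # Reflect the requesting origin when it is allowed; with credentials
    # enabled, the browser will attach cookies to cross-origin requests.
    headers = {}
    if "*" in allowed or origin in allowed:
        headers["Access-Control-Allow-Origin"] = origin
        if allow_credentials:
            headers["Access-Control-Allow-Credentials"] = "true"
    return headers

strict = cors_headers("https://evil.example", {"https://app.example.com"}, True)
broad = cors_headers("https://evil.example", {"*"}, True)
print(strict)  # unknown origin gets no CORS headers at all
print(broad)   # arbitrary origin granted credentialed access
```

With the wildcard set, any attacker-controlled page can make authenticated requests on a logged-in user's behalf, which is exactly the combination the CORS specification forbids.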
Background tasks present lifecycle management challenges. A BackgroundTask that
accesses a database session after the response has been sent may find the session already
closed. The newer lifespan pattern requires different handling from the deprecated
on_event decorator, and codebases that have evolved across FastAPI versions
often contain both patterns with unclear ownership of resource lifecycle.
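The lifespan pattern can be sketched without FastAPI itself (the events list and pool comments are illustrative); in a real app the generator is passed as FastAPI(lifespan=...):

```python
import asyncio
from contextlib import asynccontextmanager

events = []

@asynccontextmanager
async def lifespan(app=None):
    # Startup: everything before yield replaces @app.on_event("startup"),
    # e.g. opening a connection pool.
    events.append("pool opened")
    yield
    # Shutdown: everything after yield replaces @app.on_event("shutdown").
    events.append("pool closed")

async def serve_one_request() -> None:
    async with lifespan():
        events.append("request handled")

asyncio.run(serve_one_request())
print(events)
```

Keeping setup and teardown in one function gives clear ownership of the resource lifecycle, which the paired on_event decorators never enforced.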
VibeRails performs a full-codebase scan using frontier large language models. Every Python file, Pydantic model, configuration module, test file, and deployment script is analysed – not just recent diffs, but the entire application including background workers, migration scripts, and infrastructure configuration.
For FastAPI codebases specifically, the AI reasons about:
- Blocking operations: synchronous I/O inside async def endpoints, synchronous database access on the async event loop, CPU-bound operations that should be offloaded to thread pools or process pools, and mixed sync/async session patterns after SQLAlchemy migration
- Pydantic model drift: Optional fields that mask required business logic, v1-to-v2 migration inconsistencies, and monolithic base models shared across unrelated endpoints
- Lifecycle hazards: deprecated on_event patterns, and background tasks that swallow exceptions without logging

Each finding includes the file path, line range, severity, category, and a detailed description explaining why the pattern is problematic and how to address it. Findings are organised into 17 categories so teams can filter and prioritise by area of concern.
The most valuable findings in a FastAPI review span multiple layers. A blocking database call might be hidden behind three layers of utility functions. A dependency injection issue might only manifest when two specific endpoints are called concurrently. A Pydantic model that silently strips fields might cause data loss that only surfaces in a downstream service that expects those fields.
VibeRails supports running reviews with two different AI backends – Claude Code and Codex CLI – in sequence. The first pass discovers issues, the second verifies them using a different model architecture. When both models independently flag the same blocking call or CORS misconfiguration, confidence is high. Disagreements highlight areas where the pattern may be intentional – a synchronous call in an endpoint that is never called concurrently, or a wildcard CORS configuration on an internal-only API.
After triaging findings, VibeRails can dispatch AI agents to implement fixes directly in your local repository. For FastAPI projects, this typically means replacing blocking calls with async equivalents, restructuring dependency graphs for clarity, updating Pydantic models to match actual data contracts, restricting CORS configuration, adding proper lifecycle management for background resources, and migrating deprecated patterns to current FastAPI idioms.
Each fix is generated as a local code change you can inspect, test, and commit or discard. The AI works within the conventions of your existing codebase, matching your project's directory structure, naming conventions, and testing framework – whether you use pytest with httpx, async test clients, or factory-based test data.
VibeRails runs as a desktop app with a BYOK model – it orchestrates Claude Code or Codex CLI installations you already have. No code is uploaded to VibeRails servers. AI analysis is sent directly to the provider you configured, billed to your existing subscription. Licensing is $299 per developer for lifetime access, or $19/month. The free tier includes 5 issues per session to evaluate the workflow.
Tell us about your team and goals. We'll respond with a concrete rollout plan.