CMMC 2.0 and AI Code Review for Defense Contractors

If your development workflows involve CUI, the AI tools in those workflows need to fit within your boundary. Local inference can simplify scoping and evidence generation for C3PAO assessments.


If you build software under Department of Defense contracts, the Cybersecurity Maturity Model Certification is changing how development teams operate. As CMMC requirements show up in contracts and assessment expectations mature, every tool in your development pipeline (including AI code review tools) needs to fit within your compliance boundary. (Validate current timelines and contract clauses with your compliance team; this article is not legal advice.)

Most engineering teams at defense contractors are already familiar with NIST SP 800-171. What many have not yet worked through is how AI-powered development tools fit into that framework. When an AI tool processes your source code, where does that processing happen? Who has access? What audit trail exists? These questions matter for C3PAO assessors, and the answers determine whether your AI tools help or hinder your certification timeline.


CMMC 2.0 basics for development teams

The Cybersecurity Maturity Model Certification is the DoD's framework for ensuring that contractors adequately protect Controlled Unclassified Information. CMMC 2.0 consolidates the original five levels into three.

Level 1 (Foundational) covers basic safeguarding of Federal Contract Information (FCI), based on the 15 basic safeguarding requirements in FAR 52.204-21. This level requires annual self-assessment. Most contractors already meet Level 1 requirements.

Level 2 (Advanced) requires implementation of all 110 security requirements from NIST SP 800-171 Revision 2. This is the level that applies to contractors handling CUI. On the emerging CMMC timeline, some Level 2 contracts are expected to require certification by an accredited C3PAO rather than self-assessment. The assessor will evaluate whether your organisation has implemented each of the 110 controls and whether those controls are operating effectively.

Level 3 (Expert) adds requirements from NIST SP 800-172 for protection against advanced persistent threats. This level applies to the most sensitive programs and requires government-led assessment.

For software development teams, CMMC affects how source code is handled, where it is processed, who has access, and what audit trails exist. If your project involves CUI, every system that touches that data – including your development tools – must operate within the scope of your CMMC assessment.


Where AI code review tools intersect with CUI requirements

Here is the question that many development teams have not fully considered: if your software project involves CUI, is the source code itself CUI?

In many cases, yes. Source code for CUI-handling systems frequently contains or reveals information about how CUI is processed, stored, and transmitted. Database schemas define the structure of controlled data. Configuration files reference CUI data stores. Business logic implements CUI handling procedures. Even when the source code does not contain CUI directly, some programs may treat source code for CUI-handling systems as CUI. Validate markings and handling requirements for your specific contracts and systems.

NIST SP 800-171 requires controls across multiple families that directly affect how AI tools can process your code:

Access Control (AC): Only authorised personnel should have access to CUI. When your source code is sent to a cloud AI service, the personnel at that service provider – and potentially their sub-processors – have access to your code. Can you demonstrate to a C3PAO assessor that access is limited to authorised individuals?

Audit and Accountability (AU): You must maintain audit logs of who accessed CUI and what actions were taken. When code is processed by a third-party AI service, do you have visibility into their access logs? Can you produce those logs for an assessor?

Configuration Management (CM): Baseline configurations must be established and maintained. If your AI tool is a cloud service, you have limited control over the configuration of the processing environment.

Media Protection (MP): CUI on digital media must be protected. When your code transits to a cloud provider, it exists on their media. Is that media within your protection boundary?

System and Communications Protection (SC): CUI must be protected during transmission. Encrypted connections to cloud APIs address transit security, but the data is decrypted and processed on the provider's infrastructure – outside your system boundary.

Many commercial AI APIs are general-purpose services. Depending on the provider, plan, and contract terms, you may not have the documentation, contractual commitments, or architectural transparency a C3PAO assessor will expect for in-scope CUI processing. Treat external AI services as a boundary decision: if you cannot document where the data goes and who can access it, assume it will be difficult to defend in an assessment.


Why local processing is the simplest path to compliance

Instead of trying to certify that a third-party AI provider's infrastructure meets every applicable NIST 800-171 control, there is a simpler approach: run inference locally, so CUI processing stays within your existing authorisation boundary.

With VibeRails configured to use a local model server – running an open-weight model on your own hardware – the entire code review workflow stays within your controlled environment. The compliance narrative becomes straightforward because the CUI never transits to an external system.

Access Control: Only authorised personnel on your network can run the review. Access is governed by your existing identity and access management controls, which you already document for CMMC.

Audit and Accountability: VibeRails stores review sessions locally. Where you need an audit trail, you can treat the review reports and session artifacts as evidence inputs to your existing logging and governance process.

Media Protection: Source code and review data can remain on controlled media within your physical and logical boundary. This reduces the need to evaluate third-party storage for the inference step.

System and Communications Protection: When the model server runs on localhost or within your private network and egress is restricted, there is no external transmission path for CUI during inference. This can materially reduce the exfiltration surface area.

Configuration Management: You control the model server configuration, the model weights, and the review tool configuration. Baselines are established and maintained by your team.

This approach does not eliminate the need for CMMC controls – your local environment still needs to satisfy all 110 requirements. But it eliminates the need to extend your compliance boundary to include a third-party AI provider's infrastructure, which is where most compliance efforts stall.


Network-isolated VPC architecture for CMMC Level 2

For organisations that need cloud-based GPU compute but cannot send CUI to commercial AI APIs, a network-isolated VPC architecture in AWS GovCloud can be a pragmatic path.

The architecture is straightforward:

AWS GovCloud with a private VPC. No internet gateway. No NAT gateway. The VPC has no route to the public internet. All traffic stays within the GovCloud region.

GPU instances for model inference. P5 instances with H100 GPUs are available in us-gov-west-1 (confirm current instance availability for your region). These provide sufficient compute to run 70B+ parameter models for code review workloads. Instance provisioning and management use AWS Systems Manager (SSM) through a VPC endpoint – no SSH over the internet.

Model weights loaded via S3 VPC endpoint. The model weights are stored in an S3 bucket within GovCloud and accessed through a VPC endpoint. The weights never transit the public internet.

VibeRails connects to the local model server. VibeRails is configured to point at the model server's private IP address within the VPC. The review workflow is identical to using a cloud API, but the entire processing chain stays within the GovCloud boundary.

AWS GovCloud has compliance and authorization characteristics that may be relevant for some CMMC-scoped environments, but whether it satisfies your CMMC boundary depends on your architecture, contracts, and assessor expectations. The key technical goal here is to keep code processing within your defined boundary and remove paths to the public internet.
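The "no route to the public internet" property can be audited as well as asserted. The sketch below assumes route tables have already been fetched (for example via the EC2 DescribeRouteTables API) and operates on dicts in that response shape; it is an illustrative check, not an official AWS audit tool.

```python
def find_public_routes(route_tables: list[dict]) -> list[str]:
    """Scan route tables (in EC2 DescribeRouteTables response shape)
    and describe any route pointing at an internet gateway (igw-*)
    or NAT gateway (nat-*). An isolated VPC should yield an empty
    list: only 'local' and VPC-endpoint routes are expected.
    """
    violations = []
    for table in route_tables:
        table_id = table.get("RouteTableId", "unknown")
        for route in table.get("Routes", []):
            gateway = route.get("GatewayId") or ""
            nat = route.get("NatGatewayId") or ""
            if gateway.startswith("igw-") or nat.startswith("nat-"):
                dest = route.get("DestinationCidrBlock", "?")
                violations.append(f"{table_id}: {dest} -> {gateway or nat}")
    return violations
```

Running a check like this on a schedule, and archiving the empty result, turns the isolation claim into recurring evidence an assessor can inspect.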


Evidence generation: how review reports support audit trail requirements

CMMC assessment is not just about having controls in place. A C3PAO assessor needs to see evidence that controls are implemented, operational, and effective. This is where structured code review reports become valuable compliance artifacts.

VibeRails generates structured, timestamped review reports with categorised findings across 17 detection categories. Several of these categories are directly relevant to security compliance:

Security findings identify vulnerabilities, hardcoded credentials, improper authentication patterns, and other security issues in your codebase.

Error handling findings identify places where errors are silently swallowed or improperly handled – conditions that can lead to security-relevant failures.

Configuration findings identify misconfigurations that could affect the security posture of your application.

These reports serve as evidence artifacts in three ways. First, they demonstrate that code review is happening systematically and regularly, not ad hoc. Second, they document specific security findings and their severity, showing that your organisation actively identifies security issues. Third, they create a timeline of findings and remediation that demonstrates continuous improvement – something C3PAO assessors look for when evaluating whether controls are operating effectively.

The reports are stored locally as JSON files. They can be exported, archived, and integrated into whatever audit management or GRC (governance, risk, and compliance) platform your organisation uses. Because the reports are generated locally, there is no concern about report data transiting to a third party.
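Because the reports are plain JSON on local disk, folding them into an evidence index is a small scripting task. The field names below (`timestamp`, `findings`, `severity`) are assumptions for illustration; map them to the schema your tool actually emits.

```python
import json
from pathlib import Path

def build_evidence_index(report_dir: str) -> list[dict]:
    """Summarise each JSON review report in a directory into one
    evidence-index row: when the review ran and how many findings
    it produced per severity.

    Field names ('timestamp', 'findings', 'severity') are assumed
    for illustration; adapt them to the real report schema.
    """
    index = []
    for path in sorted(Path(report_dir).glob("*.json")):
        report = json.loads(path.read_text())
        counts: dict[str, int] = {}
        for finding in report.get("findings", []):
            sev = finding.get("severity", "unknown")
            counts[sev] = counts.get(sev, 0) + 1
        index.append({
            "file": path.name,
            "timestamp": report.get("timestamp"),
            "finding_counts": counts,
        })
    return index
```

The resulting rows can be pushed to a GRC platform or simply archived alongside the raw reports as a human-readable summary for the assessor.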


The 2026 timeline pressure

As of February 2026, third-party assessment requirements are beginning to appear in contracts, and many contractors are working toward a near-term certification window. That window sounds comfortable. It is tighter than it looks.

C3PAO assessments take time to schedule. The pool of accredited assessors is still ramping up. Remediation of assessment findings takes time. And the assessment itself evaluates not just whether controls exist, but whether they have been operating effectively – which means you need to have controls in place and generating evidence well before the assessment date.

If your development workflows involve CUI and you currently use AI tools in those workflows – whether for code review, code generation, or code completion – now is the time to evaluate whether those tools meet the applicable security requirements. Questions to answer:

Does your AI tool process CUI? If it analyses source code for CUI-handling systems, the answer is likely yes.

Where does that processing occur? Cloud APIs process code on the provider's infrastructure. Local tools process code on your infrastructure. The compliance implications are fundamentally different.

Can you document the data flow for an assessor? A C3PAO assessor will want to understand exactly where CUI goes when your AI tool processes it. If you cannot draw that data flow diagram today, you have work to do.

Do you have audit evidence? Can you show six months of structured review reports demonstrating that security-focused code review is part of your development process?
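The audit-evidence question above can be answered mechanically if each stored report carries a timestamp. The sketch below assumes ISO 8601 timestamps (as in the hypothetical report schema discussed earlier) and checks whether at least one report exists in every month of a required window:

```python
from datetime import datetime

def months_covered(timestamps: list[str]) -> set[str]:
    """Collapse ISO 8601 report timestamps into the set of
    YYYY-MM months they cover."""
    return {
        datetime.fromisoformat(ts.replace("Z", "+00:00")).strftime("%Y-%m")
        for ts in timestamps
    }

def has_continuous_coverage(timestamps: list[str],
                            required_months: list[str]) -> bool:
    """True if at least one report falls in every required month."""
    return set(required_months) <= months_covered(timestamps)
```

A gap in coverage flagged by a check like this is far cheaper to discover six months before an assessment than during one.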

Transitioning to local AI processing before the deadline is significantly easier than trying to certify cloud-based workflows after the fact. Local processing keeps CUI within your existing authorisation boundary. Cloud-based processing requires extending that boundary to encompass a third-party provider – a provider that may not have the documentation, contractual commitments, or architectural transparency that a C3PAO assessor requires.

For a detailed walkthrough of setting up local AI code review with open-weight models, see our Local AI Code Review Guide.


Limits and tradeoffs

  • AI review can miss context. Treat findings as prompts for investigation, not verdicts.
  • False positives happen. Plan a quick triage pass before you schedule work.
  • Privacy depends on your model setup. If you use a cloud model, relevant code is sent to that provider; local models can keep inference on your own hardware.