Concordance Labs · April 2026
The Team at Concordance

The 6 Compliance Questions Your SDLC Must Answer in 2026

Compliance teams have evolved from policy writers to workflow auditors. They're no longer satisfied with high-level AI ethics statements — they're asking technically specific questions about your SDLC to verify it produces evidence for the EU AI Act and CRA deadlines. If your development pipeline can't answer these questions, you have a compliance gap.

1. Can You Differentiate AI-Generated Code from Human-Written Code?

With a growing number of organisations reporting vulnerabilities introduced by AI-generated code, compliance needs to know which parts of the codebase require heightened security review. The requirement is traceability — tools that tag AI-assisted commits so they can be audited separately if a breach occurs.

This isn't about banning AI code. It's about knowing where it is. When an incident happens and regulators ask how AI was involved in the affected component, you need an answer that takes minutes, not weeks. Commit metadata, PR labels, and code provenance tracking are the practices that make this possible.
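One lightweight way to make commits auditable is a commit-message trailer that review tooling can parse. The trailer name below ("AI-Assisted") and the commit-dict shape are illustrative assumptions, not a standard — a sketch of the idea, not a definitive implementation:

```python
# Sketch: parse commit messages for an "AI-Assisted" trailer so AI-touched
# commits can be queried during an audit. Trailer name and commit format
# are illustrative assumptions.

def parse_ai_trailer(commit_message: str) -> bool:
    """Return True if the commit message declares AI assistance."""
    for line in commit_message.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "ai-assisted":
            return value.strip().lower() in {"true", "yes"}
    return False

def audit_commits(commits: list[dict]) -> list[str]:
    """Return SHAs of commits flagged as AI-assisted."""
    return [c["sha"] for c in commits if parse_ai_trailer(c["message"])]

commits = [
    {"sha": "a1b2c3", "message": "Fix auth bug\n\nAI-Assisted: true"},
    {"sha": "d4e5f6", "message": "Update docs"},
]
print(audit_commits(commits))  # ['a1b2c3']
```

Because trailers live in the commit itself, the provenance survives rebases and repository migrations — unlike labels that live only in the code host.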

2. Is There Human Sign-Off on Every AI-Assisted Pull Request?

Under the EU AI Act, high-risk systems must have human oversight. Compliance teams are checking whether your PR workflow has explicit gates that prevent AI-generated code from merging without a named human owner. The key word is "named" — a generic approval doesn't satisfy the requirement. There must be an identifiable person accountable for the review.

This maps directly to code review practice maturity. Teams with strong review practices — required approvals, review checklists, meaningful review comments — already satisfy this requirement. Teams where reviews are rubber-stamped or bypassed have a compliance gap that AI adoption makes visible.
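A merge gate for "named human approval" can be a simple CI check over the review metadata your code host returns. The review-dict fields and the "[bot]" suffix convention below are assumptions for illustration:

```python
# Sketch: require at least one APPROVED review from a named, non-bot human
# before a PR can merge. Field names mirror what a Git host API might
# return; they are illustrative assumptions.

BOT_SUFFIX = "[bot]"

def has_named_human_approval(reviews: list[dict]) -> bool:
    return any(
        r["state"] == "APPROVED"
        and r["reviewer"]
        and not r["reviewer"].endswith(BOT_SUFFIX)
        for r in reviews
    )

reviews = [
    {"reviewer": "ci-checker[bot]", "state": "APPROVED"},
    {"reviewer": "alice", "state": "APPROVED"},
]
print(has_named_human_approval(reviews))  # True
```

The point of the check is that "alice" is an identifiable, accountable person — a bot approval or an empty reviewer field fails the gate.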

3. Does Your SBOM Include AI Model Dependencies?

Compliance is looking for a Software Bill of Materials that lists not just libraries, but the specific AI models and APIs your product depends on. When the September 2026 CRA deadline requires 24-hour reporting of exploited vulnerabilities, you need to know instantly whether a compromised model or API affects your product.

Traditional SBOMs cover package dependencies. AI-aware SBOMs must also capture model versions, API endpoints, training data sources, and inference service providers. This is an extension of dependency management practices — the same discipline that tracks npm packages needs to track GPT-4o versions and embedding model endpoints.
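As a sketch of what an AI-aware SBOM can look like: CycloneDX 1.5 defines a "machine-learning-model" component type alongside ordinary libraries. The specific model name, version, and endpoint below are illustrative assumptions, not recommendations:

```python
import json

# Sketch: a CycloneDX-style SBOM that lists an AI model next to package
# dependencies, so a compromised model or endpoint can be traced to the
# products that depend on it. Model names/endpoints are illustrative.

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "requests", "version": "2.31.0"},
        {
            "type": "machine-learning-model",
            "name": "gpt-4o",
            "version": "2024-08-06",
            "properties": [
                {"name": "inference-endpoint",
                 "value": "https://api.openai.com/v1"},
            ],
        },
    ],
}

# Audit query: which AI models does this product depend on?
models = [c for c in sbom["components"]
          if c["type"] == "machine-learning-model"]
print([m["name"] for m in models])  # ['gpt-4o']
print(json.dumps(sbom)[:24])
```

With model dependencies in the same document as package dependencies, the 24-hour CRA question — "does this compromised model affect us?" — becomes a query, not an investigation.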

4. How Was the Training Data Vetted?

Compliance teams are asking about data provenance. If an AI tool was trained on copyrighted, biased, or non-representative data, the downstream legal risk attaches to everyone who deploys it. The requirement is documentation showing training data was relevant, representative, and vetted for bias against protected characteristics that could create discrimination claims.

For teams using third-party AI services, this means maintaining vendor assessments that include training data disclosures. For teams building custom models, it means data lineage documentation as a first-class practice — not an afterthought before an audit.
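"First-class practice" here means structured records, not prose in a wiki. A minimal data-lineage record might look like the following — the fields are illustrative assumptions, not a regulatory schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import date

# Sketch: a structured lineage record kept alongside a custom model, so
# provenance questions can be answered from data rather than memory.
# All fields are illustrative assumptions.

@dataclass
class DatasetRecord:
    name: str
    source: str
    license: str
    collected: date
    vetting_notes: list[str] = field(default_factory=list)

record = DatasetRecord(
    name="support-tickets-2025",
    source="internal CRM export",
    license="proprietary",
    collected=date(2025, 11, 3),
    vetting_notes=["PII scrubbed", "reviewed for representativeness"],
)
print(asdict(record)["license"])  # proprietary
```

Kept in version control next to the training code, records like this turn an audit request into a file listing.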

5. What Is the Logging Retention for Model Decisions?

For high-risk AI systems, regulations now require automatic logging of system operations. Compliance needs to verify these logs are tamper-proof and retained for the required period — typically six months or more. The logs must capture what the model was asked, what it decided, what data it used, and whether a human reviewed the decision.

This is an observability practice challenge. Teams with mature logging and monitoring practices can extend them to cover AI decision trails. Teams without them face a larger infrastructure investment to meet the retention and tamper-proofing requirements.
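One common approach to tamper evidence is hash chaining: each log entry commits to the hash of the previous entry, so any after-the-fact edit breaks verification. The entry fields below are illustrative assumptions, and retention/storage are out of scope for this sketch:

```python
import hashlib
import json

# Sketch: hash-chained decision log. Each entry embeds the previous entry's
# SHA-256, so modifying any historical record invalidates the chain.
# Entry fields (prompt, decision, human_review) are illustrative.

GENESIS = "0" * 64

def append_entry(log: list[dict], record: dict) -> None:
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    log.append({
        "prev": prev_hash,
        "record": record,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify_chain(log: list[dict]) -> bool:
    prev = GENESIS
    for entry in log:
        payload = json.dumps({"prev": prev, "record": entry["record"]},
                             sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"prompt": "approve refund?", "decision": "deny",
                   "human_review": True})
append_entry(log, {"prompt": "close account?", "decision": "escalate",
                   "human_review": True})
print(verify_chain(log))  # True
log[0]["record"]["decision"] = "allow"  # simulated tampering
print(verify_chain(log))  # False
```

In production the chain head would also be anchored somewhere the application can't write (e.g. a separate WORM store), since an attacker who can rewrite the whole log can rebuild the whole chain.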

6. Is There an Override Mechanism for Autonomous Agents?

For teams deploying agentic AI — systems that take actions autonomously — compliance wants to see a human-override mechanism. How can a human instantly stop an agent that begins hallucinating, violating policy, or taking actions outside its intended scope? The override must be accessible, tested, and documented.

This parallels the circuit-breaker patterns engineering teams already use for microservices. The practice maturity question is whether you've designed these safeguards into the agent architecture from the start, or whether you're hoping to add them after something goes wrong.
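The mechanical core of such an override is small: a kill switch the agent loop consults before every action, which any operator (or policy layer) can trip. The action model below is an illustrative assumption:

```python
import threading

# Sketch: a kill switch checked before every agent step, not once at start.
# Any operator thread can trip it; the agent stops at the next step boundary.
# The string-based action model is an illustrative assumption.

class KillSwitch:
    def __init__(self) -> None:
        self._stop = threading.Event()

    def trip(self) -> None:
        self._stop.set()

    def active(self) -> bool:
        return not self._stop.is_set()

def run_agent(actions: list[str], switch: KillSwitch) -> list[str]:
    executed = []
    for action in actions:
        if not switch.active():       # re-checked before every action
            break
        executed.append(action)
        if action == "delete_records":  # e.g. a policy layer trips the switch
            switch.trip()
    return executed

switch = KillSwitch()
done = run_agent(["read_docs", "delete_records", "send_email"], switch)
print(done)  # ['read_docs', 'delete_records']
```

The "tested and documented" part of the requirement matters as much as the code: an override that has never been exercised under load is an override you hope works.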

Shift-Left Compliance Requirements by SDLC Phase

Compliance requirements are embedding earlier in the development lifecycle. During planning, features must be formally graded as minimal, limited, or high-risk. During development, evidence must show AI tools didn't bypass encryption or network isolation controls. During testing, independent bias auditing must demonstrate no disparate impact. At release, users must receive clear notice that they're interacting with AI.
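The planning-phase grading above can be encoded so that evidence requirements attach to a feature automatically. The tier names follow the EU AI Act's minimal/limited/high-risk language; the mapping of tiers to evidence artifacts below is an illustrative assumption:

```python
from enum import Enum

# Sketch: grade features at planning time and derive the evidence each
# tier must produce downstream. The requirement mapping is illustrative.

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

EVIDENCE_REQUIRED = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["ai-disclosure-notice"],
    RiskTier.HIGH: ["ai-disclosure-notice", "bias-audit", "decision-logging"],
}

def evidence_for(tier: RiskTier) -> list[str]:
    return EVIDENCE_REQUIRED[tier]

print(evidence_for(RiskTier.HIGH))
# ['ai-disclosure-notice', 'bias-audit', 'decision-logging']
```

When the grade lives in the planning artifact, release tooling can refuse to ship a high-risk feature whose bias audit or decision logging is missing.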

Each of these requirements maps to a specific engineering practice. The teams that can produce this evidence automatically — because their practices generate it as a natural byproduct — are the ones that will meet 2026 deadlines without heroic last-minute efforts.

Read: What CRA and NIS2 Actually Require as Compliance Evidence →

Read: AI Governance Implementation Roadmap — From Policy to Pipeline →

Run the Free CRA Scanner to assess your SDLC compliance readiness →