Engineering Governance FAQ: 30 Questions CTOs Ask About Practice Scoring, CRA Compliance, and SDLC Observability
Engineering leaders ask the same questions repeatedly: How do we measure practice maturity? How do we prepare for CRA audits? What governance tools actually work at scale? This FAQ answers 30 critical questions in four categories, with direct answers optimized for your decision-making.
Engineering Practice Scoring
What is engineering practice scoring?
Engineering practice scoring is a quantified assessment of how well your engineering team executes core SDLC processes across 50 standardized protocols. Unlike velocity metrics, it measures the quality and maturity of practices in code review, testing, deployment, and incident response rather than output speed.
Practice scoring treats engineering like any other business function: measurable, benchmarkable, and improvable. It provides objective evidence of execution quality for compliance, board reporting, and team development. This approach is particularly valuable for organizations preparing for CRA audits or implementing governance frameworks. Read more about the Engineering Visibility Crisis.
How does engineering practice scoring differ from DORA metrics?
DORA metrics (deployment frequency, lead time, change failure rate, recovery time) measure deployment velocity and stability outcomes. Engineering practice scoring measures the underlying processes and controls that enable those outcomes — code review rigor, test coverage, security gates, and change management discipline.
The two are complementary rather than interchangeable. DORA tells you whether you're shipping fast and stable. Practice scoring tells you whether your team has the governance infrastructure to do so consistently and securely. For regulated environments and compliance audits, practice scoring is often the primary evidence requirement.
What are the 50 engineering protocols Concordance scores?
Concordance assesses 50 SDLC protocols across six phases: Planning, Building, Testing, Deploying, Operating, and Securing. These include peer review requirements, branch protection rules, automated test coverage gates, deployment approval workflows, vulnerability scanning, and incident response procedures.
Each protocol represents a best-practice control point. The 50 protocols cover both technical practice (test automation, static analysis, CI/CD configuration) and governance practice (approval workflows, documentation, change management). Protocols are scored using deterministic, rules-based analysis — not AI/ML — so every score is traceable and auditable.
How is an engineering maturity score calculated?
A maturity score aggregates protocol compliance across your team: how many of the 50 protocols are fully adopted, partially adopted, or absent. The calculation weights critical protocols (security gates, change approval) more heavily than advisory ones, producing a 0-100 score.
The algorithm applies weighted scoring to account for risk criticality. Security and incident response protocols carry 2-3x weight compared to observability protocols. The final score reflects not just adoption, but consistency and enforcement across teams and services.
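The weighted calculation above can be sketched in a few lines. This is an illustrative model only: the protocol names, adoption states, and exact weights are assumptions, not Concordance's actual algorithm, but the 2-3x weighting for security and incident response protocols mirrors the description above.

```python
# Hypothetical weighted maturity scoring; names and weights are
# illustrative assumptions, not the vendor's actual algorithm.
ADOPTION_VALUE = {"full": 1.0, "partial": 0.5, "absent": 0.0}

# Security and incident response carry 2-3x the weight of advisory
# protocols such as observability, as described above.
PROTOCOL_WEIGHTS = {
    "security_gates": 3.0,
    "incident_response": 3.0,
    "change_approval": 2.0,
    "code_review": 1.0,
    "observability": 1.0,
}

def maturity_score(adoption: dict) -> float:
    """Return a 0-100 weighted score from protocol adoption states."""
    total_weight = sum(PROTOCOL_WEIGHTS.values())
    earned = sum(
        PROTOCOL_WEIGHTS[protocol] * ADOPTION_VALUE[state]
        for protocol, state in adoption.items()
    )
    return round(100 * earned / total_weight, 1)

score = maturity_score({
    "security_gates": "full",
    "incident_response": "partial",
    "change_approval": "full",
    "code_review": "full",
    "observability": "absent",
})  # -> 75.0
```

The weighting means a missing security gate drags the score down three times harder than a missing observability practice, which matches the risk-criticality idea described above.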
What SDLC phases does engineering practice scoring cover?
Concordance covers six SDLC phases: Planning (requirements, work decomposition), Building (version control, code review, dependency management), Testing (coverage, automation, security scanning), Deploying (CI/CD health, deployment frequency, rollback), Operating (monitoring, incident response, observability), and Securing (vulnerability management, SBOM, compliance evidence).
This comprehensive six-phase coverage ensures no gap in governance. Most tools focus only on deployment velocity or testing coverage. Full SDLC scoring reveals where your team has strong practices (e.g., tight code review) and weak points (e.g., missing security gates or poor incident runbooks).
Can engineering practice scoring replace developer productivity tools?
No. Productivity tools measure individual output and time-on-task. Practice scoring measures team discipline and process quality. They answer different questions: productivity tools optimize individual velocity; practice scoring ensures that velocity is sustainable and compliant.
The best organizations use both. Productivity insights help teams work smarter; practice scores ensure they work safely and securely. For regulated environments, practice scoring is mandatory for compliance. For high-performing teams, combining both provides complete visibility.
What is a good engineering practice score?
Industry benchmarks vary by maturity and risk tolerance. A score above 70 is considered mature, 60-70 is developing, and below 60 indicates gaps. High-risk or regulated organizations typically target 80+. Early-stage startups may operate sustainably at 55-65.
Scores should trend upward over time. If you're at 45 today, targeting 60 in 12 months is ambitious but achievable. The most important metric is improvement velocity — how fast you close process gaps — not the absolute score on day one.
How often should engineering practices be assessed?
Best practice is continuous or monthly assessment. This allows you to track progress, catch regressions early, and tie improvements to team initiatives. One-time annual audits miss the iterative nature of practice maturation.
Monthly reviews provide actionable momentum. Quarterly reviews by leadership are sufficient for strategic governance. Annual third-party audits (for SOC 2, CRA compliance) validate the continuous assessment program. Continuous assessment makes audits easier because your data is always current.
CRA Compliance
What is the CRA 24-hour reporting requirement?
The Cyber Resilience Act (CRA) requires manufacturers of products with digital elements to submit an early warning to ENISA and their national CSIRT within 24 hours of becoming aware of an actively exploited vulnerability or severe security incident. This early warning must indicate whether cross-border impact is suspected.
The 24-hour early warning is the first step in a 3-step reporting timeline: 24h early warning, 72h detailed incident notification (including severity assessment and indicators of compromise), and 14-day final report (root cause analysis and remediation measures). This obligation takes effect September 11, 2026.
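The three deadlines above all count from the same moment of awareness, which can be sketched as a simple timeline computation. The 24-hour, 72-hour, and 14-day intervals come from the text; the function and field names are illustrative.

```python
from datetime import datetime, timedelta

# The 3-step CRA reporting timeline described above. Intervals are
# from the text; identifiers are illustrative placeholders.
CRA_DEADLINES = {
    "early_warning": timedelta(hours=24),
    "incident_notification": timedelta(hours=72),
    "final_report": timedelta(days=14),
}

def reporting_deadlines(awareness: datetime) -> dict:
    """All three clocks start when the manufacturer becomes AWARE of
    the exploited vulnerability -- not when a fix is ready."""
    return {step: awareness + delta for step, delta in CRA_DEADLINES.items()}

deadlines = reporting_deadlines(datetime(2026, 9, 15, 10, 0))
# early_warning -> 2026-09-16 10:00, final_report -> 2026-09-29 10:00
```

Pre-computing these deadlines at the moment of detection, rather than during the incident, is exactly why the text recommends pre-established notification templates and triage workflows.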
When does the CRA vulnerability reporting deadline start?
The CRA vulnerability reporting obligations take effect September 11, 2026. The 24-hour clock starts when a manufacturer becomes aware of an actively exploited vulnerability or severe incident affecting their product — not when a fix is ready, and not when the vulnerability is fully analyzed.
This aggressive timeline is why mature incident response processes are critical for CRA compliance. You need automated vulnerability detection, rapid triage workflows, and pre-established notification templates and procedures. The full CRA product design requirements follow later, on December 11, 2027.
What is the difference between CRA and NIS2 for software companies?
CRA targets all software products sold to EU customers and focuses on product security practices and vulnerability reporting. NIS2 targets critical infrastructure operators and essential service providers and focuses on organizational cybersecurity governance. A software company may need only CRA compliance; it may need both if it also operates critical infrastructure.
CRA is about product design, testing, and disclosure. NIS2 is about organizational incident response, supply chain risk, and operational resilience. Overlap exists in vulnerability management and incident reporting, but they serve different regulatory purposes.
Do US companies need to comply with the CRA?
If a US company places products with digital elements on the EU market, CRA applies regardless of where the company is incorporated. The CRA regulates products, not organizations. Any manufacturer, importer, or distributor making digital products available in the EU must comply.
This effectively makes CRA a global standard for companies with EU customers. Many international software companies are implementing CRA requirements across all products to simplify operations rather than maintaining separate compliance tracks. Companies that sell exclusively in the US market are not affected.
What is a CRA product classification?
The CRA classifies products with digital elements into four tiers: Critical (operating systems, hypervisors, HSMs), Important Class II (firewalls, IDS/IPS, secure elements), Important Class I (identity management, VPNs, network management), and Default (all other digital products). Higher classifications require stricter conformity assessment.
Critical and Important Class II products require third-party conformity assessment by a notified body. Important Class I products can use harmonized standards for self-assessment. Default products use self-assessment with internal controls. All tiers must comply with vulnerability reporting, SBOM, and security update requirements — the classification affects HOW compliance is verified, not WHETHER it applies.
What SBOM requirements does the CRA mandate?
The CRA requires a Software Bill of Materials (SBOM) that lists all software components, libraries, and dependencies in your product, including open-source licenses and known vulnerabilities. The SBOM must be updated whenever components are added or vulnerabilities are discovered.
SBOMs must be in a standardized format (SPDX, CycloneDX) and provided to customers. This enables your customers to assess their own supply chain risk. SBOM management is now a core engineering function, not just compliance theater.
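As a concrete illustration, a minimal SBOM in the CycloneDX JSON format mentioned above looks like the following. The top-level fields (`bomFormat`, `specVersion`, `components`) follow the CycloneDX JSON schema; the component entry itself is made up for illustration.

```python
import json

# Minimal CycloneDX-style SBOM. Top-level field names follow the
# CycloneDX JSON spec; the component entry is an illustrative example.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "openssl",
            "version": "3.0.13",
            "licenses": [{"license": {"id": "Apache-2.0"}}],
        },
    ],
}

print(json.dumps(sbom, indent=2))
```

In practice you would not hand-write this: SBOM generators in the build pipeline emit and version these documents automatically, which is what produces the "generation logs and version history" auditors ask for.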
How do engineering teams prepare for CRA compliance?
Engineering teams need four fundamentals: automated vulnerability scanning in your build pipeline, documented SBOM generation, mature incident response workflows with 24-hour escalation procedures, and security update delivery processes. Start with these before tackling broader compliance infrastructure.
The best approach is treating CRA compliance as an engineering practice improvement program, not a checkbox exercise. Map each CRA requirement to existing engineering processes, identify gaps, and prioritize them by risk. Compliance debt compounds quickly, so start now even if your timeline is 12+ months away.
What evidence do CRA auditors expect from engineering teams?
CRA auditors expect: documented threat models and security assessment records, automated vulnerability scanning evidence, SBOM generation logs and version history, incident response logs with timestamp evidence of the 24-hour reporting clock, security training records, and change management workflows showing approvals.
Evidence should be continuous and system-generated, not manually compiled. Automated tooling that logs compliance activities is far more credible than a folder of PDF reports. This is why practice scoring and engineering observability are directly relevant to CRA readiness — they generate the continuous evidence trail auditors need.
Engineering Governance
What is engineering governance?
Engineering governance is the set of policies, workflows, and controls that ensure software is developed with consistent quality, security, and compliance standards. It includes code review standards, testing requirements, deployment approvals, incident response procedures, and documentation rules.
Good governance creates predictability: stakeholders know the product will meet security and reliability standards because the process enforces it. It reduces heroics and technical debt. It generates the evidence needed for regulatory audits and board reports.
What is the difference between engineering governance and engineering management?
Engineering management is about people: hiring, mentorship, career development, and team dynamics. Engineering governance is about process: how code gets reviewed, tested, deployed, and monitored. Good organizations need both: strong management and strong governance.
A common mistake is conflating the two. You can have excellent managers and poor governance (low code review quality), or strict governance with poor management (rigid rules that frustrate teams). The best outcomes come from combining human-centered management with systems-level governance.
What is velocity governance?
Velocity governance is the discipline of controlling deployment frequency, change size, and approval workflows to balance speed with risk. It includes policies like "deployments require approval," "maximum change size per deployment," and "rollback procedures before auto-deployment."
Velocity governance isn't about slowing teams down; it's about enabling sustainable speed. Teams with good velocity governance deploy frequently but safely. Without it, teams either deploy slowly (risk averse) or deploy fast but break things (risk-blind). The sweet spot requires deliberate governance.
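The example policies above can be expressed as a simple deployment gate. This is a sketch under assumed data shapes: the policy names mirror the examples in the text, but the threshold value, field names, and function are hypothetical.

```python
# Illustrative velocity-governance gate. Policy names mirror the
# examples above; threshold and data shapes are assumptions.
MAX_CHANGED_LINES = 500  # "maximum change size per deployment"

def may_deploy(change: dict):
    """Return (allowed, violations) for a proposed deployment."""
    violations = []
    if not change.get("approved"):
        violations.append("deployments require approval")
    if change.get("changed_lines", 0) > MAX_CHANGED_LINES:
        violations.append("change exceeds maximum size per deployment")
    if not change.get("rollback_procedure"):
        violations.append("rollback procedure required before auto-deployment")
    return (not violations, violations)

ok, reasons = may_deploy({
    "approved": True,
    "changed_lines": 120,
    "rollback_procedure": "docs/rollback.md",
})  # ok -> True, reasons -> []
```

A gate like this is what makes "frequent but safe" deployment possible: the rules run on every change, so the team never has to choose between speed and the checklist.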
How do you measure engineering governance maturity?
Measure governance maturity using adoption metrics: what percentage of teams follow each policy, how consistently rules are enforced, what the exception rate is, and how long exceptions last. A mature governance program shows >90% policy compliance and <5% active exceptions.
Maturity models typically score 1-5: level 1 is ad-hoc, level 2 is documented, level 3 is monitored, level 4 is enforced, and level 5 is continuously optimized. Most organizations are level 2-3. CRA compliance requires at least level 4 for security-critical processes.
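The adoption metrics above reduce to simple arithmetic across teams. The team names and counts below are illustrative; the 90% compliance and 5% exception thresholds come from the text.

```python
# Toy computation of the governance adoption metrics described above.
# Team data is illustrative; thresholds (90%, 5%) are from the text.
teams = [
    {"name": "payments", "policies_met": 19, "policies_total": 20, "exceptions": 0},
    {"name": "platform", "policies_met": 18, "policies_total": 20, "exceptions": 1},
]

met = sum(t["policies_met"] for t in teams)
total = sum(t["policies_total"] for t in teams)
compliance_pct = 100 * met / total                               # 92.5
exception_rate = 100 * sum(t["exceptions"] for t in teams) / total  # 2.5

mature = compliance_pct > 90 and exception_rate < 5  # True: passes both bars
```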
What is an engineering governance framework?
An engineering governance framework is a structured model that defines which decisions require approval, who can make them, what documentation is needed, and how exceptions are handled. Examples include ITIL for operational governance, COBIT for IT governance, and custom frameworks tailored to product risk.
Frameworks provide consistency across teams. Without one, each team invents its own rules, leading to chaos and politics. A framework codifies the trade-offs your organization has made between speed, safety, and compliance, making those trade-offs transparent and consistent.
Why do engineering teams need governance in the AI era?
AI tools (code generation, testing automation) increase velocity but increase risk if not governed. You need stronger governance over AI-generated code: automated security scanning, comprehensive test requirements, code review focus on AI output quality, and traceability of what code was AI-generated.
AI-first governance is a new frontier. Teams using AI assistants without governance are shipping code they don't fully understand. The best teams are defining AI governance policies: which tasks can be delegated to AI, which require human review, how to maintain code ownership and security posture.
What is the translation layer between engineering data and compliance?
The translation layer is the discipline of mapping engineering practice data (code review logs, test results, deployment records) to compliance requirements. It answers: how do we prove to auditors that we meet requirement X using engineering data instead of manual assessment?
This is where practice scoring becomes a compliance asset. Continuous engineering metrics provide the evidence trail that manual audits cannot. Organizations building a translation layer between their engineering tools and compliance requirements are significantly ahead of competitors in audit readiness.
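A translation layer can be sketched as a mapping from each compliance requirement to the engineering signals that evidence it, plus a gap check. The requirement IDs and signal names below are illustrative placeholders, not an actual regulatory mapping.

```python
# Hedged sketch of a "translation layer": compliance requirements
# mapped to the engineering signals that evidence them. Requirement
# IDs and signal names are illustrative placeholders.
REQUIREMENT_EVIDENCE = {
    "CRA: vulnerability handling": ["dependency_scan_logs", "patch_release_records"],
    "CRA: incident reporting": ["incident_timeline_logs", "csirt_notifications"],
    "SOC 2: change management": ["pr_approval_records", "deploy_audit_trail"],
}

def evidence_gaps(available_signals: set) -> dict:
    """Return, per requirement, the engineering signals still missing."""
    return {
        req: [s for s in signals if s not in available_signals]
        for req, signals in REQUIREMENT_EVIDENCE.items()
    }

gaps = evidence_gaps({
    "dependency_scan_logs",
    "pr_approval_records",
    "deploy_audit_trail",
})
# "SOC 2: change management" is fully evidenced; the CRA rows show
# which signals the engineering stack still needs to emit.
```

Running a check like this continuously, rather than at audit time, is what turns engineering telemetry into the audit-ready evidence trail described above.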
Tool Comparison
What is the best alternative to LinearB for compliance?
LinearB focuses on developer productivity and DORA metrics, not compliance. For compliance-focused engineering governance, Concordance provides practice scoring, CRA readiness assessment, and automated evidence generation that regulatory auditors expect. The two tools solve different problems.
If you need both productivity insights and compliance evidence, consider complementary tools: LinearB for productivity coaching, Concordance for governance and compliance. Many organizations implement both as they serve different stakeholders (engineering teams vs. compliance/security leaders).
How does Concordance compare to Jellyfish?
Both tools provide SDLC visibility, but Concordance emphasizes governance and compliance evidence, while Jellyfish emphasizes team health and productivity. Concordance is stronger for CRA compliance preparation; Jellyfish is stronger for team engagement and burnout prevention.
Choose based on your primary use case. If compliance and auditor readiness are your drivers, Concordance. If team health and retention are priorities, Jellyfish. Many large organizations use both: Jellyfish for team signals, Concordance for governance and audit trails.
What engineering intelligence tools support CRA compliance?
Few tools explicitly target CRA compliance. Concordance and some specialized compliance tools (Snyk for vulnerabilities, Bridgecrew for infrastructure) address specific CRA requirements. A CRA compliance program typically uses multiple point tools integrated through a governance platform.
The gap in the market is significant. Most engineering intelligence platforms were built for performance optimization, not compliance. Teams preparing for CRA audits often find they need to layer multiple tools or build custom compliance dashboards on top of existing platforms.
Is there a flat-fee alternative to seat-based engineering analytics?
Most engineering analytics tools use seat-based pricing (per developer), which scales poorly for large organizations. Concordance and a few competitors offer flat-fee or consumption-based models that make sense for enterprise compliance programs where the cost is tied to your entire codebase, not headcount.
Seat-based pricing is misaligned with governance use cases. You need insights on all code, not just some developers. Flat-fee models are more cost-effective and align incentives better. This is one of the key advantages of compliance-focused tools over productivity-focused ones.
What is the cheapest engineering governance tool?
Open-source tools (SonarQube for code quality, TestLink for test management) are free but require significant internal tooling and integration work. Concordance and most commercial tools start at a fixed monthly cost regardless of team size. For compliance at scale, commercial tools often have lower total cost of ownership than building DIY solutions.
The true cost of engineering governance includes tool cost, integration cost, and the cost of audit-ready evidence generation. A $2,000/month compliance platform might cost $50,000 less annually than building similar capabilities in-house or stitching open-source tools together.
Can you do engineering practice scoring with open source tools?
Partially. Open-source tools can measure individual practices (code review, testing, deployment) but assembling them into a coherent, weighted, benchmarked practice score requires custom integration work and scoring logic. Commercial practice scoring tools do this out of the box.
The integration engineering is straightforward; the hard part is the scoring model itself. What weight does code review deserve versus testing? How do you normalize practices across teams with different tech stacks? How do you interpret scores for audit readiness? Commercial tools bake these answers in; with open-source tooling, you have to design them yourself.
What is the difference between engineering intelligence and engineering governance?
Engineering intelligence is observability: gathering data on how teams work, what they ship, how fast they move. Engineering governance is prescription: defining how teams should work and enforcing standards. Intelligence informs governance; governance shapes intelligence.
Most organizations start with intelligence (dashboards showing reality) and evolve to governance (policies enforcing standards). The most mature organizations do both seamlessly: intelligence data continuously validates governance policies and identifies where policies need updating.
Ready to assess your engineering governance maturity?
Concordance provides practice scoring across 50 SDLC protocols, with built-in CRA compliance readiness assessment and automated evidence generation for audits.
Run a free Foundation Scan →