Deployment Frequency: The DORA Metric That Tells Half the Story
What DORA Measures
DORA's deployment frequency metric measures how often your team ships code to production. It's one of four key metrics from the State of DevOps Report. High performers deploy multiple times per day. Low performers deploy once every six months. The thinking is: if you deploy frequently, you're shipping value faster and learning from real users quicker.
This is useful. But it's incomplete. Frequency without governance is just chaos with a number attached.
The Incomplete Picture
A team deploying 50 times per day looks great on a DORA scorecard. But what if 10 of those 50 deployments get rolled back? What if each deployment needs an emergency hotfix the next day? What if there's no runbook, no incident response process, no way to trace what changed?
Frequency measures speed. It doesn't measure safety. It doesn't measure whether you have the controls in place to recover from a bad deploy. It doesn't measure whether your deployment process is repeatable or documented.
A board will see "100 deployments per month" and think your team is shipping faster. They might not see that each of those deployments requires manual intervention, that your on-call rotation is burning out, or that you're spending more time rolling back than shipping features.
Speed + Safety = Maturity
High-maturity teams deploy frequently AND safely. They have:
- Deployment runbooks: Every deploy follows a checklist. It's repeatable, not ad-hoc.
- Rollback procedures: When a deploy goes wrong, you know how to revert quickly.
- Blue-green or canary deployments: You don't flip a switch and send all traffic to new code. You test it with a subset first.
- Feature flags: You can deploy code without turning on new features, letting you separate deployment from release.
- Monitoring and alerting: Errors and anomalies are caught within minutes, not hours.
- Low rollback rates: Most deployments succeed on the first attempt; reverts are the exception, not the routine.
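The "separate deployment from release" idea behind feature flags fits in a few lines. This is a minimal sketch with an in-process dict as the flag store; `FLAGS` and `is_enabled` are illustrative names, not any particular flag service's API:

```python
# Minimal feature-flag sketch (in-process dict store, illustrative names).

FLAGS = {
    "new-checkout": {"enabled": False, "rollout_percent": 0},
}

def is_enabled(flag_name: str, user_id: int) -> bool:
    """True if the flag is on for this user.

    Code guarded by a flag can ship to production "dark"; flipping the
    flag or raising rollout_percent releases it without a redeploy.
    """
    flag = FLAGS.get(flag_name)
    if flag is None:
        return False          # unknown flags default to off
    if flag["enabled"]:
        return True           # fully released
    # Percentage rollout: each user lands in a stable bucket, canary-style.
    return (user_id % 100) < flag["rollout_percent"]

# Deployed but not released: the guarded path stays dark for everyone.
print(is_enabled("new-checkout", user_id=42))   # False

# A config change, not a redeploy, releases to ~10% of users.
FLAGS["new-checkout"]["rollout_percent"] = 10
print(is_enabled("new-checkout", user_id=5))    # True (bucket 5 < 10)
```

In practice the flag store lives outside the process (a config service or database), so releasing a feature is a data change rather than a code change.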
How Concordance Adds Context
Concordance measures not just frequency but the governance around it. It answers:
- How many deployments per week?
- What's your rollback rate? (Deployments that had to be reverted.)
- How many production incidents per deployment?
- Do you have documented deployment processes?
- Is deployment gated by approvals, or can anyone deploy?
- What's the time from "merge to main" to "in production"?
So you can see: Team A deploys 10 times per week with 5% rollback rate. Team B deploys 30 times per week with 25% rollback rate. Both look "high-performing" on frequency. But the stories are very different. Team A is shipping safely. Team B is shipping fast but breaking things.
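The Team A / Team B comparison above is a simple aggregation over deployment records. A minimal sketch, where the `Deployment` shape and the sample data are illustrative, not Concordance's actual data model (counts are summed over two weeks so the 5% and 25% rates come out to whole deployments):

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    team: str
    rolled_back: bool

def report(deployments):
    """Per-team deploy count and rollback rate from a batch of records."""
    stats = {}
    for d in deployments:
        team = stats.setdefault(d.team, {"deploys": 0, "rollbacks": 0})
        team["deploys"] += 1
        team["rollbacks"] += d.rolled_back   # bool counts as 0 or 1
    for team in stats.values():
        team["rollback_rate"] = team["rollbacks"] / team["deploys"]
    return stats

# Two weeks of illustrative records:
# Team A: 20 deploys, 1 reverted (5%). Team B: 60 deploys, 15 reverted (25%).
two_weeks = (
    [Deployment("A", rolled_back=False)] * 19
    + [Deployment("A", rolled_back=True)] * 1
    + [Deployment("B", rolled_back=False)] * 45
    + [Deployment("B", rolled_back=True)] * 15
)
stats = report(two_weeks)
print(stats["A"]["rollback_rate"])   # 0.05
print(stats["B"]["rollback_rate"])   # 0.25
```

Same frequency story, very different denominators: the rollback rate is what exposes Team B's churn. DORA tracks a related measure, change failure rate, for exactly this reason.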
Balancing Speed and Safety
The goal is not maximum frequency. It's sustainable frequency. A team deploying once a week with zero rollbacks is more mature than a team deploying 20 times a day with 10 rollbacks.
Ask yourself: Can I explain why we deploy this often? Can I explain our deployment process to someone new? What happens when a deployment goes wrong? If you can't answer these questions clearly, your frequency might be a warning sign, not a win.
Measure deployment frequency AND the safety controls that govern it.