Concordance Labs · April 2026
The Team at Concordance
April 2026 · 5 min read

CI Pipeline Coverage: Are Your Tests Actually Running?

The Test Gap

A team tells you they have 80% code coverage. You think: great, they write tests. But coverage is not execution. You can have 80% coverage without running those tests on every pull request. You can have tests that pass locally but never run in the pipeline. You can have a CI system that doesn't block merges when tests fail.

This is the test gap. Tests exist on disk, but they're not gatekeeping your code. A bad change can slip through because your CI pipeline doesn't catch it.

What Pipeline Coverage Actually Means

Pipeline coverage is not about test coverage percentages. It's about whether your tests run on every pull request and whether they actually block a bad merge. It's about whether your security scanning happens, whether your linting catches style issues, whether your build catches compilation errors.

Mature pipeline coverage means:

- Tests run on every pull request, not just when someone remembers to trigger them.
- Failing checks block the merge.
- Linting, security scanning, and the build itself run as required checks alongside the tests.

The Key Distinction: Configuration vs. Execution

Your CI system might be configured to run tests. But if it's not actually running them on your pull requests, the configuration is just theater. Concordance doesn't look at your CI configuration; it looks at execution data. Did tests actually run on the last 100 PRs? Did they block any merges? That's what matters.
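The distinction can be made concrete. Here is a minimal sketch of that kind of execution analysis (not Concordance's actual implementation; the PR-record shape is hypothetical, and real data would come from your CI provider's API):

```python
# Sketch: measure test *execution*, not configuration, over recent PRs.
# The record shape below is hypothetical; real data would come from your
# CI provider (e.g. GitHub's check-runs endpoint).

def execution_rate(prs):
    """Fraction of PRs on which a test check actually ran."""
    ran = [pr for pr in prs if any(c["name"] == "tests" for c in pr["checks"])]
    return len(ran) / len(prs) if prs else 0.0

def failures_blocked(prs):
    """True if every PR with a failing test check was not merged."""
    for pr in prs:
        failed = any(c["name"] == "tests" and c["conclusion"] == "failure"
                     for c in pr["checks"])
        if failed and pr["merged"]:
            return False
    return True

prs = [
    {"merged": True,  "checks": [{"name": "tests", "conclusion": "success"}]},
    {"merged": False, "checks": [{"name": "tests", "conclusion": "failure"}]},
    {"merged": True,  "checks": []},  # configured, but nothing actually ran
]
print(execution_rate(prs))   # 2 of 3 PRs actually ran tests
print(failures_blocked(prs)) # the one failure did not merge
```

The third PR is the "theater" case: tests exist in the configuration, but no check ran before the merge.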

How to Build Pipeline Coverage

Start with the basics: unit tests must run and must pass before merge. Then layer in integration tests. Add linting. Add security scanning. Each layer catches different issues. A change might pass unit tests (correct logic) but fail security scanning (uses a deprecated API). Both checks are valuable.
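As a sketch, those layers might look like this in a GitHub Actions workflow (job names and `make` targets are illustrative placeholders; substitute your own test, lint, and scan tooling):

```yaml
# Illustrative workflow: each job is a separate layer of pipeline coverage.
name: ci
on: pull_request

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test          # placeholder: your unit-test command

  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make lint          # placeholder: your linter

  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make scan          # placeholder: a dependency or SAST scanner
```

Separate jobs keep the layers independent: a lint failure reports as a lint failure, not as a mysterious test failure.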

The critical rule: if a CI check fails, the pull request cannot merge. Not "the developer should fix this later." Not "it's a warning." Required checks must be required.
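On GitHub, for example, "required must be required" is enforced with branch protection. One way to set it is via the REST API (`OWNER/REPO` and the check names are placeholders; adjust them to your repository):

```shell
# Require the "unit-tests" and "lint" checks to pass before merging to main.
gh api -X PUT "repos/OWNER/REPO/branches/main/protection" --input - <<'EOF'
{
  "required_status_checks": { "strict": true, "contexts": ["unit-tests", "lint"] },
  "enforce_admins": true,
  "required_pull_request_reviews": null,
  "restrictions": null
}
EOF
```

With this in place, a failing check doesn't produce a warning to be fixed later; the merge button is simply unavailable.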

How Concordance Scores It

Concordance queries your CI system (GitHub Actions, GitLab CI, Jenkins, CircleCI, etc.) and analyzes execution patterns. It measures:

- Whether checks actually ran on recent pull requests.
- Whether failing checks blocked merges.
- Whether the checks regularly catch real issues.

A team with comprehensive checks running on every PR, blocking merges on failures, and regularly catching issues scores high. A team with no CI, or CI that only runs sometimes, or checks that don't block merges scores low.
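Concordance's scoring model is not public, but as a hedged illustration, a score along these lines could combine those execution signals (the weights and function are hypothetical):

```python
# Hypothetical scoring sketch -- not Concordance's actual model.
# Combines three execution signals into a 0-100 score.

def pipeline_score(run_rate, block_rate, catch_rate):
    """
    run_rate:   fraction of recent PRs on which checks actually ran
    block_rate: fraction of failing checks that prevented a merge
    catch_rate: fraction of recent PRs where a check caught an issue
                (evidence the checks are doing real work)
    """
    # Weights are illustrative: execution and blocking dominate.
    return round(100 * (0.45 * run_rate + 0.45 * block_rate + 0.10 * catch_rate))

print(pipeline_score(1.0, 1.0, 0.2))  # comprehensive, blocking CI scores high
print(pipeline_score(0.2, 0.0, 0.0))  # CI that runs sometimes and never blocks scores low
```

The shape matters more than the numbers: a pipeline that never blocks a merge can't score well no matter how many checks it configures.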

CRA Relevance: Automated Quality Gates

The CRA requires "secure-by-design" practices. One clear demonstration is automated testing and scanning in your pipeline. It shows that you're not relying on human oversight alone—you have automated checks that catch issues before code reaches production. Security scanning in CI is concrete evidence of this.

When CRA auditors ask "How do you ensure code quality before it ships?", you can show them your CI configuration. When they ask "What would prevent a vulnerable dependency from reaching production?", you point to your dependency scanning. This is evidence, not promise.
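A dependency-scanning check of that kind can be a single required job; this sketch uses `npm audit` as one example of a scanner that fails the build on high-severity advisories (the job name is illustrative, and ecosystems other than npm have equivalents):

```yaml
# Illustrative dependency-scanning job for a Node.js project.
dependency-scan:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: npm audit --audit-level=high   # fail on high-severity advisories
```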

Related Guides

Audit your CI pipeline coverage and find gaps in automated testing.

Run a free Foundation Scan →
See how practice scoring works →