
DevOps Maturity Assessment: Our 5-Stage Framework (Free Template)

Engineering Team 2026-03-19

Most DevOps maturity assessments are academic exercises that produce a nice PDF and change nothing. We built a framework that actually drives improvement — because we use it at the start of every consulting engagement to understand where teams are and what to fix first.

This is the framework, the scoring system, and the template. Use it to assess your own team.

Why Assess DevOps Maturity

Three reasons:

  1. Identify bottlenecks — your deployment frequency might be limited by manual testing, not missing automation. The assessment reveals which area is holding everything else back.

  2. Prioritise investments — should you invest in monitoring, CI/CD, or infrastructure as code? The assessment shows which improvement delivers the most impact relative to effort.

  3. Benchmark progress — run the assessment quarterly. If your score is not improving, your DevOps transformation is stalled.

The 5 Stages

Stage  Name       Description
─────────────────────────────────────────────────────────────────────────
1      Ad Hoc     Manual processes, tribal knowledge, reactive firefighting
2      Managed    Basic automation, some documentation, teams starting to collaborate
3      Defined    CI/CD pipelines, IaC, monitoring in place, consistent processes
4      Measured   DORA metrics tracked, feedback loops, data-driven improvements
5      Optimised  Continuous improvement, self-service platforms, elite DORA metrics

Most teams we assess score Stage 2-3. That is normal. The goal is not to reach Stage 5 overnight — it is to identify the highest-leverage improvements at your current stage.

The 8 Assessment Dimensions

Dimension 1: Source Control and Collaboration

Stage  What It Looks Like
─────────────────────────────────────────────────────────────────────────
1      Code on local machines or shared drives, no version control
2      Git repositories, basic branching, some code review
3      Pull request workflow with required reviews, branch protection rules
4      Trunk-based development, automated PR checks, small frequent commits
5      Feature flags, automated release notes, < 1 day lead time for changes

Key questions:

  • How do developers get code reviewed?
  • What is your branching strategy?
  • How long does a PR stay open before merge?
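The last question is easy to answer with data you already have. A minimal sketch, assuming you can export opened/merged timestamps for recent PRs from your Git host's API (the sample data here is hypothetical):

```python
from datetime import datetime
from statistics import median

def median_pr_open_hours(prs):
    """Median hours a PR stays open, given (opened_at, merged_at) ISO-8601 pairs."""
    hours = [
        (datetime.fromisoformat(merged) - datetime.fromisoformat(opened)).total_seconds() / 3600
        for opened, merged in prs
    ]
    return median(hours)

# Hypothetical export from your Git host's API
prs = [
    ("2026-03-01T09:00:00", "2026-03-01T15:00:00"),  # 6 h
    ("2026-03-02T10:00:00", "2026-03-03T10:00:00"),  # 24 h
    ("2026-03-04T08:00:00", "2026-03-04T12:00:00"),  # 4 h
]
print(median_pr_open_hours(prs))  # 6.0
```

A median under a day is typical of Stage 4 teams; a median measured in weeks usually points to review bottlenecks, not slow developers.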

Dimension 2: Build and CI

Stage  What It Looks Like
─────────────────────────────────────────────────────────────────────────
1      Manual builds on developer machines
2      CI server runs builds on merge, basic unit tests
3      CI runs on every PR: build, test, lint, security scan
4      Fast CI (< 10 min), parallel testing, test coverage tracked
5      CI optimised (< 5 min), flaky tests eliminated, build caching

Key questions:

  • Does CI run on every pull request?
  • How long does a CI build take?
  • What percentage of tests are automated?

Dimension 3: Deployment and CD

Stage  What It Looks Like
─────────────────────────────────────────────────────────────────────────
1      Manual deployment via SSH or console, documented in a wiki
2      Scripts automate deployment, but someone runs them manually
3      CI/CD pipeline deploys to staging automatically, production requires approval
4      GitOps, canary/blue-green deployments, automated rollback
5      Progressive delivery, feature flags, deploy multiple times per day

Key questions:

  • How do you deploy to production?
  • How long does a deployment take?
  • How often do you deploy?
  • Can you roll back in under 5 minutes?

Dimension 4: Infrastructure

Stage  What It Looks Like
─────────────────────────────────────────────────────────────────────────
1      Manually provisioned via console, no documentation
2      Some scripts, partial documentation, snowflake servers
3      Terraform/OpenTofu for infrastructure, environments reproducible
4      Full IaC, immutable infrastructure, Terraform + Ansible pipeline
5      Self-service platform, developers provision infrastructure via PRs

Key questions:

  • Can you recreate your production environment from code?
  • How long would it take to rebuild after a complete environment loss?
  • Is all infrastructure defined in code?

Dimension 5: Monitoring and Observability

Stage  What It Looks Like
─────────────────────────────────────────────────────────────────────────
1      No monitoring, issues discovered by users
2      Basic uptime monitoring, manual log checking
3      Prometheus + Grafana, centralised logging, alerts for critical issues
4      Distributed tracing, SLOs defined, proactive alerting
5      AIOps, predictive alerting, automated remediation

Key questions:

  • How do you find out about production issues?
  • Do you have dashboards for each service?
  • Are SLOs defined and tracked?

Dimension 6: Security

Stage  What It Looks Like
─────────────────────────────────────────────────────────────────────────
1      Security is an afterthought, no scanning, shared credentials
2      Basic access controls, occasional manual security reviews
3      Security scanning in CI/CD, secret management, RBAC
4      Automated compliance checks, vulnerability management, container scanning
5      DevSecOps fully integrated, security as code, zero-trust architecture

Key questions:

  • Are secrets stored in a secret manager or hardcoded?
  • Is security scanning part of your CI/CD pipeline?
  • When was your last security assessment?

Dimension 7: Testing

Stage  What It Looks Like
─────────────────────────────────────────────────────────────────────────
1      Manual testing only, no test suite
2      Some unit tests, manual QA before releases
3      Unit + integration tests in CI, > 70% coverage on critical paths
4      Contract testing, performance testing, chaos engineering
5      Testing in production, canary analysis, automated quality gates

Key questions:

  • What percentage of your codebase has automated tests?
  • How long does your test suite take to run?
  • Do you test for performance and resilience?

Dimension 8: Culture and Collaboration

Stage  What It Looks Like
─────────────────────────────────────────────────────────────────────────
1      Silos between dev and ops, blame culture, change is feared
2      Some collaboration, incident reviews happen occasionally
3      Shared ownership of production, blameless postmortems
4      Cross-functional teams, developers on-call for their services
5      Learning culture, internal tech talks, innovation time allocated

Key questions:

  • Who gets paged when production goes down?
  • Do you conduct blameless postmortems?
  • How comfortable are developers with deploying on a Friday?

DORA Metrics: The Industry Benchmark

The DORA (DevOps Research and Assessment) metrics provide an objective measure of software delivery performance:

Metric                 Elite               High          Medium          Low
────────────────────────────────────────────────────────────────────────────────
Deployment frequency   Multiple times/day  Weekly-daily  Monthly-weekly  < Monthly
Lead time for changes  < 1 hour            1 day-1 week  1 week-1 month  > 1 month
Change failure rate    < 5%                5-10%         10-15%          > 15%
Mean time to recovery  < 1 hour            < 1 day       1 day-1 week    > 1 week

How to measure:

  • Deployment frequency: Count production deployments per week (check your CI/CD logs)
  • Lead time: Measure time from first commit to production deployment
  • Change failure rate: Percentage of deployments that cause an incident or rollback
  • MTTR: Average time from incident detection to resolution
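
The four measurements above are simple arithmetic once you have the raw records. A minimal sketch, assuming you can export deployment and incident records from your CI/CD and incident tooling (the record fields and sample data here are hypothetical):

```python
from datetime import datetime, timedelta

def dora_metrics(deploys, incidents, weeks):
    """Compute the four DORA metrics from raw records.

    deploys:   dicts with 'first_commit_at' and 'deployed_at' datetimes, plus
               'failed' (True if the deploy caused an incident or rollback)
    incidents: (detected_at, resolved_at) datetime pairs
    weeks:     length of the observation window in weeks
    """
    frequency = len(deploys) / weeks                                   # deploys per week
    lead_time = sum((d["deployed_at"] - d["first_commit_at"] for d in deploys),
                    timedelta()) / len(deploys)                        # avg commit-to-prod
    cfr = sum(d["failed"] for d in deploys) / len(deploys)             # change failure rate
    mttr = (sum((end - start for start, end in incidents), timedelta())
            / len(incidents)) if incidents else timedelta(0)           # mean time to recovery
    return frequency, lead_time, cfr, mttr

# Hypothetical two-week window: 4 deploys, 1 failure, one 2-hour incident
deploys = [
    {"first_commit_at": datetime(2026, 3, 1), "deployed_at": datetime(2026, 3, 2), "failed": False},
    {"first_commit_at": datetime(2026, 3, 5), "deployed_at": datetime(2026, 3, 6), "failed": True},
    {"first_commit_at": datetime(2026, 3, 9), "deployed_at": datetime(2026, 3, 10), "failed": False},
    {"first_commit_at": datetime(2026, 3, 12), "deployed_at": datetime(2026, 3, 13), "failed": False},
]
incidents = [(datetime(2026, 3, 6, 10), datetime(2026, 3, 6, 12))]
freq, lead, cfr, mttr = dora_metrics(deploys, incidents, weeks=2)
# 2 deploys/week, 1-day lead time, 25% CFR, 2 h MTTR -> Medium on most axes
```

Against the benchmark table this hypothetical team is Medium across the board: exactly the profile the next paragraph describes.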

Most teams we assess fall in the Medium category. Moving from Medium to High delivers the biggest business impact.

The Scoring Template

Score each dimension 1-5, then calculate your overall maturity:

DevOps Maturity Scorecard
─────────────────────────────────────────
Dimension                    Score (1-5)
─────────────────────────────────────────
1. Source Control                 [ ]
2. Build and CI                   [ ]
3. Deployment and CD              [ ]
4. Infrastructure                 [ ]
5. Monitoring                     [ ]
6. Security                       [ ]
7. Testing                        [ ]
8. Culture                        [ ]
─────────────────────────────────────────
Total                           [ ] / 40
Average                         [ ] / 5
─────────────────────────────────────────
Overall Stage:                  [ ]
─────────────────────────────────────────

DORA Metrics:
- Deployment frequency:     [ ]
- Lead time for changes:    [ ]
- Change failure rate:      [ ]
- Mean time to recovery:    [ ]
- DORA Category:            [ ]

Interpreting Your Score

Average Score  Overall Stage  Priority
─────────────────────────────────────────────────────────────────────────
1.0-1.9        Ad Hoc         Version control, basic CI/CD, monitoring
2.0-2.9        Managed        IaC, security scanning, automated testing
3.0-3.9        Defined        DORA metrics, GitOps, observability
4.0-4.9        Measured       Platform engineering, self-service, chaos engineering
5.0            Optimised      Continuous improvement, AIOps
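
If you run the assessment quarterly, it is worth scripting the scorecard arithmetic so the results are comparable. A minimal sketch of the bands above:

```python
def maturity_stage(scores):
    """Map eight 1-5 dimension scores to (average, overall stage name)."""
    assert len(scores) == 8 and all(1 <= s <= 5 for s in scores), "eight scores, each 1-5"
    avg = sum(scores) / len(scores)
    # Band thresholds follow the interpretation table: avg below the
    # threshold falls in that stage; a perfect 5.0 is Optimised.
    for threshold, stage in [(2.0, "Ad Hoc"), (3.0, "Managed"),
                             (4.0, "Defined"), (5.0, "Measured")]:
        if avg < threshold:
            return avg, stage
    return avg, "Optimised"

avg, stage = maturity_stage([3, 3, 2, 2, 3, 2, 2, 3])
print(f"{avg:.1f} -> {stage}")  # 2.5 -> Managed
```

The example team averages 2.5: solidly Managed, with IaC, security scanning, and automated testing as the priorities.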

Improvement Roadmap by Stage

Stage 1 → 2 (Foundation)

Timeline: 4-8 weeks

Focus on the basics that unblock everything else:

  1. Move all code to Git with branch protection and required reviews
  2. Set up a CI pipeline that runs tests on every PR
  3. Containerise applications (Dockerfile for each service)
  4. Set up basic monitoring (uptime + error tracking)
  5. Move secrets to a secret manager

Stage 2 → 3 (Automation)

Timeline: 2-3 months

Automate the manual steps that slow you down:

  1. Implement Infrastructure as Code for all environments
  2. Build a CD pipeline that deploys to staging automatically
  3. Deploy Prometheus and Grafana for metrics and dashboards
  4. Add security scanning to CI (container scanning, SAST)
  5. Implement automated database backups with tested restoration

Stage 3 → 4 (Measurement)

Timeline: 3-6 months

Start measuring and optimising:

  1. Track DORA metrics (deployment frequency, lead time, CFR, MTTR)
  2. Define SLOs for each service and alert on SLO budget burn
  3. Implement GitOps with ArgoCD for Kubernetes deployments
  4. Set up distributed tracing (OpenTelemetry)
  5. Conduct blameless postmortems for every incident
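
Alerting on SLO budget burn (item 2) reduces to one ratio: observed error rate divided by the error budget. A burn rate of 1.0 spends the budget exactly over the SLO period; multi-window burn-rate thresholds (such as those in Google's SRE guidance) page when the ratio is far above that. A minimal sketch:

```python
def burn_rate(slo_target, errors, requests):
    """Error-budget burn rate: 1.0 consumes the budget exactly over the SLO period."""
    error_budget = 1 - slo_target      # e.g. 0.001 for a 99.9% availability SLO
    return (errors / requests) / error_budget

# 0.5% of requests failing against a 99.9% SLO burns the budget 5x too fast
print(round(burn_rate(0.999, 50, 10_000), 2))  # 5.0
```

Alerting on this ratio instead of raw error counts means quiet services and busy services can share the same alert thresholds.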

Stage 4 → 5 (Optimisation)

Timeline: 6-12 months

Build a self-service platform:

  1. Build an internal developer platform
  2. Implement golden paths for common deployment patterns
  3. Progressive delivery with feature flags and canary analysis
  4. Chaos engineering to test resilience
  5. Automated cost optimisation with FinOps practices

Common Anti-Patterns

1. Skipping stages. Teams try to implement GitOps (Stage 4) before they have CI/CD (Stage 3). Each stage builds on the previous one.

2. Tool-first thinking. Buying Datadog does not give you Stage 4 monitoring. The tool matters less than the practices — alerting on SLOs, running postmortems, building dashboards that teams actually use.

3. Ignoring culture. You can have perfect automation and still score Stage 2 if developers are afraid to deploy and teams blame each other for incidents. Culture is a dimension, not a side effect.

4. Assessing once. A single assessment is a snapshot. Run it quarterly to track progress. If your scores are not improving, something is wrong with your improvement plan.


Want a Professional DevOps Assessment?

We run structured DevOps maturity assessments for teams of all sizes — from 5-person startups to 200-person engineering organisations.

Our DevOps consulting services include:

  • Maturity assessment — score your team across all 8 dimensions with DORA benchmarking
  • Improvement roadmap — prioritised recommendations based on your current stage
  • Implementation support — help you move from one stage to the next
  • Quarterly re-assessment — track progress and adjust priorities

We also offer a free 30-minute strategy consultation where we can do a high-level assessment of your current setup.

Book a free DevOps assessment →
