
Docker Compose to Kubernetes Migration: Step-by-Step (2026)

Engineering Team 2026-03-19

Docker Compose gets you from zero to running locally in minutes. But when your application needs to serve real traffic — with autoscaling, rolling updates, health checks, and multi-node resilience — Compose stops being enough.

This guide walks through migrating a Docker Compose application to Kubernetes, step by step. We cover automated conversion with Kompose, what it gets wrong, and the manual adjustments needed for a production-ready deployment.

When to Migrate

Docker Compose is not a production orchestrator. Migrate when:

  • You need horizontal scaling — Compose can set replicas but has no autoscaling, no health-based rescheduling, and no bin-packing across nodes
  • You need rolling updates — Compose restarts containers, Kubernetes does zero-downtime rolling deployments
  • You need resilience — if a Compose host dies, everything dies. Kubernetes reschedules workloads across nodes automatically
  • You are running on multiple servers — Compose is single-host. Kubernetes orchestrates across clusters
  • Your team is growing — RBAC, namespaces, and resource quotas provide governance that Compose cannot
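For reference, Compose's deploy section can pin a replica count, but nothing ever adjusts it in response to load:

```yaml
# docker-compose.yml (fragment): a static replica count, no autoscaling
services:
  web:
    deploy:
      replicas: 3   # fixed; Compose never scales this up or down on its own
```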

The Starting Point: A Typical Compose File

Here is a representative docker-compose.yml for a web application with a database, cache, and background worker:

# docker-compose.yml
version: "3.8"

services:
  web:
    build: .
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
      - SECRET_KEY=my-secret-key
    depends_on:
      - db
      - cache
    restart: always

  worker:
    build: .
    command: python manage.py run_worker
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache
    restart: always

  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=myapp

  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:

Step 1: Automated Conversion with Kompose

Kompose is an official Kubernetes project that converts Compose files to Kubernetes manifests. Install and run it:

# Install
# macOS
brew install kompose

# Linux
sudo curl -L https://github.com/kubernetes/kompose/releases/latest/download/kompose-linux-amd64 \
  -o /usr/local/bin/kompose && sudo chmod +x /usr/local/bin/kompose

# Convert
kompose convert -f docker-compose.yml

This generates:

├── web-deployment.yaml
├── web-service.yaml
├── worker-deployment.yaml
├── db-deployment.yaml
├── db-service.yaml
├── cache-deployment.yaml
├── cache-service.yaml
└── pgdata-persistentvolumeclaim.yaml

Kompose does the tedious translation — ports, volumes, environment variables, service names. But it does not produce production-ready manifests. Here is what you need to fix.

Step 2: What Kompose Gets Wrong

Problem 1: No Resource Limits

Kompose does not set CPU or memory requests/limits. Without these, a single container can consume all node resources and starve other workloads.

# Add to every container spec
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    memory: 512Mi

Setting the right values requires observing actual usage. Start conservative and adjust. See our guide on Kubernetes VPA for automated right-sizing.
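Observing actual usage requires metrics-server in the cluster; then kubectl top reports live consumption (the label selector here is illustrative):

```
# Live CPU/memory per pod (requires metrics-server)
kubectl top pods -l app=myapp

# Per-container breakdown
kubectl top pods -l app=myapp --containers

# Check whether a container has been OOM-killed recently
kubectl describe pod -l app=myapp | grep -A3 "Last State"
```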

Problem 2: No Health Checks

Kompose does not generate readiness or liveness probes. Without these, Kubernetes cannot detect unhealthy containers or route traffic away from pods that are not ready.

# Add to web container
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20

Your application needs a /health endpoint. If it does not have one, add it — even a simple endpoint returning 200 OK is better than no health check.
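A framework-agnostic sketch of such an endpoint, using only the Python standard library (in a real Django or Flask app this is a one-line view; the point is that the probe path returns 200 quickly and does not touch the database):

```python
# Minimal /health endpoint sketch using only the standard library.
# The kubelet's httpGet probe is just a GET against this path expecting 2xx.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = b"OK"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep probe traffic out of the logs
        pass

def serve(port=8080):
    """Start the health server in a background thread and return it."""
    server = HTTPServer(("0.0.0.0", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Keep the handler cheap: a liveness probe that queries the database can take down healthy pods during a database outage.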

Problem 3: Secrets in Plain Text

Kompose converts environment variables directly into Deployment specs. Your database password and secret key are now in plaintext YAML files that will be committed to Git.

Move secrets to Kubernetes Secrets:

# Create secret
kubectl create secret generic app-secrets \
  --from-literal=DATABASE_URL='postgres://user:pass@db:5432/myapp' \
  --from-literal=SECRET_KEY='my-secret-key'
# Reference in deployment
envFrom:
  - secretRef:
      name: app-secrets
  - configMapRef:
      name: app-config

Read our guide on Kubernetes secrets security mistakes to avoid common pitfalls.
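The envFrom snippet above also references an app-config ConfigMap for the non-secret settings. One way to create it (the key name mirrors the Compose environment variable; the value is illustrative):

```
kubectl create configmap app-config \
  --from-literal=REDIS_URL='redis://cache:6379'
```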

Problem 4: No Ingress

Compose exposes ports directly. Kubernetes needs an Ingress resource to route external HTTP traffic:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080

Problem 5: Database in Kubernetes

Kompose converts the db service into a Kubernetes Deployment. Running databases in Kubernetes is possible but adds operational complexity. For production, consider:

  • Managed database (RDS, Cloud SQL, Azure Database) — we recommend this for most teams
  • StatefulSet with PV — if you need to run Postgres in-cluster, use a StatefulSet, not a Deployment
  • Operators (CloudNativePG, Zalando Postgres Operator) — automate backup, failover, and scaling
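If you do keep Postgres in-cluster, a minimal StatefulSet sketch looks like this (names, secret keys, and storage size are illustrative; in practice an operator handles backups and failover):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # requires a headless Service named "db"
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      component: db
  template:
    metadata:
      labels:
        app: myapp
        component: db
    spec:
      containers:
      - name: postgres
        image: postgres:16
        ports:
        - containerPort: 5432
        envFrom:
        - secretRef:
            name: db-secrets   # POSTGRES_USER / POSTGRES_PASSWORD / POSTGRES_DB
        volumeMounts:
        - name: pgdata
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # gives each replica its own stable PVC
  - metadata:
      name: pgdata
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```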

Step 3: The Production-Ready Manifest

Here is what the web deployment looks like after fixing all Kompose issues:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: myapp
    component: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: myapp
      component: web
  template:
    metadata:
      labels:
        app: myapp
        component: web
    spec:
      containers:
      - name: web
        image: registry.example.com/myapp:v1.2.3
        ports:
        - containerPort: 8080
        envFrom:
        - secretRef:
            name: app-secrets
        - configMapRef:
            name: app-config
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
          limits:
            memory: 512Mi
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: myapp
            component: web

Changes from Kompose output:

  • 3 replicas with rolling update strategy (zero downtime)
  • Resource requests and limits to prevent resource starvation
  • Health checks for automatic recovery
  • Secrets via envFrom instead of plaintext
  • Tagged image instead of latest (never use latest in production)
  • Topology spread to distribute pods across nodes
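With resource requests in place, autoscaling becomes possible. A minimal HorizontalPodAutoscaler for the web Deployment (the target utilization is a starting point, not a recommendation):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # relative to the container's CPU request
```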

Step 4: Set Up CI/CD

Compose does not disappear entirely: it stays for local development while the build and deployment pipeline moves to Kubernetes.

Local development: Keep using Docker Compose for local dev. Add a separate k8s/ directory for Kubernetes manifests:

project/
├── docker-compose.yml          # Local development
├── Dockerfile
├── k8s/
│   ├── base/                   # Shared manifests
│   │   ├── web-deployment.yaml
│   │   ├── worker-deployment.yaml
│   │   └── kustomization.yaml
│   ├── staging/                # Staging overrides
│   │   └── kustomization.yaml
│   └── production/             # Production overrides
│       └── kustomization.yaml
└── .github/workflows/
    └── deploy.yml

Use Kustomize for environment-specific overrides without duplicating manifests.
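A production overlay can be as small as a namespace, an image override, and a replica bump (paths match the layout above; the registry name is hypothetical):

```yaml
# k8s/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production
resources:
- ../base
images:
- name: myapp
  newName: registry.example.com/myapp
  newTag: v1.2.3
replicas:
- name: web
  count: 3
```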

CI/CD pipeline:

# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4

    # Registry login and cluster credential steps are omitted for brevity;
    # ECR_REGISTRY is assumed to be set as a repository variable.
    - name: Build and push image
      run: |
        docker build -t $ECR_REGISTRY/myapp:${{ github.sha }} .
        docker push $ECR_REGISTRY/myapp:${{ github.sha }}

    - name: Deploy to Kubernetes
      run: |
        cd k8s/production
        kustomize edit set image myapp=$ECR_REGISTRY/myapp:${{ github.sha }}
        kustomize build | kubectl apply -f -
        kubectl rollout status deployment/web -n production

Migration Checklist

Use this checklist to track your migration:

  • Application runs in Docker (Dockerfile exists and builds)
  • Run kompose convert to generate initial manifests
  • Add resource requests and limits to all containers
  • Add readiness and liveness probes to all containers
  • Move secrets from environment variables to Kubernetes Secrets
  • Replace database Deployment with managed database or StatefulSet
  • Add Ingress resource with TLS
  • Set up container registry (ECR, GCR, Docker Hub)
  • Configure Kubernetes namespaces for environment isolation
  • Set up CI/CD pipeline for automated deployment
  • Test rolling updates (deploy, verify zero downtime)
  • Configure monitoring (Prometheus + Grafana)
  • Test failure scenarios (kill a pod, kill a node)
  • Document runbooks for common operations

Common Mistakes

Using latest tag. Always use specific image tags (commit SHA or semver). latest makes rollbacks impossible and deployments non-reproducible.

Skipping resource limits. Without limits, one container can OOM-kill everything on the node. Always set memory limits.

Running databases as Deployments. Databases need stable storage and network identity. Use StatefulSets or managed databases, not Deployments.

Ignoring depends_on translation. Compose’s depends_on has no direct Kubernetes equivalent. Your application must handle database connection retries on startup — do not assume services start in order.
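Since Kubernetes gives no startup ordering guarantees, a retry loop at application startup is the usual fix. A sketch with exponential backoff, where connect_fn stands in for your real database connect call (e.g. a psycopg connect):

```python
# Retry a connection with exponential backoff instead of assuming the
# database is already up when the pod starts.
import time

def connect_with_retry(connect_fn, attempts=5, base_delay=0.5):
    """Call connect_fn until it succeeds, sleeping 0.5s, 1s, 2s, ... between tries."""
    for attempt in range(attempts):
        try:
            return connect_fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: let the pod crash and be restarted
            time.sleep(base_delay * (2 ** attempt))
```

If the database never comes up, crashing is the right behavior: Kubernetes restarts the pod with backoff of its own, and the liveness probe surfaces the problem.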


Need Help Moving to Kubernetes?

We migrate Docker Compose applications to production Kubernetes clusters — from simple web apps to complex microservice architectures.

Our Kubernetes consulting team provides:

  • Architecture review — evaluate your Compose setup and design the Kubernetes target state
  • Cluster provisioning — set up EKS, AKS, or GKE with Terraform
  • Migration execution — convert, test, and deploy with zero downtime
  • Team training — get your developers comfortable with Kubernetes workflows

Talk to our Kubernetes migration team →
