
Kubectl Rollout Restart 2026: Zero-Downtime Pod Restarts in Kubernetes

Engineering Team

Kubectl rollout restart is the recommended method for restarting Kubernetes workloads without downtime. Introduced in Kubernetes 1.15, this command triggers a rolling update that gracefully replaces pods while maintaining application availability. This guide covers everything you need to know about kubectl rollout restart in 2026.

Quick Reference: Kubectl Rollout Restart Commands

Command                                         Description
kubectl rollout restart deployment/nginx        Restart a deployment
kubectl rollout restart statefulset/mysql       Restart a StatefulSet
kubectl rollout restart daemonset/fluentd       Restart a DaemonSet
kubectl rollout restart deployment -n prod      Restart all deployments in a namespace
kubectl rollout restart deployment -l app=api   Restart by label selector
kubectl rollout status deployment/nginx         Check rollout progress
kubectl rollout undo deployment/nginx           Roll back if issues arise

What Is Kubectl Rollout Restart?

The kubectl rollout restart command triggers a rolling update by adding a restart annotation to the pod template. Kubernetes then gradually replaces old pods with new ones, maintaining availability throughout, provided the deployment is configured for it (see Zero Downtime Requirements below).

How It Works

  1. Command adds kubectl.kubernetes.io/restartedAt annotation with current timestamp
  2. Kubernetes detects pod template change
  3. Rolling update begins based on deployment strategy
  4. New pods start and pass readiness probes
  5. Old pods are terminated after new pods are ready
# What kubectl rollout restart adds:
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2026-01-25T10:30:00Z"
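You can reproduce this behavior by hand with kubectl patch. A minimal sketch, assuming a deployment named nginx (the kubectl call is shown commented out since it needs a live cluster):

```shell
# Build the same strategic-merge patch that kubectl rollout restart applies:
# an updated restartedAt annotation on the pod template.
TS="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\":\"$TS\"}}}}}"
echo "$PATCH"

# Applying it is equivalent to: kubectl rollout restart deployment/nginx
# kubectl patch deployment nginx -p "$PATCH"
```

Because only the pod template changes, the Deployment controller performs an ordinary rolling update, exactly as it would for an image change.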

Basic Usage

Restart a Deployment

kubectl rollout restart deployment/nginx

Output:

deployment.apps/nginx restarted
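To confirm the restart was recorded, you can read the annotation back with jsonpath (a sketch; note the dots in the annotation key must be escaped with `\.`):

```shell
kubectl get deployment nginx \
  -o jsonpath='{.spec.template.metadata.annotations.kubectl\.kubernetes\.io/restartedAt}'
```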

Restart with Namespace

kubectl rollout restart deployment/api-server -n production

Restart StatefulSet

kubectl rollout restart statefulset/mysql

Restart DaemonSet

kubectl rollout restart daemonset/fluentd

Restart Multiple Resources

All Deployments in Namespace

kubectl rollout restart deployment -n production

By Label Selector

# Restart all deployments with app=backend
kubectl rollout restart deployment -l app=backend

# Restart deployments with tier=frontend
kubectl rollout restart deployment -l tier=frontend -n production

Multiple Specific Deployments

kubectl rollout restart deployment/api deployment/worker deployment/scheduler

Monitor Rollout Progress

Check Status

kubectl rollout status deployment/nginx

Output:

Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...
deployment "nginx" successfully rolled out

Watch Pods During Rollout

kubectl get pods -l app=nginx -w

Output:

NAME                   READY   STATUS              RESTARTS   AGE
nginx-7d8f675f-abc12   1/1     Running             0          2d
nginx-7d8f675f-def34   1/1     Running             0          2d
nginx-8e9g786g-ghi56   0/1     ContainerCreating   0          5s
nginx-8e9g786g-ghi56   1/1     Running             0          10s
nginx-7d8f675f-abc12   1/1     Terminating         0          2d

Non-Blocking Status Check

# Don't wait for completion
kubectl rollout status deployment/nginx --watch=false

# Check specific revision
kubectl rollout status deployment/nginx --revision=3

Set Timeout

# Fail if not complete in 5 minutes
kubectl rollout status deployment/nginx --timeout=5m

Rollback If Issues

Undo Last Rollout

kubectl rollout undo deployment/nginx

Rollback to Specific Revision

# View revision history
kubectl rollout history deployment/nginx

# Rollback to revision 2
kubectl rollout undo deployment/nginx --to-revision=2
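In scripts, the restart, status check, and rollback can be combined into one guarded step. A sketch, assuming a deployment named nginx:

```shell
# Restart, wait for completion, and roll back automatically on failure
kubectl rollout restart deployment/nginx
if ! kubectl rollout status deployment/nginx --timeout=5m; then
  echo "Rollout failed; rolling back" >&2
  kubectl rollout undo deployment/nginx
  exit 1
fi
```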

Pause and Resume

# Pause rollout
kubectl rollout pause deployment/nginx

# Make changes without triggering rollout
kubectl set image deployment/nginx nginx=nginx:1.21

# Resume rollout
kubectl rollout resume deployment/nginx

Zero Downtime Requirements

For true zero-downtime restarts, ensure your deployment has:

1. Multiple Replicas

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3  # Minimum 2 for zero downtime

2. Readiness Probe

spec:
  containers:
    - name: api
      readinessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
        successThreshold: 1
        failureThreshold: 3

3. Proper Rolling Update Strategy

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # Never have fewer than desired replicas
      maxSurge: 1            # Create 1 extra pod during update

4. Graceful Shutdown Handling

spec:
  containers:
    - name: api
      lifecycle:
        preStop:
          exec:
            command: ["/bin/sh", "-c", "sleep 10"]
  terminationGracePeriodSeconds: 30

5. Pod Disruption Budget

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: api

Complete Example: Production-Ready Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: api:v1.0.0
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /live
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 15"]
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
      terminationGracePeriodSeconds: 30
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: api

CI/CD Integration

GitHub Actions

name: Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        # Cluster credentials (kubeconfig) setup omitted for brevity

      - name: Restart Deployment
        run: |
          kubectl rollout restart deployment/api -n production
          kubectl rollout status deployment/api -n production --timeout=300s

GitLab CI

restart:
  stage: deploy
  script:
    - kubectl rollout restart deployment/api -n production
    - kubectl rollout status deployment/api -n production --timeout=5m
  only:
    - main

ArgoCD

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api
spec:
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - RespectIgnoreDifferences=true
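With selfHeal enabled, ArgoCD would otherwise revert the restartedAt annotation back to the state in Git, undoing the restart. A sketch of the matching ignoreDifferences entry this sync option respects (the `~1` escapes the `/` in the annotation key, per JSON Pointer rules):

```yaml
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/template/metadata/annotations/kubectl.kubernetes.io~1restartedAt
```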

Troubleshooting

Rollout Stuck

# Check rollout status
kubectl rollout status deployment/api

# Check pod events
kubectl describe deployment api

# Check new pod status
kubectl get pods -l app=api
kubectl describe pod <new-pod-name>

Pods Not Becoming Ready

# Check readiness probe
kubectl describe pod api-new-abc123 | grep -A 10 "Readiness"

# Check logs
kubectl logs api-new-abc123

# Check events
kubectl get events --sort-by='.lastTimestamp' | grep api

Rollback Not Working

# Check revision history
kubectl rollout history deployment/api

# Check specific revision
kubectl rollout history deployment/api --revision=2

# Force rollback
kubectl rollout undo deployment/api --to-revision=1

Best Practices

1. Always Check Status After Restart

kubectl rollout restart deployment/api && \
kubectl rollout status deployment/api --timeout=5m

2. Use Labels for Organized Restarts

# Restart all backend services
kubectl rollout restart deployment -l tier=backend

# Restart by environment
kubectl rollout restart deployment -l env=staging

3. Set Appropriate Timeouts

# CI/CD pipeline with timeout
kubectl rollout status deployment/api --timeout=300s || exit 1

4. Monitor During Rollout

# Watch pods and events
kubectl get pods -l app=api -w &
kubectl get events --watch &
kubectl rollout status deployment/api

5. Document Restart Reasons

# Use annotations to document why
kubectl annotate deployment/api restart-reason="Config change for feature-x"
kubectl rollout restart deployment/api

Rollout Restart vs Alternatives

Method                    Zero Downtime   Use Case
kubectl rollout restart   Yes             Standard restart
kubectl delete pod        No*             Single pod issues
kubectl scale             No              Full reset
kubectl set image         Yes             Image updates
kubectl patch             Yes             Config changes

*Depends on replicas and PDB


Conclusion

Kubectl rollout restart is the gold standard for restarting Kubernetes workloads in 2026. Key takeaways:

  • Use for Deployments, StatefulSets, and DaemonSets
  • Ensures zero downtime with rolling updates
  • Always verify with kubectl rollout status
  • Configure readiness probes and PDBs for reliability
  • Use kubectl rollout undo for quick rollbacks

Master this command to safely restart applications in production without impacting users.


Need help with Kubernetes deployments? Book a free 30-minute consultation with our experts.
