Need to restart a deployment in Kubernetes with kubectl? This guide covers every method, from the recommended `kubectl rollout restart` to alternative approaches. Learn how to achieve zero-downtime restarts and troubleshoot common issues in 2026.
Quick Reference: Restart Deployment Commands
| Command | Zero Downtime | Use Case |
|---|---|---|
| `kubectl rollout restart deployment/nginx` | Yes | Standard restart |
| `kubectl rollout restart deployment -n prod` | Yes | All deployments in namespace |
| `kubectl scale deployment/nginx --replicas=0 && kubectl scale deployment/nginx --replicas=3` | No | Full reset |
| `kubectl set env deployment/nginx RESTART=$(date)` | Yes | Trigger via env change |
| `kubectl patch deployment nginx -p '{"spec":{"template":{"metadata":{"annotations":{"restart":"'$(date)'"}}}}}'` | Yes | Manual annotation |
The Recommended Method: Kubectl Rollout Restart
The `kubectl rollout restart` command is the official and recommended way to restart a deployment in Kubernetes.
Basic Usage
```shell
# Restart a single deployment
kubectl rollout restart deployment/nginx

# Restart a deployment in a specific namespace
kubectl rollout restart deployment/api-server -n production

# Restart all deployments in a namespace
kubectl rollout restart deployment -n staging

# Restart deployments by label
kubectl rollout restart deployment -l app=backend
```
Verify the Restart
```shell
# Check rollout status
kubectl rollout status deployment/nginx

# Watch pods being replaced
kubectl get pods -l app=nginx -w

# Check deployment events
kubectl describe deployment nginx | tail -20
```
Example Output
```
$ kubectl rollout restart deployment/nginx
deployment.apps/nginx restarted

$ kubectl rollout status deployment/nginx
Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...
deployment "nginx" successfully rolled out
```
How Rollout Restart Works
When you run `kubectl rollout restart`, Kubernetes:
- Adds a `kubectl.kubernetes.io/restartedAt` annotation with the current timestamp to the pod template
- Triggers a rolling update according to the deployment's update strategy
- Creates new pods before terminating old ones
- Waits for readiness probes to pass
- Terminates old pods once the new pods are ready
```yaml
# What gets added to the pod template:
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2026-01-25T10:30:00Z"
```
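You can reproduce the RFC 3339 UTC timestamp format used in this annotation with plain `date`; this is a hedged sketch of the format only, not kubectl's own code:

```shell
# Generate an RFC 3339 UTC timestamp like the one kubectl writes
# into kubectl.kubernetes.io/restartedAt
ts=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
echo "kubectl.kubernetes.io/restartedAt: \"$ts\""
```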
Zero-Downtime Configuration
For true zero-downtime restarts, configure your deployment properly:
1. Multiple Replicas
```yaml
spec:
  replicas: 3  # minimum 2 for zero downtime
```
2. Rolling Update Strategy
```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0  # never have fewer pods than desired
      maxSurge: 1        # create one extra pod during the update
```
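With `replicas: 3`, these settings mean the rollout briefly runs four pods while never dropping below three ready ones. The arithmetic is simple:

```shell
# Pod counts during a rolling update (illustrative arithmetic)
replicas=3
max_surge=1
max_unavailable=0
peak_pods=$((replicas + max_surge))        # most pods that exist at once
min_ready=$((replicas - max_unavailable))  # fewest ready pods at any time
echo "peak pods: $peak_pods, minimum ready: $min_ready"
# → peak pods: 4, minimum ready: 3
```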
3. Readiness Probe
```yaml
spec:
  template:
    spec:
      containers:
        - name: app
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 3
```
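With these values, a new pod can be marked ready as early as five seconds after start, and a pod that begins failing is marked unready after `failureThreshold` consecutive failed probes:

```shell
# Probe timing implied by the settings above (illustrative arithmetic)
initial_delay=5
period=5
failure_threshold=3
earliest_ready=$initial_delay                  # first probe that can succeed
unready_after=$((failure_threshold * period))  # seconds of failures before pod is unready
echo "earliest ready: ${earliest_ready}s, marked unready after: ${unready_after}s of failures"
# → earliest ready: 5s, marked unready after: 15s of failures
```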
4. Graceful Shutdown
```yaml
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: app
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 10"]
```
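The `preStop` sleep gives load balancers time to stop routing to the pod, but the app itself should also handle SIGTERM so in-flight work can drain within the grace period. A minimal sketch of a hypothetical entrypoint (not part of the manifest above):

```shell
#!/bin/sh
# Hypothetical entrypoint: trap SIGTERM so in-flight requests can drain
# within the terminationGracePeriodSeconds window.
trap 'echo "SIGTERM received, draining"; exit 0' TERM

echo "app running"
kill -TERM $$   # simulate the kubelet sending SIGTERM (demo only)
```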
5. Pod Disruption Budget
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp
```
Complete Production-Ready Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: api
          image: api:v1.0.0
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /live
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 15"]
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
```
Alternative Restart Methods
Method 2: Scale to Zero and Back
Forces complete pod recreation:
```shell
# Scale down
kubectl scale deployment/nginx --replicas=0

# Scale back up
kubectl scale deployment/nginx --replicas=3
```
One-liner:
```shell
kubectl scale deployment/nginx --replicas=0 && sleep 5 && kubectl scale deployment/nginx --replicas=3
```
Warning: Causes downtime. Use only when full reset is needed.
Method 3: Environment Variable Change
Triggers rolling update by changing pod spec:
```shell
kubectl set env deployment/nginx RESTART_TIME="$(date +%s)"
```
This approach is useful on kubectl versions before 1.15, which lack `rollout restart`.
Method 4: Annotation Patch
Manually add restart annotation:
```shell
kubectl patch deployment nginx -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"restart-time\":\"$(date +%s)\"}}}}}"
```
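The nested quoting above is easy to get wrong; building the payload with `printf` first makes it easier to inspect before applying. A sketch, not a requirement:

```shell
# Build the patch payload separately so the quoting stays readable
patch=$(printf '{"spec":{"template":{"metadata":{"annotations":{"restart-time":"%s"}}}}}' "$(date +%s)")
echo "$patch"
# then: kubectl patch deployment nginx -p "$patch"
```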
Method 5: Delete Pods
Delete pods and let deployment recreate them:
```shell
# Delete all pods in the deployment
kubectl delete pods -l app=nginx

# Delete a specific pod
kubectl delete pod nginx-abc123
```
Note: This is less controlled than `rollout restart`; deleting all pods at once can cause downtime because replacements are not staggered.
Restart Multiple Deployments
All Deployments in Namespace
```shell
kubectl rollout restart deployment -n production
```
By Label Selector
```shell
# Restart backend services
kubectl rollout restart deployment -l tier=backend

# Restart by team
kubectl rollout restart deployment -l team=platform -n production
```
Specific Deployments
```shell
kubectl rollout restart deployment/api deployment/worker deployment/scheduler
```
All Deployments in All Namespaces
```shell
# List all deployments
kubectl get deployments --all-namespaces

# Restart all (be careful!)
kubectl get deployments --all-namespaces -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name --no-headers | \
  xargs -n2 sh -c 'kubectl rollout restart deployment/$1 -n $0'
```
CI/CD Integration
GitHub Actions
```yaml
name: Restart Deployment
on:
  workflow_dispatch:
    inputs:
      deployment:
        description: 'Deployment name'
        required: true
      namespace:
        description: 'Namespace'
        default: 'production'

jobs:
  restart:
    runs-on: ubuntu-latest
    steps:
      - name: Restart Deployment
        run: |
          kubectl rollout restart deployment/${{ inputs.deployment }} -n ${{ inputs.namespace }}
          kubectl rollout status deployment/${{ inputs.deployment }} -n ${{ inputs.namespace }} --timeout=5m
```
GitLab CI
```yaml
restart:
  stage: deploy
  script:
    - kubectl rollout restart deployment/api -n production
    - kubectl rollout status deployment/api -n production --timeout=300s
  when: manual
```
Shell Script with Validation
```shell
#!/bin/bash
set -e

DEPLOYMENT=${1:-api}
NAMESPACE=${2:-production}
TIMEOUT=${3:-5m}

echo "Restarting $DEPLOYMENT in $NAMESPACE..."

# Restart
kubectl rollout restart "deployment/$DEPLOYMENT" -n "$NAMESPACE"

# Wait for completion
if kubectl rollout status "deployment/$DEPLOYMENT" -n "$NAMESPACE" --timeout="$TIMEOUT"; then
  echo "Restart successful!"
  # Verify pods are running
  kubectl get pods -l "app=$DEPLOYMENT" -n "$NAMESPACE"
else
  echo "Restart failed!"
  # Show debugging info
  kubectl describe deployment "$DEPLOYMENT" -n "$NAMESPACE"
  kubectl get events -n "$NAMESPACE" --sort-by='.lastTimestamp' | tail -20
  # Roll back
  echo "Rolling back..."
  kubectl rollout undo "deployment/$DEPLOYMENT" -n "$NAMESPACE"
  exit 1
fi
```
Troubleshooting
Rollout Stuck
```shell
# Check status
kubectl rollout status deployment/api

# Check deployment conditions
kubectl get deployment api -o jsonpath='{.status.conditions}'

# Check pod status
kubectl get pods -l app=api

# Check events
kubectl get events --sort-by='.lastTimestamp' | grep api
```
Pods Not Becoming Ready
```shell
# Describe the new pod
kubectl describe pod api-new-abc123

# Check logs
kubectl logs api-new-abc123

# Check the readiness probe configuration
kubectl get pod api-new-abc123 -o jsonpath='{.spec.containers[0].readinessProbe}'
```
Rollback if Issues
```shell
# Undo the last rollout
kubectl rollout undo deployment/api

# Roll back to a specific revision
kubectl rollout history deployment/api
kubectl rollout undo deployment/api --to-revision=2
```
Common Issues
| Symptom | Cause | Solution |
|---|---|---|
| Pods stuck in `Pending` | Insufficient resources | Check node capacity and quotas |
| `ImagePullBackOff` | Can't pull image | Check image name and registry access |
| Readiness probe failing | App not ready | Fix app or probe config |
| `CrashLoopBackOff` | App crashing | Check logs, fix app |
| Rollout timeout | Slow startup | Increase progress deadline |
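The rollout timeout comes from the deployment's `spec.progressDeadlineSeconds`, which defaults to 600 seconds; raising it gives slow-starting apps more time before the rollout is marked failed:

```yaml
spec:
  progressDeadlineSeconds: 900  # default is 600; raise for slow-starting apps
```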
Best Practices
1. Always Check Status After Restart
kubectl rollout restart deployment/api && \
kubectl rollout status deployment/api --timeout=5m
2. Use Labels for Organized Restarts
```yaml
metadata:
  labels:
    app: api
    tier: backend
    team: platform
```

```shell
kubectl rollout restart deployment -l tier=backend
```
3. Document Restart Reasons
```shell
kubectl annotate deployment/api restart-reason="Config refresh" --overwrite
kubectl rollout restart deployment/api
```
4. Monitor During Restart
```shell
# Watch in real time
watch kubectl get pods -l app=api
```
5. Set Up Alerts
Configure monitoring to alert on:
- Failed rollouts
- Increased pod restarts
- Decreased available replicas
Conclusion
Restarting a deployment is straightforward with `kubectl rollout restart`. Key takeaways:
- Use `kubectl rollout restart deployment/name` for zero-downtime restarts
- Configure readiness probes and the rolling update strategy properly
- Always verify with `kubectl rollout status`
- Use `kubectl rollout undo` for quick rollbacks
- Integrate status checks in CI/CD pipelines
Master deployment restarts to maintain application availability in production.
Related Resources
- Kubectl Rollout Restart 2026
- Kubectl Restart Pod 2026
- Kubectl Rollout Status Deployment 2026
- Kubernetes Consulting Services
Need help with Kubernetes deployments? Book a free 30-minute consultation with our DevOps experts.