Need to restart a pod with kubectl? Unlike Docker’s simple docker restart command, Kubernetes doesn’t have a direct kubectl restart pod command. However, there are several effective methods for restarting pods, each with different use cases and trade-offs. This guide covers all five methods for restarting Kubernetes pods in 2026.
Quick Reference: Kubectl Restart Pod Methods
| Method | Command | Zero Downtime | Best For |
|---|---|---|---|
| Rolling Restart | kubectl rollout restart deployment/name | Yes | Production deployments |
| Delete Pod | kubectl delete pod pod-name | No* | Single pod issues |
| Scale to Zero | kubectl scale --replicas=0 then --replicas=N | No | Full reset needed |
| Env Variable | kubectl set env deployment/name VAR=value | Yes | Config-triggered restart |
| Replace Pod | kubectl replace --force -f pod.yaml | No | Standalone pods |
*Managed pods are automatically recreated by their controller
Why Kubernetes Doesn’t Have “kubectl restart pod”
Kubernetes is designed around declarative state management, not imperative commands. Instead of telling Kubernetes “restart this pod,” you describe the desired state and let Kubernetes figure out how to achieve it.
Key concepts:
- Pods are ephemeral and meant to be replaced, not restarted
- Controllers (Deployments, StatefulSets, DaemonSets) manage pod lifecycle
- Rolling updates ensure zero downtime during restarts
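The same declarative pattern covers restarts: rather than issuing a restart command, you change the desired state (for example, the pod template) and watch the controller converge. A minimal sketch of that loop, assuming a hypothetical deployment.yaml and a deployment named nginx:

```bash
# Declare (or change) the desired state and let the Deployment controller reconcile
kubectl apply -f deployment.yaml

# Observe convergence: ready replicas climb back toward the desired count
kubectl get deployment nginx -o jsonpath='{.status.readyReplicas}/{.spec.replicas}{"\n"}'
```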
Method 1: Rolling Restart (Recommended)
The kubectl rollout restart command is the recommended way to restart pods managed by a Deployment, StatefulSet, or DaemonSet. It performs a rolling update with zero downtime.
Basic Usage
# Restart a deployment
kubectl rollout restart deployment/nginx-deployment
# Restart in specific namespace
kubectl rollout restart deployment/api-server -n production
# Restart a StatefulSet
kubectl rollout restart statefulset/mysql
# Restart a DaemonSet
kubectl rollout restart daemonset/fluentd
How It Works
- kubectl adds a kubectl.kubernetes.io/restartedAt annotation to the pod template (you can inspect it as shown below)
- This triggers a rolling update
- New pods are created before old ones are terminated
- Readiness probes ensure traffic only goes to healthy pods
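Because the only change is that annotation, you can confirm what the restart actually did by dumping the pod template’s annotations (same deployment name as in the examples above):

```bash
# After a rollout restart, the kubectl.kubernetes.io/restartedAt timestamp
# appears among the pod template annotations
kubectl get deployment nginx-deployment \
  -o jsonpath='{.spec.template.metadata.annotations}{"\n"}'
```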
Verify the Restart
# Check rollout status
kubectl rollout status deployment/nginx-deployment
# Watch pods being recreated
kubectl get pods -l app=nginx -w
# Check pod ages
kubectl get pods -l app=nginx -o custom-columns=NAME:.metadata.name,AGE:.metadata.creationTimestamp
Example Output
$ kubectl rollout restart deployment/nginx
deployment.apps/nginx restarted
$ kubectl rollout status deployment/nginx
Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...
deployment "nginx" successfully rolled out
Method 2: Delete the Pod
Deleting a pod forces Kubernetes to create a new one (if managed by a controller).
Basic Usage
# Delete a single pod
kubectl delete pod nginx-abc123
# Delete with namespace
kubectl delete pod api-server-xyz -n production
# Delete multiple pods
kubectl delete pod pod1 pod2 pod3
# Delete pods by label
kubectl delete pods -l app=nginx
Force Delete (Stuck Pods)
For pods stuck in Terminating state:
# Force delete immediately
kubectl delete pod stuck-pod --grace-period=0 --force
# Force delete in namespace
kubectl delete pod stuck-pod -n production --grace-period=0 --force
Warning: Force delete should be used sparingly. It doesn’t wait for graceful shutdown and may cause data loss.
When to Use
- Single pod is unhealthy or stuck
- Quick restart of specific pod
- Testing pod scheduling behavior
Limitations
- Standalone pods won’t be recreated - Only pods managed by Deployments, ReplicaSets, StatefulSets, or DaemonSets are automatically recreated (you can check a pod’s owner as shown after this list)
- No rolling update - The pod is terminated before a new one starts
- Potential downtime - If you only have one replica
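To check up front whether a pod will come back after deletion, inspect its owner reference; a quick sketch with a placeholder pod name:

```bash
# Prints the owning controller kind (e.g. ReplicaSet); empty output means a
# standalone pod that will NOT be recreated after deletion
kubectl get pod nginx-abc123 \
  -o jsonpath='{.metadata.ownerReferences[0].kind}{"\n"}'
```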
Method 3: Scale to Zero and Back
Scaling replicas to zero and back forces all pods to be recreated.
Basic Usage
# Scale down to zero
kubectl scale deployment/nginx --replicas=0
# Scale back up
kubectl scale deployment/nginx --replicas=3
One-Liner
kubectl scale deployment/nginx --replicas=0 && kubectl scale deployment/nginx --replicas=3
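The one-liner hard-codes the replica count. A slightly safer sketch, assuming the pods carry an app=nginx label, records the current count and waits for the old pods to terminate before scaling back up:

```bash
# Remember the desired replica count, scale to zero, wait for the pods to go away, then restore
REPLICAS=$(kubectl get deployment nginx -o jsonpath='{.spec.replicas}')
kubectl scale deployment/nginx --replicas=0
kubectl wait --for=delete pod -l app=nginx --timeout=120s
kubectl scale deployment/nginx --replicas="$REPLICAS"
```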
When to Use
- Need a complete restart of all pods simultaneously
- Troubleshooting persistent issues
- Testing startup behavior
Limitations
- Causes downtime - All pods are terminated before new ones start
- Not suitable for production - Unless downtime is acceptable
- Loses in-memory state - No graceful handoff
Method 4: Update Environment Variable
Changing the pod template (like adding an environment variable) triggers a rolling restart.
Basic Usage
# Add/update an environment variable
kubectl set env deployment/nginx RESTART_TIME="$(date)"
# Or use a more descriptive variable
kubectl set env deployment/nginx DEPLOY_TRIGGER="restart-$(date +%s)"
How It Works
- Kubernetes detects the pod template changed
- Triggers a rolling update
- Same zero-downtime behavior as kubectl rollout restart (a manual equivalent is sketched below)
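Because this is just another pod-template change, clusters too old for kubectl rollout restart can get the same rolling update from a manual patch. A sketch, assuming a deployment named nginx and an arbitrary annotation key:

```bash
# Patch the pod template with a timestamp annotation to force a rolling update
kubectl patch deployment nginx -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"restarted-at\":\"$(date +%s)\"}}}}}"
```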
When to Use
- On Kubernetes versions older than 1.15, which lack kubectl rollout restart (use the patch shown above)
- When you also need to set configuration
- In CI/CD pipelines that need explicit change triggers
Method 5: Replace Pod (Standalone Pods)
For standalone pods not managed by a controller:
Basic Usage
# Replace pod using manifest
kubectl replace --force -f pod.yaml
# Get current pod, delete, and recreate
kubectl get pod nginx -o yaml > /tmp/nginx.yaml
kubectl delete pod nginx
kubectl apply -f /tmp/nginx.yaml
When to Use
- Standalone pods (not recommended in production)
- Testing and development
- One-off jobs
Restart All Pods in a Namespace
Rolling Restart All Deployments
kubectl rollout restart deployment -n production
Restart Pods by Label
# Delete all pods with label
kubectl delete pods -l app=backend -n production
# Rolling restart deployments with label
kubectl rollout restart deployment -l tier=frontend
Restart All Pods (Nuclear Option)
# Delete all pods in namespace (they'll be recreated)
kubectl delete pods --all -n production
Warning: This causes downtime. Only use in development/testing.
Troubleshooting Pod Restarts
Check Why Pod is Restarting
# Check pod events
kubectl describe pod nginx-abc123
# Check container logs
kubectl logs nginx-abc123
# Check previous container logs (if crashed)
kubectl logs nginx-abc123 --previous
# Check restart count
kubectl get pod nginx-abc123 -o jsonpath='{.status.containerStatuses[0].restartCount}'
Common Restart Reasons
| Exit Code | Meaning | Common Cause |
|---|---|---|
| 0 | Success | Normal termination |
| 1 | Application error | Code exception |
| 137 | SIGKILL (OOMKilled) | Memory limit exceeded |
| 139 | SIGSEGV | Segmentation fault |
| 143 | SIGTERM | Graceful shutdown |
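To map a restarting container to an exit code from the table above, read its last terminated state directly (the pod name is a placeholder):

```bash
# Exit code and reason of the most recent termination, e.g. "137 OOMKilled"
kubectl get pod nginx-abc123 \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode} {.status.containerStatuses[0].lastState.terminated.reason}{"\n"}'
```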
Fix OOMKilled Pods
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  template:
    spec:
      containers:
        - name: api
          resources:
            requests:
              memory: "256Mi"
            limits:
              memory: "512Mi"  # Increase if OOMKilled
```
Best Practices for Restarting Pods
1. Always Use Controllers
Never run standalone pods in production. Use Deployments, StatefulSets, or DaemonSets:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
```
2. Configure Readiness Probes
Ensure new pods are ready before old ones are terminated:
```yaml
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```
3. Set Proper Resource Limits
Prevent OOMKilled restarts:
```yaml
resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
```
4. Handle SIGTERM Gracefully
Your application should handle termination signals:
```python
import signal
import sys

def graceful_shutdown(signum, frame):
    print("Shutting down gracefully...")
    # Close connections, save state
    sys.exit(0)

signal.signal(signal.SIGTERM, graceful_shutdown)
```
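The handler only helps if Kubernetes gives it time to run: after SIGTERM, the pod has terminationGracePeriodSeconds (30 seconds by default) before it is killed. If shutdown takes longer, raise the window; a sketch against a hypothetical deployment named nginx:

```bash
# Extend the window between SIGTERM and SIGKILL to 60 seconds
kubectl patch deployment nginx -p \
  '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":60}}}}'
```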
5. Use Pod Disruption Budgets
Prevent too many pods from being restarted at once:
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: nginx
```
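To confirm the budget is in effect and see how many pods may be disrupted at once:

```bash
# ALLOWED DISRUPTIONS shows how many matching pods can be evicted right now
kubectl get pdb nginx-pdb
```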
Kubectl Restart Pod in CI/CD Pipelines
GitHub Actions Example
```yaml
- name: Restart Deployment
  run: |
    kubectl rollout restart deployment/${{ env.DEPLOYMENT_NAME }} -n ${{ env.NAMESPACE }}
    kubectl rollout status deployment/${{ env.DEPLOYMENT_NAME }} -n ${{ env.NAMESPACE }} --timeout=300s
```
GitLab CI Example
```yaml
deploy:
  script:
    - kubectl rollout restart deployment/api -n production
    - kubectl rollout status deployment/api -n production --timeout=5m
```
Comparison: When to Use Each Method
| Scenario | Recommended Method |
|---|---|
| Production deployment | kubectl rollout restart |
| Single unhealthy pod | kubectl delete pod |
| Complete reset needed | Scale to zero and back |
| Older K8s version (<1.15) | kubectl set env |
| Standalone pod | kubectl replace --force |
| All pods in namespace | kubectl rollout restart deployment -n ns |
Conclusion
While there’s no direct kubectl restart pod command, Kubernetes provides multiple methods to restart pods:
- kubectl rollout restart - Best for production (zero downtime)
- kubectl delete pod - Quick fix for single pods
- Scale replicas - Complete reset
- kubectl set env - Config-triggered restart
- kubectl replace --force - Standalone pods
For production environments, always use kubectl rollout restart to ensure zero-downtime restarts with proper rolling updates.
Related Resources
- Kubectl Rollout Restart 2026
- Kubectl Restart Deployment 2026
- Kubernetes Security Best Practices 2026
- Kubernetes Consulting Services
Need help with Kubernetes troubleshooting? Book a free 30-minute consultation with our DevOps experts.