
Kubectl Restart Pod 2026: 5 Methods to Restart Kubernetes Pods

Engineering Team

Looking for a kubectl restart pod command? Unlike Docker's simple docker restart, Kubernetes has no direct equivalent: pods are replaced, not restarted in place. There are, however, several effective ways to restart pods, each with different use cases and trade-offs. This guide covers all 5 methods for restarting Kubernetes pods in 2026.

Quick Reference: Kubectl Restart Pod Methods

Method          | Command                                      | Zero Downtime | Best For
Rolling Restart | kubectl rollout restart deployment/name      | Yes           | Production deployments
Delete Pod      | kubectl delete pod pod-name                  | No*           | Single pod issues
Scale to Zero   | kubectl scale --replicas=0 then --replicas=N | No            | Full reset needed
Env Variable    | kubectl set env deployment/name VAR=value    | Yes           | Config-triggered restart
Replace Pod     | kubectl replace --force -f pod.yaml          | No            | Standalone pods

*Managed pods are automatically recreated by their controller


Why Kubernetes Doesn’t Have “kubectl restart pod”

Kubernetes is designed around declarative state management, not imperative commands. Instead of telling Kubernetes “restart this pod,” you describe the desired state and let Kubernetes figure out how to achieve it.

Key concepts:

  • Pods are ephemeral and meant to be replaced, not restarted
  • Controllers (Deployments, StatefulSets, DaemonSets) manage pod lifecycle
  • Rolling updates ensure zero downtime during restarts

Method 1: Rolling Restart (Recommended)

The kubectl rollout restart command is the recommended way to restart pods managed by a Deployment, StatefulSet, or DaemonSet. It performs a rolling update with zero downtime.

Basic Usage

# Restart a deployment
kubectl rollout restart deployment/nginx-deployment

# Restart in specific namespace
kubectl rollout restart deployment/api-server -n production

# Restart a StatefulSet
kubectl rollout restart statefulset/mysql

# Restart a DaemonSet
kubectl rollout restart daemonset/fluentd

How It Works

  1. Kubernetes adds a restart annotation to the pod template (see the sketch after this list)
  2. This triggers a rolling update
  3. New pods are created before old ones are terminated
  4. Readiness probes ensure traffic only goes to healthy pods
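
Under the hood, kubectl rollout restart simply stamps a timestamp annotation (kubectl.kubernetes.io/restartedAt) onto the pod template. As a rough sketch, you could trigger the same rolling update manually with a patch:

# Roughly what "kubectl rollout restart" does behind the scenes
kubectl patch deployment/nginx -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}}}}}"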

Verify the Restart

# Check rollout status
kubectl rollout status deployment/nginx-deployment

# Watch pods being recreated
kubectl get pods -l app=nginx -w

# Check pod creation timestamps
kubectl get pods -l app=nginx -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp

Example Output

$ kubectl rollout restart deployment/nginx
deployment.apps/nginx restarted

$ kubectl rollout status deployment/nginx
Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...
deployment "nginx" successfully rolled out

Method 2: Delete the Pod

Deleting a pod forces Kubernetes to create a new one (if managed by a controller).

Basic Usage

# Delete a single pod
kubectl delete pod nginx-abc123

# Delete with namespace
kubectl delete pod api-server-xyz -n production

# Delete multiple pods
kubectl delete pod pod1 pod2 pod3

# Delete pods by label
kubectl delete pods -l app=nginx

Force Delete (Stuck Pods)

For pods stuck in Terminating state:

# Force delete immediately
kubectl delete pod stuck-pod --grace-period=0 --force

# Force delete in namespace
kubectl delete pod stuck-pod -n production --grace-period=0 --force

Warning: Force delete should be used sparingly. It doesn’t wait for graceful shutdown and may cause data loss.
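
Before reaching for --force, it is often worth checking whether a finalizer is what's blocking deletion, since pods stuck in Terminating frequently have one:

# Inspect finalizers on the stuck pod (non-empty output often explains the hang)
kubectl get pod stuck-pod -o jsonpath='{.metadata.finalizers}'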

When to Use

  • Single pod is unhealthy or stuck
  • Quick restart of specific pod
  • Testing pod scheduling behavior

Limitations

  • Standalone pods won't be recreated - Only pods managed by Deployments, ReplicaSets, StatefulSets, or DaemonSets are automatically recreated (the check after this list shows how to tell)
  • No rolling update - The pod is terminated before a new one starts
  • Potential downtime - If you only have one replica
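
To tell whether a given pod will come back after deletion, inspect its owner. A minimal check (empty output means the pod is standalone and will not be recreated):

# Print the controller kind that owns the pod, e.g. "ReplicaSet"
kubectl get pod nginx-abc123 -o jsonpath='{.metadata.ownerReferences[0].kind}'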

Method 3: Scale to Zero and Back

Scaling replicas to zero and back forces all pods to be recreated.

Basic Usage

# Scale down to zero
kubectl scale deployment/nginx --replicas=0

# Scale back up
kubectl scale deployment/nginx --replicas=3

One-Liner

kubectl scale deployment/nginx --replicas=0 && kubectl scale deployment/nginx --replicas=3
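
One caveat: the one-liner hard-codes the replica count. A safer sketch captures the current count first so you scale back to the right number:

# Record the current replica count, then bounce the deployment
REPLICAS=$(kubectl get deployment/nginx -o jsonpath='{.spec.replicas}')
kubectl scale deployment/nginx --replicas=0
kubectl scale deployment/nginx --replicas="$REPLICAS"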

When to Use

  • Need a complete restart of all pods simultaneously
  • Troubleshooting persistent issues
  • Testing startup behavior

Limitations

  • Causes downtime - All pods are terminated before new ones start
  • Not suitable for production - Unless downtime is acceptable
  • Loses in-memory state - No graceful handoff

Method 4: Update Environment Variable

Changing the pod template (like adding an environment variable) triggers a rolling restart.

Basic Usage

# Add/update an environment variable
kubectl set env deployment/nginx RESTART_TIME="$(date)"

# Or use a more descriptive variable
kubectl set env deployment/nginx DEPLOY_TRIGGER="restart-$(date +%s)"

How It Works

  1. Kubernetes detects the pod template changed
  2. Triggers a rolling update
  3. Same zero-downtime behavior as kubectl rollout restart
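
You can confirm the variable landed in the pod template with the --list flag (using the RESTART_TIME variable from the example above):

# List env vars currently set on the deployment's pod template
kubectl set env deployment/nginx --list | grep RESTART_TIME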

When to Use

  • On older clusters where kubectl rollout restart is unavailable (the command was added in kubectl 1.15)
  • When you also need to set configuration
  • In CI/CD pipelines that need explicit change triggers

Method 5: Replace Pod (Standalone Pods)

For standalone pods not managed by a controller:

Basic Usage

# Replace pod using manifest
kubectl replace --force -f pod.yaml

# Get current pod, delete, and recreate
kubectl get pod nginx -o yaml > /tmp/nginx.yaml
kubectl delete pod nginx
kubectl apply -f /tmp/nginx.yaml

When to Use

  • Standalone pods (not recommended in production)
  • Testing and development
  • One-off jobs

Restart All Pods in a Namespace

Rolling Restart All Deployments

kubectl rollout restart deployment -n production
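
To wait for every deployment in the namespace to finish rolling, one approach is a simple loop (a sketch; adjust the timeout to your rollout speed):

# Wait for each deployment in the namespace to finish its rollout
for d in $(kubectl get deployments -n production -o name); do
  kubectl rollout status "$d" -n production --timeout=300s
done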

Restart Pods by Label

# Delete all pods with label
kubectl delete pods -l app=backend -n production

# Rolling restart deployments with label
kubectl rollout restart deployment -l tier=frontend

Restart All Pods (Nuclear Option)

# Delete all pods in namespace (they'll be recreated)
kubectl delete pods --all -n production

Warning: This causes downtime. Only use in development/testing.


Troubleshooting Pod Restarts

Check Why Pod is Restarting

# Check pod events
kubectl describe pod nginx-abc123

# Check container logs
kubectl logs nginx-abc123

# Check previous container logs (if crashed)
kubectl logs nginx-abc123 --previous

# Check restart count
kubectl get pod nginx-abc123 -o jsonpath='{.status.containerStatuses[0].restartCount}'

Common Restart Reasons

Exit Code | Meaning             | Common Cause
0         | Success             | Normal termination
1         | Application error   | Code exception
137       | SIGKILL (OOMKilled) | Memory limit exceeded
139       | SIGSEGV             | Segmentation fault
143       | SIGTERM             | Graceful shutdown
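
To pull the last exit code straight from the API, jsonpath works well. A quick sketch using the example pod name from above:

# Print the exit code of the container's most recent termination
kubectl get pod nginx-abc123 -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'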

Fix OOMKilled Pods

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  template:
    spec:
      containers:
        - name: api
          resources:
            requests:
              memory: "256Mi"
            limits:
              memory: "512Mi"  # Increase if OOMKilled
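
After raising the limit, it helps to watch actual memory usage against it. This assumes metrics-server is installed and that your pods carry an app=api label:

# Show live CPU/memory usage for the api pods (requires metrics-server)
kubectl top pod -l app=api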

Best Practices for Restarting Pods

1. Always Use Controllers

Never run standalone pods in production. Use Deployments, StatefulSets, or DaemonSets:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27  # pin a specific tag rather than :latest

2. Configure Readiness Probes

Ensure new pods are ready before old ones are terminated:

readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10

3. Set Proper Resource Limits

Prevent OOMKilled restarts:

resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"

4. Handle SIGTERM Gracefully

Your application should handle termination signals:

import signal
import sys

def graceful_shutdown(signum, frame):
    print("Shutting down gracefully...")
    # Close connections, save state
    sys.exit(0)

signal.signal(signal.SIGTERM, graceful_shutdown)
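
Kubernetes waits only 30 seconds by default between SIGTERM and SIGKILL. If your shutdown takes longer, extend the grace period; a sketch using a hypothetical api deployment:

# Give containers 60s (instead of the default 30s) to shut down cleanly
kubectl patch deployment/api -p '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":60}}}}'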

5. Use Pod Disruption Budgets

Prevent too many pods from being restarted at once:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: nginx
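
You can check how much disruption the budget currently allows before kicking off a restart:

# The ALLOWED DISRUPTIONS column shows how many pods may be evicted right now
kubectl get pdb nginx-pdb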

Kubectl Restart Pod in CI/CD Pipelines

GitHub Actions Example

- name: Restart Deployment
  run: |
    kubectl rollout restart deployment/${{ env.DEPLOYMENT_NAME }} -n ${{ env.NAMESPACE }}
    kubectl rollout status deployment/${{ env.DEPLOYMENT_NAME }} -n ${{ env.NAMESPACE }} --timeout=300s

GitLab CI Example

deploy:
  script:
    - kubectl rollout restart deployment/api -n production
    - kubectl rollout status deployment/api -n production --timeout=5m

Comparison: When to Use Each Method

Scenario                  | Recommended Method
Production deployment     | kubectl rollout restart
Single unhealthy pod      | kubectl delete pod
Complete reset needed     | Scale to zero and back
Older K8s version (<1.15) | kubectl set env
Standalone pod            | kubectl replace --force
All pods in namespace     | kubectl rollout restart deployment -n ns

Conclusion

While there’s no direct kubectl restart pod command, Kubernetes provides multiple methods to restart pods:

  1. kubectl rollout restart - Best for production (zero downtime)
  2. kubectl delete pod - Quick fix for single pods
  3. Scale replicas - Complete reset
  4. kubectl set env - Config-triggered restart
  5. kubectl replace --force - Standalone pods

For production environments, always use kubectl rollout restart to ensure zero-downtime restarts with proper rolling updates.


Need help with Kubernetes troubleshooting? Book a free 30-minute consultation with our DevOps experts.
