The CrashLoopBackOff error in Kubernetes means a container keeps crashing and Kubernetes is restarting it with exponential backoff delays. This is not a bug in Kubernetes — it is Kubernetes telling you that your container cannot stay running. At Tasrie IT Services, we encounter this error more than any other across our client clusters.
This guide takes an error-first approach: we start with the exact error messages and exit codes you see and map them to fixes.
## What CrashLoopBackOff Actually Means

When you run `kubectl get pods`, you see this:

```
NAME                    READY   STATUS             RESTARTS   AGE
myapp-7d4b8c6f5-x2k9p   0/1     CrashLoopBackOff   5          3m
```
Here is what is happening behind the scenes:
- Kubernetes starts the container
- The container crashes (exits with a non-zero code)
- Kubernetes restarts the container after a 10-second delay
- The container crashes again
- Kubernetes increases the delay: 10s → 20s → 40s → 80s → 160s → 300s (max)
- After reaching the 5-minute cap, Kubernetes keeps retrying every 5 minutes indefinitely
The RESTARTS column shows how many times Kubernetes has restarted the container. The backoff timer resets if the container runs successfully for 10 minutes.
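If you want to watch this cycle as it happens, the pod's events record each restart attempt. A minimal sketch, assuming you substitute the placeholder pod and namespace names:

```bash
# Stream events for one pod and watch the Back-off entries appear
kubectl get events -n <namespace> \
  --field-selector involvedObject.name=<pod-name> --watch

# Typical entry during a crash loop:
#   Warning  BackOff  ...  Back-off restarting failed container
```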
## Step-by-Step Error Diagnosis

### Step 1: Get the Exit Code

```bash
kubectl describe pod <pod-name> -n <namespace>
```
Find the `Last State` section in the output:

```
Last State:   Terminated
  Reason:     Error
  Exit Code:  1
  Started:    Mon, 26 Apr 2026 10:15:00 +0000
  Finished:   Mon, 26 Apr 2026 10:15:02 +0000
```
The exit code is the key to diagnosis. See the complete exit code reference below.
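If you want just the exit code without scanning the full describe output, a jsonpath query pulls it directly. A quick sketch (the container index `0` assumes a single-container pod):

```bash
kubectl get pod <pod-name> -n <namespace> \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
```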
### Step 2: Get the Error Logs

```bash
# Logs from the crashed container (--previous is essential)
kubectl logs <pod-name> -n <namespace> --previous

# If the pod has multiple containers
kubectl logs <pod-name> -n <namespace> -c <container-name> --previous

# If --previous returns nothing, try without it (container might still be starting)
kubectl logs <pod-name> -n <namespace>
```
### Step 3: Check Events

```bash
kubectl describe pod <pod-name> -n <namespace> | tail -30
```
Events tell you about image pulls, scheduling, probe failures, and OOM kills.
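You can also query the events directly instead of slicing describe output. A sketch, assuming the events have not yet aged out (the default retention is around an hour):

```bash
# Recent events for the namespace, oldest first; filter to the pod
kubectl get events -n <namespace> --sort-by=.lastTimestamp | grep <pod-name>
```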
## Exit Code Reference and Fixes

### Exit Code 1: Generic Application Error

The application started but encountered a fatal error.

What to check in logs:

```bash
kubectl logs <pod-name> -n <namespace> --previous | head -50
```
Common error messages and fixes:

| Log Error | Cause | Fix |
|---|---|---|
| `Error: Cannot find module` | Missing Node.js dependency | Rebuild image with `npm install` |
| `FileNotFoundError` | Missing config file | Check volume mounts and ConfigMaps |
| `Connection refused` | Database or service unreachable | Check service endpoint and network policy |
| `Permission denied` | File or directory not accessible | Fix file ownership in Dockerfile |
| `KeyError` / undefined variable | Missing environment variable | Add to pod spec or ConfigMap |
| `bind: address already in use` | Port conflict | Change the port or check for duplicate containers |
```bash
# Check environment variables set on the pod
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[0].env}' | python3 -m json.tool

# Check if referenced ConfigMaps exist
kubectl get configmap -n <namespace>

# Check if referenced Secrets exist
kubectl get secret -n <namespace>
```
For ConfigMap issues, see our Kubernetes ConfigMap guide. For Secrets, see our Kubernetes Secrets guide.
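When the logs point at a missing environment variable, one common fix is to source the variables from a ConfigMap in the pod spec. A minimal pod spec fragment; the `myapp` and `myapp-config` names here are hypothetical:

```yaml
containers:
  - name: myapp
    image: myapp:1.0          # hypothetical image
    envFrom:
      - configMapRef:
          name: myapp-config  # must exist in the same namespace, or the pod
                              # fails with CreateContainerConfigError instead
```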
### Exit Code 126: Command Cannot Execute

The entrypoint file exists but is not executable.

```bash
# Check what command the container is trying to run
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[0].command}'
```
Fixes:

- Add `chmod +x` in the Dockerfile for the entrypoint script
- Ensure the entrypoint file uses the correct line endings (LF, not CRLF)
- Check if a `readOnlyRootFilesystem` security context is preventing execution from a writable layer
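You can confirm the first two problems without rebuilding by inspecting the file inside the image. A sketch, assuming Docker is available locally, the image contains a shell, and the entrypoint lives at `/app/entrypoint.sh` (adjust the path for your image):

```bash
# ls -l shows the execute bit; od -c renders a CRLF ending visibly as \r \n
docker run --rm --entrypoint sh <image>:<tag> \
  -c 'ls -l /app/entrypoint.sh && head -n 1 /app/entrypoint.sh | od -c | head -n 2'
```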
### Exit Code 127: Command Not Found

The entrypoint binary or script does not exist at the specified path.

```bash
# Check the command configuration
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[0].command}'
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[0].args}'
```
Common causes:

- Typo in the `command` field
- `command` in the pod spec overrides the Dockerfile `ENTRYPOINT`; the binary might only exist in the Dockerfile's `PATH`
- Using a multi-stage Dockerfile build where the binary was not copied to the final stage
- Alpine-based image missing `bash` (use `/bin/sh` instead)
Debug by running the image interactively:

```bash
kubectl run debug --image=<image>:<tag> --rm -it --restart=Never -- /bin/sh
# Inside: find / -name "<binary-name>" 2>/dev/null
```
### Exit Code 137: OOMKilled (SIGKILL)

The container exceeded its memory limit and was killed by the Linux OOM killer.

```bash
# Confirm it is OOMKilled (not a manual kill)
kubectl describe pod <pod-name> -n <namespace> | grep "OOMKilled"

# Check the memory limit
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[0].resources.limits.memory}'

# Check actual memory usage (only works while the pod is up between crashes)
kubectl top pod <pod-name> -n <namespace>
```
Fixes:

- Increase the memory limit if the application needs more memory
- Fix memory leaks in the application code
- For JVM applications, set `-Xmx` to 75% of the container limit:

  ```yaml
  env:
    - name: JAVA_OPTS
      value: "-Xmx768m -Xms256m"
  resources:
    limits:
      memory: "1Gi"
  ```

- Use VPA to automatically right-size memory limits (a minimal VPA object is sketched after this list)
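For the VPA option, a minimal object looks like the sketch below. This assumes the Vertical Pod Autoscaler components are installed in the cluster; the `myapp` Deployment name is hypothetical:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  updatePolicy:
    updateMode: "Off"   # "Off" only reports recommendations; "Auto" applies them
```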
For a complete OOMKilled guide, see our OOMKilled troubleshooting post.
### Exit Code 139: Segmentation Fault (SIGSEGV)
The application crashed due to a memory access violation. This is common in applications that use native code (C/C++ extensions, Go, Rust) and in cases where the binary itself is corrupted.
Fixes:

- Update the base image to the latest version
- Check if the binary was compiled for the wrong architecture (amd64 vs arm64)
- Enable core dumps for debugging: set `ulimit -c unlimited` in the entrypoint
- Test the image locally: `docker run --rm <image>:<tag>`
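To rule out an architecture mismatch specifically, compare the image's platform against the nodes'. A sketch, assuming Docker is available and the image has been pulled locally:

```bash
# Architecture the image was built for
docker inspect <image>:<tag> --format '{{.Architecture}}'

# Architectures of the cluster nodes
kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture
```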
### Exit Code 143: SIGTERM (Graceful Termination)
This is usually normal — the container received a termination signal and shut down. You see this during rolling updates, pod evictions, and node drains.
When it becomes a CrashLoopBackOff issue:
- A preStop hook is failing
- The application does not handle SIGTERM and exits before completing shutdown
- Another process (like a sidecar) sends SIGTERM to the main container
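If shutdown timing is the problem, the pod spec can give the application more room to exit cleanly. A minimal pod spec fragment; the 60-second grace period and the short preStop sleep are illustrative values, not recommendations:

```yaml
spec:
  terminationGracePeriodSeconds: 60   # default is 30s
  containers:
    - name: myapp                     # hypothetical container name
      lifecycle:
        preStop:
          exec:
            # Delay SIGTERM slightly so in-flight requests can drain first
            command: ["sh", "-c", "sleep 5"]
```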
## Fixing CrashLoopBackOff When There Are No Logs

Sometimes the container crashes so fast that `--previous` returns no logs.
### Approach 1: Check events for clues

```bash
kubectl describe pod <pod-name> -n <namespace> | grep -A 20 "Events:"
```

Look for:

- `Error: configmap "xyz" not found`: ConfigMap does not exist
- `Error: secret "xyz" not found`: Secret does not exist
- `FailedMount`: volume cannot be mounted
- `RunContainerError`: container runtime error
### Approach 2: Run the image manually

```bash
# Override the entrypoint to keep the container running
kubectl run debug --image=<image>:<tag> --restart=Never --command -- sleep 3600

# Exec in and test
kubectl exec -it debug -- /bin/sh
# Manually run: /app/entrypoint.sh or whatever the command is

# Clean up
kubectl delete pod debug
```
### Approach 3: Use ephemeral debug container

```bash
kubectl debug -it <pod-name> -n <namespace> --image=busybox --target=<container-name>
```
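`--target` places the debug container in the crashing container's process namespace, so you can look around even though the target image has no shell. A usage sketch, assuming the runtime supports process namespace sharing and the target process happens to be up between crashes:

```bash
# Run inside the debug session. PID 1 in the shared namespace is the
# target's main process; its root filesystem is reachable via procfs
# (requires root or a matching user in the debug container).
ls /proc/1/root/
```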
## CrashLoopBackOff vs Other Similar Errors

| Error | Meaning | Key Difference |
|---|---|---|
| `CrashLoopBackOff` | Container starts and crashes repeatedly | Container ran and exited |
| `CreateContainerConfigError` | Cannot create the container config | Missing ConfigMap/Secret; container never started |
| `RunContainerError` | Container runtime failed to start it | Runtime error; check containerd/Docker |
| `ImagePullBackOff` | Cannot pull the image | Image or registry issue; container never started |
| `Init:CrashLoopBackOff` | Init container keeps crashing | Debug the init container, not the main container |
| `Error` | Container exited with an error once | If restart policy is `Never`, it won't retry |
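If the STATUS column is truncated or ambiguous, the exact waiting reason is recorded in the pod status. A sketch for a single-container pod:

```bash
kubectl get pod <pod-name> -n <namespace> \
  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'
# Prints CrashLoopBackOff, ImagePullBackOff, CreateContainerConfigError, etc.
```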
## Monitoring and Alerting for CrashLoopBackOff

Set up Prometheus alerts to catch CrashLoopBackOff before it impacts users:

```yaml
groups:
  - name: pod-health
    rules:
      - alert: PodCrashLooping
        expr: rate(kube_pod_container_status_restarts_total[15m]) * 60 * 15 > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.pod }} in {{ $labels.namespace }} is crash looping"
      - alert: PodCrashLoopingCritical
        expr: kube_pod_container_status_restarts_total > 10
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Pod {{ $labels.pod }} has restarted {{ $value }} times"
```
For a comprehensive Kubernetes troubleshooting framework, see our production troubleshooting guide.
## Fix CrashLoopBackOff Faster With Expert Support
CrashLoopBackOff errors in production mean downtime. Our engineers at Tasrie IT Services debug these daily and can help you resolve the current issue and prevent future ones.
Our Kubernetes consulting services include:
- Immediate incident support for production CrashLoopBackOff resolution
- Container image best practices review and optimisation
- Observability setup with Prometheus alerts for restart counts and exit codes