After auditing over 50 Kubernetes clusters for healthcare, fintech, and enterprise clients, we’ve found the same secrets security mistakes repeatedly. Base64 encoding isn’t encryption. RBAC rules that grant cluster-wide secrets access. Secrets committed to Git repositories. These aren’t edge cases—they’re the norm.
This guide covers what we’ve actually found in production environments and how to fix it.
The Reality of Kubernetes Secrets Security
Here’s what our audits revealed across 50+ production clusters:
| Security Issue | Percentage Affected |
|---|---|
| No encryption at rest for etcd | 73% |
| Overly permissive RBAC for secrets | 67% |
| Secrets exposed in environment variables | 82% |
| No audit logging for secrets access | 78% |
| Secrets in Git (even “private” repos) | 44% |
| Default service accounts with secrets access | 91% |
| No secret rotation policy | 86% |
These numbers aren’t hypothetical. They come from real production clusters running real workloads.
Mistake #1: Thinking Base64 Is Encryption
The most common misconception: Kubernetes secrets are encrypted because they’re base64 encoded.
They’re not.
```bash
# This is trivial to decode
echo "cGFzc3dvcmQxMjM=" | base64 -d
# Output: password123
```
Base64 is encoding, not encryption. Anyone with kubectl get secret permissions can read your secrets in plain text.
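To make the point concrete, here is a minimal Python sketch that decodes every field of a Secret the way an attacker (or any colleague with read access) would. The JSON is an illustrative stand-in for the output of `kubectl get secret -o json`; the values are made up for the example.

```python
import base64
import json

# Hypothetical `kubectl get secret db-credentials -o json` output, trimmed
# to the data field; the values are examples, not real credentials.
secret_json = '{"data": {"username": "YWRtaW4=", "password": "cGFzc3dvcmQxMjM="}}'

secret = json.loads(secret_json)
# Every value is plain base64 -- one decode call away from plaintext.
decoded = {k: base64.b64decode(v).decode("utf-8") for k, v in secret["data"].items()}
print(decoded)  # {'username': 'admin', 'password': 'password123'}
```

No key, no passphrase, no KMS call: encoding is reversible by design.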
The Fix: Enable Encryption at Rest
Without encryption at rest, secrets are stored in etcd as base64-encoded plain text. If etcd is compromised, so are your secrets.
Step 1: Generate an encryption key
```bash
head -c 32 /dev/urandom | base64
```
Step 2: Create encryption configuration
```yaml
# /etc/kubernetes/enc/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <your-32-byte-base64-key>
      - identity: {} # Allows reading old unencrypted secrets
```
Step 3: Configure API server
Add to your kube-apiserver configuration:
```yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
      volumeMounts:
        - name: enc
          mountPath: /etc/kubernetes/enc
          readOnly: true
  volumes:
    - name: enc
      hostPath:
        path: /etc/kubernetes/enc
        type: DirectoryOrCreate
```
Step 4: Re-encrypt existing secrets
```bash
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```
Step 5: Verify encryption is working
```bash
# Check etcd directly (requires etcd access)
ETCDCTL_API=3 etcdctl get /registry/secrets/default/my-secret | hexdump -C
# Should show encrypted data starting with k8s:enc:aescbc:v1:
```
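If you script this check, the test reduces to a prefix comparison on the raw bytes read from etcd. A minimal sketch (the sample byte strings are illustrative):

```python
def is_encrypted(raw: bytes) -> bool:
    """True if a raw etcd value carries the aescbc provider prefix."""
    return raw.startswith(b"k8s:enc:aescbc:v1:")

# An encrypted value starts with the provider prefix; an unencrypted one
# is just the base64-backed protobuf/JSON of the Secret object.
print(is_encrypted(b"k8s:enc:aescbc:v1:key1:\x8f\x02ciphertext..."))  # True
print(is_encrypted(b'{"kind":"Secret","apiVersion":"v1"}'))           # False
```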
For managed Kubernetes services:
- EKS: Enable envelope encryption with AWS KMS
- GKE: Customer-managed encryption keys (CMEK) for secrets
- AKS: Azure Key Vault integration
Learn more about encryption at rest configuration in the official documentation.
Mistake #2: Overly Permissive RBAC
We regularly find RBAC configurations like this:
```yaml
# DON'T DO THIS
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: developer
rules:
  - apiGroups: [""]
    resources: ["*"] # Includes secrets!
    verbs: ["*"]
```
This grants access to every secret in every namespace. A single compromised pod with this service account becomes a cluster-wide secrets breach.
The Fix: Least Privilege for Secrets
Namespace-scoped, specific secret access:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-secret-reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
    resourceNames: ["app-db-credentials", "app-api-key"] # Only specific secrets
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-read-secrets
  namespace: production
subjects:
  - kind: ServiceAccount
    name: app-sa
    namespace: production
roleRef:
  kind: Role
  name: app-secret-reader
  apiGroup: rbac.authorization.k8s.io
```
Audit your current RBAC:
```bash
# Can the current user read secrets cluster-wide?
# (kubectl auth can-i checks the caller; --list cannot be combined with a verb)
kubectl auth can-i get secrets --all-namespaces

# Check specific service account permissions
kubectl auth can-i get secrets -n production \
  --as=system:serviceaccount:production:app-sa

# List all bindings granting secrets access
kubectl get clusterrolebindings,rolebindings -A -o json | \
  jq '.items[] | select(.roleRef.name == "cluster-admin" or
      (.roleRef.name | contains("secret")))'
```
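Name-based filtering misses roles whose names don't mention secrets. A more reliable audit inspects the rules themselves. Here is a hedged Python sketch of that check; `grants_secret_read` is a helper name invented for this example, and in practice you would feed it the rules from `kubectl get clusterroles -o json`:

```python
def grants_secret_read(rule: dict) -> bool:
    """True if an RBAC rule allows reading Secrets (wildcards count)."""
    groups = rule.get("apiGroups", [])
    resources = rule.get("resources", [])
    verbs = rule.get("verbs", [])
    return (
        ("" in groups or "*" in groups)                      # core API group
        and ("secrets" in resources or "*" in resources)     # secrets, incl. via *
        and any(v in verbs for v in ("get", "list", "watch", "*"))
    )

# The over-permissive ClusterRole from above is flagged; a scoped rule is not.
risky = {"apiGroups": [""], "resources": ["*"], "verbs": ["*"]}
scoped = {"apiGroups": [""], "resources": ["configmaps"], "verbs": ["get"]}
print(grants_secret_read(risky), grants_secret_read(scoped))  # True False
```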
Disable auto-mounting of service account tokens:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: production
automountServiceAccountToken: false # Disable by default
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  serviceAccountName: app-sa
  automountServiceAccountToken: true # Only enable when needed
```
Mistake #3: Secrets in Environment Variables
Environment variables seem convenient, but they have serious security implications:
```yaml
# Common but problematic pattern
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: password
```
Why this is risky:
- Process listings expose them: `ps auxe` shows environment variables
- Crash dumps include them: core dumps contain the full environment
- Logs may leak them: stack traces and debug logs often include env vars
- Child processes inherit them: any subprocess has access
- Pod spec visibility: anyone who can run `kubectl describe pod` sees them
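The inheritance point is easy to demonstrate. This Python sketch puts a (fake) secret in the environment and shows that a child process reads it without any extra privileges:

```python
import os
import subprocess
import sys

# A secret placed in the environment is visible to every child process.
os.environ["DB_PASSWORD"] = "example-not-a-real-secret"

# The child does nothing special -- it simply inherits the parent's environment.
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DB_PASSWORD'])"],
    capture_output=True,
    text=True,
)
print(child.stdout.strip())  # example-not-a-real-secret
```

Any shelled-out tool, sidecar helper, or debugging subprocess gets the same free copy.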
The Fix: Volume Mounts with Restricted Permissions
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  containers:
    - name: app
      image: myapp:latest
      volumeMounts:
        - name: secrets
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: secrets
      secret:
        secretName: db-credentials
        defaultMode: 0400 # Read-only for owner only
        items:
          - key: password
            path: db-password
```
Application code to read from file:
```python
# Python example
def get_db_password():
    with open('/etc/secrets/db-password', 'r') as f:
        return f.read().strip()
```

```go
// Go example (requires "os" and "strings" imports)
func getDBPassword() (string, error) {
    data, err := os.ReadFile("/etc/secrets/db-password")
    if err != nil {
        return "", err
    }
    return strings.TrimSpace(string(data)), nil
}
```
Mistake #4: No Audit Logging
78% of clusters we audited had no audit logging for secrets access. If a secret is compromised, you have no way to know who accessed it or when.
The Fix: Enable Secrets Audit Logging
```yaml
# /etc/kubernetes/audit-policy.yaml
# Note: audit policy is first-match-wins, so exclusions must come first.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Don't log control-plane reads (too noisy)
  - level: None
    users: ["system:kube-scheduler", "system:kube-controller-manager"]
    resources:
      - group: ""
        resources: ["secrets"]
  # Log secrets modifications with full request/response
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["secrets"]
    verbs: ["create", "update", "patch", "delete"]
  # Log all remaining secrets access at Metadata level
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
```
Configure API server:
```yaml
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit.log
- --audit-log-maxage=30
- --audit-log-maxbackup=10
- --audit-log-maxsize=100
```
What to monitor:
```bash
# Find secret access attempts
cat /var/log/kubernetes/audit.log | jq '
  select(.objectRef.resource == "secrets") |
  {
    timestamp: .requestReceivedTimestamp,
    user: .user.username,
    verb: .verb,
    namespace: .objectRef.namespace,
    secret: .objectRef.name
  }'
```
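The same extraction is straightforward in Python if you are shipping audit events to a SIEM. A sketch (the sample log line is trimmed to the relevant fields; real entries carry many more):

```python
import json

# One line of the JSON-lines audit log, reduced to the fields we use here.
line = json.dumps({
    "verb": "get",
    "requestReceivedTimestamp": "2025-01-15T10:32:07Z",
    "user": {"username": "system:serviceaccount:production:app-sa"},
    "objectRef": {"resource": "secrets", "namespace": "production",
                  "name": "db-credentials"},
})

def secret_access(entry: dict):
    """Return a summary dict for secrets events, None for everything else."""
    if entry.get("objectRef", {}).get("resource") != "secrets":
        return None
    return {
        "user": entry["user"]["username"],
        "verb": entry["verb"],
        "namespace": entry["objectRef"].get("namespace"),
        "secret": entry["objectRef"].get("name"),
    }

event = secret_access(json.loads(line))
print(event)
```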
Mistake #5: Using Native Secrets for Production
Kubernetes native secrets work for development, but production environments need more:
- No automatic rotation: You must manually update and restart pods
- No centralized management: Secrets scattered across clusters
- Limited audit trail: Basic API server logging only
- No dynamic secrets: Static credentials that never expire
The Fix: External Secrets Management
Here’s how the major options compare:
| Feature | Native K8s | External Secrets Operator | HashiCorp Vault | Sealed Secrets | SOPS |
|---|---|---|---|---|---|
| Encryption | At rest (if enabled) | Backend-dependent | AES-256-GCM | Public key crypto | AES-256 / KMS |
| Rotation | Manual | Automated sync | Dynamic secrets | Manual re-seal | Manual |
| Audit | API server logs | Backend audit | Full audit trail | Git history | Git history |
| Multi-cluster | Per-cluster | Yes | Yes | Per-cluster | Yes |
| GitOps friendly | No | Yes | Via ESO | Yes | Yes |
| Learning curve | Low | Medium | High | Low | Medium |
External Secrets Operator (ESO)
Best for: Teams using cloud provider secrets managers (AWS Secrets Manager, Azure Key Vault, GCP Secret Manager)
```bash
# Install ESO
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets \
  -n external-secrets --create-namespace
```
AWS Secrets Manager integration:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
            namespace: external-secrets
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
  namespace: production
spec:
  refreshInterval: 1h # Auto-sync every hour
  secretStoreRef:
    name: aws-secrets
    kind: ClusterSecretStore
  target:
    name: db-credentials
    creationPolicy: Owner
  data:
    - secretKey: username
      remoteRef:
        key: production/database
        property: username
    - secretKey: password
      remoteRef:
        key: production/database
        property: password
```
HashiCorp Vault
Best for: Enterprise environments needing dynamic secrets, PKI, and comprehensive audit
```yaml
# Vault Secrets Operator
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  name: vault-auth
  namespace: production
spec:
  method: kubernetes
  mount: kubernetes
  kubernetes:
    role: app-role
    serviceAccount: app-sa
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: db-credentials
  namespace: production
spec:
  vaultAuthRef: vault-auth
  mount: secret
  path: data/production/database
  destination:
    name: db-credentials
    create: true
  refreshAfter: 30s
```
Dynamic database credentials with Vault:
```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
  name: dynamic-db-creds
  namespace: production
spec:
  vaultAuthRef: vault-auth
  mount: database
  path: creds/app-role
  destination:
    name: db-credentials
    create: true
  renewalPercent: 67 # Renew when 67% of TTL elapsed
```
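The `renewalPercent` arithmetic is simple enough to sanity-check by hand. This sketch computes when a renewal fires for a given lease TTL (the function name is ours, not part of the operator's API):

```python
def renewal_delay(ttl_seconds: int, renewal_percent: int) -> float:
    """Seconds after issuance at which the lease is renewed."""
    return ttl_seconds * renewal_percent / 100

# A 1-hour lease with renewalPercent: 67 renews 2412 seconds (~40 min) in,
# leaving a comfortable margin before the credential expires.
print(renewal_delay(3600, 67))  # 2412.0
```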
Learn more about HashiCorp Vault integration with Kubernetes.
Sealed Secrets
Best for: GitOps workflows where you need to commit secrets to Git
```bash
# Install Sealed Secrets controller
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.27.0/controller.yaml

# Install kubeseal CLI
brew install kubeseal # macOS
```
Create a sealed secret:
```bash
# Create regular secret (don't apply it!)
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password=secretpassword \
  --dry-run=client -o yaml > secret.yaml

# Seal it
kubeseal --format yaml < secret.yaml > sealed-secret.yaml

# Now safe to commit to Git
git add sealed-secret.yaml
git commit -m "Add database credentials (sealed)"
```
Limitations of Sealed Secrets:
- Sealed secrets are cluster-specific (can’t migrate between clusters easily)
- No automatic rotation
- Re-sealing required when controller keys rotate
SOPS (Secrets OPerationS)
Best for: Teams already using SOPS for other encryption needs, Flux CD users
```bash
# Install SOPS
brew install sops # macOS

# Create age key
age-keygen -o keys.txt

# Encrypt secret file
sops --encrypt --age $(cat keys.txt | grep public | cut -d: -f2 | tr -d ' ') \
  secret.yaml > secret.enc.yaml
```
SOPS with Flux CD:
```yaml
# .sops.yaml in repo root
creation_rules:
  - path_regex: .*\.enc\.yaml$
    age: age1xxx... # Your public key
```
```yaml
# Flux decryption configuration
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: production
spec:
  decryption:
    provider: sops
    secretRef:
      name: sops-age
```
Mistake #6: Secrets in Git (Even “Private” Repos)
44% of clusters we audited had secrets committed to Git at some point. “We deleted it” doesn’t matter—Git history is forever.
Detection and Remediation
Scan for leaked secrets:
```bash
# Using gitleaks
gitleaks detect --source . --verbose

# Using trufflehog
trufflehog git file://. --only-verified
```
If secrets are already in Git:

1. Rotate the secret immediately—assume it’s compromised
2. Remove it from history (if the repo is private):

   ```bash
   git filter-branch --force --index-filter \
     "git rm --cached --ignore-unmatch path/to/secret.yaml" \
     --prune-empty --tag-name-filter cat -- --all
   git push origin --force --all
   ```

   (`git filter-repo` is the faster, officially recommended replacement for `filter-branch`.)
3. Use GitOps-safe approaches going forward (Sealed Secrets, SOPS, ESO)
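Scanners like gitleaks work by matching lines against tuned rule sets. As a toy illustration of the idea (two patterns only; real tools ship hundreds of rules plus entropy checks, so do not use this as an actual scanner):

```python
import re

# Toy patterns: a generic key=value credential shape, and the AWS access
# key ID format. Both are illustrative, not production rules.
PATTERNS = [
    re.compile(r"(?i)(password|secret|api[_-]?key)\s*[:=]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def looks_like_secret(line: str) -> bool:
    return any(p.search(line) for p in PATTERNS)

print(looks_like_secret("password: hunter2"))  # True
print(looks_like_secret("replicas: 3"))        # False
```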
Pre-commit hooks to prevent future leaks:
```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0
    hooks:
      - id: gitleaks
```
Mistake #7: No Secret Rotation
86% of audited clusters had no secret rotation policy. Credentials created during initial setup remained unchanged for years.
Implementing Automated Rotation
With External Secrets Operator:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: rotating-credentials
spec:
  refreshInterval: 1h # Check for updates hourly
  secretStoreRef:
    name: aws-secrets
    kind: ClusterSecretStore
  target:
    name: api-credentials
```
AWS Secrets Manager handles rotation—ESO syncs the new values automatically.
With HashiCorp Vault (dynamic secrets):
```bash
# Configure database secrets engine
vault write database/config/mydb \
  plugin_name=postgresql-database-plugin \
  allowed_roles="app-role" \
  connection_url="postgresql://{{username}}:{{password}}@db:5432/mydb"

# Create role with TTL
vault write database/roles/app-role \
  db_name=mydb \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';" \
  default_ttl="1h" \
  max_ttl="24h"
```
Applications get fresh credentials every hour—no manual rotation needed.
Forcing pod restarts after secret updates:
```yaml
# Reloader automatically restarts pods when secrets change
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  annotations:
    reloader.stakater.com/auto: "true" # Watches all secrets/configmaps used
```
Install Reloader:
```bash
helm repo add stakater https://stakater.github.io/stakater-charts
helm install reloader stakater/reloader
```
Production Security Checklist
Use this checklist to audit your clusters:
Encryption
- [ ] Encryption at rest enabled for etcd
- [ ] TLS for all API server communication
- [ ] Network policies restricting pod-to-pod traffic

Access Control
- [ ] RBAC restricts secrets access to specific service accounts
- [ ] No wildcard (*) permissions on secrets
- [ ] Service account tokens not auto-mounted by default
- [ ] resourceNames used to limit access to specific secrets

External Secrets Management
- [ ] External secrets manager for production (Vault, ESO, etc.)
- [ ] Automated secret rotation configured
- [ ] Dynamic secrets for databases where possible

Audit and Monitoring
- [ ] Audit logging enabled for secrets access
- [ ] Alerts for unusual secrets access patterns
- [ ] Regular RBAC permission reviews

GitOps Security
- [ ] No plain secrets in Git repositories
- [ ] Pre-commit hooks scanning for secrets
- [ ] Sealed Secrets or SOPS for GitOps workflows

Operational
- [ ] Secret rotation policy documented and tested
- [ ] Incident response plan for secret compromise
- [ ] Regular security audits scheduled
Real-World CVEs to Watch
Recent vulnerabilities affecting Kubernetes secrets:
CVE-2024-3744 - Azure File CSI driver leaked secrets in log files. Applications using Azure File storage had credentials exposed in driver logs.
CVE-2023-2728 - Pods could bypass the mountable-secrets policy enforced by the ServiceAccount admission plugin, gaining access to secrets they should not have been able to mount.
CVE-2020-8554 - Man-in-the-middle attack via LoadBalancer/ExternalIP services could intercept traffic containing secrets.
Stay updated with Kubernetes security announcements and our Kubernetes security news.
Conclusion
Kubernetes secrets security isn’t about enabling one feature—it’s about layered defenses:
- Encrypt at rest - Don’t trust base64
- Least privilege RBAC - Specific secrets, specific service accounts
- Volume mounts over env vars - Reduce exposure surface
- Audit everything - Know who accessed what and when
- External secrets managers - Centralized control and rotation
- GitOps security - Sealed Secrets or SOPS for Git
- Rotate regularly - Automate where possible
The clusters that get breached aren’t the ones with sophisticated attacks—they’re the ones with these basic misconfigurations.
Strengthen Your Kubernetes Secrets Security
Managing secrets securely across multiple clusters and environments requires expertise. Many teams discover vulnerabilities only after an incident.
Our Kubernetes consulting services help you:
- Audit existing clusters for secrets security gaps
- Implement encryption at rest and proper RBAC policies
- Deploy external secrets management (Vault, ESO, or cloud-native solutions)
- Set up automated rotation and monitoring
- Achieve compliance (HIPAA, PCI DSS, SOC 2)
We’ve secured Kubernetes secrets for healthcare organizations handling PHI, fintech companies processing payments, and enterprises running mission-critical workloads.
Get a free Kubernetes security assessment →
Related Resources
- Kubernetes Secrets 2026: Complete Guide
- Kubernetes Security Best Practices 2026
- Kubernetes Security News 2026
- Kubernetes ConfigMaps Guide