
FinOps Consulting: We Cut AWS Bills by 40% (Real Examples)

Engineering Team 2026-03-19

Most AWS bills are 30-50% higher than they need to be. We know because we have audited dozens of them. The waste follows the same patterns every time — oversized instances, unused resources, missing commitments, and storage nobody touches.

This is not theory. These are the exact strategies we use in our FinOps consulting engagements, with real cost reductions from real client work.

What FinOps Actually Means

FinOps is not a tool. It is an operational framework where engineering, finance, and business teams collaborate to manage cloud costs. The FinOps Foundation defines three phases:

  1. Inform — understand what you are spending and why
  2. Optimise — reduce waste and improve efficiency
  3. Operate — continuously govern and improve

Most teams skip straight to “buy reserved instances” without understanding their spending patterns. That is why savings do not stick. Our engagements start with visibility, then move to action.

The 6 Strategies That Deliver 40%+ Savings

1. Right-Size Compute (10-35% savings)

This is the single biggest win in every engagement. Most EC2 instances are oversized because someone picked a “safe” instance type during initial setup and never revisited it.

What we find:

  • 40-60% of instances run below 10% average CPU utilisation
  • Memory is typically 20-30% utilised
  • Teams choose m5.xlarge when t3.medium would handle the workload

How we fix it:

# Pull 14-day CloudWatch metrics for all instances
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-1234567890 \
  --start-time 2026-03-05T00:00:00Z \
  --end-time 2026-03-19T00:00:00Z \
  --period 3600 \
  --statistics Average Maximum

We analyse two weeks of CPU, memory, network, and disk metrics. Then we recommend right-sized instances:

| Current Instance | Avg CPU | Avg Memory | Recommended | Monthly Savings |
|---|---|---|---|---|
| m5.2xlarge ($280/mo) | 8% | 15% | t3.large ($60/mo) | $220 |
| c5.xlarge ($124/mo) | 12% | 25% | c6g.large ($62/mo) | $62 |
| r5.xlarge ($182/mo) | 5% | 40% | r6g.large ($92/mo) | $90 |
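The selection rule behind a table like this can be sketched in a few lines of Python. This is a simplified illustration: the CPU threshold and the candidate mapping are assumptions, and our real analysis also weighs memory, network, and disk before recommending a move.

```python
# Flag instances whose two-week average CPU suggests downsizing.
# The threshold and candidate mapping are illustrative assumptions.
DOWNSIZE_CPU_THRESHOLD = 15.0  # percent average CPU

# Hypothetical mapping: oversized type -> (candidate, $/mo now, $/mo after)
CANDIDATES = {
    "m5.2xlarge": ("t3.large", 280, 60),
    "c5.xlarge": ("c6g.large", 124, 62),
    "r5.xlarge": ("r6g.large", 182, 92),
}

def rightsizing_report(instances):
    """instances: iterable of (instance_id, instance_type, avg_cpu_percent)."""
    report = []
    for instance_id, itype, avg_cpu in instances:
        if avg_cpu < DOWNSIZE_CPU_THRESHOLD and itype in CANDIDATES:
            target, cost_now, cost_after = CANDIDATES[itype]
            report.append({
                "instance": instance_id,
                "recommend": target,
                "monthly_saving": cost_now - cost_after,
            })
    return report

# A busy c5.xlarge is left alone; the idle m5.2xlarge is flagged.
print(rightsizing_report([("i-0001", "m5.2xlarge", 8.0),
                          ("i-0002", "c5.xlarge", 55.0)]))
```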

Real result: One client had 45 EC2 instances. After right-sizing, their monthly compute bill dropped from $12,400 to $7,800 — a 37% reduction with zero performance impact.

2. Migrate to Graviton Processors (20-40% savings on compute)

AWS Graviton (ARM-based) instances deliver up to 40% better price-performance than equivalent x86 instances. Most workloads run without modification.

What works on Graviton without changes:

  • Containerised workloads (Docker images need multi-arch builds)
  • Python, Node.js, Java, Go applications
  • PostgreSQL, MySQL, Redis, Elasticsearch

What needs testing:

  • Applications with x86-specific compiled dependencies
  • Legacy software with no ARM support

Migration approach:

# Multi-arch Docker build for Graviton compatibility
FROM --platform=$BUILDPLATFORM node:20-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM --platform=$TARGETPLATFORM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
# Copying node_modules from the builder is safe only when every dependency
# is pure JavaScript; native addons must be rebuilt for the target arch.
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]

# Build for both architectures (multi-platform builds need --push or a
# registry-backed builder to store both image manifests)
docker buildx build --platform linux/amd64,linux/arm64 -t myapp:latest .

Real result: A SaaS client migrated their entire EKS cluster from m5 to m7g (Graviton3) nodes. Compute costs dropped 38% with a 15% improvement in response latency.

3. Spot Instances for Non-Critical Workloads (60-90% savings)

Spot instances cost 60-90% less than on-demand. The trade-off is that AWS can reclaim them with two minutes' notice. This works well for:

  • CI/CD build agents
  • Batch processing and data pipelines
  • Development and staging environments
  • Stateless web workers behind a load balancer

What we set up:

For Kubernetes workloads, we configure Karpenter to use spot instances with fallback to on-demand:

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: spot-workers
spec:
  template:
    spec:
      requirements:
      - key: karpenter.sh/capacity-type
        operator: In
        values: ["spot", "on-demand"]
      - key: node.kubernetes.io/instance-type
        operator: In
        values: ["m7g.large", "m7g.xlarge", "m6g.large", "c7g.large"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default  # assumes an EC2NodeClass named "default" exists
  limits:
    cpu: 100
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized

Real result: A client running CI/CD on 8 dedicated m5.xlarge instances ($1,200/month) switched to spot-based Karpenter provisioning. Monthly cost dropped to $180 — an 85% reduction.
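For workloads that need to react to the two-minute reclamation warning themselves, the instance can watch the spot `instance-action` metadata path. A minimal sketch, assuming IMDSv2; the polling loop and the actual drain logic are left out:

```python
import json
import urllib.error
import urllib.request

IMDS = "http://169.254.169.254/latest"

def should_drain(instance_action_json):
    """True when the spot instance-action document signals reclamation."""
    doc = json.loads(instance_action_json)
    return doc.get("action") in ("stop", "terminate")

def poll_spot_interruption(timeout=2):
    """Poll IMDSv2 once; True means a drain should start now."""
    try:
        token_req = urllib.request.Request(
            f"{IMDS}/api/token", method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "120"})
        token = urllib.request.urlopen(token_req, timeout=timeout).read().decode()
        req = urllib.request.Request(
            f"{IMDS}/meta-data/spot/instance-action",
            headers={"X-aws-ec2-metadata-token": token})
        body = urllib.request.urlopen(req, timeout=timeout).read().decode()
        return should_drain(body)
    except urllib.error.HTTPError:
        return False  # 404: no interruption scheduled
    except urllib.error.URLError:
        return False  # metadata endpoint unreachable (not on EC2)

# A terminate notice should trigger a drain:
print(should_drain('{"action": "terminate", "time": "2026-03-19T12:00:00Z"}'))
```

In a Kubernetes cluster this is usually unnecessary, since Karpenter (or the AWS Node Termination Handler) cordons and drains nodes on interruption for you.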

4. Shutdown Non-Production During Off-Hours (40-65% savings on dev/staging)

Development, staging, and QA environments typically run 24/7 but are only used during business hours. Automating shutdowns saves 40-65%:

Schedule: run 10 hours/day, 5 days/week = 50 hours instead of 168 hours per week, roughly a 70% reduction in running time
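The arithmetic behind that schedule:

```python
# Weekly running hours under the off-hours schedule vs always-on
scheduled_hours = 10 * 5   # 10 hours/day, 5 days/week
always_on_hours = 24 * 7   # full week
reduction = 1 - scheduled_hours / always_on_hours
print(f"{reduction:.0%} less running time")
```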

# Lambda function to stop/start tagged instances
import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    action = event.get('action', 'stop')

    # Only match instances in a state the action can actually change
    state = 'running' if action == 'stop' else 'stopped'
    filters = [
        {'Name': 'tag:Environment', 'Values': ['dev', 'staging']},
        {'Name': 'instance-state-name', 'Values': [state]},
    ]

    # Paginate: fleets can exceed one describe_instances page
    instance_ids = []
    paginator = ec2.get_paginator('describe_instances')
    for page in paginator.paginate(Filters=filters):
        for reservation in page['Reservations']:
            for instance in reservation['Instances']:
                instance_ids.append(instance['InstanceId'])

    if not instance_ids:
        return f"No instances to {action}"

    if action == 'stop':
        ec2.stop_instances(InstanceIds=instance_ids)
    elif action == 'start':
        ec2.start_instances(InstanceIds=instance_ids)

    return f"Requested {action} for {len(instance_ids)} instances"

For Kubernetes, scale non-production clusters to zero nodes outside business hours or use Karpenter’s consolidation to aggressively remove idle nodes.
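The Lambda above is usually driven by a pair of EventBridge schedules. A sketch of the wiring; the function name, account ID, region, and cron times are assumptions (cron times are UTC):

```shell
# Stop dev/staging at 19:00 UTC and restart at 07:00 UTC, weekdays only
aws events put-rule --name stop-nonprod \
  --schedule-expression "cron(0 19 ? * MON-FRI *)"
aws events put-rule --name start-nonprod \
  --schedule-expression "cron(0 7 ? * MON-FRI *)"

# Point each rule at the Lambda with the matching action payload
aws events put-targets --rule stop-nonprod --targets \
  '[{"Id":"1","Arn":"arn:aws:lambda:eu-west-1:123456789012:function:nonprod-scheduler","Input":"{\"action\":\"stop\"}"}]'
aws events put-targets --rule start-nonprod --targets \
  '[{"Id":"1","Arn":"arn:aws:lambda:eu-west-1:123456789012:function:nonprod-scheduler","Input":"{\"action\":\"start\"}"}]'

# The Lambda also needs permission to be invoked by EventBridge
aws lambda add-permission --function-name nonprod-scheduler \
  --statement-id allow-eventbridge --action lambda:InvokeFunction \
  --principal events.amazonaws.com
```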

Real result: One client had $4,200/month in dev/staging costs. Off-hours scheduling reduced it to $1,500/month — a 64% reduction.

5. Storage Tiering and Cleanup (30-80% savings on storage)

S3 storage accumulates silently. We consistently find:

  • Old backups in S3 Standard that should be in Glacier
  • EBS snapshots of long-terminated instances still accruing charges
  • CloudWatch log groups retaining data indefinitely
  • Unattached EBS volumes from deleted instances

Quick wins:

# Find unattached EBS volumes
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[*].{ID:VolumeId,Size:Size,Created:CreateTime}' \
  --output table

# Find snapshots older than 90 days
aws ec2 describe-snapshots --owner-ids self \
  --query 'Snapshots[?StartTime<=`2025-12-19`].{ID:SnapshotId,Size:VolumeSize}' \
  --output table

S3 Lifecycle policy:

{
  "Rules": [
    {
      "ID": "ArchiveOldData",
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" },
        { "Days": 365, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}

Real result: A media company had 50TB in S3 Standard ($1,150/month). After lifecycle policies moved 80% to Glacier and Deep Archive, monthly storage cost dropped to $230 — an 80% reduction.
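The shape of that saving can be sanity-checked with a small estimator. The per-GB monthly prices and the post-migration distribution below are illustrative assumptions, not current AWS pricing:

```python
# Rough S3 monthly cost estimator for a lifecycle migration.
# Per-GB prices are illustrative assumptions, not quoted AWS rates.
PRICE_PER_GB = {
    "STANDARD": 0.023,
    "GLACIER": 0.0036,
    "DEEP_ARCHIVE": 0.00099,
}

def monthly_cost(distribution_gb):
    """distribution_gb: mapping of storage class -> stored GB."""
    return sum(PRICE_PER_GB[cls] * gb for cls, gb in distribution_gb.items())

before = {"STANDARD": 50_000}  # 50TB, all in Standard
after = {"STANDARD": 10_000, "GLACIER": 25_000, "DEEP_ARCHIVE": 15_000}
print(f"before: ${monthly_cost(before):,.0f}/mo, "
      f"after: ${monthly_cost(after):,.0f}/mo")
```

The exact figure depends on the access pattern you can tolerate, since Glacier and Deep Archive add retrieval costs and delays that this sketch ignores.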

6. Commitment Optimisation (20-40% savings on steady-state)

After right-sizing and optimising, lock in savings with commitments for your steady-state workload:

| Commitment Type | Savings | Flexibility | Best For |
|---|---|---|---|
| Compute Savings Plans | 20-30% | Any instance family/region | General compute |
| EC2 Instance Savings Plans | 30-40% | Specific instance family | Stable workloads |
| Reserved Instances (1yr) | 30-38% | Specific instance type | Databases, critical infra |
| Reserved Instances (3yr) | 40-60% | Specific instance type | Long-term stable workloads |

Important: Never buy commitments first. Right-size and optimise first, then commit to what remains. We have seen teams buy 3-year RIs for instances they later downsized — wasting the commitment.

Our approach:

  1. Optimise for 30 days (right-sizing, Graviton, spot, schedules)
  2. Observe stable baseline for 2-4 weeks
  3. Cover 70-80% of baseline with Savings Plans
  4. Leave 20-30% on-demand for flexibility
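The coverage arithmetic in steps 3-4 looks like this; the baseline spend and the discount rate here are assumed figures, not a quote:

```python
# Blended savings from partial Savings Plan coverage.
# Baseline spend and discount rate are illustrative assumptions.
baseline_monthly = 10_000   # steady-state on-demand-equivalent spend
coverage = 0.75             # cover 75% of baseline with a Savings Plan
discount = 0.28             # assumed Compute Savings Plan discount

committed = baseline_monthly * coverage * (1 - discount)
on_demand = baseline_monthly * (1 - coverage)
total = committed + on_demand
print(f"${total:,.0f}/mo vs ${baseline_monthly:,.0f}/mo "
      f"({1 - total / baseline_monthly:.0%} saved)")
```

Partial coverage deliberately trades a few points of discount for the freedom to keep shrinking the uncommitted portion.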

The Full Savings Stack

When we combine all six strategies, the results compound:

| Strategy | Typical Savings | Applied To |
|---|---|---|
| Right-sizing | 10-35% | All compute |
| Graviton migration | 20-40% | Compatible workloads |
| Spot instances | 60-90% | Non-critical, stateless |
| Off-hours scheduling | 40-65% | Non-production |
| Storage tiering | 30-80% | S3, EBS, snapshots |
| Commitment optimisation | 20-40% | Steady-state baseline |

Combined real result: One client’s AWS bill went from $38,000/month to $21,500/month after a 6-week engagement — a 43% reduction ($198,000 annualised savings).
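Note that the strategies do not simply add up: each one only touches a slice of the bill, and later strategies act on amounts the earlier ones already reduced. A simplified illustration of that compounding, with assumed slices and rates (not the actual engagement numbers):

```python
# Each step reduces the fraction of the current bill it applies to,
# so later steps compound on already-reduced amounts.
# The fractions and savings rates below are illustrative assumptions.
bill = 38_000.0

steps = [
    # (fraction of current bill affected, savings rate on that fraction)
    (0.60, 0.25),  # right-sizing across compute
    (0.40, 0.30),  # Graviton on compatible workloads
    (0.20, 0.50),  # off-hours scheduling on non-production
    (0.50, 0.25),  # commitments on the remaining steady-state
]

for affected, rate in steps:
    bill -= bill * affected * rate
print(f"${bill:,.0f}/mo")
```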

Our FinOps Engagement Model

Week 1-2: Assessment

  • Connect to your AWS account (read-only IAM role)
  • Analyse 90 days of Cost Explorer data
  • Map spending by service, team, and environment
  • Identify top 10 optimisation opportunities with estimated savings

Week 3-4: Quick Wins

  • Right-size oversized instances
  • Delete unused resources (unattached EBS, old snapshots, idle load balancers)
  • Set up off-hours scheduling for non-production
  • Configure S3 lifecycle policies

Week 5-6: Strategic Optimisation

  • Graviton migration for compatible workloads
  • Spot instance integration for CI/CD and batch
  • Commitment planning based on optimised baseline
  • Set up cost monitoring dashboards and alerts

Ongoing: Governance

  • Monthly cost review and optimisation recommendations
  • Budget alerts and anomaly detection
  • Tag enforcement for cost allocation
  • Quarterly commitment re-evaluation

Tools We Use

| Tool | Purpose | Cost |
|---|---|---|
| AWS Cost Explorer | Spending analysis | Free |
| AWS Trusted Advisor | Idle resource detection | Included with Business support plan |
| Kubecost | Kubernetes cost allocation | Free tier available |
| AWS Compute Optimizer | Right-sizing recommendations | Free |
| Prometheus + Grafana | Custom cost dashboards | Free (self-hosted) |
| CloudWatch | Resource utilisation metrics | Pay per metric |

We prefer open-source and AWS-native tools over expensive third-party platforms. Most clients do not need a $5,000/month FinOps platform — they need someone to act on the data that is already available for free.


Want to Cut Your AWS Bill by 30-50%?

We have reduced cloud costs for clients across AWS, Azure, and GCP — from startups spending $5K/month to enterprises spending $100K+/month.

Our Kubernetes cost optimisation services and cloud FinOps consulting include:

  • Cost assessment — identify your top savings opportunities in 1 week
  • Quick wins execution — right-sizing, cleanup, and scheduling in weeks 2-3
  • Strategic optimisation — Graviton, spot, and commitment planning in weeks 4-6
  • Ongoing governance — monthly reviews and anomaly detection

Every engagement starts with a no-obligation cost assessment. We show you the savings before you commit.

Get a free AWS cost assessment →
