Cloud repatriation — the strategic movement of workloads from public cloud back to private infrastructure or on-premises data centres — has become one of the defining infrastructure trends of 2026. A Barclays CIO Survey from Q4 2024 found that 86% of CIOs planned to move some workloads from public cloud to private or on-premises environments, the highest rate ever recorded. Meanwhile, IDC research reports that 80% of enterprises expect to repatriate compute or storage workloads within the next 12 months.
This is not an anti-cloud backlash. It is a strategic rebalancing toward hybrid architectures that place the right workloads in the right environments. In this guide, we break down why repatriation is accelerating, when it makes sense, how regulatory pressure is forcing the conversation, and what a well-planned repatriation programme looks like.
Why Cloud Repatriation Is Accelerating in 2026
Several forces converged in 2025 and 2026 that turned repatriation from a fringe talking point into a boardroom priority.
Cloud Cost Shock
Many organisations that adopted a lift-and-shift cloud-first strategy between 2018 and 2022 are now confronting the reality of compounding cloud bills. Egress fees, cross-region data transfer charges, and storage costs for growing datasets have made cloud spending unpredictable. According to Forrester, at least 15% of enterprises are already shifting toward private deployments in response to rising costs.
37signals, the company behind Basecamp and HEY, is the most cited example. After spending roughly $3.2 million annually on AWS, the team invested approximately $600,000 in Dell servers and cut its annual compute bill by $1.5 million to $2 million. They documented the migration of billions of files off S3 and the operational model behind 10 petabytes of data on their own hardware.
Dropbox took a similar path years earlier with its “Magic Pocket” project, partially breaking away from AWS and saving tens of millions annually. Nvidia has estimated that moving large, specialised AI and ML workloads back on premises can yield a 30% cost reduction compared to equivalent public cloud deployments.
For organisations with predictable, steady-state workloads — ERP systems, CI/CD build farms, data lakes, or internal APIs — the economics often favour private infrastructure. Our cloud migration consulting team regularly helps clients model these trade-offs to determine which workloads are cloud-native fits and which are repatriation candidates.
The Broadcom-VMware Effect
Broadcom’s acquisition of VMware fundamentally reshaped the virtualisation market. The shift to bundled VMware Cloud Foundation (VCF) licensing, the elimination of perpetual licences, and significant price increases for many customers forced a reckoning. Organisations that relied on VMware for private cloud suddenly faced a choice: absorb steep cost increases, migrate to alternative virtualisation platforms, or re-evaluate their entire infrastructure strategy.
Broadcom CEO Hock Tan has openly encouraged repatriation, pointing to a global survey where seven out of ten IT professionals planned to bring workloads back on premises. Broadcom’s own IT organisation migrated critical workloads from public cloud database-as-a-service offerings to VMware Data Services Manager, reportedly saving over $10 million. Internal Broadcom analysis suggests a modern private cloud can deliver 40-50% lower total cost of ownership (TCO) compared to public cloud for steady-state workloads.
Ironically, while Broadcom pushes repatriation as a strategy, its own licensing changes are driving some organisations away from VMware entirely — toward open-source alternatives like Proxmox, OpenStack, or Kubernetes-based platforms. If your organisation is evaluating container orchestration as a VMware replacement, our Kubernetes consulting practice can help assess the transition.
AI Infrastructure Demands
The explosive growth of AI workloads in 2025-2026 created a new repatriation driver. Training and inference on GPU clusters in public cloud is expensive, and the data gravity problem — where moving terabytes of training data in and out of cloud regions becomes a bottleneck — pushes organisations toward co-located or on-premises GPU infrastructure.
Gartner predicts that 40% of enterprises will adopt hybrid compute architectures for mission-critical workloads by end of 2026, up from just 8% in prior years. Much of this growth is driven by AI teams demanding predictable GPU access without cloud spot-instance volatility.
Regulatory Drivers: DORA, NIS2, and Data Sovereignty
Regulatory pressure is no longer a theoretical concern. It is an active enforcement reality that compels organisations to demonstrate infrastructure control, not just contractual assurances.
DORA (Digital Operational Resilience Act)
The EU’s Digital Operational Resilience Act (DORA) is now enforceable and targets financial institutions specifically. DORA requires:
- Documented exit strategies for every critical ICT third-party provider, including cloud vendors
- Transition periods with provider obligations to support migration, data return, and continuity
- Regular exit-plan testing to validate that migration strategies work in practice, not just on paper
- Concentration risk assessment to evaluate over-reliance on any single cloud provider
Under DORA, exit strategies are not optional. Financial institutions must maintain step-by-step runbooks, data return processes, location transparency, crisis protocols, and secure data deletion procedures. The ECB recommends regular exit-plan testing to ensure cloud independence.
For organisations subject to DORA, repatriation planning is now a compliance requirement — even if no actual migration happens. Having a tested, viable alternative to your current cloud provider is a regulatory expectation.
NIS2 Directive
The NIS2 Directive, with a transposition deadline of October 2024 for member-state law, explicitly requires organisations in critical sectors to assess and manage cybersecurity risks from their supply chain, including cloud providers. This creates formal obligations to evaluate concentration risk and maintain demonstrable control over critical infrastructure components.
GDPR and Data Sovereignty
Data sovereignty concerns continue to intensify. The combination of GDPR, the invalidation of Privacy Shield, and ongoing uncertainty around EU-US data transfer frameworks has pushed European organisations to question whether workloads containing personal data belong in US-headquartered hyperscaler regions at all.
For regulated industries, the question is no longer just “is it compliant?” but “can we prove we control it?” Organisations navigating these requirements benefit from a thorough cybersecurity and compliance assessment before making infrastructure decisions.
When to Repatriate vs. When to Stay in the Cloud
Repatriation is not a universal answer. The decision should be driven by a structured workload assessment, not ideology.
Strong Repatriation Candidates
| Workload Characteristic | Why Repatriation Fits |
|---|---|
| Predictable, steady-state compute | Cost savings of 30-60% vs. pay-as-you-go cloud pricing |
| Large data volumes with low egress needs | Eliminates egress fees and cross-region transfer costs |
| Regulatory-sensitive data (PII, financial, healthcare) | Direct control over encryption keys, access policies, and data residency |
| GPU-intensive AI/ML training | Avoids cloud GPU scarcity and spot-instance unpredictability |
| Legacy applications tightly coupled to VMs | May run more efficiently on private virtualisation than if refactored for cloud-native services |
Better Left in the Cloud
| Workload Characteristic | Why Cloud Fits |
|---|---|
| Highly variable or spiky demand | Auto-scaling avoids over-provisioning on-premises hardware |
| Global distribution requirements | Multi-region presence is expensive to replicate privately |
| Rapid prototyping and experimentation | Spin-up speed matters more than per-unit cost |
| Managed services dependencies (e.g., DynamoDB, BigQuery) | Replacing managed services on premises requires significant operational investment |
| Small teams without infrastructure expertise | Cloud reduces operational burden for lean teams |
The Grey Zone: Hybrid Candidates
Many workloads fall in between. A common pattern is to keep the control plane and burst capacity in public cloud while running steady-state data processing on private infrastructure. For example:
- CI/CD pipelines: Keep orchestration in cloud (GitHub Actions, GitLab CI), run build agents on private infrastructure
- Data lakes: Store and process on premises, use cloud for ad-hoc analytics and dashboards
- Kubernetes clusters: Run production workloads on private clusters, use cloud clusters for development and staging
Hybrid Cloud as the End State
The industry consensus in 2026 is clear: the future is neither cloud-only nor on-premises-only. It is hybrid.
Gartner’s prediction that 40% of enterprises will adopt hybrid compute architectures for mission-critical workloads reflects a pragmatic shift. Organisations are moving past the “cloud at all costs” mindset toward infrastructure pragmatism — placing each workload where it runs most efficiently, securely, and cost-effectively.
A well-designed hybrid architecture provides:
- Workload portability: The ability to move applications between environments without re-engineering
- Cost optimisation: Steady-state workloads on private infrastructure, burst and variable workloads in cloud
- Regulatory flexibility: Sensitive data stays on controlled infrastructure while non-regulated workloads leverage cloud scale
- Vendor negotiation leverage: Demonstrated ability to exit reduces lock-in and improves contract terms
Infrastructure-as-code tools are essential for making hybrid architectures manageable. A consistent Terraform-based provisioning approach across both on-premises and cloud environments reduces operational complexity and enables workload portability.
Building a Cloud Repatriation Plan
A successful repatriation is not a weekend migration. It requires structured planning across several phases.
Phase 1: Workload Assessment and TCO Analysis
Start by cataloguing all cloud workloads and classifying them by:
- Compute pattern: Steady-state vs. variable
- Data gravity: Volume, transfer frequency, and egress costs
- Regulatory requirements: Data residency, sovereignty, and compliance mandates
- Dependency mapping: Managed services, APIs, and integrations that are cloud-specific
- Operational maturity: Whether your team can manage the workload outside the cloud
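These dimensions can be captured as a first-pass scoring rubric. The sketch below is illustrative only — the weights, thresholds, and field names are our assumptions, not a standard, and a real assessment would add many more dimensions:

```python
# Illustrative workload triage rubric across the assessment dimensions above.
# Weights and thresholds are placeholder assumptions for a first-pass scoring.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    steady_state: bool          # compute pattern: steady vs. variable
    monthly_egress_tb: float    # data gravity
    regulated_data: bool        # residency / sovereignty mandates
    cloud_managed_deps: int     # count of cloud-specific managed services used
    team_can_operate: bool      # operational maturity outside the cloud

def repatriation_score(w: Workload) -> int:
    """Higher score = stronger repatriation candidate (max 4 in this rubric)."""
    score = 0
    score += 1 if w.steady_state else 0
    score += 1 if w.monthly_egress_tb > 10 else 0
    score += 1 if w.regulated_data else 0
    score += 1 if w.team_can_operate else 0
    # Cloud-specific managed services pull the workload back toward cloud.
    return score - min(w.cloud_managed_deps, 2)

erp = Workload("erp", steady_state=True, monthly_egress_tb=25,
               regulated_data=True, cloud_managed_deps=0, team_can_operate=True)
print(erp.name, repatriation_score(erp))
```

The point is not the specific weights but that the classification becomes explicit and repeatable across the whole portfolio, instead of living in slide decks.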
Build a total cost of ownership model that accounts for hardware acquisition (or colocation), staffing, power, cooling, network, and the opportunity cost of migration effort. Compare this against projected cloud spend over three to five years, including expected price increases.
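A minimal sketch of that comparison follows. Every figure is an illustrative placeholder — substitute your own hardware quotes, staffing costs, and cloud bills — and the 5% annual cloud increase is an assumption, not a forecast:

```python
# Illustrative multi-year TCO comparison: private infrastructure vs. cloud.
# All dollar figures are placeholder assumptions, not benchmarks.

def private_tco(years, hardware_capex, annual_opex, migration_cost):
    """Up-front capex and migration effort plus recurring colo/staff/power opex."""
    return hardware_capex + migration_cost + annual_opex * years

def cloud_tco(years, annual_spend, annual_increase=0.05):
    """Current cloud bill compounded by an assumed yearly price/usage increase."""
    total, spend = 0.0, annual_spend
    for _ in range(years):
        total += spend
        spend *= 1 + annual_increase
    return total

years = 5
private = private_tco(years, hardware_capex=600_000, annual_opex=250_000,
                      migration_cost=150_000)
cloud = cloud_tco(years, annual_spend=1_200_000)
print(f"private: ${private:,.0f}  cloud: ${cloud:,.0f}")
```

Even a model this crude forces the hidden line items — migration effort, ongoing opex, expected cloud price rises — into the same frame, which is where most repatriation business cases fall apart or hold up.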
Phase 2: Target Architecture Design
Define the destination environment:
- Private cloud on Kubernetes: For teams already container-native, running Kubernetes on bare metal or in colocation provides portability and ecosystem consistency
- Colocation with managed hardware: For organisations that want private infrastructure without building data centres
- Hybrid split: Keep some workloads in cloud, move others to private, with a consistent management layer across both
Phase 3: Migration Execution
Prioritise workloads by risk and impact:
- Start with non-critical workloads (development environments, internal tools) to build confidence
- Move data-heavy, steady-state workloads next (databases, data lakes, object storage)
- Migrate production applications last, with parallel-run periods and automated failover
Use infrastructure-as-code to define both source and destination environments. Automate testing, monitoring, and rollback procedures. Track migration progress against defined success criteria, not just “it works.”
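Success criteria can themselves be encoded as executable checks rather than a checklist. This hypothetical sketch compares simple invariants (row counts and an order-insensitive checksum) between source and destination before a cutover is declared complete; real pipelines would add schema, latency, and error-rate checks:

```python
# Hypothetical cutover check: capture invariants from the source environment
# and compare them against the destination before declaring success.
import hashlib

def checksum(rows):
    """Order-insensitive digest of a result set, for source/destination diffing."""
    h = hashlib.sha256()
    for row in sorted(map(str, rows)):
        h.update(row.encode())
    return h.hexdigest()

def validate_migration(source_rows, dest_rows):
    checks = {
        "row_count": len(source_rows) == len(dest_rows),
        "checksum": checksum(source_rows) == checksum(dest_rows),
    }
    return all(checks.values()), checks

# Same rows in a different order should still validate:
ok, report = validate_migration([(1, "a"), (2, "b")], [(2, "b"), (1, "a")])
print(ok, report)
```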
Phase 4: Operational Readiness
This is the most overlooked phase. Running infrastructure on premises or in colocation requires operational capabilities that cloud abstracts away:
- Monitoring and alerting: Deploy comprehensive observability stacks (Prometheus, Grafana, or equivalent)
- Patching and security: Establish processes for OS, firmware, and application updates
- Capacity planning: Forecast demand and procure hardware in advance rather than on-demand
- Disaster recovery: Design and test backup, replication, and failover procedures
- On-call processes: Ensure team readiness for infrastructure incidents
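Capacity planning deserves particular emphasis: without cloud elasticity, hardware must be ordered weeks or months ahead of demand. A naive linear extrapolation — illustrative only, not something to run production planning on — shows the shape of the problem:

```python
# Naive linear capacity forecast: given monthly utilisation samples, estimate
# when a cluster crosses a procurement threshold. Illustrative only -- real
# planning should account for seasonality and lead times.

def months_until_threshold(samples, threshold=0.8):
    """Least-squares line through (month, utilisation), extrapolated forward."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / denom
    if slope <= 0:
        return None  # utilisation flat or falling: no procurement trigger
    intercept = mean_y - slope * mean_x
    return (threshold - intercept) / slope

# Six months of utilisation growing ~5 points/month from 40%:
eta = months_until_threshold([0.40, 0.45, 0.50, 0.55, 0.60, 0.65])
print(f"threshold crossed around month {eta:.1f}")
```

If the forecast horizon is shorter than your hardware lead time, you have found a capacity risk before it becomes an outage.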
Real-World Repatriation Patterns
Pattern 1: Database Repatriation
Managed database services (RDS, Cloud SQL, Azure SQL) are among the most expensive cloud line items. Organisations with large, predictable database workloads often find that running PostgreSQL, MySQL, or ClickHouse on dedicated hardware reduces costs by 40-60% while improving latency through co-location with application servers.
Pattern 2: Object Storage Migration
S3 and equivalent services are cost-effective for small volumes but become expensive at scale, particularly with frequent access patterns. MinIO on private infrastructure offers S3-compatible APIs, enabling migration without application changes.
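Because the API surface is identical, the application change typically reduces to a single endpoint parameter in the client configuration. The sketch below is hypothetical — the endpoint, credentials, and bucket are placeholders — and the commented lines show how the kwargs would feed a standard S3 SDK client such as boto3:

```python
# S3-compatible migration: application call sites keep the same S3 API and
# only the client configuration changes. All values below are placeholders.

def s3_client_kwargs(target):
    common = {"aws_access_key_id": "PLACEHOLDER_KEY",
              "aws_secret_access_key": "PLACEHOLDER_SECRET"}
    if target == "aws":
        return {**common, "region_name": "eu-west-1"}
    if target == "minio":
        # Same API surface; only the endpoint points at the private MinIO.
        return {**common, "endpoint_url": "https://minio.internal.example:9000"}
    raise ValueError(f"unknown target: {target}")

# client = boto3.client("s3", **s3_client_kwargs("minio"))
# client.get_object(Bucket="app-data", Key="report.csv")  # call sites unchanged
print(s3_client_kwargs("minio")["endpoint_url"])
```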
Pattern 3: Kubernetes Cluster Repatriation
Organisations running managed Kubernetes (EKS, AKS, GKE) for steady-state production workloads can reduce costs significantly by running upstream Kubernetes on bare metal or in colocation. The application layer (containers, Helm charts, GitOps pipelines) remains unchanged — only the underlying infrastructure shifts.
This pattern works particularly well when combined with a Kubernetes cost optimisation strategy that has already identified workloads with predictable resource consumption.
Pattern 4: AI/ML Training Infrastructure
GPU-intensive workloads with consistent utilisation (above 60-70%) almost always favour private infrastructure. The capital expenditure on GPU servers pays back within 12-18 months compared to equivalent cloud GPU instance costs, and teams gain guaranteed access without competing for spot capacity.
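The payback arithmetic itself is simple; what matters is being honest about utilisation and opex. The figures below are illustrative assumptions — an 8-GPU server at an assumed $300k versus an assumed $6 per GPU-hour in cloud — not quotes:

```python
# Illustrative GPU payback calculation. All prices are placeholder assumptions:
# an 8-GPU server bought outright vs. renting the same GPUs in public cloud.

def payback_months(capex, cloud_hourly, onprem_monthly_opex,
                   utilisation=0.7, hours_per_month=730):
    """Months until capex is recovered by the monthly cloud-vs-on-prem saving."""
    cloud_monthly = cloud_hourly * hours_per_month * utilisation
    saving = cloud_monthly - onprem_monthly_opex
    if saving <= 0:
        return None  # at this utilisation, cloud is the cheaper option
    return capex / saving

# 8 GPUs at an assumed $6/GPU-hour vs. a $300k server plus $3k/month opex:
months = payback_months(capex=300_000, cloud_hourly=8 * 6.0,
                        onprem_monthly_opex=3_000)
print(f"payback in ~{months:.1f} months")
```

Note that the conclusion inverts at low utilisation: drop the same workload to a fraction of these hours and `payback_months` stretches out or returns `None`, which is exactly why the 60-70% utilisation floor matters.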
Common Mistakes to Avoid
Underestimating operational complexity. Cloud abstracts significant operational work. Repatriating without investing in automation, monitoring, and on-call processes creates a worse outcome than staying in cloud.
Ignoring egress costs during migration. Moving large datasets out of cloud incurs egress fees. Factor these one-time costs into the business case and plan data migration during low-cost windows or negotiate egress fee waivers with your provider.
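The one-time exit bill is easy to estimate up front. The per-gigabyte rate below is an illustrative list-price figure; real pricing is tiered and, as noted, often negotiable for a documented exit:

```python
# Rough one-time egress estimate for the migration business case.
# $0.09/GB is an illustrative list-price rate; real bills are tiered and
# frequently negotiable, so treat this as an upper-bound sanity check.

def egress_cost(total_tb, rate_per_gb=0.09):
    return total_tb * 1024 * rate_per_gb

for tb in (50, 200, 500):
    print(f"{tb:>4} TB -> ${egress_cost(tb):>10,.0f}")
```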
Repatriating everything. The goal is not zero cloud. It is the right workload in the right place. Over-repatriating can eliminate the flexibility benefits that cloud provides for variable and experimental workloads.
Skipping the parallel-run phase. Running workloads simultaneously in both environments before cutting over reduces risk. It costs more short-term but prevents costly outages.
Neglecting vendor contracts. Review cloud provider contracts for minimum commitments, reserved instance obligations, and termination terms before planning a timeline.
The Strategic Framing: Repatriation as Risk Management
The most effective way to position cloud repatriation internally is as risk management, not as a rejection of cloud technology:
- Financial risk: Reducing exposure to unpredictable cloud cost increases and vendor pricing changes
- Regulatory risk: Demonstrating control and exit capability as required by DORA, NIS2, and GDPR
- Concentration risk: Avoiding excessive dependency on any single infrastructure provider
- Operational risk: Ensuring business continuity even if a cloud provider experiences extended outages or policy changes
This framing aligns with how regulators, auditors, and boards think about infrastructure decisions. It is not about whether cloud is good or bad — it is about whether the organisation has deliberate, tested control over its critical infrastructure.
Plan Your Cloud Repatriation with Expert Guidance
Cloud repatriation is a high-stakes infrastructure decision that demands careful planning, realistic cost modelling, and operational readiness. Getting it wrong is more expensive than staying in the cloud.
Tasrie IT Services provides comprehensive cloud repatriation and hybrid infrastructure consulting to help organisations:
- Assess workload suitability with structured TCO analysis that accounts for hidden costs on both sides
- Design hybrid architectures that place each workload in its optimal environment
- Execute migrations with automated infrastructure-as-code pipelines, parallel-run validation, and zero-downtime cutover
- Build operational readiness including monitoring, security, capacity planning, and disaster recovery
Our team has guided enterprises through complex cloud-to-hybrid transitions across regulated industries including financial services, healthcare, and government.
Discuss your cloud repatriation strategy with Tasrie IT Services