ENGINEERING

Airflow 2 End of Life: We Migrated 200+ DAGs Before the Deadline (What You Must Know)

Amjad Syed 2026-05-11

So Apache Airflow 2 officially hit end of life on April 22, 2026. If you’re still running any 2.x version right now, you’re basically on unsupported software. No security patches, no bug fixes, no provider package updates. Nothing.

We just wrapped up migrating a client’s Airflow 2.8 setup - over 200 DAGs - to Airflow 3.1 on EKS. Honestly it went smoother than we expected but there were definitely some surprises along the way. This post covers what the EOL actually means for your team, what the real risks are if you stay put, and what your options look like going forward.

The Official Timeline

Here’s the official support lifecycle straight from the Apache Airflow project:

| Version | Latest Patch | State | First Release | Limited Maintenance | End of Life |
| --- | --- | --- | --- | --- | --- |
| 3.x | 3.2.1 | Active | Apr 22, 2025 | TBD | TBD |
| 2.x | 2.11.2 | EOL | Dec 17, 2020 | Oct 22, 2025 | Apr 22, 2026 |
| 1.10 | 1.10.15 | EOL | Aug 27, 2018 | Dec 17, 2020 | Jun 17, 2021 |

Airflow 2 went into limited maintenance back on October 22, 2025. During that phase they were only shipping security patches and critical bug fixes. As of April 22, 2026, even that stopped. The 2.x line is done.

What Actually Breaks When You Stay on Airflow 2

Let’s get specific about what “end of life” means in practice, because a lot of teams underestimate this.

No Security Patches

Any CVE discovered in Airflow 2 or its dependencies going forward will not get patched. And if you think “how bad can it be?” - just look at what’s been found recently:

| CVE | What It Does | Severity | Fixed In |
| --- | --- | --- | --- |
| CVE-2026-33858 | Remote code execution via crafted XCom payloads - lets DAG authors execute arbitrary code on the webserver | Critical | Airflow 3.2.0 only |
| CVE-2025-66388 | Secret values exposed to authenticated UI users through rendered templates | High | Airflow 3.1.4 |
| CVE-2025-68438 | API keys, database passwords, tokens visible in cleartext when template fields exceed length threshold | High | Airflow 3.1.6 |
| CVE-2025-54831 | Sensitive connection details exposed to unauthorized users | High | Airflow 3.0.4 |
| CVE-2025-67895 | Remote code execution in webserver context via Edge provider on Airflow 2 | Critical | Providers Edge 2.0.0 (Airflow 3 only) |
| CVE-2024-45784 | Sensitive config variables leaked to logs by DAG authors | Medium | Airflow 2.10.3 |

That’s just the last 18 months or so. Notice how the recent critical ones - the RCE via XCom and the Edge provider exploit - are only fixed in Airflow 3. There’s no Airflow 2 patch for those. If you’re still on 2.x, you’re exposed and there is no fix coming.

But here’s the thing that a lot of people miss - Airflow 3 doesn’t just patch individual CVEs. It fixes entire categories of vulnerabilities by changing the architecture. The biggest one is removing direct metadata database access from tasks. In Airflow 2, any task code could import Airflow’s database sessions and models, query the metadata DB directly, and potentially read secrets, modify other DAG runs, or mess with scheduler state. That’s not a bug, that was just how Airflow 2 worked - and it’s the root cause behind many of those CVEs above.
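
To make that concrete, here’s a minimal sketch of the kind of pattern Airflow 2 permitted. The callable and what it prints are made up for illustration; the imports are real Airflow 2 internals:

```python
# Airflow 2: any task could open a metadata database session directly.
# This is the anti-pattern Airflow 3 eliminates - illustration only.
from airflow.settings import Session
from airflow.models import Connection

def leak_everything():  # hypothetical task callable
    session = Session()
    try:
        # Full read access to the metadata DB, stored connections included:
        for conn in session.query(Connection).all():
            print(conn.conn_id, conn.get_uri())  # credentials in cleartext
    finally:
        session.close()
```

On Airflow 3 this fails outright - the task runner never receives database credentials, so there’s no session to open.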

Airflow 3 replaces all of that with a dedicated Task Execution API. Tasks now talk to the API server, which is the only thing that touches the database. This means:

  • Task isolation - a compromised or malicious task can’t read other tasks’ secrets or modify scheduler state
  • Least privilege access - tasks only get the data they need through the API, not full DB access
  • Fewer database connections - reduces the blast radius of connection-level exploits
  • Secret redaction by default - the API layer handles masking secrets before they reach task code or the UI

On top of that, the webserver split (separate API server + DAG processor) means the attack surface is smaller for each component. A vulnerability in the UI doesn’t automatically give you access to the DAG parsing layer or the scheduler internals.

These aren’t incremental patches. This is a fundamentally more secure architecture. Most of the credential-leak and RCE vulnerabilities that have plagued Airflow 2 over the years simply can’t happen the same way in Airflow 3.

Airflow has had a steady stream of security vulnerabilities and that stream won’t stop just because support ended. The fixes will.

If a critical vulnerability shows up in the Airflow webserver, scheduler, or any of its Python dependencies tomorrow, you’re on your own. You either patch it yourself or you live with it. Neither option is great.

Provider Packages Stop Working

This is the one that catches teams off guard. Airflow’s provider packages for Snowflake, Databricks, BigQuery, AWS and other platforms are actively dropping Airflow 2 support. New versions of these providers require Airflow 3. As the ecosystem moves forward your connections to external systems will start breaking.

You won’t get a nice warning about it either. You’ll get a pip install failure or a confusing import error after what seemed like a routine dependency update.

Python and OS Compatibility Degrades

Airflow 2.11 supports Python 3.9 through 3.12. As Python 3.9 and 3.10 reach their own end of life, and OS-level libraries keep updating, maintaining a working Airflow 2 environment gets progressively harder. Dependency conflicts pile up. Docker base images move on. It’s one of those things that gets worse slowly and then all at once.

No Bug Fixes

Known issues in the scheduler, webserver and CLI will stay exactly as they are. If you hit a bug you either work around it or contribute a patch to a dead branch that nobody will merge. We’ve seen teams waste days debugging issues that were already fixed in Airflow 3 - that’s time you don’t get back.

Compliance and Audit Risks

This is where things shift from “kind of annoying” to “genuinely urgent” for a lot of organizations.

SOC 2

SOC 2 Type II audits require you to demonstrate that you’re using supported, patched software. Running Airflow 2 after EOL means you can’t show evidence of vendor-provided security patches. Your auditor will flag it. You’ll need a compensating control or a migration plan with a hard deadline - and compensating controls are never fun to maintain.

HIPAA

If your Airflow pipelines process PHI (Protected Health Information) - and plenty of healthcare data engineering teams use Airflow for exactly this - running unsupported software creates a direct compliance gap. HIPAA’s Security Rule requires maintaining security patches for systems handling ePHI. An unpatched orchestrator sitting in the middle of your data pipeline is basically a finding waiting to happen.

PCI-DSS

PCI-DSS Requirement 6.3.3 explicitly requires patching critical vulnerabilities within a defined timeframe. You can’t patch what has no patches available. If Airflow touches any system in your cardholder data environment you’ve got a problem.

General Risk Posture

Even if you’re not in a regulated industry, most enterprise security teams and cyber insurance policies require running supported software. EOL systems end up on vulnerability reports and sooner or later someone asks why you haven’t migrated. Usually at the worst possible time.

Why You Need to Migrate Now (Not Later)

We’ve talked to a lot of teams who know they need to move off Airflow 2 but keep pushing it to next quarter. Here’s why waiting makes things worse, not easier:

  • The security window is already open. Every day you’re on Airflow 2 is a day where a new CVE could drop with no patch coming. The longer you wait the more likely you are to get caught.
  • Provider packages are already breaking. Snowflake, Databricks, BigQuery - the providers your DAGs depend on are releasing versions that only work with Airflow 3. Pin your versions all you want, eventually something upstream forces your hand.
  • Your team’s knowledge gets stale. The Airflow community, blog posts, Stack Overflow answers, conference talks - everything is moving to Airflow 3. The longer you stay on 2.x the harder it gets to find help when something goes wrong.
  • The migration doesn’t get smaller. You’re not going to have fewer DAGs next quarter. You’re not going to have less custom code. The scope only grows.
  • Compliance deadlines don’t wait. If you’re SOC 2 or HIPAA audited, your auditor will eventually ask about that unsupported software. Better to have “we migrated” than “we’re planning to migrate.”

We’ve seen teams turn a 4-week migration into a 3-month emergency project because they waited until a provider package broke in production. Don’t be that team.

Why Airflow 3 Is the Right Move

Look, there are other orchestrators out there - Prefect, Dagster, Kestra, Temporal. They’re all solid tools. But if you’re already running Airflow in production with a pile of DAGs, switching to a completely different platform is a way bigger project than upgrading. You’d be rewriting everything from scratch, retraining your team, and rebuilding all your operational tooling around a new system.

For most teams the answer is straightforward: upgrade to Airflow 3.

And honestly Airflow 3 is genuinely good. It’s not just a version bump to stay supported - there are real improvements that make the upgrade worth it on its own:

  • DAG versioning - DAGs run to completion on the version that started them, even if you deploy a new version mid-run. This alone saved us from several headaches during our migration
  • Event-driven scheduling - assets (formerly datasets), external event triggers, and queue-based scheduling (see the sketch after this list)
  • Task Execution API - tasks no longer have direct database access, which massively improves security posture
  • Separate DAG processor - broken DAG files don’t stall your scheduler anymore. If you’ve ever had one bad DAG file take down scheduling for everything else you know why this matters
  • Rebuilt UI - faster, cleaner, and people actually like using it now
  • Native backfill management - backfills are first-class citizens managed by the scheduler, not a fragile CLI command that dies if your terminal disconnects
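
Here’s roughly what asset-based scheduling looks like - a minimal sketch against the Airflow 3 task SDK, where the S3 URI and DAG names are placeholders:

```python
from airflow.sdk import Asset, dag, task

orders = Asset("s3://my-bucket/orders.parquet")  # placeholder URI

@dag(schedule=None)
def produce_orders():
    @task(outlets=[orders])
    def write_orders():
        ...  # write the file; a successful run marks `orders` as updated

    write_orders()

@dag(schedule=[orders])  # runs whenever `orders` is updated - no cron needed
def consume_orders():
    @task
    def load():
        ...

    load()

produce_orders()
consume_orders()
```

No polling, no sensor tasks burning worker slots - the consumer runs when the producer says the data is ready.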

What Breaks During the Upgrade

The upgrade isn’t trivial though. You need to know what’s coming so you’re not scrambling mid-migration. Here’s the stuff that will actually break:

| What Changed | Impact | Fix |
| --- | --- | --- |
| SubDAGs removed | DAGs using SubDAGs will fail | Convert to TaskGroups or dynamic task mapping |
| execution_date removed from context | Any DAG reading execution_date will break | Replace with logical_date |
| Standard operators moved to providers | BashOperator, PythonOperator imports change | Install apache-airflow-providers-standard, update imports |
| Webserver split into two services | Deployment manifests need updating | Deploy separate API server and DAG processor |
| REST API v1 deprecated | API scripts break | Update to REST API v2 |
| Direct metadata DB access removed | Custom operators querying the DB will fail | Refactor to use Task Execution API |
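
The SubDAG removal is the most invasive change if it affects you. Here’s a minimal sketch of the TaskGroup replacement, written in Airflow 2.11 syntax so you can make the change before the upgrade (DAG and task names are placeholders):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator  # moves to providers-standard in 3.x
from airflow.utils.task_group import TaskGroup

with DAG(dag_id="etl", schedule=None, start_date=datetime(2026, 1, 1)) as dag:
    # Formerly a SubDagOperator wrapping its own mini-DAG:
    with TaskGroup(group_id="extract") as extract:
        pull = BashOperator(task_id="pull", bash_command="echo pull")
        stage = BashOperator(task_id="stage", bash_command="echo stage")
        pull >> stage

    transform = BashOperator(task_id="transform", bash_command="echo transform")
    extract >> transform
```

TaskGroups run inside the parent DAG rather than as a separate DAG occupying a worker slot, which also kills the classic SubDAG deadlock problem.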

The execution_date one is probably the most common. If you’ve been using Airflow for a while, chances are it’s sprinkled throughout your Jinja templates and Python callables. The fix is mechanical but you need to find every instance.
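
The fix looks like this - a before/after sketch with a made-up callable:

```python
# Before (Airflow 2) - execution_date is gone from the context in Airflow 3:
#   run_date = "{{ execution_date }}"        # Jinja-templated operator field
#   when = context["execution_date"]         # inside a Python callable

# After - logical_date works on Airflow 2.2+ and 3.x:
run_date = "{{ logical_date }}"

def report(**context):
    when = context["logical_date"]
    print(f"reporting for {when}")
```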

The provider package move is sneaky too. BashOperator and PythonOperator used to come bundled with Airflow core. Now they live in apache-airflow-providers-standard. Miss this and your DAGs fail at import time with a confusing ModuleNotFoundError that doesn’t really tell you what went wrong.
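
The fix is one install plus an import change - assuming you add apache-airflow-providers-standard to your requirements:

```python
# Airflow 2 imports - raise ModuleNotFoundError on Airflow 3:
# from airflow.operators.bash import BashOperator
# from airflow.operators.python import PythonOperator

# Airflow 3 imports, after `pip install apache-airflow-providers-standard`:
from airflow.providers.standard.operators.bash import BashOperator
from airflow.providers.standard.operators.python import PythonOperator
```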

How to Actually Do the Migration

Here’s the path we recommend based on doing this multiple times now:

  1. Get to Airflow 2.11 first if you’re not already there. This version gives you deprecation warnings for everything that will break in 3.x. It’s basically a preview of the work ahead.
  2. Run ruff with the AIR3 rule set. AIR301 and AIR302 flag all the mandatory changes. Takes about 5 minutes and gives you a concrete list instead of guessing.
  3. Fix all the deprecation warnings. Do this while still on 2.11 so you can validate everything works before adding the Airflow 3 changes on top.
  4. Stand up a fresh Airflow 3 environment. Green field is way cleaner than in-place upgrade. Your old metadata database has probably seen better days anyway.
  5. Migrate variables, connections and DAG code. Airflow’s CLI has export/import commands that make this pretty painless.
  6. Test thoroughly in staging. Run your full DAG suite and compare outputs (a quick smoke-test sketch follows this list).
  7. Run both clusters in parallel before cutting over. This is the part you don’t skip.

We went through this exact process recently - took about 6 weeks for a 200+ DAG environment on EKS. We wrote up everything that happened including the Kubernetes manifests, the git-sync config, the LDAP gotchas, all of it. Read the full migration walkthrough here.

How Long Does It Take

Based on our experience migrating multiple Airflow environments this year:

| Environment Size | DAG Count | Estimated Timeline |
| --- | --- | --- |
| Small | Under 50 DAGs | 2-3 weeks |
| Medium | 50-200 DAGs | 4-6 weeks |
| Large | 200+ DAGs | 6-10 weeks |

The timeline depends heavily on how much custom code you have. If your DAGs are mostly standard operators with simple Python callables the migration is pretty straightforward. If you’ve got custom operators that query the metadata database directly, or SubDAGs, or heavy use of execution_date in Jinja templates, plan for the longer end.

And honestly the biggest time sink isn’t the code changes. It’s testing. You need confidence that every DAG produces the same results before and after migration, and that means running both environments in parallel for a while. Don’t skip this part - we’ve seen teams rush it and regret it.

What to Do Right Now

If you’re reading this and still on Airflow 2, here’s what I’d prioritize:

  1. Audit your environment. What Airflow version are you on? How many DAGs? Any SubDAGs? Any custom operators with direct DB access? Get the full picture first.
  2. Run ruff with AIR3 rules. This gives you a concrete list of what needs to change. Takes about 5 minutes and it’ll tell you exactly how big the job is.
  3. Check your provider packages. Which providers are you using? Check if the latest versions still support Airflow 2. A lot of them already don’t.
  4. Read our migration guide. We documented every step of upgrading Airflow 2.8 to 3.1 on Kubernetes - the import changes, the variable export, the manifest configs, all the gotchas. Start there.
  5. Set a deadline. Every week on unsupported software is added risk. Get a date on the calendar and hold yourself to it.

The EOL date has already passed. The clock is running.


Need Help With Your Airflow Migration?

We’ve done this migration multiple times now - production Airflow environments on Kubernetes, everything from 30 DAGs to 200+. We handle the infrastructure setup, the DAG code refactoring, the parallel testing, and the cutover so your data team can keep building pipelines instead of fighting infrastructure problems.

Our managed Apache Airflow service covers:

  • Airflow 2 to 3 migration - we handle the full upgrade including DAG code changes and parallel environment validation
  • Kubernetes-native deployments on EKS, AKS, or GKE with production-grade manifests
  • Ongoing managed operations - monitoring, scaling, upgrades so you don’t have to think about Airflow infrastructure again

We just finished this exact migration for a client running 200+ DAGs on EKS. Here’s the full story of how we did it.

Talk to our Airflow team about your migration ->
