Hyperscaler 2026 capex hits ~$700B. Free cash flow is the variable that breaks.

What was announced

On February 6, CNBC reported that combined 2026 AI capex commitments across Amazon, Google, Microsoft, and Meta now approach $700 billion. Amazon: roughly $200 billion. Alphabet: up to $185 billion. Microsoft: an increase over 2025 levels, with analyst consensus near $99 billion for fiscal 2026 (ending June). Meta: a budgeted $115–135 billion. Roughly two-thirds of the spend is AI-related: call it $450 billion of AI infrastructure in a single year, up about 36% versus 2025. Free cash flow projections for the same set of companies show meaningful compression; Amazon is forecast to turn negative, with analysts projecting free cash flow between negative $17 billion and negative $28 billion in 2026.

What it means

Capex of this magnitude rewrites the financial model for the entire frontier compute stack. The hyperscalers are no longer building toward a near-term revenue profile; they are building toward a 5-to-7-year usage curve they believe is coming. That is a different posture from the 2018–2022 capex cycle, which was largely demand-led. This one is conviction-led, and the conviction is asymmetric: if AI compute demand materializes at the projected rate, today’s capex looks conservative; if it lags by even 18 months, the ongoing spend eats free cash flow, and the depreciation schedule eats reported earnings, at a rate the public markets have not yet priced.
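
To make that asymmetry concrete, here is a minimal toy model in Python. Apart from the reported ~$450 billion annual AI capex figure, every parameter is an illustrative assumption of mine, not company guidance: a linear four-year demand ramp, operating cash at peak demand equal to 40% of annual capex, and a seven-year horizon.

    # Toy model: cumulative free cash flow over a 7-year build-out,
    # with and without an 18-month demand lag. All parameters are
    # illustrative assumptions, not forecasts or company data.

    CAPEX_PER_YEAR = 450e9   # annual AI capex, from the reported ~$450B figure
    HORIZON = 7              # years, matching the 5-to-7-year usage curve
    PEAK_CASH_YIELD = 0.40   # assumed operating cash at peak demand, as a fraction of annual capex
    RAMP_YEARS = 4.0         # assumed years for demand to reach peak

    def demand_ramp(t: float, lag: float) -> float:
        """Fraction of peak demand served at year t: linear ramp, shifted by lag."""
        return max(0.0, min(1.0, (t - lag) / RAMP_YEARS))

    def cumulative_fcf(lag: float) -> float:
        """Sum of (operating cash - capex) over the horizon for a given demand lag."""
        total = 0.0
        for year in range(1, HORIZON + 1):
            operating_cash = CAPEX_PER_YEAR * PEAK_CASH_YIELD * demand_ramp(year, lag)
            total += operating_cash - CAPEX_PER_YEAR
        return total

    gap = cumulative_fcf(lag=0.0) - cumulative_fcf(lag=1.5)
    print(f"18-month demand lag costs ~${gap / 1e9:,.0f}B of cumulative FCF")

Under those assumptions the lag alone is worth roughly $270 billion of cumulative free cash flow over the window. Change the ramp or the yield and the number moves, but the sensitivity itself is the point.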

A second-order effect matters more for non-hyperscalers: every CIO planning AI infrastructure in 2026 is now negotiating against a supplier base whose capacity is already partially absorbed by internal hyperscaler workloads. Pricing power for capacity is structurally higher, lead times for premium GPU instances are longer, and the cost-per-token of frontier inference will move with hyperscaler margin pressure rather than with competition.
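
A minimal sketch of that pricing mechanism, assuming scarce capacity lets the supplier price to a required margin rather than to a competitive floor; the cost and margin figures are hypothetical:

    # Sketch: when capacity is scarce, price is set by the supplier's
    # required margin, not by competition. All figures are hypothetical.

    def price_per_million_tokens(supplier_cost: float, required_margin: float) -> float:
        """Price implied by a supplier's unit cost and required gross margin."""
        return supplier_cost / (1.0 - required_margin)

    supplier_cost = 2.00  # assumed supplier cost per million tokens served
    for margin in (0.30, 0.50, 0.65):
        price = price_per_million_tokens(supplier_cost, margin)
        print(f"required margin {margin:.0%} -> ${price:.2f} per million tokens")

The inversion is the point: the buyer’s unit cost becomes a function of the seller’s balance-sheet pressure, not of the buyer’s negotiating skill.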

Andreas’s view

My read on this: $700 billion is not a number that resolves itself by spreadsheet logic. It resolves itself by which hyperscaler is willing to absorb the cash-flow hit longest. The strategic question inside each company is no longer “should we build” but “which competitor blinks first when the free-cash-flow line turns red on quarterly reporting.” Amazon is closest to that line. Microsoft has the strongest cash position to absorb it. Google sits in between. Meta has the most flexibility because its core ad business is funding the AI infrastructure with the lightest accounting drag.

I don’t think the capex commitment will be revised down materially in 2026. The competitive cost of unilaterally easing off — handing GPU capacity, customer relationships, and the model-training cadence to a competitor — is too high. What will happen instead is creative financing: more debt, more partnerships with sovereign wealth and infrastructure funds, more long-term capacity contracts that move spend off the balance sheet. The capex will continue. The accounting around it will get more interesting.

The way I see it, adjacent businesses should not assume the capacity they need will be available at the price they modeled. My expectation is that premium-tier inference and training capacity will be priced as a scarce resource for the rest of 2026 and most of 2027. Any AI roadmap that depends on flat or declining unit costs over that window carries a hidden assumption I think is unlikely to hold.

Three things I’m watching

  1. I’ll be watching whether companies move to lock in multi-year capacity contracts for premium inference and training now, or wait; negotiating against scarcity in 2027 will be more expensive than over-committing modestly in 2026.
  2. The companies that preserve optionality will be the ones that have stress-tested their AI cost models against a scenario where frontier-tier compute prices are flat or rising for 18 months, and redesigned the workflow, not the budget, when the unit economics broke. A sketch of that stress test follows this list.
  3. Hyperscaler free-cash-flow disclosures over the next four quarters are the leading indicator I’m focused on — they will show whether the capex commitments hold or quietly compress.
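
As a concrete version of the stress test in point 2, here is a minimal sketch of a roadmap cost model re-run under three unit-price paths over 18 months. The usage growth rate, starting price, and price-drift parameters are all hypothetical:

    # Stress test: cumulative inference bill over 18 months under three
    # unit-price paths. Growth and price parameters are hypothetical.

    MONTHS = 18
    START_TOKENS = 1e9    # tokens consumed in month 1 (assumed)
    TOKEN_GROWTH = 0.10   # assumed 10% month-over-month usage growth
    START_PRICE = 4.00    # assumed $ per million tokens in month 1

    def total_bill(monthly_price_drift: float) -> float:
        """Cumulative spend over the window for a given monthly price drift."""
        bill, tokens, price = 0.0, START_TOKENS, START_PRICE
        for _ in range(MONTHS):
            bill += (tokens / 1e6) * price
            tokens *= 1 + TOKEN_GROWTH
            price *= 1 + monthly_price_drift
        return bill

    for label, drift in [("declining 2%/mo", -0.02), ("flat", 0.0), ("rising 2%/mo", 0.02)]:
        print(f"{label:>16}: ${total_bill(drift):,.0f} over {MONTHS} months")

A roadmap that only clears its budget on the declining path is carrying exactly the hidden assumption described above.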

— Andreas