When Microsoft announced $35 billion in infrastructure spending for a single quarter - up 75% year-over-year - the investment world asked: where does all that money go?

It's a lot of capex, and investors are becoming nervous that the big build might not pay off. But to frame that question you have to understand how this money flows. Not just through Nvidia and data centers, but through six interconnected layers, each dependent on the others, each representing an investment opportunity for those who understand how the system works.

Training GPT-4 required 50,000 Nvidia H100 GPUs running for months.

Next-generation models will need 10x to 100x more compute power.

That's why Amazon, Microsoft, Google, Meta, and Oracle are projected to spend $602 billion in 2026, up 36% from 2025. About 75% of that massive sum targets AI-specific infrastructure.

McKinsey projects $5.2 trillion through 2030. This massive spend is driving the US market right now and shows no signs of slowing.

This money flows through six layers, and understanding each reveals where the constraints - and the opportunities - exist right now.

Layer 1: Silicon - The Thinking Machines

Everything starts with specialized processors for AI's matrix multiplication operations.

Nvidia didn't become a $3 trillion company by accident. The company's H100 and Blackwell chips perform trillions of calculations per second with its CUDA software platform locking in developers.

Nvidia trades at 23x forward earnings while growing revenue 94% year-over-year. Blackwell is sold out through 2026.

But Nvidia can't build chips alone. Each Blackwell GPU requires 192GB of HBM3E memory - high-bandwidth memory feeding data at speeds copper traces can't match.

Micron has HBM production sold out through late 2027. DRAM prices are rising 40% through Q2 2026 because each new generation of AI accelerators demands far more memory.

TSMC in Taiwan manufactures 90% of advanced chips at 59% gross margins on $90B annual revenue, launching 2nm production in 2026.

ASML provides the extreme ultraviolet lithography machines making this possible - they're the only company that can.

AMD at $214 is breaking Nvidia's stranglehold with MI300 chips, signing a 6GW OpenAI deal in late 2025.

Broadcom at $347 designs custom chips for hyperscalers wanting Nvidia independence, expecting over $60B in AI chip revenue in fiscal 2026 - triple last year.

The silicon layer represents hundreds of billions annually. But you can't do anything with chips until they're in servers.

Layer 2: Servers - Building the Boxes

A modern AI cluster contains thousands of GPUs, each consuming 700 watts. The thermal density would melt conventional cooling within hours. This demands specialized server manufacturers.
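The scale of the problem is easy to check with back-of-envelope math. The sketch below uses the 700-watt, 50,000-GPU figures cited above; the rack density and overhead multiplier are illustrative assumptions, not vendor specifications.

```python
# Back-of-envelope estimate of AI cluster power draw and rack density.
# GPU_WATTS and GPUS_PER_CLUSTER come from the figures cited in the article;
# GPUS_PER_RACK and OVERHEAD are illustrative assumptions.

GPU_WATTS = 700           # per-GPU draw (H100-class)
GPUS_PER_CLUSTER = 50_000 # GPT-4-scale training cluster
GPUS_PER_RACK = 72        # assumed rack density, for illustration only
OVERHEAD = 1.5            # assumed multiplier for CPUs, networking, cooling

# Total facility draw in megawatts, including non-GPU overhead
cluster_mw = GPU_WATTS * GPUS_PER_CLUSTER * OVERHEAD / 1e6

# GPU-only thermal load per rack in kilowatts
rack_kw = GPU_WATTS * GPUS_PER_RACK / 1e3

print(f"Cluster draw: ~{cluster_mw:.0f} MW")
print(f"Per-rack GPU load: ~{rack_kw:.1f} kW")
```

Even under these rough assumptions, a 50,000-GPU cluster lands above 50 MW, and a single rack lands in the 50-100 kW range that demands liquid cooling.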

Super Micro Computer, despite collapsing 75% to $30, holds a $13B backlog and expects $36B in fiscal 2026 revenue. They specialize in liquid-cooled, high-density racks handling 50-100 kilowatts - five to ten times a traditional rack.

They recently announced a SuperBlade design that reduces cabling by 93%. The stock trades at 0.8x sales after resolving accounting concerns. High-risk turnaround if you believe the backlog.

Dell at $119-129 shipped $15.6B in AI servers through Q3 with an $18.4B backlog, expecting $25B full-year (up 150% YoY).

At ~10x forward earnings, Dell offers a valuation discount, though rising memory costs pressure margins.

Every server needs internal connections: high-speed connectors transferring hundreds of gigabits per second. This invisible infrastructure funnels billions to Layer 3.

Layer 3: Networking & Connectivity - The Invisible Superhighway

Here's where most investors get lost, because networking infrastructure isn't sexy. But when you connect 50,000 GPUs into a single training cluster, the networking fabric becomes as critical as the GPUs themselves.

Arista Networks manufactures the high-speed Ethernet switches that create the data superhighway between compute nodes.

The company's Q3 2025 revenue hit $2.31 billion (up 27.5% year-over-year) at 65.2% gross margins - the kind of profitability that signals genuine competitive advantage. Arista holds a $4.7 billion deferred revenue backlog and expects $2.75 billion in AI data center revenue within its 2026 guidance of $10.65 billion total. The company sells primarily to Meta, Microsoft, and Oracle.

Wall Street consensus: Strong Buy with a median target of $167.50, representing 30% upside.

At 39x forward earnings, Arista is expensive, but the margins and growth trajectory justify the premium.

But switches are just part of the story. Inside every data center, behind every rack, runs a hidden world of connectors, cables, and physical infrastructure that makes the digital connections possible. This is Amphenol's domain.

Amphenol at $141.38 is the global leader in high-speed connectors and cable assemblies. The company reported 53% revenue growth in Q3 2025 (41% organic, excluding acquisitions), with IT datacom representing roughly a third of total sales.

In August 2025, Amphenol agreed to buy CommScope's Connectivity and Cable Solutions unit for $10.5 billion, closing in Q1 2026. The stock currently trades at 45x P/E with a consensus target around $145.

TE Connectivity is the world's largest electrical connector supplier with $17.26 billion in 2025 revenue. While 50% of their business serves automotive markets, data center exposure is growing rapidly. The company employs 93,000 people and has risen 62% over the past year. Analyst targets range from $200 to $316, with a median around $271.

And then there's the optical revolution.

Corning isn't just making fiber optic cables; it's enabling a fundamental shift in data center architecture. Copper cables generate significant heat when transmitting data at high speeds. In AI data centers where every watt matters, that heat becomes a massive problem. Glass fiber generates far less heat, transmits data faster, and uses far less power per bit transferred.

The company's Optical Communications segment generated $1.56 billion in Q2 2025, up 41% year-over-year, with enterprise network revenue up 81% and data center products doubling sequentially. Corning projects $6 billion in full-year 2025 revenue from optical communications, up 29%. UBS set a price target of $109. The company trades at 58x earnings - rich by traditional standards, but AI demand creates pricing power.

All of these companies benefit from a simple physical reality: AI data centers require 5-10x more cabling and connectivity than traditional data centers due to density, speed, and redundancy requirements.

When Microsoft spends $100 billion on infrastructure, roughly $7 billion flows to Amphenol, TE Connectivity, Corning, and similar companies just for the physical connections. Another $3 billion goes to cooling specialists like nVent and Vertiv.

But the most sophisticated networking infrastructure in the world means nothing if you don't have the computing capacity to run it. That capacity lives in the cloud.

Layer 4: Cloud Providers - Infrastructure to Services

All that silicon, servers, and networking culminates in hyperscalers selling AI compute as a service.

Amazon Web Services spent over $125B on capex in 2025, planning higher for 2026. Q3 revenue: $33B (up 20% YoY). CEO Andy Jassy: "As fast as we're adding capacity, we're monetizing it." No inventory problem. Demand outstrips supply.

Microsoft Azure spent $34.9B in a single quarter (up 75% YoY), with fiscal 2026 capex accelerating further. Azure revenue grew 40% in Q3. The OpenAI partnership creates unique positioning: funding model development while capturing cloud revenue. At $478 and 32x forward earnings, expensive but justified.

Google Cloud raised 2025 capex to $91-93B. Q3 cloud revenue: $15.15B (up 34% YoY). Trades at 20x forward earnings. This is the cheapest valuation among hyperscalers, suggesting possible underpricing.

Meta's 2025 capex of $70B will grow "notably larger" in 2026 (potentially $100B), but Meta isn't selling cloud services - it's building AI infrastructure internally for recommendations, moderation, and ad targeting. Q3 operating income: $23B.

The Big Question: Will this be a productive investment or a metaverse repeat? We'll know from 2026 ad performance.

CoreWeave at $79.32 tells a volatile story. IPO'd March 2025 at $40, peaked $183, now $79 after construction delays. $55.6B backlog, $12B projected 2026 revenue, but $14B debt (363% debt-to-equity) while unprofitable. Stock down 60% from highs.

The most sophisticated cloud infrastructure means nothing without physical housing and cooling.

Layer 5: Data Centers - The Physical Layer

An AI training facility isn't a traditional data center. Standard facilities consume 5-10 megawatts. A single AI cluster can consume 100+ megawatts—a small city's worth. These require specialized construction, cooling systems moving thousands of gallons of liquid per minute, and critically, massive electricity access.

Equinix at $779.97 operates 260 data centers across 71 markets, focused on interconnection - the network effect of multiple customers in one facility connected at ultra-low latency.

  • Q3: 10% EBITDA growth

  • Record $394M bookings

  • 92%+ utilization

  • Deutsche Bank: $915 target (17% upside), 2.4% dividend yield

Digital Realty provides massive single-tenant hyperscale facilities via PlatformDigital. 300+ data centers globally specializing in high-density AI workloads. Deutsche Bank: $180 target. More commodity-like than Equinix but more direct hyperscaler exposure.

Brookfield Infrastructure: 140+ data centers, 1.6GW capacity, 3.4GW pipeline. Partnering with Bloom Energy for on-site fuel cell power, recognizing power availability as the critical constraint.

Vertiv designs liquid-to-chip cooling handling 50-100 kilowatts per rack, co-engineering with Nvidia on Blackwell. Every AI data center requires 5-10x more cooling capacity than a traditional one. Limited competition and the Nvidia partnership provide demand visibility.

But cooling and construction mean nothing without one thing: electricity. And that's where AI infrastructure hits its biggest bottleneck.

Layer 6: Power - The Ultimate Constraint

Here's the uncomfortable truth everyone's starting to realize: We're running out of power, not chips.

A single AI training cluster requires 100+ megawatts - equivalent to a small city. The United States needs 50 gigawatts of new power generation capacity for AI by 2028, according to Anthropic. FERC projects data center power demand will hit 35 gigawatts by 2030, up from 19 gigawatts in 2023. Goldman Sachs revised their forecast upward in late 2025, now projecting 175% growth in data center electricity usage by 2030.
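It's worth putting these forecasts side by side. The sketch below uses only the figures cited above; for scale, it applies Goldman's growth rate to FERC's 2023 base, which is a simplification (Goldman's figure refers to electricity usage, not capacity).

```python
# Quick arithmetic on the power forecasts cited above (figures in gigawatts).
ferc_2023 = 19          # FERC: data center demand in 2023
ferc_2030 = 35          # FERC: projected demand by 2030
goldman_growth = 1.75   # Goldman: 175% growth in data center electricity use by 2030

# New demand implied by FERC's projection
ferc_added = ferc_2030 - ferc_2023

# Applying Goldman's growth rate to the same 2023 base, purely for scale
goldman_2030 = ferc_2023 * (1 + goldman_growth)

print(f"FERC implies ~{ferc_added} GW of new data center demand by 2030")
print(f"Goldman's growth rate implies ~{goldman_2030:.0f} GW total by 2030")
```

Both numbers sit in the same neighborhood as Anthropic's 50 GW estimate, and all of them dwarf what a 3-5 year interconnection queue can deliver.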

Here's the problem: 70% of the U.S. electrical grid was built in the 1950s-1970s and is approaching end-of-life. Building new power generation takes 5-10 years from permitting to operation. Grid interconnection queues - the line to connect new generation to the grid - take 3-5 years just to process applications.

We're in a race between AI demand growth and power supply expansion, and supply is losing.

Data center projects are already being delayed or canceled due to power unavailability. Virginia (the largest U.S. data center market), Silicon Valley, and Phoenix are seeing constraints. The phrase "power-rich" has replaced "GPU-rich" as the key competitive advantage for hyperscalers.

This is why Constellation Energy, up 62% year-to-date through early 2026, has become a critical AI infrastructure play. The company operates 21 nuclear reactors at a 98.8% operating rate, providing carbon-free baseload power 24/7. Nuclear doesn't depend on the sun shining or the wind blowing—it runs constantly, which is exactly what AI training clusters require.

Constellation's contracts tell the story:

  • $1.6 billion Microsoft deal to restart Three Mile Island Unit 1

  • 1.1 gigawatt Meta deal (20-year contract starting 2027)

  • Over $1 billion GSA federal government contract

The company guides for 11% revenue growth and 22.5% earnings growth in 2026, trading at 18-20x earnings versus 12-14x for traditional utilities. The premium is justified. You can't build a new nuclear plant in less than 10 years, and existing facilities are scarce assets.

Vistra Corp just acquired Cogentrix Energy for $4.7 billion in early January 2026, expanding its natural gas generation footprint. Natural gas serves as the "bridge fuel" - dispatchable power that can ramp up and down to supplement renewables when wind and solar can't deliver.

NextEra Energy, the largest U.S. utility, is planning $25+ billion in transmission infrastructure investment and has partnered with Google on nuclear and data center projects.

Talen Energy and Dominion Energy round out the nuclear exposure, both seeing increased demand from data center operators desperate for reliable baseload power.

The power constraint is real, and it's getting tighter. When Microsoft or Amazon can't secure power for a new data center, the entire upstream supply chain - Nvidia's GPUs, Micron's memory, Arista's switches, Amphenol's connectors - sits idle in a warehouse somewhere. Power is the ultimate bottleneck.

Following the Money: How $100 Billion Flows Through the Ecosystem

When Microsoft announces $100 billion in infrastructure spending for 2026, here's approximately where it goes:

$30 billion → Nvidia and AMD for GPUs and accelerators

$10 billion → Micron, SK Hynix, and Samsung for HBM memory that feeds those GPUs

$15 billion → Dell, Super Micro, and HPE for server systems that house the chips

$8 billion → Arista and Cisco for the networking switches that connect everything

$7 billion → Amphenol, TE Connectivity, Corning, and CommScope for connectors, cables, and fiber optics

$3 billion → nVent and Vertiv for the cooling systems that prevent thermal meltdown

$15 billion → Equinix, Digital Realty, and construction companies for the physical buildings

$12 billion → Constellation, NextEra, and utilities for power purchase agreements

Each layer depends on the others. You can't run servers without chips. You can't connect chips without cables. You can't house servers without data centers. You can't power data centers without electricity. And constraints in any layer throttle the entire system.

Right now, we're seeing simultaneous constraints in three areas:

  • HBM memory

  • Power availability

  • Data center construction

These constraints create pricing power for suppliers and investment opportunities for those who understand where the bottlenecks exist.

The Investment Story

Understanding this ecosystem reveals why diversification makes sense. Nvidia dominates at 90% market share, but what if AMD captures 20% and Broadcom's custom silicon takes another 10%? Meanwhile, Arista's networking advantage strengthens as clusters scale, Micron's memory constraint tightens, and Constellation's power scarcity intensifies.

AI infrastructure isn't one investment; it's an ecosystem where understanding constraints reveals opportunities. We're in the early innings: memory prices rising, power tightening, networking accelerating, hyperscalers guiding higher. This buildout runs through 2027, possibly 2030.

Companies with the tightest constraints and strongest moats - memory (Micron), networking (Arista), power (Constellation), silicon (Nvidia) - represent the best risk-adjusted opportunities.

The Earnout Investor provides analysis and research but DOES NOT provide individual financial advice. Jamie Dejter may have a position in some of the stocks, funds, or investments mentioned. All content is for informational purposes only. The Earnout Investor is not a registered investment, legal, or tax advisor, or a broker/dealer. Trading any asset involves risk and could result in significant capital losses. Please do your own research before buying any stock.

Subscribe to the Earnout Investor Free Newsletter!
