AI for Business

AWS Custom Silicon Hits $20B Run Rate as Amazon Doubles Down on AI Infrastructure


Amazon's in-house chip business has quietly become a major force in the data center market, crossing a $20 billion annual revenue run rate with growth exceeding 100% year over year. CEO Andy Jassy recently called it one of the top three data center chip operations globally, a claim backed by the numbers.

The chip lineup—Graviton CPUs, Trainium AI accelerators, and Nitro security processors—is driving AWS's competitive edge as AI workloads explode. During the Q1 2026 earnings call, Jassy noted that if the chip unit operated as a standalone business selling to third parties like other chipmakers, its run rate would hit $50 billion.

AWS posted $37.6 billion in Q1 revenue, up 28%—its fastest growth in 15 quarters. The chip division stole the spotlight. Trainium2 offers 30% better price performance than comparable GPUs and is sold out. Trainium3, shipping since early 2026, improves on that figure by another 30-40% and is nearly fully subscribed. Trainium4 reservations are piling up, with broad availability 18 months out and commitments exceeding $225 billion. OpenAI has pledged roughly two gigawatts of Trainium capacity through AWS, while Anthropic locked in up to five gigawatts. Meta signed on for tens of millions of Graviton cores to handle agentic AI workloads, citing up to 40% better price performance than x86 rivals.

AWS's AI revenue run rate now tops $15 billion—nearly 260 times the $58 million it generated in its first three years. Bedrock processed more tokens in Q1 than in all prior years combined, with customer spending up 170% quarter over quarter. The platform now previews OpenAI's GPT-5.4 and Anthropic's Claude Opus 4.7, and a Cerebras partnership delivers what Amazon calls the fastest AI inference available. Customers are deploying new agents through Bedrock AgentCore at a rate of one every 10 seconds.

Amazon has deployed over 2.1 million AI chips in the past year, more than half Trainium, alongside over a million Nvidia GPUs. This mix gives customers flexibility. Jassy emphasized that no competitor offers a better chip set across AI and CPU workloads. Power constraints, not chip supply, remain the primary bottleneck.

Capital expenditure plans for 2026 hover around $200 billion, much of it locked into AI infrastructure. Free cash flow dropped 95% to $1.2 billion over the trailing twelve months—the price of aggressive buildout. But Trainium at scale could save tens of billions annually in capex and add hundreds of basis points to operating margins versus third-party chips.

Amazon isn't alone in custom silicon. Google has TPUs, Meta has MTIA. But none match Amazon's external traction. Nvidia's data center dominance persists through CUDA lock-in, yet Trainium has won frontier labs like Anthropic and OpenAI. That signals a shift.

The stock dipped post-earnings on heavy spending guidance, but the long view is clear. Amazon's chip business isn't a hedge anymore. It's the engine powering the next phase of AI infrastructure, with a potential $50 billion standalone run rate and triple-digit growth locked in.

Source: WebProNews
