Amazon Nabs Google Chip Veteran to Bolster Its Custom AI Silicon Push
Amazon has made a strategic hire in its race to build high-performance AI chips. Steve Molloy, a seasoned chip architect who spent seven years at Google, recently joined the company in a new role focused on custom silicon. The move underscores Amazon’s ambition to challenge Google’s TPUs as demand for specialized AI hardware surges.
Molloy arrives at a critical juncture. Amazon’s in-house Trainium and Inferentia chips now generate more than $20 billion in annualized revenue and are growing at triple-digit rates. CEO Andy Jassy noted in his shareholder letter that if the chip business operated as a standalone company, it would be at a $50 billion run rate. Trainium already powers most inference on Amazon Bedrock, saving the company tens of billions of dollars in capital expenditures compared with rival hardware.
At Google, Molloy contributed to the TPU designs that underpin much of Alphabet’s AI infrastructure. Reporting to AWS veteran Peter DeSantis, he will help shape future Trainium generations. Trainium3 servers are already four times faster and more efficient than their predecessors, and Trainium4, designed for compatibility with Nvidia’s NVLink Fusion, is in development. Current-generation Trainium2 capacity is sold out.
The hire reflects a broader talent war as hyperscalers work to reduce their dependence on Nvidia. Google builds TPUs, Meta deploys its MTIA chips, and Anthropic trains its models on Trainium under a commitment of more than $100 billion in AWS compute. Intel is in talks with Amazon and Google to handle advanced packaging for their custom AI silicon, potentially shifting some assembly away from TSMC, and Amazon also partners with Marvell Technology on Trainium designs.
Demand is already outrunning supply: two large AWS customers want all of Amazon’s 2026 Graviton CPU capacity, more than the company can allocate given competing demand. AWS plans roughly $200 billion in capital expenditures for 2026, much of it already committed. Jassy has called the lack of custom silicon a “structural disadvantage” for inference-heavy businesses; Amazon’s in-house chips deliver hundreds of basis points of operating-margin gains, and 40% of AWS AI compute now runs on custom silicon.
Molloy’s expertise could accelerate next-generation Trainium development. As AI workloads shift from training peaks toward cost-efficient inference at scale, Amazon’s vertical integration, from its Annapurna Labs chip designers to its massive data centers, positions it strongly. One hire, massive implications.
Source: Webpronews