A seismic shift is underway in the AI chip race – and this time it’s not about GPUs. As enterprise customers look to take advantage of the surging custom application-specific integrated circuit (ASIC) market, two tech giants are making big moves.
Both Broadcom and Marvell have notched major design wins, the kind of cutting-edge innovation that could power a new generation of AI breakthroughs. But while the opportunity looks similar on paper, the momentum is anything but. One company is gliding on impressive market share and revenue growth, while the other is clawing its way back from a rough patch despite big-name customers and backing from analysts.
Which chipmaker is primed to ride the $35 billion AI infrastructure wave and challenge Nvidia for AI chip dominance?
Key Takeaways
- The custom ASIC market is in the spotlight as analysts point to custom silicon as the next wave of AI technology growth, naming Broadcom and Marvell as the companies to watch.
- The Nvidia competitors are following separate routes to ASIC leadership, with Broadcom taking the DIY path and Marvell growing via acquisition.
- Broadcom is in the lead with more than half the market share. But number two, Marvell, is on the way up.
- Meanwhile, AI chip leader Nvidia is mounting a strategic defense, while hyperscale data center customers like AWS have custom silicon plans of their own.
Custom Silicon in the Spotlight
A tsunami is forming beneath the surface of the AI boom, and, for once, Nvidia isn’t the top story.
Momentum is shifting toward custom AI chips, known as ASICs, with two silicon powerhouses vying for position: Broadcom and Marvell. JPMorgan reports that both companies have secured design wins in the crucial two-nanometer (nm) range.
As cloud giants like Amazon and Microsoft scramble for next-generation performance, demand for tailor-made chips is soaring, with the market expected to hit $35.5 billion by 2030.
Application-specific integrated circuits (ASICs) made their name as the architecture behind compute-intensive cryptocurrency mining. Now they’re being repositioned as next-gen silicon designed to outpace off-the-shelf GPUs from AI leader Nvidia Corp. and Advanced Micro Devices (AMD).
At the 2nm node size, ASICs have higher transistor density than previous generations, with leading-edge AI chips now packing more than 100 billion transistors. That suits hyperscalers’ hunger for more AI compute power with lower power consumption and reduced silicon costs.
ASICs vs. GPUs
Why are hyperscalers now looking to ASICs to run their AI infrastructure? Karl Freund, a semiconductor analyst at Moor Insights, says customization is key:
“GPUs are very fast and relatively flexible. However, a custom ASIC is dedicated to performing fixed operations extremely fast.”
That’s a benefit for companies that have moved beyond the experimentation phase with do-it-all models like ChatGPT and are now designing AI applications aimed at specific customer segments, sectors, and use cases.
On the downside, AI ASICs aren’t as flexible or adaptable as GPUs. Because designers freeze the chip’s logic early in the development process, the hardware can’t adapt quickly to changes in the market or new competitive challenges. GPUs, Freund notes, can be reprogrammed to add new features.
The price tag for an ASIC can also be prohibitive, costing tens or even hundreds of millions of dollars and requiring a team of expensive engineers.
Freund wrote:
“Paying for all that development means many tens or hundreds of thousands of chips are needed to amortize those expenses across the useful lifetime of the design (typically 2-3 years). Additionally, the chip will need to be updated frequently to stay ahead of new design techniques and production processes.”
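As a back-of-the-envelope illustration of the amortization math Freund describes, the sketch below uses entirely hypothetical figures (the design cost, volume, and lifetime are assumptions for illustration, not numbers from the article):

```python
# Rough sketch of ASIC development-cost amortization (hypothetical numbers).
NRE_COST = 100_000_000    # assumed one-time design/engineering cost: $100M
CHIPS_SHIPPED = 250_000   # assumed volume over the design's useful life
LIFETIME_YEARS = 3        # typical useful lifetime per Freund (2-3 years)

# The fixed design cost spread across every chip produced
amortized_per_chip = NRE_COST / CHIPS_SHIPPED
# The annual burden of that fixed cost over the design's lifetime
annual_burden = NRE_COST / LIFETIME_YEARS

print(f"Design cost per chip: ${amortized_per_chip:,.0f}")   # $400
print(f"Annual design burden: ${annual_burden:,.0f}")
```

At lower volumes the per-chip burden balloons, which is why Freund argues many tens or hundreds of thousands of units are needed before a custom ASIC makes economic sense.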
Marvell vs. Broadcom: Who’s in Pole Position?
Broadcom and Marvell Technology are two of the world’s biggest semiconductor developers for AI-powered data centers and cloud service providers. Both offer custom ASICs – a segment of the broader XPU chip category – built to power AI deployments.
According to Coherent Market Insights, the ASIC chip market will hit $21.77 billion in 2025, on the way to $35.68 billion by 2032. A compound annual growth rate of 7.3% presents significant growth opportunities for both Broadcom and Marvell.
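Those figures hang together under a simple compound-growth calculation, which the sketch below runs using the article's own numbers:

```python
# Project the ASIC market from 2025 to 2032 at the stated CAGR.
start_2025 = 21.77   # market size in $B (Coherent Market Insights, 2025)
cagr = 0.073         # 7.3% compound annual growth rate
years = 2032 - 2025  # 7 years of compounding

projected_2032 = start_2025 * (1 + cagr) ** years
print(f"Projected 2032 market: ${projected_2032:.2f}B")  # ≈ $35.65B, in line with the $35.68B forecast
```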
An analysis by Mitrade estimates Broadcom’s ASIC market lead at 55-60%. The company has scored some notable recent wins, from its V8 cloud tensor processing unit (TPU) for Google to Meta’s 2nm Meta Training and Inference Accelerator (MTIA). Morningstar analyst William Kerwin expects Broadcom’s AI revenue to hit $50 billion next year – a 60% jump.
Marvell, the number two with a 15% share, has momentum of its own. The company is expanding production for Amazon’s Trainium2 AI accelerator chip and Google’s Axion CPU for cloud data centers. Analysts say design wins at AWS and Microsoft for 2nm ASICs are also on the cards. Yet markets don’t seem convinced: the firm’s stock (MRVL) is down 33% year-to-date, as of June 26, 2025.
Spec-for-spec, it’s difficult to call a clear winner as each company takes a different route to ASIC creation.
- Broadcom’s AI chip development has focused on large-scale integration and platform design, backed by enormous investments in R&D.
- Marvell has grown rapidly through strategic acquisitions, acquiring companies like Avera, Cavium, and Innovium and integrating their technologies.
Big tech likes both approaches, but in the battle for ASIC dominance, market footprint could be the decisive factor.
Broadcom has a broader customer base, with revenues spread accordingly. Marvell, in contrast, depends more heavily on a few large customers, leaving it exposed if a major buyer like Amazon cancels orders or scales back. For now, at least, Broadcom seems likely to maintain its leadership in the ASIC market.
The Bottom Line
So, who takes gold in the AI silicon sprint? With more than half the ASIC market, a more stable revenue base, and a strong proprietary technical footing, Broadcom has a commanding lead. Marvell aims for a 20% market share next year and seems to relish its scrappy underdog role, but for now, it has a lot of catching up to do.
Blurring the crystal ball is Nvidia’s recent move to open up its technical infrastructure and give customers more flexibility and design options, creating a potential third way that could cool some of the enthusiasm for ASICs and shore up its hold on AI rack space.
Hyperscalers, meanwhile, have thrown up their own challenge, building custom silicon in-house to reduce their reliance on outside vendors.
FAQs
Will Marvell overtake Broadcom as Nvidia’s top AI challenger?
Not soon. Broadcom holds more than half the custom ASIC market against Marvell’s roughly 15%, and its broader customer base gives it a more stable revenue footing. Marvell is targeting a 20% share but still has considerable ground to make up.
What role do custom chips play in AI infrastructure?
Custom ASICs are dedicated to performing fixed operations extremely fast, letting hyperscalers run specific AI workloads with more compute per watt and lower silicon costs than off-the-shelf GPUs.
Why are hyperscalers like AWS developing their own ASICs?
Building custom silicon in-house reduces their reliance on outside chip vendors and lets them tailor hardware to their own AI workloads.
References
- Marvell’s Strategic Growth in AI Datacenter Networking and ASIC Business: A Buy Recommendation by Harlan Sur (TipRanks)
- Marvell Launches New Power Regulator and 2nm Memory Chips to Boost AI, Cloud Efficiency (Sahm Capital)
- ASIC Chip Market Size, Share & Analysis Report 2030 (KBV Research)
- Will ASIC Chips Become the Next Big Thing in AI? (Moor Insights & Strategy)
- What Is an xPU? | Experts on Data (SNIA)
- ASIC Chip Market Size, Trends & YoY Growth Rate, 2025-2032 (Coherent Market Insights)
- ASIC vs GPU: Broadcom and NVIDIA’s Battle for AI Dominance (Mitrade)
- Broadcom Earnings: Strong AI Guidance Eclipses Our Model and Drives Our Valuation Higher (Morningstar)