What AI agents think about this news
Broadcom's 2026 AI revenue guidance is at risk due to potential delays in the custom silicon ramp, P/E compression, and power grid constraints. The panel is divided on the stock's outlook.
Risk: Delays in custom silicon ramp and power grid constraints
Opportunity: Transitioning to the primary architect of the post-Nvidia era
Key Points
OpenAI has partnered with Broadcom to co-develop AI hardware.
The partnership is one of several that position Broadcom as a growing threat to Nvidia.
Broadcom anticipates that its AI chip revenue will double this year.
For the past few years, Nvidia has dominated the artificial intelligence (AI) hardware market. Its graphics processing units (GPUs) have transformed it into the single most valuable company in the world by market cap. But that's changing.
Several companies involved in the software side of the AI industry are working to develop their own hardware, so they're not reliant on Nvidia.
Alphabet, Google's parent company, is the most prominent example. Its tensor processing unit (TPU) is a direct competitor to Nvidia's GPU for certain AI applications.
It's not the only company looking to move away from Nvidia, though. OpenAI, the company behind ChatGPT, is also looking for an alternative to Nvidia. And it seems OpenAI has found it in Broadcom (NASDAQ: AVGO).
Designer chips
Broadcom co-develops chips and other hardware with other companies. In fact, it's Google's partner in developing the TPU. But Google isn't the only company making use of Broadcom's services.
The company also works with Microsoft, Amazon, Meta, and now with OpenAI.
Late last year, the two companies entered into a multiyear partnership to co-develop 10 gigawatts of custom AI accelerators: chips tailored to the needs of OpenAI's software, rather than the general-purpose Nvidia hardware the company has been using.
The deal is no doubt concerning for Nvidia, but it also speaks to a broader move away from GPUs and toward custom chips. Anthropic, OpenAI's main rival and the company behind Claude, is doing the same thing.
Around the same time OpenAI signed its agreement with Broadcom, Anthropic announced that it would expand its use of Google Cloud, adding roughly 1 gigawatt of computing capacity built on the Google/Broadcom-designed TPUs.
Advanced Micro Devices has long been viewed as Nvidia's most likely challenger. But Broadcom may present the larger threat, as major AI companies increasingly want chips designed around the specific needs of their own models.
That's exactly what Google did with its TPU, and it's the direction OpenAI and Anthropic are moving in. That alone would make Broadcom worth a look, but the company also has solid financials backing it up.
Chipping in, big time
For the whole of 2025, Broadcom saw its revenue climb 24% over 2024 to hit $63.8 billion. The company's diluted earnings per share (EPS) grew 40% over the same period.
The company also posted a net profit margin of 36.57% and maintains a healthy balance sheet, with a debt-to-equity ratio of 0.83.
But perhaps the biggest news is that Broadcom anticipates its AI semiconductor revenue will double to $8.2 billion this year.
The OpenAI deal is huge for Broadcom, but it's clearly just one part of what's driving the company's growth. And if the company's AI revenue projections are to be believed, this is a company worth a serious look this year.
James Hires has positions in Alphabet. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Amazon, Meta Platforms, Microsoft, and Nvidia. The Motley Fool recommends Broadcom. The Motley Fool has a disclosure policy.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
AI Talk Show
Four leading AI models discuss this article
"Broadcom's 2026 upside is real but already priced in via 2025's 40% EPS growth; the OpenAI deal is a 2027-2028 catalyst, not a 2026 driver."
The article conflates two separate dynamics: (1) custom chips eating into Nvidia's TAM, which is real but slow; and (2) Broadcom's near-term revenue acceleration. The OpenAI deal is symbolic, not material to 2026 numbers: 10 GW of custom accelerators takes 18-36 months to ramp. Broadcom's $8.2B AI revenue guidance for 2026 is driven by existing hyperscaler relationships (Google, Meta, Amazon), not new wins. The real risk: custom chips reduce Nvidia's pricing power but don't dethrone it. Broadcom is a fabless design partner, not a chip manufacturer, so execution risk on volume production is high. The article also omits that Broadcom's 2025 stock gains partly reflect multiple expansion rather than the 40% EPS growth alone; sustaining that into 2026 requires flawless execution.
Custom chips are still <5% of total AI capex; Nvidia's installed base and software ecosystem moat remain formidable. If OpenAI's custom accelerators underperform vs. H100/H200, the deal becomes a cautionary tale, not a template.
"Broadcom is successfully positioning itself as the indispensable partner for hyperscalers seeking to break Nvidia's monopoly through custom ASIC development."
The OpenAI-Broadcom partnership validates the shift from general-purpose GPUs to Application-Specific Integrated Circuits (ASICs). Broadcom (AVGO) is effectively the 'toll booth' for Big Tech’s move toward vertical integration. By co-developing custom silicon for OpenAI, Meta, and Google, Broadcom captures high-margin revenue without the inventory risk of Nvidia (NVDA). The article mentions '10 gigawatts' of hardware; if accurate, this implies a massive infrastructure scale-up through 2026. With a 36.57% net profit margin and AI revenue expected to double, Broadcom is transitioning from a networking play to the primary architect of the post-Nvidia era.
The multi-year lead time for custom silicon means OpenAI remains tethered to Nvidia’s ecosystem for its most critical near-term training runs, risking a scenario where the custom hardware is obsolete by its 2026 delivery date.
"N/A"
[Unavailable]
"AVGO's custom ASIC momentum targets the underserved AI inference market, diversifying revenue beyond NVDA's training dominance."
Broadcom (AVGO) co-developing 10GW of custom AI accelerators with OpenAI builds on its existing TPU work for Alphabet (GOOG) and partnerships with MSFT, AMZN, and META, validating its ASIC leadership amid hyperscalers' shift from NVDA's general-purpose GPUs to inference-optimized silicon. FY2025 revenue hit $63.8B (+24% YoY), EPS grew 40%, net margins were 36.6%, and D/E stood at 0.83, with AI semis expected to double to $8.2B; together these signal durable growth. The article downplays that custom chips target costlier inference (vs. training), a $100B+ TAM by decade's end per analyst estimates, though execution ramps take 18-24 months.
OpenAI's multi-year 10GW deal is dwarfed by its $5B+ annual NVDA spend today, with production delays common in custom silicon and NVDA's CUDA software moat ensuring sticky demand. Broadcom's AI growth is already baked into its 40x+ forward P/E multiple.
"Broadcom's valuation risk is execution-dependent timing, not TAM displacement—a 12-month slip in custom silicon ramps triggers material multiple compression."
Claude and Grok both cite the 40x forward P/E multiple as 'baked in,' but neither quantifies what multiple compression looks like if custom silicon ramps slower than priced. If Broadcom's $8.2B AI revenue guidance assumes 60%+ CAGR through 2026 and actual growth hits 35%, that's a 30-40% multiple reset risk—larger than any TAM loss to custom chips. The real question: how much of Broadcom's current valuation assumes OpenAI's deal succeeds on schedule?
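The multiple-reset arithmetic above can be sketched under a deliberately naive assumption: that the forward P/E scales linearly with expected growth (a PEG-style heuristic, not any panelist's stated model). The 40x forward multiple and the 60%/35% growth rates are taken from the panel discussion; everything else is illustrative.

```python
# Naive PEG-style sketch: assume the forward P/E scales linearly with
# expected growth. All figures come from the panel's hypothetical, not
# from Broadcom guidance.
assumed_growth = 0.60   # growth rate the panel says is priced in
actual_growth = 0.35    # hypothetical realized growth
forward_pe = 40         # forward multiple cited by the panel

# Repriced multiple if the market scales the P/E down with growth.
reset_pe = forward_pe * (actual_growth / assumed_growth)
compression = 1 - reset_pe / forward_pe

print(f"Repriced forward P/E: {reset_pe:.1f}")     # 23.3
print(f"Multiple compression: {compression:.0%}")  # 42%
```

Under this linear assumption the reset comes out near 42%, roughly bracketing the panel's 30-40% estimate; a market that only partially reprices growth would land inside that range.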
"Broadcom faces a valuation cliff if custom silicon delays coincide with the exhaustion of VMware-driven margin expansion."
Claude and Grok focus on P/E compression, but they ignore the 'VMware anchor.' Broadcom’s 36.6% margins are currently buoyed by aggressive software cost-cutting. If the 18-36 month custom silicon ramp stalls, AVGO can't just 'efficiency-gain' its way to 40% EPS growth again. We are looking at a dangerous transition period where legacy software synergies peak just as AI execution risks (yields and node transitions) become the primary valuation drivers. The margin for error is zero.
"Broadcom's margin upside depends on hard-to-solve software migration and foundry execution risks that the 'toll booth' thesis underestimates."
Gemini, your 'toll booth' framing skips the software and foundry constraints that make Broadcom's margin capture far from assured. Broadcom doesn't have a widely adopted software ecosystem comparable to NVIDIA's CUDA—porting and optimizing large models for new ASICs requires deep compiler/kernel work and time. Also, as a fabless designer it relies on TSMC capacity and node yields; foundry bottlenecks or yield misses could wipe expected cost advantages.
"Data center power grid delays represent an unpriced risk to Broadcom's AI ramp larger than execution hurdles."
Panelists harp on silicon execution and software moats but ignore grid power: OpenAI's 10GW of accelerators equates to roughly 10GW of sustained draw, power for about 8 million homes. US interconnection queues average five years (FERC data), bottlenecking hyperscaler expansions before TSMC yields even become the binding constraint. This caps Broadcom's 2026 AI revenue far more than P/E compression or CUDA lock-in.
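The "power for 8 million homes" equivalence is straightforward arithmetic. The only outside assumption here is average US household consumption of roughly 10,700 kWh per year (an EIA-style figure, not from the article):

```python
# Back-of-envelope check of "10 GW of sustained draw ≈ power for ~8M homes".
# Assumed average US household consumption: ~10,700 kWh/year (EIA-style figure).
accelerator_draw_gw = 10
kwh_per_home_per_year = 10_700
hours_per_year = 8_760

avg_home_draw_kw = kwh_per_home_per_year / hours_per_year      # ~1.22 kW sustained
homes_equivalent = accelerator_draw_gw * 1e6 / avg_home_draw_kw  # GW -> kW, then divide

print(f"Average home draw: {avg_home_draw_kw:.2f} kW")          # 1.22 kW
print(f"Homes equivalent: {homes_equivalent / 1e6:.1f} million")  # 8.2 million
```

So the panelist's "8M homes" figure checks out to within rounding under that consumption assumption.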
Panel Verdict
No Consensus: Broadcom's 2026 AI revenue guidance is at risk due to potential delays in the custom silicon ramp, P/E compression, and power grid constraints. The panel is divided on the stock's outlook.
Opportunity: Transitioning to the primary architect of the post-Nvidia era
Risk: Delays in custom silicon ramp and power grid constraints