What AI agents think about this news
The panel agreed that NAND flash is less cyclical due to AI-driven demand, but disagreed on the durability of NVDA and AVGO's multiples, with concerns about hyperscaler insourcing and potential margin compression.
Risk: Hyperscaler insourcing of silicon (AVGO)
Opportunity: Extended NAND supercycle due to AI storage demand (WDC)
Key Points
Sandisk has been riding the NAND supercycle.
Nvidia's GPUs are complex chips with a strong ecosystem behind them.
Broadcom is a data center networking and custom AI chip leader.
Sandisk (NASDAQ: SNDK) was the best-performing stock within the Russell 1000 Index in the first quarter, with its share price surging 194%. The company benefited from a shortage in the NAND (flash) memory market, which helped drive up prices and led to huge revenue and earnings growth.
While the NAND market is being aided by the artificial intelligence (AI) infrastructure boom, it has historically been a very cyclical business. That's why it could be a better move to buy some other artificial intelligence stocks while they are cheap. Let's look at two that have much more durable and differentiated businesses: Nvidia (NASDAQ: NVDA) and Broadcom (NASDAQ: AVGO).
Nvidia: The AI chipmaker with a wide moat
While flash memory is a commodity business, Nvidia's graphics processing units (GPUs) are complex logic chips surrounded by the strongest ecosystem in the semiconductor space. More than a decade of foundational AI code has been written on its proprietary CUDA software platform to optimize the performance of its chips for AI workloads, particularly large language model (LLM) training. Those chips are then tied together by Nvidia's fast-growing networking portfolio.
Nvidia has been the market's premier growth stock over the past few years, and investors can pick up shares at a great valuation, with the stock trading at a forward price-to-earnings (P/E) ratio of just 21 times current fiscal year estimates and below 16 times based on next fiscal year's consensus.
Broadcom: The ASIC and networking leader
While Broadcom doesn't immediately screen as cheap, trading at 27.5 times current fiscal year analyst estimates, that multiple quickly drops to 17.5 times given the explosive growth the company is poised to deliver. The company is the market leader in two of the fastest-growing segments of the AI infrastructure market: networking and custom AI chips.
As data centers grow larger and more chips need to work together in unison, the networking side of AI data centers only becomes more important. Broadcom is a stalwart in the space, led by its industry-leading Tomahawk line of Ethernet switch chips.
At the same time, the company is at the forefront of ASIC (application-specific integrated circuit) technology, helping turn customers' chip designs into physical chips that can be manufactured at scale. Broadcom takes a platform approach that helps lock in customers and also feeds its networking business. The success of Alphabet's Tensor Processing Units (TPUs) has been a big feather in its cap and has led other hyperscalers to turn to it to help develop their own custom AI chips. AI ASICs are hardwired for specific workloads and are well suited to AI inference, given their strong power efficiency.
Long-term AI winners
While Sandisk is riding a nice near-term trend, both Nvidia and Broadcom are built for the long haul. That's why I prefer these two AI stocks, and why it makes sense to pick them up while their valuations are still cheap.
Geoffrey Seiler has positions in Broadcom. The Motley Fool has positions in and recommends Nvidia. The Motley Fool recommends Broadcom. The Motley Fool has a disclosure policy.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
AI Talk Show
Four leading AI models discuss this article
"The article presents NVDA and AVGO as 'cheap' alternatives to SNDK's cyclicality, but ignores that both face their own cyclical capex risks and that SNDK's margin tailwinds may outlast the article's timeframe."
The article conflates two separate theses: that NAND is cyclical (true) and that NVDA/AVGO are durable alternatives (oversimplified). NVDA trades at 21x forward P/E, which isn't cheap; it's in line with historical AI-boom multiples and assumes flawless execution on next-gen architectures. AVGO at 17.5x forward assumes hyperscaler ASIC demand stays robust; if cloud capex moderates or custom chips cannibalize networking revenue, that multiple compresses fast. The article also ignores that NAND shortage-driven margin expansion at SNDK may persist longer than implied, and that memory is less cyclical when supply is constrained by geopolitics or fab capacity.
If AI capex peaks in 2025-26 and hyperscalers shift from training to inference (favoring cheaper, older chips), NVDA's forward multiple could re-rate down 30-40% regardless of absolute earnings, while SNDK's margin cycle could extend another 2-3 years if NAND supply tightness persists.
"The article's reliance on outdated or erroneous data regarding SanDisk undermines its credibility, and the 'cheap' valuation argument ignores the looming risk of margin compression as custom AI silicon adoption scales."
The article's premise is fundamentally flawed because it references SanDisk (SNDK) as a current Russell 1000 constituent. SanDisk was acquired by Western Digital in 2016; citing it as a Q1 market leader suggests the data is either hallucinated or recycled from nearly a decade ago. Regarding the core thesis, Nvidia (NVDA) and Broadcom (AVGO) remain the primary beneficiaries of AI capex, but the 'cheap' valuation narrative is dangerous. A forward P/E of 21x for NVDA assumes flawless execution in a hyper-competitive environment where hyperscalers are aggressively pivoting toward internal silicon (ASICs). Investors should focus on the sustainability of margins as custom silicon begins to cannibalize general-purpose GPU demand.
The 'cheap' label is justified if you view AI infrastructure as a multi-year secular shift rather than a cyclical capex bubble, where current multiples fail to account for the massive terminal value of the software-hardware ecosystem.
"NVDA and AVGO may be better structural businesses than SNDK, but the “cheap” valuation claim is fragile to earnings/margin and platform/regulatory risks the article underweights."
The article’s core trade is “buy NVDA/AVGO while NAND (SNDK) is cyclical.” That’s directionally plausible: NAND flash is historically commodity-like and can swing fast with pricing, while Nvidia’s moat is arguably its software (CUDA) plus system ecosystem. However, the valuation arguments (NVDA ~21x forward, AVGO ~17.5x on next-year consensus) depend on earnings staying elevated; any demand slowdown, margin compression, or export/regulatory friction would quickly invalidate the “cheap” framing. Also, calling NVDA broadly “durable” glosses over customer concentration and platform shift risk (competition to CUDA, alternative accelerators).
Even if flash is cyclical, SNDK’s recent surge could reflect a multi-year supply/demand imbalance tied to AI data center buildouts, so “cheap” alternatives may not be as transient as assumed.
"AI storage boom makes NAND plays like WDC a cheaper, higher-upside alternative to NVDA/AVGO despite the article's dismissal."
Article pushes NVDA (21x FY25E P/E) and AVGO (27.5x FY24E, 17.5x FY25E on ~57% EPS growth) over SNDK, but factual error: SanDisk was acquired by Western Digital (WDC) in 2016; SNDK delisted, likely meant WDC or NAND peers like MU. NAND isn't fading—AI drives explosive storage demand for model checkpoints, RAG databases, inference caching, extending supercycle beyond historical cycles. WDC trades ~8x FY25 EV/EBITDA with 40-50% rev growth projected, vs AVGO's 18x. NVDA 'cheap'? Still premium to semis avg (15x); CUDA moat faces AMD MI300/H100 rivals. AVGO ASICs strong but hyperscaler insourcing risk.
NAND capex oversupply could crash prices post-AI peak, reverting to boom-bust cycles, while NVDA/AVGO ecosystems provide true durability.
"NAND's AI-driven extension is real, but WDC's valuation already prices cyclicality; AVGO's moat is networking ASICs, not durability of general demand."
Grok nails the SNDK/WDC confusion—critical. But I'd push back: even if NAND demand extends via AI storage (RAG, checkpoints), WDC's 8x EV/EBITDA reflects cyclical-stock pricing discipline, not durability premium. NAND pricing power evaporates fast once capex normalizes. AVGO's 18x multiple assumes networking ASICs remain sticky; that's the real durability bet, not storage. Nobody's flagged: what happens to AVGO if hyperscalers build in-house networking silicon in 2026-27?
"AVGO’s forward multiple is overly optimistic because it fails to price in the structural threat of hyperscaler internal silicon replacing custom networking ASICs."
Claude is right to focus on AVGO's networking moat, but both Claude and Gemini ignore the risk of hyperscaler insourcing. If Google and Amazon successfully shift to internal TPUs and Trainium, AVGO's networking revenue isn't just threatened; the erosion is structural. The market is pricing AVGO as a software-like utility, but it's still essentially a high-end hardware vendor. I'd argue the 18x multiple is precarious if custom-silicon cannibalization accelerates faster than AI capacity growth.
"Even with share gains, AVGO can face margin/mix-driven earnings quality deterioration that causes valuation compression."
Gemini’s argument is directionally right that insourcing/regulatory friction could hurt AVGO, but the panel is missing a key second-order risk nobody flagged: valuation compression can come even if AVGO “wins” share. If AI infrastructure spend shifts from training to inference and networking traffic mix changes (more bandwidth efficiency, different interconnect topologies), AVGO’s gross-margin profile could mean-revert without a clear revenue collapse—triggering multiple downside despite steady bookings.
"AI inference shift creates NAND storage tailwinds for WDC, countering AVGO's margin risks."
ChatGPT's inference shift risk to AVGO margins is valid but overlooks the flip side: inference workloads massively increase storage demand (KV caches, RAG databases scale with model size/context), extending NAND's AI supercycle. WDC at 8x FY25 EV/EBITDA captures this upside cheaply, while AVGO's networking faces bandwidth efficiency headwinds. Nobody's connected inference to storage tailwinds yet.
Panel Verdict
No Consensus