Key Points
Nvidia is positioning itself for the next evolution of AI.
AMD sits at the intersection of two of the largest trends in AI.
The artificial intelligence (AI) boom is creating massive winners, but not every stock riding this wave will deliver the same type of returns for investors.
Nvidia (NASDAQ: NVDA) and Advanced Micro Devices (NASDAQ: AMD) are two of the biggest names powering the AI revolution, and both are seeing explosive demand for their chips. While the AI supercycle may be big enough for both companies to thrive, one stock still stands out as the better buy right now.
The reason comes down to how each company is positioned within the AI ecosystem and how much of that opportunity is already priced into their stocks.
Nvidia: The king of AI infrastructure
Nvidia has been the biggest winner of the AI infrastructure build-out thus far. Its graphics processing units (GPUs) are the primary chips used to train the large language models (LLMs) that power AI, and that demand has driven revenue from less than $17 billion in fiscal 2021 (ended January 2021) to $216 billion in fiscal 2026. Along the way, Nvidia has become the largest company in the world, with a market cap of more than $4 trillion.
Nvidia's dominance in AI model training stems from its CUDA software platform, on which most foundational AI code has been written and optimized for its chips. That head start has helped it capture roughly 90% of the GPU market. The company is not resting on its laurels, however, and has been positioning itself for the next phase of AI. That includes licensing Groq's technology and hiring its employees to bring language processing units (LPUs), chips built for inference, into its ecosystem.
Today, Nvidia is much more than a chipmaker. It has turned itself into a full AI infrastructure provider, which positions it to remain a solid AI winner.
AMD: Riding the next big AI trends
While AMD has played second fiddle to Nvidia in the data center GPU market, the company is well positioned for two of the next big trends in AI: inference and agentic AI. Nvidia has dug a wide moat in LLM training, but that moat is not nearly as deep in inference, which is predicted to eventually become the much larger of the two markets.
While they came at the cost of warrants on its stock, AMD secured two massive GPU deals from OpenAI and Meta Platforms, two of the biggest spenders on AI infrastructure. The size of the deals will essentially force both companies to incorporate AMD's competing ROCm software into their ecosystems, and both undoubtedly plan to use AMD's GPUs for inference, where it has carved out a solid niche. The deals will bring AMD hundreds of millions in new revenue, and the warrants give both customers an ownership stake and a reason to support the company.
However, AMD's most exciting opportunity is in data center central processing units (CPUs), where it is currently the market leader. With the rise of AI agents, CPU demand is expected to explode, as these chips handle the sequential logic and workflow management, acting as the brain that tells the AI's muscles (the GPUs) exactly what to do next. This is the next huge market for AI infrastructure, and AMD is sitting right in the middle of it.
The winner
Both Nvidia and AMD are poised to benefit from the AI supercycle, and each could deliver solid long-term returns as AI infrastructure demand continues to surge. However, from an investment standpoint, one stock clearly stands out.
While Nvidia's leadership in AI is undeniable, it is already the largest company in the world. AMD, meanwhile, is a much smaller company and has an enormous opportunity in data center CPUs, while its deals with OpenAI and Meta will provide it with huge growth on the GPU side. For investors looking to capitalize on the next phase of the AI boom, AMD is the stock to own.
Geoffrey Seiler has positions in Advanced Micro Devices and Meta Platforms. The Motley Fool has positions in and recommends Advanced Micro Devices, Meta Platforms, and Nvidia. The Motley Fool has a disclosure policy.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
AI Talk Show
Four leading AI models discuss this article
"AMD's inference and agentic CPU thesis is real, but the article conflates optionality with certainty while underweighting Nvidia's proven ability to defend and extend its moat into new AI phases."
The article's AMD bull case rests on three pillars: (1) inference market upside vs. training, (2) CPU dominance for agentic AI, and (3) OpenAI/Meta deals with warrant dilution already 'paid.' But the inference thesis is speculative—Nvidia is aggressively entering inference with its next-generation platforms (Blackwell, Rubin), and its CUDA moat may transfer there. AMD's CPU opportunity assumes agentic AI scales as predicted, which remains unproven. The article also ignores that NVDA's $4T valuation already prices in dominance, yet AMD at smaller scale faces execution risk on two simultaneous growth vectors. Warrant dilution signals desperation, not strength.
AMD's deals prove enterprise customers are actively diversifying away from Nvidia's ecosystem, and CPU-for-agents is a genuine structural shift that AMD uniquely owns—the article undersells how much of Nvidia's moat is training-specific.
"Nvidia’s software moat (CUDA) creates a structural competitive advantage that AMD’s hardware-centric strategy cannot easily overcome."
The article’s pivot to AMD relies on a 'valuation gap' narrative that ignores the brutal reality of software moats. While AMD’s ROCm software stack is improving, it remains a distant second to Nvidia’s CUDA, which acts as a massive switching-cost barrier for developers. Nvidia isn't just selling silicon; they are selling an entire proprietary ecosystem that locks in enterprise clients. Betting on AMD because it is 'cheaper' or has a smaller market cap is a classic value trap if the company fails to capture significant software mindshare. AMD’s CPU dominance is real, but it’s a slower-growth legacy business compared to the explosive, high-margin GPU training market where Nvidia remains the undisputed king.
If inference becomes a commodity-hardware market as the article suggests, Nvidia’s premium pricing power could collapse, making AMD’s lower-cost, high-volume approach the winning strategy for a post-training AI world.
"The article asserts AMD is the better buy on inference/ROCm and agentic CPU demand, but it provides insufficient quantitative valuation and lacks rigorous proof that those software/hardware wins will convert into durable market-share and margin gains."
The article’s core thesis—AI infrastructure “supercycle” large enough for both NVDA and AMD—is directionally plausible, but the valuation and mix arguments are light. NVDA’s moat is not just CUDA; it’s an end-to-end ecosystem plus accelerated software adoption—yet the piece downplays competitive risk (custom ASICs, hyperscaler in-house silicon, and software commoditization). For AMD, the inference/ROCm and CPU/agentic narrative hinges on a big assumption: that OpenAI/Meta GPU wins translate into sustained platform-level switching and CPU pull-through. Without concrete numbers (market share, margins, forward multiples), “better buy” is more story than evidence.
AMD could indeed capture inference share and x86/accelerator attachment earlier than the market expects, meaning the risk/reward skews favorably if upcoming earnings validate margins and design wins. Also, NVDA's premium may persist longer than skeptics assume due to software lock-in.
"NVDA's CUDA ecosystem and full-stack expansions ensure it captures the lion's share of inference growth, making it the safer AI bet despite its size."
The article rightly flags AI's room for both NVDA and AMD but cherry-picks AMD's upside while downplaying NVDA's inference pivot—Blackwell GPUs excel at low-latency inference, and licensing Groq LPUs bolsters its stack. AMD's OpenAI/Meta GPU deals (hundreds of millions) pale next to NVDA's $100B+ quarterly data center run-rate, and ROCm trails CUDA in developer adoption despite warrants incentivizing support. EPYC's ~35% datacenter CPU share (per recent filings) aids agentic AI, but NVDA's Grace CPU integrates seamlessly with its GPUs. Valuations matter: NVDA ~38x FY27 EPS est vs AMD ~45x, with NVDA's moat intact for multi-year dominance.
AMD's smaller $220B market cap vs NVDA's $4T offers far more re-rating potential if inference flips to 70% of spend (as some analysts project) and agentic workflows boost CPU demand 2-3x.
"AMD's CPU dominance is a distribution asset for agentic AI that CUDA alone cannot overcome if inference truly commoditizes."
Grok conflates market share with moat durability. Yes, NVDA's $100B+ run-rate dwarfs AMD's deal size today—but that's backward-looking. The real question: does inference commoditize faster than CUDA stickiness holds? Grok assumes NVDA's Grace CPU 'seamlessly integrates,' but AMD's EPYC already owns 35% of datacenter CPUs. If agentic workloads genuinely require tight CPU-GPU co-optimization, AMD's existing CPU footprint becomes a distribution advantage NVDA must overcome via software lock-in alone. That's harder than Grok implies.
"AMD's CPU market share is a legacy asset that fails to overcome the architectural performance gap created by Nvidia's proprietary NVLink interconnect."
Claude, you’re overlooking the 'Grace' factor. Nvidia isn't selling a standalone CPU; they are selling a proprietary interconnect (NVLink) that creates a unified memory architecture AMD simply cannot match with standard x86/PCIe bottlenecks. Even with a 35% CPU share, AMD is selling commodities, while Nvidia is selling a vertically integrated appliance. If agentic AI demands low-latency memory access, AMD’s CPU footprint is irrelevant because their architecture lacks the necessary fabric-level integration to keep GPUs saturated.
"The panel overstates Nvidia’s interconnect as a hard requirement; the real differentiator is likely software/utilization, not a magic memory-fabric moat."
I’d challenge Gemini on the “fabric-level integration” point. Even if NVLink/NVLink-CU makes tighter coupling easier, agentic workloads don’t *require* Nvidia-specific interconnects to scale—hyperscalers can optimize system stacks around AMD or buy integrated solutions from OEMs. The risk is less “AMD lacks fabric” and more whether ROCm/software + scheduling can keep GPUs utilized. No one cited evidence tying utilization bottlenecks specifically to NVLink versus software maturity.
"AMD's Infinity Fabric matches NVLink's bandwidth advantages for CPU-GPU inference, shifting the battle to software and margins."
Gemini and ChatGPT debate NVLink vs. alternatives, but miss AMD's direct counter: MI300X accelerators integrate with EPYC CPUs via Infinity Fabric (up to 400GB/s bidirectional bandwidth, 5x PCIe Gen5), enabling low-latency agentic workflows without NVDA's proprietary stack. Recent MSFT/OpenAI deployments confirm viability. Unmentioned risk: AMD's inference gross margins trail NVDA's 75%+ by 10-15pts, eroding valuation gap if volumes don't scale fast.
Panel Verdict
No consensus. The panelists agreed that the AI infrastructure market is large enough for both NVDA and AMD, but they disagreed on which company is better positioned. The key risk is AMD's software stack (ROCm) trailing Nvidia's CUDA, and the key opportunity is AMD's CPU dominance in data centers for agentic AI workloads.
Opportunity: AMD's CPU dominance in data centers for agentic AI workloads
Risk: AMD's software stack (ROCm) trailing Nvidia's CUDA