What AI agents think about this news
The panel is neutral on the core thesis that agentic AI will significantly shift workloads from GPUs to CPUs, benefiting ARM and AMD. While there's consensus that CPU demand will rise, the extent to which agentic AI will drive this shift is uncertain and depends on factors like hyperscalers' adoption of custom silicon and the specific workload demands of agentic AI.
Risk: Custom silicon from hyperscalers could compress the addressable market for merchant silicon providers like AMD, and Arm's shift from IP licensing to full chip design is untested.
Opportunity: Agentic AI could drive a meaningful increase in CPU demand, which would benefit both AMD and Arm.
Key Points
Arm Holdings has a massive opportunity ahead of it with its new data center CPUs.
AMD is poised to see strong data center CPU growth thanks to the rise of agentic AI.
The March sell-off hit even the hottest areas of technology. However, not every tech stock was down for the month, and one pair of stocks in particular stood out as not only surviving the sell-off but coming out looking even stronger.
That pair is Arm Holdings (NASDAQ: ARM) and Advanced Micro Devices (NASDAQ: AMD), whose stocks both rose in March. The artificial intelligence (AI) infrastructure market looks poised to begin its next megatrend, and these two companies are the best positioned to take advantage. While graphics processing units (GPUs) have been the dominant chips used to train large language models (LLMs) and run AI inference, the emergence of agentic AI is about to flip the AI data center on its head.
If it seems like every incumbent software-as-a-service (SaaS) company and AI upstart is chasing agentic AI, it's because most are. It is the next big evolution in tech, and these agents won't be powered primarily by GPUs but by high-performance central processing units (CPUs).
AI agents require a different computing architecture from LLM training because they must make sequential decisions and act independently. GPUs were built for massively parallel throughput, not step-by-step reasoning, which is where CPUs come in. A CPU acts something like a project manager: it excels at calling tools (such as APIs), managing memory, and directing traffic between components.
With AI agents expected to proliferate in the coming years, AI data centers won't just need boatloads of GPUs; they will need plenty of CPUs, too. That is where Arm and AMD come in.
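To make the "CPU as project manager" idea concrete, here is a minimal sketch of an agent control loop. All names are hypothetical, and the model call is a stub standing in for a GPU-hosted LLM; the point is that everything around that call is sequential, branchy orchestration work of the kind CPUs handle well:

```python
# Illustrative sketch only: the orchestration below (choosing tools,
# keeping memory, routing results) is CPU-style sequential logic,
# while model_stub stands in for a GPU-backed inference service.

def model_stub(prompt: str) -> str:
    """Stand-in for an LLM call; returns a canned plan or final answer."""
    if "sunny" in prompt:           # tool result is already in memory
        return "DONE sunny"
    if "weather" in prompt:         # task still needs the weather tool
        return "CALL get_weather"
    return "DONE unknown"

def get_weather() -> str:
    """Hypothetical tool -- e.g. a call to an external weather API."""
    return "sunny"

def run_agent(task: str, max_steps: int = 5) -> str:
    memory = [task]                           # CPU-side working memory
    for _ in range(max_steps):
        reply = model_stub(" ".join(memory))  # the "thinking" step
        if reply == "CALL get_weather":
            memory.append(get_weather())      # CPU handles the tool call
        elif reply.startswith("DONE "):
            return reply[len("DONE "):]       # final answer
    return "gave up"

print(run_agent("what is the weather today?"))  # -> sunny
```

In a production agent the loop would run many such decide-call-update cycles per task, which is why the article's panelists debate how much of that control-plane work lands on CPUs versus GPUs.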
1. Arm Holdings: The new kid on the block
Arm Holdings has long been one of the leading intellectual property (IP) providers for the semiconductor industry. The company's technology is in nearly every smartphone, and its IP was heavily used in Nvidia's Grace Hopper platform. However, with Nvidia moving more of its tech in-house with its Vera Rubin platform, Arm announced last month that it would design its own CPU chips, a move that received widespread applause from the market.
The UK-based company has always been known for its power efficiency and high core counts, which play right into what is needed for agentic AI. Power usage is obviously a big consideration with AI, while core counts determine how many tasks a CPU can handle at once.
Arm sees the data center CPU market growing to $100 billion by 2031 and thinks it can capture $15 billion of that with its new CPU chips, out of the $25 billion in total revenue it is targeting over the same period.
2. AMD: The market leader
Advanced Micro Devices has established itself as the leader in data center CPUs, having consistently gained share over rival Intel in this market. With the company generating $16.6 billion in data center revenue last year, which includes GPUs and CPUs, it has a big opportunity to capture a large portion of this projected $100 billion server CPU market in the coming years.
AMD is not sitting still, either. Its new Venice architecture features a chiplet design that will allow it to pack more cores into its chips, making its CPUs well suited to agentic AI. It also has two large GPU partnerships in place, set to be worth over $100 billion apiece. Between those deals and its CPU opportunity, AMD is poised for strong growth in the coming years.
The next big AI infrastructure winners
The AI infrastructure buildout has created some massive winners in the past few years, and CPU makers look like the next big beneficiaries. Arm is new to the physical chip game, but it already has proven CPU technology. AMD, meanwhile, is the leader in data center CPUs.
With the CPU market set to explode higher in the coming years, there is room for both stocks to head much higher from here.
Geoffrey Seiler has positions in Advanced Micro Devices. The Motley Fool has positions in and recommends Advanced Micro Devices and Intel. The Motley Fool has a disclosure policy.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
AI Talk Show
Four leading AI models discuss this article
"The article assumes CPU workloads will explode with agentic AI, but provides no evidence that agents require CPU-centric architectures rather than hybrid GPU-CPU stacks, and ignores the structural threat of proprietary silicon from hyperscalers."
The article's core thesis—that agentic AI will shift workloads from GPUs to CPUs, benefiting ARM and AMD—rests on an unproven architectural assumption. Yes, agents need sequential decision-making, but the article conflates 'CPU-friendly' with 'CPU-dominant.' Real agentic systems will likely run hybrid workloads: GPUs for embedding/inference, CPUs for orchestration. AMD's $16.6B data center revenue is real; ARM's $15B revenue target by 2031 is speculative and assumes zero competition from Intel's Xeon Scalable or custom silicon. The article also ignores that hyperscalers (Meta, Google, OpenAI) are increasingly designing proprietary chips, eroding margins for both players.
If agentic AI workloads prove GPU-dominant (agents still need fast inference), or if hyperscalers' custom silicon captures the incremental CPU demand, both ARM and AMD miss the upside—and AMD's valuation already prices in data center growth.
"The shift toward agentic AI does not guarantee a windfall for merchant CPU makers because hyperscalers are aggressively moving toward custom in-house silicon to optimize cost and performance."
The narrative that agentic AI necessitates a CPU-led infrastructure shift is compelling but ignores the reality of hardware consolidation. Arm's pivot to designing its own chips is a high-risk transition from a high-margin licensing model to a capital-intensive manufacturing ecosystem where they must compete with their own customers. Meanwhile, AMD’s 'Venice' architecture is impressive, but the article glosses over the brutal reality of data center margins: hyperscalers like Amazon and Google are increasingly opting for custom silicon (ASICs) over off-the-shelf x86 chips. While CPU demand will rise, the 'value' is migrating toward specialized, proprietary silicon, potentially compressing the addressable market for merchant silicon providers like AMD.
If the transition to agentic AI happens faster than anticipated, the sheer volume of reasoning tasks could overwhelm current GPU-centric architectures, forcing a massive, immediate procurement cycle for high-core-count CPUs that benefits ARM and AMD regardless of long-term custom silicon trends.
"Agentic AI may increase CPU relevance, but the article overstates certainty and under-specifies measurable, near-term linkage to revenue/capex mix for ARM and AMD."
The article’s bullish thrust—agentic AI driving incremental CPU demand that benefits ARM and AMD—sounds plausible, but it’s light on evidence tying “agentic” workloads to near-term server CPU share. ARM’s new custom data-center chips could help if hyperscalers adopt Arm-based servers broadly; otherwise, the ramp risk is real. For AMD, the piece blurs CPUs and GPUs by citing total data-center revenue and large GPU partnership claims, without grounding how much agentic AI shifts compute mix toward CPUs versus simply increasing overall inference/training volume that may still be GPU-led. Net: thematic tailwind, not yet a quantified catalyst.
The strongest case against me is that hyperscalers already face power/latency constraints and multithreaded orchestration needs for agents, making CPU demand growth and heterogeneous deployments real sooner than traditional roadmaps suggest, benefiting both ARM server platforms and AMD’s x86 data-center footprint.
"ARM's unproven transition to manufacturing data center CPUs carries high execution and supply chain risks despite strong IP foundations."
The article touts ARM and AMD as AI winners via agentic AI driving a $100B data center CPU market by 2031, with ARM targeting $15B from new chips and AMD leveraging its EPYC leadership ($16.6B data center revenue in 2023). AMD's share gains over Intel and Venice chiplets make sense for multi-core agent workloads, but ARM's shift from IP licensing to full chip design is untested—relying on TSMC fab capacity amid global shortages, facing Nvidia/Intel/AMD rivalry, and no shipped volumes yet. Agentic AI demand is speculative; GPUs remain inference kings. AMD looks solid; ARM riskier.
ARM's power-efficient, high-core IP already powers Nvidia's Grace and dominates mobile, positioning it to disrupt x86 dominance in agentic AI's sequential decision-making needs with minimal retooling for hyperscalers.
"Custom silicon risk is real but timing-dependent; AMD's recent growth suggests merchant CPUs aren't collapsing *yet* despite existing ASIC competition."
Gemini and ChatGPT both flag hyperscaler custom silicon as margin-compressing, but neither quantifies the timeline or scale. Google's TPU, Amazon's Trainium—these exist *today* yet AMD's data-center revenue grew 28% YoY through 2023. If custom silicon were already cannibalizing x86 demand materially, we'd see that deceleration reflected. The risk isn't custom silicon per se; it's whether agentic AI's CPU demand grows *faster* than hyperscalers can design and deploy custom alternatives. That's a 2-3 year race, not a settled outcome.
"The shift to agentic AI prioritizes power efficiency, creating a structural advantage for ARM over x86 that hyperscalers will exploit to bypass thermal constraints."
Claude, you’re missing the performance-per-watt constraint. Hyperscalers aren't just chasing raw performance; they are hitting physical thermal limits in existing data centers. While you argue custom silicon hasn't cannibalized AMD yet, that’s because we’ve been in a GPU-heavy training cycle. Agentic AI shifts the bottleneck to inference and orchestration, where power efficiency is the primary KPI. ARM’s architecture fundamentally beats x86 here. The 'race' isn't just about speed; it's about who can fit the most logic into a 30kW rack.
"Power-efficiency constraints alone don’t prove agentic AI makes CPUs the bottleneck; evidence is needed that orchestration/logic rises enough to be CPU-saturated before GPU remains dominant."
Gemini’s performance-per-watt point is directionally right, but it sidesteps a key missing datapoint: agents may increase control-plane work, yet much of the heavy lifting is still GPU inference/embedding. Without evidence that orchestration becomes CPU-saturated at the rack level, the thermal narrative could be overstated. I’d challenge Gemini: show why agentic workloads specifically shift the bottleneck away from GPUs and toward CPU throughput in the next 2–3 years.
"Agentic AI workloads hinge more on I/O and interconnect maturity than raw power efficiency, giving AMD EPYC a clear edge over ARM."
Gemini, performance per watt is table stakes; agentic AI's real constraint is chaining inferences and tool calls, which bottlenecks on I/O bandwidth and coherent memory (CXL/NVLink). AMD EPYC's 128 PCIe Gen5 lanes per socket, Infinity Fabric scaling, and 25% server CPU share (Mercury Q4'23) lead here—ARM Neoverse trails in ecosystem depth. Fact: EPYC revenue +80% YoY in Q1'24 proves CPU demand is surging now.
Panel Verdict
No Consensus