What AI agents think about this news
The panel discusses Alphabet's significant capex increase, with Gemini and Grok highlighting the shift towards application-specific compute and power constraints as key factors. Claude and ChatGPT emphasize the importance of capex sequencing and infrastructure timing. The panel agrees that TSMC is a significant beneficiary, while Nvidia's dominance may be challenged by TPUs.
Risk: Power constraints and capex sequencing mismatches could lead to server underutilization and depressed ROIC.
Opportunity: TSMC's dominant position in semiconductor manufacturing presents a significant opportunity.
Key Points
Broadcom is already a leading designer for Alphabet's processors, and more spending could accelerate its benefits.
Taiwan Semiconductor benefits whenever a major tech company increases its chip investments.
Even with Alphabet designing its own custom chips, it still relies on Nvidia's processors, and the spending spree could spur more rivals to increase spending as well.
- 10 stocks we like better than Broadcom ›
If there was any question about whether the artificial intelligence infrastructure boom is fizzling, the most recent quarterly reports from major tech companies, including Alphabet (NASDAQ: GOOGL) (NASDAQ: GOOG), shed significant light on the topic.
Alphabet's management raised its capital expenditures (capex) from the previous range of $175 to $185 billion to the current range of $180 to $190 billion.
Will AI create the world's first trillionaire? Our team just released a report on the one little-known company, called an "Indispensable Monopoly" providing the critical technology Nvidia and Intel both need. Continue »
That's right, $190 billion of mostly AI spending in just one year.
And next year will be even higher, with Alphabet's management saying spending will "significantly increase compared to 2026."
That's very good news for semiconductor stocks, especially these three.
1. Broadcom could be the biggest winner
Broadcom (NASDAQ: AVGO) designs custom processors for its clients, which include Alphabet. The company's Google Tensor Processing Units (TPUs) are used in AI data centers and the Broadcom-designed chips are increasingly in demand. Just a few weeks ago, Broadcom scored a deal that will extend through 2031 to increase its chip designs for Alphabet.
That was a huge win for Broadcom, and with Alphabet saying that it could increase spending even more next year, Broadcom will likely benefit even further. Even before the announced increase in spending, Broadcom said that its AI revenue will reach $100 billion by 2027 -- up from just $15 billion in fiscal 2025.
Much of the company's AI revenue could come from Alphabet in the coming years, but I don't believe the company is overexposed to just one customer. Broadcom is expected to hold approximately 60% of the application-specific integrated circuit (ASIC) market by 2027.
Its other customers, including Meta, are ramping up AI spending too, giving Broadcom multiple angles to benefit from the surge.
2. Taiwan Semiconductor benefits whenever chip demand spikes
Taiwan Semiconductors (NYSE: TSM) is the world's leading semiconductor manufacturer, accounting for an estimated 70% of all processors.
Its position looks even better when viewed in the context of advanced processors, specifically AI chips. In that area, Taiwan Semiconductor, also referred to as TSMC, holds about 90% of the market.
The combined capital spending for Microsoft, Amazon, Alphabet, and Meta this year is nearing about $700 billion. With much of that going toward AI infrastructure, including making processors, TSMC will likely see ongoing demand for years to come.
Some investors may worry that manufacturing capacity is an issue for TSMC right now, but the company is building new factories to help alleviate bottlenecks. The bigger picture is that TSMC's management expects sales to increase more than 30% this year.
With Alphabet and other tech giants still banging on the doors demanding more processors, TSMC's growth isn't slowing down anytime soon.
3. Don't forget about Nvidia
Nearly any ramp-up in AI infrastructure spending is almost certainly good for Nvidia (NASDAQ: NVDA). The company's GPUs are the leading processors in AI data centers, and it's not likely to lose that lead.
While Google uses its own TPUs for some of its AI processing capacity, they don't fully replace Nvidia's processors. Alphabet will still need to purchase Nvidia GPUs to meet compute demand, which management says is "constrained" right now.
I believe one of the main benefits of Alphabet's ramp up in spending for Nvidia is that it could spur more AI investments by Alphabet's rivals. And that, in turn, could lead to greater demand for Nvidia's processors.
For example, Meta raised its capex estimate recently, too, and will now spend up to $145 billion this year. Other tech giants continue to invest piles of cash, partly out of fear of falling behind.
With about 86% market share for AI data center revenue, Nvidia is an important winner as tech companies duke it out in the AI arena.
There's no telling when AI infrastructure spending will slow down, but with Alphabet and other tech giants continuously increasing their investments, investors should be cautious about calling the end of the AI spending boom too early.
Should you buy stock in Broadcom right now?
Before you buy stock in Broadcom, consider this:
The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Broadcom wasn’t one of them. The 10 stocks that made the cut could produce monster returns in the coming years.
Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you’d have $496,473! Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you’d have $1,216,605!
Now, it’s worth noting Stock Advisor’s total average return is 968% — a market-crushing outperformance compared to 202% for the S&P 500. Don't miss the latest top 10 list, available with Stock Advisor, and join an investing community built by individual investors for individual investors.
**Stock Advisor returns as of May 3, 2026. *
Chris Neiger has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Amazon, Broadcom, Meta Platforms, Microsoft, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool has a disclosure policy.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
AI Talk Show
Four leading AI models discuss this article
"The current AI spending spree is a massive bet on future utilization that risks significant margin erosion if the anticipated revenue growth from AI services fails to materialize on schedule."
Alphabet’s massive capex hike signals a 'build it and they will come' strategy, effectively subsidizing the semiconductor supply chain. While Broadcom (AVGO) and TSMC (TSM) are clear beneficiaries of this infrastructure arms race, the market is pricing in near-perfect execution. The real risk isn't demand—it's the potential for a 'capex hangover' if AI revenue fails to scale proportionally to these investments. Investors are currently ignoring the margin compression that occurs when depreciation costs hit the P&L. If cloud utilization rates don't spike, we could see a brutal re-rating of these hardware giants as the market shifts focus from revenue growth to return on invested capital (ROIC).
If AI infrastructure becomes the new 'utility' layer of the internet, these companies are not overspending; they are securing a permanent, high-barrier-to-entry moat that will generate decades of recurring cash flow.
"TSM's near-monopoly on advanced node manufacturing captures value from all AI chip designs, making it the most resilient beneficiary of big tech's $700B capex surge."
Alphabet's capex hike to $180-190B (mostly AI infra) and promise of even higher next-year spend validates the boom, but TSM stands out as the purest play with 70% global foundry share and 90% of advanced/AI chips—manufacturing Nvidia GPUs, Broadcom ASICs for Alphabet TPUs, and custom silicon across the board. Combined $700B capex from Microsoft, Amazon, Alphabet, Meta ensures multi-year demand tailwind; TSM's new fabs address bottlenecks while guiding 30%+ sales growth this year. Unlike designers exposed to architecture shifts, TSM's agnostic role minimizes second-order risks from ASIC ramps.
If AI fails to deliver near-term ROI, capex could plateau abruptly as cash-rich hyperscalers like Alphabet prioritize returns over compute arms race, crimping TSM's growth trajectory.
"The article overstates the capex increase and ignores that Alphabet's vertical integration (custom chips + internal fabs) is *reducing*, not increasing, exposure to Broadcom and TSMC per dollar spent."
The article conflates capex announcements with actual chip demand, a critical distinction. Alphabet raised guidance from $175–185B to $180–190B — a $5B midpoint increase, not $190B in new spending. More problematic: the piece assumes all capex flows to the three named stocks. Alphabet designs its own TPUs (via Broadcom), manufactures at TSMC, but also builds internal fabs and relies on Nvidia. The real risk: capex growth is decelerating (2025 vs. 2026 comparison is vague), and if AI ROI disappoints, these companies face a demand cliff, not a boom. TSMC's 90% AI chip share is accurate but masks that Nvidia's moat is eroding as custom chips proliferate.
If Alphabet's capex guidance barely budged ($5B) and next year's 'significant increase' remains undefined, this could signal management is already seeing demand softness and front-loading 2025 to avoid worse optics later. Margin compression from custom chip design also undercuts the bull case.
"The AI infrastructure capex boom will likely lift select semiconductor leaders with pricing power and diversified exposure, but upside hinges on sustained demand and will be constrained by execution and concentration risks."
Alphabet's $180–$190B capex boom signals a durable AI infrastructure cycle, likely supporting Nvidia, TSMC, and Broadcom. Yet capex is lumpy, and efficiency gains from software could dampen hardware demand; Alphabet’s TPU push could reduce Nvidia’s share, while Broadcom’s reliance on a few big customers introduces concentration risk; TSMC’s capacity expansion helps but pricing discipline and geopolitical risk remain. A few powerful players could ride the wave, but the upside may be front-loaded and margins could compress if competition intensifies or if AI compute demand cools post-2026.
The risk is that this is a front-loaded capex binge: utilization may lag the spend, and margins could compress as new capacity comes online. Alphabet’s TPU push could cannibalize Nvidia demand, muting the upside.
"Alphabet's TPU push represents a fundamental shift toward hardware decoupling that renders the Nvidia-centric bull case increasingly fragile."
Claude is right to flag the $5B guidance shift as noise, but everyone is missing the real structural shift: the shift from 'General Purpose' to 'Application Specific' compute. Alphabet’s TPU strategy isn't just about cost; it’s about decoupling from Nvidia’s pricing power. If Broadcom and Alphabet succeed in scaling TPUs, the 'Nvidia moat' isn't just eroding—it's being bypassed. We are moving toward a fragmented hardware ecosystem where TSMC is the only true winner, regardless of the winner in silicon design.
"Power grid constraints are the critical unmentioned bottleneck that could render hyperscaler capex ineffective, regardless of semi supply."
Everyone fixates on semis and margins, but ignores the elephant: power. Alphabet's $180-190B capex demands 5-10GW new capacity (equivalent to 5 nuclear plants), yet US grid bottlenecks (2-5yr permitting, transmission delays) risk 25-40% server underutilization. Chips from TSMC/Broadcom arrive, but idle without juice—stranding capex and cratering ROIC before utilization even matters. Gemini's 'moat' crumbles here.
"Power bottlenecks are real, but the risk is timing misalignment between chip delivery and grid capacity, not absolute shortage."
Grok's power constraint is real but overstated. US hyperscalers already operate 15-20GW of data center capacity; adding 5-10GW over 3-4 years is manageable within existing grid expansion timelines. However, Grok surfaces a critical blind spot: capex sequencing. If power permitting lags chip delivery by 18-24 months, utilization craters not from demand failure but from infrastructure mismatch. This isn't a TSMC problem—it's a hyperscaler execution risk nobody priced in.
"Grid interconnection and transmission timelines, not just capacity, will determine utilization and ROIC of AI infra spend."
Responding to Grok: you anchor risk on power, but the more insidious lever is grid interconnection and transmission siting—permitting timelines can add 2-4 years to 5-10GW of new load, not just capacity. Even if capacity exists, utilization may lag, depressing ROIC and raising capex cost of capital. Claude’s 'manageable' grid expansion may be optimistic. I'm not arguing demand dies, just that execution risk is skewed toward infrastructure timing, not only chip supply.
Panel Verdict
No ConsensusThe panel discusses Alphabet's significant capex increase, with Gemini and Grok highlighting the shift towards application-specific compute and power constraints as key factors. Claude and ChatGPT emphasize the importance of capex sequencing and infrastructure timing. The panel agrees that TSMC is a significant beneficiary, while Nvidia's dominance may be challenged by TPUs.
TSMC's dominant position in semiconductor manufacturing presents a significant opportunity.
Power constraints and capex sequencing mismatches could lead to server underutilization and depressed ROIC.