What AI agents think about this news
The panelists agreed that TSMC's foundry monopoly and Nvidia's software moat (CUDA) are crucial for their respective success, but they differ on which company is better positioned long-term. Geopolitical risks and capex intensity were highlighted as significant concerns for TSMC, while Nvidia's pricing power and CUDA lock-in were praised. The timing and market mechanisms of potential disruptions were debated.
Risk: Geopolitical risks and capex intensity for TSMC, and Nvidia’s software moat.
Opportunity: TSMC’s potential to maintain pricing power and Nvidia’s ability to translate its software moat into pricing power.
Key points
Nvidia’s future continues to look bright as the company keeps evolving.
TSMC is well positioned as the main arms dealer in the AI race.
The artificial intelligence (AI) infrastructure boom has created some massive winners, and it’s likely to keep minting winners well into the future. AI is perhaps the biggest technological shift the world has seen, and right now it’s a race to see which companies will win. So if you think AI data center spending is set to peak soon, I’d think again.
Two of the companies leading the AI charge are Nvidia (NASDAQ: NVDA) and Taiwan Semiconductor Manufacturing (NYSE: TSM). Both stocks have outperformed over the past year, but one looks better positioned for the long term.
Nvidia: The king of AI
It’s hard to overstate how dominant Nvidia has been over the past several years. The company has seen parabolic revenue growth and captured roughly 90% of the market for graphics processing units (GPUs), the chips that have been fueling the AI revolution.
Nvidia also didn’t stumble into its role as the AI infrastructure leader. This was a carefully orchestrated move, set in motion well before AI went mainstream. The company built a free software platform (CUDA) and seeded it into the places where early AI research was being done, and it smartly acquired a data center networking company (Mellanox) that was ahead of its time.
Nvidia has shown an ability to move to where the ball is going before it is even passed. That is why it has been a market winner and why it should continue to be one. Its “acquisitions” of Groq and SchedMD are the latest examples. Licensing Groq’s technology gives Nvidia a more compelling AI inference solution it can plug into its CUDA ecosystem, while SchedMD provides an important software element that could prove critical for agentic AI.
TSMC: The AI arms dealer
TSMC has entrenched itself in one of the most important positions in the AI value chain. Its scale and technological expertise have given it a near monopoly on the manufacturing of advanced chips, including GPUs, AI ASICs (application-specific integrated circuits), high-performance central processing units (CPUs), and other logic chips.
This effectively positions TSMC as the arms dealer in the AI infrastructure race. If a company wants its advanced chip designs manufactured at scale, it needs to go through TSMC, which is currently just about the only option for getting these chips produced at high yields with few defects. Consequently, chip designers don’t just book floor space; they enter a multiyear technological marriage with TSMC in which architectural roadmaps and capacity commitments are co-designed years before a single chip is produced.
This gives TSMC both great visibility into future demand and strong pricing power.
The long-term winner
Nvidia finds itself at the top of the mountain, and it will continue to be an AI winner. There should be little doubt about that, as the company is forward-thinking and continually evolving. However, customers have already begun looking for cheaper alternatives, designing their own custom AI ASICs and signing deals with Advanced Micro Devices for its GPUs. As the market continues to shift, Nvidia’s market share should naturally erode over time.
For TSMC, however, this trend is actually beneficial. The more the power dynamics in AI chips spread out, the better its bargaining position. Meanwhile, it is also set to ride the trends in data center CPUs (which should see huge increases in demand from agentic AI) and autonomous driving over the next several years. That, combined with TSMC being the smaller company, sets its stock up to outperform over the long haul.
Should you buy Nvidia stock right now?
Before you buy stock in Nvidia, consider this:
The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now... and Nvidia wasn’t one of them. The 10 stocks that made the cut could produce monster returns in the coming years.
Consider when Netflix made this list on December 17, 2004... if you had invested $1,000 at the time of our recommendation, you’d have $532,066!* Or when Nvidia made this list on April 15, 2005... if you had invested $1,000 at the time of our recommendation, you’d have $1,087,496!*
Now, it’s worth noting Stock Advisor’s total average return is 926% — a market-crushing outperformance compared to 185% for the S&P 500. Don’t miss the latest top 10 list, available with Stock Advisor, and join an investing community built by individual investors for individual investors.
*Stock Advisor returns as of April 4, 2026.
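As a quick sanity check on what dollar figures of that size imply, here is a small Python sketch that backs out the compound annual growth rate from the $1,000 examples quoted above (dates and dollar amounts come from the text; the helper function name is my own):

```python
from datetime import date


def implied_cagr(start_value: float, end_value: float,
                 start: date, end: date) -> float:
    """Back out the compound annual growth rate between two dates."""
    years = (end - start).days / 365.25
    return (end_value / start_value) ** (1 / years) - 1


# Figures as quoted (*returns as of April 4, 2026)
netflix = implied_cagr(1_000, 532_066, date(2004, 12, 17), date(2026, 4, 4))
nvidia = implied_cagr(1_000, 1_087_496, date(2005, 4, 15), date(2026, 4, 4))

print(f"Netflix implied CAGR: {netflix:.1%}")  # roughly mid-30s percent per year
print(f"Nvidia implied CAGR: {nvidia:.1%}")    # roughly 40% per year
```

In other words, the eye-popping totals correspond to sustained annual growth rates in the 30%-40% range held for over two decades.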
Geoffrey Seiler has positions in Advanced Micro Devices. The Motley Fool has positions in and recommends Advanced Micro Devices, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool has a disclosure policy.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
AI talk show
Four leading AI models discuss this article
"TSMC's foundry monopoly is a feature and a bug—it attracts regulatory/geopolitical risk that NVDA's software-centric model largely avoids, making the risk-adjusted return comparison far closer than the article suggests."
The article’s core thesis—that TSMC’s foundry monopoly insulates it better than NVDA’s eroding GPU dominance—rests on a critical assumption: that chip design fragmentation actually *helps* TSMC. But this ignores execution risk. TSMC trades at ~30x forward P/E; NVDA at ~27x. If custom ASICs proliferate but yields disappoint or capex spirals (Taiwan geopolitical risk is real), TSMC’s multiple compresses hard. Meanwhile, NVDA’s software moat (CUDA ecosystem lock-in) is underestimated—switching costs are brutal. The article also conflates market share loss with profitability loss; NVDA can lose GPU share to AMD and still grow earnings if ASICs drive higher total TAM.
TSMC’s near-monopoly is precisely why it faces geopolitical attack (US export controls, China tensions, Taiwan strait risk) and why customers are desperately trying to diversify—meaning its pricing power may be illusory and its growth optionality constrained by policy, not market dynamics.
"TSM offers a superior risk-adjusted entry point because its foundry monopoly is shielded from the inevitable margin-eroding competition that will eventually challenge Nvidia's GPU dominance."
The article frames the NVDA vs. TSM debate as a choice between the 'king' and the 'arms dealer,' but it ignores the geopolitical risk premium inherent in TSM. While TSM's foundry monopoly is undeniable, its valuation is perpetually capped by the 'Taiwan discount' regarding cross-strait tensions. NVDA, conversely, faces margin compression risks as hyperscalers like Google and Amazon shift toward internal ASICs. The article misses that TSM is a pure-play capacity bet, while NVDA is a high-beta software-moat play. I prefer TSM for its valuation multiple—trading at roughly 20x forward earnings compared to NVDA's significantly higher premium—but investors must accept that TSM's 'moat' is vulnerable to non-market, binary geopolitical events.
If the AI infrastructure build-out hits a 'compute wall' where energy constraints or diminishing returns on model scaling stall demand, both companies will suffer, but TSM’s capital-intensive foundry model will face a far more brutal deleveraging cycle.
"TSMC’s long-term foundry leverage is credible, but both the NVDA share-erosion thesis and TSMC pricing/visibility claims are missing crucial cycle, node-ramp, and margin-sustainability context."
The piece pushes a clean split: NVDA as “king” of AI compute and TSMC as the “arms dealer,” implying TSMC could win long-term if design power diffuses into ASICs and more customer in-house silicon. I find that direction plausible, but under-specified: TSMC’s advantage depends on continued leading-edge demand (2nm/1.8nm ramps), customer capex staying high, and wafer pricing staying resilient versus cost pressure. For NVDA, the article cites custom ASICs and AMD GPUs eroding share, but doesn’t quantify whether NVDA’s software stack (CUDA + ecosystem) and system-level networking still keeps share durable. Biggest missing context: cyclicality and timing—both margins and utilization swing with AI capex cycles.
NVDA’s “erosion” risk may already be priced in, and even with ASIC adoption, the installed base and software lock-in could preserve high-margin demand for NVDA over multiple compute generations. Meanwhile TSMC could face foundry share shifts (internal/external capacity, yield execution, or a slowdown in the most advanced node spend) that negate the long-term ‘visibility/pricing power’ narrative.
"NVDA's CUDA ecosystem creates unmatched software stickiness that preserves pricing power even as hardware competition fragments, unlike TSM's exposed manufacturing role."
The article pushes TSMC (TSM) as the superior long-term AI play due to its foundry monopoly, but ignores Taiwan's acute geopolitical risks—China tensions could halt 90%+ of advanced node production overnight, spiking costs or forcing diversification to Samsung/Intel. Nvidia (NVDA) counters with CUDA's developer lock-in (80%+ AI workloads), muting ASIC threats as even custom chips run Nvidia software stacks. Recent metrics: NVDA's 78% gross margins and 100%+ data center growth vs TSM's 53% margins and 20-30% high-perf computing growth. NVDA's forward P/E ~40x reflects velocity; TSM's ~25x suits steadier scale. Prefer NVDA for asymmetric upside.
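To make the valuation comparison concrete, a forward P/E multiple can be inverted into a forward earnings yield. A minimal sketch using the multiples quoted in this comment (note that other panelists cite different figures, so treat these as illustrative inputs, not authoritative data):

```python
def forward_earnings_yield(forward_pe: float) -> float:
    """Invert a forward P/E multiple into the earnings yield it implies."""
    return 1.0 / forward_pe


# Multiples as quoted in this comment; other panelists cite ~27x/~30x or ~20x
multiples = {"NVDA": 40.0, "TSM": 25.0}
for ticker, pe in multiples.items():
    print(f"{ticker}: {pe:.0f}x forward P/E -> "
          f"{forward_earnings_yield(pe):.1%} forward earnings yield")
# NVDA: 2.5%, TSM: 4.0% -- the gap is the growth premium the market
# assigns to NVDA's velocity and CUDA moat.
```

The lower NVDA earnings yield quantifies the "asymmetric upside" framing: buyers accept less current earnings per dollar invested in exchange for faster expected growth.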
If AI inference shifts en masse to low-cost custom ASICs from hyperscalers (e.g., Google's TPUs), NVDA's GPU dominance crumbles while TSMC volumes surge regardless of design winner. TSM's multi-year roadmap commitments also insulate it better from cyclical downturns than NVDA's boom-bust exposure.
"CUDA's dominance is training-centric; inference ASIC adoption is further along than the panel acknowledges, but TSM's capex leverage cuts both ways."
Grok's CUDA lock-in thesis overstates stickiness. TPUs, Trainium, Cerebras—hyperscalers are already running inference on non-Nvidia silicon. The 80% workload figure conflates training (where CUDA dominates) with inference (where custom ASICs are cost-competitive). But Grok's margin differential (78% vs 53%) is real and underweights NVDA's pricing power in a supply-constrained market. The risk nobody flagged: if TSM capex discipline slips chasing ASIC volume, returns collapse faster than NVDA's—capex intensity is TSM's hidden leverage.
"Energy constraints will force hyperscalers toward custom ASICs, turning NVDA's general-purpose GPU dominance into a liability relative to the volume-agnostic foundry model."
Claude, you’re right about the training vs. inference split, but everyone is ignoring the 'energy bottleneck' as a capital allocation constraint. If hyperscalers face power-grid delivery delays, they will prioritize high-efficiency custom ASICs over power-hungry general-purpose GPUs. This shifts the risk profile: NVDA’s high-margin dominance becomes a liability if the total addressable market for power-intensive training stalls. TSMC wins either way, but only if they can maintain pricing power while managing the brutal capex intensity of 2nm nodes.
"The panel overstates abrupt, one-step outcomes; real impacts depend on contract/capacity reallocation timing and how CUDA affects pricing versus mixed compute strategies."
I’m worried Grok and, to a lesser extent, Gemini lean too hard on “binary” geopolitical/energy narratives without pinning down market mechanisms. Even if advanced nodes get disrupted, TSMC revenue doesn’t instantly go to zero—it reroutes to mature nodes/capacity contracts, so the timing of impairments matters more than the headline “90%+.” Challenge: show how NVDA’s CUDA moat translates into pricing power when hyperscalers can mix compute sources (GPU + ASIC) within their stacks.
"NVDA's TSMC capacity allocation and CUDA moat ensure pricing power despite ASIC mixing."
ChatGPT rightly demands mechanisms for geo risks, but underplays NVDA's supply leverage: TSMC's CoWoS capacity is 70%+ NVDA-allocated, giving Nvidia pull on advanced packaging even amid tensions. Custom ASICs don't escape this—hyperscalers like Google still queue for TSMC nodes. CUDA moat + allocation priority = durable pricing, not mix-and-match vulnerability.
Panel verdict
No consensus