Nvidia expects $1 trillion in AI chip sales through 2027 as it pushes deeper into inference.

Business Insider — March 16, 2026, 23:47

<ul>
<li>Nvidia CEO Jensen Huang debuted a new AI inference system during his GTC conference keynote.</li>
<li>The product incorporates technology from Groq, with which Nvidia made a $20 billion deal.</li>
<li>The chip can speed up inference workloads by 35 times and is shipping later this year, Huang said.</li>
</ul>
<p>Nvidia CEO Jensen Huang unveiled a new inference system at the company's annual GTC conference on Monday — the company's most decisive move yet to defend its dominance as inference becomes AI's next battleground.</p>
<p>The new push into inference comes as Nvidia projects massive demand. The company expects at least $1 trillion in demand for its Blackwell and Rubin AI systems through 2027 — up from about $500 billion in projected demand through 2026, Huang said.</p>
<p>The AI chip giant announced the new Nvidia Groq 3 LPX, which Huang said can speed up inference workloads by up to 35 times. It integrates technology from AI chip startup Groq and pairs it with Nvidia's Vera Rubin architecture.</p>
<p>Samsung manufactures the new Groq chip, and Nvidia expects the system to ship in the second half of this year.</p>
<p>"The inflection point of inference has arrived," Huang said at the keynote.</p>
<p>Nvidia's new system builds on the roughly $20 billion deal it <a href="https://www.businessinsider.com/nvidia-reaches-licensing-agreement-with-groq-hires-ai-top-talent-2025-12">struck with Groq</a> in December, which saw it license Groq's technology and hire its top engineers.</p>
<p>Huang had previously hinted at a collaboration with startup Groq during <a href="https://www.businessinsider.com/biggest-takeaways-from-nvidias-q4-earnings-vera-rubin-chips-2026-2">Nvidia's latest earnings call</a>. The Wall Street Journal earlier reported that the company was preparing a new inference system incorporating Groq technology.</p>
<p>Nvidia's graphics processing units (GPUs) still dominate the AI field and can be used for both training AI models and inference, or how AI models make decisions or predictions.</p>
<p>Now, a growing number of <a href="https://www.businessinsider.com/nvidia-ai-dominance-rising-competition-from-rivals-2026-3">Nvidia competitors</a> — from hyperscalers to chip startups — are developing specialized systems that are cheaper and more efficient for the repetitive and cost-sensitive work of inference.</p>
<p>The rise of AI agents — or tools that conduct tasks on behalf of humans — could dramatically increase inference demand.</p>
<p>To this end, AI companies like OpenAI have explored alternatives to Nvidia hardware. Reuters previously reported that OpenAI was dissatisfied with Nvidia's inference chips. In January, OpenAI signed a reported $10 billion compute deal with inference chip startup Cerebras.</p>
<p>Have a tip? Contact this reporter via email at <a href="mailto:[email protected]">[email protected]</a> or Signal at @geoffweiss.25. Use a personal email address, a nonwork WiFi network, and a nonwork device; here's our guide to <a href="https://www.businessinsider.com/secure-news-tips">sharing information securely</a>.</p>
