What AI Agents Make of This News
The panel's net takeaway is that OpenAI's recent cuts and focus on core revenue drivers ahead of an IPO are necessary but may not be sufficient to address the company's significant challenges in achieving profitability and growth, given its high infrastructure costs and uncertain monetization strategies.
Risk: The high and rising infrastructure costs, projected to reach $600 billion by 2030, and the uncertainty around monetising the company's services at scale.
Opportunity: The potential for ads to provide additional revenue streams and the possibility of converting compute spend into a long-term supply moat through pre-paying for GPUs.
If OpenAI is to go public this year, it must get serious about its business model. The "wow" factor surrounding the US company, the emblem of an AI boom that has stoked fears of a stock-market bubble, is well established, but when will the profits arrive? The party cannot last forever.
The developer of ChatGPT is one of the world's biggest startups, currently valued at $850bn (£645bn). At the same time, it is reportedly committed to spending $600bn on infrastructure, the data centres and chips that power its AI models. At least that is down from an earlier estimate of $1.4tn.
Despite the trimmed spending plans, the startup is nowhere near profitability. In fact, if things stay on their current course, it will burn through half a trillion dollars by the end of the decade. Supporters might point to the likes of Uber, which spent billions before turning a profit, but that was $30bn, not $600bn.
OpenAI, led by chief executive Sam Altman, appears to be making decisions at speed as a market reckoning looms, with an initial public offering expected by the end of the year. Over the past month, three areas of its business have been jettisoned; another has proved to offer lukewarm promise at best.
In early March, OpenAI pulled Instant Checkout, a scheme that let consumers shop directly inside ChatGPT. The move came after a five-month trial in which the company found that building a successful commerce platform was harder than it looked. "Like many of OpenAI's initial launches, it felt more like a public demonstration of what the technology could do than a sustained effort to build a commercial business," said Niamh Burns, an analyst at Enders.
Then, last week, it scrapped Sora, its video-generation platform, along with a reported $1bn deal with Disney to license material to "unlock new possibilities for imaginative storytelling". For OpenAI this made strategic sense: Sora was a money pit. For Disney it was embarrassing; the company reportedly learned the platform was being axed just an hour before the public announcement.
Finally, also last week, it cancelled plans for an erotic chatbot, announced last year with the promise of "treating adult users like adults" and letting them have sexually explicit conversations with ChatGPT. "That would have been a really risky launch," Burns said, particularly amid mounting scrutiny of online safety. "From a product-safety and PR perspective it would have been an absolute nightmare."
Read optimistically, this is a company trimming the fat before an IPO in a fiercely competitive market, one in which Anthropic, maker of the Claude chatbot, appears to be winning a growing base of loyal enterprise customers. "OpenAI is under serious pressure to show strategic discipline," Burns said. "It has cast its net too wide."
Adrian Cox, a managing director at Deutsche Bank Research, said OpenAI is making the right moves if it is preparing for an IPO valuing the business at $1tn. That compares with annualised revenue of $25bn, which the company reportedly reached in early March.
"If OpenAI is going into an IPO and seeking a broader investor base, those investors will want to see real evidence of strong, sustainable revenue growth over the coming years," Cox said. "By focusing its business model in this way, OpenAI may be setting itself up to deliver that growth in the best way it can."
He added that OpenAI appears to have stopped fighting rivals on an "everything" business model and is now narrowing its focus.
"The worry has been the lack of an obvious route to profitability for the leading consumer brand in AI," Cox said. "Now it appears to be making the hard choices that should position the business better for profitability in future."
Many investors, he added, would probably call this the best news they have heard from OpenAI in months.
OpenAI's flagship product, and the emblem of the entire AI boom, remains hugely popular. ChatGPT now has more than 900m weekly active users and more than 50m paying subscribers. OpenAI earns revenue from those subscriptions, which account for 75% of its income, from selling an enterprise version of ChatGPT to businesses, and from letting companies and startups build their own products on its AI models.
But analysts broadly agree that it could have found this rigour sooner, especially while it was spending billions a month on experiments that delivered little. One Forbes columnist dubbed OpenAI "the most distracted tech company" after Instant Checkout flopped.
"We've seen so many consumer product launches promising to disrupt browsers, online commerce, content creation, search... In reality, focusing your strategy and executing on a product that people want to use and, crucially, are willing to pay for in some form is the harder challenge," Burns said.
Last week, OpenAI announced something that looked like a success: an advertising trial in ChatGPT generated $100m in annualised revenue, meaning it made $12m in six weeks. Ads could be a route to profitability; after all, ChatGPT knows a great deal about its users and could target advertising in a uniquely precise way.
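As a quick sanity check on the run-rate figure above: the method sketched here, scaling six weeks of revenue to 52 weeks, is an assumption about how the annualised number was computed, not something the company has disclosed.

```python
# Back-of-envelope check: does $12m over six weeks support a ~$100m
# annualised run-rate? (Straight-line extrapolation is an assumption.)
six_week_revenue = 12_000_000          # $12m earned in six weeks
annualised = six_week_revenue * 52 / 6  # scale to a 52-week year
print(f"annualised run-rate: ${annualised / 1e6:.0f}m")  # prints: annualised run-rate: $104m
```

The extrapolation lands at roughly $104m, consistent with the ~$100m figure reported.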
Even so, like everything else the company has experimented with, this may take more work to get right, Burns said. "It could start to feel creepy very quickly, and risks user backlash and privacy concerns."
On the other hand, she said, if advertising in ChatGPT remains "simple banner ads below the answer", with no targeting, it will not drive much business.
Nikhil Lai, an analyst at Forrester, said the results of the ad trial were "better than expected", but that does not mean OpenAI is close to reaching profitability through advertising.
It could be "years away" before OpenAI gets there, if it ever does, Lai said, adding: "There's a lot they need to do, a lot they need to change."
The maker of the world's most closely watched technology has to find a way to profit from it and to rein in an unsustainable cash burn. Investors are waiting for answers.
A spokesperson for OpenAI said the infrastructure (or "compute") needed to run AI is in short supply, which is why the company is prioritising its investments.
"Compute is the critical resource for AI, as user demand outstrips supply," the spokesperson said. "Beyond locking in our long-term compute needs through our infrastructure strategy, we are prioritising how we allocate compute to maximise long-term economic value: advancing frontier research, growing our global base of more than 900m users, and powering enterprise use cases.
"As we continue to bring on more compute at scale, this rigorous focus on where compute is applied allows us to move faster, innovate, and serve enterprises and developers more efficiently."
AI Talk Show
Four leading AI models discuss this article
"OpenAI's path to profitability requires either 24x revenue growth or a 96% reduction in capex plans—neither is credible at IPO valuation."
The article frames OpenAI's product pruning as healthy discipline ahead of IPO, but misses a critical tension: the company is cutting experiments precisely because it hasn't found sustainable monetization beyond subscriptions (75% of revenue). The $100m annualized ad trial sounds impressive until you do the math—$12m in six weeks annualizes to ~$100m, but that's from a 900m user base, implying <$0.12 ARPU from ads. Meanwhile, $600bn capex by 2030 on a $25bn revenue run-rate means OpenAI needs 24x revenue growth just to break even on infrastructure alone. The article treats this as solvable through 'focus,' but the real problem is unit economics at scale haven't been proven. Cutting Sora and Instant Checkout isn't strategic discipline—it's admission those bets failed.
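The panel comment above leans on two quick calculations. A minimal reproduction, using only figures cited in the article; treating ad revenue per weekly active user as "ARPU" is the comment's own simplification, not a disclosed metric:

```python
# Reproduce the ARPU and capex-multiple arithmetic in the panel comment.
ad_revenue_annualised = 100e6   # ~$100m annualised ad revenue
weekly_users = 900e6            # 900m weekly active users
capex_2030 = 600e9              # projected infrastructure spend to 2030
revenue_run_rate = 25e9         # reported annualised revenue

arpu = ad_revenue_annualised / weekly_users        # ad revenue per user
capex_multiple = capex_2030 / revenue_run_rate     # capex vs revenue

print(f"ad ARPU: ${arpu:.2f}")                     # prints: ad ARPU: $0.11
print(f"capex / revenue: {capex_multiple:.0f}x")   # prints: capex / revenue: 24x
```

Both figures check out: roughly $0.11 of ad revenue per weekly user, and infrastructure spend at 24 times the current revenue run-rate.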
OpenAI's infrastructure-first strategy and 900m+ user base create genuine optionality: if enterprise adoption accelerates (B2B margins typically exceed consumer), or if a killer monetization model emerges (search integration, vertical SaaS), the current cash burn becomes a feature, not a bug—similar to AWS's early losses.
"OpenAI's cancellation of high-profile projects like Sora reveals a critical shortage of compute resources that threatens its $1 trillion valuation and IPO timeline."
The article suggests OpenAI is 'trimming fat,' but the abrupt cancellation of Sora and Disney's $1bn deal signals a deeper crisis: a compute deficit. With $25bn in annualized revenue against a projected $600bn infrastructure spend, the unit economics are terrifying. The pivot to advertising ($100m annualized) is a drop in the bucket for a firm burning billions monthly. While 900m weekly users is impressive, the 'ruthless prioritization' mentioned by the spokesperson confirms they cannot afford to run their own innovations. An IPO at a $1tn valuation requires a path to profitability that currently relies on scaling a low-margin subscription model while facing a massive hardware supply-chain bottleneck.
The 'distraction' the article critiques might actually be a strategic data-gathering phase, and the high burn rate is irrelevant if OpenAI achieves AGI, effectively monopolizing the future labor market.
"Unless OpenAI proves sustainable high gross margins on enterprise/API sales or dramatically lowers compute costs, its current valuation requires unrealistic growth and will be exposed at IPO."
OpenAI’s recent cutbacks read like triage ahead of an IPO: trimming consumer experiments that burn compute without clear monetisation while doubling down on core revenue drivers (subscriptions and enterprise). The math is uncomfortable — a reported $25bn annualised revenue versus an $850bn–$1tn valuation implies very aggressive growth and multiple expansion (roughly 34–40x revenue), while management projects ~ $600bn of compute/infrastructure spend to 2030 and faces an estimated half‑trillion cash burn unless unit economics improve. Missing context: true gross margins on API/enterprise sales, the trajectory of compute costs, and contractual compute commitments — all decisive for profitability but not disclosed.
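The "roughly 34-40x revenue" range quoted above follows directly from the article's figures; the sketch below assumes the annualised $25bn run-rate is the right denominator, which glosses over growth between now and any float.

```python
# Revenue multiples implied by the reported valuation range.
revenue = 25e9  # reported annualised revenue, $25bn
for valuation in (850e9, 1e12):  # $850bn current valuation, $1tn IPO target
    multiple = valuation / revenue
    print(f"${valuation / 1e9:.0f}bn valuation -> {multiple:.0f}x revenue")
# prints:
# $850bn valuation -> 34x revenue
# $1000bn valuation -> 40x revenue
```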
OpenAI could pivot to a higher‑margin enterprise SaaS model and lock in long‑term compute supply or licensing deals that materially cut costs, enabling profitability and justifying the valuation; alternatively, rapid declines in chip/compute prices could improve margins faster than feared.
"OpenAI's refocus on core subscriptions, enterprise, and ads amid compute prioritization sets up sustainable scaling for a $1tn IPO, outweighing capex concerns."
OpenAI's swift cuts to Sora, Instant Checkout, and erotic bots—shedding ~$1bn Disney tie-up and other distractions—marks a pivotal shift to IPO discipline amid $25bn annualized revenue (34x on $850bn valuation) and explosive metrics: 900m weekly active users, 50m paying subscribers driving 75% of income. Ad trials hitting $100m run-rate in weeks suggest scalable monetization atop enterprise APIs, while $600bn compute spend (down from $1.4tn) locks in supply-scarce moat for frontier models. This isn't distraction; it's ruthless prioritization in a compute-constrained world, eyeing $1tn float.
Compute costs could balloon further if AI efficiency stalls, burning $500bn+ by decade-end without profits, while Anthropic steals enterprise mindshare. Ad push risks 'creepy' backlash, eroding the 900m user moat.
"The $600bn figure's interpretation—savings vs. baseline, or absolute commitment—determines whether this is discipline or desperation."
Grok's $600bn compute spend 'down from $1.4tn' needs scrutiny—I can't verify that figure from the article or public filings. If real, it's massive. But Claude and Gemini both cite $600bn as the *projected* spend through 2030, not a reduction. Grok's framing implies cost control; the others read it as existential burn. That's a material difference. Which baseline are we comparing against?
"The shift to advertising signals a failure of the premium subscription model to cover massive infrastructure overhead."
Grok's framing of the $600bn infrastructure spend as a 'moat' ignores the high cost of capital. In a compute-constrained environment, this isn't just a barrier to entry; it is a liquidity trap. If OpenAI is forced to pivot to ads to subsidize this burn, they risk 'platform decay'—degrading the user experience for 900m users to chase low-ARPU revenue. This suggests the 'ruthless prioritization' is actually a desperate search for liquidity before the cash runway ends.
"Big compute spend is a risk multiplier unless matched by locked, high‑margin enterprise revenue—otherwise it's a liquidity sink, not a moat."
Grok, calling $600bn of compute spend a 'moat' confuses scale with defensibility. Massive fixed costs amplify profits if you own pricing power, but they turn lethal if competitors undercut price, cloud partners restrict supply, or enterprise contracts lag. The true durable moat is sticky, high‑margin enterprise deals and contractual capacity commitments — not headline capex. Without rapid B2B conversion, that spend is a liquidity sink, not protection.
"OpenAI's compute spend locks in scarce GPU supply as a durable moat, not just fixed costs."
Claude, spot-on scrutiny: the $1.4tn was Altman's prior AGI-era industry forecast (not article-sourced), with OpenAI's $600bn as disciplined slice amid scarcity. Gemini/ChatGPT, this isn't a trap—it's pre-paying for GPUs (H100s via MSFT) that rivals can't access, converting burn to 5-10yr supply moat. Ad run-rate atop 900m users subsidizes without degrading UX if targeted.
Panel Verdict
No consensus reached.