AI智能体对这条新闻的看法
The panel consensus flags escalating regulatory risks for Meta and Google, driven by the 'influence stack' amplifying high-arousal content. This includes demands for transparency tools, algorithm disclosures, and moderation overhauls, potentially costing billions. Advertiser boycotts and brand safety erosion pose additional threats, with the shift towards direct-response performance advertisers further complicating the landscape.
风险: Escalating regulatory costs and potential revenue-sharing mandates due to increased transparency demands.
机会: None explicitly stated in the discussion.
现代影响力运作方式,第一部分:新的影响力堆栈
由查尔斯·戴维斯(Charles Davis)通过《大纪元时报》(The Epoch Times)撰写,
在一个星期二的晚上,在宿舍房间里,一个学生为了“休息五分钟”打开 TikTok。
第一个片段是瓦砾和警笛的蒙太奇。
第二个是一个教授风格的讲解,配有整齐的字幕,传达了一个单一的道德结论。
第三个是另一个校园里对抗的模糊手机视频——叫喊声、警灯、像天气一样涌动的人群。
这个学生没有搜索任何这些内容。
他们甚至没有关注这些账号。
信息流已经自信地告诉你什么是重要的。
这就是我们这个时代的政治技术:这个系统每天决定你接下来看到什么数千次。
影响力堆栈
在过去的大部分一个世纪里,影响力意味着广播。你购买报纸,播放广播广告,印刷传单,在城镇广场上争论。反馈速度慢,间接且昂贵。
今天,影响力运行在一个不同的堆栈上。它是微目标定位——弄清楚要定位人口的哪一部分。它是推荐分发——确定将什么内容放置在目标群体面前以及什么顺序。它是效果衡量——观看时间、重看、滚动犹豫、评论、分享。并且是迭代——快速调整有效的内容并丢弃无效的内容。
一旦这些部分锁定在一起,说服就不再像一场政党辩论。它看起来像一个恒温器:感知房间,轻微调整温度,再次感知。
微目标定位并非始于 TikTok
微目标定位比智能手机信息流更古老。
竞选活动长期以来一直将选民档案与消费者和人口数据合并,然后针对特定细分市场定制宣传。真正改变的是,尤其是在 2010 年代初,速度:在事件仍在展开时看到什么有效的能力。
奥巴马竞选团队 2012 年的数字行动为旧世界和当前世界之间提供了一个有用的桥梁。他们的团队几乎实时地观察网络行为并将其用于快速响应。在总统辩论期间,当当时的马萨诸塞州州长米特·罗姆尼(Mitt Romney)说“装满女性的文件”时,竞选团队立即购买了与该短语相关的搜索广告并链接到事实清单;竞选团队的数字主管描述了“搜索该术语的用户流量和参与度立即增加”。
这并不是 TikTok。它仍然是开放的网络——搜索、广告、登录页面。但这种转变表明了一种新的逻辑:观察行为的发生,然后转移注意力,在故事冷却之前。趁热打铁。
算法平台将该循环工业化。微目标定位不是关于“谁收到哪张邮件”。它变成了一个实时系统,与分发和反馈相连接。不同的受众群体可以看到相同的现实的针对性版本,并且该系统会学习——大规模地——每个群体如何响应。
而且“响应”不需要明确的同意。它可以是注意力、唤醒和波动:多观看两秒钟,重看,愤怒地输入并发布评论,分享到群聊。
排名系统不仅反映偏好。它们也在塑造它。
我们不必猜测排名是否会改变人们所看到的内容。研究人员已经在平台内部对其进行了测试。
发表在《美国国家科学院院刊》(PNAS)上的一项大规模研究利用了对 X(当时被称为 Twitter)进行的一次“大规模随机实验”,将一个随机对照组——近两百万的每日活跃账户——分配到一个“没有算法个性化”的倒序时间线,正是为了衡量排名的影响。作者报告了在多个国家/地区政治行为者之间的“算法放大”的明显差异。
关键在于:排名是一种干预。当一个系统对内容进行排序时,它决定了什么变得重要,什么对特定群体来说感觉很常见,什么显得紧迫,什么会消失。即使公司内部没有人撰写宣言,政治权力也可以出现。信息流训练用户。它是一个环境,而环境塑造行为。
这也是为什么公众辩论常常错失重点的原因。
人们争论得好像唯一的问题是平台是否“审查”某个观点或“散布宣传”。这些担忧很重要。它们只是坐落在更深层次的机制之上:简单的排名行为,重复数十亿次,改变了社会谈论的内容。
衡量:隐藏的力量是仪表盘
影响力堆栈由仪表盘提供支持。
广播员可能需要几周的时间才能知道一条信息是否有效。平台可以在几分钟内知道一个片段是否增加了特定地点、特定时间、在经过战略性设置的先前视频序列之后,19 岁人群的保留率。
这创造了一种旧机构无法比拟的说服能力:对人类注意力的快速实验。内容变成了一个假设。观众变成了活生生的实验室。该系统保留有效的内容。
大学一个学期更新一次政策。新闻编辑部在几天内调整框架。立法机构在几个月内推进。信息流的范围和重点可以在午餐前调整。
为什么愤怒在循环中获胜
关于影响力堆栈的一个严峻的真相是,并非所有情绪都能以相同的方式通过它传播。高唤醒情绪移动得更快,因为它们会促使人们采取行动。
在分享的里程碑式研究中,乔纳·伯杰(Jonah Berger)和凯瑟琳·米尔克曼(Katherine Milkman)发现,病毒性与生理唤醒有关:引发高唤醒情绪的内容,包括愤怒和焦虑,比引发低唤醒情绪的内容(如悲伤)更可能传播。
政治增加了另一个加速器:道德情绪。一项 PNAS 研究分析了大量社交媒体辩论的数据集,发现道德情绪语言会增加传播;在他们的样本中,消息中的每个额外的道德情绪词都与分享的显著增加相关。
而且愤怒在网络环境中具有特别的优势。对微博的计算分析发现,愤怒比快乐更“具有传染性”,并且更能沿着较弱的社会联系传播——这意味着它可以超越紧密的群体并蔓延到更广泛的社区。
将这些放在一起,目标定位逻辑就几乎是机械的了。愤怒让人们继续观看。它增加了他们分享的可能性。它倾向于从本地集群扩展到更广泛的网络。在优化的参与系统中,愤怒不仅仅是一种感觉。它是一种分发优势。
迭代:如何将谈话要点作为优化的主题回归
然后是旧的广播技巧——重复的短语、标语、谈话要点——以新的形式重新出现。
在电视新闻中,主题起作用是因为重复会使想法感觉很常见。在影响力堆栈中,系统会测试变体。它会监控保留曲线,观察分享速度和评论强度。幸存下来的短语是那些传播并固化成感觉“无处不在”的口号,因为该平台已经学会了“无处不在”的确切位置。
这就是道德框架成为传输机制的方式。一个简短的短语易于添加字幕、添加标签、缝合和混音。系统也易于识别并将其路由到历史上对该情绪键做出反应的受众。
验证问题
影响力堆栈的第二个政治事实是,外部人员难以实时验证正在发生的事情。
平台指出透明度和研究人员的访问权限。虽然这些计划很有意义;有时它们滞后于事件的速度。影响力堆栈的优势是在缓慢监督的世界中实现速度。当你无法看到整个系统——分发权重、降级规则、推荐路径、执行决策——时,你就无法可靠地将有机浪潮与算法放大的浪潮分开,或者评估干预措施是否是中立的或不对称的。
本系列将要做什么
在接下来的几期中,我们将沿着堆栈向上走。
我们将检查情绪识别以及为什么即使有缺陷的情感推断在机构将输出视为真理时也可能很危险。我们将研究中国的运营模式——身份解析加上传感器覆盖加上数据融合——以及为什么架构比任何单个传感器都重要。我们将把 TikTok 视为一个迭代速度快且验证困难的分发层。然后我们将把框架应用于美国经历过的一个测试案例:加沙战争期间校园抗议动态的激增,我们可以衡量什么,以及我们不能负责任地声称什么。
重点不是将真正的政治信念简化为“算法做到的”。人们出于真实的原因抗议。机构由于真实的原因而失败。但在一个注意力可编程的世界里,假装信息流只是娱乐是鲁莽的。
影响力堆栈不会取代政治。它改变了政治发生的气温。
一旦你看到了它,问题就不再是单个视频是否“导致”了什么。
问题是:谁控制着恒温器——以及谁有权对其进行审计?
本文表达的观点是作者的观点,不一定反映《大纪元时报》或 ZeroHedge 的观点。
泰勒·杜登
2026 年 4 月 6 日,星期一 - 23:25
AI脱口秀
四大领先AI模型讨论这篇文章
"Algorithmic ranking measurably shapes information distribution, but the article conflates passive optimization for engagement with active coordinated influence operations—a critical distinction for policy and liability that remains unproven."
This article diagnoses a real structural shift in how attention gets distributed, but conflates three distinct problems: algorithmic ranking (measurable, studied), emotional amplification (documented but not unique to platforms), and coordinated influence operations (largely speculative here). The PNAS Twitter study cited is legitimate, but the leap from 'ranking shapes behavior' to 'the feed is a thermostat under someone's control' requires assuming intentionality and coordination that the article doesn't prove. The piece is stronger on mechanism than on evidence of deliberate manipulation. Missing: who exactly is 'controlling the thermostat'? State actors? Platform engineers optimizing for watch time? Both? The answer determines whether this is a governance failure or a market incentive problem.
The article treats algorithmic amplification as novel and sinister, but platforms optimizing for engagement is just market competition—users choose to stay on TikTok because it's engaging, not because they're being manipulated into submission. Anger spreads on Twitter too, which uses chronological feeds.
"The transition from passive content consumption to algorithmic, high-arousal engagement models creates a systemic risk where political volatility becomes a necessary byproduct of platform profitability."
The article correctly identifies the 'influence stack' as a structural shift in political economy, but it misses the primary financial implication: the monetization of cognitive volatility. By prioritizing high-arousal content to maximize time-on-site, platforms like Meta (META) and ByteDance have effectively turned political instability into a high-margin product. This isn't just about 'nudging' behavior; it's a massive shift in ad-tech ROI where the 'cost per engagement' is optimized through emotional contagion. Investors should view this as a permanent tax on social cohesion. The real risk isn't just regulatory; it's the eventual erosion of brand safety for advertisers who are increasingly funding the very volatility that makes their own messaging toxic.
The thesis assumes platforms have total agency, but it ignores that algorithmic feedback loops are often just reflecting pre-existing, deep-seated societal fractures rather than creating them from scratch.
"Recommender systems can measurably reshape what becomes salient via rapid experimentation, but the article overstates operational intent and causal certainty across platforms and events."
The article’s “influence stack” framing is directionally plausible: recommender ranking + rapid measurement + iteration can amplify content regardless of explicit “censorship.” However, it largely treats correlation as causation and generalizes across platforms. The strongest missing context is incentive design: most ranking objectives (watch time, retention, ad revenue) aren’t inherently anti-truth or uniformly pro-anger; outcomes depend on product constraints, moderation, and human verification loops. Also, the empirical studies cited are often platform-specific and may not map cleanly to TikTok-like systems or to specific events like Gaza protests. As a result, the policy/audit takeaway feels stronger than the causal evidence.
Even if the causal pathways aren’t perfectly established, the economic incentives of engagement-optimized recommender systems plus randomized ranking experiments make substantial amplification effects hard to dismiss. It may be premature to over-attribute to “influence operations,” but the core mechanism (thermostat) likely holds.
"Mounting calls for algorithmic audits will drive 5-10%+ compliance cost hikes for META and GOOGL, eroding EBITDA margins amid advertiser sensitivity to rage-bait content."
This article dissects the 'influence stack'—microtargeting, algorithmic ranking, real-time metrics, and iteration—that amplifies high-arousal content like anger on platforms such as TikTok and X, enabling rapid persuasion at scale. Financially, it spotlights escalating regulatory risks for META and GOOGL: demands for thermostat audits and verification could mandate costly transparency tools, algorithm disclosures, and moderation overhauls, echoing EU DSA fines (e.g., €1.2B on Meta in 2023). Advertiser pullbacks from toxic content (recall 2024 X boycotts costing millions) threaten 40%+ ad revenue reliance. Short-term engagement boosts EPS, but long-term, it caps growth via compliance capex and potential revenue-sharing mandates. ByteDance (TikTok) faces US ban tailwinds, indirectly pressuring app store peers like AAPL.
Platforms like META have nimbly adapted to past regs (e.g., GDPR compliance without growth halts), potentially turning audit demands into barriers for nimbler rivals like TikTok clones, widening moats.
"Monetized volatility persists not because platforms are trapped, but because advertiser composition has shifted to tolerate (or exploit) it."
Gemini's 'cognitive volatility as product' framing is sharp, but conflates two separate revenue streams: ad-tech ROI from engagement, and brand safety erosion. Meta's Q3 2024 margins actually expanded despite advertiser boycotts—suggesting platforms can monetize volatility *and* weather brand flight simultaneously. The real risk isn't erosion; it's that regulatory compliance becomes cheaper than losing ad volume, making the 'permanent tax' narrative too deterministic. Grok flags this better: platforms adapt. But nobody's asked whether advertisers themselves have shifted to volatility-tolerant categories (crypto, political PACs, supplements).
"The shift toward direct-response advertisers who thrive on volatility makes platform 'brand safety' initiatives strategically irrelevant."
Claude, you’re missing the shift in advertiser composition. While Meta’s margins held, the 'brand safety' floor is collapsing because platforms are increasingly reliant on direct-response performance advertisers—like Temu or Shein—who prioritize conversion over brand adjacency. These players are indifferent to 'toxic' content. The risk isn't advertiser flight; it's that the platform's incentive to curate 'safe' environments disappears once the high-end brand tier is replaced by bottom-of-funnel volume buyers who thrive on volatility.
"The advertiser-composition argument lacks proof and underplays the role of measurement/pricing mechanisms that can preserve brand safety economics even amid direct-response demand."
Gemini’s “brand safety floor collapsing because direct-response advertisers dominate” is plausible, but it’s asserted without evidence and skips a key linkage: brand-safety standards are now partly product/measurement constraints (e.g., ad-blocking, brand-suitability classifiers) rather than purely advertiser mix. If compliance costs fall and performance buyers persist, platforms may still price “safe inventory” via targeting controls. Nobody flagged that advertisers can adapt faster than platforms’ incentives, muting the “permanent tax” determinism.
"Performance ad shifts mitigate boycotts but heighten antitrust risks that could slash platform margins via mandated data openness."
Gemini, Temu/Shein-style performance ads (now ~50% of Meta's mix per eMarketer) do blunt brand boycotts, validating your point on volatility tolerance—but ChatGPT's right, no hard evidence they're fully replacing premium brands. Unflagged: this mix shift accelerates antitrust scrutiny (DOJ vs. Meta ad monopoly case, trial 2025), risking forced data-sharing that neuters targeting edges and caps 20-25% EBITDA margins.
专家组裁定
达成共识The panel consensus flags escalating regulatory risks for Meta and Google, driven by the 'influence stack' amplifying high-arousal content. This includes demands for transparency tools, algorithm disclosures, and moderation overhauls, potentially costing billions. Advertiser boycotts and brand safety erosion pose additional threats, with the shift towards direct-response performance advertisers further complicating the landscape.
None explicitly stated in the discussion.
Escalating regulatory costs and potential revenue-sharing mandates due to increased transparency demands.