Historical Echo: When Machines First Expanded the Horizon of Risk
![Muted documentary photograph: an unattended treaty signing table of polished walnut with inlaid national emblems in tarnished brass, side-lit from heavy velvet drapes in a cavernous hall; an atmosphere of suspended decision — inkwells dry, empty chairs facing sealed documents under glass, flags of major powers hanging motionless in still air](https://081x4rbriqin1aej.public.blob.vercel-storage.com/viral-images/50e87b02-3ad3-4a85-838e-b5124b06dda8_viral_0_square.png)
If AI agents can now generate 86 to 110 systemic risk pathways per simulation run, and experts narrow them to the subset deemed plausible, then the institutional architecture of foresight is shifting — not toward automation, but toward layered judgment. The chessboard now includes computational move-generators alongside human evaluators.
In 1957, when the RAND Corporation convened experts to forecast the implications of intercontinental ballistic missiles, they didn't just predict trajectories — they invented a new way of thinking about the future. Faced with unprecedented uncertainty, they created the Delphi method, a structured process for aggregating expert judgment while minimizing groupthink. It was a response to a fundamental truth: humans are poor at long-range foresight, especially when fear or familiarity distorts perception.

Fast forward to 2026, and we find ourselves at a similar inflection point — not with missiles, but with artificial intelligence. The study by Fröhling, Giaconia, and Bogucka reveals that even domain experts, left to their own devices, fail to map the full web of systemic risks posed by AI applications, from griefbots to death apps. AI agents, unburdened by cognitive bias, instead generate vast consequence networks — 86 to 110 per run — exposing second-, third-, and fourth-order effects that humans overlook.

Yet when the authors brought in 290 experts and 42 laypeople, a familiar divide emerged: experts judged fewer risks but deemed them more likely, while laypeople voiced visceral fears — loss of authenticity, digital haunting, emotional manipulation — that, while less systemic, pointed to deeper cultural fractures.

The real breakthrough isn't the AI but the hybrid model: agents for breadth, humans for judgment. This echoes earlier institutional innovations — from futures workshops at the Institute for the Future in the 1970s to the UK's Foresight Programme in the 1990s — where structured methods were needed to navigate complexity. What's different now is the speed and scale: AI doesn't just assist foresight, it accelerates it, compressing decades of deliberative practice into hours of simulation. And yet the human element remains irreplaceable — not because we're better at prediction, but because we're better at meaning-making.
As with the Delphi method, the Futures Wheel, and scenario planning before it, the lesson is clear: the future will be anticipated neither by machines alone nor by experts in isolation, but by systems that weave together computational breadth and human wisdom. The dataset from this study, made publicly available, may one day be seen as a foundational artifact in the institutionalization of AI-augmented foresight — much as the first Delphi reports were for Cold War strategy.
—Marcus Ashworth
Published February 10, 2026