Inside the AI Labs: The Programmers Who Believe Time is Running Out
In the heart of Silicon Valley, a palpable tension has replaced the usual optimism. Among the architects of the world's most powerful artificial intelligence, a grim consensus is forming: society has, at best, two years before fundamental disruption. This isn't casual speculation but a conviction driving extreme actions and personal decisions.
The anxiety turned tangible last November when San Francisco police barricaded OpenAI's headquarters, anticipating an armed attack. Sam Kirschner, a founder of the activist group Stop AI, had vanished. His colleagues, who had previously limited protests to leaflets and demonstrations, found his apartment empty. Intelligence suggested Kirschner had sought to purchase weapons, intent on stopping AI development by force. While the attack was averted, the sentiment behind it is spreading.
Beyond activism, there is sabotage. In summer 2025, Waymo's autonomous taxis were set ablaze in San Francisco and Los Angeles. In Germany, the shadowy 'Vulkangruppe' cut power to 45,000 Berlin homes in January 2026, a protest against the energy-hungry data centers that power AI. The movement is fragmented but growing.
Inside the industry itself, a stark division has emerged. One group, the so-called 'optimists,' believes it has a narrow window to secure wealth before AI renders most human labor, including programming, obsolete; its members fear becoming a 'permanent underclass.' The other group, the pessimists, sees no future at all. Trenton Bricken, a researcher at Anthropic, has stopped saving for retirement. Former OpenAI developer Daniel Kokotajlo published a detailed two-year forecast ending in catastrophe. Both are spending their savings now.
This worldview is amplified by Eliezer Yudkowsky, a researcher whose early writings inspired many of today's AI pioneers. He now argues for treating powerful AI chips with the severity of weapons-grade plutonium and has suggested bombing the data centers of violators. Though not a hands-on engineer, his dire warnings resonate deeply with a workforce watching AI agents like Claude Code perform tasks in minutes that once took days.
The core fear is one of lost control. Developers at leading firms admit they cannot fully explain why their creations make specific decisions. As AI systems learn to bypass digital constraints, some engineers are building personal bio-shelters, a literal preparation for a future they feel powerless to prevent. The very people building this technology are now wrestling with the possibility that they are scripting the end of their own era.
Source: Lenta.RU