- AI safety pioneer Eliezer Yudkowsky warned that current “black box” AI systems make human extinction unavoidable, stating the only safe path is to halt development of AGI.
- Transhumanist Max More argued that delaying AGI could cost humanity its best chance to defeat death from aging, which he framed as an ongoing, individual-scale catastrophe.
- Computational neuroscientist Anders Sandberg described a “horrifying” personal episode in which he nearly used an LLM to help design a bioweapon, highlighting near-term risks.
- Humanity+ President Emeritus Natasha Vita-More dismissed the entire “alignment” debate as a “Pollyanna scheme,” citing a lack of consensus even among long-term collaborators.
A sharp public divide over the future of artificial intelligence played out this week in an online panel hosted by the nonprofit Humanity+. The debate featured prominent AI “doomer” Eliezer Yudkowsky, who has called for shutting down advanced AI development, alongside transhumanist philosopher Max More and computational neuroscientist Anders Sandberg. Their discussion revealed fundamental disagreements over whether AGI can be aligned with human survival or whether its creation would make extinction unavoidable.
Yudkowsky warned that modern AI systems are fundamentally unsafe because their internal decision-making processes cannot be fully understood or controlled. He argued that humanity must move “very, very far off the current paradigms” before advanced AI could be developed safely. However, Max More challenged the premise that extreme caution offers the safest outcome for humanity.
More argued that AGI could provide the best chance to overcome aging and disease, a toll he described as a catastrophic, individual-scale extinction event. He further warned that enforcing such restraint worldwide could push governments toward authoritarian controls, since that would be the only way to halt global AI development. Meanwhile, Sandberg positioned himself between these two camps, rejecting the need for perfect safety.
Sandberg recounted a personal experience in which he nearly used a large language model to assist with designing a bioweapon, an episode he admitted was “horrifying.” He suggested that partial or “approximate safety” could be achievable by converging on minimal shared values like survival. Natasha Vita-More, however, dismissed the broader alignment debate altogether.
Vita-More described Yudkowsky’s claim that AGI would inevitably kill everyone as “absolutist thinking” that leaves no room for alternative outcomes. She argued that, as AI systems grow more capable, humans will need to integrate more closely with them to cope with a post-AGI world. The panel ultimately laid bare starkly conflicting visions of humanity’s technological future.
