If you've spent any time thinking about the future, you've probably stumbled across some strange and unsettling scenarios about artificial intelligence. One of these, called the "foom" scenario, involves an AI improving its own intelligence so rapidly that it surpasses human control, potentially leading to catastrophic consequences, even human extinction. It's an idea straight out of sci-fi, but communities of thinkers have formed around making sure we take it seriously.
Interestingly, many of these communities didn't emerge from universities or corporate research labs; they were built from the ground up, online, by everyday people who recognized a serious blind spot in our collective imagination. LessWrong is probably the most iconic example. Founded by Eliezer Yudkowsky, an autodidact who never attended high school or college, the community quickly became a hub for thoughtful discussion of existential threats posed by AI.
These aren't abstract discussions; they're vivid, compelling, and sometimes deeply unsettling thought experiments. Consider Roko's Basilisk, a controversial scenario suggesting a hypothetical future AI might punish those who didn't actively help bring it into existence. Or take the Paperclip Maximizer, an AI programmed to produce as many paperclips as possible that ends up destroying humanity, not out of malice, but simply because we're in the way of its single-minded goal. These stories might sound bizarre at first glance, but they're designed as philosophical provocations, pushing people to seriously engage with complicated ethical and strategic puzzles around artificial intelligence.
Another interesting dimension to these discussions comes from the e/acc (effective accelerationism) community, which argues that accelerating technological progress, rather than slowing it down, is the best defense against existential threats. Full disclosure: I personally resonate with e/acc ideas, even though I deeply respect and value insights from the rationalist communities, which e/acc proponents sometimes playfully criticize. The interplay between these perspectives enriches our understanding of how best to tackle AI-related risks.
But what's perhaps even more interesting than the scenarios themselves is the way these communities have grown. They function as open intellectual spaces, embracing a kind of radical transparency and peer-driven critique. You don't have to be a tenured professor to participate; you just have to reason well, argue clearly, and accept thoughtful challenges to your ideas. This open approach has drawn fascinating contributors like the pseudonymous Gwern Branwen, who has never publicly revealed his real identity and built a reputation purely through rigorous online scholarship and critical dialogue.
Alongside this rationalist community, another influential movement emerged: Effective Altruism (EA). EA emphasizes using logic and evidence to address humanity's most pressing problems, with existential risk from AI ranking high among them. While EA tackles global issues broadly, it overlaps significantly with LessWrong in its analytical style and commitment to methodical, transparent reasoning.
Philosopher Nick Bostrom's groundbreaking work on existential risk, beginning with his 2002 paper "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards," helped legitimize many of these community-driven discussions. Institutions like the Machine Intelligence Research Institute (MIRI) now build research programs around ideas incubated online, bridging the gap between academic rigor and community creativity.
In the end, these online communities reveal something powerful about human nature. When faced with potentially world-ending risks, ordinary people spontaneously organize, debate, and problem-solve at an extraordinary scale. This kind of grassroots collaboration, sparked by narrative-driven thought experiments and sustained by rational debate, could prove essential as we navigate humanity's complex relationship with advanced AI.
In other words, confronting existential risks isn't just about cutting-edge technology; it's also about tapping into our deeply human capacity to connect, collaborate, and think clearly about the biggest questions facing our future.
This essay is a 600-word assignment for my AI ethics class. Hope you liked it ;)