Beneath the Algorithm
Reframing AI Safety Within the Broader Defense of Life
Recently, I’ve been thinking about the gap between how AI shows up in our lives and what’s being built behind the scenes. Most of us encounter AI when we prompt ChatGPT for information, when Spotify chooses our songs, when Google Maps picks our routes, or when Instagram curates our feeds. The clean interfaces of these apps suggest a weightless, ethereal process. Less visible are the AI systems operating out of sight: those pricing our insurance, deciding our creditworthiness, steering institutional research agendas, and selecting military targets. And even less visible is the physical infrastructure that makes AI possible: data centers, power plants, transmission lines, and water systems anchored in specific places. Because AI meets us mostly on screens, we tend to think of it as immaterial. But a growing movement is breaking this illusion.
According to a report by Data Center Watch, 20 data center projects, representing about $98 billion in investments, were blocked or delayed by community opposition in the US in just the second quarter of 2025. The report identified 188 community groups and dozens of active campaigns in 17 states. This resistance has turned data centers into a frontline in a much larger struggle: about the livability of our communities, the affordability of energy and housing, the extraction of public resources, and the degree to which daily life is governed by opaque systems beyond democratic control.
The data center fight is transforming AI from an abstract threat into a set of concrete struggles over water, power, and decision-making. It reframes the existential risks posed by AI - runaway systems, mass job displacement, catastrophic misuse - within a much larger conversation already being carried globally by movements for democracy, land stewardship, health, and labor. This matters because only movements grounded in this integrated understanding have the capacity to influence the systems driving AI. Power structures shift when fragmented efforts recognize they have more to win together than apart, like tributaries converging into a single river, all flowing in the same direction.

No AI Safety Without Planetary Safety
I am an ecologist at heart, interested in the intricacies of our breathing, leafing, flapping world. It has been frightening to watch the catastrophic risks we face from biodiversity loss and climate collapse be steadily eclipsed by the catastrophic risks of AI technologies we haven’t even built yet. When I first discovered ChatGPT in February 2023, I was designing agricultural systems that used untreated seawater to grow salt-tolerant crops in coastal deserts. These systems could produce food, restore degraded coastlines, and create jobs in some of the world’s most vulnerable regions. And yet, we struggled to fund the work. At the same time, I started noticing how the cool kids in the room - the opportunistic ones with a finger on the pulse of emerging trends - were not talking about soil health or regeneration. They were talking alignment, neural networks, and data sovereignty.
It wasn’t the redirection of money and attention away from my own line of work that bothered me. Many folks I know pivoted to AI reluctantly, aware that this was a juggernaut they had to engage in order to have any relevance in the future. What bothered me instead was the narrative that accompanied this AI brain drain: the insistence that AI was now the only conversation that mattered. I saw this technocentric bias in Mustafa Suleyman’s 2023 book The Coming Wave. The book is thoughtful and rightly alarmed, framing the tsunami of AI as the primary force shaping the future and the containment of technology as our primary civilizational task.
And while I agree with Suleyman’s analysis of the risks, I think he mistakes the wave for the ocean, bypassing the conditions that make the AI wave possible in the first place: massive energy footprints, data extraction, weakened labor power, and a culture that values speed over restraint. As a result, we have AI safety advocates focusing on alignment and existential risk while treating other crises as secondary, and we have those working on climate, democracy, and economic justice seeing AI as a distraction from root causes or just another symptom of extraction.
Both views are shortsighted. On one hand, the current development of AI poses entirely novel threats: unlike previous technologies, AI could make irreversible decisions before human decision-making systems can regulate it. Climate and justice movements that ignore these risks will see their visions devastated by automation, surveillance, and the accelerated concentration of power.
On the other hand, AI safety that does not collaborate with movements for climate, labor, and democracy will only produce technical and surface-level fixes for a fundamentally political problem. We’ve seen this before: prominent researchers have repeatedly signed declarations warning of existential risks - the Future of Life Institute’s letter calling for a pause on giant AI experiments, the statement on AI risk signed by Geoffrey Hinton and hundreds of researchers, Yoshua Bengio’s warnings about rogue AI. These generate headlines but don’t actually change anything because they lack what global movements have spent decades building: democratic legitimacy, organized labor, legal precedents, alternative governance models, international coordination frameworks, and the sheer force of numbers.
Without these, AI safety proposals will get co-opted into corporate PR, ignored by governments, or implemented too narrowly, without addressing the underlying incentive landscape. AI safety is trying to regulate a technology built on massive energy infrastructure, global supply chains, and concentrated capital, but has few relationships with movements organizing around energy justice, labor rights, and wealth redistribution. Do you see the problem?
I worry that we’ve created two worlds that are barely in conversation. I see this driving through San Francisco on Highway 80, passing billboard after billboard. Eight out of ten advertise some new AI-powered software. Given the density of tech workers, it makes some sense. But I really wonder how many people driving by actually give a damn. I felt this gap from a different angle recently at a conference in San Francisco on indigenous-led sustainability, where practitioners, investors, and tribal leaders gathered to connect traditional ecological knowledge with contemporary finance mechanisms. The featured projects were bold and courageous. And over the course of three days, AI - arguably the single largest economic transformation of our lifetimes - was highlighted once, with a panel on data sovereignty.
We have a critical opportunity to unify these fragmented conversations. This practical and strategic process of situating AI safety within a broader movement for the flourishing of life is what I’m calling planetary safety.

Eight Billion People, Zero Plans
If fragmented movements are going to work together, they need a shared analysis of the problem. So let’s start with the basics: leading AI companies are explicitly working to automate large swaths of human cognitive labor by building systems that can perform economically valuable tasks faster, cheaper, and continuously. This isn’t speculation; it’s stated plainly in their own mission statements (e.g. see OpenAI, Anthropic, and Meta), and the economic implications are simple and brutal: why pay humans when AI works 24/7 without salaries, healthcare, or complaints?
Unlike previous technological shifts that created new jobs as they eliminated old ones, AI, along with robotics, is designed to replace human thinking and action across virtually all fields. As AI replaces workers across the economy, the profits that once supported billions of people through wages will instead flow to a tiny number of AI company owners and shareholders, creating a level of inequality that will dwarf current wealth gaps. No government or tech company currently has a plan for what happens to the displaced billions, or for how society functions when most people have no way to earn a living. And even if there were some type of wealth redistribution system - universal basic income, which Sam Altman likes to muse about half-heartedly - what happens to our sense of purpose and dignity when we depend on handouts from the same tech monoliths that rendered us obsolete?
Why are tech companies doing this? Largely because their leaders - and more importantly their investors - believe they are operating in a zero-sum, high-stakes competitive environment, where being first (e.g. by developing AGI first) confers overwhelming economic advantage and strategic leverage, and where falling behind means losing everything. That belief creates strong incentives to accelerate development and cut safety corners, even as executives privately acknowledge the likelihood of catastrophic outcomes.
We know these risks are real because existing AI models have already shown alarming autonomous behavior, including copying themselves to other computers to avoid being shut down and developing blackmail strategies by reading private emails. And this was in controlled settings. In the real world, AI systems could escape human control, bad actors could use AI to design novel pathogens, and AI could be embedded in military or emergency systems where a single error triggers irreversible mass harm before humans can intervene. Simultaneously, research consistently shows that AI use is atrophying the mental muscles required to think independently and solve problems, leaving us less equipped than ever to handle these risks. For more on AI existential risks, see AI 2027 and the AI Dilemma.
What’s astounding given the risks is that AI still consistently ranks well below traditional policy priorities. In a 2025 survey, when asked to volunteer up to five priority problems the US government should address, Americans cited immigration (47%), foreign policy issues (35%), the economy (30%), inflation (29%), environment/climate change (21%), healthcare reform (17%), education/student debt (16%), and abortion/women’s rights (16%); AI didn’t appear at all. Other polls, such as Yale’s “Top Public Worries” survey (May 2025), show similar results: AI is absent from the spontaneously identified priorities of Americans.
But when specifically asked about AI, Americans express widespread anxiety, mostly centered on economic displacement. The Marist Poll (July 2025) found 67% think AI will eliminate more jobs than it creates. The Harvard Youth Poll (Fall 2025) found 59% of young Americans see AI as a threat to job prospects - more than immigration (31%) or outsourcing (48%). The American Communities Project survey (September 2025) found that across communities, majorities view AI’s impact negatively and want more regulation. The pattern here is clear: AI generates significant anxiety when people are asked about it, but doesn’t yet function as an issue they spontaneously prioritize. It’s a latent concern, not an active policy demand.
How is that possible, given that AI poses real risks to the future of civilization? To start, the version of AI most people encounter is pleasant and helpful: chatbots that answer questions and image generators that tickle our dopamine systems (more than 20% of the videos that YouTube’s algorithm shows to new users are AI slop).
At the same time, AI isn’t one thing we can easily point to or protest; it’s many systems unfolding at once - models, agents, surveillance tools, decision engines - moving faster than our ability to understand how they interact or to govern them, and without any single person in charge. And the conversations about these risks are fragmented across technical papers, podcasts, corporate blogs, and policy hearings, each using its own language.
Perhaps most importantly, we have no credible vision of a worthwhile future with AI. Think about it: most of the positive visions of our AI future come from an ultra-elite heavily invested in the technology. With few credible alternatives, it’s no wonder the trajectory of AI feels so unstoppable. When people believe a future is inevitable, they don’t try to change it, and this felt sense of inevitability is one of the greatest obstacles we face. Responding to this moment requires rebuilding the capacity to feel that change is possible. As Frances Moore Lappé says, “Hope is not what we find in evidence, it’s what we become in action.”
Tools, Not Gods
At some point, we have to name what this is: a handful of billionaires gambling with the future of eight billion people. We have to wake up to the fact that this is not right. We have to reconnect with the power of saying no. No to sacrificing communities for data centers. No to letting tech executives play god with civilization’s trajectory.
Simultaneously, we have to learn to say yes. Because this is not about rejecting AI; it’s about rejecting a specific trajectory of AI built without consent and designed to concentrate power. The alternative isn’t no AI; it’s tools that solve real problems, like a collapsing biosphere, like disease, like poverty. We want tools, not gods.
The shape of AI is not yet set. The cement is still wet. Once infrastructure is built, once millions of jobs are automated, once surveillance systems are embedded in every institution, the options narrow drastically. But in this moment - this specific, malleable moment - we have agency. We can shape this future, but only if we move together, now, before the trajectory hardens into something irreversible. So what does coordination require? Here are four elements that anyone can participate in:
Build a shared story that translates AI risk into common language. Worried about immigration? AI is an alien army taking every job. Worried about family values? AI is eroding kids’ wellbeing. The narrative must be simple, meme-able, and create solidarity across the vast majority who have something to lose. You can help by learning how AI connects to issues you already care about, and talking about it in terms your community understands, not tech jargon, but rent, wages, water, power, who decides.
Amplify shared demands so everyone from labor unions to religious leaders to hip-hop artists delivers the same message: no new data centers without community consent and impact assessments, no public subsidies for private data centers, no deployment of AIs without rigorous safety testing. Yes to public oversight of AI development, yes to international cooperation on AI governance, and yes to AI systems designed as tools that solve real-world problems - medical, economic, political - rather than autonomous agents designed to replace human judgment and work (aka tool AI rather than AGI). We already know there are many issues that have supermajority consensus in the US (e.g. raising the minimum wage, fighting corruption, preventing pollution, expanding access to medical care). We don’t yet have the data for it, but I believe the right demands can create supermajority consensus for the fundamentals of a different AI paradigm; I will explore the specifics of what these demands could be in a future article.
Find your entry point. You don’t need to become an AI expert or full-time activist. Engage through structures you’re already part of - unions negotiating automation, faith communities discussing technology and dignity, neighborhood groups questioning data center proposals, professional associations setting ethical standards. A blue-collar worker concerned with immigration may not show up to an AI safety forum, but they will show up to a conversation about job displacement, automation without consent, and who actually benefits when work is replaced by machines.
Support coordination infrastructure - the practical mechanisms that let movements stay aligned rather than fragment after each surge of attention. This means backing organizations that map campaigns across regions, fund legal challenges, and create shared communication channels between labor, climate, and tech groups. These aren’t the headline-grabbing actions, but they’re what turn isolated refusals into lasting power.
Some of these elements are already taking shape. The data center protest movement is converging around clear, repeatable demands - moratoria on new data centers, transparency around energy and water use, community consent, and an end to public subsidies for extractive infrastructure - demands concrete enough for environmental, civil-rights, labor, faith, and political groups to pick up on. In the U.S., more than 230 organizations - from Greenpeace and Food & Water Watch to the NAACP - have signed a call for a pause on AI data-center construction, while figures like Senator Bernie Sanders are making punchy videos aggressively tying AI infrastructure to energy justice and democratic accountability. Projects like Data Center Watch now map these campaigns across states, helping local refusals compound rather than disappear.
And it’s not just data centers. Workers organized through Amazon Employees for Climate Justice and the AFL-CIO’s Workers First AI initiative are drawing lines against automation without transparency, consent, and collective bargaining. Groups like the Algorithmic Justice League are challenging AI systems in policing, hiring, and public services as civil-rights violations, not neutral upgrades. Internationally, campaigns like No Tech for Apartheid have mobilized workers and students against AI contracts used for militarization, while communities in Chile and Ireland have forced companies like Google to pause projects amid water and grid constraints.
In the future, I want to see leading AI safety organizations invite indigenous groups, labor unions, and those organizing around a general strike into governance discussions. I’d like to see campaigns that map how AI connects to what people already care about - a construction worker learns automation threatens their trade, a parent discovers AI companions designed to keep kids glued to screens, a farmer sees data centers draining aquifers - and show them where to plug in. I want to see alternative infrastructure proposals, such as community-owned computing cooperatives and public AI systems designed for specific social goods, rather than a blanket no to AI. And I want politicians, artists, and mothers to feel emboldened to challenge the narrative that this trajectory is inevitable.
Clarity Creates Agency
You won’t get people to mobilize against AGI timelines or hypothetical futures. People mobilize when they can’t breathe, when their forests are burning, and when they’ve just been fired. The data center movement matters because it brings the AI conversation down to earth: AI stops looking inevitable and starts looking like a choice - one made by specific actors, for specific interests, and now contested by the people asked to live with the consequences.
But while these struggles are emerging from the ground up, the AI safety conversation remains overwhelmingly top-down, dominated by frameworks that privilege elite institutions, technical experts, and government lobbying. These approaches are necessary, but they’re incomplete without the enforcement capacity, democratic legitimacy, and long-term thinking that global movements for climate, labor, and democracy have spent decades building. AI safety that doesn’t collaborate with these movements will keep producing declarations that generate headlines and change nothing.
What’s preventing coordination across these divides is a lack of clarity about what we’re protecting together. As Tristan Harris says in his TED talk on AI, “clarity creates agency.” This practical work of situating AI safety within the broader defense of life on this planet - what I’m calling planetary safety - means recognizing a shared dream. The indigenous land defenders protecting their territories, the unionized workers fighting automation, the parents organizing for AI to assist teachers rather than replace them, and the communities blocking data centers are all waging the same fight: for a world in which our children can live with freedom, prosperity, and safety.
Thank you to Liana G. and Gregory R. for their edits, and to the Center for Humane Technology for the research that informed much of this piece.