This deserves to be a manifesto, or at the very least, the wonderful beginning of one.
Felix offers one of the most succinct, actionable takes on AI safety that I have seen to date. He nails the glaring gap I've been wrestling with: the distance between widespread AI fear and actual public pressure to change course.
This is a thread of thinking that can actually move us from fragmented resistance to coordinated power.
For as he says—if people think it's inevitable, why would they advocate for any form of AI safety, let alone our very own "planetary safety"?
What makes Felix's points so crucial is the bridge he builds between AI safety and the movements that have been doing the work for decades—climate, labor, democracy, indigenous rights. That integration isn't just "nice-to-have," it's fundamental to creating real change. Without those existing networks, democratic legitimacy, and organizing infrastructure, AI safety stays stuck in corporate PR, tech circles, and in the control of the few elite tech lords of our time.
"We want tools, not gods" is an absolutely brilliant mantra.
This deserves eyes. Give it a LIKE & a Restack on Substack, and share it broadly.
Thanks for lifting up the brilliant mind of @Félix and for sharing this here @Liana Sananda Gillooly
Thanks Tyler.
Fantastic analysis, Felix. Thank you for publishing this!
Thanks Brandon!
I appreciate this problematization of AI pushback. The conversation does seem very siloed, but I don't think the local level of resistance is a sufficient foundation for addressing such issues. This sort of "not in my backyard" line of thinking simply pushes the issue out of sight. History teaches us that the infrastructure for this madness will instead be built in someone else's backyard, where access to legal and political power is lacking (predominantly the global south). That being said, it is indeed valuable how such local resistance brings the ecological impossibility of AI expansionism more to the forefront.
Either way, I love AI as a sort of gateway drug for problematizing capitalism.
I think AI is beautifully exposing what capitalism looks like when we cut labor out of the production equation altogether. The meaninglessness of the whole project comes that much more clearly into view.
The West was already headed toward social breakdown. In this sense, AI only accelerates this process by accentuating one of capitalism's core contradictions (ever-intensifying inequality).
So, how can this be a good thing?
This disenfranchisement is not new in capitalism, but it has thus far been gradual. By being gradual, it has been politically metabolizable, so to speak. Imagine disenfranchising your local serfs (or working class) so gradually that they don't even notice the water coming to a boil around them. Disenfranchising them too rapidly leads to political instability. Well, we're at that point in the story where political instability and chaos may be necessary to further the plot.
And this is only from a Marxist class-inequality point of view. More broadly, AI comes full circle to expose a fault in Marx's own thinking: considering human beings first and foremost as producers/consumers. In other words, I agree that AI is threatening the modern coupling of jobs to meaning, as in "my life's meaning is derived from my socially validated participation in this worthwhile economy." Rather than treating this threat as an issue because of the psychological pressure it puts on workers, it may be more fruitful to pause and question the very legitimacy of this coupling. On the other side of it, we must ask once more what the meaning of life is and what the worth of human existence is, beyond the modern paradigm.
By creating work that is so mundane and superfluous that it can hypothetically be automated by machines, by coupling the meaning of life with such drudgery, we set ourselves up for these contradictions to eventually blow up in our faces. Legislating around these issues will only serve to kick the existential can further down the road.
May this AI bubble pop sooner rather than later, and may it pop in such a way that carries humanity to question its outdated economic, political, existential, and metaphysical paradigms, and do better. Amen.
Hi Nymrod,
I totally agree the local level of resistance is not enough. I am just pointing to it as a place where AI safety folks might look for how to "ground" the AI conversation, both literally and metaphorically, in realities more people can understand.
"it may be more fruitful to pause and question the very legitimacy of this coupling." So well said. The article doesn't go deep enough into asking once more about the meaning of life beyond the modern paradigm.
Looking forward to exploring further...
You were writing about the policy level. I think that's a practical level worthy of attention. My response was about different registers, and I'm honestly debating whether it was even a relevant response.
But I think maybe it's worthwhile to touch these levels, say, meta-economic, existential and what have you, as side notes even when discussing immediately pragmatic issues, so that we don't lose track of the bigger picture.
But I'm with you. The local level of resistance pulls ecological externalities into the AI conversation, which is otherwise crowded by code-bros and eggheads. 🥚
Yes to keeping track of the bigger picture.
I think the Three Horizons framework is helpful here. Maybe you are familiar with it.
Thanks @Neural Foundry!