The Practice of Being Human
A Review of Shannon Vallor's "The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking"
It’s hard to believe that it was less than a year and a half ago that I first encountered ChatGPT. Until then, my awareness of artificial intelligence was conceptual, gleaned from magazine articles and scientific journals. I remember very clearly the evening, in front of a screen, when I first experienced that spooky feeling as ChatGPT delivered, word by word, its entirely cogent response to my query. I imagine my experience was similar to that of audiences listening to Edison’s phonograph in 1877 and looking inside the phonograph’s horn to find the source of the sound.
For me, this was a confusing moment in which my core model of reality was challenged. What had I just experienced? With what tone should I address the ghostly being on my screen? Since then, I’ve been fascinated by the implications of AI’s growing presence in our lives. Because user-friendly AI interfaces like ChatGPT or MidJourney are just the tip of the iceberg: in 2024, AI is used to automate judicial, medical, military, financial, and political decision-making. This is happening with little to no democratic oversight, using opaque algorithms whose logic of reasoning cannot be traced and whose energy footprint is causing big tech companies like Google to significantly scale back their carbon neutrality pledges.
What are the consequences of the algorithmic automatization of our lives for our sense of self? What happens to our capacity to imagine different futures, to feel empathy for others, and to make decisions together, when we start outsourcing these tasks to machines designed according to the values of a diseased civilization?
It’s these questions that led me to Shannon Vallor’s book The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Her central premise is simple: the existential threat of AI is not the emergence of a superhuman intelligence that destroys civilization. Rather, it’s the possibility that we lose touch with our own humanity, with our capacity to love and grieve, dream and create: “An ‘existential’ risk to humanity need not involve extinction. Some existential risks involve human survival in an inhumane form.”
Shannon is interested in what the Spanish philosopher José Ortega y Gasset called autofabrication, that quintessentially human task of creating ourselves. Once our basic needs are met, we have to make decisions about how we live our lives and what types of beings we want to be. When we delegate this process to the algorithms designed by an unrepresentative few, we surrender our agency.
To explain the dangers and possibilities of AI, Shannon uses the metaphor of a mirror. An AI system receives data (the light), reads it with an algorithm (the surface), and produces a pattern or image (the reflection). But mirrors are not faithful reflections of the full complexity of the world. Look into a mirror (seriously, try it), and you’ll find no sound, or smell, or texture. Nor will you see your hopes and dreams. All there is, is an image.
AI mirrors trained on the values of neoliberal culture are not faithful mirrors. They amplify and distort already existing inequalities. As an example, researchers discovered in 2019 that a risk prediction algorithm used nationwide in US hospitals was diverting medical care away from high-risk Black patients who were in fact much sicker than white patients, thereby replicating a long history of racial bias in American health care. AI systems can even cause “runaway feedback loops” in which human biases, mirrored by AI, drive action in the world that reinforces these biases, as seen with Instagram beauty filters designed with Eurocentric aesthetic norms of white skin, thin noses, and wide eyes.
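The mechanics of such a loop are simple enough to sketch. The toy simulation below is my own illustration, not an example from the book: two neighborhoods with identical true incident rates, and a system that always directs its scrutiny toward whichever one already has more recorded incidents.

```python
# A toy sketch (my illustration, not Vallor's) of a runaway feedback loop:
# a system that sends all of its scrutiny wherever past data show the most
# incidents keeps "confirming" an initial, arbitrary imbalance.
true_rate = [0.5, 0.5]   # two neighborhoods with identical true incident rates
observed = [6, 5]        # a tiny, arbitrary imbalance in the historical records

for day in range(30):
    watched = 0 if observed[0] >= observed[1] else 1  # the model mirrors past data
    observed[watched] += 10 * true_rate[watched]      # only the watched area generates new records

print(observed)  # e.g. [156.0, 5]: the initial imbalance has hardened into "evidence"
```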
It’s because AI is a reflection of our society’s priorities that we need to be wary of elite narratives of AGI wanting to destroy humanity. These stories, professed by the Musks and Altmans of the world even as they fund further research into AGI, don’t just siphon critical resources away from the real, ongoing calamities of climate disasters and biodiversity collapse. Rooted in the values of conquest, domination, and empire, instead of interdependence and empathy, these “mainstream AGI fantasies manage to exclude almost everything we know about the behavior of intelligent creatures.” They limit our capacity to exercise our moral imaginations, in which we engage the possibilities of a different future guided by a sense of justice and care.
An AI is simply “a mathematical tool of extracting statistical patterns from past human-generated data and projecting these patterns forward into optimized predictions, selections, classifications, and compositions.” If we don’t decide for ourselves the values that guide AI, we simply continue the destructive dynamics of the past. Morality, Shannon says, is a practice, a muscle, that we are losing in a process of “moral deskilling.”
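To make that definition concrete, here is a minimal sketch of my own (not an example from the book), using scikit-learn on invented hiring data: a model fit on a biased record of past decisions dutifully projects that bias forward as an “optimized prediction.”

```python
# A minimal sketch (my own, with invented data) of "extracting statistical patterns
# from past human-generated data and projecting these patterns forward."
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "past human decisions": a history in which candidates from group 1
# were hired less often than equally qualified candidates from group 0.
qualification = rng.normal(size=1000)
group = rng.integers(0, 2, size=1000)
hired = (qualification - 0.8 * group + rng.normal(scale=0.5, size=1000)) > 0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# The "optimized prediction" simply replays the historical bias:
equally_qualified = np.array([[0.5, 0.0], [0.5, 1.0]])
print(model.predict_proba(equally_qualified)[:, 1])  # group 1 scores noticeably lower
```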
To build a world together, whether that is a family, a community, or a planet, we need to be able to work through our ideas, discomforts, hesitations, and fears together. In the words of the eco-feminist sci-fi writer Ursula K. Le Guin, the struggle for human freedom is a “war without end,” an ongoing, messy process of autofabrication that AI cannot do for us, lest we deny our own selves.
Thanks to
for recommending the book!
Thank you for this essay, Felix! Did you see the recent NYTimes article about degenerative AI? It highlights the loss of accuracy in data, images, and words as AI develops its own feedback loop for learning: https://www.nytimes.com/interactive/2024/08/26/upshot/ai-synthetic-data.html?smid=nytcore-ios-share&referringSource=articleShare&sgrp=c-cb
"As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.
In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse."
Your essay is thoughtful and thought-provoking, thank you!
I like the image of a mirror with “no smell, no sound, no texture”: an image, as a screen, that deserves more comment.
Beyond the image is the world of the shadow that you rightly describe at the end of your piece: “we need to be able to work through our ideas, discomforts, hesitations, and fears together.” From two dimensions, the shadow brings a third one, the human one.