🦋 What Is "Being More Than A Machine"?
On perspectives on perspectives, and why it matters to tech ethics.
This is the first installment of a series of reflections on one of the most important lessons for our time: how to loosen our grip on the abstractions that shape our experience and behavior and engage in the “serious play” of comfortably holding multiple perspectives. Because we have to. And it’s fun!
When you’re done, I’d love to hear what this stirs up for you. Leave a comment if you’re inspired — I reply to everything.
But first — join me and an awesome group of fellow travelers for a month of mind-expanding presentations, carefully-curated reading, and lively group discussion:
The course runs May 13th to June 14th. We’ll record live sessions for anyone with work or time-zone conflicts, and I’ll be very active in the forum. We want this to be available to anybody interested, so email Dawn Hillman if you need a discount!
“Contemporary nihilism no longer brandishes the word nothingness; today nihilism is camouflaged as nothing-but-ness.”
– Viktor Frankl, quoted in E.F. Schumacher’s A Guide for the Perplexed (1977)
“The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time, and still retain the ability to function.”
– F. Scott Fitzgerald, “The Crack-Up” in Esquire (1936)
Every psychological and social issue I can think of hinges on a clash between models.
“When does life begin?” “What is a woman?” “What is intelligence?”
Trying to get everyone to settle on an answer to these questions seems misguided. Open-ended learning is a feature, not a bug, of cumulative culture. We need working definitions, even if they are unconscious ones. But the history of knowledge (at the risk of ruling out indefinitely many other valid angles on a matter of unthinkable complexity) is a tale of ruptures, of re-thinking what we took for granted. Discovery is inherently disruptive. And in a century of exponential exploration, it makes sense to hold our categories lightly.

What exactly is “technology” when 20% of the human genome is subject to legally contentious patent claims? European colonists looked at the forests of Turtle Island and declared them “wilderness” because Indigenous land stewardship didn’t look like agriculture as they understood it. But it goes both ways: listeners reportedly panicked because they believed Orson Welles’ 1938 radio broadcast of The War of the Worlds was a real alien invasion, and 2025 is incandescent with debates over whether chatbots are sentient beings that deserve autonomy or stochastic parrots weaponized by Big Tech’s engagement farming into cognitive-security threats.
All of this hinges on how you define “life” and how you define “machine”. The stakes are very high, since treating something as “sentient” when it is not, or as “an object” when it’s sentient, tends toward atrocities like Tamagotchi or Human Resources.
But what if the bigger problem is the collateral damage caused by acting like there is an absolutely right and wrong frame in the first place? What if we could handle wicked problems better if we made no such assumptions? Is anything just anything?
Ultimately, I’m with biologist Michael Levin: the life/machine dichotomy has outlived its usefulness. Not just because some people are making computers out of brain cells and others want to argue that the Internet itself qualifies as an organism, but because, as Levin writes at Noema Magazine, “nothing is our formal model of it”:
My proposed solution is to lean into the realization that nothing is anything and drop the literalism that mistakes our maps for the totality of territory. Let’s stop presuming our formal models (and their limitations) are the entirety of the thing we are trying to understand and pretending that one universal objective metaphor is a genuine representation of “living things” while all others are false. In other words, let’s reject the one thing organicists and mechanists agree on — the assumption that there is a single accurate and realistic picture of systems if we could only discover which one is right.
This opens up a huge adjacent possible we otherwise foreclose in our attempts to keep things sensible. New ways of being (like, for instance, being connected to everyone in the world through always-on wireless devices) need new ways of seeing, if you will forgive a grammar that suggests these are, in fact, two different things. How we think and how we act can be distinguished but cannot be separated.
Indulge me in a series of short thought experiments:
What changes when we regard our entire planet as a super-organism in which the made and born are inseparable elements of its metabolism and intelligence? Thinking like this has produced some of the most promising research to guide astrobiology in the search for “life as we don’t know it.” A planetary stance would also totally reframe the discourse on what constitutes “wise innovation.”
Move the boundaries: What if War of the Worlds was, in some sense, an “alien invasion”? After all, ideas demonstrate most if not all of the information-theoretical properties of life, and there’s great practical value in studying memetics through the lens of immunology and epidemiology.
Likewise: What new affordances does it provide to entertain the notion that you “really are” a tool used by the business that employs you? One of my favorite positions in the AI discourse comes from physicist Cosma Shalizi, who proposes that the Technological Singularity is in our past and, with Henry Farrell, argues that LLMs are only the newest members of a family of Lovecraftian “vast, inhuman distributed systems of information-processing” that also includes markets and bureaucracies.
Notice that the last two items in this list definitely induce paranoia. Do you “use” your cells? Well…yes and no. When I behave like I am a machine, my body goes on strike.
Science fiction author William Gibson, always at least one standard deviation ahead on the curve of an unevenly distributed future, wrote in his 2010 NY Times op-ed:
Google is made of us, a sort of coral reef of human minds and their products... We never imagined that artificial intelligence would be like this. We imagined discrete entities. Genies.
And most of us still do. Bracketing the question of whether or not it’s wrong, when and for whom is it useful?
If these questions itch, then good: my intent here is to provoke more limber thinking. Saying something is alive or dead, an other or part of a self, assumes we have a grasp of what these categories mean. But biology as a modern scientific field is barely two hundred years old, and the theory of evolution by natural selection is younger still. That is hardly enough time for culture to accept claims like “People don’t just have ideas; ideas have people.”
Why not both? Or more than both? Why do we struggle to recognize the hold that our abstractions have on us?
When I was an undergrad, my Intro to Evolutionary Biology professor made us get up in front of the class and defend one of the seven-or-so major species concepts. But I refused, because every one of them fails in edge cases. This was probably the point of the exercise: understanding that the way you need to characterize something in order to write about it for a scientific publication never fits the fuzziness of Real Life. Animals will interbreed in the wild but not in laboratories, or vice versa. Multiple species defined by morphological traits turn out to be genetically identical and adapted epigenetically to different environments. Species that don’t sexually reproduce are sometimes characterized by niche adaptation, but again, that erodes the essentialist idea of a “type” and replaces it with the idea that individuality emerges as a kind of moiré of information-processing across multiple scales and substrates. Or as John Wilkins puts it, species concepts “are in fact not concepts of what species are, that is, what makes them species, but instead how we identify species…there are n+1 definitions of ‘species’ in a room of n biologists.”
But to say they exist only in the minds of biologists is wrong as well, because — enter second-order cyberneticists like Gregory Bateson and Francisco Varela — rigorous investigation shows us that the mind is not simply located in physical space, nor is physical space simply located in the mind. The deep implications of evolutionary theory and cognitive science are largely ignored even by practitioners in those fields, because to accept them undermines the conceits of objective reality as an “out there” making impressions on our brains like light on photo paper, the “pure subjectivity” upon which Cartesian logic depends, and Kant’s a priori knowledge and the Thing-in-itself. Updating from models as representations of an environment to inherently participatory, open, indefinitely complex processes co-enacted with environments troubles the familiar boundaries between “what is” and “how we know it.” The selves of this mode of worlding become “other” and the world becomes a field of nested and continuous selves. It’s formally “weird” (as in, twisted) because things that seemed to be opposites — in and out, now and then, I and it — are all recast as the same side of a convolution in higher-dimensional topology:
If this sounds bewildering, it should: math calls these shapes “non-orientable,” and the closed ones, like the Klein bottle and the real projective plane, cannot be embedded in three-dimensional space without intersecting themselves. By trying to reach a clear distinction between life and non-life or mind and matter, you arrive at the horizon where things flip and find yourself no closer to a resolution. “Being more than a machine” means seeing the projective plane of our categorical abstractions for what it is. Don’t be this crab:
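For the mathematically curious, a quick sketch (my aside, not Levin’s or Doyle’s) of the simplest non-orientable surface makes the “flip” concrete. The Möbius strip does fit in three-dimensional space; it is the closed non-orientable surfaces, like the projective plane, that can only self-intersect there. The standard parametrization:

```latex
% Standard parametrization of the Möbius strip in R^3,
% for u in [0, 2*pi), v in [-1, 1]:
\begin{aligned}
x(u,v) &= \left(1 + \tfrac{v}{2}\cos\tfrac{u}{2}\right)\cos u \\
y(u,v) &= \left(1 + \tfrac{v}{2}\cos\tfrac{u}{2}\right)\sin u \\
z(u,v) &= \tfrac{v}{2}\sin\tfrac{u}{2}
\end{aligned}
% Walking the loop u -> u + 2*pi sends the half-angle u/2 to
% u/2 + pi, flipping the signs of cos(u/2) and sin(u/2): you return
% to your starting point with v -> -v, on the "other" side that
% turns out to be the same side. One surface, one edge, no inside.
```

Glue a disk along that single edge and you get the real projective plane, the closed-up version that cannot be embedded in 3-space without passing through itself (Boy’s surface is one such self-intersecting picture of it).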
My friend and mentor Richard Doyle wrote in Darwin’s Pharmacy: Sex, Plants, and the Evolution of the Noosphere about these “palpable encounters with Darwin’s ‘tangled bank’ of evolution” as an “ecodelic” experience, and how the subject that experiences them is “a transitional, transhuman identity precipitated by our intensified and amplified ecologies of information in the context of an ecosystem in distress.”
In other words, braided postwar advancements in ecology, psychology, and computing didn’t just give us the famous “Whole Earth” photo but a new perspective on entities of all kinds in what Ted Nelson, who coined the term “hypertext” (more on him later), called our “Intertwingularity.” When computer scientist Danny Hillis wrote in MIT’s Journal of Design and Science (2016) that “The final blow to the Enlightenment will come when we build into our machines the power to learn, adapt, create and evolve,” what he implied was not just a weirding of the built environment but a transformation of Enlightenment-era subjectivity.
Doyle makes it clear what is required of us in this “Age of Entanglement”:
Can the identity practices of the last century—citizen, soldier, human—survive the infoquake…? In short, no. If they are to do so, these identity practices must avoid the algorithmic bottleneck presented by a regime of verification—it is a computationally intense labor, for example, to sort out an [ecodelic experience] into categories of ‘true’ and ‘false,’ categories that may be non sequiturs in a world of information constantly subject to change… In place of verification, we need tools that help us “live through it,” even as it transforms us and “it” remains an unknown.
That’s where I’m taking us next—straight into the vortex—where I’ll continue with a riff on ontological shock, cognitive dissonance, maybe logic, and the evolutionary fitness benefits of taking a perspective on your own perspective.
Read part two here, and make sure you don’t miss part three:
🕳️🐇 More Rabbit Holes:
If you like ecodelic tech ethics, here’s my entry to last year’s Cosmos Essay Contest, which I took as an invitation to challenge its premises by rewriting their source code:
✨Refactoring “Autonomy” & “Freedom” for The Age of Language Models
“I was on a panel earlier this year with Geoffrey Hinton. He said, ‘We can’t let AIs manipulate us.’ I told him, ‘That ship sailed ten years ago! The question is, in what way do we want them to manipulate us?’”
And for an essay on the challenge of trying to resolve the apparent paradoxes of post-human ecodelic identity as a non-orientable hyperobject into a pop song, dig in here:
Reach out any time if you would like me in the mix for teamwork, speaking, or consulting.
Here’s my resumé and LinkedIn profile.