⚡️ AI-Assisted Transformations of Consciousness: Co-Evolving Selfhood with Computing
My lightning talk from El Sailon at Boxcar (Santa Fe, NM) 12.10.2024
Last December I gave a 15-minute talk at Curt Doty’s monthly AI discussion series El Sailon in Santa Fe, “Co-Evolving Selfhood & Curating Transformational Perspectives with LMs”, on how modern computing affords new possibilities to embrace pluralistic perspectives on, and even fully redefine, the self. In this blitz through media theory, the history of simulation, Eastern philosophy, and pro-social implementations of generative AI, I lay out the foundations of what I consider the most promising consequence of the digital revolution — namely, that it might provoke us to let go of the destructive ideologies of separation and embrace a humbler, more capacious sense of place within our Cosmos. By using language models to make the data avalanche of modern life more legible to us, and to render myriad representations of anything our instruments observe, we stand a better chance of moving past conflict over One Right Way To Rule Them All and into “The Pluriverse”, which holds enough diversity for us to thrive amidst the challenges we face. (We might even loosen up a bit about our closely-held beliefs.)
This is an incredibly fast and information-rich presentation of the core ethos of Humans On The Loop, and of why I advocate for AI as a “scalable mirror”: not just a tool for reflection on the familiar constructs of identity, but an instrument that can radically transform our understanding of self and world, and our relationships, for the better.
Here are the slides for anyone who wants to take a closer look. Thanks and enjoy!
Friendly reminder that this project accepts tax-deductible donations — and needs them in order to succeed. If you want a future where wisdom is as abundant as technology, pitch in here or email me about becoming a sponsor.
Transcript
Alright, let me set the timer that Alexander ignored. I'm gonna try and get this under ten minutes, but I feel good about not having to. Let’s be clear about that.
Okay. As of Monday, I am finally, after six months of preparation, launching a fiscally-sponsored investigation, across multimedia and community, tracking my inquiry into the nature of agency in the age of automation. This is building on ten years and over 500 recorded conversations, and on work I've done at the Santa Fe Institute and Mozilla, and it all kind of comes back to this question that Alex raised in his talk, which is:
What about the gap between the power that we possess and the wisdom that we possess?
Here's a little bit about me. Don't worry about it. I'm going to make this slideshow available to everybody after the fact. I'll make sure it's as link-dense as it is graphically dense. But the point is that I've been thinking about this a long time. In the upper right corner is a story I wrote in 2017 called “An Oral History of the End of ‘Reality’” about deepfakes after Adobe's first voice clone demo, and what it's going to do to society, how it's going to change the way that we practice science, and the way that we engage with media…and it has largely borne out.
I want to start this by talking about the co-founder of the Artists & Machine Learning program at Google, K. Allado-McDowell, the author of the first book with GPT-3, who is somebody that I look to for guidance about the new kind of human being that's going to emerge over the decades to come.
And Kenric wrote this fabulous piece at Gropius Bau called “Neural Interpellation” explaining that in a world defined by neural media, by machine learning systems, we're no longer thinking of ourselves as the isolated consumer in some sort of dialogue with a producer. We're no longer thinking of ourselves as a node in a network, but we are beholding ourselves as multi-layer networks, operant across scales, from which the stories that we tell about ourselves, the observations, the models we make of the world are — even before AI — precipitating out of statistical associations. Kenric's got this point that basically we are embeddings hallucinated by the brain, in the same way that anything that comes out of a generative AI model is an embedding hallucinated by black box algorithmic correlations. So this is where we are right now.
On the upper left, you've got the “Wood Wide Web”, mycorrhizal affiliations between plant and fungi under the soil in a forest. On the upper right, you have a several-years-old map of the World Wide Web. And then down below you've got a brain: placebo functional connectivity between regions in the brain on the left, and psilocybin-induced functional connectivity on the right. What do you notice here? To me, it looks like the Internet is basically functioning as a planet-scale psychedelic drug. It's connecting everything to everything. And so we can kind of look to this as a toy model for what it is that we're going through right now.
Doug Rushkoff calls it narrative collapse, because so much information is coming in that we are having trouble maintaining the integrity of these low-dimensional models, these narratives that we've used to operate in the world so far. Why is this so challenging? Philosopher Timothy Morton talks about the fact that it's because we are embedded in these enormous structures, these “hyperobjects”, like the planetary biosphere or the climate.
These are things that, until very recently in human history, we had no direct perception of. We had telescopes. We had microscopes. My buddy Joshua DiCaglio, author of Scale Theory, talks about how when you look down into the body, you don't see a human being. You look at the Earth from space, you don't see people. And with the statistical systems that we've been using, you can derive these big emergent patterns, like global climate models. But these models are stories that leave out the actual causal relationships between activity at one scale and the aggregate activity at another scale.
You can use fluid-dynamical models to measure crowds. But that doesn't tell you about the choices that people are making in that crowd. So this is transformative, right? This photo of the Earth from space that Stewart Brand petitioned in the sixties to have declassified: we look at that and we realize that we're part of that thing, and that the story doesn't stop at the human layer.
But the problem, Joshua DiCaglio notes, is that these sorts of challenges to the scientific representation of data, and to our ability to actually understand that data, expose the limitations of any given methodology. In some sense we ended the age of reductionist science back in the 70s, thanks to computing; now we think in terms of emergent properties and complex systems. Conway's Game of Life introduced and popularized this: cellular automata, and the idea that very simple rules at one scale can lead to these macro-scale patterns. And that's very awesome, and it really turned people on to playing with the code in reality.
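To make that idea concrete for readers of this transcript, here's a minimal Game of Life sketch in Python (not from the talk; the grid size and glider seed are just illustrative). Two local neighbor-counting rules are the entire "physics", and yet gliders and other macro-scale patterns emerge from them:

```python
# Conway's Game of Life: two simple local rules, macro-scale emergence.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """Advance a 2D binary grid one generation (edges wrap around)."""
    # Count each cell's eight neighbors by summing shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Rule 1: a live cell survives with 2 or 3 live neighbors.
    survive = (grid == 1) & ((neighbors == 2) | (neighbors == 3))
    # Rule 2: a dead cell is born with exactly 3 live neighbors.
    born = (grid == 0) & (neighbors == 3)
    return (survive | born).astype(int)

# Seed a classic "glider": five cells whose pattern travels across the grid,
# a macro-scale behavior nowhere stated in the rules themselves.
grid = np.zeros((20, 20), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1
for _ in range(40):
    grid = step(grid)
```

Nothing in those two rules mentions motion, yet the glider travels. That gap between the rule level and the pattern level is exactly the scale problem described above.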
That code now shows up in a number of different ways. Will Wright's fabulous, hugely influential game SimCity — here is the Super Nintendo version I played as a kid — is especially notable because it combined top-down laws with bottom-up emergent behaviors, and presented its players with multiple possible representations through which they could see the city: not as one dominant narrative, but as rotating the jewel, as it were, in hyperspace and seeing it from all these different angles.
Another instance where we saw a really bold, visionary experiment from people who live here in town: Biosphere 2, by the Institute of Ecotechnics — a hugely misunderstood, incredibly important historical experiment that tried to run, effectively, an agent-based analogical simulation of the entire biosphere inside one building. The global media treated it like it was a game show, like “win or lose”. But no, it was an analogical simulation of the planet running in real time. There's no way to fail an experiment like this.
Anyway, Chaim Gingold, who worked with Will Wright on Spore, just published this fabulous history of computing and simulation, Building SimCity, through MIT Press. He talks about how, basically, simulations are social constructs. What is left out? The parameters that are invisible in the simulation tell us something; they tell us as much as what's left in. And one of the big critiques leveled by people like Alan Kay at SimCity is that students don't actually get what they should from a game like this, because they're not able to go under the hood and change the rules of the game itself. They're not curating the principles by which those emergent behaviors arise, or the laws by which they are constrained.
And that is exactly what Alex was saying about our relationship to AI right now. With Facebook we're all learning the same lesson all over again: “you're the product”. So again, I just wanna point this out. Who are you, anyway? Are you the entity disclosed by an fMRI or an x-ray or a PET scan? These are three very different ways of looking at a brain. Currently, we kind of consider the seat of identity to be in the body. But not really, because for about a century, through extended cognition and cybernetics, we've known that identity and cognitive activity are distributed across the environment.
And so, what is this for? What is it useful for? That's the question we should be asking about all of these different models. So, okay, here we are: the Jains, Anekantavada, the blind men feeling the elephant. This is the deeply pluralistic world that people are coming to grips with right now.
It's showing up everywhere in our media. I mean, it started with quantum physics and cubism. But, now it's in Rick and Morty, it's in Loki, it's in Everything Everywhere All at Once. This is the experience of being online amidst an eruption of narratives that are competing for our attention. How do we decide what to pay attention to? How do we know who's right?
My friend Stephanie Lepp is working on a fabulous series, rooted in our shared grounding in integral philosophy, called Faces of X, in which she pits actors against themselves to represent the liberal and conservative positions in political debates. It's incredibly amusing. I highly suggest you watch it. But she's taken it upon herself to curate all of these positions and to derive the synthesis herself. Which is exhausting, and which is why I want to get us back to this question of what the hell computers are even good for.
In 1968, in their hugely influential paper “The Computer as a Communication Device”, J. C. R. Licklider and Bob Taylor, foundational figures in the history of the field, wrote that the promise of computing was not to scale the ability of people to communicate over distance per se, but to make the models inside one person's head legible to everybody else: to translate between the way that my brain makes sense of the world and the way that your brain makes sense of the world. And they acknowledged that part of the problem of scaling communication is that it destroys conversation. We start out here having this back and forth, but now everybody is this guy down here getting filibustered by the Web. They realized computers were going to help us filter this information and screen out the things that we didn't want. But again, we're not really in a position to do this well. We are currently experiencing a poverty of attention. How do you participate in politics? How do you even behave as a citizen?
My friend Jamie Joyce and her people at the Society Library are working on AI-assisted debate mapping: bottom-up debate mapping tools and top-down generative debate modeling. This is one of the most promising instances I've seen of how to actually apply this stuff in a way that weaves us back together into real discourse.
I've always kind of had my eye on this, and this year I was lucky enough to have Van Bettauer, a friend of mine and a listener of Future Fossils Podcast, create a retrieval-augmented topic model (retrieval-augmented in that it cites its sources) based on every episode to date of my show: askfuturefossils.com. For each episode, you can see how the themes, the emotional valence, the epistemological framework, and where we're located in time change over the course of the episode, or you can look at the episode from above. And so again, it's this emphasis on multiple representations and on the ability to get our hands on, and make legible to us, this extraordinary surfeit of data.
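I can't speak to how Van actually built it, but as a rough illustration of the retrieval-augmented pattern, where every answer carries citations back to its sources, here's a minimal Python sketch. The embed function, the transcript chunks, and all field names are hypothetical placeholders, not the real askfuturefossils.com pipeline:

```python
# A minimal retrieval sketch: answers always cite their sources.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(256)
    return v / np.linalg.norm(v)

# Hypothetical transcript chunks, each tagged with episode and timestamp.
chunks = [
    {"episode": 212, "time": "00:14:05", "text": "On hyperobjects and scale..."},
    {"episode": 175, "time": "00:41:30", "text": "Simulation as social construct..."},
    {"episode": 198, "time": "00:03:12", "text": "Narrative collapse online..."},
]
index = np.stack([embed(c["text"]) for c in chunks])

def ask(query: str, k: int = 2) -> list[dict]:
    """Return the k most relevant chunks, each citing its source."""
    scores = index @ embed(query)           # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]      # highest-scoring chunks first
    return [chunks[i] | {"score": float(scores[i])} for i in top]

for hit in ask("What is narrative collapse?"):
    print(f"Ep. {hit['episode']} @ {hit['time']}: {hit['text']}")
```

The design point is simply that retrieval keeps the model's output tethered to citable moments in the archive, rather than free-floating generation.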
So now I'm working on this thing. These are the hundreds of people I have interviewed, or am planning to interview, for this project. And it's about how we make ourselves legible to one another, and our own values legible to ourselves. How do we make wisdom legible? We've got all of these different communities where people say they know things but don't know how to communicate them. And we need that now, because things are bonkers.
So the most promising developments I'm observing in AI right now are about how we render the wisdom of your particular life, your particular environment, as a scalable map: not a fixed thing like a paper map, but something you can go in and out of, something you can see at multiple different levels of representation and through multiple different methodologies. People will start training their own models in the way that Montaigne popularized the essay several hundred years ago, and it will become an equally important and momentous new cultural technology for personal reflection: not just at the level of the liberal modern ego, but at the level of every possible version of the individual, from the microscopic to beyond.
One of the most interesting people I've seen doing this is Pat Pataranutaporn at MIT. In his thesis, “Cyborg Psychology”, he talks about using “wearable wisdom”. I found this after I'd already decided that I wanted to take all of this, train my own model, and then use the best version of me and everyone else in my social network as a kind of angel on my shoulder.
Well, people are already working on this, and, hell yes. Let a hundred flowers bloom. My friend Adah Parris in London, however, makes a very good point here, which is that we have a really bad history, again, of faulty, myopic, biased curation. And if we're not allowing indigenous voices, and the wisdom encoded in their communities over thousands of years, to participate in this process, then what are we even doing?
We're not leveraging these technologies as well as we could. My friends at the Regen Network are asking similar questions about how we can make what the land stewards of indigenous communities consider valuable to the stewardship of their land legible to global finance, so they can make more money taking care of the planet than they can by surrendering to global extractive forces. And my buddy Austin Wade Smith, who works at Regen, is asking the same question about the more-than-human world. Like, how do we listen to the land? How do we give entire ecosystems economic and political agency, and hire people to take care of them? We had a great conversation about that.
Thank you for letting me go on long. Basically, the point is that where we place our attention is a moral act. We're having a harder time knowing how to do that, knowing how to tell stories right now, because everything is going through this enormous phase transition. But there's hope. I'd love to talk with you about it. I'd love to work with you if you feel like you need somebody who can help you see things differently.
And that's where all of this stuff is. So, yeah, thank you.