✨Refactoring “Autonomy” & “Freedom” for The Age of Language Models
For whose freedom are we coding?
Humans On The Loop is now fiscally sponsored by Happi.org alongside a great bouquet of allies like the School of Wise Innovation and the Post-Internet Project. Now you can make tax-deductible donations for this work, honk honk! I have 17 amazing conversations in the can and many more queued up (the episodes start dropping in December) and just started a thread on how AI will empower A Guild for The Protestant Reformation of Hollywood in the Holistic Technology and Wise Innovation Discord Server.
I also wrapped two epic tracks that will constitute the theme music for the show — a song about attention allocation in the Technium, and a hybrid soundscape weaving space sonifications, incantations, strings, keys, and generative AI music into a glitchy multi-instrumental headnodica experiment I hope will honor inspirations like The Books, Four Tet, and Björk. More on that soon, but first I want to prosecute the selves creating and/or consuming all this media:
Image: “refactoring the self, stems and circuitry, interwoven root networks, co-mingling animal and vegetal and mineral, entelechy, crystalline consciousness and plant teachers, efflorescence of sensors” @ AI Test Kitchen
“I was on a panel earlier this year with Geoffrey Hinton. He said, ‘We can’t let AIs manipulate us.’ I told him, ‘That ship sailed ten years ago! The question is, in what way do we want them to manipulate us?’”
— De Kai, machine translation pioneer, in conversation with Elliott Bayev
“The question is not can machines win, but what can they contribute to life. This is not an instrumental relationship. This is a symbiotic relationship, as a complete integration of life and machines in their own creative capacities…Machines cannot live without us. They cannot win without life. There is no question of winning. It is a question of symbiosis, of living together, or nothing.”
— GPT-3 as prompted by K Allado-McDowell, founder of the Artists + Machine Intelligence program at Google
"Individuality is not a neat concept for fungi. It's not a neat concept for us either, although we like to think it is."
— Merlin Sheldrake, mycologist, in conversation with Rick Rubin
Cosmos Institute (the first organization to back this project, and a group with “third way” aspirations I admire) prompted their inaugural essay contest with the following:
“How should AI be developed or governed to protect and enhance human autonomy, safeguarding both freedom of thought and freedom of action?”
Let’s be good philosophers and start by asking fundamental questions. Let’s see if we can uproot assumptions in this prompt and locate where the roots connect, what patterns they reveal, what other paths we might take that can “get us there” with the fewest convolutions.
What is human autonomy? What are freedom of thought and freedom of action? How can we develop or govern AI? Is there one way that AI “should” be developed or governed?
We’ll start with “autonomy” and ask to what degree we ever really had it.
Apple Dictionary defines it thusly:
1. The right or condition of self-government.
• a self-governing country or region: the national autonomies of the Russian Republic.
• freedom from external control or influence; independence.
2. (in Kantian moral philosophy) the capacity of an agent to act in accordance with objective morality rather than under the influence of desires.
We can’t ask about designing for self-governance without inquiring into selfhood. We can’t ask about freedom from external control or influence without inquiring into the nature of the external. And we can’t act according to objective morals without asking how morality emerges from the very substrate of desire that Kant attempts to leave behind.
For anyone who grew up in a networked digital society, these premises will probably look problematic. This isn’t print, absorbed in isolation and at arm’s length, static and canonical and referencing some shared epistemic backstop, some default confidence in authors and their institutions. This is an innately hypertextual, nonlinear, densely interlinked, ever-evolving palimpsest of contagious memes from which the subject manifests like a moiré at intersections between multiscale object-processes: “I” am just as much a vast community of human and nonhuman cells as “I” am an ephemeral uptake of matter, energy, and information serving transiently in the deep time dissipation of sunlight and stored chemical energy as “I” am an apparently distinct observer and a nexus of decision-making.
Telescopes and microscopes and screens and microprocessors and fiber optic cable bear on how my self emerges as a set of nested interactions stretching “down” into the subatomic, “up” into the cosmic, “out” into dark patterns of occulted social engineering and much larger biosphere-scale metabolic tangles, “in” to equally complex cascading agitations of my molecules responding to each other’s agitations. I am dynamic, mutable encodings of environmental regularities, mostly autonomic from the point of view of who I am as I define myself by episodic memory and narrative identity. I am an hypothesis about a world in which my very presence alters the conditions on which my hypothesis is based — incomplete, radically uncertain, networked, permeable, and therefore non-computable — which matters to the mission of describing people well enough with code to help us do a better job of moral action than we do already.
The age of digital computing owes a lot to Shannon, Bateson, Wiener, Burroughs, Engelbart, and many others who collectively spent centuries of labor hours interrogating the atomic self of print…to seek the kind of independence that arises as the goal-state of the apparition of a separate self, what Einstein called “a kind of optical illusion” that emerges from the relatively linear, low-density communication flows of Europe in the 18th century, would arguably start the quest for betterment off on the wrong foot. If we want Kant’s “objective” morals, we must start out with as close an approximation of objective truth as possible — which means acknowledging, at least:
We do not ever get the luxury of final totalizing knowledge;
Science gestures toward but never reaches objectivity in its ongoing synthesis of disconfirmation and support from ever-more-alien perspectives;
Thought itself is founded in associations, and the causal mechanistic frameworks upon which we can base moral judgments lie downstream of inferences situated in our lossy, fuzzy, social reasoning (or as William Irwin Thompson put it, “A fact requires a theory like a flame requires an atmosphere”);
There is no good argument for any level of granularity as the superior explanatory depth to rule them all, which forces us to wrangle with the paradox inherent in such seemingly-incompatible claim pairs as “free will exists” and “we live in a deterministic cosmos” or “I am alive” and “I am a machine” — suggesting that what we need is stereoscopic/dialectic thinking capable of living with the fact that “what” depends on “how” (e.g., Brian Arthur’s argument that algebra, with its fixed variables and its emphasis on balancing equations, cannot handle economic novelty production — whereas algorithmic modeling discloses trade as process: fundamentally uncertain, recombinant, and out-of-equilibrium).
Computer networks and LLMs in particular act as a medium to emphasize how narratives emerge out of nonlinear patterns of relationality and large latent spaces of statistical association, revealing atomic identity as a linguistic construct based on dynamic and provisional paths through an opaque and ineffably large semantic hyperspace — of epigenetic expressions, of evolutionary interactions, of mutually co-creative encounters with seeming-radical otherness, and most definitely of non-Western and nonhuman cognitive/attentional modes.
…which is all to say, it’s probably time for us to move past asking how we (whomever has the privilege of co-developing/co-governing these powerful technologies) can use them to uphold the illusion of a separate or separable self, and instead ask how we can lean into the affordances of networked digital and computational media to reveal to us a more accurate, empirically-substantiated, networked and porous self-construct that:
Arises simultaneously at and through the tangling and folding of phenomena at multiple spatiotemporal scales (see Varela et al.’s “enaction”);
Benefits from and yet is constrained by both Dawkins’ “extended phenotype” and/or Bateson’s “extended mind”;
Is measurably diminished by any loss of richness in mutual information exchange with distal aspects of its Umwelt (e.g., loss of biodiversity or cultural diversity);
Is disempowered in what we can call, by first approximation, “personal agency” when there are imbalances between the capacity of this extended self to gather information and to act on it;
Is acutely and persistently aware of its own limitations and biases within a larger, multi-agentic, multi-perspectival process of active inference (and thus deliberately depends on and invests in cultivating the insights and abilities of “not-selves” to correct for prediction errors, including those of self-reference).
To put this simply, European rational enlightenment philosophies of freedom and autonomy may have been suitable for guiding the development and governance of built environments constrained by letter-speed communication, sail-speed oceanic trade, horse-speed transit, and a world population of one billion people (mostly rural and agrarian). It’s been three hundred years and now more than five billion people find themselves in an entirely different set of daily circumstances, choice points, information flows, and scales of interaction and abstraction.
For the last few decades, planet-spanning meshworks of supply chains, corporate transactions, rapid cultural exchanges, transnational alliances, industrial pollution, and human travel have together staged a figure-ground reversal emphasizing the connectedness of everything…and yet most of the new realities we have to navigate lie over the horizon. We know every purchase that we make pulls on some vast, immensely complex hyperobject, and yet we are mostly still blind to the concrete consequences of those actions. We know we are being watched but not by whom. We experience our actions as immensely consequential, due in no small part to a mismatch between our dim nascent sense of planet-scale identity and the even-less-developed sensorimotor feedback loops that would let us grok the implications of our actions, learn from our mistakes, and adjust and tweak in open cybernetic steering. (See for example how consumers have been gaslit by predatory institutions like British Petroleum, which popularized the Ogilvy-coined “carbon footprint” as a way of dodging corporate accountability for climate change: individuals can state the obvious — that BP has by far the larger carbon footprint — but can’t necessarily prove claims like this, in part because we don’t have the “telescopes” required to see it all at once, and in part because institutions deliberately obfuscate the data trails.)
Now we can ask what “freedom” means, starting from these observations. We can’t use “freedom from” the way it was imagined in the years before ecology and global commerce. But we can use “freedom with” or “freedom to” or “freedom as”, and get a lot of mileage with “degrees of freedom” — as in, however many axes we can cognize or compute within the bounds of reasonable metabolic limits — as in, “How complex must our causal mechanistic frameworks be to make good choices in the time we get to make our choices, and can we afford to favor comprehension or must we accept opaque but accurate prediction?” There are always trade-offs, and since nobody has time to be an expert in all things we always let most choices fall to others in whom we effectively place faith, because neither do we have the time to fully audit their credentials.
Where is hearsay good enough and where must we get rigorous with data provenance and methodology and rooting out implicit bias? How do we inform re-weighting or select the thresholds at which we must reconsider what we’re willing to accept as satisfying explanatory depth? We need cycles within cycles, checks on checks, layers of contextual heuristics, meta-frameworks that allow for movement between quantity and quality, first- and second- and third- and nth-person confirmation, instrumental and intrinsic value, balancing the good and true and beautiful and practical.
In this sense “freedom” means, approximately, “the capacity to exercise one’s choice within a known space of parameters” or “the experience of acting in accordance with one’s models” — something like “alignment of the will with possibility”. Note the conspicuous absence of “independence” from this definition, for reasons like those given by Galen Strawson about the conditioned nature of any choice. Moreover, freedom isn’t ignorance but depends on at least partial understanding of the interdependencies that make the felt-sense of a self possible in the first place and define the bounds of rational decision-making (legible costs and benefits disclosed within the Umwelt of a body as a process and a model of the regularities of space- and time-bound information). Paradoxically, freedom is made out of limitation; one cannot choose when everything is possible.
Further, thought is action and vice versa. These categories are not ontological but describe behaviors disclosed at different scales: Where and when is the impact? How legible are the results, and to whom? “My” choices can be seen as logic gates in the much larger computations of society or of the biosphere. Design is philosophy embodied in constraints on social interaction. Ideas are metabolic, living on the substrate of neuronal firing pattern motifs and retrievable outboard memory in code and other writing systems. “Freedom of thought and freedom of action”, relative to human beings, references states of balance within and across scales: Does emergence from the microcosm (within people) to the mesocosm (between people) work in concert with regulation from the macrocosm (between institutions) to the mesocosm to afford the appropriate proportion of adaptability to stability? If my nervous system isn’t integrated, I cannot decide to move my hands, or I may not even realize that I have them. If I cannot build or vote, I have no influence on infrastructure; but if I can’t learn, my votes or building efforts are blind, pointless, futile. Individuality and community are inseparable, so fostering autonomy cannot be framed in isolation from the health of every system in which and of which individuals emerge.
Now how do we encode this in AI and the multi-scale regulation of technology?
A very partial list of principles from complex systems science to consider as constraints for tech and governance:
Forgetting has evolutionary fitness benefits. Networks made of nodes with longer memories don’t adapt as quickly to environmental changes, whereas networks made of nodes with shorter memories do not stably encode environmental structure. Tradeoffs between plasticity and stability at multiple scales within collective intelligence systems suggest that the wise implementation of artificial intelligence will utilize a diverse portfolio of short- and long-term memory encoded in layers and locations appropriate to the pace of change at each; some situations call for extreme durability (e.g., language and culture vaults that preserve a reservoir of low-overhead, easily retrievable novelty available for rare but serious threats to systems, the way that highly specialized mature ecologies nonetheless maintain some modest population of generalists as a kind of “anticipatory” adaptation from which life can rebound after catastrophic collapse) and some call for extreme transience (e.g., when uncompressed always-on recording would effect a kind of DDoS attack on information storage or processing, the way that CCTV security systems usually record into a buffer that is periodically erased after some latency period at which point the recordings become “stale”). We need both “heirloom” and “compostable” computing, or systems that recycle when appropriate but don’t metabolize important resources in service of “innovation” (i.e., there is a mathematical argument for seeing ideologically high-turnover organizations as suffering something like bone cancer). Knowing how to balance the recruitment of resources for memory encoding versus the liquidity of resources for adaptation is a crucial consideration, which brings us to:
In some sense, individual autonomy (at whatever level of individuality) requires illegibility to the systems in which it operates. The metrics that allow a state or market to “see” phenomena allow top-down regulation, and history is full of examples where this legibility caused harm to the systems it could ultimately only see in part. Standardized testing warps educational outcomes as students try to beat the tests instead of learning and teachers optimize classroom performance for school funding instead of truly preparing students to think “for themselves” in a complex and evolving world. The divestiture of Bell Labs famously brought tech projects under the direct supervision of managers who started to demand returns on corporate investment earlier and earlier, ruining the group’s track record of world-changing innovations. GDP does not work as a measure of human or societal flourishing and using carbon capture as a proxy for ecosystemic health has led to many well-intended, law-abiding efforts that have drawn energies away from more holistic and effective mechanisms for the cultivation of biodiversity. Fungibility of value and quantification generally has empowered economies of scale and ever more effective capital extraction at the cost of much if not most of cultural diversity and the wild space within which there once resided incalculably large reservoirs of future discoveries.
We benefit from knowing how to align the scale of analysis with the granularity and dimensionality of analysis. In developing AI that preserves the dignity of the individual, we have much to learn and mimic from the way that prehuman living systems have evolved designoid balances for distributed governance as an algorithm weighted across different spatiotemporal horizons, in which AI gives us new “telescopes” and “microscopes” with which to notice opportunities for potent interventions, but without inviting “micromanagement” of human behavior…a problem we already face due to behavioral nudging through surveillance capitalism. Something like the daylighting of paved waterways in urban spaces or the re-wilding of animal migration corridors seems necessary as we explore “right-sizing” data collection and the consequences of human-machine and algorithm-algorithm coevolution. Reservoir computing extends the predictive horizon for chaotic systems like the weather by injecting noise, instead of gathering more data. Where do we compress, and where do we ignore, and when? How much data is too much? When might scaling models improve prediction in the short term but generate unwanted externalities in the curtailment of the creative expression of people and the systems upon which we depend? The fundamental theory of intelligence describes a world of gradients between different strategies for active inference and the dissipation of free energy.
Along one axis defined by David Krakauer et al., information theory formalizes a continuum of individuality from entities defined entirely by mutual information with their environment, or “culture” (e.g., whirlpools, which have no DNA but persist as stable patterns thanks to the interaction of river beds with water), to individuals entirely dependent on the temporal inheritance of information, or “nature” (e.g., hypothetical organisms whose anatomy and instinctual behavior are entirely pre-specified by genetics...although it’s worth noting that no such creatures actually exist, because epigenetic interactions constitute cybernetic feedback with an organism’s surroundings and thus something like learning, even if it does not occur in brains). Along another axis defined by Ricard Solé, we see a continuum from “solid” brains, whose network structures remain more or less static and fixed in place with only the weights changing, to “liquid” brains, whose nodes not only change the weights of their relationships but also their physical relationships in space.
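Krakauer et al.’s axis can be made tangible in a few lines of code. The sketch below is my own toy construction, not theirs: the “whirlpool” and “genetic” agents and the plug-in mutual-information estimator are illustrative assumptions. It measures how much of an entity’s state is borrowed from its surroundings versus carried internally:

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))          # joint counts
    px, py = Counter(xs), Counter(ys)   # marginal counts
    return sum((c / n) * math.log2((c * n) / (px[x] * py[y]))
               for (x, y), c in pxy.items())

random.seed(1)
env = [random.randint(0, 3) for _ in range(5000)]  # a four-state environment

# "Whirlpool": state almost entirely determined by the current environment.
whirlpool = [e if random.random() < 0.9 else random.randint(0, 3) for e in env]

# Fully "pre-specified" organism: state fixed by inheritance, blind to context.
genetic = [2] * len(env)

print(mutual_information(env, whirlpool))  # substantially positive
print(mutual_information(env, genetic))    # exactly 0 for a constant state
```

The real formalism is richer than this, but even the toy makes the pole-to-pole gradient measurable: individuality defined by context registers as high mutual information with the environment, while individuality carried purely by inheritance registers as none.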
The simple fact of so many different kinds of intelligence suggests that the prerequisite conditions for human beings, the substrate of our felt-sense of individuality and autonomy, depend on an enormous diversity of these different modes of cognition…and that the preservation and cultivation of autonomy depends on the maintenance of this nonhuman cognitive diversity. If we want to actually enhance or expand autonomy, it is reasonable to start from the assumption that we may have to not only simulate this in machinic architectures, but articulate these different modes in ways we have not yet discovered in ecological relationships. What combinations of intelligence, what geometries, can we not find in prehuman intelligent systems, and what explanations can we stress-test for why there are (presumably enormous) empty regions of the phase space of evolutionary possibility on Earth to date?
We cannot take the world that we inherited for granted as the best of all possible worlds in which autonomy can flourish, but neither should we pull down “fences” in that phase space without making our best effort to discover why we do not notice certain combinations. A rigorous study of the fundamental patterns of intelligence and their interaction will enable us to make more informed — and, crucially, massively parallel — explorations of the ontogeny and governance of machine intelligences for the promotion of the human.
Much, much more to say about this soon, in a series of articles in which I will transect and unpack the six dimensions of Humans On The Loop. In the meantime, let me refer you to fellow wise innovator Christina Fedor’s response to Cosmos Institute Founder Brendan McCord’s “The Machine Stops: Will AI Lead to Freedom or Control?: How E.M. Forster's dystopia mirrors our AI-driven crossroads”:
We don't even need to look to the past to know what it is to be "transformed from free agents to passive subjects." Progression of technologies is inextricably linked to those power structures that set forth the rules of price, value, and social status, collectively defining the boundaries of any individual's freedom. So long as society has been of a technological nature, free agency remains elusive. Perhaps such a freedom is merely the intermediate goal on a path to something else, which might reveal itself through a latticework of ideas that are collectively telling us we are all part of something big, mysterious and impossible to explain.
Take a look at some of the trends so far. The impact of mobility technology on spatial dynamics has rendered land a scarce commodity, increasing its value while simultaneously reducing the cost of mobility. Similarly, as labor time is compressed, leisure time becomes a luxury, with its price escalating inversely to the decreasing cost of labor. The allure of artificially illuminated screens further commodifies daylight leisure. In response to these shifts, we consciously organize ourselves into hierarchical class structures, reinforcing these power and price dynamics while elevating those who embody intellectual or economic mastery over them. This pattern reflects our persistent tendency towards self-imposed servitude in the pursuit of survival.
One of the greatest threats to human autonomy and therefore flourishing is our own capacity to see through the fog of complexity, and to disillusion ourselves from those constructed features of our lives that bind us to conditions of unfreedom. Sometimes, the binding conditions are not even cultural. For example, our cognitive predisposition to visual stimuli is so pronounced that we rarely allocate enough expensive "free" time to develop skills for looking at the dark interior of ourselves — a practice that so many ancient traditions associate with a fundamental disruption of values — or, enlightenment. But in today's society, where are the images of wisdom we can all aspire towards?
Even before a new philosophy can steer technological governance towards human flourishing, we will need to carve out new contexts (perhaps even so new that they break time-space frames) within which it becomes possible to develop an innate sense of what it is to be a human again.
I respond to Christina:
"So long as society has been of a technological nature, free agency remains elusive. Perhaps such a freedom is merely the intermediate goal on a path to something else, which might reveal itself through a latticework of ideas that are collectively telling us we are all part of something big, mysterious and impossible to explain."
Epistemic humility seems prerequisite to acting with due skepticism (the original, formal definition of the philosophical practice, not the defensive knee-jerk reaction to anomalous stimuli) and thus to freedom from hijack by viral memes. And the careful study of history suggests that "freedom" as an ideological goal-state has frequently, if not always, undermined the long-term agency of those whom it possesses as an ultimate concern; how can we exist but in an elaborate web of interdependencies, and thus how can extrication from the apparent bindings of relationality be anything but the script whereby we become unwitting agents of the destruction of the systems upon which others rely — and thus a "villain" in the colloquial sense, a cell that will not listen to the regulatory commands of the body and thus identifies itself as a foreign and invasive entity? If we choose to lean into a definition of selfhood whereby selves emerge as ephemeral patterns of interbeing, the strategy for "freedom" transforms profoundly into optimization for service of the bigger and ultimately only-ever-partially-knowable self-that-includes-other. The alternative is to become malignant, however nobly one might frame the quest for power as an affirmation and fulfillment of the toil and sacrifices of our ancestors.
That’s all for now. 😘
🎶 Recommended listening: Regina Spektor’s immensely prescient song “Machine”
My eyes are bifocal
My hands are sept-jointed
I live in the future
In my prewar apartment
And I count all my blessings
I have friends in high places
And I'm upgraded daily
All my wires without traces
Hooked into machine, hooked into machine
Hooked into machine
I'm hooked into, hooked into
Hooked into machine, hooked into machine
Hooked into machine
I'm hooked into, hooked into machine
I collect my moments
Into a correspondence
With a mightier power
Who just lacks my perspective
And who lacks my organics
And who covets my defects
And I'm downloaded daily
I am part of a composite
Hooked into machine, hooked into machine
Hooked into machine
I'm hooked into, hooked into
Hooked into machine, hooked into machine
Hooked into machine
I'm hooked into, hooked into machine
Everything's provided
Consummate consumer
Part of worldly taking
Apart from worldly troubles
Living in your prewar apartment
Soon to be your postwar apartment
And you lived in the future
And the future
It's here, it's bright, it's now