Introducing Humans On The Loop
Help me launch a bold new series on decision-making in the age of automation!
It’s time to graduate from asking if our dreams are possible to asking how we can dream better.
Greetings and welcome to the first installment of Humans On The Loop, a new series of essays, interviews, salons, and other artifacts intended to record and cultivate evolving public discourse on technology and wisdom, agency and automation. I can’t think of anything more timely.
Here’s the pitch and action plan I wrote in February for my application to the 2024 O’Shaughnessy Fellowship. (Out of 5299 international applicants, I made it into the final thirty and was invited into their grant program. With that support and the support of a Cosmos Institute Fellowship, I’m off to the races but by no means able to rest on my laurels. Don’t hesitate to reach out and suggest an opportunity!)
Below, you’ll find the first installment: an expository piece on why I chose the name, and how it frames and guides this inquiry.
But first, some history and context. Feel free to skip this part for now and circle back…
(And! Heartfelt thanks to all of you who offer your support. I’m unemployed with kids again, and taking an enormous leap of faith to do this. Every contribution matters. The pitch explains my goals and plans, and what it’s going to take, and how to help.)
History & Background: Why This? Why Now?
This project is inspired by over twenty years of scholarship, experiments with fraught-but-visionary new technologies like Google Glass, and several hundred awesome conversations I’ve enjoyed as host of Future Fossils and Complexity. But it only came into focus last year after I left The Santa Fe Institute and stumbled into doing background research on legendarily innovative organizations for an as-yet-unannounced new open source AI non-profit. For months, my work was to read as much as I could on the history of PCs and the Internet and then identify the structures and processes that the golden eras of five groups (Bell Labs, ARPA, Xerox PARC, Pixar, and SFI) all had in common.
I organized three months of notes into an eighty-plus page treatise on best practices for innovation and invention that the company intended to release into the commons. That never happened, partly because innovation for its own sake tends to make a lot of monsters, and effective oversight is very hard.
The next question I was tasked to answer flowed straight out of the ethical conundrum we faced over whether or not to open-source an innovation handbook. It is a question I consider even more important than the first, because fewer people are paid to ask it:
“How can wisdom keep pace with power?”
I believed (and still believe) in what they set out to accomplish — to realize a future in which not just using but creating (plural, offline, tailored) AI models is as commonplace as writing, and the power of ubiquitous machine intelligence is spread as evenly as possible. To do this, it seemed necessary to help people better understand the founding vision for computing as described by legendary minds like Licklider and Engelbart and Taylor, and how far we’ve strayed, and why, and to engage in public, intergenerational, cross-cultural, and open-ended dialogue about the “wisdom/power” question as a way of building trust and learning how to better carry the baton. But that question was eventually subordinated to more pressing and immediate concerns, like pitch decks.
(After all, philosophy has always had a hard time justifying its existence to the ruthlessness of markets. You have to eat before you art, and even if holistic and synthetic thinking bolsters reason, and concepts make their way into the world through play before they’re formalized, and great new ideas don’t spread themselves without emotional appeal, art is equivalent to “noise” for managers and engineers…which is to say, arguably useful more often than it is acknowledged as such. I nonetheless foresee an even weirder world soon in which some of the greatest value humans can contribute draws on skills that STEM-dominated education usually neglects. Mark my words — and Steven Johnson’s: “the revenge of the humanities” will mean Resident Generalists & Chief Philosophy Officers. As I tried to put it to my colleagues at the startup, it is not enough to have a baby. We must also raise the baby, which means not just building something great but also making sure a tender new idea can thrive upon delivery. Open source AI will need what Google Glass lacked: namely, parents and extended family.)
At the same time, I got mixed up with a group of disaffected techies who all wanted to join forces and help breathe sanity into an AI sector fueled by drugs and far too many messianic and apocalyptic claims. My people! I’d been arguing for a third way for years and felt at home amidst these current and/or former employees of Meta, Google, Apple, Amazon, Mozilla, Walmart, Microsoft, Airbnb, The Center for Humane Technology, and other big-name orgs, all of whom see something like The Sorcerer’s Apprentice going on: a need for individual initiation among the hordes of smart-but-inexperienced young minds creating magical new tools. Mixed in with them are some inveterate practitioners of ancient wisdom schools and medicine traditions keen on helping speed-freak startup folks slow down for long enough to bring more care and long-term viability to what they do. Most of these people I met in an AI Signal group or while co-facilitating an Embodied Ethics in The Age of AI course — organized by Andrew Dunn of The Wise Innovation Project and led by Josh Schrei of The Emerald — which immersed us in five weeks of constant large-group conversation on the cultural and ecological containers that can help us wield new AI superpowers wisely.
It was more than I could handle. For the third time in a year I teetered on the edge of burnout, trying to write the story of the future of the Internet in a crash-course-pressure-cooker of a fledgling company while volunteering to help build an unrelated, nascent and as-yet-unfunded coalition for holistic tech development…not to mention working constantly to be as present and as loving as I could be as a dad and partner. (Otherwise, what is the point?) It wasn’t long before I came to the conclusion that I had a call to answer. No matter how much I had hoped to stick around and see it through with the computing startup, I knew in my heart that I would better serve their long-term goals and my own needs (and my family’s, and possibly the world’s) by navigating people into deeper understanding, not just greater potency.
I didn’t need to push the river. Tech innovation has enough momentum. Steering? Knowing what to do with it — knowing how to amplify good biases, to shape the worlds we want, to cultivate well-being with new tools we barely understand? By far the bigger and more necessary opportunity for me to serve.
So here we are.
Now, let’s begin.
Part 1: Introducing Humans On The Loop
✨ First, A Disclaimer: This Is Going To Get Weird
There are many ways to talk about technology and wisdom, other threads to pull on in the network of perspectives that combine to make an interference pattern and delineate a topic-space that most of us alive today will recognize as meaningful and probably urgent. “Automation and decision-making” is one way to bring it into focus; “understanding and prediction” is another. “Agency” — when weighed against “convenience”, maybe, or “efficiency” — figures hugely into this discussion. “Power and responsibility” is another well-worn groove between two key ideas, an edge I hope to walk and walk again with you as we trace the shape of what’s emerging in this day and age…and how it links back to perennial concerns while bringing novel challenges for which the past alone is not enough to guide us.
The work of mapping real complexity — something that by definition evades all mapping efforts — is always open, generative, and provisional. That work is humbling, and in this case that’s the point: in illuminating spaces that stay dark, we get to practice ways of seeing that I consider crucial in our turbulent and metamorphic era. We have the complicated privilege to live through an efflorescence of diverse intelligences, a transition of the kind that punctuates long stable periods of biospheric history and sets the stage for everything after. One thing that moments like(-but-not-like) this one have in common is that they’re disorienting. As it happens, owning up to what we do not and/or cannot know is hugely helpful — John Keats’ “negative capability” and Zen’s “beginner’s mind” are two of many strategies for moving through these epochal transitions, during which we have to re-test all of our assumptions and heavily rewrite the guides to life.
Our enormous wealth of cultural inheritance, a cornucopia of insight gathered over hundreds of millennia by myriad societies and now in large part free to drink from at our whim, acts as a reservoir of adaptations that swing in and out of relevance. Ideas that people once attempted to snuff out as crazy or heretical now make a ton of sense, because great cycles have a way of bringing us around to rhyming circumstances. Marshall McLuhan called it “cultural retrieval”: something like a systole-diastole in how each passing media environment constrains our thought and action, with striated bands of fashions that go in and out of vogue, like sedimentary deposits visible to those of us who step back far enough to reckon human history as the process of geology it truly is. Evolutionary theorists see the same phenomenon in DNA: the chicken teeth not lost so much as casked, awaiting such time as it might prove useful to have teeth again, and countless other instances suggesting that our “future” will be largely made from latent, currently invisible and dormant bits of what seems “past”. When what seemed obvious gets weird and even bedrock notions like “reality” are up for grabs, it’s time to put our favorite stories on probation and experimentally reprieve the other stories we may once have jailed for decent reasons…but might now be what we need to meet our changing situations.
So the disclaimer is: if you haven't noticed, this is going to get weird. Weird is guaranteed when we set out to think through what it means to meet the dizzying complexity within which we must make decisions about how to use the new, enormous powers granted by the digital transition — including how to innovate responsibly. One goal is to help strike a precious balance of humility and comprehension, with which each of us can help to realize noble dreams of better futures — and in ways that honor and accept the bigger picture(s) in which our desires constantly rub up against constraints like natural limits and the needs of other beings.
In other words, we cannot know how to act without asking who we are — and if, as Stafford Beer says, "the purpose of a system is what it does," then we learn who we are by asking what we do. This is just one of the many loops we're on at any time, this "noun/verb" frame shift, and by traveling it back and forth across now-arguably-obsolete distinctions like "matter/mind" or "nature/culture", everything takes on strange new dimensions.
I'll argue this is necessary if we want to thrive amidst the exponential trends defining modern life, because the Internet is one of the most potent psychedelics ever synthesized — and the whole world’s going on this trip. As our boundaries collapse and new ones form, it’s easy to feel overwhelmed…but this happens periodically on Planet Earth, and we have well-established protocols for “feeling our way stone by stone” through fundamentally uncertain times.
✨ What Do I Mean By “Humans On The Loop”?
In her 2012 Human Rights Watch report, Bonnie Docherty laid out a three-part scheme for classifying how much control humans have over automated weapons. “Human-in-the-loop” (HITL) is by far the most famous of the three: a system in which the machine presents its operator(s) with a slate of options and the humans do the choosing. Machine learning took up the term to differentiate between the norm of random sampling — a “data over theory” mode that places all its bets on “raw” statistics — and approaches in which people are involved in building models and curating what those models are fed in order to refine them.
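To make the pattern concrete, here is a minimal sketch in Python (all names hypothetical, standing in for whatever model or planner a real system would use): the machine narrows the field, and a person makes every final call.

```python
# A toy "human-in-the-loop" (HITL) control flow: the machine proposes
# a slate of options and a human operator does the choosing. Purely
# illustrative; it only shows where the human sits in the loop.

def propose_options(situation: str) -> list[str]:
    """Stand-in for a trained model or planner that ranks candidate actions."""
    return [f"candidate action {i} for {situation!r}" for i in range(3)]

def human_in_the_loop(situation: str) -> str:
    """The human is the bottleneck: nothing happens until the operator chooses."""
    slate = propose_options(situation)
    for i, option in enumerate(slate):
        print(f"[{i}] {option}")
    choice = int(input("Operator, choose an option by number: "))
    return slate[choice]
```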
In this sense, pure HITL in AI is a kind of fantasy: as algorithmic justice scholars like Joy Buolamwini note, there is always bias in every data corpus. “Theory-less” AI is really just opaque, not actually objective; the “view from nowhere” is a myth in the pejorative sense of the word, and math is full of solid arguments that we can’t escape this fact — like Gödel’s incompleteness theorems and Wolpert’s No Free Lunch theorems, which I suggest the industry might want to make required reading for aspiring engineers.
But first, let’s give some credit where it’s due. All possible critiques aside, HITL as an idea gets several key things right:
• People make decisions;
• Automation is, ostensibly, supposed to help us make them;
• People of today, on average, experience machines as separate tools through which our will is realized;
• Humans figure centrally in the design of built environments; and
• Maybe we do not want Terminators “choosing” who will live and who will die.
And yet… The “loop” in question is the cybernetic feedback loop described in works like Norbert Wiener’s classic text, “The Human Use of Human Beings”, one of the strange fruits that grew out of World War II’s violence-driven innovation sprint and the advent of technologies like targeting computers, modern propaganda, cryptographic hacking, and the atom bomb. Trauma tends to spur the growth of learning networks as adaptive systems bend to marshal their intelligence in service of prevention, making sure that history won’t repeat itself, and now we live inside a hyperobject called the Internet that grew from linking databases in anticipation of another nuclear event. So we are humans in the loop insofar as feedback dominates the human world, and the post-war gestalt is one in which we are embedded in a giant web of interaction and relationality. But that same revelation makes it clear we are not meaningfully separate from any of it.
In this light, Docherty’s third automation category doesn’t make much sense: can there even be “humans-out-of-the-loop” in The Anthropocene? “No human action is involved” seems every bit as much a fairy story as “the view from nowhere,” now that we acknowledge choice lives somewhere even in the most abstracted and depersonalized systems. Yes, in the heat of battle, it does matter whether someone pulled the trigger then and there, or whether that decision was obscured in the same way that corporations dodge accountability (by claiming, for example, that offensive search results are due to some rogue staffer who has since been fired: chaff on journalists’ and lawyers’ radar systems). Like the “Random Darknet Shopper” robot built by the !Mediengruppe Bitnik art collective, which bought ten pills of ecstasy and had them mailed to the Kunst Halle gallery, these systems can’t be held responsible. And how convenient! But someone has to be, and we are in the midst of painful updates to a centuries-old justice system for precisely the same reasons selfhood needs a renovation in The Age of Algorithms. As several authors and film-makers have established, “My brain made me do it” doesn’t scale.
The least well-known automation category is the namesake for this project. "Human-on-the-loop" (HOTL) denotes a system in which people work within a deeper layer of constraints: operators can still veto actions — for instance, by aborting the decision to retaliate against a likely inbound missile — but a human on the loop is not necessarily the bottleneck for every choice. Here, design and the initial training of a system live upstream, just as none of us are privileged to choose our bodies or our parents or our preschool teachers. And hardly anyone alive today, excepting Wim Hof and a handful of Tibetan monks, can control their heart rate, body temperature, immune activity, and other autonomic body functions. For this reason, HOTL strikes me as more true to our condition as late entries on the world stage, with limited ability to stop a sneeze or knee jerk, mostly stuck in bodies that encode the stable features of the landscapes of our ancestors and minds trained on a culture that flows faster than our flesh (for now) but still lags behind What Is. Not one of us has godlike power to determine who we are. We have to work with "what God(/evolution) gave us", and therein lies the great work of becoming who we are — of individuation, integration, and potentiation of the possible within a non-negotiable constraint-space.
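For contrast, and under the same toy assumptions as the sketch above, here is the HOTL pattern: action is the default, and the operator supervises with a veto window instead of approving every choice.

```python
# A toy "human-on-the-loop" (HOTL) control flow: the system proceeds
# on its own unless a supervising human intervenes within a window.
# Purely illustrative; the veto console is a hypothetical stand-in.

import queue
import threading

def human_on_the_loop(planned_action: str, veto_window_s: float = 5.0) -> str:
    """Action is the default; the operator may abort, but is not consulted first."""
    vetoes = queue.Queue()

    def watch_for_veto() -> None:
        # Stand-in for a real operator console or kill switch.
        reply = input(f"Executing {planned_action!r} in {veto_window_s}s. Type 'veto' to abort: ")
        if reply.strip().lower() == "veto":
            vetoes.put(True)

    watcher = threading.Thread(target=watch_for_veto, daemon=True)
    watcher.start()
    watcher.join(timeout=veto_window_s)  # wait for the human, but only for so long

    if not vetoes.empty():
        return "aborted by operator"
    return f"executed: {planned_action}"
```

The difference is where the default lies: in the first sketch, inaction is the default until a human chooses; here, action is the default unless a human objects in time, which is part of why upstream design and training matter so much.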
✨ A Paean To Bounded Rationality
Here is another way of framing this: there are now many million times more people using tools — both in general and for any given tool — than making them, which has important implications for whoever wants to think about the scope of self-determination and responsibility when it comes to technology.
As William Gibson said, "the street finds its own uses" for the things we invent, and nobody can run the kind of totalizing simulation we would need to forecast every possible way somebody might "jailbreak" their idea. We take a bell and hammer it into a cannon, then we hammer cannons into bells. Making something new is not deciding what it will become. All of us participate in innovation, which is why the Carnegie Mellon roboticist Hans Moravec called AI our "Mind Children"; as you know, it takes a village. We cannot reasonably hold an individual responsible for how somebody else might weaponize their patent, any more than we can blame the parents of the cultists who killed Sharon Tate. The limits other people place upon our agency (including regulation that prevents, and markets that incentivize, specific ventures) don't just shape what we decide is worthy to create, but how it is repurposed.
Ultimately, we can't say Tool X exists for Purpose Y except in retrospect...which looks a lot like how our brains write narratives about the reasons why we took an action only after it has happened. Storytelling is a more sophisticated, slower function than statistical association. Living in a human body puts us on the loop, not in it, when we consider our relationship to most of what a nervous system does. The ego is an apparition that arises at the intersection of a choir of neural subroutines. This doesn't contradict the fact that who we think we are exerts all kinds of downward causal pressure on our subcomponents — our agency emerges like a social contract from the interaction of our junior agencies — but scale determines whether we experience ourselves as cities, or as authors, or as cells in some enormous planetary super-computation.
"Compatibilists" like the physicist Sean Carroll argue that this move resolves the free-will-or-determinism problem: individuality, and thus accountability, depends on just how granular you want to get. The opposite of a profound truth is an equally profound truth; we are wholes and parts (and made of wholes and parts). How much does a decision owe to physics and how much to economics? That depends on where we care to draw the boundaries around "decision", not to mention “economics”. Technology has opened us to other scales beyond the human, and living in this century requires us to reimagine what we are and what it means to live smeared over the entire micro-, meso-, micro-cosmic zoom. There is more than just one loop, and we are in some, on some, out of many others. But when it comes to most loops, we are living in the world other choices made already. We have some control, and could have more, but also: there are ancient and time-tested reasons why you aren't in charge of telling every nerve to fire. Presidents have veto power but do not write legislation for each county in their nations. Consequently, HOTL thinking frames this inquiry with the humility we need to live among both "natural" and "artificial" minds, beneath the feet of institutions, using someone else's code, limited by history in many more dimensions than the countable degrees of freedom we enjoy while making choices.
In- and Out-of- are both weird edge cases, extremist options oversimplifying the most obvious but complex Real: When have you ever been completely helpless? When have you ever been in perfect and complete control? These terms appear to make sense when you look at loops in isolation. Loops, however, don’t exist in isolation.
More soon. Subscribe and help me keep this going!
Michael Garfield is an over-hyphenated product of confusing times. More info here.