
Myth and the Making of AI

Essay Competition Winner

Published on Jul 16, 2018

A myth runs deep in Western culture that can be traced through everything from Wild West novels to space exploration and the origin stories of Silicon Valley startups. It is a myth that crosses creative boundaries, driving blockbuster sales of Steve Jobs biographies, inspiring visitors to Michelangelo’s ceiling frescoes in the Sistine Chapel, and fueling criticisms of Beyoncé when her albums credit dozens of writers. It is repeated in history books and celebrated in TED talks. It is the myth of the lone pioneer. This lone pioneer may be a hero with a thousand faces, but he is a singular hero. His journey celebrates the self-discovery that comes with creating, but at its heart, it also affirms a reductive identity that is based in self-sufficiency. Everyone else is invisible in his story.

Today’s race for artificial intelligence (AI) has produced the greatest hero narrative of all, with the Singularity driving a new wave of pioneer stories featuring man as maker and machine as protagonist. Yet this cultural mythology contradicts the characteristic that most distinguishes the rising AI age: ambiguity. Creating technology is now a practice of designing for the uncertain and coding for the indeterminate. In the history of design and technology, human and technical variability have long been treated as error: a deviation from the norm to be simplified away, either in service of mass-scale market growth or because a diversity of possibilities threatens individual control. Yet human lives are full of ambiguous interdependencies. In his essay “Resisting Reduction: A Manifesto,” Joichi Ito asserts that when successful, systems of these interdependencies form value exchanges whose function is flourishing, drawing from “diversity and the richness of experience.”[1] Interdependence is a necessary reality. Every pioneer needs a patron; every artist needs a group of creatives to inspire and provoke. No pioneer, no artist, no inventor ever makes it alone. These human truths lead us to question the reductive cultural myth of self-sufficiency as the highest form of worth and instead affirm one of adaptive interconnectivity.

Flourishing systems retain and value this natural interconnection. Ito concludes his essay with a stirring call for a new kind of “‘participant design’—design of systems as and by participants”—that champions a robust interdependency in the creation of complex adaptive systems. A call like this requires strategies for creating that are different from what has advanced much of machine learning intelligence to date: technology corporations, government defense departments, and academic institutions, each with distinct funding mechanisms and separate interests and incentives that motivate their work. Because information is considered proprietary, teams in these environments are often required to remain isolated from each other in the face of enormously ambiguous challenges. This is problematic because if the systems that determine who makes AI are intrinsically disconnected, we cannot expect the outcome from these systems to be anything but the same. To counter such isolated approaches, technologists themselves have founded new interdisciplinary groups such as the AI Now Institute[2] and OpenAI[3] that aim to encourage greater transparency, examine ethical standards, and promote broader regulatory oversight in the making of artificial intelligence. A system of robust participant design does not lessen ambiguity, but it can enable its participants to benefit by leaning into it as a way of working.

Applying the concept of participatory design to how AI is made requires answers to several practical questions. Who determines who makes? Who has the greatest expertise in interdependence? What actual methods can developers, designers, and creatives apply when the aim of making is flourishing? If AI is to participate in a system that is adaptable and sustainable because it is inherently diverse, the qualities of such a system would look different from the qualities of a system created in a posture of secrecy.

Examining a few key myths can help reveal the roots of reductive systems in technology and how we might transcend these deep-seated paradigms. The stories we tell frame the choices we make. One way to gain insight into the connection between makers and their creations is to study existing factors that break and build human relationships. To begin, flourishing systems that promote interconnectivity would have diverse touchpoints, value interdependence, and cultivate a strong sense of belonging.

Systems that Have Diverse Touchpoints

The evolution of human-computer interfaces is infused with a strong reductive myth of the average human being. This myth endures even though it is widely acknowledged that people are incredibly diverse and largely unpredictable. As teams work together to build solutions that will reach large audiences, they can make broad assumptions about the human bodies and human circumstances they are creating for. For example, teams often presume that the majority of their audience will have levels of eyesight, physical mobility, or financial resources similar to their own. Such simplifying assumptions might work well for solutions that are used by one person interacting with one computer, in one constant environment, to complete one or two focused tasks. However, the complexity of AI solutions demands that we carefully reconsider these common assumptions and expand the ways we account for the prevalence of ambiguity.

Designers use many techniques to envision the people who will interact with their solutions, from detailed personas to massive databases of customer feedback. These normalizing techniques were heavily influenced by a nineteenth-century Belgian astronomer and mathematician named Adolphe Quetelet.[4] Quetelet used mathematical methods to make sense of uncertainty in human society. He measured human beings and amassed that data into statistical models, from height-weight ratios to rates of growth, across thousands of people in Belgium and the surrounding areas in Europe. He plotted that data and was astonished to find that it mapped to bell curves, also known as the Gaussian (or normal) distribution.

Invigorated by his discovery, Quetelet started measuring more aspects of human beings, creating physical, mental, behavioral, and moral categories of people. Everywhere he looked, he found bell curves. He became consumed with what he deemed the human ideal, the perfect average measurement across all of those dimensions. Quetelet held that individual people should be measured against that perfect average. From this comparison, he reasoned, one could calculate the innate degree of abnormality for an individual person. Diversity and variations in human beings were treated as degrees of error. His ideas were contagious and enduring, especially in the social sciences. Normal-based methods of diagnosing illness led to advancements in public health. However, eugenics, with its horrific assertions about the superiority of certain abilities, races, and classes of people, also grew from Quetelet’s idea of the perfect average human.

The power of the bell curve still echoes through the design of society, from classrooms to computers. Left-handed students are seated in desks made with the assumption that normal human beings are right-handed. Important features of smartphone applications are placed where the average user, presumably right-handed, is likely to reach for them. The first personal computers were designed for a mythic average human who could dedicate a high degree of visual and cognitive attention to navigating a graphical user interface, to the exclusion of anyone who didn’t match this profile. As greater numbers of people use technology in exponentially diverse ways, in different contexts and environments, greater numbers of people are also experiencing moments of exclusion.

A common misconception is that the center of the curve represents an 80 percent majority of the population and 80 percent of the important product problems to solve.[5] The presumption is that if we design a solution that fits the largest bulk of the curve, the middle average, our solution will work well for the majority of people. This leads many teams to treat the remaining 20 percent as outliers or edge cases, a category of work that’s often deferred or neglected. In fact, edge cases can be a useful starting point for creating better solutions. However, having an edge case implies the existence of a normal, average human. When it comes to the design of technology, what if a normal, average human is simply a myth?
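To make the arithmetic behind this myth concrete, here is a minimal, hypothetical sketch (not drawn from this essay or from Quetelet’s data) that simulates several independent, normally distributed traits. The trait count and the “middle band” threshold are assumptions chosen only for illustration: even if roughly 68 percent of people fall within one standard deviation of the mean on any single trait, almost no one falls in that band on every trait at once.

```python
import random

# Hypothetical simulation: traits, thresholds, and population size are
# illustrative assumptions, not measurements from any real study.
random.seed(0)
NUM_PEOPLE = 100_000
NUM_TRAITS = 10      # e.g., reach, grip strength, visual acuity, hearing, ...
MIDDLE_BAND = 1.0    # "average" here means within +/- 1 standard deviation

people = [
    [random.gauss(0, 1) for _ in range(NUM_TRAITS)]
    for _ in range(NUM_PEOPLE)
]

# Share of people who are "average" on a single trait (about 68 percent).
single_trait = sum(abs(p[0]) <= MIDDLE_BAND for p in people) / NUM_PEOPLE

# Share of people who are "average" on all traits simultaneously.
all_traits = sum(
    all(abs(t) <= MIDDLE_BAND for t in p) for p in people
) / NUM_PEOPLE

print(f"average on one trait:  {single_trait:.1%}")   # roughly 68%
print(f"average on all traits: {all_traits:.2%}")     # roughly 0.68**10, about 2%
```

Under these assumed conditions, the “average human” a product might be designed around describes only a few percent of the simulated population, which is one way to read the claim that the normal, average human is a myth.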

Reductive ways of thinking about people lead to reductive touchpoints in the design of a system. Imagine a playground full of only one kind of swing. This swing requires you to be a certain height with two arms and two legs. The only people who will come to play are people who match this design, because the design welcomes them and no one else. And yet there are many different ways to design an experience of swinging. You could adjust the shape and size of the seat. You could keep a person stationary and swing the environment around them. Participation doesn’t require a particular design, but a particular design can prohibit participation.

The same applies to technology. Each feature created by designers and developers determines who can interact and who is left out. When we create a diversity of ways to interact with a system, more people can access that experience. More importantly, they can participate with each other within that system. This natural interconnection and interplay between elements is important to any flourishing system and any healthy human habitat. Unlike the fixed objects in a playground, the elements of digital environments are far more malleable and responsive, ideal for adaptive systems that interact with multiple human beings at once. How might we build better ways to recognize exclusion and regulate negative feedback as inherent parts of a system?

One simple starting point is to identify the types of activities and experiences that are most important to a human environment, physical or digital. We can identify the range of human abilities—physical, cognitive, and social—that are important when using a system. We can design touchpoints that work well for excluded communities, but also extend access to anyone who experiences a similar kind of exclusion on a temporary or situational basis. The result would be a system that enables diverse kinds of participation.

Systems that Value Interdependence

Social independence is another reductive myth that leads to disconnection. Technologies that emerge from cultures that value independence often optimize solutions for one lone person. Even in solutions that aim to connect individuals, such as transit systems or social media, people can be treated as a collection of individuals, counting the number of unique likes they receive on a post, rather than as a collective unit where the interdependence between individuals is constantly reshaping the nature of that system.

Conceptualizing interdependence and recognizing it in practice can be challenging for anyone who idolizes independence. Interdependence is often conflated with negative notions of human weakness or indulgence, or it is simply dismissed as relevant only to people who are very young or advanced in age, the times in our lives when we depend heavily on other human beings to support us. And yet no society thrives solely on the skills of its hunters and warriors. No society is sustained through only one kind of contribution. All societies thrive when systems of interdependent skills are manifested in economies that include different types of novices and masters. Interdependence is about matching these complementary skills together and balancing mutual contributions in diverse forms of value exchanges.

People with professions that focus on human relationships, such as educators, sociologists, and personal assistants, often develop a mastery of interdependence as a matter of practice. Interdependence is also a necessary practice for many members of marginalized communities, where collective creativity and resourcefulness are matters of survival when confronted with lack of access to social power and resources. Interdependence can be important for people who employ human assistants and assistive technologies. For people with disabilities, working closely with personal assistants can be a vital aspect of daily life.

Because many societies assume all people are socially independent, members of excluded communities often face the greatest physical, cognitive, and social mismatches when interacting with these touchpoints. The myth of social independence limits not only who can participate in the system but also who can contribute to the evolution of that system through design, creating a self-reinforcing loop that omits the people who could have the greatest expertise in how interdependence enables flourishing.

The rise of AI means that more digital agents and algorithms will facilitate everyone’s interactions with society. Transcending the social independence paradigm in an effort to design a ubiquitous interplay with such agents could start with studying the diverse types of value exchanges that exist in communities already cultivating interdependence: the exchange of art for labor or food for childcare. Designing for interdependence changes who can contribute to a society, what they can contribute, and how they make that contribution. If we develop our innate ability to connect with one another as a precious resource and source of social vitality, what kind of AI could we build?

Systems that Create a Sense of Belonging

Finally, disconnection is often perpetuated by the reductive myth of culture fit. When there’s only one fixed path to becoming the maker of a system, that path will determine who makes. Whether we consider early childhood education or corporate hiring practices or the internal processes that teams use to build and communicate, the path to becoming a contributor to AI is narrow. This ensures that the design of AI will be informed by only the select few people who fit and survive the cultural requirements to participate.

One way to revise this myth is by hiring people from excluded communities, especially people with disabilities, to fill positions where they can influence and inform the design of emerging systems. This is a richer form of participatory design, which, as proposed, perhaps doesn’t go far enough in enabling value exchanges that affirm a sense of worth for all. A practice known as inclusive design pushes beyond participation and places a higher value on contribution.

Inclusive design is first designing with, and not just for, excluded communities. Then it involves extending the benefits of solutions to anyone who might experience a similar kind of exclusion on a temporary or situational basis. Inclusive design doesn’t mean designing one thing for all people. It emphasizes designing a diversity of ways to participate so that everyone has a sense of belonging in a place. It starts with challenging the most prevalent mental model of inclusion.

Based on the Latin root, claudere, which means “to shut,” inclusion literally means “to shut in.” This evokes an image of a circular enclosure, with some people contained within the circle and others shut out. This mental model informs how we think about inclusive solutions. Is the goal for the people inside the circle to create openings in the enclosure and magnanimously invite excluded communities to participate with them? Is the goal for outsiders to forcibly break into the circle? Or should we eliminate the circle altogether to intermix freely in a utopian state? Perhaps all are incorrect.

What if, rather than a rigid enclosure, inclusion were a cycle of choices that each designer, developer, educator, or leader is constantly making and remaking as they create solutions for someone other than themselves? In this model, what is ultimately made and released into the world is a by-product of who makes and the assumptions they make about who receives their solutions. This is critical, especially when hundreds, if not thousands, of people are working together to manifest a complex system.

The final features of these objects and experiences give strong indicators of who does and doesn’t belong. Imagine a touchscreen at a subway ticket station that works only for people who can see and touch a screen. Or a job application that can be submitted only over a high-bandwidth internet connection. Or a video game controller that requires two hands to play. Each design choice—the contours and materials, the default language, and the underlying logic of a solution—will quickly let you know whether it is made for you.

This is why participation might not go far enough. Creating flourishing systems will require more than just extending a warm invitation to give input and feedback on potential designs. It will mean entrusting the design of these systems to the most excluded communities.

Moments of technological transition are ideal for introducing inclusive design, and today we need to engineer the models we use to ensure they don’t lead to exclusionary design practices that benefit nonexistent average humans. Without inclusion at the heart of the AI age, we risk amplifying cycles of exclusion on a massive scale. This risk is real. Despite the existence of academic and industry guidelines that encourage catering to universal usability, AI-based exclusions are already manifesting in mass-market products. As Dr. Kate Crawford, cofounder of AI Now, wrote in the New York Times: “Users discovered that Google’s photo app, which applies automatic labels to pictures in digital photo albums, was classifying images of black people as gorillas. Google apologized; it was unintentional. But similar errors have emerged in Nikon’s camera software, which misread images of Asian people as blinking, and in Hewlett-Packard’s web camera software, which had difficulty recognizing people with dark skin tones.”[6] These and many more grievous types of exclusion are detailed in the books Weapons of Math Destruction[7] by Cathy O’Neil and Automating Inequality[8] by Virginia Eubanks. In the future, these types of exclusion won’t just be perpetuated by a few human beings who are training algorithms with slim data sets. Instead, they will be accelerated by self-directed machines that are reproducing at scale the intentions, biases, and preferences of their human creators.
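One hedged way to make such exclusion measurable rather than anecdotal is to break a model’s error rate down by group instead of reporting a single aggregate number. The sketch below is not taken from any of the systems cited above; the records, group names, and labels are invented purely to illustrate the per-group arithmetic.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, true_label, predicted_label).
# In a real audit these would come from a labeled test set; here they are
# invented solely to show the calculation.
records = [
    ("group_a", "face", "face"),
    ("group_a", "face", "face"),
    ("group_a", "face", "not_face"),
    ("group_b", "face", "not_face"),
    ("group_b", "face", "not_face"),
    ("group_b", "face", "face"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, predicted in records:
    totals[group] += 1
    errors[group] += truth != predicted

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```

In this toy data the overall error rate is 50 percent, yet one group experiences failures twice as often as the other: exactly the kind of disparity an aggregate metric can hide.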

The Stories We Tell

Myths are derived from culture, and their retelling perpetuates the shape of that culture for future generations. The culture of technology is rife with mythologies too, and it can be tempting to allow the sparkling narratives of genius and riches to camouflage the much more mundane truth that technology emerges from our collective ability to work together. Technology is truly a reflection of how we relate to one another, demonstrated through the creative and ethical choices we make. When we examine each of our creations, as with any life we birth, our acceptance or rejection of that creation also determines the power it holds over us.

One lone pioneer narrative has captured the imagination of technologists like no other: it is the tale of a restless young college student who works alone by night for years on a secretive project, an inspired invention whose completion alters the course of his life forever. It is the tale of Victor Frankenstein, as told by Mary Shelley in her popular novel Frankenstein, which has endured as a morality tale in modern tech despite being published in 1818.

Frankenstein is everywhere today. A quote from it opens Chris Paine’s AI documentary Do You Trust This Computer?;[9] it headlines an algorithmic research project and film from Columbia University’s Digital Storytelling Lab, Frankenstein AI: A Monster Made by Many;[10] and it is being celebrated in a cross-disciplinary Frankenstein Bicentennial Project sponsored by Arizona State University with support from the National Science Foundation. The MIT Press also released a special edition of the novel that is, as its subtitle notes, “annotated for scientists, engineers, and creators of all kinds.”[11]

However, Frankenstein is not as straightforward as modern audiences might assume, especially if they have only seen the derivative screen versions of the story. The fact that Shelley never names the frightening creature in her original novel has enabled generations of readers to project onto it any number of cultural and moral ambiguities they faced in their time.[12] Critics have examined the creature’s behavior and assumed it is a symbol for the politics of the French Revolution, the slave uprisings in Haiti and the West Indies, the racial and feminist themes of their day, and, much more recently, the dire consequences at stake for makers of modern technology.

Shelley clearly uses science and technology as vehicles to tell her tale, but reading it with an interpretation focused exclusively on the dangers of tech neglects other nuances. As noted in the preface to the MIT edition, “Frankenstein is unequivocally not an anti-science screed, and scientists and engineers should not be afraid of it. The target of Mary’s literary insight is not so much the content of Victor’s science as the way he pursues it.”[13] And the way Victor pursues his science is by disconnecting from everything that is most meaningful to him.

Themes of connection and disconnection are woven throughout the original story and right into the heart of its most pivotal scene, the night when Victor finally animates his creature after two years of obsessive work, to the detriment of his studies, his relationships, and even his own health. In the laboratory there is a spark, a breath from the creature, then suddenly Victor is rocked by a shock of revulsion—and he runs away.[14] When the creature follows him, he rejects it again. And so the creature flees into the world outside. In Shelley’s novel, it is not just the knowing or the creating that begins the slow undoing of Victor and the creature; it is when the opportunity for interconnection is broken in reaction to fear.

In the end, the creature destroys Victor by severing his connections with the people closest to him: first by choice, then by violence. True to the myth of the lone pioneer, everyone else becomes invisible in Victor’s story.

Despite their enduring popularity as myths for modern technologists, we can examine lone pioneer narratives—including the one in Frankenstein—not only through the lens of technology itself but also by the way technology is pursued in these stories. If there are morals from which creators can learn, certainly one is that human lives are full of ambiguous interdependencies, and to deny these connections is the antithesis of flourishing. Disconnection between makers and their creations, and disconnection from each other, is a prevalent practice in technology as a way of reducing uncertainty. We can disrupt this paradigm by challenging the assumptions at its foundation rather than accepting it as absolute truth.

So how will we pursue the making of AI? The technology we create is a by-product of our choices. We are naturally interconnected with our creations and with each other as we create, but no cautionary tale will change our course when millions of isolated makers are inherently disconnected from each other in the ways they invent. To create in siloed disconnection is to deny the truth of our lives, and, in a stunning lack of forethought, to leave untapped the greatest assets of our collective creativity: the powerful adaptive interconnections that can fuel entirely new systems of flourishing. As we reshape our systems to make this human truth more evident, and as we each contribute to the making of something greater than ourselves, we’ll experience new ways to see into the nature of what we’re creating. In turn, we might learn how to create systems, for AI and beyond, that enable the survival and flourishing of the connections that are most precious to us.

Comments
BA Rehl:
  • different from what has advanced much of machine learning intelligence to date: technology corporations, government defense departments, and academic institutions …

This seems to be correct. As far as I can tell it would be more like the US/Soviet space race (although considerably larger). In other words, the scale would be too large for a corporation or university and there are no obvious military applications. The shift seems to be large enough to have a detrimental effect on existing companies like Apple, IBM, Google, and Intel. There are also questions about advertising models.

I think the scale is the hardest thing to grapple with. This would create an entirely new branch of science rather than fitting neatly under computational theory as many assume. It has side-effects in commerce, religion, culture, law, government, corporations, and personal management. It would be highly disruptive in terms of politics in the US because of both the religious implications and the reduction of misinformation. It would be disruptive for both computer hardware and computer software because the architecture and programming methods are so different. These are things that are known today with the theory still incomplete.

Molly McCue:

Excellent thoughts here. I agree that scale is its own animal. It is one thing to create and another thing to make a creation that works for millions—whether or not that was intended in the first place. On one hand, it’s an honor to make something that is used by lots of people. On the other hand, it’s an enormous responsibility, precisely for the reasons you mention: unintended consequences. My friends who work in the field of AI research tell me they are less worried about the silos of information now and more concerned about the veracity and transparency of the data that is being used. Less about how to scale and more about what is being scaled.

Joichi Ito:

I wrote about this in the context of our educational system, which is poorly designed for the 25% or so of Americans who have Asperger’s, ADHD, or dyslexia, for example. https://www.wired.com/story/tyranny-neurotypicals-unschooling-education/

Molly McCue:

Kat goes into more detail about this 20% figure in her book: how the 80/20 rule was conceptualized by economist Vilfredo Pareto and eventually conflated with the bell curve to make the 20% come to represent “edge cases.” What can result are designs made for everyone and no one. Your article is an excellent account of how these assumptions have resulted in education systems that are often overly rigid for students, none of whom are average. Thank you for sharing this!

Joichi Ito:

There is a wonderful Japanese book about the Japanese notion of “amae,” or dependence: https://en.wikipedia.org/wiki/The_Anatomy_of_Dependence

Molly McCue:

Thanks for this reference! Tangentially, another work that explores this notion is titled “Culture Care” by artist and writer Makoto Fujimura. I wish we had been able to weave his book into our essay in a more formal way. He writes about how society’s utilitarian pragmatism too often reaffirms the unquestioned assumption that people are only worthwhile if they are useful. “We are too prone to see a human being or human endeavor as worthwhile only as it is useful to the whole, whether that be a company, family, community, or even a church. The corollary is that individuals who do not meet this standard are “other,” an attitude that results effectively in their exile from the functioning, “normal” world. Those who are disabled, those who are oppressed or weakened, or those who are without a voice are soon regarded as useless, and then as disposable” (Culture Care, 80). He defends generative culture care values that go beyond materialism and then explores how artists lead the way.

Joichi Ito:

Sort of random and orthogonal, but Martin Nowak from the Harvard Program for Evolutionary Dynamics has been doing some interesting work to show that cooperative strategies are essential and more effective than competition in many settings, and that these cooperative strategies are what create the rich complexity that we see in Nature. (Nowak MA and R Highfield (2011). SuperCooperators: Why We Need Each Other to Succeed. Simon & Schuster.) See also: https://news.harvard.edu/gazette/story/2018/07/studying-games-to-understand-the-evolution-of-cooperation/

Molly McCue:

We ended up amending this statement (in our third paragraph) to add more context for the characteristics we list: “Because information is considered proprietary, teams in these environments are often required to remain isolated from each other in the face of enormously ambiguous challenges. This is problematic because if the systems that determine who makes AI are intrinsically disconnected, we cannot expect the outcome from these systems to be anything but the same.” The Nowak reference sounds super interesting and worth looking into — thank you!