“Where the majority of AI funding and research is to accelerate statistical machine learning, trying to make machines and robots ‘smarter,’ we are interested in the augmentation and machine assistance of the complex ecosystem that emerges from the network of minds and our society.”
-From Extended Intelligence by Joi Ito and Kevin Slavin
It’s easy to get caught up in the idea that Artificial Intelligence (AI) will be one of two things: our destroyer or our savior. It’s time to move beyond this dualistic narrative. The “either/or” comparisons create fear or unrealistic expectations, neither of which pragmatically moves society forward.
Part of shifting this narrative is evolving a new framing for the term “Artificial Intelligence.” Here’s why:
- It’s associated largely with fear or hyperbole. If there were a drinking game for every time an AI article featured a picture of The Terminator or a robot and human hand touching, all of society would have alcohol poisoning. While I sympathize with the need to get eyeballs, the visuals at the top of articles featuring autonomous and intelligent technologies have to evolve to avoid biasing readers.
- It’s vague. Definitions of the technologies comprising AI vary. Some include lists of various disciplines in the space, such as machine learning or cognitive computing. Some focus on how and when AGI (Artificial General Intelligence) will shift to become ASI (Artificial Superintelligence). While various definitions may have merit, their lack of alignment causes confusion.
- It’s reductionist. In “Resisting Reduction: Designing our Complex Future with Machines,” an update to the essay on Extended Intelligence quoted above, Joi Ito describes the risks of assuming that every problem can be solved through AI, computation or technology in and of itself. Cory Doctorow of Boing Boing sums up Ito’s essay nicely in his article, “Resisting Reduction Manifesto: against the Singularity, for a ‘culture of flourishing’”:
Joi Ito’s Resisting Reduction manifesto rejects the idea of reducing the world to a series of computable relationships that will eventually be overtaken by our ability to manipulate them with computers (“the Singularity”) and instead to view the world as full of irreducible complexities and “to design systems that participate as responsible, aware and robust elements of even more complex systems.” Ito says that Singularity thinking is at the root of unrestrained pursuit of profit that tramples human flourishing, and he says that we should focus on “vigor and health rather than scale or power” in measuring our systems and deciding which ones to preserve and which ones to change.
Extended Intelligence (EI)
There are three key issues framing why we need to move beyond the term Artificial Intelligence:
- How we frame the Human Experience
- How Human Data is defined, shared and controlled
- What underlying Economic Values drive the design and creation of Autonomous and Intelligent Systems (A/IS)
How We Frame the Human Experience
There is a belief among some that once we are able to copy the human brain, we will in essence be able to copy a person. This would enable a person’s consciousness, or self, to be ported to the cloud or embodied within an android or algorithmic form. This belief, often referred to as computationalism and embraced (in broad strokes) by Transhumanists or Singularitarians, holds that because factors such as emotion or faith are projections of our brains, those elements will also be replicated when a brain is copied. So while these elements may manifest differently in their new forms (silicon, algorithm) they are not lost, just augmented or evolved.
This belief is problematic for a few reasons. First, as of today, we haven’t been able to copy a brain to the point where we can analyze whether it embodies the original consciousness, so the assumption that a copied entity would represent the same identity as the first hasn’t been tested. Second, many cultural and faith-based traditions presuppose that our emotions or spiritual experiences have an externality to them. Meaning, even if elements of our emotional lives or beliefs can be replicated via our brain’s physiology, we cannot deny the external forces embodied within these traditions. In short, by reductively defining the brain as the sole reservoir of our humanity, we deny and disrespect a huge portion of the planet that does not share the beliefs of the relatively small population of influential thinkers framing their views as irrefutable empirical fact.
How Human Data Is Defined, Shared and Controlled
Currently, human data powers a great deal of the technologies within the realm of AI and the Internet of Things. In general, a majority of this data is available to organizations wishing to track human action via the public sphere (e.g. facial recognition), the digital sphere (e.g. algorithms) or the virtual sphere (e.g. Virtual Reality). This current model of tracking is based on the idea of consent, where individuals sign terms and conditions allowing for the sharing of their data in order to use a product or service.
While this model of “consent” may make sense legally, it has ceased to be effective in terms of symmetry. From a business or trust perspective, assuming someone will enter into a relationship essentially by force (“my way or the highway”) is what Doc Searls refers to as a Contract of Adhesion. While organizations are legally required to provide an understanding of the contractual relationship regarding a person’s data in exchange for their product or service, as of today there is no clear way for individuals to articulate their own personal terms and conditions for this transaction in return. The message that “you get access to this great product or service for free” is misleading. If people don’t understand or receive the full value of the insights and assets related to their personal data by design, it’s time to change the mode of these transactions for the algorithmic age.
Either we embrace evolved Identity and Data models like the ones proposed by the MyData declaration, putting people at the center of their data, or we stop pretending the current Internet Economy provides transparent and symmetrical value for users. It doesn’t. Either we provide all global citizens personal data clouds linked to trusted identity sources, where they can create their own terms and conditions, or every digital and virtual transaction perpetuates confusion by design.
What Economic Values Drive Autonomous and Intelligent Systems (A/IS)
Today, the world runs on Gross Domestic Product (GDP). It’s the primary metric of value, and the world often assumes it frames holistic societal prosperity: if GDP goes up, everyone is happy (literally and economically). But this isn’t true.
GDP, created in the early part of the 20th century, captures the dimension of exchange, whether it is meaningful or not, not the usefulness of such exchange. This is true even in extreme cases. For instance, activities to cope with a catastrophic environmental accident count positively, but the degradation of the environment does not count. Additionally, GDP prioritizes ever-increasing productivity and exponential growth as the values that should drive society. While being productive or making profits make sense, it’s the notion of growing exponentially that’s an issue for humans at this stage of our evolution. Our capacity for cognitive growth is limited – we can’t compete with machines for many of the functions we’re building them to do. And when societal value is based on exponential productivity or growth, we are inherently valuing machines over humans to achieve these goals.
This is problematic when it comes to issues like employment and automation. We can hope that saying things like, “we need to reskill the population” or “we need more STEM (Science, Technology, Engineering and Math) education” will influence future outcomes regarding human employment, but it’s untenable not to address the fact that the core priorities driving GDP mean academic and corporate funding for AI today is based on exponential growth. And it’s unfair to place the unrealistic burden on organizations of saying on the one hand, “make sure to keep more jobs for humans!” while on the other hand saying, “make your quarterly numbers for exponential shareholder growth!” Those two ideals are incompatible. Humans and machines will only work well together if we prioritize human well-being in the form of triple bottom line economics (planet, people and profit) moving forward. GDP and its single-bottom-line economics are the real reason AI might destroy us, not fictional killer robots.
| Key Issues | Artificial Intelligence | Extended Intelligence |
| --- | --- | --- |
| Framing the Human Experience | Reductionist: brains are computers; once copied, so is the person | Extended: recognizes human traits beyond cognition |
| Human Data | Volume and external tracking focus | People have access to and control of their data |
| Economics | Exponential growth and productivity focus (GDP driven) | Triple bottom line – prioritizes human well-being |
A Flourishing Future
Here’s how Joi Ito ends his essay, “Resisting Reduction: Designing our Complex Future with Machines”:
Developing a sensibility and a culture of flourishing, and embracing a diverse array of measures of “success” depend less on the accumulation of power and resources and more on diversity and the richness of experience. This is the paradigm shift that we need. This will provide us with a wealth of technological and cultural patterns to draw from to create a highly adaptable society. This diversity also allows the elements of the system to feed each other without the exploitation and extraction ethos created by a monoculture with a single currency.
“Flourishing” is often used synonymously with the term “well-being” in economic circles focusing on human prosperity. Rather than focusing simply on one’s immediate mood, flourishing implies a long-term and holistic state where your needs are met and you’re able to pursue a purposeful life. It also speaks to our need to extend our definitions of value regarding our humanness, data and economics as we evolve.
It’s also why I maintain that the term “Artificial Intelligence” needs to be retired. Beyond the fear and confusion the term brings, it reinforces reductionist ideas that humans are inherently inefficient, biased machines in desperate need of an upgrade. The term “Extended Intelligence” means that machine systems are built around people to enhance human presence, not to exclude it. It recognizes the reality that our well-being depends on factors like the environment and mental health, and it implies the need to put people at the center of their data so they can access and control the key elements of their identity that represent their subjective truth.
AI is not our savior or destroyer. We are. It’s time to revise our thinking about autonomous and intelligent technologies and move our way towards a positive and purposeful future with the concept of extended intelligence. Otherwise, artificial ideals will keep us from realizing a purposeful human future and the full potential of Autonomous and Intelligent Systems (A/IS).
John C. Havens is the Executive Director of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. The Initiative’s paper, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS) will be released in early December 2017 as part of their “Ethics in Action” campaign encouraging an Extended Intelligence, human-centric vision for A/IS.