Removing Humans from the AI Loop — Should We Panic?

By Sidney Perkowitz | February 18, 2016


The Technological Singularity by Murray Shanahan
Machines of Loving Grace by John Markoff

IF YOU THINK the main existential threat facing humanity is climate change or global food shortages, think again. A number of eminent scientists and technologists believe a bigger threat is the rise of powerful artificial intelligences (AI). They argue that these intelligences will dominate or replace humanity. “We are summoning the demon,” Elon Musk, founder of Tesla Motors and SpaceX, recently said. “We should be very careful about artificial intelligence. If I were to guess, like, what our biggest existential threat is, it’s probably that.” Bill Gates shares these concerns, and Stephen Hawking put it apocalyptically when he told the BBC, “the development of full artificial intelligence could spell the end of the human race.” 


Others profoundly disagree. Eric Horvitz, who directs Microsoft’s Redmond Research Lab — heavily involved in AI — thinks losing control of the technology “isn’t going to happen.” According to him, “we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”


Of course, countless science fiction works have portrayed imagined machine beings, such as HAL in Stanley Kubrick’s 2001. The classic film Colossus: The Forbin Project (1970) portrays an AI running amok and ruling humanity. The conceit has obviously become a popular generator of fictional plots. But Musk and the others are talking about the real world, our world. The pressing question becomes: Should we panic?


Or should we just accept defeat and hope our machine overlords won’t be too brutal? Or, in a more hopeful mood, look to a golden age mediated by kindly superintelligences? Or, in a more indifferent one, file all these comments under “techno overhype” and go about our business?


¤


Several recent books offer answers of a sort by examining the rise of the machine mind. Murray Shanahan, author of The Technological Singularity, is a professor at Imperial College London, where he conducts research on AI and robotics. Steeped since childhood in science fiction, he sees the value of the genre in presenting novel ideas — in the manner, for instance, of last year’s robot film Ex Machina, for which he was a scientific advisor. His new book explores scenarios about the future of AI in somewhat similar fashion.


AI, he explains, can lead to a “technological singularity,” a critical moment for humanity popularized by the futurist Ray Kurzweil, among others, who predicted it would arrive by the mid-21st century. The first person to call this event a “singularity” was the distinguished 20th-century mathematician John von Neumann, who thought breakneck technological progress would take us to “some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”


This may read like science fiction, but Shanahan points out that the idea is potentially meaningful for AI because AI can produce an unpredictable feedback loop: “When the thing being engineered is intelligence itself, the very thing doing the engineering, it can set to work improving itself. Before long, according to the singularity hypothesis, the ordinary human is removed from the loop.”


Shanahan’s tour of AI begins with the famous Turing test, developed from a seminal paper in 1950 by the British mathematician and World War II codebreaker Alan Turing. He predicted that machines would one day think well enough that a human interlocutor could not distinguish between a person and a machine. The “Turing test” criterion has yet to be met, but Shanahan suggests it’s only a matter of time; in fact, he sketches out exactly how to build AIs possessing this and other “general intelligence” abilities.


One route to AI, “whole brain emulation,” Shanahan explains, depends on the proposition that “human behavior is determined by physical processes in the brain.” There are “no causal mysteries, no missing links, in the (immensely complicated) chain of causes and effects that leads from what we see, hear, and touch to what we do and say.” In a human brain, the chain is built within 80 billion connected neurons, each taking nerve impulses as input and producing other impulses as output, which in turn activate other neurons. Shanahan’s position is that we can build a brain by replicating those neurons with digital electronic elements in silicon chips. Some of us might object that what goes on in a human brain is more than what we see externally in a person’s behavior. After all, we absolutely do not understand how and why neurons firing in the brain produce our individual internal realities: our sense of self or “consciousness.” But setting aside that pesky issue, it is scientifically valid to propose that intelligence as manifested by behavior can be replicated by copying the brain behind the behavior.
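As a purely illustrative aside (mine, not Shanahan’s), the “neurons as input/output units” picture can be sketched in a few lines of Python. The threshold-and-fire rule and every number below are assumptions chosen for simplicity, far cruder than any serious whole-brain-emulation proposal:

```python
# Toy sketch of neurons as digital input/output units: each neuron accumulates
# incoming impulses and, once a threshold is crossed, fires impulses downstream.
# The model and all values are illustrative assumptions, not from either book.

class Neuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold   # input needed before this neuron fires
        self.charge = 0.0            # input accumulated so far
        self.targets = []            # (downstream neuron, connection weight)

    def connect(self, other, weight=1.0):
        self.targets.append((other, weight))

    def receive(self, impulse):
        self.charge += impulse

    def step(self):
        """Fire if enough input has arrived, passing impulses to downstream neurons."""
        if self.charge >= self.threshold:
            for target, weight in self.targets:
                target.receive(weight)
            self.charge = 0.0
            return True
        return False


# A three-neuron chain: stimulate A and watch the impulse propagate to B and C.
a, b, c = Neuron(), Neuron(), Neuron()
a.connect(b)
b.connect(c)
a.receive(1.0)
for name, neuron in [("A", a), ("B", b), ("C", c)]:
    print(name, "fired" if neuron.step() else "stayed quiet")
```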


Shanahan argues that the obstacles to building such a brain are technological, not conceptual. A whole human brain is more than we can yet copy, but a brain a thousand times smaller is within reach: existing digital technology could simulate the roughly 70 million neurons in a mouse brain. If we can also map those neurons, then, according to Shanahan, it is only a matter of time before we obtain a complete blueprint for an artificial mouse brain. Once that brain is built, Shanahan believes it would “kick-start progress toward human-level AI.” We’d need to simulate billions of neurons, of course, and then qualitatively “improve” the mouse brain with refinements like modules for language, but Shanahan thinks we can do both, through better technology for handling billions of digital elements and our rapidly advancing understanding of human cognition. To be sure, he recognizes that this argument relies on unspecified future breakthroughs.


But if we do manage to construct human-level AIs, Shanahan believes they would “almost inevitably” produce a next stage — namely, superintelligence — in part because an AI has big advantages over its biological counterpart. With no need to eat and sleep, it can operate nonstop; and, with its impulses transmitted electronically in nanoseconds rather than electrochemically in milliseconds, it can operate ultra-rapidly. Add the ability to expand and reproduce itself in silicon, and you have the seed of a scarily potent superintelligence.


Naturally, this raises fears of artificial masterminds generating a disruptive singularity. According to Shanahan, such fears are valid because we do not know how superintelligences would behave: “whether they will be friendly or hostile […] predictable or inscrutable […] whether conscious, capable of empathy or suffering.” This will depend on how they are constructed and the “reward function” that motivates them. Shanahan concedes that the chances of AIs turning monstrous are slim, but, because the stakes are so high, he believes we must consider the possibility.


¤


The singularity also appears in journalist John Markoff’s Machines of Loving Grace (the title comes from a Richard Brautigan poem), but only as a small part of a larger narrative about AI. Markoff has written about technology, science, and computing for The New York Times since 1988, covering everything from IBM-style mainframes to today’s breakthroughs. From his base in San Francisco, he is well connected to Silicon Valley, and Machines of Loving Grace draws on his intimate knowledge of the research and researchers that form the AI enterprise. He begins with early AI work in the 1960s, which did not yield immediate success despite overly optimistic predictions; the same story repeated in the 1980s. Signs of progress finally appeared around the turn of the millennium, in products with rudimentary intelligence like the Roomba, a vacuum cleaner that navigated itself around a house to suck up dirt, and Sony’s Aibo, a mechanically cute robot dog that also navigated autonomously and responded to voice commands. Though not as smart as a two-year-old or even a real dog, these devices possessed a sliver of general intelligence insofar as they could adapt to their environments in real time.


Early AI pioneers had differing notions of the relationship between intelligent machines and people. The Stanford computer scientist John McCarthy, who had coined the phrase “artificial intelligence,” believed he could artificially emulate all human abilities (and do so within a decade). In contrast, Douglas Engelbart, a visionary engineer who had invented the computer mouse, worked on intelligent machines that would enhance human abilities to address the world’s problems — an approach he called IA, “intelligence augmentation.” In other words, as Markoff puts it: “One researcher attempted to replace human beings with intelligent machines […] the other aimed to extend human capabilities.” “Their work,” he therefore argues, “defined both a dichotomy and a paradox.” The paradox is that “the same technologies that extend the intellectual power of humans can displace them as well.”


In the last decade, Markoff reports, AI research has produced commercial products that display both human extension and human displacement. One of them is speech recognition, which you encounter whenever you call your bank to get an account balance, or ask a question of Siri or Alexa, the personal assistants from Apple and Amazon, respectively. This will be important in satisfying the Turing test, and Siri and Alexa are examples of IA helping people manage their lives. On the other hand, Google’s self-driving car, which can more or less safely navigate complex environments, eliminates the human driver.


These are not full human-level AIs — nor potential rogue superintelligences. They are merely steps in that direction.


Even if a singularity never actually happens, AI is already having serious social and economic effects. Markoff points out that robots have been taking over industrial jobs on auto assembly lines and elsewhere for decades. Now, with practicable AI, “workplace automation has started to strike the white-collar workforce with the same ferocity that it transformed the factory floor.” Professionals such as doctors and airline pilots are not immune either. 


But the option of IA, enhancement rather than replacement, makes it less likely that digital intelligences will dominate. Faith in pure AI does not come easily; when Markoff rides in a self-driving auto at 60 miles per hour, he finds it nerve-racking to “trust the car.” People would likely get over this fear, but Markoff also notes that some surprisingly intricate situations can arise. At a four-way stop sign, drivers typically glance at each other to make sure each is following the rule “first in, first out.” With self-driving cars, separate AIs would have to coordinate their actions, adding a hugely complicating layer of intercommunication technology to the process. Maybe the better answer is to keep people in the driver’s seat, supported by IA in the form of smart sensors and software that make it easier and safer to drive.
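A back-of-the-envelope sketch (again mine, not Markoff’s) shows what even the simplest version of that coordination might look like: each car broadcasts its arrival time and all defer to the earliest. Real vehicle-to-vehicle negotiation would involve sensing, trust, and failure handling far beyond this:

```python
# Illustrative "first in, first out" arbitration among self-driving cars at a
# four-way stop, assuming each car shares its arrival time over a common clock.

from dataclasses import dataclass

@dataclass
class Car:
    name: str
    arrival_time: float  # seconds on a shared clock

def next_to_go(waiting):
    """The car that reached the intersection earliest proceeds first."""
    return min(waiting, key=lambda car: car.arrival_time)

waiting = [Car("northbound", 12.4), Car("eastbound", 11.9), Car("southbound", 13.1)]
while waiting:
    car = next_to_go(waiting)
    print(f"{car.name} proceeds")
    waiting.remove(car)
```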


In other applications, replacing people with synthetic versions might seem inhumane. With rising numbers of the elderly in the United States and Europe and a shortage of caregivers, some observers propose using robots instead. But would anyone want to be tended by machines? They might look human and display intelligence along with seeming compassion and “loving grace,” but could they feel the “real” emotions that people want from truly involved caregivers? Calling the prospect “disturbing,” Markoff suggests that we instead use IA to extend our ability to provide medical care, companionship, and better quality of life to the ill and elderly in human, person-to-person ways.


¤


Taken together, the two books provide an overview of AI. They raise more questions than they answer, but that is to be expected. Both authors explain technical material lucidly with relatable examples. Their coverage sometimes overlaps, but their books are different. Shanahan’s book is a compact (272 pages in a small format) science-based summary of the background and state of the AI art, with enough detail for the reader to grasp what is feasible, now and maybe later. At 400 pages, Markoff’s book has less scientific detail but adds a rich story about the roots of AI, the people behind it, and its place in our daily world.


But back to the original question: Will AI lead to either an existential threat or an earthly paradise? Should we panic? While Markoff mentions the AI singularity, he is really interested in the less shattering effects AI has already had. Shanahan tells us how superintelligence might develop, but gives little reason to think this will happen in our lifetime. 


For now, we are in charge of our machines. Shanahan tells us “we must decide what to do with the technology”; Markoff reminds us that the discussion of AI vs. IA is really about the “kind of world we will create.” 


If we end up in Hell rather than Heaven, this time it will be our own fault. Regardless, there’s no need to panic quite yet.


¤


Sidney Perkowitz is the author of Digital People and other books and articles about science and technology. His latest book, in progress, is Frankenstein 2018. http://sidneyperkowitz.net, @physp.
