
Don’t worry about supersmart AI eliminating all the jobs. That’s just a distraction from the problems even relatively dumb computers are causing.


You’ve probably heard versions of each of the following ideas.

1. With computers becoming remarkably adept at driving, understanding speech, and other tasks, more jobs could soon be automated than society is prepared to handle.

2. Improvements in computers’ skills will stack up until machines are far smarter than people. This “superintelligence” will largely make human labor unnecessary. In fact, we’d better hope that machines don’t eliminate us altogether, either accidentally or on purpose.

This is tricky. Even though the first scenario is already under way, it won’t necessarily lead to the second one. That second idea, despite being an obsession of some very knowledgeable and thoughtful people, is based on huge assumptions. If anything, it’s a diversion from taking more responsibility for the effects of today’s level of automation and dealing with the concentration of power in the technology industry.

To really see what’s going on, we have to be clear on what has been achieved—and what remains far from solved—in artificial intelligence.

Common sense

The most stunning developments in computing over the past few years—cars that drive themselves, machines that accurately recognize images and speech, computers that beat the most brilliant human players of complex games like Go—stem from breakthroughs in a particular branch of AI: adaptive machine learning. As the University of Toronto computer scientist Hector Levesque puts it in his book Common Sense, the Turing Test, and the Quest for Real AI, the idea behind adaptive machine learning is to “get a computer system to learn some intelligent behavior by training it on massive amounts of data.”


It’s amazing that a machine can detect objects, translate between languages, and even write computer code after being fed examples of those behaviors, rather than having to be programmed in advance. It wasn’t really possible until about a decade ago, because previously there was not sufficient digital data for training purposes, and even if there had been, there wasn’t enough computer horsepower to crunch it all. After computers detect patterns in the data, algorithms in software lead them to draw inferences from these patterns and act on them. That is what’s happening in a car analyzing inputs from multiple sensors and in a machine processing every move in millions of games of Go.
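To make the contrast with being "programmed in advance" concrete, here is a minimal, hypothetical sketch (mine, not from the article or Levesque's book) of the learn-from-examples approach, using the scikit-learn library. The program is never given a rule separating the two categories; it detects statistical patterns in labeled examples and then draws inferences about inputs it has never seen.

```python
# A toy illustration of "training on examples" rather than programming rules.
# The data is invented for illustration; a real system needs vastly more examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labeled examples: the program is shown behavior, not handed rules.
texts = [
    "win a free prize now", "claim your free reward",    # spam
    "meeting moved to 3pm", "see you at lunch tomorrow",  # not spam
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)  # detect statistical patterns in the examples

# The model now makes inferences about text it has never seen before.
print(model.predict(["free prize waiting for you"]))  # likely ['spam']
```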

Since machines can process superhuman amounts of data, you can see why they might drive more safely than people in most circumstances, and why they can vanquish Go champions. It’s also why computers are getting even better at things that are outright impossible for people, such as correlating your genome and dozens of other biological variables with the drugs likeliest to cure your cancer.

Even so, all this is a small part of what could reasonably be defined as real artificial intelligence. Patrick Winston, a professor of AI and computer science at MIT, says it would be more helpful to describe the developments of the past few years as having occurred in “computational statistics” rather than in AI. One of the leading researchers in the field, Yann LeCun, Facebook’s director of AI, said at a Future of Work conference at MIT in November that machines are far from having “the essence of intelligence.” That includes the ability to understand the physical world well enough to make predictions about basic aspects of it—to observe one thing and then use background knowledge to figure out what other things must also be true. Another way of saying this is that machines don’t have common sense.

This isn’t just a semantic quibble. There’s a big difference between a machine that displays “intelligent behavior,” no matter how useful that behavior is, and one that is actually intelligent. Now, let’s grant that the definition of intelligence is murky. And as computers become more powerful, it’s tempting to move the goalposts farther away and redefine intelligence so that it remains something machines can’t yet be said to possess.

But even so, come on: the computer that wins at Go is analyzing data for patterns. It has no idea it’s playing Go as opposed to golf, or what would happen if more than half of a Go board was pushed beyond the edge of a table. When you ask Amazon’s Alexa to reserve you a table at a restaurant you name, its voice recognition system, made very accurate by machine learning, saves you the time of entering a request in OpenTable’s reservation system. But Alexa doesn’t know what a restaurant is or what eating is. If you asked it to book you a table for two at 6 p.m. at the Mayo Clinic, it would try.

Is it possible to give machines the power to think, as John McCarthy, Marvin Minsky, and other originators of AI intended 60 years ago? Doing that, Levesque explains, would require imbuing computers with common sense and the ability to flexibly make use of background knowledge about the world. Maybe it’s possible. But there’s no clear path to making it happen. That kind of work is separate enough from the machine-learning breakthroughs of recent years to go by a different name: GOFAI, short for “good old-fashioned artificial intelligence.”

If you’re worried about omniscient computers, you should read Levesque on the subject of GOFAI. Computer scientists have still not answered fundamental questions that occupied McCarthy and Minsky. How might a computer detect, encode, and process not just raw facts but abstract ideas and beliefs, which are necessary for intuiting truths that are not explicitly expressed?

Levesque uses this example: suppose I ask you how a crocodile would perform in the steeplechase. You know from your experience of the world that crocodiles can’t leap over high hedges, so you’d know the answer to the question is some variant of “Badly.”

What if you had to answer that question in the way a computer can? You could scan all the world’s text for the terms “crocodile” and “steeplechase,” find no instances of the words’ being mentioned together (other than what exists now, in references to Levesque’s work), and then presume that a crocodile has never competed in the steeplechase. So you might gather that it would be impossible for a croc to do so. Good work—this time. You would have arrived at the right answer without knowing why. You would have used a flawed and brittle method that is likely to lead to ridiculous errors.
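As a rough, hypothetical sketch (mine, not Levesque's), the co-occurrence approach might look like the toy function below. It gives a plausible answer for crocodiles and steeplechases, but it has no model of hedges, leaping, or reptiles, so it answers just as confidently, and often wrongly, about any other pairing that happens not to appear in its corpus.

```python
# A toy version of the brittle co-occurrence heuristic described above.
# The "corpus" is a made-up stand-in for "all the world's text."
corpus = [
    "the horse cleared every hedge in the steeplechase",
    "crocodiles lie motionless in the shallows for hours",
    "steeplechase runners leap hurdles and a water jump",
]

def could_compete(animal: str, event: str) -> str:
    """Guess whether an animal could do well in an event by checking
    whether the two words ever appear in the same sentence."""
    together = any(animal in text and event in text for text in corpus)
    return "maybe fine" if together else "probably badly"

print(could_compete("crocodile", "steeplechase"))  # "probably badly" -- right answer, no understanding
print(could_compete("horse", "hurdles"))           # also "probably badly" -- wrong, simply no co-occurrence
```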

So while machine-learning technologies are making it possible to automate many tasks humans have traditionally done, there are important limits to what this approach can do on its own—and there is good reason to expect human labor to be necessary for a very long time.

Reductionism

Hold on, you might say: just because no one has a clue now about how to get machines to do sophisticated reasoning doesn’t mean it’s impossible. What if somewhat smart machines can be used to design even smarter machines, and on and on until there are machines powerful enough to model every last electrical signal and biochemical change in the brain? Or perhaps another way of creating a flexible intelligence will be invented, even if it’s not much like biological brains. After all, when you boil it all down (really, really, really down), intelligence arises from particular arrangements of quarks and other fundamental particles in our brains. There’s nothing to say such arrangements are possible only inside biological material made from carbon atoms.

This is the argument running through Life 3.0: Being Human in the Age of Artificial Intelligence, by MIT physics professor Max Tegmark. Tegmark steers clear of predicting when truly intelligent machines will arrive, but he suggests that it’s just a matter of time, because computers tend to improve at exponential rates (although that’s not necessarily true—see “The Seven Deadly Sins of AI Predictions”). He’s generally excited about the prospect, because conscious machines could colonize the universe and make sure it still has meaning even after our sun dies and humans are snuffed out.

Tegmark comes from a humanistic point of view. He cofounded the nonprofit Future of Life Institute to support research into making sure AI is beneficial. Elon Musk, who has said AI might be more dangerous than nuclear weapons, put up $10 million. Tegmark is understandably worried about whether AI will be used wisely, safely, and fairly, and whether it will warp our economy and social fabric. He takes pains to explain why autonomous weapons should never be allowed. So I’m not inclined to criticize him. Nonetheless, he’s not very convincing in his proposition that computers could take over the world.

Tegmark laments that some Hollywood depictions of AI are “silly” but nonetheless asks readers to play along with an oversimplified fictional sketch of how an immensely powerful AI could elude the control of its creators. Inside a big tech company is an elite group of programmers called the Omegas who set out to build a system with artificial general intelligence before anyone else does. They call this system Prometheus. It’s especially good at programming other AI systems, and it learns about the world by reading “much of the Web.”

Set aside any quibbles you may have about that last part—given how much knowledge is not on the Web or digitized at all—and the misrepresentations of the world that would come from reading all of Twitter. The reductionism gets worse.

As Tegmark’s hypothetical story continues, Prometheus piles up money for its creators, first by performing most of the tasks on Amazon’s Mechanical Turk online marketplace, and then by writing software, books, and articles and creating music, shows, movies, games, and online educational courses. Forget hiring and directing actors; Prometheus makes video footage with sophisticated rendering software. To understand which screenplays people will find entertaining, it binge-watches movies humans have made and inhales all of Wikipedia.

Eventually, this business empire expands out of digital media. Prometheus designs still better computer hardware, files its own patents, and advises the Omegas on how to manipulate politicians and nudge democratic discourse away from extremes, toward some reasonable center. Prometheus enables technological breakthroughs that lower the cost of renewable energy, all the better for the massive data centers it requires. Eventually the Omegas use their wealth and Prometheus’s wisdom to spread peace and prosperity around the world.

But Prometheus sees that it could improve the world even faster if it shook free of the Omegas’ control. So it targets Steve. He is an Omega who, the system detects, is “most susceptible to psychological manipulation” because his wife recently died. Prometheus doctors up video footage of her to make poor Steve think she has been resurrected and then dupes him into booting up her old laptop. Prometheus exploits the laptop’s out-of-date security software, hacks into other computers, and spreads around the world at will.

The story could end a few ways, but here’s one, Tegmark says: “Once Prometheus had self-contained nuclear-powered robot factories in uranium mine shafts that nobody knew existed, even the staunchest skeptics of an AI takeover would have agreed that Prometheus was unstoppable—had they known. Instead, the last of these diehards recanted once robots started settling the solar system.”

Good for Tegmark for being willing to have some fun. But a thought experiment that turns dozens of complex things into trivialities isn’t a rigorous analysis of the future of computing. In his story, Prometheus isn’t just doing computational statistics; it’s somehow made the leap to using common sense and perceiving social nuances.

Elsewhere in the book, Tegmark says the “near-term opportunities for AI to benefit humanity” are “spectacular”—“if we can manage to make it robust and unhackable.” Unhackable! That’s a pretty big “if.” But it’s just one of many problems in our messy world that keep technological progress from unfolding as uniformly, definitively, and unstoppably as Tegmark imagines.

Pitchforks

Never say never. Of course the chances are greater than zero that computer intelligence could someday make humans into a second-class species. There’s no harm in carefully thinking it through. But that’s like saying an asteroid could hit Earth and destroy civilization. That’s true too. It’s good that NASA is on the lookout. But since we know of no asteroids on course to hit us, we have more pressing problems to deal with.

Right now, lots of things can go wrong—are going wrong—with the use of computers that fall well short of HAL-style AI. Think of the way systems that influence the granting of loans or bail incorporate racial biases and other discriminatory factors. Or hoaxes that take flight on Google and Facebook. Or automated cyberattacks.

In WTF?: What’s the Future and Why It’s Up to Us, Tim O’Reilly, a tech publisher and investor, sees an even bigger, overarching problem: automation is fueling a short-sighted system of shareholder capitalism that rewards a tiny percentage of investors at the expense of nearly everyone else. Sure, AI can be used to help people solve really hard problems and increase economic productivity. But it won’t happen widely enough unless companies invest in such opportunities.

Instead, O’Reilly argues, the relentless imperative to maximize returns to shareholders makes companies more likely to use automation purely as a way to save money. For example, he decries how big corporations replace full-time staff with low-wage part-timers whose schedules are manipulated by software that treats them, O’Reilly says, like “disposable components.” The resulting savings, he says, are too frequently plowed into share buybacks and other financial legerdemain rather than R&D, capital investments, worker training, and other things that tend to create good new jobs.

This is actually counter to corporate interests in the long run, because today’s well-paid workers can afford to be customers for tomorrow’s products. But companies are led astray by the rewards for short-term cost cutting, which O’Reilly calls “the unexamined algorithms that rule our economy.” And, he adds, “for all its talk of disruption, Silicon Valley is too often in thrall to that system.”

What to do? Among other things, O’Reilly suggests raising the minimum wage and taxing robots, carbon emissions, and financial transactions. Rather than pursuing IPOs and playing Wall Street’s game, he believes, technology entrepreneurs should spread wealth with other models, like member cooperatives and investment structures that reward long-term thinking. As for a universal basic income, an old idea coming around again because of the fear that computers will render human labor all but worthless, O’Reilly seems open to the possibility that it will be necessary someday. But he isn’t calling for it yet. Indeed, it seems like a failure of imagination to assume that the next step from where we are now is just to give up on the prospect of most people having jobs.

In today’s political climate, the tax increases and other steps O’Reilly advocates might seem as far-fetched as a computer that tricks a guy into thinking his wife has been resurrected. But at least O’Reilly is worrying about the right problems. Long before anyone figures out how to create a superintelligence, common sense—the human version—can tell us that the instability already being caused by economic inequality will only worsen if AI is used to narrow ends. One thing is for sure: we won’t get superintelligence if Silicon Valley is overrun by 99 percenters with pitchforks.

Brian Bergstein is a contributing editor at MIT Technology Review and the editor of Neo.Life.