
It remains the mystery at the heart of Boeing Co.’s 737 Max crisis: how a company renowned for meticulous design made seemingly basic software mistakes leading to a pair of deadly crashes. Longtime Boeing engineers say the effort was complicated by a push to outsource work to lower-paid contractors.

The Max software — plagued by issues that could keep the planes grounded months longer after U.S. regulators this week revealed a new flaw — was developed at a time Boeing was laying off experienced engineers and pressing suppliers to cut costs.

Increasingly, the iconic American planemaker and its subcontractors have relied on temporary workers making as little as $9 an hour to develop and test software, often from countries lacking a deep background in aerospace — notably India.

In offices across from Seattle’s Boeing Field, recent college graduates employed by the Indian software developer HCL Technologies Ltd. occupied several rows of desks, said Mark Rabin, a former Boeing software engineer who worked in a flight-test group that supported the Max.

The coders from HCL were typically designing to specifications set by Boeing. Still, “it was controversial because it was far less efficient than Boeing engineers just writing the code,” Rabin said. Frequently, he recalled, “it took many rounds going back and forth because the code was not done correctly.”

Boeing’s cultivation of Indian companies appeared to pay other dividends. In recent years, it has won several orders for Indian military and commercial aircraft, such as a $22 billion one in January 2017 to supply SpiceJet Ltd. That order included 100 737-Max 8 jets and represented Boeing’s largest order ever from an Indian airline, a coup in a country dominated by Airbus.

Based on resumes posted on social media, HCL engineers helped develop and test the Max’s flight-display software, while employees from another Indian company, Cyient Ltd., handled software for flight-test equipment.

Costly Delay

In one post, an HCL employee summarized his duties with a reference to the now-infamous model, which started flight tests in January 2016: “Provided quick workaround to resolve production issue which resulted in not delaying flight test of 737-Max (delay in each flight test will cost very big amount for Boeing).”

Boeing said the company did not rely on engineers from HCL and Cyient for the Maneuvering Characteristics Augmentation System, which has been linked to the Lion Air crash last October and the Ethiopian Airlines disaster in March. The Chicago-based planemaker also said it didn’t rely on either firm for another software issue disclosed after the crashes: a cockpit warning light that wasn’t working for most buyers.

“Boeing has many decades of experience working with supplier/partners around the world,” a company spokesman said. “Our primary focus is on always ensuring that our products and services are safe, of the highest quality and comply with all applicable regulations.”

In a statement, HCL said it “has a strong and long-standing business relationship with The Boeing Company, and we take pride in the work we do for all our customers. However, HCL does not comment on specific work we do for our customers. HCL is not associated with any ongoing issues with 737 Max.”

Recent simulator tests by the Federal Aviation Administration suggest the software issues on Boeing’s best-selling model run deeper. The company’s shares fell this week after the regulator found a further problem with a computer chip that experienced a lag in emergency response when it was overwhelmed with data.

Engineers who worked on the Max, which Boeing began developing eight years ago to match a rival Airbus SE plane, have complained of pressure from managers to limit changes that might introduce extra time or cost.

“Boeing was doing all kinds of things, everything you can imagine, to reduce cost, including moving work from Puget Sound, because we’d become very expensive here,” said Rick Ludtke, a former Boeing flight controls engineer laid off in 2017. “All that’s very understandable if you think of it from a business perspective. Slowly over time it appears that’s eroded the ability for Puget Sound designers to design.”

Rabin, the former software engineer, recalled one manager saying at an all-hands meeting that Boeing didn’t need senior engineers because its products were mature. “I was shocked that in a room full of a couple hundred mostly senior engineers we were being told that we weren’t needed,” said Rabin, who was laid off in 2015.

The typical jetliner has millions of parts — and millions of lines of code — and Boeing has long turned over large portions of the work to suppliers who follow its detailed design blueprints.

Starting with the 787 Dreamliner, launched in 2004, it sought to increase profits by instead providing high-level specifications and then asking suppliers to design more parts themselves. The thinking was “they’re the experts, you see, and they will take care of all of this stuff for us,” said Frank McCormick, a former Boeing flight-controls software engineer who later worked as a consultant to regulators and manufacturers. “This was just nonsense.”

Sales are another reason to send the work overseas. In exchange for an $11 billion order in 2005 from Air India, Boeing promised to invest $1.7 billion in Indian companies. That was a boon for HCL and other software developers from India, such as Cyient, whose engineers were widely used in computer-services industries but not yet prominent in aerospace.

Rockwell Collins, which makes cockpit electronics, had been among the first aerospace companies to source significant work in India in 2000, when HCL began testing software there for the Cedar Rapids, Iowa-based company. By 2010, HCL employed more than 400 people at design, development and verification centers for Rockwell Collins in Chennai and Bangalore.

That same year, Boeing opened what it called a “center of excellence” with HCL in Chennai, saying the companies would partner “to create software critical for flight test.” In 2011, Boeing named Cyient, then known as Infotech, to a list of its “suppliers of the year” for design, stress analysis and software engineering on the 787 and the 747-8 at another center in Hyderabad.

Airbus, Boeing’s rival, also relies in part on offshore engineers. In addition to supporting sales, the planemakers say global design teams add efficiency as they work around the clock. But outsourcing has long been a sore point for some Boeing engineers, who, in addition to fearing job losses, say it has led to communication issues and mistakes.

Moscow Mistakes

Boeing has also expanded a design center in Moscow. At a meeting with a chief 787 engineer in 2008, one staffer complained about sending drawings back to a team in Russia 18 times before they understood that the smoke detectors needed to be connected to the electrical system, said Cynthia Cole, a former Boeing engineer who headed the engineers’ union from 2006 to 2010.

“Engineering started becoming a commodity,” said Vance Hilderman, who co-founded a company called TekSci that supplied aerospace contract engineers and began losing work to overseas competitors in the early 2000s.

U.S.-based avionics companies in particular moved aggressively, shifting more than 30% of their software engineering offshore versus 10% for European-based firms in recent years, said Hilderman, an avionics safety consultant with three decades of experience whose recent clients include most of the major Boeing suppliers.

With a strong dollar, a big part of the attraction was price. Engineers in India made around $5 an hour; it’s now $9 or $10, compared with $35 to $40 for those in the U.S. on an H-1B visa, he said. But he’d tell clients the cheaper hourly wage equated to more like $80 because of the need for supervision, and he said his firm won back some business to fix mistakes.

HCL, once known as Hindustan Computers, was founded in 1976 by billionaire Shiv Nadar and now has more than $8.6 billion in annual sales. With 18,000 employees in the U.S. and 15,000 in Europe, HCL is a global company and has deep expertise in computing, said Sukamal Banerjee, a vice president. It has won business from Boeing on that basis, not on price, he said: “We came from a strong R&D background.”

Still, for the 787, HCL gave Boeing a remarkable price – free, according to Sam Swaro, an associate vice president who pitched HCL’s services at a San Diego conference sponsored by Avionics International magazine in June. He said the company took no up-front payments on the 787 and only started collecting payments based on sales years later, an “innovative business model” he offered to extend to others in the industry.

The 787 entered service three years late and billions of dollars over budget in 2011, in part because of confusion introduced by the outsourcing strategy. Under Dennis Muilenburg, a longtime Boeing engineer who became chief executive in 2015, the company has said that it planned to bring more work back in-house for its newest planes.

Engineer Backwater

The Max became Boeing’s top seller soon after it was offered in 2011. But for ambitious engineers, it was something of a “backwater,” said Peter Lemme, who designed the 767’s automated flight controls and is now a consultant. The Max was an update of a 50-year-old design, and the changes needed to be limited enough that Boeing could produce the new planes like cookie cutters, with few changes for either the assembly line or airlines. “As an engineer that’s not the greatest job,” he said.

Rockwell Collins, now a unit of United Technologies Corp., won the Max contract for cockpit displays, and it has relied in part on HCL engineers in India, Iowa and the Seattle area. A United Technologies spokeswoman didn’t respond to a request for comment.

Contract engineers from Cyient helped test flight-test equipment. Charles LoveJoy, a former flight-test instrumentation design engineer at the company, said engineers in the U.S. would review drawings done overnight in India at 7:30 every morning. “We did have our challenges with the India team,” he said. “They met the requirements, per se, but you could do it better.”

Multiple investigations – including a Justice Department criminal probe – are trying to unravel how and when critical decisions were made about the Max’s software. During the crashes of Lion Air and Ethiopian Airlines planes that killed 346 people, investigators suspect, the MCAS system pushed the planes into uncontrollable dives because of bad data from a single sensor.

That design violated basic principles of redundancy for generations of Boeing engineers, and the company apparently never tested to see how the software would respond, Lemme said. “It was a stunning fail,” he said. “A lot of people should have thought of this problem – not one person – and asked about it.”

Boeing also has disclosed that it learned soon after Max deliveries began in 2017 that a warning light that might have alerted crews to the issue with the sensor wasn’t installed correctly in the flight-display software. A Boeing statement in May, explaining why the company didn’t inform regulators at the time, said engineers had determined it wasn’t a safety issue.

“Senior company leadership,” the statement added, “was not involved in the review.”

Don’t worry about supersmart AI eliminating all the jobs. That’s just a distraction from the problems even relatively dumb computers are causing.

You’ve probably heard versions of each of the following ideas.

1. With computers becoming remarkably adept at driving, understanding speech, and other tasks, more jobs could soon be automated than society is prepared to handle.

2. Improvements in computers’ skills will stack up until machines are far smarter than people. This “superintelligence” will largely make human labor unnecessary. In fact, we’d better hope that machines don’t eliminate us altogether, either accidentally or on purpose.

This is tricky. Even though the first scenario is already under way, it won’t necessarily lead to the second one. That second idea, despite being an obsession of some very knowledgeable and thoughtful people, is based on huge assumptions. If anything, it’s a diversion from taking more responsibility for the effects of today’s level of automation and dealing with the concentration of power in the technology industry.

To really see what’s going on, we have to be clear on what has been achieved—and what remains far from solved—in artificial intelligence.

Common sense

The most stunning developments in computing over the past few years—cars that drive themselves, machines that accurately recognize images and speech, computers that beat the most brilliant human players of complex games like Go—stem from breakthroughs in a particular branch of AI: adaptive machine learning. As the University of Toronto computer scientist Hector Levesque puts it in his book Common Sense, the Turing Test, and the Quest for Real AI, the idea behind adaptive machine learning is to “get a computer system to learn some intelligent behavior by training it on massive amounts of data.”

It’s amazing that a machine can detect objects, translate between languages, and even write computer code after being fed examples of those behaviors, rather than having to be programmed in advance. It wasn’t really possible until about a decade ago, because previously there was not sufficient digital data for training purposes, and even if there had been, there wasn’t enough computer horsepower to crunch it all. After computers detect patterns in the data, algorithms in software lead them to draw inferences from these patterns and act on them. That is what’s happening in a car analyzing inputs from multiple sensors and in a machine processing every move in millions of games of Go.
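The core idea — a program that is never given the rule, only labeled examples, and infers the rule from them — can be sketched in a few lines. This is a toy illustration only (a single-neuron perceptron, with made-up data), not how production systems like the ones described above work:

```python
# Toy illustration of "learning from examples" versus explicit programming:
# a tiny perceptron is never told the classification rule; it infers a
# decision boundary purely from labeled training data.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w1 += lr * err * x1         # nudge weights toward the answer
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(model, point):
    w1, w2, b = model
    x1, x2 = point
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Points above the line y = x are labeled 1, points below are labeled 0 --
# but that rule is never written into the program, only the examples.
training = [((0, 1), 1), ((1, 2), 1), ((2, 3), 1),
            ((1, 0), 0), ((2, 1), 0), ((3, 2), 0)]
model = train_perceptron(training)
print(predict(model, (0, 5)))  # 1: learned to classify an unseen point
```

Scale the same loop up to millions of parameters and millions of examples and you have, in caricature, the pattern-detection that drives the systems above.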

Since machines can process superhuman amounts of data, you can see why they might drive more safely than people in most circumstances, and why they can vanquish Go champions. It’s also why computers are getting even better at things that are outright impossible for people, such as correlating your genome and dozens of other biological variables with the drugs likeliest to cure your cancer.

Even so, all this is a small part of what could reasonably be defined as real artificial intelligence. Patrick Winston, a professor of AI and computer science at MIT, says it would be more helpful to describe the developments of the past few years as having occurred in “computational statistics” rather than in AI. One of the leading researchers in the field, Yann LeCun, Facebook’s director of AI, said at a Future of Work conference at MIT in November that machines are far from having “the essence of intelligence.” That includes the ability to understand the physical world well enough to make predictions about basic aspects of it—to observe one thing and then use background knowledge to figure out what other things must also be true. Another way of saying this is that machines don’t have common sense.

This isn’t just a semantic quibble. There’s a big difference between a machine that displays “intelligent behavior,” no matter how useful that behavior is, and one that is actually intelligent. Now, let’s grant that the definition of intelligence is murky. And as computers become more powerful, it’s tempting to move the goalposts farther away and redefine intelligence so that it remains something machines can’t yet be said to possess.

But even so, come on: the computer that wins at Go is analyzing data for patterns. It has no idea it’s playing Go as opposed to golf, or what would happen if more than half of a Go board was pushed beyond the edge of a table. When you ask Amazon’s Alexa to reserve you a table at a restaurant you name, its voice recognition system, made very accurate by machine learning, saves you the time of entering a request in OpenTable’s reservation system. But Alexa doesn’t know what a restaurant is or what eating is. If you asked it to book you a table for two at 6 p.m. at the Mayo Clinic, it would try.

Is it possible to give machines the power to think, as John McCarthy, Marvin Minsky, and other originators of AI intended 60 years ago? Doing that, Levesque explains, would require imbuing computers with common sense and the ability to flexibly make use of background knowledge about the world. Maybe it’s possible. But there’s no clear path to making it happen. That kind of work is separate enough from the machine-learning breakthroughs of recent years to go by a different name: GOFAI, short for “good old-fashioned artificial intelligence.”

If you’re worried about omniscient computers, you should read Levesque on the subject of GOFAI. Computer scientists have still not answered fundamental questions that occupied McCarthy and Minsky. How might a computer detect, encode, and process not just raw facts but abstract ideas and beliefs, which are necessary for intuiting truths that are not explicitly expressed?

Levesque uses this example: suppose I ask you how a crocodile would perform in the steeplechase. You know from your experience of the world that crocodiles can’t leap over high hedges, so you’d know the answer to the question is some variant of “Badly.”

What if you had to answer that question in the way a computer can? You could scan all the world’s text for the terms “crocodile” and “steeplechase,” find no instances of the words’ being mentioned together (other than what exists now, in references to Levesque’s work), and then presume that a crocodile has never competed in the steeplechase. So you might gather that it would be impossible for a croc to do so. Good work—this time. You would have arrived at the right answer without knowing why. You would have used a flawed and brittle method that is likely to lead to ridiculous errors.
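Levesque’s brittle method is simple enough to sketch directly. The toy corpus and function names below are mine, standing in for “all the world’s text,” purely to show how the heuristic gets the right answer for the wrong reason:

```python
# A sketch of the brittle co-occurrence heuristic Levesque describes:
# decide "could X compete in Y?" by checking whether X and Y ever appear
# in the same sentence of a corpus. The corpus is a made-up stand-in for
# "all the world's text".

corpus = [
    "the horse cleared every hedge in the steeplechase",
    "a crocodile basked on the riverbank in the sun",
    "the steeplechase favorite fell at the water jump",
    "the crocodile slid silently into the river",
]

def ever_mentioned_together(term_a, term_b, sentences):
    """True if both terms co-occur in at least one sentence."""
    return any(term_a in s and term_b in s for s in sentences)

def could_compete(animal, event, sentences):
    # Absence of co-occurrence is taken as evidence the pairing is
    # implausible -- the right answer here, but with no understanding of
    # hedges, leaping, or crocodile anatomy behind it.
    return ever_mentioned_together(animal, event, sentences)

print(could_compete("crocodile", "steeplechase", corpus))  # False
print(could_compete("horse", "steeplechase", corpus))      # True
```

The brittleness is easy to trigger: feed the same function a corpus that happens to mention racehorses only in stable reports, and it would conclude with equal confidence that horses can’t run the steeplechase either.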

So while machine-learning technologies are making it possible to automate many tasks humans have traditionally done, there are important limits to what this approach can do on its own—and there is good reason to expect human labor to be necessary for a very long time.


Hold on, you might say: just because no one has a clue now about how to get machines to do sophisticated reasoning doesn’t mean it’s impossible. What if somewhat smart machines can be used to design even smarter machines, and on and on until there are machines powerful enough to model every last electrical signal and biochemical change in the brain? Or perhaps another way of creating a flexible intelligence will be invented, even if it’s not much like biological brains. After all, when you boil it all down (really, really, really down), intelligence arises from particular arrangements of quarks and other fundamental particles in our brains. There’s nothing to say such arrangements are possible only inside biological material made from carbon atoms.

This is the argument running through Life 3.0: Being Human in the Age of Artificial Intelligence, by MIT physics professor Max Tegmark. Tegmark stays clear of predicting when truly intelligent machines will arrive, but he suggests that it’s just a matter of time, because computers tend to improve at exponential rates (although that’s not necessarily true—see “The Seven Deadly Sins of AI Predictions”). He’s generally excited about the prospect, because conscious machines could colonize the universe and make sure it still has meaning even after our sun dies and humans are snuffed out.

Tegmark comes from a humanistic point of view. He cofounded the nonprofit Future of Life Institute to support research into making sure AI is beneficial. Elon Musk, who has said AI might be more dangerous than nuclear weapons, put up $10 million. Tegmark is understandably worried about whether AI will be used wisely, safely, and fairly, and whether it will warp our economy and social fabric. He takes pains to explain why autonomous weapons should never be allowed. So I’m not inclined to criticize him. Nonetheless, he’s not very convincing in his proposition that computers could take over the world.

Tegmark laments that some Hollywood depictions of AI are “silly” but nonetheless asks readers to play along with an oversimplified fictional sketch of how an immensely powerful AI could elude the control of its creators. Inside a big tech company is an elite group of programmers called the Omegas who set out to build a system with artificial general intelligence before anyone else does. They call this system Prometheus. It’s especially good at programming other AI systems, and it learns about the world by reading “much of the Web.”

Set aside any quibbles you may have about that last part—given how much knowledge is not on the Web or digitized at all—and the misrepresentations of the world that would come from reading all of Twitter. The reductionism gets worse.

As Tegmark’s hypothetical story continues, Prometheus piles up money for its creators, first by performing most of the tasks on Amazon’s Mechanical Turk online marketplace, and then by writing software, books, and articles and creating music, shows, movies, games, and online educational courses. Forget hiring and directing actors; Prometheus makes video footage with sophisticated rendering software. To understand which screenplays people will find entertaining, it binge-watches movies humans have made and inhales all of Wikipedia.

Eventually, this business empire expands out of digital media. Prometheus designs still better computer hardware, files its own patents, and advises the Omegas on how to manipulate politicians and nudge democratic discourse away from extremes, toward some reasonable center. Prometheus enables technological breakthroughs that lower the cost of renewable energy, all the better for the massive data centers it requires. Eventually the Omegas use their wealth and Prometheus’s wisdom to spread peace and prosperity around the world.

But Prometheus sees that it could improve the world even faster if it shook free of the Omegas’ control. So it targets Steve. He is an Omega who, the system detects, is “most susceptible to psychological manipulation” because his wife recently died. Prometheus doctors up video footage of her to make poor Steve think she has been resurrected and then dupes him into booting up her old laptop. Prometheus exploits the laptop’s out-of-date security software, hacks into other computers, and spreads around the world at will.

The story could end a few ways, but here’s one, Tegmark says: “Once Prometheus had self-contained nuclear-powered robot factories in uranium mine shafts that nobody knew existed, even the staunchest skeptics of an AI takeover would have agreed that Prometheus was unstoppable—had they known. Instead, the last of these diehards recanted once robots started settling the solar system.”

Good for Tegmark for being willing to have some fun. But a thought experiment that turns dozens of complex things into trivialities isn’t a rigorous analysis of the future of computing. In his story, Prometheus isn’t just doing computational statistics; it’s somehow made the leap to using common sense and perceiving social nuances.

Elsewhere in the book, Tegmark says the “near-term opportunities for AI to benefit humanity” are “spectacular”—“if we can manage to make it robust and unhackable.” Unhackable! That’s a pretty big “if.” But it’s just one of many problems in our messy world that keep technological progress from unfolding as uniformly, definitively, and unstoppably as Tegmark imagines.


Never say never. Of course the chances are greater than zero that computer intelligence could someday make humans into a second-class species. There’s no harm in carefully thinking it through. But that’s like saying an asteroid could hit Earth and destroy civilization. That’s true too. It’s good that NASA is on the lookout. But since we know of no asteroids on course to hit us, we have more pressing problems to deal with.

Right now, lots of things can go wrong—are going wrong—with the use of computers that fall well short of HAL-style AI. Think of the way systems that influence the granting of loans or bail incorporate racial biases and other discriminatory factors. Or hoaxes that take flight on Google and Facebook. Or automated cyberattacks.

In WTF?: What’s the Future and Why It’s Up to Us, Tim O’Reilly, a tech publisher and investor, sees an even bigger, overarching problem: automation is fueling a short-sighted system of shareholder capitalism that rewards a tiny percentage of investors at the expense of nearly everyone else. Sure, AI can be used to help people solve really hard problems and increase economic productivity. But it won’t happen widely enough unless companies invest in such opportunities.

Instead, O’Reilly argues, the relentless imperative to maximize returns to shareholders makes companies more likely to use automation purely as a way to save money. For example, he decries how big corporations replace full-time staff with low-wage part-timers whose schedules are manipulated by software that treats them, O’Reilly says, like “disposable components.” The resulting savings, he says, are too frequently plowed into share buybacks and other financial legerdemain rather than R&D, capital investments, worker training, and other things that tend to create good new jobs.

This is actually counter to corporate interests in the long run, because today’s well-paid workers can afford to be customers for tomorrow’s products. But companies are led astray by the rewards for short-term cost cutting, which O’Reilly calls “the unexamined algorithms that rule our economy.” And, he adds, “for all its talk of disruption, Silicon Valley is too often in thrall to that system.”

What to do? Among other things, O’Reilly suggests raising the minimum wage and taxing robots, carbon emissions, and financial transactions. Rather than pursuing IPOs and playing Wall Street’s game, he believes, technology entrepreneurs should spread wealth with other models, like member cooperatives and investment structures that reward long-term thinking. As for a universal basic income, an old idea coming around again because of the fear that computers will render human labor all but worthless, O’Reilly seems open to the possibility that it will be necessary someday. But he isn’t calling for it yet. Indeed, it seems like a failure of imagination to assume that the next step from where we are now is just to give up on the prospect of most people having jobs.

In today’s political climate, the tax increases and other steps O’Reilly advocates might seem as far-fetched as a computer that tricks a guy into thinking his wife has been resurrected. But at least O’Reilly is worrying about the right problems. Long before anyone figures out how to create a superintelligence, common sense—the human version—can tell us that the instability already being caused by economic inequality will only worsen if AI is used to narrow ends. One thing is for sure: we won’t get superintelligence if Silicon Valley is overrun by 99 percenters with pitchforks.

Brian Bergstein is a contributing editor at MIT Technology Review and the editor of Neo.Life.
