While Alec Ross was working as Senior Advisor for Innovation to the Secretary of State, he traveled to forty-one countries, exploring the latest advances coming out of every continent. From startup hubs in Kenya to R&D labs in South Korea, Ross has seen what the future holds.
In The Industries of the Future, Ross provides a “lucid and informed guide” (Financial Times) to the changes coming in the next ten years. He examines the fields that will most shape our economic future, including robotics and artificial intelligence, cybercrime and cybersecurity, the commercialization of genomics, the next step for big data, and the impact of digital technology on money and markets. In each of these realms, Ross addresses the toughest questions: How will we have to adapt to the changing nature of work? Is the prospect of cyberwar sparking the next arms race? How can the world’s rising nations hope to match Silicon Valley with their own innovation hotspots? And what can today’s parents do to prepare their children for tomorrow?
Ross blends storytelling and economic analysis to show how sweeping global trends are affecting the ways we live. Sharing insights from global leaders—from the founders of Google and Twitter to defense experts like David Petraeus—Ross reveals the technologies and industries that will drive the next stage of globalization. The Industries of the Future is “a riveting and mind-bending book” (New York Journal of Books), a “must read” (Wendy Kopp, Founder of Teach for America) regardless of “whether you follow these fields closely or you still think of Honda as a car rather than a robotics company” (Forbes).
Publisher: Simon & Schuster
Product dimensions: 5.50(w) x 8.30(h) x 0.90(d)
The Industries of the Future
Welcome your new job takers and caregivers. The coming decade will see societies transform as humans learn to live alongside robots.
Japan is home to the longest-living citizens on earth and has the highest share of elderly citizens of any country—and it’s not getting any younger. Japan’s current life expectancy is 80 years for men and 87 years for women and is expected to rise to 84 and 91, respectively, over the next 45 years. Between 2010 and 2025, the number of Japanese citizens 65 years or older is expected to increase by 7 million. Today, 25 percent of Japan’s population is age 65 or older. By 2020, this share is projected to reach 29 percent, and by 2050, 39 percent.
All of those long-living elderly will need caretakers. Yet Japan’s low birthrates mean that what once was a staple of Japanese family life—taking care of one’s grandparents and great-grandparents—will no longer be a viable model at the scale the nation needs. There will not be enough grandchildren.
With Japan’s persistently strict immigration policies curtailing the number of workers in the country, there will not be enough humans around to do the job at all. Japan’s Ministry of Health, Labor, and Welfare predicts a need for 4 million eldercare nurses by 2025. Right now there are only 1.49 million in the country. Japan allows only 50,000 work visas annually, and unless something drastic changes, the math does not work.
This labor shortage will hit service-industry jobs like eldercare with ferocity and will be exacerbated because caretakers have a high job turnover rate due to low pay and high rates of work-related injury from lifting patients.
Enter the robots.
Our future caretakers are being developed in a Japanese factory right now. Just as Japanese companies reinvented cars in the 1970s and consumer electronics in the 1980s, they are now reinventing the family. The robots depicted in the movies and cartoons of the 1960s and 1970s will become the reality of the 2020s.
Rival Japanese companies Toyota and Honda are leveraging their expertise in mechanical engineering to invent the next generation of robots. Toyota built a nursing aide named Robina—modeled after Rosie, the cartoon robot nanny and housekeeper in The Jetsons—as part of their Partner Robot Family, a line of robots to take care of the world’s growing geriatric population. Robina is a “female” robot, 60 kilograms in weight and 1.2 meters tall, that can communicate using words and gestures. She has wide-set eyes, a moptop hairdo, and even a flowing white metallic skirt.
Robina’s brother, Humanoid, serves as a multipurpose home assistant. He can do the dishes, take care of your parents when they’re sick, and even provide impromptu entertainment: one model plays the trumpet, another the violin. Both versions are doppelgangers for the famous Star Wars C-3PO robot, although in gleaming white instead of gold.
In response, Honda has created ASIMO (the Advanced Step in Innovative Mobility robot), a fully functional humanoid that looks like a four-foot-tall astronaut stuck on Earth. ASIMO is sophisticated enough to interpret human emotions, movements, and conversation. Equipped with cameras that function as eyes, ASIMO can follow voice commands, shake hands, and answer questions with a nod or by voice. He even bows to greet others, demonstrating good Japanese manners. For an elderly patient, ASIMO can fulfill a range of tasks, from helping the patient get out of bed to holding a conversation.
Honda is also focusing much of its research and commercialization on robotic limbs and assistance devices that are robotic but not freestanding robots. Its Walking Assist device wraps around the legs and back of people with weakened leg muscles, giving them extra power to move on their own. In the future, expect to see Honda making robotic hands and arms. Its goal is nothing less than helping paraplegics walk and the very frail rediscover the speed and power of their youth.
Numerous other Japanese companies are pushing the big players like Toyota and Honda. Tokai Rubber Industries, in conjunction with the Japanese research institute RIKEN, has unveiled the Robot for Interactive Body Assistance (RIBA), which can pick up and set down humans weighing up to 175 pounds and is designed for patient comfort: it resembles a giant smiling bear and is covered in a soft skin to guard against injury or pain. Similarly, Japan’s National Institute of Advanced Industrial Science and Technology (AIST) has created PARO, a robot baby harp seal covered in soft white fur. PARO exhibits many of the same behaviors as a real pet. Designed for those who are too frail to care for a living animal or who live in environments that don’t allow pets, such as nursing homes, it enjoys being held, gets angry when hit, and likes to nap. When President Barack Obama met PARO a few years ago on a tour of Japanese robotics innovations, he instinctively reached out and rubbed its head and back. It looks like a cute stuffed animal, but it costs $6,000 and is classified by the US government as a class 2 medical device.
Japan already leads the world in robotics, operating 310,000 of the 1.4 million industrial robots in existence across the world. It’s turning to eldercare robots in part because it has to and in part because it, uniquely, is in a great position to leverage its advanced industrial technology toward the long assembly line of the human life span. But can robots really take care of humans?
Japan’s private and public sectors certainly think so. In 2013, the Japanese government granted $24.6 million to companies focusing on eldercare robotics. Japan’s prominent Ministry of Economy, Trade, and Industry chose 24 companies in May 2013 to receive subsidies covering one-half to two-thirds of the R&D costs for nursing care robots. Tasks for these robots include helping the elderly move between rooms; keeping tabs on those likely to wander; and providing entertainment through games, singing, and dancing.
Nevertheless, difficult challenges remain. On the technical side, it remains difficult to design robots capable of intimate activities like bathing patients or brushing their teeth. And most Japanese companies that are developing these robots specialize in industrial motors and electronic automation. They didn’t enter the caretaking field with a keen grasp of how to forge an emotional connection, a crucial aspect of eldercare. Even as they improve, some observers—like Sherry Turkle, a professor of the social studies of science and technology at MIT—question whether patients will ever be able to form a true emotional connection with robot caretakers. As Turkle warns, “For the idea of artificial companionship to be our new normal, we have to change ourselves, and in the process we are remaking human values and human connection.” If robot nurses catch on, she explains, they may even create a chasm between younger and older generations. “It’s not just that older people are supposed to be talking,” Turkle argues, referring to the goal of creating robots that can hold conversation, “younger people are supposed to be listening. We are showing very little interest in what our elders have to say. We are building the machines that will literally let their stories fall on deaf ears.”
These technical questions (Can a robot brush a person’s teeth?) and almost-spiritual doubts (Can, and should, emotional connections be made between humans and robots?) are both valid. Yet robot technology and applicability continue to advance in Japan, and answers to these questions will likely arise there in the near future. With too few caretakers, I expect robots to become a regular part of the Japanese family system.
If the aging nation can pull it off, robot caretakers will be a boon for its economy and will soon make the jump to the global economy, with potentially far-reaching consequences.
Much of the rest of the industrialized world is on the verge of a period of advanced aging that will mirror Japan’s own. In Europe, all 28 member states of the European Union have populations that are growing older, and in the decades ahead, the percentage of Europe’s population aged 65 and older will grow from 17 percent to 30 percent. China is already entering a period of advanced aging even as it continues to develop. Although its one-child policy is already being phased out, China is now demographically lopsided. Chinese women have on average 1.4 children, well below the replacement rate of 2.1, resulting in too few young people to provide for the elderly. The notable exception is the United States, where immigration policies partially mitigate the effects of an aging population.
As the populations of developed nations continue to age, they create a big market for those Japanese robots. And caretaking robots, alongside robotic limb technology, may simply be the first in a new wave of complex robots entering our everyday lives. Robots will be the rare technology that reaches the mainstream through elderly users first, spreading down as grandma shows off her next cutting-edge gadget for the kids and grandkids.
The robot landscape will be vastly differentiated by country. Just as wealthier and poorer citizens reside at different technological levels, so do wealthier and poorer countries.
A few countries have already established themselves as leading robot societies. About 70 percent of total robot sales take place in Japan, China, the United States, South Korea, and Germany—known as the “big five” in robotics. Japan, the United States, and Germany dominate the landscape in high-value industrial and medical robots, and South Korea and China are major producers of less expensive consumer-oriented robots. While Japan records the highest number of robot sales, China represents the most rapidly growing market, with sales increasing by 25 percent every year since 2005.
There is quite a gap between the big five and the rest of the world. As both consumers and producers of robots, these countries outpace all others. By way of illustration, the number of industrial robots produced in South Korea, a country of 50 million people, is several times greater than the number produced in South America, Central America, Africa, and India combined, with populations totaling 2.8 billion. Russia is effectively a nonplayer in robotics despite its industrial base. It neither produces nor buys robots to any significant degree, instead maintaining extractive industries (natural gas, oil, iron, nickel) and industrial manufacturing plants that look and function the way they did in the 1970s and 1980s.
The big five’s comparative advantage might even accelerate in the future, for these are the same countries that are most likely to incorporate the next generation of robotics into society, work, and home. They will own the name brands in consumer robots, and they’ll power the software and networks that enable the robotics ecosystem. When I think about this symbiosis, I think about the Internet in the 1990s. It was not just the consumer-facing Internet companies that were born and based in Silicon Valley; it was also the network equipment makers like Cisco Systems and Juniper Networks. Today Cisco and Juniper have a combined 85,000 employees and $154 billion in market value. The same types of back-end systems will exist in the robotics industry. And the big five countries will benefit from being home to the high-paying jobs and wealth accumulation that go with being out ahead of the 191 other countries around the world. They will produce the Ciscos and Junipers of robotics.
Interestingly, less developed countries might be able to leapfrog technologies as they enter the robot landscape. Countries in Africa and Central Asia have been able to go straight to cell phones without building landline telephones, and in the same way they might be able to jump ahead in robotics without having to establish an advanced industrial base.
The African Robotics Network (AFRON) offers a good model. A community of individuals and institutions, AFRON hosts events and projects to boost robotics-related education, research, and industry on the continent. Through initiatives like its 10 Dollar Robot Challenge, AFRON encourages the development of extremely low-cost robotics education. One winner was RoboArm, a project from Obafemi Awolowo University in Nigeria whose armlike structure is made out of plastic and runs on scavenged motors. The ability to generate low-cost innovation based on scarcity of materials is rooted in the concept of frugal innovation, which will be discussed in chapter 6.
As robotics starts to spread, the degree to which countries can succeed in the robot era will depend in part on culture—on how readily people accept robots into their lives. Western and Eastern cultures are highly differentiated in how they view robots. Not only does Japan have an economic need and the technological know-how for robots, but it also has a cultural predisposition. The ancient Shinto religion, practiced by 80 percent of Japanese, includes a belief in animism, which holds that both objects and human beings have spirits. As a result, Japanese culture tends to be more accepting of robot companions as actual companions than is Western culture, which views robots as soulless machines. In a culture where the inanimate can be considered to be just as alive as the animate, robots can be seen as members of society rather than as mere tools or as threats.
In contrast, fears of robotics are deeply seated in Western culture. The threat of humanity creating things we cannot control pervades Western literature, leaving a long history of cautionary tales. Prometheus was condemned to an eternity of punishment for giving fire to humans. When Icarus flew too high, the sun melted his ingenious waxed wings and he fell to his death. In Mary Shelley’s Frankenstein, Dr. Frankenstein’s grotesque creation wreaks havoc and ultimately leads to its creator’s death—and numerous B-movie remakes.
This fear does not pervade Eastern culture to the same extent. The cultural dynamic in Japan is representative of the culture through much of East Asia, enabling the Asian robotics industry to speed ahead, unencumbered by cultural baggage. Investment in robots reflects a cultural comfort with robots, and, in China, departments of automation are well represented and well respected in the academy. There are more than 100 automation departments in Chinese universities, compared with approximately 76 in the United States despite the larger total number of universities in the United States.
In South Korea, teaching robots are seen in a positive light; in Europe, they are viewed negatively. As with eldercare, in Europe robots are seen as machines, whereas in Asia they are viewed as potential companions. In the United States, the question is largely avoided because of an immigration system that facilitates the entry of new, low-cost labor that often ends up in fields that might otherwise turn to service robots. In other parts of the world, attitudes often split the difference. A recent study in the Middle East showed that people would be open to a humanoid household-cleaning robot but not to robots that perform more intimate and influential roles such as teaching. The combination of cultural, demographic, and technological factors means that we will get our first glimpse of a world full of robots in East Asia.
The first wave of labor substitution from automation and robotics came from jobs that were often dangerous, dirty, and dreary and involved little personal interaction, but increasingly, robots are encroaching on jobs in the service sector that require personalized skills. Jobs in the service sector that were largely immune from job loss during the last stage of globalization are now at risk because advances in robotics have accelerated in recent years, due to breakthroughs in the field itself as well as new advancements in information management, computing, and high-end engineering. Tasks once thought the exclusive domain of humans—the types of jobs that require situational awareness, spatial reasoning and dexterity, contextual understanding, and human judgment—are opening up to robots.
Two key developments have dovetailed to make this possible: improvements in modeling belief space and the uplink of robots to the cloud. Belief space refers to a mathematical framework that allows us to model a given environment statistically and develop probabilistic outcomes. It is basically the application of algorithms to make sense of new or messy contexts. For robots, modeling belief space opens the way for greater situational awareness. It has led to breakthroughs in areas like grasping, once a difficult robot task. Until recently belief space was far too complex to sufficiently compute, a task made all the more difficult by the limited sets of robot experience available to analyze. But advances in data analytics (described in chapter 5) have combined with exponentially greater sets of experiential robot data to enable programmers to develop robots that can now intelligently interact with their environment.
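At its core, belief-space reasoning means maintaining a probability distribution and revising it as sensor data arrives. As a toy sketch of the idea (my own illustration, not drawn from this book), here is how a robot might update its belief that a grasp has succeeded from noisy touch-sensor readings using Bayes’ rule; the 90 percent sensor accuracy is an assumed figure:

```python
# Toy belief-space update (illustrative only): the robot tracks
# P(grasp succeeded) and revises it with Bayes' rule after each
# noisy binary sensor reading ("object firmly held" or not).

def update_belief(prior, p_obs_if_true, p_obs_if_false, observed):
    """Return posterior P(grasp succeeded) after one sensor reading."""
    if observed:
        numer = p_obs_if_true * prior
        denom = numer + p_obs_if_false * (1 - prior)
    else:
        numer = (1 - p_obs_if_true) * prior
        denom = numer + (1 - p_obs_if_false) * (1 - prior)
    return numer / denom

belief = 0.5  # start maximally uncertain
# assume the sensor reports "held" correctly 90% of the time
for reading in [True, True, False, True]:
    belief = update_belief(belief, 0.9, 0.1, reading)
print(round(belief, 3))  # 0.988
```

Three positive readings against one negative leave the robot highly confident the grasp succeeded; with larger shared datasets, the same arithmetic scales to far messier contexts.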
The recent exponential growth of robot data is due largely to the development of cloud robotics, a term coined by Google researcher James Kuffner in 2010. Linked to the cloud, robots can access vast troves of data and shared experience to enhance the understanding of their own belief space. Before being hooked up to the cloud, robots had access to very limited data—either their own experience or that of a narrow cluster of robots. They were stand-alone pieces of electronics with capabilities that were limited to the hardware and software inside the unit. But by becoming networked devices, constantly connected to the cloud, robots can now incorporate the experiences of every other robot of their kind, “learning” at an accelerating rate. Imagine the kind of quantum leap that human culture would undertake if we were all suddenly given a direct link to the knowledge and experience of everyone else on the planet—if, when we made a decision, we were drawing from not just our own limited experience and expertise but from that of billions of other people. Big data has enabled this quantum leap for the cognitive development of robots.
Another major development in robotics arrives through materials science, which has allowed robots to be constructed of new materials. Robots no longer have to be cased in the aluminum bodies of armor that characterized C-3PO or R2-D2. Today’s robots can have bodies made of silicone, or even spider silk, that are eerily natural looking. Highly flexible components—such as air muscles (which distribute power through tubes holding highly concentrated pressurized air), electroactive polymers (which change a robot’s size and shape when stimulated by an electric field), and ferrofluids (basically magnetic fluids that facilitate more humanlike movement)—have created robots that you might not even recognize as being artificial, almost like the Arnold Schwarzenegger cyborg in The Terminator. An imitation caterpillar robot designed by researchers at Tufts University to perform tasks as varied as finding land mines and diagnosing diseases is even biodegradable—just like us.
Robots are now also being built both bigger and smaller than ever before. Nanorobots, still in the early phases of development, promise a future in which autonomous machines at the scale of 10⁻⁹ meters (a billionth of a meter, far smaller than a grain of sand) can diagnose and treat human diseases at the cellular level. On the other end of the spectrum, the world’s largest walking robot is a German-made fire-breathing dragon that stands 51 feet long, weighs 11 tons, and is filled with more than 20 gallons of stage blood, built to star in a traditional German festival.
Recent advances will continue. It is not just Japan’s government that is devoting ever-increasing resources to robotics. In the United States, President Obama launched the National Robotics Initiative in 2011 to stimulate development of robots for industrial automation, elder assistance, and military applications. Run by the National Science Foundation, the program has awarded more than $100 million in contracts. France has initiated a similar program, pledging $126.9 million to develop its industry and catch up to Germany. Sweden has similarly earmarked millions to give out to individuals and corporations through innovation awards such as Robotdalen (“robot valley”), launched in 2011.
The private sector is also investing at increasingly higher levels. Google purchased Boston Dynamics, a leading robotics design company with Pentagon contracts, for an untold sum in December 2013. It also bought DeepMind, a London-based artificial intelligence company founded by wunderkind Demis Hassabis. As a kid, Hassabis was the second-highest-ranked chess player in the world under the age of 14, and while he was getting his PhD in cognitive neuroscience, he was acknowledged by Science magazine for making one of the ten most important science breakthroughs of the year after developing a new biological theory for how imagination and memory work in the brain. At DeepMind, Demis and his colleagues effectively created the computer equivalent of hand-eye coordination, something that had never been accomplished before in robotics. In a demo, Demis showed me how he had taught his computers how to play old Atari 2600 video games in the same way that humans play them, based on looking at a screen and adjusting actions through neural processes responding to an opponent’s actions. He’d taught computers how to think in much the way that humans do. Then Google bought DeepMind for half a billion dollars and is applying its expertise in machine learning and systems neuroscience to power the algorithms it is developing as it expands beyond Internet search and further into robotics.
Most corporate research and development in robotics comes from within big companies (like Google, Toyota, and Honda), but venture capital funding in robotics is growing at a steep rate. It more than doubled in just three years, from $160 million in 2011 to $341 million in 2014. In its first year of investment, Grishin Robotics, a $25 million seed investment fund, evaluated more than 600 start-ups before coming to terms with the eight now in its portfolio. Singulariteam, a new Israeli venture capital fund, quickly raised two funds of $100 million each to invest in early-stage robotics and artificial intelligence. The appeal for investors is obvious: the market for consumer robots could hit $390 billion by 2017, and industrial robots should hit $40 billion in 2020.
As the technology continues to improve, there is an ongoing debate about just how radically human life will be transformed by advanced robots and whether robots will ultimately surpass us. One view in the debate is that it’s inevitable robots will surpass us; another is that they can’t possibly compete with us; a third is that man and machine could merge. Within the robotics community, the future of technology is wrapped up in the concept of singularity, the theoretical point in time when artificial intelligence will match or surpass human intelligence. If singularity is achieved, it is unclear what the relationship between robots and humans will become. (In the Terminator series, once singularity is achieved, a self-aware computer system decides to launch a war on humans.) Enthusiasts for the singularity imagine that investments in robotics will do more than strengthen corporate balance sheets; they will radically enhance human well-being, eliminating mundane tasks and replacing diseased or aging parts of our bodies. The technology community is deeply divided about whether singularity is a good thing or a bad thing, with one camp believing it will enhance human experience while another camp, equally large, believes it will unleash a dystopian future in which people become subservient to machines.
But will singularity occur?
Those who believe that singularity will be achieved point to several key factors. First, they argue that Moore’s law, which holds that the amount of computing power we can fit into a chip will double every two years, shows little sign of slowing down. Moore’s law applies to the transistors and technology that control robots as well as those in computers. Add rapid advances in machine learning, data analytics, and cloud robotics, and it’s clear that computing is going to keep rapidly improving. Those who argue for the singularity differ on when it will occur. Mathematician Vernor Vinge predicts that it will occur by 2023; futurist Ray Kurzweil says 2045. But the question looming over singularity is whether there’s a limit on how far our technology can ultimately go.
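The compounding implicit in Moore’s law is easy to make concrete. As a back-of-envelope sketch (my own illustration; the two-year doubling period is the law’s classic formulation), capacity that doubles every two years grows roughly a thousandfold over twenty years:

```python
# Back-of-envelope Moore's law arithmetic (illustrative only):
# doubling every two years compounds to 2**(years/2).

def moores_law_factor(years, doubling_period=2):
    """Growth multiple after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

print(moores_law_factor(20))  # 1024.0 -- ten doublings in twenty years
```

Ten doublings in two decades is why singularity proponents treat the trend line, rather than today’s hardware, as the thing to argue about.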
Those who argue against the possibility of singularity point to several factors. The software advances necessary to reach singularity demand a detailed scientific understanding of the human brain, but our lack of understanding about the basic neural structure of the brain impedes software development. Moreover, while weak artificial intelligence, whereby robots simply specialize in a specific function, is currently advancing exponentially, strong artificial intelligence, whereby robots demonstrate humanlike cognition and intelligence, is advancing only linearly. While inventions like IBM’s Watson (the computer designed by IBM that beat Jeopardy! champions Ken Jennings and Brad Rutter) are exciting, scientists need a better understanding of the brain before these advances progress beyond winning a game show. Watson didn’t actually “think”; it was basically a very comprehensive search engine querying a large database. As robotics expert and UC Berkeley professor Ken Goldberg explains, “Robots are going to become increasingly human. But the gap between humans and robots will remain—it’s so large that it will be with us for the foreseeable future.”
It’s my view that the current moment in the field of robotics is very much like where the world stood with the Internet 20 years ago. We are at the beginning of something: chapter one, page one. Just as it would have been difficult in the days of dial-up modems to imagine an Internet video service like YouTube streaming over 6 billion hours of video every month, it is difficult for us to imagine today that lifelike robots may walk the streets with us, work in the cubicle next to ours, or take our elderly parents for a walk and then help them with dinner. This is not happening today and it will not happen tomorrow, but it will happen during most of our lifetimes. The level of investment in robotics, combined with advances in big data, network technologies, materials science, and artificial intelligence, is setting the foundation for the 2020s to produce breakthroughs in robotics that bring today’s science fiction right into mainstream use.
Innovation in robotics will produce advances in degree—robots doing things faster, safer, or less expensively than humans—and also in kind: they’ll be doing things that would be impossible for humans to do, like allowing a sick, homebound 12-year-old to go to school, or giving those who are deaf and mute the power of speech.
People have been thinking about building driverless cars for almost as long as cars have been around. General Motors introduced the modern concept of the driverless car at the 1939 World’s Fair in New York, conceiving of a radio-guided car that could be developed alongside a modern highway system. Then in 1958, GM developed the first driverless test car, the Firebird, which would connect to a track wired with electrical cable. When hooked up with other cars, the system would let each know how much distance to give the others—not unlike the famous cable cars of San Francisco, which use a similar method to propel themselves and maintain safe distances.
But prior to the 2000s, the driverless car remained little more than a futuristic concept. As Sebastian Thrun, founder of the Google Car Project, explained, “There was no way, before 2000, to make something interesting. The sensors weren’t there, the computers weren’t there, and the mapping wasn’t there. Radar was a device on a hilltop that cost $200 million. It wasn’t something you could buy at RadioShack.” His Google colleague Anthony Levandowski described the shortcomings of the earlier electric models this way: “We don’t have the money to fix potholes. Why would we invest in putting wires in the road?”
Today, however, almost every major car company is researching and building its own version of a driverless car. But the company at the forefront is not a traditional car company at all: it’s Google. For the past six years, the tech giant’s moon shot development lab, Google X, has been working on the driverless Google car. While much of the technology is proprietary and secret, the company has disclosed a few of its most prominent features. Among other technologies, the Google car includes radar, cameras to ensure that cars stay within lanes, and a light detection and ranging system. Infrared, 3D imaging, an advanced GPS system, and wheel sensors are also being incorporated.
But why would Google get into the car-making business in the first place?
The motivations are several, and for many of those involved, the development of a driverless car is deeply personal. As Sebastian Thrun explained in a TED talk, his best friend was killed in a car accident, spurring a personal crusade to innovate the car accident out of existence: “I decided I’d dedicate my life to saving 1 million people every year.”
Google has hired the former deputy director of the National Highway Traffic Safety Administration, Ron Medford, to be its director of safety for self-driving cars. Medford explained that Americans collectively drive approximately 3 trillion miles per year, and more than 30,000 people die in the process. Worldwide, those statistics are enormous; approximately 1.3 million people die every year in car crashes.
Google, of course, also has an interest in allowing consumers to have more time on their hands—quite literally, to have their hands free. The average American spends 18.5 hours a week driving, and Europeans spend about half that. Any time not spent behind the wheel is time you can spend using a Google product.
But will it work?
There is ample reason to think that robodrivers will be safer than we are now. Accidents are caused by the four Ds: distraction, drowsiness, drunkenness, and driver error. The driverless car promises to reduce all of these significantly. Chris Gerdes, a Stanford professor of engineering, cautions that driverless cars won’t fully wipe out human error, but rather will shift it from driver to programmer; that’s in all likelihood a significant step forward, especially if a human driver and the programmer can work together. A similar process has unfolded over the years with airplanes, which are now largely flown on autopilot, with the pilot still stepping in at crucial moments. There remain many gaps to be filled before we can unequivocally say that robodrivers are safer than human drivers. At the top of the list is the software development still to be done to enable robodriving in bad weather and to account for unexpected changes in traffic (e.g., when there’s a detour or a police officer directs traffic). But on the whole, given how rapidly progress has occurred and how well the Google car has been shown to perform in clear weather, it’s likely that at least partial-robotic driving will arrive in the near future.
The feasibility of the Google car depends on a range of technological, legal, safety, and commercial considerations. Will the technology work? Will it actually make the roads safer? Will people trust and purchase it? Will it even be legal?
These are not academic questions. As of 2013, only California, Florida, and Nevada had passed laws permitting autonomous cars on the roads, but these states already represent huge driving cultures and markets. The driverless car has the potential to fundamentally disrupt the modern automotive industry and all of its various branches. As with every other development in robotics, many people will gain—some, like Google’s executives and shareholders, may gain immensely—but it’s inevitable that others will be displaced. Technology companies have already challenged the automotive market. Uber, the mobile app that connects passengers with drivers for hire, has turned the taxi market on its ear. But what happens when that market is challenged by robots? Uber has already built a robotics research lab stuffed with scientists to “kickstart autonomous taxi fleet development” so it can go driverless. At last count, there were 162,037 active drivers in the Uber fleet who would be kickstarted into obsolescence.
In the United States and many other countries, taxi drivers are often immigrants or others hustling their way up the socioeconomic ladder. It’s also a job with tremendous amounts of human interaction. Cab drivers are a great source of information for every new diplomat or lazy journalist. Conversations with a taxi driver can help assess the national mood, determine what the politics are, or just find out what the weather will be. I suppose a robot can tell you all this—probably with more precision. But will we lose the human touch? More to the point, even if passengers end up preferring robot drivers to humans, what happens to the human taxi driver who loses his job? Service industry jobs are at risk in the next wave of innovation as never before.
This isn’t just about taxi drivers; the delivery driver may be replaced by Amazon’s airborne delivery drones or automated delivery trucks. UPS and Google are also testing their own versions of the delivery drone. Two and a half million people in the United States make their living from driving trucks, taxis, or buses, and all of them are vulnerable to displacement by self-driving cars. It’s hard to wrap your head around all the changes this might mean. I met the CEO of a company that develops high-tech access control systems (like the new parking garage system at the airport that tells you how many open spaces are available on each floor) and asked him what worries him about the future. He cited a disruption that I’d never considered before: what driverless cars might mean for parking garages. Would the cars just drive themselves back home and come back when needed? Why have your car sit in a garage that you have to pay for?
The degree to which delivery drones fill the sky or driverless cars fill the streets will eventually be determined not by whether it is feasible technologically and economically—at some point it will be—but by whether humans accept the changes they bring about. Who would you rather trust behind the wheel: a friend, a parent, a person—or a black box that you can’t control? Even though accidents happen every day with cars, would we be willing to accept the same from a software glitch? Judging by how much scrutiny each plane crash receives, probably not. If there were a pile-up on the highway because of a software glitch, there would be calls to take the system offline, even though pile-ups caused by human drivers happen every day. We have grown to accept that driving leads to more than 1 million deaths a year. Would we accept a computer-based system that produces tens of thousands or hundreds of thousands of deaths instead? Probably not. The driverless system will have to prove to be nearly perfect before it scales.
Robots are also beginning to play an important role in the operating room, another place with zero tolerance for error given the life-and-death stakes. In 2013, 1,300 surgical robots were sold for an average cost of $1.5 million each, accounting for 6 percent of professional service robots and 41 percent of the total sales value of industry robots. The number of robotic procedures is increasing by about 30 percent a year, and more than 1 million Americans have already undergone robotic surgery.
The medical applications of robotics are varied. There’s the da Vinci surgical system manufactured by Intuitive Surgical in the United States. It’s a minimally invasive remote robotic system created to assist with complex surgeries such as cardiac valve repair and is used in more than 200,000 surgeries a year. The robot translates a surgeon’s hand movements into more precise “micromovements” of the robot’s tiny instruments. But at a cost of $1.8 million, it’s only available to the wealthiest hospitals and institutions. Then there is the Raven, designed for the US Army, a newer surgical robot that can test out experimental procedures. At $250,000, it’s a much more accessible option than the da Vinci system, and it’s the first surgical robot to use open-source software, which could allow for lower-cost telesurgery systems.
Johnson & Johnson’s SEDASYS system automates the sedation of patients undergoing colonoscopies, easing the over $1 billion cost of sedation each year. The services of anesthesiologists typically increase the price of surgery by $600 to $2,000. SEDASYS, already approved by the Food and Drug Administration and going into hospitals today, would cost only $150 per procedure. It would not eliminate anesthesiologists altogether. Instead, like autopilot, systems like SEDASYS merely aid the doctor, enabling an anesthesiologist to monitor ten procedures taking place simultaneously as opposed to having an anesthesiologist in each operating room.
Beyond aiding in existing procedures, robots will even be able to reach places that human surgeons cannot. Ken Goldberg’s research team is working on treating cancer with robots that could be temporarily inserted into the human body to release radiation. Instead of radiation from an external source, which damages healthy tissue along with the cancer, these robots emit radiation from inside the body directly into cancer cells with pinpoint accuracy. Using 3D printing, a medical engineer can even create a customized implant that can travel through a patient’s body to fit perfectly where it’s needed.
Despite the promise of robot-assisted surgery, it is important not to jump to techno-utopianism. Allegations of unreported injuries from robotic surgery are troublingly common. The Journal for Healthcare Quality has reported 174 injuries and 71 deaths related to da Vinci surgeries. With the pressure on insurance companies and health care providers to lower costs, I worry that there will be market forces pushing robots into the operating room at times when a patient is better served by a human being. Robots can eventually improve outcomes in health care, but it would be a human failing if we rush to Doctor Robot due to financial considerations alone.
Robots are also having an impact in the medical field outside the operating room. Across the globe, 70 million people have severe hearing and speech impairments. There is rarely a medical solution to being deaf or mute, and people with these disabilities often live at high levels of social exclusion. While I was traveling in Ukraine, a group of engineering students in their twenties showed me a shiny black-and-blue robot glove called Enable Talk that uses flex sensors in the fingers to recognize sign language and translate it to text on a smartphone via Bluetooth. This text is in turn converted to speech, allowing a deaf or mute person to “speak” and be heard in real time. With advances like Enable Talk’s robot inserts and robot sensory enhancement, robotics might not just aid medicine; the distinction between human and machine itself could start to become blurred.
We can see this line start to blur at Greenleaf Elementary School in Splendora, Texas, where a 12-year-old boy named Christian was diagnosed with acute lymphoblastic leukemia. Because his immune system was compromised, he could not attend school. Instead, a VGo robot, made by a company in New Hampshire, sits in the front row of class for him. The robot has a network-enabled video camera, allowing Christian to sit in his living room and from his laptop see and hear what is happening in class in real time. He can raise his hand (which VGo does for him), be called on by the teacher, and answer a question that the teacher and whole class can hear through the speakers on the robot. Through his robot, Christian leaves the building for fire drills. He walks the halls and stands in line with the students. And students talk to Christian, the sick, homebound 12-year-old, by talking to his robot.
A French robotics company, Aldebaran, has created another interesting use for robots in the classroom: a less than two-foot-tall humanoid robot called NAO that is serving as a teaching assistant in science and computer science classes in 70 countries. It has also been adapted to serve as a classroom buddy to help students with autism communicate more effectively. At an elementary school in Harlem, the NAO robot sits or stands on students’ desks and helps them with their math work, all while a professor from Columbia University’s Teachers College (who got her PhD from Keio University in Japan) monitors and studies the interactions and pedagogy.
Ten years ago, the advances now entering operating rooms and classrooms would have been nearly impossible to foresee. As researchers, entrepreneurs, and investors think about new applications of robotics, they are no longer considering only tasks that could be done more efficiently by a machine than a human. They are thinking more and more about doing things that humans could never have imagined doing on their own—like Ken Goldberg’s radiation-emitting nanobots or Honda’s Walk Assist robot that enables otherwise wheelchair-bound people to walk.
Another idiosyncratic but vivid example can be seen in South Korea, where fishermen had long been powerless to deal with the negative impact of jellyfish on their businesses. Jellyfish cost the world’s fishing and other maritime industries billions of dollars annually—$300 million in South Korea alone. Then the Urban Robotics Lab at the Korea Advanced Institute of Science and Technology created JEROS—the Jellyfish Elimination Robotic Swarm—a large, autonomous blender that hunts and kills jellyfish at a rate of up to one ton of jellyfish every hour.
While robots are doing certain things that humans could never do, their main use continues to be work that humans have been doing for centuries. The term robot was coined in a 1920 play, Rossum’s Universal Robots, by the Czech science-fiction writer Karel Čapek. But the name betrays deeper historical roots: robot derives from two Czech words, rabota (“obligatory work”) and robotnik (“serf”), and described, in Čapek’s conception, a new class of “artificial people” created to serve humans.
Robots in essence represent the merger of two long-standing trends: the advancement of technology to do our work and the use of a servant class that can provide cheap labor for higher classes of society. In this light, robots are a sign of technological advancement, but they are also an updated version of the servant and slave labor through which, in past centuries, people exploited other human beings.
The next generation of robots will be mass-produced at declining costs that will make them increasingly competitive with even the lowest-wage workers. They will dramatically affect employment patterns as well as broader economic, political, and social trends. An example can be seen with Foxconn, the Taiwanese company that manufactures your iPhone, along with many other gadgets developed by companies like Apple, Microsoft, and Samsung. Its largest factory complex, in the Shenzhen manufacturing zone near Hong Kong, employs half a million workers in 15 separate factories.
Perhaps thinking ahead about both the economics and the sociology of his business, Foxconn’s founder and chairman, Terry Gou, announced a plan in 2011 to purchase 1 million robots over the next three years to supplement the approximately 1 million human workers he employs. Gou has come under fire for his factories’ poor working conditions and labor mistreatment. Many workers live inside the factory itself and work up to twelve hours a day, six days a week. But what happens to Gou’s 1 million human workers when they have 1 million robot coworkers? While the robots are designed to work alongside humans, they’re also designed to keep Gou from having to hire more humans, effectively ending job creation in his factories.
Right now, Gou’s robots are slated to take over routine jobs like painting, welding, and basic assembly. Each of these robots currently costs $25,000, about three times a worker’s average annual salary, although the Taiwanese firm Delta plans to sell a similar version for $10,000. By the end of 2011, Foxconn had 10,000 robots, or one for every 120 workers, in its facilities. By the end of 2012, the number of robots had jumped to 300,000, or one for every four workers. Gou hopes to have the first fully automated plant in operation in the next five to ten years.
Why would Foxconn make such a massive investment in robotics? Some of it may have to do with Gou’s peculiar management style. As he explained in a 2012 New York Times article, “As human beings are also animals, to manage one million animals gives me a headache.” But Gou is also responding to market forces. For the past ten years, Gou was able to amass such a large workforce because labor in China has been so cheap. But wages in China have risen along with its overall economic growth—wages for manufacturing jobs have soared between fivefold and ninefold in the past decade—making it increasingly expensive to maintain a large Chinese labor force.
Boiled down to economic terms, the choice between employing humans versus buying and operating robots involves a trade-off in terms of expenditures. Human labor involves very little “capex,” or capital expenditures—up-front payments for things like buildings, machinery, and equipment—but high “opex,” or operational expenditures, the day-to-day costs such as salary and employee benefits. Robots come with a diametrically opposed cost structure: their up-front capital costs are high, but their operating costs are minor—robots don’t get a salary. As the capex of robots continues to go down, the opex of humans becomes comparatively more expensive and therefore less attractive for employers.
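The capex/opex trade-off boils down to a simple break-even calculation. The sketch below is purely illustrative: the $25,000 robot price echoes the Foxconn figure cited here, while the maintenance cost and worker salary are assumptions chosen for the example, not figures from the text.

```python
# Illustrative break-even comparison between hiring a human worker
# (low capex, high opex) and buying a robot (high capex, low opex).
# All numbers are assumptions for illustration only.

def years_to_break_even(robot_capex, robot_annual_opex, worker_annual_opex):
    """Return the number of years until the robot's cumulative cost
    drops below the worker's, or None if the robot never gets cheaper."""
    if robot_annual_opex >= worker_annual_opex:
        return None  # the robot's running costs alone exceed the worker's
    annual_savings = worker_annual_opex - robot_annual_opex
    # Humans are modeled with negligible capex, so the robot must earn
    # back its full purchase price through annual opex savings.
    return robot_capex / annual_savings

# A $25,000 robot with $1,500/year upkeep vs. a worker earning about
# a third of the robot's price per year (roughly the ratio cited above)
print(years_to_break_even(25_000, 1_500, 8_300))  # ~3.7 years
```

As the robot's purchase price falls or local wages rise, the break-even horizon shrinks, which is exactly the dynamic pushing Foxconn toward automation.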
As the technology continues to advance, robots will kill many jobs. They will also create and preserve others, and they will generate immense value—although, as we have seen time and again, this value won’t be shared evenly. Overall, robots can be a boon, freeing up humans to do more productive things—but only so long as humans create the systems to adapt their workforces, economies, and societies to the inevitable disruption. The dangers to societies that don’t handle these transitions right are clear.
I anticipate that the same kind of protest and labor movements that advocated against free trade agreements in the 1990s will form in the 2020s once robots begin to really make their presence known in the workplace. The more lifelike these robots look, thanks to advances in materials science, the angrier and more fearful the response will be. I got a glimpse of this during violent protests in my adopted hometown of Baltimore in spring 2015. The national and international media portrayed the protests as being about race-based police brutality. Those of us in Baltimore knew it was about more, though. While the triggering event was the death of a 25-year-old African American man in police custody, the protesters themselves consistently rooted their cause and rallying cry of “Black Lives Matter” in more than police brutality. It was about the hopelessness that came from growing up poor and black in a community that had been laid to waste with the loss of Baltimore’s industrial and manufacturing base and then ignored. Black working-class families had effectively been globalized and automated out of jobs. Many barely hold on with low-paying service industry jobs.
In industrialized countries, what we have witnessed in terms of manufacturing job loss is repeating itself across the economy. Now service industry jobs are also at risk—precisely the jobs that were shielded from job loss in the last wave of mechanization. During the recent recession, one in twelve people working in sales in the United States was laid off. Two Oxford University professors who studied more than 700 detailed occupational types have published a study making the case that over half of US jobs could be at risk of computerization in the next two decades. Forty-seven percent of American jobs are at high risk for robot takeover, and another 19 percent face a medium level of risk. Those with jobs that are hard to automate—lawyers, for example—may be safe for now, but those with more easily automated white-collar jobs, like paralegals, are at high risk. In the greatest peril are the 60 percent of the US workforce whose main job function is to aggregate and apply information.
When I was growing up, my mom worked as a paralegal at the Putnam County Courthouse in Winfield, West Virginia. Her job largely consisted of rummaging through enormous 15-pound books looking for specific information on old court cases and real estate closings. The books were so heavy and the stacks so high that my mom used to conscript me and my little brother to help her. Even as an unemployed high school student in the pre-Internet world when few people owned a home computer, I remember thinking that a computer should be able to do this job more efficiently. But my mom said, “If that ever happens, I won’t have a job.” Today my mom’s job is largely computerized. I now think the same thing about my dad, an attorney who’s still working at age 77 with a storefront legal practice just off Main Street in Hurricane, West Virginia. In the next wave of globalization, his job would be at risk as computers develop the ability to work through the more formulaic aspects of legal practice. The role of the lawyer litigating a case in front of judge and jury is not going to be mechanized. But the majority of what most lawyers actually do—developing and reviewing contracts, preparing stacks of paper in legal language to codify the sale of a house or car—these functions will disappear for all but the largest and most complex transactions.
These are just the tip of the iceberg. Think of those taxi drivers who could be replaced by driverless cars. Panasonic created a 24-fingered hairwashing robot that has been tested in Japanese salons. The robot will likely be installed in hospitals and homes as well. It measures the shape and size of the customer’s head and then rinses, shampoos, conditions, and dries the customer’s hair using its self-advertised “advanced scalp care” abilities.
Then there are waiters and waitresses. Working as a waiter has been an integral part of the career profile for millions of people around the world. By way of illustration, 50 percent of American adults have spent time working in a restaurant; 25 percent say it was their first job. More than 2.3 million people are currently employed as waiters or waitresses in the United States. There is potential for robots to replace many of these waitstaff jobs over time. It’s already happening in trial form in restaurants around the world, especially in Asia. The Hajime restaurant in Bangkok uses only robot waiters to take orders, serve customers, and bus tables. Similar restaurants are cropping up in Japan, South Korea, and China. These robots, designed by the Japanese company Motoman, are programmed to recognize an empty plate and can even express emotion and dance to entertain customers. It’s unclear exactly how you tip for good service.
The potential loss of restaurant jobs could mean a lot more than the loss of a paycheck; it could mean the loss of social mobility. Waiting tables is a job often held by those with big dreams but a small bank account. Young people, women, minorities, and those without a college degree disproportionately hold these positions and use them as a leg up in society. Currently youth unemployment in the United States is 12 percent, more than twice the nation’s overall average, and it is far higher in most of the rest of the world. If entry-level restaurant jobs are reduced or eliminated, how much harder will it be to get a first job? How about a second?
There are earlier precedents for these types of job declines. MIT professor Erik Brynjolfsson calls it “the great paradox of our era. Productivity is at record levels, innovation has never been faster, and yet at the same time, we have a falling median income and we have fewer jobs. People are falling behind because technology is advancing so fast and our skills and our organizations aren’t keeping up.” In the previous wave of globalization, bank tellers were largely replaced by ATMs, airline ticket counter workers were replaced by electronic kiosks, and travel agents were replaced by travel websites. The robot era may see an even more extreme blow to the sales sector.
The effect of robots on job loss will be highly differentiated by country. The countries that are best positioned are those that are developing and manufacturing robotics for export, that house the headquarters, the engineers, and the manufacturing facilities. These are nations like South Korea, Japan, and Germany.
Those at the highest risk are countries like China that have relied on cheap labor to build up their manufacturing base. As advances in robotics continue, what has happened to manufacturing jobs in many advanced industrial countries may soon happen to industrializing countries. Even in China, where labor has historically been cheapest, it has become increasingly advantageous economically to start buying robots, as Terry Gou is demonstrating at Foxconn.
How will the Chinese government respond to this development? The Tiananmen uprising was a quarter-century ago, but in the minds of Chinese leaders, it may as well have been yesterday. As China grows, it optimizes first and foremost for stability. Above all else, its leaders don’t want political instability rooted in economic hardship. They don’t want Baltimore-style protests.
The Chinese government is taking a two-pronged approach: focusing on developing employment by investing heavily in the industries of the future while keeping labor costs low by continuing a forced urbanization policy. In 1950, 13 percent of China’s population lived in cities. Today, roughly half the population has been pushed into cities, and the government aims to push that statistic to 70 percent by 2025. This will mean the forced migration of 250 million people from the countryside to city factories in under a decade. Today China has five metropolitan areas with more than 10 million people and 160 with more than 1 million. By comparison, the United States has two metropolitan areas with more than 10 million people and 48 with more than 1 million. The Chinese government continues its forced urbanization program despite the major environmental, political, and administrative obstacles of doing so, because the goal is to keep the cost of labor low. Absent continued movement of people from rural China to the cities, the cost of labor will continue to go up; it’s simple supply and demand. If the cost of labor continues to rise, China will lose its special advantage in the global marketplace. Jobs that previously would have gone there have instead begun to move to even cheaper labor markets like Sri Lanka and Bangladesh.
This solution to the challenge of robotics amounts to little more than a country preparing for the future by doubling down on the past—even when it may no longer suit the current era. It’s a strategy that holds little hope for coping with the competitive markets of the future, as can be seen in West Virginia.
West Virginia’s economy was rooted in the coal mining industry of the 19th and 20th centuries. Scots-Irish immigrants provided cheap labor, and as the cost of these native Appalachians went up, Italian immigrants and then African Americans were brought in to provide lower-cost labor. But as machines grew cheaper and labor more expensive, employers opted for the machines. After all, machines can’t go on strike or get black lung, which killed my great-grandfather, an Italian immigrant who worked in the coal camps. The blue-collar workers who traditionally fueled the economy lost their jobs, and the economy fell apart. The state became older and depopulated. The day I was born in 1971, West Virginia had 2.1 million people. Today it has 1.7 million.
The decline of West Virginia was, in essence, a failure to convert from an economy rooted in the strength of people’s shoulders to one increasingly mechanized and information based. As much coal is being extracted in the hills of West Virginia today as was extracted decades ago, but the number of workers employed in the mines has plummeted. In 1908, 51,777 workers were employed in West Virginia mines; today only 20,076 people work the mines. Foxconn’s employees are the coal miners of today’s economy.
Robots will produce clear benefits to society. There will be fewer work-related injuries; fewer traffic accidents; safer, less invasive surgical procedures; and myriad new capabilities, from sick, homebound children being able to attend school to giving the power of speech to those who are deaf and mute. It is a net good for the world. The same can be said of globalization more broadly. It has increased wealth and well-being for people all over the world, but the states and societies (like my native West Virginia) that did not redirect their labor force toward growing areas of employment have foundered.
I think back to the men I worked with on the midnight janitors’ shift. Forty years ago, they would have had better-paying jobs in the coal mines or factories. By the 2020s, they might not be able to make a living even by pushing a mop. Right now at the Manchester Airport in England, robot janitors use laser scanners and ultrasonic detectors to navigate while cleaning the floors. If the robot encounters a human obstacle, it says in a proper English accent, “Excuse me, I am cleaning,” and then navigates around the person.
How societies adapt will play a key role in how competitive and how stable they are. The biggest wins from new technology will go to the societies and firms that don’t just double down on the past but that can adapt and direct their citizens toward industries that are growing. Robotics is one of them, and the others are the very focus of this book. That is why China is not just relying on forced urbanization to produce low-cost labor; it is also investing heavily in the industries of the future. There needs to be investment in growing fields like robotics but also a social framework that makes sure those who are losing their jobs are able to stay afloat long enough to pivot to the industries or positions that offer new possibilities. Many countries, particularly those in Northern Europe, are strengthening the social safety net so that displaced workers have hopes of reemerging in a new field. That means taking some of the billions of dollars of wealth that will be produced from the field of robotics and reinvesting it in education and skills development for the displaced taxi drivers and waitresses. The assumption with robots is that they’re all capex, no opex, but the capex you spend on robots doesn’t get rid of the opex that people still require. We need to revise that assumption to account for the ongoing costs of keeping our people competitive in tomorrow’s economy. We aren’t as easy to upgrade as software.
Table of Contents
1 Here Come the Robots
2 The Future of the Human Machine
3 The Code-Ification of Money, Markets, and Trust
4 The Weaponization of Code
5 Data: The Raw Material of the Information Age
6 The Geography of Future Markets
Conclusion: The Most Important Job You Will Ever Have