Average Is Over: Powering America Beyond the Age of the Great Stagnation by Tyler Cowen
The groundbreaking follow-up to the New York Times bestseller The Great Stagnation
The United States continues to mint more millionaires and billionaires than any country ever. Yet, since the great recession, three quarters of the jobs created here pay only marginally more than minimum wage. Why is there growth only at the top and the bottom?
Renowned economist and bestselling author Tyler Cowen explains that high earners are taking ever more advantage of machine intelligence and achieving ever-better results. Meanwhile, nearly every business sector relies less and less on manual labor, and that means a steady, secure life somewhere in the middle—average—is over.
In Average is Over, Cowen lays out how the new economy works and identifies what workers and entrepreneurs young and old must do to thrive in this radically new economic landscape.
“A buckle-your-seatbelts, swiftly moving tour of the new economic landscape.” - Kirkus Reviews
"Cowen has a single core strength...his taste for observations that are genuinely enlightening, interesting, and underappreciated." - The Daily Beast
"A bracing new book" - The Economist
"Tyler Cowen's new book Average is Over makes an excellent follow-up to his previous work The Great Stagnation and I expect it will set the intellectual agenda in much the way its predecessor did." - Slate
"The author roves broadly and interestingly to make his case, outlining radical economic transformations that lie in store for us, predicting the rise and fall of cities depending on their capacity to adapt to this machine-driven world and offering policy prescriptions for preserving American prosperity." - The Wall Street Journal
"Audacious and fascinating." - The Financial Times
"Thomas Friedman - move over. There's a new guy on the block." - Tampa Bay Tribune
"Eminently readable." - The Brookings Institution
"Cowen has a rare ability to present fundamental economic questions without all of the complexity and jargon that make many economics books inaccessible to the lay reader." - The American Interest
Praise for The Great Stagnation
“As Cowen makes clear, many of this era’s technological breakthroughs produce enormous happiness gains, but surprisingly little additional economic activity” —David Brooks, The New York Times
“One of the most talked-about books among economists right now.” —Renee Montagne, Morning Edition, NPR
“Tyler Cowen may very well turn out to be this decade’s Thomas Friedman.” —Kelly Evans, The Wall Street Journal
“Perhaps it’s the mark of a good book that after you’ve read it, you begin to see evidence for its thesis in lots of different areas… it’s well worth the time and the money.” —Ezra Klein, The Washington Post
“Cowen says over the last 300 years the U.S. has eaten all the low-hanging fruit. We’ve exhausted the easy pickings of abundant land, technological advance, and basic education for the masses. We thought the low-hanging fruit would never run out. It did, but we pushed ahead. And thus Cowen’s understated but penetrating summation of the financial crisis: “We thought we were richer than we were.” —Bret Swanson, Forbes
"The Great Stagnation has become the most debated nonfiction book so far this year." - David Brooks, The New York Times
"Cowen's book...will have a profound impact on the way people think about the last thirty years." - Ryan Avent, Economist.com
Praise for An Economist Gets Lunch
“Part economic history… part guide to getting a better meal at home or a restaurant. Renowned economist…Professor Cowen is an expert on the economics of culture and the arts.” —Damon Darlin, The New York Times Dining Section
“[a] Calvin Trillin-like ode to tamale stands and ethnic food, the more exotic the better” —Dwight Garner, The New York Times Book Review Section
“If one’s goal is to eat well, Mr. Cowen’s rules are golden.” —Graeme Wood, The Wall Street Journal
“An Economist Gets Lunch is a mind-bending book for non-economists.” —USA Today
- Publisher: Penguin Publishing Group
- Sold by: Penguin Group
- Format: NOOK Book
- File size: 1 MB
- Age Range: 18 Years
Read an Excerpt
Work and Wages in iWorld
This book is far from all good news. Being young and having no job remains stubbornly common. Wages for young people fortunate enough to get a job have gone down. Inflation-adjusted wages for young high school graduates were 11 percent higher in 2000 than they were more than a decade later, and inflation-adjusted wages of young college graduates (four years only) have fallen by more than 5 percent. Unemployment rates for young college graduates have been running for years now in the neighborhood of 10 percent and underemployment rates near 20 percent. The sorry truth is that a lot of young people are facing diminished job opportunities, even several years after the formal end of the recession in 2009, when the economy began to once again expand after a historic contraction.
Many people are seeing the erosion of their economic futures. The labor market troubles of the young—which you can observe in many countries—are a harbinger of the new world of work to come.
Lacking the right training means being shut out of opportunities like never before.
At the same time, the very top earners, who often have advanced postsecondary degrees, are earning much more. Average is over is the catchphrase of our age, and it is likely to apply all the more to our future.
This maxim will apply to the quality of your job, to your earnings, to where you live, to your education and to the education of your children, and maybe even to your most intimate relationships. Marriages, families, businesses, countries, cities, and regions all will see a greater split in material outcomes; namely, they will either rise to the top in terms of quality or make do with unimpressive results.
These trends stem from some fairly basic and hard-to-reverse forces: the increasing productivity of intelligent machines, economic globalization, and the split of modern economies into both very stagnant sectors and some very dynamic sectors. Consider the iPhone. The iPhone is made on a global scale, and it blends computers, the internet, communications, and artificial intelligence in one blockbuster, game-changing innovation. It reflects so many of the things that our contemporary world is good at, indeed great at. Today’s iPhone would have been the most powerful computer in the world as recently as 1985. Yet to cite two contrasting sectors, typical air travel doesn’t go faster than it did in 1970, and it is not clear our K–12 educational system has much improved.
This imbalance in technological growth will have some surprising implications. For instance, workers more and more will come to be classified into two categories. The key questions will be: Are you good at working with intelligent machines or not? Are your skills a complement to the skills of the computer, or is the computer doing better without you? Worst of all, are you competing against the computer? Are computers helping people in China and India compete against you?
If you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. If your skills do not complement the computer, you may want to address that mismatch. Ever more people are starting to fall on one side of the divide or the other. That’s why average is over.
This insight clarifies many key issues, such as how we should reform our education; where new jobs will come from and why (some) wages might start rising again; which regions will see skyrocketing real estate prices and which will empty out; why some companies will get smarter and smarter, while others just try to ship product out the door; which human beings will earn a lot more and which workers will move to low-rent areas to make ends meet; and how shopping, dating, and meeting negotiations will all change.
What lies ahead of us will be a very surprising time, and it is likely that new technologies already emerging will lead us out of what I called in a previous book “the great stagnation.” It is true that there has been a persistent slowdown in real economic growth in the Western world and Japan, but this book suggests how that might plausibly change. It is not the new technologies per se; it is how some of us will use them.
The technology of intelligent machines may conjure up science fiction visions of rebellious robots or computers that feel and maybe fall in love or proclaim themselves to be gods. The reality of the progress on the ground is based on an integration of capabilities rather than on any one thing that might be described as “artificial intelligence.” What is happening is an increase in the ability of machines to substitute for intelligent human labor, whether we wish to call those machines “AI,” “software,” “smart phones,” “superior hardware and storage,” “better integrated systems,” or any combination of the above. This is the wave that will lift you or that will dump you.
The fascination with technology and the future of work has inspired some fascinating writings, including Martin Ford’s classic The Lights in the Tunnel, the more recent and excellent eBook Race Against the Machine by Erik Brynjolfsson and Andrew McAfee, and Ray Kurzweil’s futuristic work on how humans will meld with technology. Debates about mechanization periodically resurface, most prominently in the 1930s and in the 1960s but now once again in our new millennium. Average Is Over builds upon these influential works and attempts to go beyond them in terms of detail and breadth. In these pages I paint a vision of a future which at first appears truly strange, but at least to me is also discomfortingly familiar and indeed intuitive. As a blogger and economics writer, I find that the question I receive most often from readers is—by far—something like: “What will the low- and mid-skilled jobs of the future look like?” This question is on everyone’s mind with a new urgency but it goes back to David Ricardo and Charles Babbage in the nineteenth century. Ricardo was a leading economist of his time who wrote on “the machinery question,” while Babbage was the intellectual father of the modern computer and he—not coincidentally—also wrote on how radical mechanization was going to reshape work.
These questions have reemerged as culturally central because we are at the crux of a technological revolution once again. It’s becoming increasingly clear that mechanized intelligence can solve a rapidly expanding repertoire of problems. Solutions began appearing on the margins of the world’s interests. Deep Blue, an IBM computer, defeated the then–world champion Garry Kasparov in a chess match in 1997. Watson, a computer program, beat Ken Jennings—the human champion—on Jeopardy! in 2011, surpassing most expectations as to how quickly this would happen. Interesting developments, yes, but the technological news is becoming more central to our concerns.
We’re on the verge of having computer systems that understand the entirety of human “natural language,” a problem that was considered a very tough one only a few years ago. Just talk to Siri on your iPhone and she is likely to understand your voice, give you the right answer, and help you make an appointment. Siri disappoints with its mistakes and frequently obtuse responses, but it—or its competitors—will improve rapidly with more data and with assistance from crowd-sourced recommendations and improvements. We’re close to the point where the available knowledge at the hands of the individual, for questions that can be posed clearly and articulately, is not so far from the knowledge of the entire world. Whether it is through Siri, Google, or Wikipedia, there is now almost always a way to ask and—more importantly—a way to receive the answer in relatively digestible form.
It must be emphasized that every time you use Google you are relying on machine intelligence. Every time Facebook recommends a new friend for you or sends an ad your way. Every time you use GPS to find your way to a party.
Don’t write off those robots either, even if they may never pray to God or pass for human beings. In 2011 Taiwan-based Foxconn, the world’s largest contract electronics manufacturer, announced a plan to increase the use of robots in its factories one hundredfold within three years, bringing the total to one million robots. After recent wage increases in China—to levels still low by Western standards—the company doesn’t consider its labor so cheap anymore. In the United States as well, the use of industrial robots is booming, and the likely future for North America is that of a coherent economic unit where the United States, Canada, and Mexico band together to make major investments in customized robot production and then use these investments to dominate global manufacturing.
Robot-guided mechanical arms are common in the operating room, and computers spend more time flying our planes than do the pilots. South Korea is experimenting with robotic prison wardens that patrol the corridors, watch for inmates doing something wrong, and report the misdeeds.
Driverless cars are already operating on the streets of Berlin and Nevada, and Florida and California have passed bills to legalize computer-commanded “driverless cars” on their roads. Google’s team has test-driven hundreds of thousands of miles with these cars, so far without an accident or major incident; the one reported five-car pileup happened after a human took over from the computer. Some Google employees have their self-driving vehicles take them to work. These car robots don’t look like something from The Jetsons; the driverless features on these cars are a bunch of sensors, wires, and software. This technology works.
There is now a joke that “a modern textile mill employs only a man and a dog—the man to feed the dog, and the dog to keep the man away from the machines.”
Software is also encroaching upon journalism. One experiment found that the intelligent mechanized analysis of Narrative Science, a start-up from Illinois, can do a passable job of taking statistics and writing up descriptions of sporting events, company financial reports, and macroeconomic data. These programs won’t soon be at the frontier of creative journalism, but they may soon be generating a lot of run-of-the-mill news for purposes of search and storage. They also may take away some jobs: Should the local newspaper really send a reporter down to that minor league baseball game? Software is not only taking a shot at writing essays but also grading them and providing instant feedback on student work in progress, analysis that is well beyond grading multiple-choice quizzes. These programs still need to work out some bugs (a clever student can game them with coherent-sounding nonsense), but they are much further along than we had been expecting five or ten years ago. Writers and teachers need to consider what aspects of their work are better done by intelligent-machine analysis and look closely at the irreplaceable value they do provide.
Date-matching algorithms are steering our love lives and replacing the matchmaker. Match.com recently improved its services, and as of summer 2011 more than half of the emails sent on the service originate from recommended matches, rather than from unaided individual choices. Better algorithms often are seen as the future of the sector, whether or not they really find the best person for us. Arguably the machine recommendations are a way of tricking the user into making a plausible date choice rather than cruising more profiles and postponing a decision; that possibility illustrates our willingness to defer to the machines, even when they aren’t necessarily better at the task at hand.
On Netflix, it is now standard for users to consult or follow the system’s algorithm for movie choices. We make the choice, but we have a new smart partner when it comes to movie watching.
Some of these developments may end up creeping us out, precisely because they might prove effective. The city of Santa Cruz, California, has already begun using mechanized intelligence to deploy police officers against car and property theft. The program, written by a team of social scientists and two mathematicians, generates predictions about which areas and windows of time are most likely to see property crimes; the model has been published in the Journal of the American Statistical Association. As new crimes occur, the predictions are recalibrated daily; the model is based on some of the models used to predict aftershocks from earthquakes. The program awaits further study, but this will not be the last attempt to automate crime prevention. The TSA is experimenting with software that tries to detect, by scanning body language, which plane passengers have hostile intentions.
Not each and every one of these innovations will pay off. But let’s ask a few questions. First, in which major areas do we see ongoing technological advances exceeding expectations from just a few years ago? Second, in which areas do we see a lot of new and promising technological works in progress? Third, in which areas can we expect the general forces propelling innovation (say globalization or Moore’s law that processing power for computers will continue rising at rapid rates) to remain powerful? Finally, can we see evidence that these areas are already influencing economic statistics measuring our nation’s well-being? I’ll get into more detail on all of these questions, but for now the point is that the areas of the economy identified in the answers to these questions all overlap on one technology: mechanized intelligence. And its effect on economic statistics is trending up.
The next step perhaps is that we will start to see just how well some of these machines can predict our behavior. One of Isaac Asimov’s most profound works is his neglected short story Franchise. In this tale democratic elections have become nearly obsolete. Intelligent machines absorb most of the current information about economic and political conditions and estimate which candidate is going to win. (In fact a small number of variables, such as the change in GDP, the unemployment rate, the inflation rate, and the presence of a major war, predict presidential elections pretty well.) In the story, however, the machines can’t quite do the job on their own, as there are some ineffable social influences the machines cannot measure and evaluate. The American government thus picks out one “typical” person from the electorate and asks him or her some questions about moods. The answers, combined with the initial computer diagnosis, suffice to settle the election. No one needs to actually vote.
This will sound outrageous to many. It seems to cross a precious line of liberty and freedom. But perhaps we are not as free as we might think in the first place. Given your background, your friends, your family, the books you read, and the movies you watch, how surprising is your vote in a federal election? The future of technology is likely to illuminate the unsettling implications of how predictable we are and indeed in 2012 political campaigns invested heavily in predicting where to find supporters and important swing districts.
Here is a short paragraph from The New York Times that illustrates where we already have arrived:
As Pole’s computers crawled through the data, he was able to identify about 25 products that, when analyzed together, allowed him to assign each shopper a “pregnancy prediction” score. More important, he could also estimate her due date to within a small window, so Target could send coupons timed to very specific stages of her pregnancy.
The computers crawling over this mountain of records of individual retail sales used an algorithm that recognized pregnant women buying lots of calcium, magnesium, and zinc supplements early in their pregnancies, unscented lotion around the beginning of the second trimester, and hand sanitizer and extra-big bags of cotton balls as they approached the date of delivery.
Whether we like it or not, our sparring partners will use mechanized intelligence during their business contests. When a major negotiation is being conducted, or when potential business partners are being introduced, the interaction will be recorded, processed, and analyzed in real time, just as the genius machine Watson parses a Jeopardy! question. Each party to the communications might receive a real-time report on when the other people are likely lying, what their stress levels are, how detailed their narratives are, who in the group really speaks with authority, and how many first-person pronouns they are using, all based on an analysis of voice data. From this data, and other measurable factors, a program will construct and transmit some kind of “read” on the conversation. The machine doesn’t have to reach perfection or even come close; it just has to do better than you.
There is work in progress on using software to detect human lies, based on computer analyses of our voices; Dan Jurafsky of Stanford University and Julia Hirschberg of Columbia University are two leaders in this area. They claim their programs already can detect deception more readily than can human observers; in any case, improvements and refinements stand a good chance of getting us there.
Imagine a vibrating iPhone in one’s pocket transmitting signals, based on the computer analysis—a slight vibration indicating each time a lie is told. Or a message could appear in your contact lenses. But it isn’t just about the gadgetry. Eventually it will be commonly understood that such analyses are going on in real time. Negotiators will be trained to fool or otherwise throw off the voice-tracking programs. In turn the programs will be improved to keep up with these tactics, setting up a never-ending “arms race” between technologies of deception and detection. And a new kind of sophisticated social interaction will develop. That is bigger news than any new gadget.
Will we see this social interaction prove effective in business negotiations and stop there? Our human curiosity is likely to run its usual courses. There will be new options for calming nerves and doubts on first dates: Does she like me? Is this guy trying to hit on me? Is he actually married? Will she let me kiss her on the cheek? Of course, we won’t be able to stop other people from bringing inconspicuous recording and analyzing devices to our face-to-face meetings. Someone will surely try to develop a way to measure the genetic information of the people one is dealing with, as was portrayed in the seminal movie Gattaca.
And then there will be mechanized intelligence in our home. Imagine recording and analyzing scenes from your living room and bedroom. The very idea might be loathsome, and it is probably not useful to see a running prediction on the likelihood of mom and dad staying together as a couple. But could you avoid the temptation to take a peek at that number every now and then?
We may tend to think of mechanized intelligent analysis as primarily useful in judging other people, but it will also have the potential to promote self-knowledge. During a date, a woman might consult a pocket device in the ladies’ room that tells her how much she really likes the guy. The machine could register her pulse, breathing, tone of voice, the level of detail in her narrative, or whichever biological features prove to have predictive power.
Self-scrutiny doesn’t have to be restricted to matters of the heart. Which products do we really like or really notice? How are we responding to advertisements when we see them? Consult your pocket device. There is currently a DARPA (Defense Advanced Research Projects Agency, part of the Department of Defense) project called “Cortically Coupled Computer Vision.” The initial applications help analysts scan satellite photos or help a soldier-driver navigate dangerous terrain in a Jeep. Basically, the individual wears some headgear and the device measures neural signals whenever the individual experiences a particular kind of subconscious alert (Danger! or, Salt! or, Familiar!). Someday we will be able to dispense with the headgear or otherwise make the device less burdensome and less conspicuous. You will be able to walk down the aisle of a shopping mall or scan a window in a store, and the device will record and note whichever products grabbed your attention. You could program it to prompt you at the time or to store the information, with appropriate tags, for future reference, perhaps combined with a web search for appropriate coupons.
If you’re not interested, the store probably will be. Your shopping cart will use GPS to track your moves through the store, including which aisles you visit most often. Or they can track you by video and smart cameras, using face recognition technology to aggregate data across your different store visits. Stores have even started to use cameras on test subjects to track shoppers’ retinas, to see how and when they notice products, or to lock on to cell phone signals and track and analyze shoppers’ movements. When you enter the store, they will promise you a discount, but only if you run your card through the cart and identify yourself. Many will grab the discount, just as they participate in frequent-shopper programs, even though it means the food company knows what they are buying. They’d rather have the lower prices than the privacy—as indeed would I.
There are already pilot projects along these lines. Companies have discovered that if the first product seems cheaper than expected, the customer is more trusting, and more willing to spend, for the duration of the shopping trip. So if the intelligent machines spot you coming in the door and heading to the gourmet chocolate, maybe there will be a quite temporary sale, communicated by electronic sensors—for your benefit, of course.
It’s easy to see how comparable developments, as applied to analyzing human behavior and intentions, could thwart a lot of relationships or business deals in the works. The sorry truth is that if we knew all or even some of the bad things about our prospective partners, we might be so cautious that we never take a romantic leap. As it stands, the world is set up to overreact to negative information, as even a whiff of scandal causes us to lose trust in other individuals. We will need some significant cultural changes to make do with an increase in the “warts and all” coverage that mechanized intelligent analysis will soon be delivering about virtually all notable public figures and many private individuals too. What if a woman’s pocket device reported during the bathroom break that her date smiled at the waitress for a little too long? Sometimes the guy is a creep who should be dumped, but, well, in this particular case, maybe not. The positive illusions (all our children are above average) that help get us through everyday life could so easily wither in the face of sangfroid machine critiques. Not this year or next year, but most likely within our lifetimes.
Smart software is already used to ferret out phony web reviews on sites such as Amazon and TripAdvisor. A team of Cornell researchers discovered how to spot phony reviews about 90 percent of the time. The paid-for fake reviews use too many superlatives, too many vague evaluations, and not enough detailed descriptions, and they also use the words “I” and “me” more often, perhaps as a substitute for knowing what they are talking about. In due time the fake reviews will evolve and will be harder to spot by these older methods, and the fake-spotting software will have to catch up—another ongoing arms race.
Similarly, researchers are working on how to spot the liars in online dating, so we can expect software to follow suit. Preliminary results suggest that liar profiles avoid the use of the word “I” (the opposite of liar user reviews), use lots of negation (they write “not sad” rather than “happy”), and they write shorter self-descriptions, presumably to avoid having to manage and keep all the lies consistent. Profile writers who are lying about their age or weight tend to devote more space to boasting about their personal achievements. Expect improvement in our abilities to nail the cheaters.
It is no accident that we are seeing so many signs of significant progress in mechanized intelligent analysis, albeit in varying stages of maturity. First, Moore’s law about ongoing advances in processing speed has continued to pay off, with no immediate end in sight. Second, the machine intelligence sector is largely unregulated. If you compare it to health care as a world-altering, stagnation-ending breakthrough industry, regulatory obstacles are a far greater problem for pharmaceutical companies and for hospitals than for the likes of Google, Amazon, and Apple. Health care, with its physician licensing, Byzantine hospital regulations, and FDA approval process, also makes most of its changes quite slowly, for better or worse. It’s not just about the laws, but also because doctors and patients often have very conservative or moralistic views about how health care should be done; just look at the recent controversies over stem cell treatments and genetic engineering. Health care is an ethical minefield and arguably we should be especially cautious when evaluating new medical and institutional breakthroughs. In any case, we can expect slower progress.
When it comes to mechanized intelligent analysis, patent law can be a problem but for the most part the paths forward are relatively free of regulatory obstacles. Some applications, such as driverless cars, do face potential lawsuits; for instance, imagine the first time an out-of-control driverless car runs down a child. The parents would likely sue the deep-pocketed company that made the car, in addition to suing the owner of the car and maybe also the person who programmed the software of the car. Given this likelihood, there may be delays in turning the cars into marketable products. Maybe it will happen first in some part of the world far less litigious than the United States.
Still, very often entrepreneurs and scientists can do the work behind smarter machines, and develop usable products, without needing much special permission from the powers that be. Deep Blue and Watson and Siri were brought to the world without facing many regulatory or legal barriers. Technological progress slows down when there are too many people who have the right to say no, but software in general gets around a lot of the traditional veto points. The key work is done in the individual mind and with relatively small teams and in computer code, and it’s hard to hold back the innovations by law or custom.
So how does this new world change what young people should expect from their work lives?
What People Are Saying
“A lively and worryingly prophetic read… some of the most talked-about issues in present-day America… observations that are genuinely enlightening, interesting, and underappreciated” —The Daily Beast
“A book that is gripping policy makers in Washington… An engaging and eclectic thinker.” —The Sunday Times
“Cowen’s book represents a fundamental challenge.”—Wall Street Journal
“A buckle-your-seatbelts, swiftly moving tour of the new economic landscape.”—Kirkus
Meet the Author
TYLER COWEN is a professor of economics at George Mason University. His blog, Marginal Revolution, is one of the world’s most influential economics blogs. He also writes for the New York Times, Financial Times and The Economist and is the cofounder of Marginal Revolution University. The author of five previous books, Cowen lives in Fairfax, Virginia.
Most Helpful Customer Reviews
A great title and an interesting premise. But this is my second disappointment by this author. 'The world is changing and increasingly the jobs will be in management, sophisticated services, and in tech sectors assisted by machines.' The author loves his ipad and chess, and throws in some topical politics at the beginning and end. But I found a lot of the information baseless and anecdotal and the analysis somewhat lacking in rigor. He may well be right but he fails to make a compelling case, and what he does conclude could have been wrapped up in 50 pages. An average book - worth reading but not overwhelming.