Praise for Lawrence Lessig
"Lawrence Lessig gets things changed not for the benefit of corporations but to unleash the creative potential of ordinary people in a digital age."
The Guardian
In an era when special interests funnel huge amounts of money into our government (driven by shifts in campaign-finance rules and brought to new levels by the Supreme Court in Citizens United v. Federal Election Commission), trust in our government has reached an all-time low. More than ever before, Americans believe that money buys results in Congress, and that business interests wield control over our legislature.
With heartfelt urgency and a keen desire for righting wrongs, Harvard law professor Lawrence Lessig takes a clear-eyed look at how we arrived at this crisis: how fundamentally good people, with good intentions, have allowed our democracy to be co-opted by outside interests, and how this exploitation has become entrenched in the system. Rejecting simple labels and reductive logic (and instead using examples that resonate as powerfully on the Right as on the Left), Lessig seeks out the root causes of our situation. He plumbs the issues of campaign financing and corporate lobbying, revealing the human faces and follies that have allowed corruption to take such a foothold in our system. He puts the issues in terms that nonwonks can understand, using real-world analogies and real human stories. And ultimately he calls for widespread mobilization and a new Constitutional Convention, presenting achievable solutions for regaining control of our corrupted (but redeemable) representational system. In this way, Lessig plots a roadmap for returning our republic to its intended greatness.
While America may be divided, Lessig vividly champions the idea that we can succeed if we accept that corruption is our common enemy and that we must find a way to fight against it. In REPUBLIC, LOST, he not only makes this need palpable and clear; he gives us the practical and intellectual tools to do something about it.
"REPUBLIC, LOST is a powerful reminder that this problem goes deeper than poor legislative tactics or bad character. As progressives contemplate how best to pick up the pieces after recent setbacks, a robust agenda to change how business gets done in the capital needs to be part of the picture. This time, we'd better mean it."
Matthew Yglesias, The American Prospect
"Lessig is one of those rare legal scholars with both a clear narrative voice and a fine eye for historical irony."
The Washington Post

"A bright and spark-filled polemic... combining legal sophistication with a storyteller's knack."
Wall Street Journal, on Free Culture

"A powerfully argued and important analysis... it is also surprisingly entertaining."
The New York Times Book Review, on Free Culture

"Once dubbed a 'philosopher king of Internet law,' he writes with a unique mix of legal expertise, historic facts and cultural curiosity, citing everything from turn-of-the-century Congressional testimony to Wikipedia to contemporary best-sellers like Chris Anderson's The Long Tail. The result is a wealth of interesting examples and theories on how and why digital technology and copyright law can promote professional and amateur art."
M.J. Stephey, Time Magazine

"More than anything, Lessig understands and often wrestles with a rather understated theory: common sense."
Derek Bores, PopMatters

"As an initial matter, Lessigian thought is deeply critical in nature... Perhaps it is the luxury of academia, or his nature generally, but Lessig is not afraid to say (loudly) at times: This doesn't work! We need to change. He says it often, and people are listening."
Russ Taylor, Federal Communications Law Journal

"No one is more skilled at making arcane legal and technological questions terrifyingly relevant to everyday life than Lessig."
Sonia Katyal, Texas Law Review
There are no vampires or dragons here. Our problems are much more pedestrian, much more common. Indeed, anything we could say about the perpetrator of the corruption that infects our government (Congress) we could likely say as well about ourselves. In this part, I frame this sense of corruption, to make that link clear, and to make its solution more obvious.
In the summer of 1991, I spent a month alone on a beach in Costa Rica reading novels. I had just finished clerking at the Supreme Court. That experience had depressed me beyond measure. I had idolized the Court. It turns out humans work there. It would take me years to relearn just how amazing that institution actually is. That fall, I was to begin teaching at the University of Chicago Law School. I needed to clear my head.
I was staying at a small hotel near Jaco. In the center of the hotel was a large open-air restaurant. At one end hung a TV, running all the time. The programs were in Spanish and hence incomprehensible to me. The one bit someone did translate was a warning that flashed before the station aired The Simpsons, advising parents that the show was “antisocial,” not appropriate for kids.
Midway through that month, however, that television became the center of my life. On Monday, August 19, I watched with astonishment the coverage of Russia’s August Putsch, when hard-line Communists tried to wrest control of the nation from the reformer Mikhail Gorbachev. Tanks were in the streets. Two years after Tiananmen, it felt inevitable that something dramatic, and tragic, was going to happen. Again.
I sat staring at the TV for most of the day. I pestered people to interpret the commentary for me. I annoyed the bartender by not drinking as I consumed the free TV. And I watched with geeky awe as Boris Yeltsin climbed on top of a tank and challenged his nation to hold on to the democracy the old Communists were trying to steal.
I will always remember that image. As with waking up to the Challenger disaster or watching the reports of Bobby Kennedy’s assassination, I can remember those first moments almost as clearly as if they were happening now. And I vividly remember thinking about the extraordinary figure that Yeltsin was: bravely challenging in the name of freedom a coup that if successful—and on August 19 there was no reason to doubt it would be—would certainly result in the execution of this increasingly idolized defender of the people.
Every other player in that mix seemed tainted or compromised, Gorbachev especially. And compromise (what life at the Court had shown me) was exactly what the month away was to allow me to escape. So at that moment, Yeltsin was the focus for me. Here was a man who could be for Russia what George Washington had been for America. History had given him the opportunity to join its exclusive club. It had taken some initial courage for him to climb on, but on August 19, 1991, I couldn’t imagine how he could do anything other than ride this opportunity to its inevitable end. If democracy seemed possible for the former Soviets, it seemed possible only because it would have a voice through the rough and angry Yeltsin.
That’s not, of course, how the story played out. No doubt Yeltsin’s position was impossibly difficult. But over the balance of the 1990s, the heroic Yeltsin became a joke. Perhaps unfairly—and certainly unfairly at the beginning, since his real troubles with alcohol began only after he became Russia’s president—he was increasingly viewed as a drunk. After his first summit with Yeltsin, Clinton became convinced that his addiction was “more than a sporting problem.” The public didn’t even learn about the most incredible incident until two years ago: on a visit to Washington to meet with Clinton, Yeltsin was found by the Secret Service on a D.C. street in the predawn hours, dressed only in underwear, trying in vain to flag down a taxi to take him to get pizza. Yeltsin fumbled his chance at history, all because of the lure of the bottle.
As clearly as I remember watching him on that tank on August 19, I remember thinking, over the balance of that decade, about the special kind of bathos that Yeltsin betrayed. He was handed a chance to save Russia from authoritarians. Yet even this gift wasn’t enough to inspire him to stay straight.
Yeltsin is a type: a particular, and tragic, character type. No doubt a good soul, he wanted and worked to do good for his nation. But he failed, in part because of a dependency that conflicted with his duty to his nation. We can’t hate him. We could possibly feel sorry for him. And we should certainly feel sorry for the millions who lost the chance of a certain kind of free society because of this man’s dependency.
Such characters and such dependencies, however, are not limited to individuals. Institutions can suffer them, too. Not because the individuals within the institutions are themselves addicted to some drug or to alcohol. Maybe they are. No doubt many are. That’s not my point. Instead, an institution can be corrupted in the same way Yeltsin was when individuals within that institution become dependent upon an influence that distracts them from the intended purpose of the institution. The distracting dependency corrupts the institution.
Consider an obvious case.
A doctor at a medical school teaches students how to treat a certain condition. That treatment involves a choice among a number of drugs. Those drugs are produced by a number of competing drug companies. One of those companies begins to offer the doctor speaking opportunities—relatively well paid, and with reliable regularity. The doctor begins to depend upon this income. She buys a fancier car, or a vacation house on a lake. And while there’s no agreement, express or implied, about the doctor’s recommending the drug company’s treatment over others, assume the doctor knows that the company knows what in fact she is recommending. Indeed (it is an amazing fact, if you don’t already know it), drug companies are able to track precisely which drugs a particular doctor prescribes, or doesn’t, and adjust their marketing accordingly.
In this simple example, we have all the elements of the kind of corruption I am concerned with here. The institution of medical education has a fairly clear purpose—Harvard’s is to “create and nurture a diverse community of the best people committed to leadership in alleviating human suffering caused by disease.” That purpose requires doctors to make judgments objectively, meaning based upon, or dependent upon, the best available science about the benefits and costs of various treatments. If a doctor within that institution compromises that objectivity by weighing more heavily, or less critically, the treatments from one company over another, we can say that her behavior would tend to corrupt the institution of education—her dependency upon the drug company has led her to be less objective in her judgment about alternatives.
Of course, we can’t simply assume that money for speaking would bias the doctor’s judgment. There is plenty of research to show why it could, but so far that research is an argument, not proof. It is at least possible that such an arrangement leaves the judgment of the scientist unaffected. Although, again, my own reading of the evidence suggests that’s unlikely. But my point just now is not to prove the effect of money. It is instead to clarify one conception of corruption. It is perfectly accurate to say that if the relationship between the doctor and the drug company affected the objectivity of the doctor, then the relationship “corrupted” the doctor and her institution.
In saying this, however, we need not be saying that the doctor is an evil or bad person. If our doctor has sinned, her sin is ordinary, understandable. And indeed, among doctors in her position, her “sin” is likely not even viewed as a sin. The freedom or latitude to supplement one’s income is an obvious good. To anyone with kids, or a mortgage, it feels like a necessity. We can all, if we’re honest, imagine ourselves in her position precisely. Ordinary and decent people engage all the time in just this sort of compromise. It is the stuff of modern life, to be managed, not condemned, because if condemned, ignored.
We manage this sort of corruption by, first, recognizing its elements and, second, evaluating explicitly whether the institution can afford the compromise it produces. We recognize its elements by being explicit about the range of influences that operate upon individuals within that institution—particular influences within, we could say, an economy of influence. Some of those influences may be too random to regulate. Some may be the sort that any mature understanding of human nature would say produced a dependency.
Where there is such a dependency, those responsible for the effectiveness of the institution must ask whether that dependency too severely weakens the independence of the institution. If they don’t ask this question, then they betray the institution they serve.
By invoking this idea of dependency, I mean to evoke a congeries of ideas: a dependency develops over time; it sets a pattern of interaction that builds upon itself; it develops a resistance to breaking that pattern; it feeds a need that some find easier to resist than others; satisfying that need creates its own reward; that reward makes giving up the dependency difficult; for some, it makes it impossible.
We all understand how these ideas map onto Yeltsin’s struggle. Few of us have not been harmed by, or not done harm as, an alcoholic. We get this dynamic. We have lived with it.
How these ideas map onto an institution, however, is something we need still to work out. Institutions are not spirits. They don’t act except through individuals. Yet each of these ideas is at least understandable when we think of an institution in which key individuals have become distracted by an improper, or conflicting, dependency.
That distraction is the corruption at the core of this book. Call it dependence corruption. As I will show in the pages that follow, it is this pattern precisely that weakens our government. It is this pattern that explains that corruption without assuming evil or criminal souls at the helm. It will help us, in other words, understand a pathology that all of us acknowledge (at the level of the institution) without assuming a pathology that few could fairly believe (at the level of the individual).
As an introduction to dependence corruption, consider a link between the idea and an example more directly related to the aim of this book.
Imagine a young democracy, its legislators passionate and eager to serve their new republic. A neighboring king begins to send the legislators gifts. Wine. Women. Or wealth. Soon the legislators have a life that depends, in part at least, upon those gifts. They couldn’t live as comfortably without them, and they slowly come to recognize this. They bend their work to protect their gifts. They develop a sixth sense about how what they do in their work might threaten, or trouble, the foreign king. They avoid such topics. They work instead to keep the foreign king happy, even if that conflicts with the interests of their own people.
Just such a dynamic was the fear that led our Framers to add to our Constitution a strange and favorite clause of mine. As Article I, section 9, clause 8, states,
[N]o Person holding any Office of Profit or Trust under [the United States], shall, without the Consent of the Congress, accept of any present, Emolument, Office, or Title, of any kind whatever, from any King, Prince, or foreign State.
The motivation for this clause was both contemporary to the Framers and a part of their history. At the time of the founding, the king of France had made it a practice to give expensive gifts to departing ambassadors when they had successfully negotiated a treaty. In 1780 he gave Arthur Lee a portrait of himself set in diamonds and fixed above a gold snuff box. In 1784 he gave Benjamin Franklin a similar portrait, also set in diamonds. The practice was common throughout Europe. During negotiations with Spain, for example, the king of Spain presented John Jay with a horse. Each of these gifts raised a reasonable concern: Would agents of the republic keep their loyalties clear if in the background they had in view these expected gifts from foreign kings? Would the promised or expected gift give them an extra push to close an agreement, even if (ever so slightly) against the interests of their nation?
The same fear was a part of England’s past. The reign of Charles II was stained by the fact that he, and most of his ministers, received payments (“emoluments”) from the French Crown while in exile in France. Many believed the British monarchy thus became dependent upon those emoluments, and hence upon France. Those emoluments were viewed as a form of corruption, even if there was no clear quid pro quo tied to the gifts.
Likewise with the relationship of the British Crown to ministers in Parliament: The core corruption the Framers wanted to avoid was Parliament’s loss of independence from the Crown because the king had showered members of Parliament with offices and perks that few would have the strength to resist. Members were thus pulled to the view of the king, and away from the view of the people they were intended to represent.
In each of these cases, the concern was not just a single episode. It was a practice. The fear was not just that a particular minister might be bribed. It was that many ministers might develop the wrong sensibilities. The fear, in other words, was that a dependency might develop that would draw the institution away from the purpose it was intended to serve: The people. The realm. The commons.
Think about it like this: Imagine a compass, its earnest arrow pointing to the magnetic north. We all have a trusting sense of how this magical device works. When we turn with the compass in our hands, the needle turns back. Its job is to track the magnetic north, regardless of the spin we give it.
Now imagine we’ve rubbed a lodestone on the metal casing of the compass, near the mark for “west.” The arrow shifts. Slightly. That shift is called the “magnetic deviation.” It represents the error induced by the added magnetic field.
Magnetic north was the intended dependence. Tracking magnetic north is the purpose of the device. The lodestone creates a competing dependence. That competing dependence produces an error. A corruption. And we can see that error as a metaphor for the corruption that I am describing by the term dependence corruption.
If small enough, the magnetic deviation could allow us to believe that the compass remains true. Yet it is not true. However subtle, however close, however ambiguous the effect might be, the deviation corrupts.
Depending on the context, depending on the time, depending on the people, that corruption will matter. Repairing it, at least sometimes, will be critical.
It is late at night, a sleepless night, as all nights have been since the birth of your child. The kid is crying. You stumble into her room to change her. She is frantic, maybe afraid. You fumble in the dark for the pacifier, which will magically turn this anxious source of joy into a sleeping baby. You give her the pacifier. She starts sucking. And then an evil demon drops a single thought into your head, a question perfectly crafted to keep you up for the rest of the night: How do you know that plastic is safe?
And not just that plastic. What about the plastic of her cereal bowl? Or her bottle? Or the soft spoon you use to feed her? Or anything else that she puts in her mouth, which of course, for months of her life, is absolutely anything she can touch?
If you’re like I was about a decade ago (and this is not a fact I’m proud of), you’ll answer that question with a calming reassurance: Obviously the plastic is safe. We spend billions running agencies designed to ensure the safety of the stuff we put in our mouths. How could it possibly be that the safety of something a baby puts into his mouth could still be in doubt? A hundred years of consumer safety law haven’t left something as obvious as that untested.
I would have delivered that lecture to myself with some pride. This isn’t a political issue. There’s no Republican in the U.S. Congress who believes that the products our children consume should be unsafe or untested. Instead, we have all come to the view that the complexity of modern society demands this minimal regulatory assurance at least.
Not all societies are yet at this place. The weekend my wife and I discovered she was pregnant with our first child, we were in China. In the paper that morning was the story of a Chinese businessman who had been convicted for selling sugar water as baby formula. Parents who had relied upon the assurances of safety printed on the bottles watched in horror as their children bloated and died. The owner of the factory defended himself in a Chinese court with words Charles Dickens might have penned: “No one forced these parents to use my formula. They chose to use it. Any deaths are their own fault, not mine.”
But in fact, the demon pestering you as you lie awake in bed after putting your child back to sleep has asked a pretty good question. For years my wife imported our pacifiers from Europe. Until I began the research for this book, I never asked why. “BPA” (aka Bisphenol A), she said. In America, the vast majority of soft plastic for children contains BPA. In many countries around Europe that chemical has been removed from children’s products.
Among the complexities in the development of a fetus is the precision of its timing. Certain things must happen at certain times, and ordinarily they do. At certain times, for example, exposure of the fetus to estrogen can be harmful. At those precise times, the fetus develops a protective layer, a sex-hormone-binding globulin, that blocks the fetus from its mother’s estrogen.
In the mid-1990s, Frederick vom Saal, a professor of biological sciences now at the University of Missouri–Columbia, began to wonder whether the same blocking mechanism blocked man-made estrogenic chemicals as well. Those chemicals, in theory at least, could have the same harmful effect on the fetus. Did sex-hormone-binding globulins protect against those, too?
The answer was not good. “The great majority of man-made chemicals,” vom Saal found, “are not inhibited from entering cells like natural estrogens are.” Worse, vom Saal found, “the receptor in the cell that causes changes when estrogen binds to it [remember, changes that can, at specific stages of development, be extremely harmful] is very responsive” to synthetic estrogenic chemicals, including BPA.
Armed with (and alarmed by) this finding, vom Saal and others started testing the actual effects of BPA on the development of mice. The findings confirmed their worst fears. And because the “molecular mechanisms at the cellular level [produce] no difference in the way that mouse and rat cells respond to BPA and the way that human cells respond to it,” vom Saal believed he had tripped onto a potential health disaster. Almost everyone (95 percent) within the developed world now has “blood levels of [BPA] within the range ‘that is predicted to be biologically active,’ based on animal studies conducted with low doses of the chemical.” A study by the Harvard School of Public Health found that “BPA concentrations increased by 69% in the urine of subjects who drank from plastic bottles containing BPA.” Some studies have even detected BPA in the cord blood of newborns. The consequences of this exposure, according to this research, range from “reduced sperm count to spontaneous miscarriages; from prostate and breast cancers to degenerative brain diseases; from attention deficit disorders to obesity and insulin resistance, which links it to Type 2 diabetes.” Indeed, just last year, “the White House task force on childhood obesity worried [that BPA] might be promoting obesity in children.” Its fear followed this extensive and growing research.
Vom Saal’s conclusions are not his alone. Indeed, to give the issue prominence, more than thirty-six “of the world’s best brains on BPA” signed “an unprecedented consensus statement [that] laid out [the] chilling conclusions” of the research. In the view of these scientists, BPA is a danger already causing significant harm to children in developed nations, and will no doubt cause more harm in the years to come.
Not all scientists agree with vom Saal and his colleagues, however. Indeed, there are many who believe BPA is either harmless or not yet proven to cause harm in humans. Many of the studies of BPA, these scientists believe, have been methodologically flawed. Indeed, the National Institutes of Health itself has acknowledged problems with some of the research. Regulations that would ban BPA, these scientists believe, are an unnecessary burden that will only raise the cost of the products our children need (and yes, reader who has never had a child, children need pacifiers).
Among those insisting upon the safety of BPA is, not surprisingly, the industry that produces it. In December 2009, Harper’s published a summary memo from a meeting of the “BPA Joint Trade Association.” That meeting was intended to “develop potential communication/media strategies around BPA.” Members at the meeting believed that a “balance of legislative and grassroots outreach (to young mothers and students) is imperative to the stability of their industry.” Among the strategies discussed was “using fear tactics (e.g., ‘Do you want to have access to baby food anymore?’),” and urging that consumers should have choice (e.g., “You have a choice: the more expensive product that is frozen or fresh, or foods packaged in cans”). The association was concerned that the “media is starting to ignore their side,” and “doubts obtaining a scientific spokesman is attainable.” The memo identified the “holy grail spokesman” for the BPA industry in the minds of these committee members: a “pregnant young mother who would be willing to speak around the country about the benefits of BPA.”
Okay, so some say that BPA is dangerous. Some say it is not. You may be with me in the former camp, or you may be in the latter camp. Both views are fair enough.
But notice how your feelings change when you read the following:
Since vom Saal published his first study in 1997, there have been at least 176 studies of the low-dose effects of BPA. Thirteen of these studies have been sponsored by industry. The balance (163) have been funded by the government, and conducted at universities. The industry-funded studies have the advantage of being large scale. Most of the government-funded studies are smaller scale. Nonetheless, here are the results:
All of the large-scale studies found no evidence of harm. When added to the smaller-scale studies, this meant about 24 out of the 176 found no evidence of harm. But 152 of these studies did find evidence of harm. So from this perspective, we could say about 15 percent of the studies found the chemical harmless, while 85 percent found it potentially harmful.
That doesn’t sound good for BPA. And it does not get any better.
If you divide the studies on the basis of their funding, the results are even starker.
In a single line, none of the industry-funded studies found evidence of harm, while more than 85 percent of the independent studies did.
Researchers who conduct these industry-sponsored studies are of course “offended,” as one director commented, “when someone suggests that who pays for the study determines the outcome.” She explains the difference by pointing to the “nature of the study,” not “who pays for the studies.” Independent studies “typically focus on hazards, or the intrinsic capacity to do harm,” while industry-funded studies “are interested in determining the risks of exposure.”
Maybe. And maybe that’s enough to explain the difference. But here is the point I want you to recognize: Some will read this analysis and conclude that BPA is unsafe. Some will read it and won’t change their view of BPA in the slightest. But the vast majority will read this analysis and become less certain about whether BPA is safe. The presence of money with the wrong relationship to the truth is enough to dislodge at least some of the confidence that these souls once had.
And among those not so sure, at least some will have the reaction that I did, and do, every time I hand my kid a piece of plastic: It is absurd that in America I don’t know if the thing I’m feeding my child with is safe—for her or for us.
The next time you’re holding your cell phone against your ear and notice your ear getting a bit warm, ask yourself this question: Is your cell phone safe? Does the radiation coming from that handheld device—microwave radiation, emitted one inch from your brain—cause damage to your brain? Or head? Or hand?
The vast majority of Americans (70 percent) either believe the answer to the latter question is no or they don’t know. Part of that belief comes from the same sort of confidence I’ve just described—we’ve had cell phone technology for almost fifty years; certainly someone must have determined whether the radiation does any damage. Part of that belief could also come from reports of actual studies—hundreds of studies of cell phone radiation have concluded that cell phones cause no increased risk of biological harm. And, finally, part of that belief comes from a familiar psychological phenomenon: cognitive dissonance—it would be too hard to believe to the contrary. Like smokers who disbelieved reports about the link between smoking and lung cancer, we cell phone users would find it too hard to accept that this essential technology of modern life was in fact (yet) another ticking cancer time bomb.
Yet, once again, the research raises some questions.
Depending on how you count, there have been at least three hundred studies related to cell phone safety—or, more precisely, studies that try to determine if there is any “biologic effect” from cell phone radiation. The most prominent of these is a recent, $24 million UN-sponsored study covering thirteen thousand users in thirteen nations for more than a decade. That study was deemed “inconclusive,” but it did find that “frequent cell phone use may increase the chances of developing rare but deadly forms of brain cancer.” Specifically, the study found up to “40% higher incidence of glioma among the top 10 percent of people who” used their phone the most. That qualification may give you comfort, at least if you don’t think of yourself as one of those sad souls glued to their cell phones. But don’t get too comfortable yet, because the study was conceived more than a decade ago, when “heavy use” was actually quite moderate by today’s standards: thirty minutes a day put you in the highest category for the purposes of this study. Indeed, as Dr. Devra Davis writes in her book Disconnect (2010), there’s a very general problem with the established standards for cell phone usage: “Today’s standards… were set in 1993, based on models that used a very large heavy man with an eleven-pound head talking for six minutes, when fewer than 10% of all adults had cell phones. Half of all ten-year-olds now have cell phones. Some young adults use phones for more than four hours a day.”
The concern that I want to flag, however, begins, again, when one looks at the source of these studies. Dr. Henry Lai of the University of Washington has examined 326 of these radiation studies. His analysis divides the studies into those that found some biologic effect and those that did not. Good news: the numbers are about even. Fifty-six percent of the studies found a biologic effect, while 44 percent did not. Not great (for cell phone users), but perhaps not reason enough (yet) to chuck your iPhone.
But Professor Lai then divided the studies into those that were funded by industry and those that were not. Once that division was made, the numbers no longer seemed so benign. Industry-funded studies overwhelmingly found no biologic effect, while independent studies found overwhelmingly that there was a biologic effect.
Lai’s work is careful, but it has not yet been published in a peer-reviewed journal. Its conclusions, however, have been supported by important peer-reviewed work. In a paper published in 2007 in the journal Environmental Health Perspectives, researchers reviewed published studies of controlled exposure to radio-frequency radiation. They isolated fifty-nine studies that they believed meaningful, and divided those into ones funded by industry, funded by the public or charity, and funded in a mixed way.
Their conclusions are consistent with Lai’s. As they wrote, “studies funded exclusively by industry were indeed substantially less likely to report statistically significant effects on a range of end points that may be relevant to health.” This conclusion added “to the existing evidence that single-source sponsorship is associated with outcomes that favor the sponsors’ products.”
So how do these facts affect your view of cell phones?
Again, some will conclude that cell phones are dangerous. Some will continue to believe that they are safe. But the majority will process these facts by concluding that they are now no longer sure about whether cell phones are safe. The mere fact of money in the wrong place changes their confidence about this question of science.
These two stories rely upon an obvious intuition—that money in the wrong places makes us trust less. My colleagues and I at Harvard wanted to test that intuition more systematically. Can we really show that money wrongly placed weakens the confidence or trust that people have in any particular institution? And if it does, does it have the same effect regardless of the institution? Or are some institutions more vulnerable—more untrustworthy—than others?
Our experiment presented participants with a series of vignettes in three different institutional contexts: politics, medicine, and consumer products. In each context, the cases differed only by the extent to which an actor’s financial incentive was described to be dependent upon a particular outcome.
Across all three of the domains we tested, the mere suggestion of a link between financial incentives and a particular outcome significantly influenced the participants’ trust and confidence in the underlying actor or institution. Doctors’ advice was judged to be less trustworthy if the procedure they recommended was tied to a financial incentive. Politicians were judged to be less trustworthy if they supported a policy consistent with the agenda of contributing lobbyists. Researchers for consumer products were judged less trustworthy if their work was funded by an agency that had a financial stake in the outcome. And most surprisingly to us, these variations in the hypotheticals we presented also significantly influenced the participants’ judgments of their own doctors, politicians, and consumer goods. Even the suggestion of one bad apple was enough to spoil the barrel.
In each of these contexts, of course, we might well say that the participants made a logical mistake. In none of the cases did we prove that the money was affecting the results. In none of the cases did we even suggest that it was. But logic notwithstanding, trust was affected merely because money was present in a way that could have biased the results. We infer bias from the structure of the case. Rightly or wrongly, this is how we reason.
The field of “conflicts of interest” focuses on the question of when we should be concerned about dueling loyalties within a single decision maker or single institution. If, for example, you’re a judge deciding a billion-dollar lawsuit brought against Exxon, the fact that you’ve got any financial connection to Exxon, however small, is enough to disqualify you from that suit. Your decision should depend upon the law alone. And one fear addressed by “conflicts” rules is that your loyalty might be split between the law and your own personal gain.
But come on—a single share of Exxon stock is enough to get a judge kicked from the case? Does anyone actually believe that a judge would throw a case because her stock might move from sixty dollars to sixty-one? Why does the law worry about such tiny things? Or, more sharply, why would it require a judge to step aside merely because, as the law states, her “impartiality might reasonably be questioned”? Shouldn’t the test be whether the judge is partial? And if she is not partial, then shouldn’t the question of whether people “might reasonably question her impartiality” be irrelevant? We don’t lock people up in jail merely because other people “might reasonably” believe they’re guilty. Why do we kick a judge from the bench?
Imagine a judge we know is impartial. Put aside how we know that; just assume that we do. If we know the judge is impartial, why should the fact that others might “reasonably” think otherwise matter? Sure, if we don’t know, what others might “reasonably” think might be important. But what if we do know?
The answer to these questions is that uncertainty has its own effect. The law might say someone is innocent until proven guilty. But law be damned, if you learn that a school bus driver has been charged with drunk driving, you’re going to think twice before you put your child on his bus. Indeed, even if you think the charge is likely false, the mere chance that it is true may well be enough (and rationally so) for you to decide to drive your kid rather than risk his life on the bus. The charge doesn’t make the driver “guilty” in your head; but it certainly will affect whether you think it makes sense to let him drive your kid.
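The bus-driver logic can be made concrete with a quick expected-cost sketch. Every number below is hypothetical, chosen only to show the structure of the decision: even if you think the charge is probably false, a small chance of a catastrophic outcome can rationally tip the choice.

```python
# Hypothetical expected-cost comparison for the school-bus decision.
# None of these probabilities or costs come from data; they only
# illustrate why a probably-false charge can still change behavior.

p_charge_true = 0.10          # you think the drunk-driving charge is likely false
p_crash_if_drunk = 0.01       # per-trip chance of catastrophe with a drunk driver
p_crash_baseline = 0.00001    # per-trip chance of catastrophe otherwise
cost_catastrophe = 1_000_000  # stand-in "cost" of the worst outcome
cost_of_driving = 20          # inconvenience of driving your child yourself

# Expected cost of the bus mixes the two scenarios, weighted by your
# belief that the charge is true.
expected_cost_bus = (p_charge_true * p_crash_if_drunk
                     + (1 - p_charge_true) * p_crash_baseline) * cost_catastrophe

# Driving yourself: the inconvenience plus the same baseline road risk.
expected_cost_drive = cost_of_driving + p_crash_baseline * cost_catastrophe

print(f"bus: {expected_cost_bus:.0f}, drive: {expected_cost_drive:.0f}")
```

On these made-up numbers, the bus carries an expected cost many times larger than driving, even though you put only a 10 percent chance on the charge being true. That is the point the recusal rules encode: uncertainty alone, rationally weighed, is enough to change the decision.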
That’s the same (Bayesian) principle that guides conflict-of-interest analysis. The legal system doesn’t assume that a judge is partial merely because her “impartiality might reasonably be questioned.” But it does assume that the fact that her “impartiality might reasonably be questioned” will affect people’s trust of the judicial system. And so to protect the system, or, more precisely, to protect trust in the system, the system takes no chances. As President William Howard Taft explained in his “Four Aspects of Civic Duty”:
This same principle is one that should lead judges not to accept courtesies like railroad passes from persons or companies frequently litigants in their courts. It is not that such courtesies would really influence them to decide a case in favor of such litigants when justice required a different result; but the possible evil is that if the defeated litigant learns of the extension of such courtesy to the judge or the court by his opponent he cannot be convinced that his cause was heard by an indifferent tribunal, and it weakens the authority and the general standing of the court.
The legal system thus avoids that chance. Or at least it takes the smallest chances it can. In this sense, following Professor Dennis Thompson, we can say that the “appearance standard identifies a distinct wrong, independent of and no less serious than the wrong of which it is an appearance”—because of this effect.
But there’s another side to this “impartiality might reasonably be questioned” standard that people often miss: the word reasonably. The question isn’t whether any crazy person might wonder if a judge were biased. (“Your Honor, I notice you have the same birthday as the plaintiff, and I am concerned that might mean you are biased against Capricorns.”) The question is what a “reasonable” person might think. And so a reasonable question might be: Why stop at “reasonable”? If the objective is to protect the system, why not require recusal whenever someone in good faith at least worries that the judge is biased?
I learned about this side of the recusal rules the hard way. On December 11, 1997, the judge in the Microsoft antitrust trial appointed me a “special master” in that case. That meant I was to be a quasi, temporary, mini-judge, charged with understanding, and then making understandable, a complex technical question about how Windows was “bundled” with Internet Explorer. Microsoft didn’t want a special master in the case, or at least they didn’t want me. So almost immediately after the appointment, they launched a fairly aggressive campaign, in the courts and in the press, to get me removed. Their opening bid was that I used a Mac (on the theory that a neutral master would use Windows). It went downhill from there.
My first reaction to this firestorm (coward that I am) was to flee. To resign. I didn’t need the anger. I certainly didn’t need the hate mail (and there was tons of that). But when I spoke to a couple of friends who were federal judges, they insisted that it would be wrong for me to resign. If a party could dump a judge merely by complaining, then parties could simply dial through all the judges until they found the one they liked best. The test, as I was told, was not whether a party could question my impartiality. The question was whether my “impartiality might reasonably be questioned.” In their view, given the facts, it could not.
This story will help us understand the dynamic I described earlier in this chapter. In both cases, there was a factual question at stake: Is BPA, or are cell phones, safe? In both of those cases, there was a process by which that question was answered: scientific studies that presumably applied scientific standards to reach their results. But in both cases, there was also an influence present when conducting those studies that made at least some of us wonder. Why—except bias, one way or the other—would 72 percent of industry-funded studies find no danger from cell phones when 67 percent of independent studies found danger? Why would 100 percent of industry-funded studies find no harm from BPA while 86 percent of independently funded studies found some harm? And isn’t it reasonable that someone would wonder about scientific integrity, given these differences?
That question at the very least reduces our confidence in the resulting claims of safety. Like a mom deciding to drive her kid to school rather than let him ride the school bus, that lack of confidence could also change how we behave. Again, not because we’ve necessarily concluded that something is unsafe, but because we now have reason to doubt whether something we thought safe actually is. That reason is the presence of an interested party, suggesting that it might have been interest, not science, that explains the difference in the result.
Put most simply: the mere presence of money with a certain relationship to the results makes us less confident about those results.
What follows from this put-most-simply fact, however, is not itself simple. The concern about conflicts must be “reasonable,” as I’ve described, and there are many contexts in which we can’t simply wish away the money that weakens our confidence. Sixty-three percent of drug trials are funded by the pharmaceutical industry. We can’t just pretend that’s a small number, or wish the government would step in to fund trials on its own. Likewise with chemicals such as BPA or devices such as cell phones: It’s a free country. The government should have no power to ban industry from studying its own chemicals or devices, and publishing to the world those results, at least barring fraud.
Instead, our response to this conflict, or potential conflict, is always going to be more complicated. We need to ask whether there is a feasible or reasonable way to win back the confidence that the presence of money takes away. Are there procedures that would remove the doubt of the reasonable person? Are there other ways to earn back that confidence?
Many private institutions get this. Many structure themselves in light of it, taking the risk of this apparent corruption into account and pushing it off the table.
If you’re old enough to remember the Internet circa 1998, you may remember thinking, as I did then, “This is a disaster. There’s no good way to search this network without drowning in advertising muck.” Then came Google, committed to the idea, and convincing in their commitment, that at least the core search results (not the “sponsored links” but the core bottom-left frame of a search screen) were true, that they reflected relevance as judged by some disinterested soul (maybe the Net?), not as bought by the advertisers. As the founders wrote at the time,
We expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of consumers…. [T]he better the search engine is, the fewer advertisements will be needed for the consumer to find what they want…. [W]e believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.
That commitment gave us confidence. It let us trust the system, and trust Google.
The same with Wikipedia. Wikipedia doesn’t accept advertising. As it is the fifth most visited site on the Internet, that means it leaves about $150 million on the table every year. As a believer in Wikipedia, and the values of Wikipedians, this is a hard fact for me to swallow. The good (at least from my perspective) that could be done with $150 million a year is not trivial. So what is the good that the world gets in exchange for Wikipedia’s abstemiousness?
As Jimmy Wales, founder of Wikipedia, described it to me, “[W]e do care that… the general public looks to Wikipedia in all of its glories and all of its flaws, which are numerous of course. But the one thing they don’t say is, ‘Well, I don’t trust Wikipedia because it’s all basically advertising fluff.’ ”
So the Wikipedia community spends $150 million each year to secure the site’s independence from apparent commercial bias. Wow.
Or again, think about the Lonely Planet series. Among the most popular travel books in the world (with 13 percent of the market share), Lonely Planet has earned the trust of many. It is a reliable source for information about the unknown places you might visit. I use the books as often as I can.
But in gathering the information for its books, Lonely Planet needs to assure, both itself and its readers, that the reviews it is relying upon are trustworthy. And it strives to earn that trust with a very clear policy: “Why is our travel information the best in the world? It’s simple. Our authors are passionate, dedicated travelers. They don’t take freebies in exchange for positive coverage so you can be sure the advice you’re given is impartial.”
In all three of these cases, these private entities depend for their success upon the public trusting them. So they adopt rules that help them earn that trust. These rules alone, of course, are not enough. But they help. It is because of them that I have reason at least to give the institution the benefit of the doubt. Or, more important, it is because of these rules that I don’t automatically assume financial bias whenever I see something I don’t understand, or don’t agree with. These clear and strong rules cushion skepticism; they make trust possible because they give the public a reason to believe that the institution will act as it has signaled it would act.
These freedom-restricting rules, moreover, are self-imposed. Search results with integrity were a competitive advantage for Google. That’s part of why it made that choice. The same with Wikipedia: The Internet is filled with ad-driven information sites. Wikipedia’s choice gave it a competitive advantage over others, and a community advantage as it tried to attract authors. Likewise with Lonely Planet: It wants a brand people can trust, as a way to sell more books. It therefore restricts its freedom to better achieve its goals.
In none of these cases was government regulation necessary. In none of the cases did some professional body, such as the Bar Association or the AMA, need to intervene to force the companies to do what was “right.” “What was right” coincided perfectly with what was in the best interest of these entities. As Adam Smith famously said, they were “in this, as in many other cases, led by an invisible hand to promote an end which was no part of [their] intention.”
That’s not always true of course. Indeed, as we’ll see, pursuing self-interest alone, without the proper regulatory structure, is often fatal to the public interest. But here, private interests coincide with a public good. Government intervention was therefore not necessary.
I’m sure that with each of these entities, this freedom-restricting rule wasn’t obvious, at least at the time it was chosen. When Google launched in a big way, its biggest competitor was the ad-driven Yahoo. At the time, I’m sure everyone thought the future of Internet search was simply Yellow Pages on steroids. Wikipedians fight all the time about whether the restriction on advertising is actually necessary. And I’m quite sure that the editors at Lonely Planet have at least thought about how much cheaper their production costs would be if the reviewers got comp’d meals and lodging. My claim with each is not that the choice was easy or obvious. It is instead that the choice was made with the belief that the choice, regardless of the cost, was in the long-term interests of that institution.
In each case, these institutions recognized that to preserve a public’s trust, they had to steel themselves against a public’s cynicism. They had to starve that cynicism by structuring themselves to block the obvious cynical inference that money in the wrong place creates. Not money. Money in the wrong place. If properly cabined, or properly insulated, money within an institution (Google, Wikipedia, Lonely Planet) can be fine. It is when it is in a place where, as we all recognize, it will or can or could cause even the most earnest compass to deviate that we should have a concern.
There’s a frog at the center of a well-known metaphor about our inability to respond “to disasters that creep up on [us] a bit at a time.” The rap on the frog, it turns out, is false: frogs will jump from a tub of water as it is heated to boiling. (Trust me on this; please don’t try it at home.) But the charge against us is completely fair: We don’t do well with problems that don’t scream their urgency. We let them slide. We wait for the dam to break.
The previous two chapters should suggest a related disability that is also fairly predicated of us: We don’t do well responding to bads that stand between good and evil. We teach our kids the difference between good and evil. We craft blockbuster movies to test good versus evil. But to grow up is to recognize, and to live, the bad that stands between good and evil. And the challenge, always, is to motivate a response.
For while we respond appropriately to evil, we don’t respond well to good souls who do harm. We don’t identify the harm well. We don’t act to stop it. Indeed, even when we see the harm clearly, we deny its most obvious source. We can’t imagine this decent soul has caused it. So we scour the scene for the obviously corrupt or evil one, as if only the evil could be responsible for great harm.
Yet we all know better than this. We all recognize Yeltsin, or his character. It is our father. Or our mother. Or our uncle, or wife. Or us. We believe the dependency is his or her responsibility, not ours. We tell ourselves, There’s nothing I can do. And so we don’t.
It is because we are so familiar with this subtle form of bad—and with our weakness in the face of it—that we are in turn also so suspicious, or cynical, when certain puzzles confront us, and we see an obvious source—money in the wrong place.
The job of the decent souls we call “scientists” is to tell us truthfully whether BPA is safe, or whether cell phones will give us gray lumps behind the ears. But we’re very quick to believe that even these good souls can be bought—again, not just by bribes, or through fraud, but in the subtle and obvious ways in which we all understand that money bends truth. So merely telling Americans that money is in the mix is enough for most Americans to jump to the ship Cynical. An institution that depends upon trust to be effective will thus lose that trust, and therefore become less effective, if it lets money seep into the wrong place.
I mark these as obvious points, yet we forget them, always. We know them; they guide how we live and negotiate our day-to-day life. But when we talk about the great failing that is at the center of this book, Congress, it is as if we return to the moral universe of kindergarten. We have an enormous frustration with our government. All sides try to identify the source of our frustration with this institution in the evil or stupid acts of evil or stupid people—senators, or worse, congressmen! Americans believe “money buys results” in Congress—almost literally. Some believe congressmen take bags of cash in exchange for changing their votes. They speak as if they believe that members of Congress entered public life because they thought public life was a quicker path to quick cash. They wouldn’t have their son or daughter marry a member of Congress—at least the member of Congress who lives in their abstract thoughts.
Yet when we actually meet our congressman, we confront an obvious dissonance. For that person is not the evil soul we imagined behind our government. She is not sleazy. He is not lazy. Indeed, practically every single member of Congress is not just someone who seems decent. Practically every single member of Congress is decent. These are people who entered public life for the best possible reasons. They believe in what they do. They make enormous sacrifices in order to do what they do. They give us confidence, despite the fact that they work in an institution that has lost the public’s confidence.
Don’t get me wrong. Of course there are exceptions. Obviously some are more and some are less decent; some are more and some are less publicly minded. And no doubt, why politicians make the sacrifices they make is hard, psychologically, to understand. But however much you qualify the rosy picture I have drawn, the truth remains miles from the kind of machine of evil that most of us presume occupies our capital. Any account of the failure of our democracy that places idiots or felons in the middle fundamentally misses what’s actually going on.
Instead, the story of our Congress is these two previous chapters added together:
We have a gaggle of good souls who have become dependent in a way that weakens the democracy, and
We have a nation of good souls who see that dependency, and assume the worst.
The first flaw bends policy. The second flaw weakens the public’s trust. The two together condemn the republic, unless we find a way to reform at least one.
None of us are expert—enough. We each may know a great deal about something, but none of us know enough about the wide range of things that we must understand if we’re to understand the issues of government today.
For those bits that we don’t understand, we rely upon institutions. But whether we trust those institutions will depend upon how they seem to us: how they are crafted, and whether they are built to insulate the actors from the kind of influences we believe might make their decisions untrustworthy.
We don’t have a choice about this. We can’t simply decide to know everything about everything, or decide to ignore the things that make us suspicious. We are human. We will respond in human ways. And we will believe long before scientists can prove. Thus we must build institutions that take into account what we believe, especially when those beliefs limit our ability to trust.
Including the institutions of government: We don’t have a choice about whether to have government. There are too many interconnected struggles that we as a people face. There may well be a conservative or libertarian or liberal response to those struggles. But all sensible sides believe there’s a role for government in at least some of these struggles, even if some believe that role is less than others.
When the government plays its role, we need to be able to trust it. Not trust that it will do whatever we want, for sometimes our party loses, and when it does, we lose the right to demand that the government do the right (from our perspective) thing. But whether we’ve won or lost, we need to trust that the government is acting for the (politically) correct reasons: liberal, if liberals have won; conservative, if conservatives have won; libertarian, if libertarians have won. We need to believe that the government is tracking the sort of interests it was intended to track. Or at least, as Marc Hetherington puts it, that the “government is producing outcomes consistent with [our] expectations.”
When the actions of government conflict with those expectations, we will look beyond trust, for other reasons, to see whether they might explain the puzzle. Other reasons, such as money in the wrong places. When we find it—when we see that money was in the wrong place—it will affect us. It will weaken our trust in government. It will undermine our motivation to engage.
In this section, I select four policy struggles and point to puzzles about each. I then stand these puzzles next to some facts about money that might or might not have affected each struggle. The drama here is not always as pronounced as with BPA or cell phones. But the exercise is crucial to understanding the kind of trouble our republic is facing.
Type 2 diabetes is a disease in which the body misuses its own insulin. The body’s cells become resistant to insulin, and the pancreas overproduces it to compensate. Insulin resistance increases the level of free fatty acids in the bloodstream, and the level of sugar. Out-of-whack levels of fatty acids and sugar do no good. The direct harms are bad enough. Indirect harms include the loss of limbs, blindness, kidney failure, and heart disease.
In 1985 only 1 to 2 percent of children with diabetes had type 2 diabetes. Of the adults with diabetes, 90 to 95 percent had type 2. Over the past two decades, these numbers have changed, dramatically. Now it is children who, in at least some communities, “account for almost half of new cases of type 2 [diabetes].” Among all new cases of childhood diabetes, “the proportion of those with type 2… ranges between 8% and 43%.”
In the view of some, the rise in type 2 diabetes among kids is tied to an “epidemic” rise in childhood obesity. Today, 85 percent of children with type 2 diabetes are obese. That level, too, is rising.
And obesity is rising not just among children. Between 1960 and 2006, the “percentage of obese adults has nearly tripled…. [T]he proportion… who are ‘extremely obese’ increased more than 600%.” Amazingly, less than a third of Americans ages twenty to seventy-four today are at a healthy weight. That proportion is not going to improve in the near future.
Obesity-related disease costs the medical system $147 billion annually—a greater burden than the costs of cigarettes or alcohol.
So what accounts for this bloat? How did we go from being a relatively healthy country to one certain to blow the highest proportion of GDP of any industrialized nation dealing with the consequences of one thousand too many Twinkies?
The most likely reason for this explosion in obesity is a change in what we eat. As people who know something about the matter will testify, we eat too much of the wrong stuff, and not enough of the right stuff: too much sugar, fat, processed food; not enough vegetables and unprocessed food. Between 1990 and 2006 the percentage of adults who ate five or more fruits and vegetables a day fell from 42 percent to 26 percent. Americans now drink fifty-two gallons of soft drinks a year, with teenage girls getting 10 to 15 percent of their total caloric intake from Coke or Pepsi. These choices matter to our bodies. They make us unhealthy and increasingly fat.
Why we make these particularly bad eating choices is a complicated story. We all (and especially women) work outside the home more than before. That means we have less time to prepare meals and more need for meals prepared by others. The others preparing those meals recognize that certain food qualities—the sweetness, the saltiness, the fattiness—will affect the strength of demand for that food. The ideal demand-inducing mix is all three together: think double-tall caramel latte.
We’re not about to empower federal food police, however, and neither are we going back to the 1950s, when more of us stayed at home cooking beets (or better). If we’re going to make progress with this problem, we need to think about the parts of the problem that we can actually change.
The part that I want to focus on is the economics of what we eat. Or, more precisely, the economics of the inputs to what we eat. It’s clear we eat a lot of sweet stuff. Since 1985, U.S. consumption of all sugars has increased by 23 percent. But what’s interesting is the mix of the sweet stuff we eat. It’s not just sugar, or predominantly sugar. Increasingly it is high-fructose corn syrup, a sugar substitute. As late as 1970, high-fructose corn syrup was essentially absent from the American diet. By 1985 it accounted for 35 percent of sugar consumption. In 2006 that number had risen to over 41 percent. What explains the shift?
One simple answer is price. Natural sugar is expensive, relative to high-fructose corn syrup. So the market in sweeteners moves more and more to this sugar substitute. Or better, races to this sugar substitute. Forty percent of the products in your supermarket right now have high-fructose corn syrup in them. That number is certain to rise.
Invocation of the “market” is likely to lead some to say, “Them’s just the breaks.” Markets are designed to channel resources to where they can be most efficiently used, and to push out inefficient inputs for more-efficient ones.
Yet lovers of the market should hesitate a bit here before they embrace this particular mix of sweetness. Indeed, an alarm for free-market souls should sound whenever anyone talks about the input costs from agriculture and related industries. Even for a liberal like me, it is astonishing to recognize just how unfree the market in foodstuff is. And it is embarrassing to reckon the huge gap between our pro-free-market rhetoric around the world and the actual market of government regulation of food production we’ve produced here at home. As Dwayne Andreas, chairman of Archer Daniels Midland (ADM), one of the most important beneficiaries of our unfree-food market, told Mother Jones: “There isn’t one grain of anything in the world that is sold in a free market. Not one! The only place you see a free market is in the speeches of politicians. People who are not in the Midwest do not understand that this is a socialist country.”
A socialist country.
It’s easy to see why this enormously wealthy capitalist celebrates this chunk of American socialism: he is a primary beneficiary. Headquartered in Illinois, ADM is a conglomerate of companies with revenues exceeding $69 billion in 2009. According to one estimate, at least 43 percent of ADM’s annual profits are “from products heavily subsidized or protected by the American government.” More dramatically, “every $1 of profits earned by ADM’s corn sweetener operation costs consumers $10, and every $1 of profits earned by its ethanol operation costs taxpayers $30.”
Andreas is certainly right that few from the coasts (including the west coast of Lake Michigan) recognize just how pervasive this socialism is. We protect milk in America. Milk, for God’s sake! “Most milk in the United States is marketed under… regulations known as ‘milk marketing orders.’ Currently, there are [ten] federal orders that regulate how milk is priced.”
That means there is a map controlled by government regulators that divides the country and sets the price. And by “most,” that commentator means that almost 60 percent of milk production is under federal regulation, with most of the rest subject to state regulation.
This regulation is intended to subsidize dairy farmers. The Organisation for Economic Co-operation and Development (OECD) estimates that that subsidy increases the price of milk by about 26 percent. Cheese costs 37 percent more in the United States than elsewhere, again because of this regulation. Butter: 100 percent more in the United States than elsewhere. These differences are not trivial.
This system of subsidy dates back to the New Deal, when at least the government had the excuse of the phenomenally bad economics that seemed to rule the day. “Got a depression? Here’s an idea: mandate higher prices!”
Since the 1930s the economics has improved. The politics has not. Richard Nixon hinted that he planned to abolish the price supports for milk. After receiving—because of the hints?—$2 million in campaign contributions from the dairy lobby, he changed his mind. Since his flirtation with free markets, no one has seriously thought to end this economic idiocy—because it is political genius. Highly organized special interests leverage their power to transfer wealth from consumers to farmers.
And not just dairy farmers. The government has intervened to protect shrimp producers against foreign competition. It has blocked more-efficient Brazilian cotton producers from selling in the American market (by subsidizing American cotton farmers and paying off Brazilian farmers so they won’t retaliate). It has waged war to protect banana producers. It has even imposed import restrictions and offered low-cost loans to protect peanut farmers (and no, Jimmy Carter is not to blame for that).
This protection is not just for farmers. Republican president George W. Bush led the charge to protect steel in 2001. So, too, do we protect domestic lumber firms from Canadian competition. According to the Cato Institute, this adds between fifty and eighty dollars per thousand board feet, pricing three hundred thousand families out of the housing market. As University of Chicago professors Raghuram Rajan and Luigi Zingales estimate, “trade restrictions imposed in the 1980s… cost consumers $6.8 billion a year, while the value of government subsidies received by the industry over the same period amounted to $30 billion.”
Liberals are often untroubled by the idea of the government mucking about in the market. They like the idea of the government stepping in to help the weak. And certainly, as we non-farmers are likely to believe, farmers are among the poorest in our society. If a bit of milk regulation keeps a few cows on a dairy farm, latte-sipping Starbucks customers can afford it.
But these subsidies don’t help poor farmers. Nor are they produced because of a concern for the poor. The biggest beneficiaries are the world’s richest and most powerful corporate farmers. Ten percent of the recipients of farm subsidies collect 73 percent of the subsidies—between 2003 and 2005, $91,000 per farm. The average subsidy of the bottom 80 percent? Three thousand dollars per farm. And among those receiving large farm subsidies are Fortune 500 companies such as John Hancock Life Insurance ($2,849,799), International Paper ($1,183,893), and Chevron/Texaco ($446,914); many celebrities, such as David Rockefeller ($553,782), Ted Turner ($206,948), and Scottie Pippen ($210,520); and several prominent current and former members of Congress such as Chuck Grassley (R-Iowa; 1975–: $225,041), Gordon Smith (R-Ore.; 1997–2009: $45,400), and Ken Salazar (D-Colo.; 2005–2009: $161,084).
The same story can be told about steel. If the United States wanted to help steel workers hurt because of shifts in the market for steel production, it could compensate them directly. But “instead of direct compensation to workers… [the] government imposed tariffs to protect fewer than nine thousand jobs in the steel industry”—which in turn was likely “to cost 74,000 jobs in steel-consuming industries.”
The list of anti-free-market interventions by our government is endless. But the particular regulations I want to focus upon here tie to the cost of sugar and high-fructose corn syrup (HFCS). For the interventions here are quite extreme, and they produce quite obvious effects. HFCS is cheap relative to sugar for two very anti-free-market reasons: the first is tariffs; the second, subsidies.
Excerpted from Republic, Lost by Lessig, Lawrence Copyright © 2011 by Lessig, Lawrence. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.