Airplane crashes. The AIDS epidemic. Presidential election polls and voting results. Global warming. The latest cancer scare. All these news stories require scientific savvy first, to report, and then—for news consumers—to understand. It Ain't Necessarily So cuts through the miasma surrounding media reporting of scientific studies, surveys, and statistics. Whether the problem is bad science, media politics, or a simple lack of information or knowledge, this book gives news consumers the tools to penetrate the hype and dig out the facts. Don't stop flying, run to the doctor, or change your diet before reading It Ain't Necessarily So.
THE NEWS THAT ISN'T THERE
Stories That Are—and Aren't—Covered
When the Centers for Disease Control and Prevention (CDC) found that AIDS deaths increased in 1994, that story was covered by the New York Times—as it should have been. But two months later, the CDC announced that the number of AIDS diagnoses fell in 1995. An interesting and important piece of news, you might think, yet the Times effectively ignored it.
In April 1996 the federal government released figures from the National Criminal Victimization Survey (NCVS) showing that violent crime dropped slightly between 1993 and 1994 and that sexual assaults and rapes declined by an impressive 13 percent. That story, however, received almost no news coverage. But in October 1996, when the FBI's Uniform Crime Reports (UCR) showed that violent crime had dropped by 4 percent between 1994 and 1995, the story made front-page news—even though the UCR's figures are generally thought to be less reliable than those of the NCVS.
A Federal Reserve study showing that minority applicants for mortgages fared worse than white applicants was big news in the summer of 1995, but in the same month Federal Reserve figures pointing to a 55 percent increase in mortgages for black applicants were mostly ignored.
In May 1996 the media appropriately alerted readers to a World Health Organization (WHO) report calling attention to a worldwide resurgence of familiar infectious diseases like tuberculosis. But in the same month, the media failed to cover data released by the CDC showing that tuberculosis cases had declined to an all-time low in the United States.

In the fall of 1996 front-page stories were devoted to a report from the National Center for Health Statistics (NCHS) that showed a decline in illegitimate births in 1995. Only a few months earlier, though, the media ignored an NCHS finding that illegitimate births had reached an all-time high in 1994.
As these examples illustrate, some stories make it into the newspapers, while others don't—and it's not always because the stories that make it are inherently more newsworthy. If a story isn't covered, is it "news"? Almost by definition, no, just as some philosophers like to argue that a tree falling in the forest doesn't make a sound unless someone is nearby to hear it. News, in this view, is what appears in the newspapers.
But even if uncovered stories aren't news, one can still argue that they should have been news. Not, of course, from the standpoint of the occurrences themselves; we don't mean to attribute human feelings to uncovered data, imagining them conversing with other data that do make it into the newspapers, insisting (as Marlon Brando does in On the Waterfront) that "I coulda been a contender." Instead, of course, we adopt the standpoint of news consumers when we say that some stories should have been covered, even if they weren't. Often news consumers would be better informed if they learned of research findings that go unreported even in our best and most comprehensive papers.
Uncovered potential news stories are reminiscent of the nursery rhyme about the "little man who wasn't there": "Last night I met upon the stair/A little man who wasn't there/He wasn't there again today/Oh, how I wish he'd go away." Our purpose is to account for the phenomenon of the little (and sometimes big) story that wasn't there, because it escaped the attention of reporters.
Why don't we learn about some developments, even though they seem to be of genuine importance? To answer this question, we'll begin by looking at individual stories. Our procedure here will necessarily differ from what it is elsewhere in our examination of what's right and what's wrong with news coverage. We can't examine nonexistent coverage, but we will explain the importance of the various ignored research findings and document the fact that they were ignored. So as to set a standard of newsworthiness that in our view was met by the uncovered story, in each case we'll pair an uncovered story with a related (and not obviously more significant) story that did receive media attention.
An Untold AIDS Story
In February 1996 the New York Times appropriately and responsibly reported the CDC's finding that in 1994 deaths from AIDS had increased by 9 percent from the previous year; AIDS actually became the leading cause of death among American women aged twenty-five to forty-four. As the article explained (summarizing the views of CDC scientist John Ward), "death rates for AIDS [are] only one way to measure the epidemic. Another measure is the number of people in which AIDS has been diagnosed.... The measure that gives the most up-to-date measure [sic] of the continuing spread of H.I.V. is the number of new infections" that have not yet developed into full-blown AIDS.
Among these various AIDS statistics, death totals in effect track the epidemic's past; AIDS deaths provide a coda to tragedies that may have occurred ten or fifteen years earlier, when individuals first were infected with the virus that until recently was thought to lead inexorably to their doom. Diagnoses of AIDS and reports of HIV infections, on the other hand, track the epidemic's future: HIV infections become AIDS diagnoses, and AIDS diagnoses (it was believed, before the promising development of anti-AIDS drug "cocktails") culminate in AIDS deaths. In principle the number of HIV infections offers a better guide to the future than the number of AIDS diagnoses, but in practice the number of new infections is harder to pin down (since all AIDS diagnoses, but not all reports of HIV infections, must be reported to the CDC).
All of this is to say that the yearly total of new AIDS diagnoses is an important statistic: it enables us to judge whether or not the disease is likely to do even more damage in the future. For that reason, the CDC's news about 1995 AIDS diagnoses was surprisingly encouraging: in April 1996—two months after releasing the information about AIDS deaths—the CDC reported that the number of AIDS diagnoses had fallen 7 percent between 1994 and 1995 and that diagnoses of AIDS in children had dropped by 23 percent. The CDC learned of 79,897 people who were diagnosed with AIDS in 1994 (including 1,034 children), whereas the number fell to 74,180 in 1995 (including only 800 children).
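The percentage declines the CDC reported can be checked directly from the raw counts quoted above; a quick sketch (the helper function and its name are ours, not the CDC's):

```python
# Percent-change check on the CDC AIDS-diagnosis figures quoted in the text.

def pct_change(old, new):
    """Percentage change from old to new; negative indicates a decline."""
    return (new - old) / old * 100

# All diagnoses: 79,897 in 1994 versus 74,180 in 1995
overall = pct_change(79_897, 74_180)   # about -7.2, reported as a 7 percent drop

# Pediatric diagnoses: 1,034 in 1994 versus 800 in 1995
children = pct_change(1_034, 800)      # about -22.6, reported as a 23 percent drop

print(f"overall: {overall:.1f}%  children: {children:.1f}%")
```

Both reported figures hold up: the counts imply declines of roughly 7.2 percent overall and 22.6 percent among children.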
Thus the CDC offered on-the-whole encouraging news, indicating that the horrific toll taken by AIDS promised to decrease in years to come. And particularly because so much of the news about AIDS over the years has been so grim, one might have thought that newspapers would eagerly seize on the small glimmer of encouragement offered by the CDC as evidence that things were finally becoming less dire. In the months preceding the first successes of the promising drugs known as protease inhibitors, an "AIDS-is-becoming-less-bad-than-you-think" story would really have been new—and should really have been news.
But it wasn't, at least as far as the Washington Post and the New York Times were concerned. To be more precise, the Post and the Times chose to treat the story as local (and discouraging) news, rather than national (and somewhat encouraging) news. The Post ignored the story in its news columns but published an editorial focusing on the fact that the District of Columbia has a higher proportion of AIDS cases than any state. The editorial also argued that the CDC figures don't "necessarily mean the disease is on the wane," because "the definition of AIDS was changed in 1994," when "many more cases were moved from HIV-positive status to full-blown AIDS. The number of 'new' cases reported that year included thousands that wouldn't have been counted at that stage under the old guidelines," which means that the perceived "drop" in the 1995 numbers is "exaggerated."
Thus the Post attempted to explain on its editorial page why it failed to cover this particular bit of basically encouraging AIDS news. But the explanation happens to be wrong: the CDC's redefinition of AIDS took place in 1993 (when the number of AIDS diagnoses surged to 105,828) rather than 1994. Consequently, the 74,180 AIDS diagnoses in 1995 are in every respect comparable to (and represent a genuine decline from) the 79,897 AIDS diagnoses in 1994.
The Times, by contrast, did not attempt to explain away the encouraging CDC report; it simply ignored what was encouraging about it. The report was covered by the Times metropolitan desk, for which the only relevant news was that Jersey City, New Jersey, had attained the "grim distinction" of being "second only to Washington in numbers of AIDS cases per 100,000 population." Almost in spite of itself, however, the Times story (which was all of eighty-four words) conveyed to the attentive reader that even for Jersey City the news was not unrelievedly "grim." The Times noted that "Jersey City's rate in 1995 was 138.1 cases [per 100,000 people], down from 148.7 in 1994." Thus the Jersey City rate declined by 7.1 percent, which almost exactly replicates the decline in the national rate. The decline in the national rate went unmentioned in both the Post and the Times.
As we have argued elsewhere, the media's coverage of AIDS has tended (to reverse the words of songwriter Johnny Mercer) to "accentuate the negative" and "eliminate the positive." As a result, many inherently newsworthy findings about AIDS have not in fact become actual news.
What's Not Reported Can Be Criminal
In October 1996 the Washington Post published a front-page story documenting an encouraging drop in the nation's crime rate. The UCR, the FBI's survey of crimes reported to almost all American law-enforcement agencies, showed that violent crime fell 4 percent between 1994 and 1995; crime overall dropped to its lowest level in a decade.
In some ways the attention given to the release of the UCR data was surprising. To begin with, as the Post article pointed out, the drop in serious crime had already been documented back in May, when preliminary data from the UCR were first made available. The preliminary data had not been covered by the Post but did receive attention in, for example, the Chicago Tribune. More than five months before the Post article appeared, Tribune readers learned that crime had fallen for the fourth straight year in 1995, with particularly impressive drops in the number of murders, robberies, and rapes. In that respect, the Post accorded front-page status to something that was arguably not "new" but instead a confirmation of what was already "old."
The attention lavished on the UCR is also surprising because the report isn't thought to give a particularly accurate reading of America's crime problem: the UCR counts only crimes that are reported to the police (and then reported by the police to the FBI), while many crimes are committed but never reported to the police. With good reason, the FBI's count of rapes in particular is thought to be unreliable, since we know that a great many rape victims never report the crime to the police. To be sure, the UCR may offer useful information about the crime rate's trend, since its limitations are the same, year in and year out; nevertheless, its data aren't considered all that reliable.
For this reason, the government's more accurate survey of crime is thought to be the NCVS, which surveys a large and representative sample of ordinary Americans to get a sense of the total amount of crime, both reported and unreported. NCVS findings (admittedly for 1994 rather than 1995) were released in April 1996. Although they offered data that are thought to be more informative, data that actually track the UCR numbers reasonably closely, they received virtually no media attention.
The NCVS offered only slightly less encouraging news about our crime problem. The rate of violent crime dropped slightly between 1993 and 1994, falling from 51.3 to 50.8 victimizations per thousand people, a decrease of 1 percent. The rate of property crime dropped substantially, going from 322.1 to 307.6 victimizations per thousand people, a decrease of 5 percent.
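The NCVS figures above are victimization rates per thousand people, so the same percent-change arithmetic applies; a quick check (again, the helper name is ours) also shows how the property-crime figure was rounded:

```python
# Checking the NCVS victimization-rate declines quoted in the text
# (rates are victimizations per 1,000 people).

def pct_change(old, new):
    """Percentage change from old to new; negative indicates a decline."""
    return (new - old) / old * 100

violent = pct_change(51.3, 50.8)     # about -1.0: the "1 percent" decrease
property_ = pct_change(322.1, 307.6) # about -4.5: rounded up to the reported "5 percent"

print(f"violent: {violent:.1f}%  property: {property_:.1f}%")
```

The violent-crime figure matches exactly; the property-crime decline is closer to 4.5 percent, which the report rounded to 5.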
But the one arguably big piece of news in the NCVS was ignored by the media. As we will see in our discussion of surveys in chapter 6, the NCVS was substantially redesigned in 1992, with the aim of obtaining a more accurate count of the total number of rapes, attempted rapes, and other sexual assaults. Between 1993 and 1994, the survey showed, the victimization rates for rape and attempted rape held steady—but the rate for other sexual assaults plummeted by 38 percent. As a result, the combined rate for all sexual assaults fell by 13 percent.
On the face of it, this would seem to be very big news. In our subsequent discussion of surveys we see that the media attentively—and appropriately—covered the redesign of the NCVS, emphasizing that the survey was now likely to be able to count incidents of sexual violence against women more accurately. But in only the second year of the redesigned survey, its findings received next to no attention, even though the supposedly more accurate count showed that sexual violence against women had declined significantly.
The NCVS findings should have been big news, then. Amid a mass of contentions that an epidemic of sexual violence was being directed against American women, a survey that feminists had rightly hailed for its improved capacity to detect such violence had uncovered a notable decline. Again, if "news" is understood to be what is unexpected, the NCVS findings on sexual assaults should undeniably have been news. But they weren't. The NCVS results were wholly ignored by the New York Times and the Washington Post; to our knowledge they were covered most extensively in a ninety-four-word report in the Pittsburgh Post-Gazette. Thus a report that should have been big news turned out to be no news.
Do Minorities Get Mortgages?
In 1995 the New York Times paid great attention to a study conducted by the Federal Reserve Bank of Chicago, which found disparities in the treatment of minority and white applicants for mortgages. Among those with bad credit ratings, 90 percent of the white applicants—versus only 81 percent of the minority applicants—received mortgages. The Times placed its 902-word story on the first page of the business section—even though the study was little more than a rehash of 1992 research carried out by the Federal Reserve Bank of Boston. Because the Chicago study was derivative (and because it didn't address criticisms leveled at the Boston study), reporter David Andrew Price of Investor's Business Daily has argued that "the Chicago Fed study really didn't warrant press coverage." Nevertheless, it got it.
But the media paid much less attention to a second set of Fed findings, released the same week, that was arguably more illuminating. Between 1993 and 1994, the Fed found, home loans to black applicants rose by an impressive 55 percent; loans to Hispanics went up 42 percent. Native Americans received 24 percent more loans, and loans to Asians rose some 19 percent. Trailing the pack were white mortgage recipients, whose numbers rose by 16 percent.
These encouraging findings were covered most extensively in the Wall Street Journal. They were not wholly ignored by the Times, but it is fair to say that the Times coverage was far less extensive than that accorded the Chicago study: the 215-word story appeared on page 6 of the business section. Of greater importance, the Times story made the good news sound more like bad news. It led by announcing that "black and Hispanic mortgage applicants remain much more likely than white applicants to be turned down for loans to buy homes but the gap is narrowing." The fact that loans to black applicants "soared" by 54.7 percent was treated as an afterthought, relegated to the third paragraph of the four-paragraph story.
The Times coverage evidently presupposed that the continued (although lessening) disparity in the rates at which minority and white applicants received mortgages was more important than the sharp rise in the number of loans awarded to minority applicants. The Times seemed to explain away the increase in loans to blacks, noting that "more applications were filed" by black applicants.
The alternate view, though, is that the disparities in denial rates are not very meaningful; as the Journal article pointed out, denial-rate disparities "don't take into account such factors as disparities in net worth, assets or credit history." If, as seems likely, white applicants have higher net worth and more assets, the disparities hardly indicate that minority applicants are treated worse than white applicants with comparable qualifications.
But the larger point was made by economist Lawrence Lindsey, at that time one of the seven governors of the Federal Reserve Board. Speaking in September 1995, Lindsey complained that the dramatic increase in home loans to minorities was "the great underreported news story of the summer." He argued that the 54.7 percent increase in loans to black applicants was "a staggering, or at least a newsworthy, economic statistic"—and that, unlike the disparities in rejection rates, it received virtually no attention from newspapers. Lindsey observed that the media had never refrained from "printing negative stories" about the obstacles faced by minority mortgage applicants—so he wondered why many media outlets ignored or played down the good news represented by the huge increase in home loans to blacks.
In short, the increase in loans to minority applicants was not obviously less newsworthy than the disparate treatment of white and minority applicants, yet the latter story received far more attention from the press.
All Quiet on the Tuberculosis (News) Front
In May 1996 the WHO published its annual report, which highlighted grave difficulties in the worldwide battle against infectious diseases—a battle that had seemed easily winnable a generation earlier. The report spoke of "ominous trends on all fronts."
This report was undeniably newsworthy, and it received extensive attention from the press. The advent of AIDS and the danger posed by an outbreak of Ebola in Africa had focused popular attention on the threat of infectious diseases. Additionally, an influential article in the Journal of the American Medical Association (JAMA) had pointed to a large upsurge in American deaths from infectious diseases since 1980. The WHO report added an important international perspective to buttress the concerns.
Nevertheless, coverage of the report may have been a bit unbalanced by a tilt toward the negative. Consider, for example, the story published in [New York] Newsday, written by Laurie Garrett. In her Newsday article Garrett correctly declared that "a host of diseases that were once thought controllable are now taking record tolls worldwide—including tuberculosis, malaria and cholera." But she went on to argue that several diseases once thought almost eradicated in the United States had "surged, notably tuberculosis." The story appeared on May 20, 1996. Exactly ten days earlier, the CDC published a report showing that in 1995 reported TB cases were at the "lowest rate" ever, "since national surveillance began in 1953." In short, Garrett's claim makes sense only if you think there are downward surges.
Lest it seem that we are unfairly picking on Garrett, we note that she stands at the top of her profession: she has served as president of the National Association of Science Writers and is the author of a massive and well-received study of infectious diseases, The Coming Plague: Newly Emerging Diseases in a World Out of Balance. But despite her deserved eminence, Garrett did not cover the CDC report on tuberculosis. Nor did almost anyone else. In fact, as far as we can tell the only report of the CDC's encouraging findings appeared in the Orange County Register.
In this case we do not argue that the CDC tuberculosis study was more important than or even as important as the WHO report. Still, the fact remains that the WHO report basically did nothing but confirm a sense of pessimism about infectious diseases that was already, and in some respects justly, widespread. The CDC report, on the other hand, had all the makings of a great contrarian story. Amidst all the lamentations about the unchecked resurgence of infectious diseases, here was striking evidence that at least one notable infectious disease was very much under control—at least in the United States, the country of greatest interest for almost all American newspaper readers. If ever there was a "man bites dog" story, this would seem to be it. Imagine the headline: TB Is Not To Be: Tuberculosis Cases Way Down. Yet the story received almost no attention anywhere in the United States.
When Are Illegitimate Births Legitimate News?
In October 1996 the National Center for Health Statistics (NCHS) issued data showing that the rate of births to unwed mothers had declined in 1995—the first decline after almost two decades of consecutive rises. The rate dropped by 4 percent, falling from 46.9 births per thousand unmarried women in 1994 to 44.9 in 1995.
This finding was treated as big news, as indeed it should have been. It made the front page of papers like the New York Times and the Los Angeles Times. Analysts offered competing explanations for the encouraging news. Perhaps predictably, President Clinton tried to take credit in his weekly radio address, mentioning a 1996 executive order that required young mothers to stay in school or live with their parents so as to receive welfare benefits. It is hard to see how that order could have affected women who did or did not become pregnant in 1994 and early 1995. Liberals said the drop proved that sex education was resulting in increased use of birth control, while conservatives took it as a sign of increasing sexual abstinence among the young and unwed.
Whatever else was responsible for the downturn, the good news also resulted in part from a methodological improvement. As the New York Times reported, "About half of the decline in the out-of-wedlock birth rate stemmed from changes in reporting births in California, so that children whose parents had different surnames were no longer automatically considered to have been born out of wedlock."
Less than four months earlier, the NCHS produced a second set of natality findings, which pertained to 1994. Though these findings were also of great interest, they were ignored by newspapers, surfacing only when policy analyst Charles Murray called attention to them in an article in the Weekly Standard. As Murray noted, "in 1994 the percentage of children born out of wedlock logged its largest one-year increase since national figures have been kept. The new figure, 32.6 percent, was up from 31.0 percent in 1993.... The percentage of black births out of wedlock passed 70 percent, marking the largest increase since 1973." Yet no newspaper whatsoever covered this alarming story; instead, newspapers covering the 1994 findings reported a drop in the birth rate for teenagers.
The first decline in illegitimate births in twenty years was (and should have been) a huge story. Still, the twentieth consecutive rise in illegitimacy should have been big news as well, especially since, as Murray argued in his Weekly Standard piece, there was every reason to predict a downturn. The illegitimacy ratio (especially among blacks) was already so high in 1993 that there seemed to be little room for further increases. In addition, a consensus had finally developed, among liberals as well as conservatives, blacks as well as whites, that illegitimacy was wrong; it was not unreasonable to expect that this would have some impact on the behavior of the man and woman in the street (and between the sheets).
For these reasons, the decline that did not occur in 1994 should have been newsworthy. Just as it was significant when the dog in a Sherlock Holmes story did not bark during a nighttime intrusion, it's generally noteworthy when predictable things don't take place. Thus the continued rise in the illegitimacy ratio in 1994 should have made headlines; instead it didn't even make it into the newspapers.
How can we make sense of the fact that some stories become news, while others that are often intrinsically of equal interest don't? Our five case studies don't offer anything like an exhaustive—or statistically representative—sample. Still, it may be helpful to begin by comparing the stories that made the news with those that didn't. Crudely categorizing the stories as either optimistic or pessimistic, we can say that in three instances (AIDS, mortgages, and tuberculosis) pessimistic news was covered and optimistic news ignored (or, in the case of mortgages, downplayed). In one instance (crime), optimistic news was covered and more or less equally optimistic news ignored. In the final example (illegitimacy), optimistic news was covered and pessimistic news ignored. These findings appear to suggest at least a modest bias in favor of bad news. Because there is no reason to believe that our sample is representative, the bias that we detect here could safely be ignored—except for the fact that many other observers agree that there is indeed a media bias in favor of bad news.
That view was advanced forcefully in 1984 by columnist Ben Wattenberg, in a book entitled The Good News Is the Bad News Is Wrong. Wattenberg argued that a great many statistical indicators pointing to improvements in Americans' lives—as manifested in things like better health, increased life expectancy, cleaner air and water, greater prosperity, and decreased poverty—were unknown to an unreasonably pessimistic American public, because they had mostly been ignored by the unreasonably pessimistic American media. As he put it (in words that apply to our enterprise as well), "sooner or later [readers would] have to make a choice" whether to believe the "media—or [the] data."
Wattenberg noted that judgments about what is newsworthy often reflect three invalid criteria: "1. Bad news [e.g., 'the carcinogen-of-the-month'] is big news; 2. Good news [e.g., the dramatic increase in American life expectancy] is no news; 3. Good news is bad news [e.g., this 1983 New York Times headline: Longer Lives Seen as Threat to Nation's Budget]."
For our purposes, of course, the question is why bad news might tend to be emphasized. Wattenberg pointed to four causes. He argued first that there is a "commercial negative tilt" to the news, in that bad news appeals to readers and viewers: "Bad news is exciting: scandal, war, murder."
Secondly, he spoke of a "left-of-center tilt in the news-gathering establishment." Most reporters at the most prestigious newspapers are politically liberal, and contemporary liberals "believe that accentuating the negative will let ... others see the problems and this ... will engender further progress." He denied, though, that any journalistic tilt toward the left was "typically a conscious decision," emphasizing instead that "we all see reality through a filter." For liberal journalists, what tends to make it through the filter is "a set of severe problems, subject to solution through aroused concern, often through the instrumentality of government."
In addition, Wattenberg hypothesized that there is an "adversarial tilt to the media," in that "journalists are, almost by definition, antistatus quo," dedicated to criticizing "a corrupt, dissembling, and heartless establishment." Lastly, he spoke of the self-righteousness of reporters who "believe that only their vigilant eyes can keep the nation from international adventurism, political skullduggery, and corporate corruption."
Wattenberg's observations about the proclivities of the press are provocative, and they may help account for the stories that did and didn't make the news in some of the case studies that we've looked at. Still, if journalists are pessimists, they are also professionals. Why would they ignore potentially big stories that are worthy of coverage (and might also advance their careers)? To answer this question, we need to look at some other factors.
Contents
Part 1: Introduction
Part 2: The Ambiguity of News
Chapter 3: The News That Isn't There: Stories That Are—and Aren't—Covered
Chapter 4: Much Ado about Little: Making News Mountains Out of Research Molehills
Part 5: The Ambiguity of Measurement
Chapter 6: Bait and Switch: Understanding "Tomato" Statistics
Chapter 7: The Perils of Proxies: Is There a There There?
Chapter 8: Is the Glass Half Empty or Half Full? A Look at Statistics from Both Sides Now
Chapter 9: Polls Apart: The Gertrude Stein Approach to Making Sense of Contradictory Surveys
Chapter 10: The Reality and Rhetoric of Risk: Telling It Like It Is—and Isn't
Chapter 11: Distinguishing Reports from Reality
Part 12: The Ambiguity of Explanation
Chapter 13: Blaming the Messenger, Ignoring the Message
Chapter 14: Tunnel Visions and Blind Spots: The Danger of Hedgehog Interpretations
Chapter 15: Conclusion: "Hard to Tell": Journalism, Science, and Public Policy—An Inherent Conflict?
Posted May 23, 2002
Murray, Lichter, and Lichter add to the mounting evidence that individuals who rely on the news media as their primary source of information are highly subject to being misinformed on important issues. With respect to the mainstream media, one is left to wonder whether being uninformed is a lesser evil than being misinformed.