This chapter will meander like an old riverboat navigating the sandbars as it steams up the calmer waters of the Mississippi River. We'll take a variety of short excursions into Twain's world along the way. Our theme will be Twain's insightful caution about lies, damned lies, and statistics - not to mention common sense and pernicious conventional wisdom - as we quickly narrow our focus to medical issues. The uncertainty in medical statistics will stand in sharp contrast to the exactness found in the physics of later chapters. Twain's sense of human irony will show why you must squint and check your wallet when scientists try to sell you the latest modern version of old riverboat snake oil, or politicians try to impose new medical regulations.
Politicians love statistics. A good example of a politician's favorite analysis tool is UC Berkeley's 1970s attempt to check for sex discrimination in its graduate school admissions. Berkeley had no shortage of in-house talent to conduct this analysis and decided to use what it considered a reasonable criterion: comparing the ratio of females accepted to total female applicants against the ratio of males accepted to total male applicants. The data showed that the ratio of females accepted was smaller than the ratio of males. Hence it was concluded that women were clearly being discriminated against. The next step was to root out this evil injustice by finding out which departments were most flagrantly committing this sinister deed. However, the department-by-department results made no sense - in every department, the male ratio was smaller than the female ratio. Wow! Politicians rejoice!
With actual data and honest analysis, opposite sides can come to a "factual conclusion" that serves their purpose. George W. Bush can travel the campaign trail and lament how every department at Berkeley has been proven to give excessive favoritism to women, while Hillary Rodham Clinton can solicit generous contributions from women's groups by vowing to eradicate the proven bias of universities, like UC Berkeley, that have unfairly discriminated against women. Both can passionately believe they are right. So, who is lying? Well, neither, of course - that's why statistical facts are so much better than regular lies! Statistics are solid facts compared to other campaign claims. Why, hell, the term solid facts doesn't give statistics their saintly due for determining justice. They're miracles for lawyers making a healthy living arguing statistical facts to juries!
We'll soon see how this glorious reality for politicians and lawyers has a real downside for the FDA and medical professionals trying to decide if a new drug for cancer therapy should be used. Why do contradictory statistical results occur using the same data and analysis? How often does this happen? Before delving into these two questions - another more disconcerting question arises. Can statistics get worse than the dilemma above? Answer: Yes. Much worse.
While we look at the above paradox, we're going to step away from the specifics of the Berkeley study. Evaluating a statistical study of sex discrimination is too emotionally charged for our purpose. The so-called "Battle of the Sexes" has raged in many forms since before the trappings of civilization. A few examples of its never-ending "issues" run from the exclusively female population of ancient Lesbos, and the Spartan warriors who stole young boys away from the "poison" of loving mothers, to the more modern Taming of the Shrew and the bra burnings of the 1970s. We'll surely be distracted by headaches from this eternal battle if we try to include political fodder within an objective study of this particular paradox. Instead, let's have a make-believe parallel study about eliminating headaches with aspirin or Tylenol.
A Simple Example of This Paradox
Let's ascribe the Berkeley male data to Tylenol and its female data to aspirin. Those accepted into graduate school will become headaches that disappeared in the study. Graduate departments become clinics and now our substitution is complete. In other words, every clinic will show that aspirin is better than Tylenol, but when all the clinics are combined, Tylenol works better than aspirin! So which really works best?
In this numeric example, we will have only three small clinics conducting this aspirin/Tylenol trial. The volunteers will choose their pain reliever and note whether or not their headache goes away after using either aspirin or Tylenol. One clinic is in Aspen, Colorado, another in Vail, Colorado, and the third is in Mammoth, California.
The results in Aspen were that 10 out of 22 volunteers taking aspirin reported relief, while 6 out of 14 taking Tylenol reported relief. We'll put this and the other clinics in standard chart form:
          Aspirin        Tylenol
ASPEN     10/22 (.455)   6/14 (.429)
VAIL      6/9 (.667)     9/14 (.643)
MAMMOTH   13/21 (.619)   31/52 (.596)
Aspirin works best at every clinic, since 10/22 is greater than 6/14, 6/9 is greater than 9/14, and 13/21 is greater than 31/52. So, aspirin will obviously beat out Tylenol when combining these boring trials. Right?
TOTAL     29/52 (.558)   46/80 (.575)
Wrong! All together, 46 out of the 80 volunteers choosing Tylenol said they got relief while 29 of 52 volunteers choosing aspirin said their headache disappeared. Surprisingly, .558 is smaller than .575, so Tylenol works best when combining these studies! Hmm ... this doesn't make much sense.
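The reversal is easy to verify by hand, or with a few lines of code. The following sketch (mine, not part of the original text) simply recomputes the relief rates from the chart above, first clinic by clinic and then pooled:

```python
# Relief counts: (got relief, group size) for each drug at each clinic.
clinics = {
    "Aspen":   {"aspirin": (10, 22), "tylenol": (6, 14)},
    "Vail":    {"aspirin": (6, 9),   "tylenol": (9, 14)},
    "Mammoth": {"aspirin": (13, 21), "tylenol": (31, 52)},
}

# Per-clinic rates: aspirin wins at every single clinic.
for name, c in clinics.items():
    asp = c["aspirin"][0] / c["aspirin"][1]
    tyl = c["tylenol"][0] / c["tylenol"][1]
    print(f"{name}: aspirin {asp:.3f} > tylenol {tyl:.3f}")

# Pooled rates: add numerators and denominators separately -- Tylenol wins.
pooled = {
    drug: sum(c[drug][0] for c in clinics.values()) /
          sum(c[drug][1] for c in clinics.values())
    for drug in ("aspirin", "tylenol")
}
print(f"Overall: aspirin {pooled['aspirin']:.3f} < tylenol {pooled['tylenol']:.3f}")
```

No arithmetic trick is hidden anywhere: both calculations use exactly the same six fractions, yet the winner flips.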
So, these statistics are contradictory. Unless you have a financial interest in Tylenol or Bayer - or a bad headache - who really cares? Let's raise the stakes and make it personal. What if you have just been told you have a very aggressive type of lung cancer? Your doctor presents you with two treatment options. Neither option offers you much hope of surviving the year with your lung intact ... AND you desperately avoid even acknowledging that you are likely to die soon from this cancer. What if the clinical survival results for your two cancer therapy options are exactly the same as those in the aspirin/Tylenol trial above? Do you choose the cancer treatment that worked best at every clinic, or the one that worked best overall? Our college education tends to focus only on the benefits of statistics, churning out graduates who believe such paradoxes are impossible. Statisticians quietly hide unsightly realities.
This illusion is also helped by our failure to teach an adequate understanding of fractions. This statistical paradox can never occur with standard algebraic addition, where fractions are added by first finding a common denominator and then adding the numerators over that common denominator. These statistics skip the common-denominator step and instead add numerator to numerator and denominator to denominator, so the size of each clinical group is not lost. This sort of addition is common in the world of physics, where vectors are often added component by component to get accurate results.
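To make the two kinds of addition concrete, here is a small sketch (mine, not the author's) using Python's exact Fraction type. Textbook addition of the three aspirin fractions gives a number greater than 1, which is meaningless as a relief rate; adding tops and bottoms separately - what mathematicians call the "mediant" - recovers the pooled rate:

```python
from fractions import Fraction

# Textbook addition: common denominator, then add numerators.
algebra_sum = Fraction(10, 22) + Fraction(6, 9) + Fraction(13, 21)
print(algebra_sum)    # 134/77 -- greater than 1, meaningless as a relief rate

# Pooling trials: add numerators and denominators separately (the "mediant").
def mediant(fracs):
    """Combine (numerator, denominator) pairs the way pooled trials do."""
    return Fraction(sum(n for n, _ in fracs), sum(d for _, d in fracs))

print(mediant([(10, 22), (6, 9), (13, 21)]))   # 29/52 -- the overall aspirin rate
```

The mediant always lands between the smallest and largest of its inputs, which is exactly why a big clinic like Mammoth can drag the pooled rate toward its own result.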
Our discussion on the different methods of adding fractions isn't doing much to save you from your cancer. So let's get back to the problem at hand - Which treatment is your better choice?
Answer #1: Combine all the studies. The reasoning: the more data you have, the more confident you can be in your results. Looking at all the parts as a whole is best.
Answer #2: Choose the one that worked best in each clinic. Since Mammoth had the largest patient base, it overshadows the Aspen and Vail results. Vail may have done something that made both options more effective, while Aspen may have done something that decreased the effectiveness of both. Neither should be penalized for having smaller groups, and Mammoth should not unfairly overshadow the other trials. Disregard accumulated results to the contrary, as they are tainted. Use this logic to choose your cancer option.
Both answers sound reasonable, but which carries more weight? And are there more problems with the supporting arguments than we're addressing? Sure. For example, if you prefer the argument in Answer #2, what do you do when one worked best in 4 out of 5 clinics, and still performed worse overall? Do you still favor the reasoning of Answer #2? Clearly, these sorts of statistical problems make for difficult choices and sleepless nights for cancer patients and doctors trying to give the best care possible.
And we can't come close to touching on all the other possibilities - what if there were further layers of contradiction? What if the option that best preserves a largely intact lung, for better postoperative breathing capacity, mildly contradicts what is best for patient survival in a similarly peculiar way? What then?
Well, at least we know the answer to one brainteaser. Which job is easier: Problem Identifier or Problem Solver?
One can understand why mathematicians tend to migrate to the clearer world of physics, where laws like the conservation of energy and momentum make the math of experiments exact, predictable, and easily repeatable. Finding answers to medical predicaments with statistics often leaves the researcher, regulator, and medical professional with only regrettable choices, particularly when a trade-off must occur. Like Dorothy and the Scarecrow choosing the right path to the Emerald City, statistical results often leave lingering doubts about whether they are really leading us in the right direction. Yet unsure statistical analysis is often the deciding factor in life-or-death decisions.
And the uncertainty gets worse. Many other medical paradoxes and human intrigues exist even before statistics are used for evaluation. Human factors can exponentially increase the uncertainty in medicine. Statistical analysis is supposed to be the unbiased arbitrator determining which medication is best for us. Unfortunately, this arbitrator is fundamentally flawed, as our first very simple example illustrated.
Some Medical Stories from Twain's World
A little anecdotal history of why U.S. medicine has changed its regulatory rules and statistical guidelines helps in understanding a major problem in medicine - seemingly great new therapies aren't always better.
A good example is the use of oxygen tents for healthy newborns in the 1920s and '30s. There have always been a certain number of stillbirths and listless newborns. The thinking was that detachment of the umbilical cord followed by a delayed birth resulted in suffocation, and that this was a major cause of these tragedies. Perhaps giving babies a nice dose of oxygen ASAP would help. Obviously, somewhere between death and a healthy baby lie those who barely survive and are left with permanent damage from lack of oxygen. Oxygen was first administered to the bluest of blue babies by putting the baby in an oxygen tent - with great success. These heartening results led to the thinking that even normal births would benefit from an enriched-oxygen environment after the trauma of delivery. After all, the elderly with lung diseases get great relief from enriched oxygen. What could be the harm? The logical benefit should be a healthier, smarter child. This therapy was new and somewhat expensive, so it was used primarily by those of wealth and medical knowledge. Unfortunately, healthy newborns are highly sensitive to excessive oxygen in the atmosphere. An over-enriched oxygen tent can lead to blindness or mental retardation in a healthy newborn.
This VIP baby oxygen treatment had results similar to the VIP surgery given to President Lincoln for the bullet lodged in his brain; both led to tragedy. Some modern brain surgeons now believe Lincoln would likely have survived Booth's assassination attempt if overly ambitious doctors had opted not to remove the bullet. President Andrew Jackson lived a long and healthy life with a bullet lodged in or near his heart - a memento from a youthful duel. The young Jackson was lucky he had neither fame nor fortune to entice surgeons into VIP surgery. Likewise, parents wanting only the best for their child did not get what they had bargained for when paying for an oxygen tent for their healthy newborn. History has many analogous examples of logical, promising new therapies that were tragically wrong. Math - being the distillation of logic - can be deceptively poor at predicting these tragedies. Embracing a new therapy in medicine has its risks.
Alternatively, a delay in new therapy can be equally tragic. Patients are denied a new prescription therapy prior to FDA approval. Therefore any drug approved and found useful can always be critiqued, in hindsight, as having come to market too late. To skirt unfair finger-pointing at the FDA, our example will be aspirin's ascent from lowly headache reliever to major component of heart therapy. Aspirin was already sold over the counter, so the FDA played the role of bystander rather than its usual role of gatekeeper as aspirin slowly became a mainstay for preventing heart attacks. Our story starts in the early 1960s, when a premier lecturer at Wisconsin's medical school predicted that aspirin would soon become a common tool for heart therapy.
But this view was soon overshadowed by news of a rare but dangerous reaction to aspirin leading to Reye's syndrome in children. The FDA even considered removing aspirin's over-the-counter status because of this danger. By the early 1970s, numerous impressive papers presenting the virtues of aspirin for heart care added to the confusion. A general conclusion gradually emerged - aspirin is a good option for adults with heart concerns but risky for children. Unfortunately, aspirin therapy was advised by only a small minority of moderately risk-taking cardiologists in the early 1980s, and did not achieve broad consensus for heart care until the late 1990s. How many adults with heart disease would have led healthier, longer lives if aspirin therapy had started earlier?
Actually, that rhetorical question has an answer that can be estimated. Assuming a conservative 50,000 deaths from heart disease annually, plus a third more from strokes, totals over 65,000 annual deaths. Now add in double this number for those who did not actually die from a heart attack or stroke but suffered a dramatic loss of quality of life from a weaker heart, and you get close to 200,000 Americans annually. How long was the delay of aspirin therapy, and what percentage would actually have benefited from it? Let's give the delay a range of 15 to 25 years, and the percentage who lost quality of life from the delay a range of 20 to 60 percent. This gives a range from a low of roughly 600,000 to a high of 3 million Americans who most likely lost out on the timely benefit of aspirin therapy. Even the smaller number in our rough estimate is greater than all the Americans who died in combat from WWI through the Iraq conflict. De facto caution has risks too. A reasonable claim from our sloppy math is that over a million Americans were likely denied a longer, healthier life by the cautious delay of aspirin therapy.
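For readers who want to follow the arithmetic, here is the estimate spelled out as a short script (mine, not the author's; every input is an assumed round figure from the paragraph above, not measured data):

```python
# Back-of-envelope version of the aspirin-delay estimate. All inputs are
# assumed round figures, not measured data.
heart_deaths = 50_000                   # assumed annual heart-disease deaths
stroke_deaths = heart_deaths // 3       # "a third more" from strokes
deaths = heart_deaths + stroke_deaths   # over 65,000 per year
affected = deaths * 3                   # add double for nonfatal loss of quality of life
print(affected)                         # close to 200,000 per year

low = affected * 15 * 0.20              # shortest delay, smallest benefit share
high = affected * 25 * 0.60             # longest delay, largest benefit share
print(f"{low:,.0f} to {high:,.0f} Americans")
```

Multiplying the stated inputs straight through gives a low end near 600,000 and a high end near 3 million, so the "over a million" claim sits comfortably inside the range.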
In truth, this estimate is not as sloppy as one might think when compared to the more complex formulas with fancy symbols and names in sophisticated statistical analysis. We will compare this simple estimate to the more complex methods later in this chapter. "Pay no attention to the man behind the curtain," from The Wizard of Oz, has the usual flavor of the replies given to queries about statistics' inner workings.
Another, more poignant example of rethinking the risk of new-therapy exposure is the medical battle against AIDS. Many cancers and other rare catastrophic diseases are attacked one step at a time for a number of pragmatic reasons. One reason is the accumulation of dependable data. Historically, using multiple compounds in a shotgun attempt leaves the researcher with little if any convincing results, and unimpressive results from reckless data will attract no further research funding from the medical community. This ultimately hurts the effort to find a cure. The one-step-at-a-time approach is very logical in a purely theoretical world. However, this pragmatic method is painstakingly slow for those desperate and in immediate need of a cure. The movie Lorenzo's Oil depicts this frustration and despair for those suffering from an incurable, catastrophic rare disease.
Excerpted from The Cults of Relativity by Drake Larson, Nora De Caprio. Copyright © 2008 by Drake Larson. Excerpted by permission.