Howard S. Becker is a master of his discipline. His reputation as a teacher, as well as a sociologist, is supported by his best-selling quartet of sociological guidebooks: Writing for Social Scientists, Tricks of the Trade, Telling About Society, and What About Mozart? What About Murder? It turns out that the master sociologist has yet one more trick up his sleeve—a fifth guidebook, Evidence.
Becker has been mulling over the problem of evidence for seventy years. He argues that social scientists don’t take seriously enough questions about whether their data actually work as evidence for their ideas. For example, researchers have long used the occupation of a person’s father as evidence of the family’s social class, but studies have shown this to be a flawed measure—for one thing, a lot of people answer that question too vaguely to make the reasoning plausible. The book is filled with examples like this, and Becker uses them to expose a series of errors, suggesting ways to avoid them, or even to turn them into research topics in their own right. He argues strongly that because no data-gathering method produces totally reliable information, a big part of the research job consists of getting rid of error. Readers will find Becker’s newest guidebook a valuable tool, useful for social scientists of every variety.
Publisher: University of Chicago Press
Read an Excerpt
By Howard S. Becker
The University of Chicago Press. Copyright © 2017 The University of Chicago.
All rights reserved.
Models of Inquiry: Some Historical Background
Alain Desrosières (2002) suggested that we think about the development of data, and methods for turning it into evidence, in ways proposed by contemporary work in the sociology of science. He described how the kind of statistical data social scientists use today took shape from the activities of the functionaries of developing European states who needed systematic information so that they could adequately administer the ever-larger territories under their control. And so, unable to get data as accurate as they wanted, they dealt with the resulting uncertainties by developing mathematical methods for estimating the probabilities associated with their conclusions.
Desrosières traces the way modern statistical method and practice developed to do the work whose results these people needed, "the task of objectifying, of making things that hold, either because they are predictable or because, if unpredictable, their unpredictability can be mastered to some extent, thanks to the calculation of probability" (2002, 9). The objects so made embody one kind, perhaps the model we all almost instinctively have in mind, of data. Their ability to hold, to stay constant, is what allows them to work as evidence. When we point to these things-that-hold, we do it confidently, knowing that our scientific peers will agree that those data support the idea we say they support.
Desrosières describes two things researchers have to do to get that kind of assent from their audiences: "On the one hand, they will specify that the measurement depends on conventions concerning the definition of the object and the encoding procedures. But, on the other hand, they will add that their measurement reflects a reality. ... By replacing the question of objectivity with that of objectification ... reality appears as the product of a series of material recordings: the more general the recordings — in other words, the more firmly established the conventions of equivalence on which they are founded, as a result of broader investments — the greater the reality of the product" (2002, 12). And thus the more convincing they are as evidence. I'm concerned with the work done by the "conventions of equivalence" that let us accept the "reality" of what are after all pretty shaky data (no matter how scientific our methods of gathering them). So, yes, our data rest on an agreement to accept as good enough for our purposes the less than perfectly reliable objects our methods of objectification produce.
Social scientists work under conditions they can't control. Unlike some other scientists, we can't even pretend to be sure that the "all other things being equal" condition, so central to the model of experimental control as a way of isolating causal links, ever holds for the data we gather. We're always contending with events and people who interfere with our plans for collecting data that stands up, "holds," as evidence for our ideas. As a result, skeptics always have a good chance of falsifying the links we make to connect our data, evidence, and ideas. Critics can find reasons to reject the data's value as evidence for the idea presented, arguing that something other than what the presenter claims might have produced the same results, pointing to the possibility of errors of observation, analysis, or reporting. Or they can claim that the evidence, even if acceptable, doesn't logically support the idea, because ... and then cite a reason not envisioned in the original research design. Or a critic might argue that the idea is logically fallacious or has some other flaw, rendering untenable the entire argument the research aims to construct.
Disciplines vary in how much their members agree on what they will accept as data "good enough" to serve as evidence for the ideas they are supposed to support. We'll see later that natural scientists have plenty of such troubles themselves but (somewhat) more easily find ways to conquer them. In one extreme and not uncommon case, described by Thomas Kuhn in his classic book on scientific revolutions (2012), all (or, more likely, most) members of the natural-science disciplines agree on the basic premises their collective work rests on. They have, in the useful term he gave us, a paradigm. They agree on what problems they should be trying to solve and what data will provide convincing evidence to support the particular subideas the paradigm generates. They can tell when they're right and when they're wrong.
Kuhn observed that we seldom see any such happy situation in the social sciences, giving as evidence for that conclusion the data he collected observing the small group of social scientists he joined for a year as a fellow at the Center for Advanced Study in the Behavioral Sciences, a group of some fifty scholars eminent in their various fields: "Particularly, I was struck by the number and extent of the overt disagreements between social scientists about the nature of legitimate scientific problems and methods. Both history and acquaintance made me doubt that practitioners of the natural sciences possess firmer or more permanent answers to such questions than their colleagues in social science. Yet, somehow, the practice of astronomy, physics, chemistry, or biology normally fails to evoke the controversies over fundamentals that today often seem endemic among, say, psychologists or sociologists" (Kuhn 2012, xlii). These facts, which surprised Kuhn, the physicist turned historian and sociologist of science, infuse the everyday experience of most social scientists, who know from their own work lives that that's just the way people in their fields do things. But they also know that the disagreements vary considerably in degree, permitting enough consensus among at least some of their members that some work ordinarily does get done.
I grew up in a sociological tradition that minimized such conflicts, although it contained plenty of the methodological differences that became more pronounced in later years. The University of Chicago Sociology Department in the post-World-War-II era (approximately the early 1940s until the middle 1950s), still somewhat influenced by the broad and inclusive vision, created and promoted by Robert E. Park, of what sociology could be, harbored all kinds of serious and deeply felt differences of opinion about these matters, but the differences existed — at least this was my experience, and I wasn't the only one — in an atmosphere of general acceptance of multiple ways of doing research on social life. People argued (after all, it was a university department; what else would they do?) about everything but essentially accepted multiple approaches to basic questions, accepted the data their colleagues provided as evidence for their overlapping ideas. Many people utilized multiple forms of data in their studies. Park's students Clifford Shaw and Henry McKay, for instance, studied juvenile delinquency for years using mass quantitative data, generally taken from police statistics and court records, which permitted the use of statistical techniques of data analysis (correlation coefficients, for example). Simultaneously, they studied the same questions in less formalized ways, collecting and publishing detailed life-history materials provided by individual actors, stories of lives in crime, delinquent careers, successes and failures. Others used similar combinations of material to pursue knowledge about the specific experiences that made up criminal careers, suicides, and other such activities. Some of the great community studies of the period — Middletown (Lynd 1929), Middletown in Transition (Lynd 1937), Deep South (Davis, Gardner, and Gardner 1941), Black Metropolis (Drake and Cayton 1945) — were models of such methodological breadth.
Strong (and stubborn) proponents of differing methodological approaches had major disputes — the disagreements of Herbert Blumer and Samuel Stouffer about what form sociological science should take were legendary — and some people specialized in one method rather than another, but no organized, even institutionalized, conflict went on between what later came to be called "quantitative" and "qualitative" methods. It's true that the building at 1126 E. Fifty-Ninth Street in Chicago, the home of social science at the University of Chicago, bore this legend (attributed to the famous physicist Lord Kelvin) on its facade: "When you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind." But a story lovingly preserved by some of the people who worked in that building, at least in my time, told of the economist Jacob Viner walking by one day, observing Kelvin's remark and saying, contemplatively: "Yes, and when you can express it in numbers, your knowledge is also of a meager and unsatisfactory kind" (Coates and Munger 1991, 275). My introduction to this ecumenical view of my new profession came from Everett Hughes, who had supervised my dissertation. After I got my PhD, the department hired me to do some teaching, which meant that I now attended faculty meetings. I was surprised to see the evident good feeling and friendship between Hughes and William F. Ogburn, who we graduate students (who were not at all aware of what actually went on among faculty members) thought must surely be mortal enemies, and said as much to Hughes. He looked at me like I was insane (I think he must often have had that feeling when I spouted my twenty-three-year-old's opinions) and wanted to know what I was talking about. I explained that we all thought that their evident differences in methods of research must necessarily have created some enmity between them. He humphed and said, "Don't be silly. Will Ogburn and I are the greatest of friends," and then provided what was for him definitive proof: "Who do you think helped me with all the tables in French Canada in Transition?" A lesson I never forgot.
Since all our knowledge is unsatisfactory and just a beginning, we shouldn't equate good science exclusively with the kind that uses numbers (or with its opposite) and should instead refuse to add to our troubles in making social science by engaging in that kind of intramural quarreling. Nor should we equate good science exclusively with work whose warrant rests on long immersion in all the details of social interaction and its results as a way to understanding the organization of social life. We can all use the deficiencies in our own way of working as sources of ideas about how to improve our data gathering and evidence-using to generate more and better ideas, which we can then check out with new ways of gathering data, and so on around the circle.
Because data, evidence, and ideas really do constitute a circle of dependencies, we can move in both directions around that circle. We can try the classical route, using data we create as evidence to check out ideas we have already generated. But we can also use data that unexpectedly differ from what we expected, to create new ideas. Depending on the direction you take, you will probably find yourself using different methods of gathering and analyzing data. Both directions work and produce useful results. Some of us will specialize in work going in one direction, seeking ever more accurate ways of measuring to create data that let us test ideas we (or someone else) have already generated. Others will go in the other direction, looking for data whose unexpectedness will provoke new ideas. Some of us will do both, looking for data that let us generate ideas that further our understanding of the social situations we study, and simultaneously working on ways to test the new understandings we have provisionally arrived at. We get further, collectively, by recognizing the multiple ways we can advance knowledge in our field.
I've conceived this book in that spirit, trying to rethink the contemporary split between these two allegedly different ways of doing scientific business, trying to avoid unnecessary quarrelsomeness. And recognizing what's good in every way of working by connecting the variety of methods involved to basic questions about the connection between data, evidence, and ideas. This has led me to revisit a lot of well-known flaws in quantitative work, not to be argumentatively snotty, but to see how recognizing them can be used to improve the way we all do business. And to apply the same serious critical standards to qualitative work as well, identifying flawed procedures and looking for ways to improve them. And, especially, to call attention to the long-standing (though often overlooked) tradition I've already mentioned that combines both kinds of data gathering in the same studies, work that sees and implements the unity in good social science research.
One consequence of reasoning this way is that we can all cultivate flexibility in what we know and what we do, participating and observing at times, counting and calculating at others. Later on, I'll offer examples of excellent research and thinking that proceeded in just that way.
Models of Knowledge
Desrosières, in his masterful history of statistical reasoning (2002), calls attention to two classical models of scientific knowledge, associated with two eighteenth-century scientists, Carl Linnaeus (also known as Linné) and Georges-Louis Leclerc, Comte de Buffon. Linnaeus proposed the use of a fully made classificatory scheme into which scientists could insert the information their research produced. Scientists completed their work when they filled all the slots in the classification scheme with data. Buffon proposed, on the contrary, to make the construction of the classificatory scheme itself the main job to be done, a job that would never end because, he thought, new and unexpected data would continually overflow the then-existing classificatory boxes, requiring rearrangements of ideas into new, until then unexpected, patterns and arguments. Both thinkers investigated animals and plants, but each used the information his research produced in different ways. To repeat, Linnaeus defined the job as slotting research results into the proper boxes in the scheme he had constructed. Buffon saw it as continuing to create new boxes as new facts came to light.
These two modes of analysis differ in their prescriptive forms (but only to a degree) about what research-produced data can and should be used for. Here's Desrosières's analysis of their differences:

Of all the features available, Linné chose certain among them, characteristics, and created his classification on the basis of those criteria, excluding the other traits. The pertinence of such a selection, which is a priori arbitrary, can only be apparent a posteriori; but for Linné this choice represented a necessity resulting from the fact that the "genera" (families of species) were real, and determined the pertinent characteristics: "You must realize that it is not the characteristic that constitutes the genus, but the genus that constitutes the characteristic; that the characteristic flows from the genus, and not the genus from the characteristic." ... There were thus valid natural criteria to be discovered by procedures that systematically applied the same analytical grid to the entire space under study. Valid criteria were real, natural, and universal. They formed a system.
For Buffon, on the other hand, it seemed implausible that the pertinent criteria would always be the same. It was therefore necessary to consider all the available distinctive traits a priori. But these were very numerous, and his Method could not be applied from the outset to all the species simultaneously envisaged. It could only be applied to the large, "obvious" families, constituted a priori. From that point on, one took some species and compared it with another. The similar and dissimilar characteristics were then distinguished and only the dissimilar ones retained. A third species was then compared in its turn with the first two, and the process was repeated indefinitely, in such a way that the distinctive characteristics were mentioned once and only once. This made it possible to regroup categories, gradually defining the table of kinships. This method emphasized local logics, particular to each zone of the space of living creatures, without supposing a priori that a small number of criteria was pertinent for this entire space....
This method is antithetical to Linné's criterial technique, which applied general characteristics presumed to be universally effective. (Desrosières 2002, 240–42)
Desrosières saw this difference in method reflected in the daily working problems of social scientists:
Any statistician who, not simply content to construct a logical and coherent grid, also tries to use it to encode a pile of questionnaires has felt that, in several cases, he can manage only by means of assimilation, by virtue of propinquity with cases he has previously dealt with, in accordance with a logic not provided for in the nomenclature. These local practices are often engineered by agents toiling away in workshops of coding and keyboarding, in accordance with a division of labor in which the leaders are inspired by the precepts of Linné, whereas the actual executants are, without knowing it, more likely to apply the method of Buffon. (242)
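Desrosières's contrast between the leaders' Linnaean precepts and the coders' Buffonian assimilation can be sketched in code. This is only a loose analogy of my own, not anything Becker or Desrosières proposes; the function names, the representation of a questionnaire as a set of traits, and the toy codebook below are all hypothetical:

```python
def code_linne(record, codebook):
    """Linnaean coding: slot a record into a fixed, pre-built grid.

    Returns the first category whose required traits the record contains,
    or None when the record overflows the grid (the scheme is not revised).
    """
    for category, required_traits in codebook.items():
        if required_traits <= record:  # record carries all required traits
            return category
    return None


def code_buffon(record, coded_cases):
    """Buffonian coding: assimilate a record to the most similar case
    already dealt with; records resembling nothing start a new category.

    coded_cases is a growing list of (category, traits) pairs, so the
    classification scheme itself changes as new data come in.
    """
    best_category, best_overlap = None, 0
    for category, traits in coded_cases:
        overlap = len(record & traits)  # shared traits with a prior case
        if overlap > best_overlap:
            best_category, best_overlap = category, overlap
    if best_category is None:  # no propinquity with any earlier case
        best_category = f"new-category-{len(coded_cases) + 1}"
    coded_cases.append((best_category, record))  # the scheme grows
    return best_category
```

With a hypothetical codebook such as `{"clerical": {"office", "wages"}, "manual": {"factory", "wages"}}`, the Linnaean coder classifies `{"office", "wages", "urban"}` as "clerical" but returns None for `{"farm", "rent"}`, while the Buffonian coder would open a fresh box for that unexpected case and assimilate later, similar records to it.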
Applying his analysis to contemporary sociology shows how these classical differences in aims and procedures produce two somewhat different ways of working that we needn't think of as conflicting but that surely are different in aim and execution.
Excerpted from Evidence by Howard S. Becker. Copyright © 2017 The University of Chicago. Excerpted by permission of The University of Chicago Press.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Table of Contents

Acknowledgments
Part 1. What It’s All About: Data, Evidence, and Ideas
1. Models of Inquiry: Some Historical Background
2. Ideas, Opinions, and Evidence
3. How the Natural Scientists Do It
Part 2. Who Collects the Data and How Do They Do It?
5. Data Gathered by Government Employees to Document Their Work
6. Hired Hands and Nonscientist Data Gatherers
7. Chief Investigators and Their Helpers
8. Inaccuracies in Qualitative Research
Afterword: Final Thoughts
What People are Saying About This
“Evidence is a deeply thoughtful, original take on the relationship between our ideas, the observations we make, and our ways of figuring out how we know what we are talking about. Becker breathes new life into an important tradition that has been overshadowed: thinking about methodology in terms of the practical organization of data gathering, alongside the practical ends we may not suspect, but that end up black-boxed as ‘objective’ data.”
“Becker calls Evidence a book he’s been writing for the seventy years of his professional life as a distinguished social scientist and, dare one say, philosopher. For, beyond being a handbook for doing and understanding research, this is a guide to seeking the truth of day-to-day lives. No social scientist, humanist, or philosopher could imagine a better time for its appearance, given the rise of reckless demagogic claims for a ‘post-truth’ age and their disparagement not just of science but of democracy and our shared humanity.”