Mail and Internet Surveys: The Tailored Design Method, Second Edition

Overview

For nearly two decades, Don Dillman's Mail and Telephone Surveys and the Total Design Method (TDM) it outlined have aided students and professionals in effectively planning and conducting surveys. But much has changed since the TDM was developed in 1978. Mail and Internet Surveys: The Tailored Design Method, Second Edition, thoroughly revised and updated by the author from his classic text, addresses these changes and introduces a new paradigm that responds to the recent developments affecting the conduct and success of surveys.

In this new edition, Dillman introduces "Tailored Design," a paradigm that expands the TDM to account for, and take advantage of, innovations such as computers, electronic mail, and the World Wide Web; theoretical advancements; mixed-mode considerations; the increasing acceptance of self-administered surveys; a better understanding of specific survey requirements; and an improved base of social science knowledge. As insightful and practical as the classic original, Mail and Internet Surveys, Second Edition, is a crucial resource for any researcher seeking to increase response rates and obtain high-quality feedback from mail, electronic, and other self-administered surveys.

Topics covered include:

  • Writing Questions and Constructing the Questionnaire
  • Mixed-Mode Surveys
  • Personal Delivery of Questionnaires
  • Surveying When Speed Is Critical
  • Government Surveys of Households and Individuals
  • Business Surveys
  • Internet and Interactive Voice Response Systems
  • Questionnaires That Can Be Scanned and Imaged

Praise for the previous edition . . .

"Required reading for anyone who wants to diversify research procedures."
-Contemporary Psychology

"An excellent reference tool and valuable addition to any serious practitioner's library."
-Public Relations Journal

"The book is packed with practical suggestions that cover each task in designing andimplementing a survey."
-Social Forces


Editorial Reviews

From the Publisher
"I would recommend this book to anyone involved in the design of postal surveys. The relevant chapters give useful guidance to improve the quality of the questions and the layout of self-completion questionnaires. In addition, the clear organisation of the sections in the book makes it ideal for finding clear well-written advice for specific queries." (Survey Methods Newsletter, Vol 20/2, 2000)

Product Details

  • ISBN-13: 9780471323549
  • Publisher: John Wiley & Sons, Inc.
  • Publication date: 1/28/2000
  • Edition description: Revised
  • Edition number: 2
  • Pages: 480
  • Product dimensions: 6.45 (w) x 9.59 (h) x 1.44 (d) inches

Meet the Author

DON A. DILLMAN is well known and highly regarded in the survey field. He is Professor of Sociology and Rural Sociology at Washington State University and a senior scientist for the Gallup Organization, and he previously served as Senior Survey Methodologist at the U.S. Bureau of the Census. He is a frequent presenter of seminars and workshops on survey design. His previous books include How to Conduct Your Own Survey and Against All Odds: Rural Community in the Information Age.

Table of Contents

ELEMENTS OF THE TAILORED DESIGN METHOD.

Introduction to Tailored Design.

Writing Questions.

Constructing the Questionnaire.

Survey Implementation.

Reduction of Coverage and Sampling Errors.

TAILORING TO THE SURVEY SITUATION.

Mixed-Mode Surveys.

Alternative Questionnaire Delivery: In Person, to Groups, and Through Publications.

When Timing Is Critical: Diary, Customer Satisfaction, and Election Forecast Surveys.

Household and Individual Person Surveys by Government.

Surveys of Businesses and Other Organizations.

Internet and Interactive Voice Response Surveys.

Optical Scanning and Imaging, and the Future of Self-Administered Surveys.

References.

Index.


First Chapter

Note: The Figures and/or Tables mentioned in this sample chapter do not appear on the Web.

Part One

ELEMENTS OF THE TAILORED DESIGN METHOD

CHAPTER 1

Introduction to Tailored Design

In the late 1970s, a well-done mail survey was likely to exhibit a series of four carefully timed mailings, laboriously personalized by typing individual names and addresses atop preprinted letters. In combination with other meticulous details, including real signatures and a replacement questionnaire sent by certified mail, this procedure, the Total Design Method (TDM), demonstrated the ability to achieve high response rates (Dillman, 1978). Two decades later, self-administered surveys are recognizable as much for the ways they differ as for their common features. For example:

  • In a national test of possible procedures for the Year 2000 Census, a four-contact sequence of prenotice letter, questionnaire, reminder postcard, and replacement questionnaire was sent. Personalization was impossible for such a large mailing, but the outgoing envelope contained these words: "U.S. Census Form Enclosed; Your Response is Required by Law." The mailings were sent by first class mail to "residents," and not named individuals, at each address, and a response rate of 78% was achieved (Dillman, Clark, and Treat, 1994).
  • In surveys of visitors to national parks, researchers used a several-step sequence, ending with a request for the address to which a thank-you postcard could be sent. This procedure resulted in average mail-back response rates of 75% in 21 parks, compared to 38% in 11 other parks where questionnaires were simply handed to the respondent with a request that they be completed and returned (Dillman, Dolsen, and Machlis, 1995).
  • In a survey of people who had turned in out-of-state driver's licenses to obtain a Washington state license, researchers used a four-contact sequence of individually signed letters and included a $2 bill as an incentive. A response rate of 65% was obtained from this population, which was younger (and therefore a more difficult one from which to obtain responses) than most general public samples, with an increase of nearly 20 percentage points as a result of the incentive (Miller, 1996).
  • In a survey of university faculty, an electronic mail survey which used no paper or stamps, but did use individually addressed e-mails and a pre-notice with three replacement questionnaires, achieved a 58% response rate. This response rate was the same as that obtained by a four-contact paper mail strategy (Schaefer and Dillman, 1998).

These surveys had much in common. Each was designed according to the principles of social exchange theory regarding why people do or do not respond to surveys. Each used multiple contacts and respondent-friendly questionnaires. Communications were carefully constructed so as to emphasize the survey's usefulness and the importance of a response from each person in the sample. All four surveys obtained reasonably high response rates.

On the other hand, the surveys differed from each other in important ways. The Census correspondence was not personalized, in contrast to the other surveys, and was sent to household addresses instead of to named individuals. The announcement on the envelope that response was mandatory added about 10% to the response rate on top of the contribution made by other factors (Dillman, Singer, Clark, and Treat, 1996). The national park survey was delivered personally, providing an opportunity to engage the sampled person in a carefully structured conversation that utilized a foot-in-the-door principle designed to improve response. The general public survey of new state residents used a token financial incentive which seemed especially effective in improving response among younger people. Finally, the electronic mail survey of faculty did not use stationery or return envelopes, essential trappings of the typical mail survey. In sum, mechanically applying one set of survey procedures in lock-step to all survey situations, as was recommended by the original TDM, is not the best way of assuring high quality responses as we begin the twenty-first century.

However, these four surveys do share a commonality, which I call Tailored Design. It is the development of survey procedures that create respondent trust and perceptions of increased rewards and reduced costs for being a respondent, that take into account features of the survey situation, and that have as their goal the overall reduction of survey error. The main features of the Tailored Design perspective are outlined in this chapter.

When the first edition of this book was published in 1978, the mail survey method was considered undesirable: a procedure to be avoided if at all possible because of poor response rates and a host of other deficiencies. In that book, I described the Total Design Method (TDM) as a new system of interconnected procedures for conducting high-quality mail surveys with a greatly improved potential for obtaining acceptable response rates. The TDM was based upon considerations of social exchange, that is, how to increase perceived rewards for responding, decrease perceived costs, and promote trust in beneficial outcomes from the survey. Details were provided for all aspects of designing and implementing mail surveys, from how to order and position questions in the questionnaire to how to fold and address each mailing. The TDM emphasized four carefully timed mailings, the last of which was sent by certified mail. All contacts were personalized, questions were ordered in a way that would increase the questionnaire's salience for respondents, and photo reduction was used to make the questionnaire appear easier to complete. Repeated tests of this one-size-fits-all approach showed that response rates of 70% could be produced consistently for general public populations, and higher rates were feasible for more specialized populations whose education was not particularly low (Dillman, 1978). In the 20 years since its development, the TDM has been used for thousands of surveys. When used in its entirety, the method has consistently produced higher response rates than are traditionally expected from mail surveys (Dillman, 1991).

Much has changed since the TDM was first developed. Most obvious are the kinds of technologies used by researchers who conduct surveys. In 1978, personal computers were relatively unknown. Instead, the most commonplace survey technologies consisted of typewriters and printing machines. Since no one was merging electronic files, personalizing letters entailed a laborious process of typing individual names at the beginning of preprinted letters. We kept track of respondents by hand-tabulating lists of who had returned questionnaires and who had not.

Along with technology, our understanding of why it is important to take full advantage of social exchange principles in survey research has come a long way since 1978. Even though the TDM represented an advance over what was then acceptable survey practice, it soon became clear that the method's emphasis on using the same protocol for all situations was a serious shortcoming. Some surveys required personal delivery of questionnaires, others entailed completion of diaries that had to be filled out on certain days of certain weeks, and still others called for surveying the same individuals in the same businesses year after year. The need to adapt the original method was made even more obvious by massive government surveys in which the use of personalized letters and envelopes was impossible, but for which it was feasible to use other response inducements.

Also, since the TDM was first developed, researchers have realized that mixed-mode surveys, in which some respondents are surveyed by interview and others complete mail questionnaires, can help overcome the difficulties of obtaining adequate response rates using a single method, either face-to-face, telephone, or mail (Dillman and Tarnai, 1988). The need to combine survey modes to achieve high response rates, however, has highlighted the unsettling problem that people's answers to any particular question vary depending on the survey mode (Schwarz, Strack, Hippler, and Bishop, 1991; Dillman, Sangster, Tarnai, and Rockwood, 1996).

Finally, since 1978, the scientific base of survey methodology has expanded dramatically. This revision of the TDM is also a response to a rapid expansion in our scientific knowledge of how to conduct surveys. In the 1970s, research was just beginning on how and why people sometimes provide different answers to the same questions on mail versus interview surveys, and how question order can influence response (e.g., Schuman and Presser, 1981). The layout and design of self-administered questionnaires was something of an art form based on unsystematic observations of response behavior. Scientifically based principles of layout and design had not yet been specified, nor had cognitive methods for testing questionnaires been developed and formalized as a pretest methodology (Forsyth and Lessler, 1991).

Another significant change incorporated into this book is the recognition that the fundamental nature of self-administered questionnaires is undergoing substantial change. Traditional postal service mail is only one way of sending and retrieving self-administered questionnaires. Electronic mail, the World Wide Web, and interactive voice response to taped telephone messages make possible the delivery and retrieval of questionnaires electronically. Improved computer technology and people's familiarity with computers are also having a great influence on how questionnaires can be completed and returned. Increasingly, people are being asked to enter answers to self-administered questionnaires by keying numbers on telephones or entering answers directly into computers in classrooms or in their homes. Also, optical scanning and imaging of self-administered questionnaires is increasingly feasible, without having a deleterious effect on response behavior. The variety of possibilities for constructing, delivering, retrieving, and processing self-administered questionnaires raises new challenges for survey design that are discussed here. In addition, these modes for conducting surveys exist in a more competitive environment where other respondent mail also seeks attention and response.

This book is a response to all these changes that have transpired since 1978: new technologies, theoretical advancements, mixed-mode considerations, a better understanding of specific survey requirements, and an improved base of social science knowledge. It describes a new method which I call Tailored Design. Tailored Design responds, in particular, to the tremendous design and implementation possibilities now offered by powerful computer and desktop publishing capabilities. Like the original TDM, it is established on a standard set of principles and procedures generally applicable to all surveys (Part I of this book), but these base elements are shaped further for particular populations, sponsorship, and content (Part II).

THE SOCIETAL TREND TOWARD SELF-ADMINISTRATION

Though exact numbers are difficult to pinpoint, there is little doubt that the number of surveys conducted by self-administration, and mail in particular, exceeds the number of interview surveys conducted each year. In 1981, an assessment by the United States Office of Management and Budget, which must approve all surveys sponsored by the federal government, revealed that 69% were conducted solely by mail, and another 11% were conducted partly by mail (U.S. Office of Management and Budget, 1984). The reasons for this preference undoubtedly involved lower cost and the fact that organizations could conduct such surveys themselves, whereas most interview surveys, whether by telephone or face-to-face, needed to be contracted to professional organizations.

Nonetheless, there can be little doubt that the dominant method for conducting large-scale, nationally prominent, general public surveys was face-to-face interviewing prior to the 1970s, and since then telephone methods. The speed and efficiency of telephone surveys were demonstrated dramatically and with much visibility by overnight surveys during the Watergate hearings in 1974. The fact that the telephone was a fixture in virtually all businesses and most U.S. homes contributed to its becoming the standard survey method for the United States in the 1980s and 1990s. The telephone was also the first survey method to benefit fully from computerized survey methods, especially computer-assisted telephone interviewing (CATI) software, which eliminated keypunching. Automatic call-scheduling, dialing of random digit telephone numbers, and data compilation also contributed to the efficiency of the telephone method (Dillman, 1999).

Self-administered questionnaires are now poised to benefit enormously from information age technologies. While U.S. Postal Service delivery and retrieval of paper questionnaires remains essential for some surveys, the possibilities for electronic delivery are increasing rapidly. In addition, the elimination of laborious keypunching is within sight, the result of developments in optical scanning and imaging that result in no loss of data quality (Dillman and Miller, 1998).

A significant cultural change is also occurring which suggests a future of greater prominence for mail and other self-administered surveys (Dillman, 1999). Many activities that once required people to interact with another person are now being shifted to a self-administration mode. Using ATMs to obtain banking information and money instead of going to a bank teller is one example. Others include using touch-tone input to stop and restart newspaper delivery, renew library books, and register for college classes. Further examples include home diagnostic kits for all sorts of medical information, from blood sugar levels to pregnancy, as well as ordering airline tickets, purchasing gasoline at service stations, and serving as one's own secretary.

Computer skills are now required for the performance of most skilled work. A lack of effective keyboard skills and the inability to compose and deliver prose quickly significantly limit an individual's ability to succeed in life. This renewed emphasis on reading and writing, requisite skills for responding to most self-administered surveys, follows a period in U.S. society in which verbal skills and the reliance on the telephone as a substitute for traditional letter writing became dominant.

Concern exists that the random digit telephone interview paradigm which has guided most general public surveys during the last decade, and the telephone itself, may have become less effective interview methods. In the mid-20th century, the telephone was a household instrument that controlled behavior. A ringing telephone could not be tolerated by most people. In addition, a norm of politeness prevailed for dealing with callers, in much the same way as it was expected for in-person contacts. There were no answering machines, and not to answer the telephone meant missing a call that could have been important. As the telephone became a major method of marketing products and substantial increases in unwanted calls occurred, people became increasingly less tolerant of such intrusions. Unlisted numbers have grown more prevalent, and now account for a majority of phones in many large cities, and 30% nationally (Survey Sampling, personal communication, August 19, 1999). Call-blocking devices that accept calls only from identified numbers are also on the increase. In addition, answering machines are increasingly relied on to take calls, and active call monitoring is used to select which calls to answer. Many businesses have eliminated receptionists; instead, calls are guided through appropriately keyed numbers to personal telephones, where an answering machine becomes the most likely response. Thus, it is becoming more difficult than in the past to complete telephone surveys. A ringing telephone no longer elicits an automatic answer from anyone who hears it. The telephone has evolved from being a controller of human behavior that demanded a response to being controlled, so that individuals decide who can reach them and when.

New methods of self-administering surveys are also gaining rapid acceptance. E-mail, web, and touch-tone data entry (or interactive voice response) methods have become feasible, and their use is growing rapidly.

For all of these reasons, the use of self-administered questionnaires will become more important both individually and as a component of mixed-mode data collection systems as we enter the twenty-first century. My aim in this book is to provide principles and procedures for successfully conducting surveys at this time when our reliance on self-administered surveys seems destined to increase.

THE ELEMENTS OF TAILORED DESIGN

The original TDM consisted of two parts. The first was to identify each aspect of the survey process that seemed likely to affect either the quality or quantity of response and to shape each one in such a way that the best possible responses could be obtained. It was guided by a theoretical view, social exchange, about why people respond to questionnaires. The second part was to organize the survey efforts so that the design intentions were carried out in complete detail. It was guided by an administrative plan that assured coordination of all of the parts. Attention to these two aspects of design is retained in Tailored Design, but with a somewhat broader consideration of the causes of survey error and the determinants of response behavior.

Reducing Survey Error

Sample surveys are typically conducted to estimate the distribution of characteristics in a population. For example, a researcher might want to determine in a sample survey of the general public what proportion own their home, have attended college, or favor one political candidate over another. Random sampling allows such characteristics to be estimated with precision, with larger sample sizes yielding greater precision. For example, whereas a random sample of only 100 members of the U.S. general public yields estimates within ±10 percentage points of the true value, a sample of 2,200 yields estimates within only ±2 percentage points, at the 95% confidence level. The extent to which the precision of sample survey estimates is limited by the number of persons (or other units) surveyed is described by the term sampling error. Formally, sampling error is the result of attempting to survey only some, and not all, of the units in the survey population. This ability to estimate with considerable precision the percentage of a population that has a particular attribute by obtaining data from only a small fraction of the total population is what distinguishes surveys from all other research methods. Focus groups, small-group experiments, content analysis, and historical analysis have no comparable capability. However, sampling error is only one of four sources of error which form the cornerstones for conducting a quality survey (Groves, 1989; Salant and Dillman, 1994).
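The precision figures above follow from the standard margin-of-error formula for a simple random sample, z * sqrt(p(1 - p) / n), evaluated at the conservative worst case p = 0.5 with z = 1.96 for 95% confidence. A minimal Python sketch (not from the book; the function name is illustrative) reproduces them:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # Margin of error for an estimated proportion from a simple random
    # sample of size n. p = 0.5 is the conservative worst case; z = 1.96
    # gives 95% confidence. The finite-population correction is omitted,
    # which is reasonable when n is a small fraction of the population.
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 2200):
    print(f"n = {n:>5}: +/- {margin_of_error(n):.1%}")
# n =   100: +/- 9.8%   (the "plus or minus 10 points" cited above)
# n =  2200: +/- 2.1%   (the "plus or minus 2 points" cited above)
```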

A second source of error which must be minimized in order to conduct a good survey is coverage error. Coverage error occurs when the list from which the sample is drawn does not include all elements of the population, thus making it impossible to give all elements of the population an equal or known chance of being included in the sample survey. An example would be the omission of people without telephones from a telephone survey. A third source of error, measurement error, occurs when a respondent's answer to a survey question is inaccurate, imprecise, or cannot be compared in any useful way to other respondents' answers. Measurement error results from poor question wording and questionnaire construction. A challenge for all survey methods, it is of particular concern in self-administered surveys, in which direct feedback from respondents about poor questions is less available than in interview surveys. The fourth source of error, nonresponse error, occurs when a significant number of people in the survey sample do not respond to the questionnaire and have different characteristics from those who do respond, when these characteristics are important to the study.

Surveys can fail to achieve their objectives for any or all of these reasons. For example, I was once asked to assist with a regional survey which had as one of its goals the estimation of unemployment rates. The sponsors proposed to interview only a few hundred of the nearly 100,000 residents, a procedure that would have resulted in being able to estimate unemployment within about five percentage points at best, rather than the desired 1 to 2%. In the same discussion the use of telephone interview methods was suggested, which meant that people without telephones, those probably more likely to be unemployed, would be excluded from any chance of inclusion in the sample. Had this suggestion been followed, significant coverage error would have occurred. One of the draft survey questions gave the options of employed versus unemployed but did not allow for retired, homemaker, or full-time student, thus raising the likelihood of members of these groups being interpreted as unemployed. Measurement error would have been the result. Finally, it was hoped that the survey would be conducted in only two nights of calling, thus raising the likelihood of a low response rate and the possibility that those who did respond would differ from those who did not, i.e., nonresponse error. Figure 1.1 summarizes and provides additional examples of the failure to achieve acceptable error levels in various types of surveys. Efforts to conduct any survey must begin with an understanding of the threat to precision and accuracy that stems simultaneously from these four cornerstones of survey research.
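The sampling arithmetic in this example can be checked by inverting the margin-of-error formula to obtain the sample size a desired precision requires. A brief, hypothetical Python sketch (n = 400 stands in for "a few hundred"; simple random sampling at p = 0.5 is assumed, ignoring the finite-population correction for 100,000 residents):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # Same simple-random-sampling formula as in the earlier sketch.
    return z * math.sqrt(p * (1 - p) / n)

# "A few hundred" interviews, taking n = 400 as illustrative:
print(f"n = 400: +/- {margin_of_error(400):.1%}")  # ~4.9%, i.e., about
                                                   # five percentage points

# Inverting the formula, n = (z / MOE)**2 * p * (1 - p), gives the sample
# size the sponsors' desired precision would have required:
for target in (0.02, 0.01):
    n_needed = (1.96 / target) ** 2 * 0.25
    print(f"+/- {target:.0%} requires n of about {n_needed:,.0f}")
# +/- 2% requires n of about 2,401
# +/- 1% requires n of about 9,604
```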

The overall perspective that guides this book is that efforts must be made to reduce all four sources of survey error to acceptable levels; none can be ignored. However, the greatest attention here is accorded to nonresponse and measurement error. Principles for random sampling are generally similar for all types of surveys, and differences for self-administered surveys do not warrant extensive treatment. Coverage error is not a problem for many mail surveys, but can be prohibitive for others. Address lists are often complete for many populations of interest, such as school teachers, driver's license holders, and organization members. But for groups such as the general public, adequate lists are generally unavailable, and sometimes nothing can be done to solve this problem.

However, a great deal can usually be done to address measurement and nonresponse issues through careful design of questions, questionnaires, and implementation methods. Behavioral theories of design have now been developed for each of these topics. These theories are used in this book to develop the step-by-step methods that are detailed in the chapters that follow.

Theoretical Foundation: Why and How People Respond to Self-Administered Surveys

Wrong Questions and Misplaced Emphases

Nearly 30 years of trying to help individuals design surveys has given me an early-warning sense for when a proposed survey is not going to be well designed. This concern is based upon the kinds of questions asked by the would-be surveyor. For example:

  • What color paper should I use to get the best response?
  • Would it help to put three small-denomination stamps on the outgoing envelope?
  • How much will a pretty questionnaire cover help response rates?
  • How small a font can I use to make the questionnaire shorter in order to improve response?

Each of these questions has a legitimate answer. Paper color does not usually affect response rates, nor does the presence of stamps on the outgoing envelope. Cover pages need to convey certain information about the survey, and it is more important to think about content than something subjective like "prettiness." Finally, there is no evidence that using a smaller font to decrease the number of questionnaire pages will improve response.

However, instead of answering each query, I am likely to respond with my own questions: Who are you surveying, and what's the topic? What is your overall implementation plan: how many contacts do you plan to make? Will the mailings be personalized? What interval will you use between contacts? How long is the questionnaire? Then, more to the point of the person's queries: Why do you want to use colored paper? Do you plan to use stamps on the return envelope or a business reply? What do you mean by a pretty cover, and what will it look like? Is the questionnaire being printed as a booklet?

My intent with such responses is twofold. First, all of the original queries listed above concern factors likely to have relatively little influence on how people respond to a survey. Second, my advice to those individuals who ask the same question has often been quite different once I knew more details of the proposed study. For example, knowing that a questionnaire being sent to employed professionals was going to be printed as a booklet influenced my opinion on reducing the font size from 12 to 10. Few professionals were likely to have great difficulty with a font reduction of this magnitude and it was clear that reduction of the font size would remove four pages (one sheet of paper) by eliminating several inches of type. In contrast, my advice for a survey of retired people would have been to increase the font size to 14, thereby increasing the number of pages, but also making it easier for respondents to read.

In another instance, questions about a plan to use small-denomination stamps revealed that business reply envelopes were planned for the return envelope. My recommendation was to switch the stamps to the return envelope and throw away the business reply envelopes. Although there is no evidence from past research that response rates are improved by the presence of stamps on the outgoing envelope, their influence on return envelopes is well documented (e.g., Armstrong and Lusk, 1987; Dillman, 1991). In another instance, I supported their use on the outgoing envelope; the stamps were already available and their use certainly would not hurt response rates.

An inquiry on the proposed use of color revealed that the questionnaire was being printed in color and the sponsor had the ability to produce lightly colored background fields with answer spaces left white, a method of printing that seemed likely to decrease item nonresponse. In this case I encouraged the use of color. In another instance, bright green paper seemed consistent with membership in the environmental organization that was to be surveyed, but the lack of contrast with black print would diminish readability, thus making that color an unwise choice.

In short, a survey involves many decisions which need to fit together and support one another in a way that encourages most people to respond and minimizes inaccurate or inadequate answers. Social exchange theory, to be discussed shortly, provides a basis for making compatible design decisions. It uses knowledge about the causes of human behavior to identify and understand why people do or do not respond to surveys. Human behavior is enormously complex and can seldom be explained by a single cause. However, considerable evidence suggests that certain stimuli are more likely than others to achieve a desired response.

Our challenge is to utilize theory to guide how we organize these multiple stimuli, from paper choice to the content of each communication element that constitutes the survey request, in a way that is most likely to produce the accurate completion and return of a self-administered questionnaire. Designing a quality survey begins with two fundamental assumptions: (1) responding to a self-administered questionnaire involves not only cognition, but also motivation (Jenkins and Dillman, 1995, 1997), and (2) multiple attempts are essential to achieving satisfactory response rates to self-administered surveys, regardless of whether they are administered by e-mail, the web, or postal delivery (Scott, 1961; Heberlein and Baumgartner, 1978; Dillman, 1991).

The first assumption recognizes that people must understand clearly what is wanted of them if they are to respond. However, there is more to getting a response than the four-step cognition model described by Tourangeau and Rasinski (1988), which includes comprehension, retrieval, deciding, and reporting. People must be motivated to go through the process associated with understanding and answering each question and returning the questionnaire to the survey sponsor. The conscious consideration of the motivational qualities of questionnaires and mail-out materials distinguishes the challenge of achieving response to questionnaires that are self-administered from those conducted through interviews. In the latter case, design efforts are inevitably aimed at the interviewer who, in a narrow window of opportunity, must use appropriate words, often delivered extemporaneously, to convince the respondent to continue the interview.

The second assumption is a statement of the most dominant finding from research on how to improve response to self-administered surveys: multiple attempts to contact potential respondents are essential, just as they are for personal and telephone interview surveys. From a design standpoint, this means there are several opportunities to encourage a cognitive understanding of what is being requested. We also have several opportunities to motivate recipients of a questionnaire to respond to the survey request, and to do so accurately. Thus, we must design contacts not only as individual entities, but also as components of an implementation system that precede and/or follow other communications. In this regard, I have found social exchange theory a helpful guide for organizing and presenting a sequence of mutually supportive requests to questionnaire recipients.

Survey Response As Social Exchange

Social exchange is a theory of human behavior used to explain the development and continuation of human interaction. The theory asserts that actions of individuals are motivated by the return these actions are expected to bring, and in fact usually do bring, from others (Blau, 1964; Gallegos, 1974; Dillman, 1978; Goyder, 1987). Three elements are critical for predicting a particular action: rewards, costs, and trust. Simply stated, rewards are what one expects to gain from a particular activity, costs are what one gives up or spends to obtain the rewards, and trust is the expectation that in the long run the rewards of doing something will outweigh the costs. The theory of social exchange implies three questions about the design of a questionnaire and the implementation process: How can we increase rewards for responding? How can perceived costs be reduced? How can trust be established so that the ultimate rewards will outweigh the costs of responding?

The ideas I am introducing here should not be equated with economic exchange. Social exchange differs from the more familiar economic exchange, in which money serves as a precise measure of the worth of one's actions. Social exchange is a broader concept: future obligations are created that are diffuse and unspecified, and the nature of the return cannot be bargained over as in economic exchange, but must be left to the discretion of the one who owes it. The range of goods, services, and experiences exchanged is also quite broad. It is assumed that people engage in an activity because of the rewards they hope to reap, that all activities they perform incur certain costs, and that people attempt to keep their costs below the rewards they expect to receive. Fundamentally, then, whether a given behavior occurs is a function of the ratio between the perceived costs of doing that activity and the rewards one expects the other party to provide, either directly or indirectly.

The difference between social and economic exchange is illustrated by results from research on cash payments to respondents. Much research has shown that "token" incentives given with the request to complete a questionnaire, a form of social exchange, consistently improve response rates (James and Bolstein, 1990, 1992; Church, 1993). However, a promise to pay people for completing a questionnaire by sending them a payment afterwards (economic exchange) does not. For example, James and Bolstein (1992), in a survey of small construction companies (often one- or two-person operations), found that questionnaire mailings that included one to five dollars produced higher response rates (64-71%) than the promise of $50 (57%) or nothing at all (52%). Similarly, Johnson and McLaughlin (1990), in a large national survey of people who registered to take the Graduate Masters Exam for admission to business school, found that response rates were 10 percentage points higher when a five-dollar check was included with the request than when a ten-dollar check was promised after the questionnaire was returned. Most importantly, the response rate for the promised ten-dollar check was within one percentage point of the response rate when no incentive at all was used. Social exchange is a subtle but more powerful method for influencing response behavior when the rewards that can be offered to each respondent are relatively small, as is typically the case for self-administered surveys.

Rewards, Costs, and Trust

There are many specific ways that one can attempt to affect the reward, cost, and trust matrix. In addition to traditional elements of social exchange noted by Blau (1964), Homans (1961), and Thibaut and Kelley (1959), I include more recent ideas about influence in a social context from Cialdini (1984). The discussion that follows extends in more detail an argument presented in the first edition of this book (Dillman, 1978). It is based partly on research findings. It is also based on theoretical arguments that are difficult to evaluate experimentally, and as a result have not yet been well tested. These ideas constitute the basis for elements of Tailored Design that are explicated in the ensuing chapters.

Ways Of Providing Rewards

Show positive regard. Thibaut and Kelley (1959) have noted that being regarded positively by another person has reward value to many people. Consider a cover letter that begins abruptly: "You have been selected in our national sample of accounting organizations. Please respond to the enclosed questionnaire within two weeks." Such a letter shows little respect for the individual who receives it. In contrast, giving respondents reasons that a survey is being done, providing a toll-free number to call with questions, and personally addressing correspondence are small, but not inconsequential, ways of showing positive regard to questionnaire recipients.

Say thank you. Blau (1964) argues that a time-consuming service of great material benefit could be appropriately repaid in a social exchange context by mere verbal appreciation. Phrases such as "we appreciate very much your help" or "many thanks in advance" can be added to correspondence. Perhaps for this reason, a follow-up postcard designed as a thank you for the prompt return of "the important questionnaire we sent to you recently" has been found in some surveys to produce a response burst nearly equal to that which followed the original mail-out a week or so earlier (Dillman, Christenson, Carpenter, and Brooks, 1974).

Ask for advice. Both Blau (1964) and Homans (1961) have pointed out that the feeling of being asked for help, advice, or assistance provides a sense of reward to people. People often get a sense of accomplishment from knowing they have helped someone else solve a problem. Explaining to someone that, "I am writing to you because the only way we can find out whether the service we provide is really meeting the needs of people like yourself is to ask you," is a way to provide this kind of reward. In essence, asking people for their advice subordinates the sponsor to the questionnaire recipient, rather than vice versa, which, as explained later, would incur cost to the respondent.

Support group values. Most people identify with certain groups, which may be as broad as citizenship in a country or as narrowly focused as dues-paying membership in the Freemont Booster's Association. Depending upon the survey population, sponsorship, and topic, one can often appeal to values shared widely by those who are surveyed. Blau (1964) noted that supporting a person's values can instill a sense of reward in individuals. This principle underlies efforts to appeal to respondents on the basis of a study's "social usefulness" (Slocum, Empey, and Swanson, 1956; Dillman, 1978).

Give tangible rewards. As already noted, research has convincingly shown that token financial incentives of only a dollar or two, enclosed with the request to complete a questionnaire, significantly boost response rates, and inevitably outperform promises to send a larger payment after a completed questionnaire is received (e.g., James and Bolstein, 1992). Material incentives sent with a request, such as ball-point pens, have also been found effective, although to a very modest extent (Church, personal communication, April 12, 1993). Providing a tangible incentive, even a token one, is effective because it evokes a sense of reciprocal obligation which can be easily discharged by returning the completed questionnaire.

Make the questionnaire interesting. Blau (1964) noted that it is possible to obtain desired actions from people when the action is experienced as a net gain rather than a net cost. Similarly, Cialdini (1984) argued that "liking" to do something is a powerful determinant of behavior. Thus, it is not surprising that Heberlein and Baumgartner (1978) showed that questionnaires on topics of high salience to recipients are more likely to be returned than those on topics of low salience. Questionnaires can be made more interesting to respondents by improving layout and design, ordering questions so the more interesting ones are placed at the beginning, and making questions easy to understand and answer. The fact that some people enjoy answering questionnaires regardless of content may explain why some response, even if only a few percentage points, can be obtained with almost any questionnaire.

Give social validation. Knowing that other people like themselves have completed a similar action can strongly influence people's willingness to comply with a request (Cialdini, 1984; Groves, Cialdini and Couper, 1992). In other words, some people are socially validated by seeing themselves as similar to most others in a group. Therefore, in later attempts to encourage response, telling people that many others have already responded encourages them to act in a similar way.

Inform respondents that opportunities to respond are scarce. Telling people that there are relatively few opportunities to respond, and that they may not have an opportunity to respond unless they do so quickly, can influence people to act (Groves et al., 1992). On the surface this argument may seem not to fit with requesting everyone in a specified sample to return a questionnaire. However, the idea of using deadline dates for returning a questionnaire or completing an interview is consistent with this argument. This technique has been used effectively in telephone surveys of businesses when, after 20 or more unsuccessful attempts, "gatekeepers" were informed that all calling had to be completed by the end of the week (Petrie, Moore, and Dillman, 1998).

Ways Of Reducing Social Costs

In some cases, efforts to reduce the costs incurred by respondents represent the flip side of increasing rewards, for example, making a questionnaire more interesting by making it easier to fill out. However, in other cases these two goals are distinctively different. Strategies to reduce social costs include the following:

Avoid subordinating language. Consider these contrasting statements which might be used in communications with questionnaire recipients: "For us to help solve the school problems in your community it is necessary for you to complete this questionnaire," versus "Would you please do me a favor?" The former implies the respondent is dependent upon the letter writer, whereas the latter suggests that the writer is dependent upon the respondent. Blau (1964) argued persuasively that people prefer not to be subordinated to others, and will often make great efforts to avoid that. Not responding is an easy way of avoiding a sense of being subordinated.

Avoid embarrassment. A friend who saw herself as a conscientious respondent to nearly all surveys once described to me why one questionnaire had ended up in the wastebasket. The first question asked about whether the United States should attempt to develop breeder nuclear reactors, which were then being discussed in the media. "I know I should understand these things and read about them, but you know I just don't follow this enough to know," she said. The respondent was clearly embarrassed about her lack of knowledge on the issue.

Thibaut and Kelley (1959) pointed out that costs to an individual are high when great physical or mental effort is required, and when embarrassment or anxiety accompany the action. Complex questions and directions that confuse respondents produce feelings of inadequacy or anxiety that people wish to avoid. Questionnaires often get discarded when the respondent peruses the questionnaire but can't figure out where to start, or what the first question means. The lack of response from people who do not read well or who are not used to expressing themselves in writing may be due to this type of social cost.

Avoid inconvenience. A frequent finding of response rate research is that not including a return envelope lowers response rates (e.g., Armstrong and Lusk, 1987). The likely reason is that it takes additional effort to locate and address an envelope. Including a return envelope with real stamps on it also improves response rates over a business reply envelope. A likely reason is that people keep stamped envelopes available, whereas a business reply envelope is sometimes inadvertently thrown away. The continued presence of a return envelope contributes to the convenience of responding.

Make questionnaires appear short and easy. Questionnaires that appear short and easy to fill out lessen the perceived costs of responding. Such appearances can be reinforced by indicating in the cover letter that responding should take only a few minutes. Research has shown that longer questionnaires achieve slightly lower response rates (Heberlein and Baumgartner, 1978), but it does not confirm that putting the same number of questions onto more pages decreases response rates (Leslie, 1997). Other research has shown that respondent-friendly questionnaires, with carefully organized questions in easy-to-answer formats, can improve response rates (Dillman, Sinclair, and Clark, 1993).

Minimize requests to obtain personal information. Many survey questions ask for information that some people do not want to reveal to others; for example, their annual income, past sexual behavior, method of disciplining children, or use of drugs. Sometimes asking such questions is the main objective of the survey, so while it is desirable from a response standpoint to minimize intrusive questions, eliminating them would defeat the purpose of the study. In these cases, specific wording can "soften" such questions. Further, explanations can be offered for why the questions are important and how the information will be kept confidential or even anonymous.

Keep requests similar to other requests to which a person has already responded. People who have committed themselves to a position are more likely to comply with requests to do something consistent with that position (Cialdini, 1984). The implication of this finding is that people feel uncomfortable when they do something inconsistent with their past behavior. It may help explain why, in panel surveys, once people have responded to the first request it is much easier to get them to respond to subsequent requests (Otto, Call, and Spenner, 1976). It also may explain the finding by Nederhof (1982) that the people who are most likely to respond to a survey request are people who typically respond to other survey requests.

People's inclination to behave consistently suggests that arguments can sometimes be offered that point out how responding to a particular survey is consistent with something the questionnaire recipient has already done. For example, a survey of organization members noted, "We really appreciate your support through the recent payment of dues, and we want to be responsive to your expectations. Completing the enclosed questionnaire will give us guidance on how best to respond to those concerns."

The preference for consistency may explain the effectiveness of the "foot-in-the-door" technique, that is, getting people to perform a large task by first getting them to perform a small task. This procedure was used successfully in a survey of national park visitors, whereby instead of simply handing people a questionnaire with the request to complete it, visitors were asked to briefly pull their car to the side of the road and answer three short questions before being requested to complete the questionnaire at the end of their visit (Dillman, Dolsen, and Machlis, 1995).

Ways Of Establishing Trust

In an economic exchange involving small purchases, payment is typically made when the goods are received. In essence, the complete exchange takes place at a specific point in time and is compressed into a few, rather routine seconds. It is precisely on this issue that social exchange differs most significantly from its economic counterpart. Under conditions of social exchange, there is no way to assure that what the survey sponsor has promised as a benefit of the study, for example, "This survey will help our company do a better job of serving its customers," will actually happen. Several steps can be taken to increase the questionnaire recipient's trust that the survey sponsor will do what is promised. Trust is critical to forming the belief that in the long run the benefits of completing the questionnaire will outweigh the costs of doing so.

Provide a token of appreciation in advance. Although one or two dollars included with the questionnaire may have little direct reward value for respondents, it seems to have a greater value in creating trust. By providing the small token of appreciation in advance, the researcher shows trust in respondents who can, of course, pocket the money and not return their questionnaire. The symbolic gesture of trust probably explains why it is much more effective in improving response than larger post-payments. Further, explaining that the money is a "small token of appreciation" is also consistent with expressing trust. In contrast, saying it is a "payment for your time" is more likely to be subordinating and insulting. Finally, including uncancelled stamps on an envelope, which respondents could choose to use for something besides returning the questionnaire, also contributes to the formation of trust (albeit to a smaller degree than a financial incentive).

Sponsorship by legitimate authority. It has been shown that people are more likely to comply with a request if it comes from an authoritative source, that is, one whom the larger culture defines as legitimated to make such requests and expect compliance (Cialdini, 1984). It is therefore not surprising that Heberlein and Baumgartner (1978) found that government-sponsored surveys achieved higher response rates than those sponsored by marketing research firms. Most surveys, regardless of sponsorship, are voluntary, but some, for example, the decennial census and many government-sponsored business surveys, are not. Research has shown that informing questionnaire recipients that a survey is mandatory improves response for surveys of both businesses (Tulp, Hoy, Kusch, and Cole, 1991) and individuals (Dillman, Singer, Clark, and Treat, 1996).

I once observed the survey letters and envelopes for a federal government survey being printed with a Midwestern city as the return address so that return mail would go to a processing office rather than to headquarters in Washington, D.C. The likely effect was that people might not think it was a legitimate government survey. More recently, research has shown that government questionnaires sent in packaging designed to reflect a private sector marketing orientation, with bright and colorful envelopes and icons, led many people to believe the mailing was not likely to be from the government (Dillman, Jenkins, Martin, and DeMaio, 1996). A follow-up field test with the same forms produced significantly lower response rates (by 5 to 10 percentage points) in a national test (Leslie, 1997).

Make the task appear important. Many surveys try to appeal to people on the basis that something important will ultimately happen as a result of the survey. Form letters produced on copy machines, questionnaires that are sloppily constructed or contain questions that are difficult to understand, and a lack of follow-up mailings targeted to nonrespondents all suggest that a questionnaire is relatively unimportant. Trust that something useful will happen as a result of a study can be engendered by making each contact look important. Printing personalized cover letters on letterhead stationery and designing the questionnaire in a way that makes sense to the respondent have a significant role in establishing trust.

Invoke other exchange relationships. One of the first mail surveys I conducted from my university came back complete with this note appended, "I was going to throw this thing away, but my spouse reminded me that our daughter had gotten her education from your university so I guess I owe it to you." The nature of social exchange is that people sometimes do things for other people because they feel they want to repay a favor to someone else. Thus, a letter from a respected sponsor on letterhead stationery may help encourage a survey response.

Linking Social Exchange Elements

Using the social exchange perspective requires explicitly linking its elements to form a systematic approach to achieving high-quality survey responses. There are several challenges involved in making such linkages.

First, it is important to recognize that a specific design feature may involve more than one of the three major elements (i.e., costs, rewards, trust). For example, inclusion of a token financial incentive such as two dollars with the questionnaire seems on the surface to be a reward, but because it is sent ahead of time it also promotes trust-- the study sponsor has given something to the potential respondent that the respondent can keep, thus creating a sense of reciprocal obligation.

Similarly, if a survey is required by law and the respondent is simply informed, "You are required by law to respond to this survey," then clearly the respondent is being subordinated to the study sponsor. However, by accompanying such directives with a statement explaining why response is required, or even attributing the mandatory requirement to a third party (when appropriate), one can subtly change the reward/cost balance. For example: "Because this survey is the only way the government has of knowing the amounts of natural gas being consumed each month, Congress has passed a law requiring all utilities to respond to it." This statement accurately attributes the mandatory requirement to its source and combines it with an argument of importance.

Second, just as costs and rewards are associated with the decision to respond, they can also be associated with not responding. For example, people sometimes respond to questionnaires because they are concerned that not responding will lead to another reminder, which they wish to avoid. On the other hand, there may be an undesirable backlash effect from an offensive request, such as "We are doing this survey of taxpayers to help people like yourself do a better job of completing your taxes on time." Such a statement might be interpreted as a personal criticism, so that the recipient feels a sense of reward from simply tearing up the questionnaire and throwing it away. Thus, it is important to evaluate a proposed implementation strategy with regard to the sense of reward one might get from not responding to the survey request.

Third, repetition of appeals diminishes their effectiveness. It was once proposed to me that the costs of a replacement questionnaire mailing for a large national survey could be greatly decreased by printing extra copies of the original letter and envelopes and simply stamping "second notice" on the outside of the envelope. Not only would this be likely to anger some respondents, making it harder for them to see any reward in responding, but it would also forgo an opportunity to invoke new arguments for response. Research has found that even repeating the inclusion of a token financial incentive with a replacement mailing does not increase response over that which can be obtained with a reminder that does not include the repeated incentive (Tortora, Dillman, and Bolstein, 1992).

Fourth, there is some evidence to suggest that pushing certain concepts to an extreme results in effects that are the opposite of the intended ones. For example, making a questionnaire very short as a means of reducing costs of responding may backfire. In general, longer questionnaires achieve lower response rates. However, limiting a questionnaire to only a couple of questions may also have the effect of making it seem less useful or important, and as a result not improve response. Similarly, it has been shown that putting too much emphasis on confidentiality by providing long detailed explanations can backfire by raising concerns about whether a questionnaire is, in fact, confidential at all (Singer, Von Thurn, and Miller, 1995).

Fifth, it is important to recognize that later appeals are aimed at a somewhat different audience than first appeals. Those who respond early to a survey will be deleted from the contact list and therefore not receive reminders. The appeals that worked for these respondents did not work for the nonrespondents. This fact makes a strong argument for changing the look, feel, and content of later contacts. It is at this stage, for example, that one might usefully consider invoking arguments such as the "social validation" concept proposed by Cialdini (1984). That is, "We have already received questionnaires from most people in this national sample, but are writing to those who have not yet responded because. . . ."
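
The bookkeeping behind this point is simple, and a short sketch may make it concrete. This Python fragment is purely illustrative (the IDs, addresses, and appeal wording are invented): it prunes early responders from the contact list and sends the remaining nonrespondents a later, different appeal such as the social validation argument quoted above.

    # Invented sample data: ID numbers mapped to mailing addresses.
    sample = {"1001": "14 Elm St.", "1002": "9 Oak Ave.", "1003": "2 Birch Rd."}
    returned = {"1002"}  # IDs of questionnaires already received

    LATER_APPEAL = ("We have already received questionnaires from most "
                    "people in this national sample, but are writing to "
                    "those who have not yet responded because every "
                    "answer matters.")

    # Early responders are deleted from the contact list, so only
    # nonrespondents ever see the reminder -- an audience for whom the
    # earlier appeals demonstrably did not work.
    nonrespondents = {sid: addr for sid, addr in sample.items()
                      if sid not in returned}

    for sid, addr in nonrespondents.items():
        print(f"Reminder to ID {sid}, {addr}: {LATER_APPEAL}")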

Sixth, people differ in what they perceive as rewards and costs. It has long been observed that some people enjoy filling out questionnaires, whereas others do not. Some worry about confidentiality and others do not. Some find the topic of a particular questionnaire interesting, whereas others from the same population do not. Therefore, it is logical to aim different appeals at early and late responders and towards different survey populations.

Seventh, whether an action evokes a sense of cost, reward, or trust is related to how it links with other implementation actions, and not just to how it appears when viewed in isolation. For example, considerable research has shown that preletters can significantly improve response to mail surveys (e.g., Dillman, Clark, and Sinclair, 1995). Thus, it came as a surprise to the sponsors when the pretest for a national survey revealed absolutely no improvement from the use of a prior notification letter. However, an examination of the details quickly showed why. The letter was very long, emphasized that response was not mandatory, and informed the respondents how they could get information about the study. Further, two long paragraphs of the preletter described all of the potential negative aspects (costs) of responding, without the respondent being able to find out what questions were being asked. Yet the questionnaire itself concerned routine aspects of the respondent's health and asked for relatively little information that most people were likely to object to providing. Finally, the preletter was sent a full month before the questionnaire itself. Thus, not only was the preletter unpleasant, being long and emphasizing potential costs of responding without letting the respondent know what was being requested, but it was so far ahead of the questionnaire that by the time the questionnaire arrived, the preletter was likely to be forgotten. How a survey gets implemented, that is, when and how contacts are made, is also an important aspect of what potential respondents see as the costs and benefits of responding to a survey, and whether they think anything useful will come of it. Respondents perceive the elements of an implementation plan both individually and as part of a larger whole, and each view shapes their impressions of costs, rewards, and trust.

Communicating Exchange Concepts through Visual Layout and Design

Although questionnaire designers have long been aware of the difficulty of designing easy-to-answer questionnaires, the proposed principles of design have been based more on subjective impressions than on scientific evidence of how people see and process information (Wright and Barnard, 1975; Sless, 1994). The introduction of computers, word processing equipment, and laser printers has given nearly every questionnaire designer the ability to vary fonts, insert graphical symbols, change margins, and switch the size and brightness of type. Such changes can be made with less effort than the typist of traditional questionnaires expended in making a carriage return. The designers of web questionnaires have even more possibilities, being able to add color, photographs, sound, and animation. The indiscriminate use of these new possibilities has often produced questionnaires that are more, rather than less, difficult to read and answer.

It is therefore not surprising that it has become necessary to apply concepts of graphic design and layout (e.g., Wallschlaeger and Busic-Snyder, 1992) to the construction of self-administered questionnaires. These concepts, to be described in Chapter 3, include such ideas as figure/ground, top-down and bottom-up processing, pattern recognition, brightness, contrast, size, location, and simplicity, and are expressed through organizing principles such as the Law of Pragnanz and the Law of Proximity (Jenkins and Dillman, 1997).

Typically, most respondents see a questionnaire only once. The problem each respondent faces might be compared to that of a driver on a new highway. Finding one's way to a particular destination is not difficult when the rules of driving and the road signs are standardized: white dotted lines to divide lanes, solid yellow stripes for no passing, white-on-green signs to identify exits, and blue signs for services. Although such standards exist for highways across states and some countries, they do not exist for questionnaires.

Considerable research has been done on how people look at and read material placed in front of them (Jenkins and Dillman, 1997). For example, people tend to start reading near, but not at, the upper left corner of a page. However, brightness (including color) and larger stimuli (bigger letters, blocks of color, or graphical symbols) may pull a reader's vision toward a particular element of the page, drawing attention away from that natural starting point. People's attention is also guided by symbols with particular cultural meanings. Thus, an arrow may guide a person across the page more effectively than words.

Questionnaires are written in two languages. One language consists of words. The other consists of graphical symbols. These two languages of meaningful information can be placed on a page so they work in concert, stimulating a person to receive information in the same way one would receive it in an interview. They can also be in conflict, actually pulling people in two different directions so that some people proceed through a questionnaire in one way and some people in another. Therefore, two aspects of a questionnaire must be developed and placed in concert with one another:

  • Information organization: the prescribed order in which we want people to process the words and other symbols used to convey the questions and all needed instructions to respondents.
  • Navigational guides: the graphical symbols and layout used to visually direct people along a prescribed navigational path for completing the questionnaire.

Accomplishing the first of these tasks requires ordering questions in the sequence they should be answered. It also requires placing any instructions or interpretative information at the precise location that information is needed, and therefore likely to be used. Accomplishing the second task involves manipulation of brightness, shape, size, and location of all the information so that respondents are visually directed to process words in the desired order. Details of this perspective are provided in Chapter 3.
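
As a concrete (and entirely invented) illustration of the first task, the short Python sketch below stores each instruction with the question that needs it, so that when the questionnaire is rendered, the instruction can only appear at the point of use; the answer boxes then serve as simple navigational guides.

    # Invented questions; the point is the data structure, which keeps
    # each instruction attached to the question that needs it.
    questions = [
        {"number": 1,
         "text": "Did you visit a doctor in the past 12 months?",
         "instruction": "Include visits to clinics and emergency rooms.",
         "choices": ["Yes", "No"]},
        {"number": 2,
         "text": "How many visits did you make?",
         "instruction": "Your best estimate is fine.",
         "choices": []},
    ]

    for q in questions:
        print(f"{q['number']}. {q['text']}")
        print(f"   ({q['instruction']})")       # instruction at point of need
        for choice in q["choices"]:
            print(f"   [ ] {choice}")           # navigational guide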

Tailoring to the Situation

A major limitation of the original TDM was its one-size-fits-all approach to survey design. This approach, though seemingly appropriate to the times and the technology, did not allow social exchange considerations to be used to their fullest potential. For example, what constitutes a cost or reward also depends upon the survey content. For most surveys, making several contacts within a period of seven to 10 days would produce considerable irritation. However, suppose that an individual has been asked to keep a diary of the television programs he or she watches for a specific week in the month. The following sequence of five contacts in 10 days may not seem unusual to respondents: an introductory letter, followed quickly by a questionnaire, then a postcard to tell them their "diary week is about to begin," a midweek call to see if they are having any difficulties, and another postcard indicating that their diary week is about over. This is similar to a sequence used for many national television viewing and other diary surveys. Receiving these contacts only a day or two apart probably makes sense to respondents because they have been asked to record their behavior each day for a specific week. Similarly, whereas sending a token financial incentive might seem like a goodwill gesture in a survey of individuals, it could produce a quite different reaction when sent to someone responding on behalf of a business, who may see an ethical problem in pocketing even a tiny amount of money for completing a business questionnaire. In this case, the token incentive seems likely to produce a sense of cost rather than reward in the recipient.
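
Because diary studies tie every contact to the respondent's assigned week, the mailing schedule is easy to compute in advance. The following Python sketch lays out the five-contact sequence just described; the specific day offsets are my own assumptions for illustration, not a prescribed schedule.

    from datetime import date, timedelta

    # Assume the respondent's assigned diary week begins on this Monday.
    diary_week_start = date(2000, 2, 7)

    contacts = [
        ("introductory letter", -7),
        ("diary questionnaire", -5),
        ("'diary week is about to begin' postcard", -2),
        ("midweek check-in telephone call", 3),
        ("'diary week is about over' postcard", 6),
    ]

    for label, offset in contacts:
        when = diary_week_start + timedelta(days=offset)
        print(f"{when:%a %b %d}: send {label}")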

Some populations can only be given questionnaires if they are handed to them in person: for example, visitors to museums or parks. In other cases the pressure for quick results, as with election surveys, may leave only a narrow window in which data can be collected. In still other situations, some members of a population may be inaccessible by a single method of administration, so that mixed-mode surveys must be done whereby some respondents respond to a mail survey while others are interviewed by telephone.

The shift to Tailored Design attempts to shape elements of design and implementation in ways that take into account critical differences in survey populations, sponsorship, and content. It then builds on those differences in order to shape the most effective method for achieving a response.

In addition, it is easier to obtain responses from some survey populations than others. For example, in a large meta-analysis of response rates to previous surveys, Heberlein and Baumgartner (1978) showed that whereas school, army, or employee populations are more likely to respond to surveys, general public populations are less likely to respond to the same level of survey effort. Similarly, government sponsorship of surveys is likely to improve response, whereas market research sponsorship is correlated with lower response. It is more difficult for some people, particularly those with less education, to respond to a self-administered survey, and therefore the potential costs, such as the amount of effort required and the risk of embarrassment, are likely to be greater than for others. Whereas government organizations can often appeal to legitimate authority as a basis for responding and avoid the use of financial incentives, market research organizations generally have no such authority that can be used in their appeals for response, and may send sizable incentives as an alternative.

For a variety of reasons, some surveys are also more demanding of respondents than others. In some cases the difficulty stems from the number of questions, and in others from the content of the questions. In the same meta-analysis noted above, Heberlein and Baumgartner (1978) showed that the salience of a survey, which they defined as dealing with topics of important (to the respondent) behavior or current interest, is a significant determinant of response rates. Although interest-getting questions can be added, and questions can be ordered and displayed in ways that make them appear more interesting (practices I advocate), there are limits to how much content can be modified in order to improve salience.

Survey content also interacts with survey sponsorship and population to influence how people are likely to view a particular survey and what response-inducing methods will work best. It is one thing for the U.S. Bureau of the Census to ask people to answer questions about personal income in the decennial census, but quite another for a market research company to ask the same questions on behalf of a sponsor whose name it is not willing to divulge. Similarly, appealing to employees of electric power plants to complete government surveys every month is a very different task, requiring different methods, than a one-time survey of members of a professional engineering association. Government-sponsored surveys can also make different appeals than can private corporations that are seeking feedback from customers.

Summary of Tailored Design

In sum, responding to a questionnaire is viewed as social exchange. People are seen as more likely to complete and return self-administered questionnaires if they trust that the rewards of doing so will, in the long run, outweigh the costs they expect to incur. This perspective is summarized in Figure 1.2. Many attributes of the questionnaire and implementation process that I have identified above carry potential costs and benefits, as well as conveying messages of trust. I also recognize that what constitutes a message of trust (or reward or cost) for some members of a survey population may not do so for others. Furthermore, the survey situation and sponsorship may also convey different messages through the details of the study. This, then, is the basis of Tailored Design-- attempting to identify and utilize knowledge of sponsorship, the survey population, and the nature of the survey situation in an effort to maximize the quality and quantity of response.

The change in perspective from using one basic method for all survey situations in the original TDM to the tailoring of arguments to account for survey sponsorship, population, and content is the most distinguishing feature of Tailored Design. It is for this reason that the book is divided into two parts, elements of theory and design that tend to cut across most survey situations in Part I, and the tailoring to different survey situations, such as population, content, and method of self-administration, in Part II.

SETTING EXPECTATIONS FOR YOUR SURVEY

The first edition of this book listed 48 mail surveys that had used the original Total Design Method, either mostly or entirely (Dillman, 1978). These surveys obtained response rates ranging from 58 to 92%, with an average of 74%. Those judged to have used the TDM completely obtained an average response rate of 77%, versus 71% for those that omitted certain components.

At the time the TDM for mail surveys was written, application of a companion TDM procedure for telephone surveys typically produced higher response rates, with 31 surveys averaging response rates of 91% (compared to 77% for mail). For both telephone and mail, response rates for general public populations were about 10% lower than for more specialized populations on which the methods were tested.
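
For readers computing comparable figures for their own studies, the arithmetic behind a response rate is simply the number of completed returns divided by the number of eligible sample members. A minimal Python sketch, with made-up counts and the common (but here assumed) convention of dropping undeliverable addresses from the denominator:

    mailed_out = 1200
    undeliverable = 50        # assumed ineligible; removed from denominator
    completed_returns = 805

    eligible = mailed_out - undeliverable
    response_rate = completed_returns / eligible
    print(f"Response rate: {response_rate:.1%}")   # -> Response rate: 70.0%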

In the ensuing 20 years, response expectations for interview surveys have changed. As noted earlier, it is increasingly difficult to achieve high response rates for most nongovernmental telephone surveys, and impossible without greatly increasing the number of attempts-- from the four to eight typically used at that time to perhaps 20 attempts today. Hox and de Leeuw (1994), in an analysis of changes in response rates over time, show that response rates for interview surveys have decreased, while those for mail surveys are slightly higher than in the past.

My own impression of response rate changes for mail surveys is, first, that Tailored Design response rates similar to those obtained with the original TDM can be achieved, but that doing so generally requires using somewhat more intensive procedures, including token financial incentives and five contacts, one of which is done with special procedures such as telephone or priority (or courier) mail, as will be discussed in Chapter 4. Just as surveying by telephone methods now requires more intensive methods than in the past, including refusal conversion and many more contact attempts, the ability of mail to achieve response has benefited from additional efforts, such as the concerted use of token financial incentives. The response potential for other self-administered methods, particularly electronic ones, remains to be seen, but appears promising.

One of the challenges now being faced in survey design stems from conflicting pressures related to response rate and respondent burden. Some survey sponsors and oversight agencies, such as the U.S. Office of Management and Budget, require that very high response rates be achieved. At the same time, many institutional review boards have legitimate concerns about the manner in which individuals are contacted, the number of contacts made, and other survey procedures. One response to these requirements and concerns is to limit the use of multiple contacts and other procedures that questionnaire recipients or institutions may find objectionable. Thus, many surveys, including some of my own, are designed in a way that will inevitably produce lower response rates than might otherwise be obtained. The positive side is that surveyors now have an incentive to better understand issues related to costs, rewards, and trust as viewed by respondents, so as to achieve the highest possible response rates from all segments of survey populations while keeping respondent burden as low as possible.

What you can expect for your own survey depends in part on the extent to which you follow the detailed procedures outlined in Part I of this book. It will also depend upon the constraints and possibilities associated with the particular survey population, the content of the survey, and survey sponsorship, as I will discuss for the tailored survey designs in Part II. Suffice it to say here that conducting self-administered surveys of any type is no longer an excuse for low response or poor measurement. In fact, in many situations, we expect their performance to exceed that which can be obtained by other survey modes.

SUMMARY


Tailored Design is a set of procedures for conducting successful self-administered surveys that produce both high quality information and high response rates. As defined earlier, it is the development of survey procedures that create respondent trust and perceptions of increased rewards and reduced costs for being a respondent, that take into account features of the survey situation, and that have as their goal the overall reduction of survey error. Its distinguishing feature is that rather than relying on one basic procedure for all survey situations, it builds effective social exchange through knowledge of the population to be surveyed, respondent burden, and sponsorship. Its goal is to reduce overall survey error, with particular emphasis on nonresponse and measurement.

The most important concept underlying Tailored Design has to do with applying social exchange ideas to understanding why respondents do or do not respond to questionnaires. Attributes of the questionnaire and implementation process, as identified in this chapter, result in both benefits and costs and also convey a message of trust. People are more likely to complete and return self-administered questionnaires if they trust that the rewards of doing so outweigh the costs they expect to incur. Further, what constitutes a message of trust (or reward or cost) to some members of the survey population may not do so for others.

Tailored Design is based on results from numerous experiments that either confirmed or failed to support the importance of certain elements of its predecessor, the Total Design Method, for certain types of surveys, as well as on new insights about how respondents read and respond to the visual aspects of questionnaire layouts and designs. It is also a result of lessons from hundreds of attempts to shape TDM procedures to improve survey quality.

Self-administered surveys are now ready to benefit from computer advancements through e-mail, posting on the World Wide Web, touch-tone data entry, and optical imaging of paper surveys. The advances in computerization that so dramatically benefited telephone interviewing in the 1980s and 1990s are now poised to benefit self-administered surveys as well. These applications were only vaguely imaginable when the typewriter/printing press procedures for the TDM were under development for the first edition of this book.

The two-part organization of this book reflects these developments and the need to tailor survey procedures. Part I describes the common procedures essential to all mail and most other types of self-administered surveys. Chapter 2 begins the discussion of measurement and presents 19 principles for writing questions. Although many principles are similar to those in use at the time of the first edition, others have been added to reflect the advancements that have occurred in our understanding of how people process and respond to various elements of questions.

Chapter 3 focuses on questionnaire construction. It includes traditional concerns about how to order questions and build easy-to-answer questionnaires. Most of the chapter is devoted to discussing principles that stem from advances in our understanding of how people decide what to read, the order in which they read it, and how these decisions can be influenced by visual layout and design. In Chapter 4, the focus changes to reducing nonresponse error, as procedures for implementation are discussed in detail. Part I concludes with Chapter 5, a brief discussion of coverage and sampling issues.

Part II begins with a discussion in Chapter 6 of mixed-mode surveys. Few issues have become as important to the design of modern surveys as the growing number of options (new electronic modes plus traditional telephone and mail) that researchers can link together in an effort to increase response and lower costs. The idea of unimode construction-- writing questions that provide a comparable stimulus across all modes-- is critical to the success of mixed-mode surveys.

Chapter 7 describes how the methods in Part I need to be tailored for delivery by means other than traditional mail. Personal delivery, group administration, and the possibility of distribution in periodicals are each considered here. Chapter 8 then reports ways of tailoring to time constraints when responses have to be obtained in a quick and timely manner. The focus of this chapter is on diaries, certain customer satisfaction surveys, and election prediction surveys.

Sponsorship by the government, the focus of Chapter 9, places special constraints on the conduct of self-administered surveys. Such sponsorship makes it difficult to use some response-inducing techniques (e.g., financial incentives), but also makes it possible to use others (e.g., legitimate authority). This chapter relies heavily on my experiences in designing and conducting numerous experiments aimed at improving response to certain government surveys.

Chapter 10 reports a number of possibilities for conducting surveys in which the respondent reports for an organization rather than for himself or herself. Large numbers of surveys of businesses and other organizations are conducted each year, posing challenges that require tailored solutions quite different from those discussed in other chapters.

Chapter 11 introduces the brave new world of electronic surveys, which simultaneously offers the greatest potential for future self-administration and the most difficult challenges. Tailoring procedures to the Internet and to self-administration by telephone are both discussed in this chapter, providing much evidence that the nature of surveying is undergoing tremendous change.

Chapter 12 reports a specific but critical advance in the application of computer technologies to self-administered surveys: the move from simple optical scanning of marks on questionnaires to imaging the entire questionnaire, so that large quantities of questionnaires can be processed quickly and with great accuracy. A brief conclusion ponders the future of self-administration and its importance in the information age.

Customer Reviews

  • Anonymous, posted December 26, 2001: "One of the most useful books on this topic."

    Dillman does a great job of offering practical advice for those wishing to conduct online survey research. His ideas for email research have practical applications within the context of web-based survey research.