Designing and Using Organizational Surveys: A Seven-Step Process / Edition 1

Hardcover (Print)
Used and New from Other Sellers
from $39.82
Usually ships in 1-2 business days
(Save 55%)
Other sellers (Hardcover)
  • All (11) from $39.82   
  • New (6) from $66.69   
  • Used (5) from $39.82   


While many books have been written about survey research methods, few have been designed to provide the organizational practitioner with a clear, concise, and pragmatic working guide on how to go about actually doing a survey-until now. Designing and Using Organizational Surveys offers a hands-on, seven-step process to guide professionals in human resource development, organization development, industrial-organizational psychology, training and development, and other related fields on how to conduct a successful organizational survey. Using a careful, reader-friendly approach illustrated with real-life examples from large-scale survey efforts, Allan H. Church and Janine Waclawski cover all of the critical decisions that must be made in order to conduct an effective survey. The authors review the major issues to be confronted at each stage of the process, examine the options, and suggest the appropriate action to take. They show how to put together a quality survey questionnaire, administer the survey, process and interpret the results, report the findings to the organization, and translate the newly acquired information into meaningful action. And they include practical checklists at the end of each chapter, information about technology application, approaches to action planning, and sensitive coverage of the inevitable political and human issues that arise throughout the process.


Editorial Reviews

From the Publisher
"In my view, this book is simply the best, A to Z resource for organizational survey and assessment practitioners available. . . . A fresh and lucid perspective that inextricably links theory and practice. Comprehensive and practical, the seven-step process approach provides a closed-loop blueprint for designing and implementing organizational surveys that work!" (Salvatore V. Falletta, manager, global HR research, Intel Corporation)

"The many tools that Church and Waclawski offer alone make this book a treasure chest. . . . A considerable array of figures, examples, and samples that are helpful to the experienced and inexperienced practitioner (and consultant), including sample scripts and content for communications and focus groups." (David W. Bracken, partner, Mercer Delta Consulting, LLC)

"A great primer on organizational surveys. Church and Waclawski integrate into their approach to surveying both the science of the field and the art of practice in dynamic organizations. . . . Will provide valuable discussions among even the most seasoned professionals as well as insight for those just starting out in the field." (Karen B. Paul, manager, HR Measurement Systems)

Rather than simply introducing survey research methods, management experts who are adjunct professors at Columbia U. provide step-by-step guidance on a survey team approach to conducting effective organizational surveys and translating their findings into meaningful action for change. Chapters offer checklists, a comparison of paper vs. electronic methods, and ways of dealing with psychological and political barriers to success. Originally published in 1998 by Gower Pub. Limited, England. Annotation c. Book News, Inc., Portland, OR

Product Details

  • ISBN-13: 9780787956776
  • Publisher: Wiley
  • Publication date: 3/28/2001
  • Series: Business and Management Series
  • Edition description: 1ST JOSSEY
  • Edition number: 1
  • Pages: 320
  • Sales rank: 642,545
  • Product dimensions: 6.30 (w) x 9.00 (h) x 1.10 (d)

Meet the Author

ALLAN H. CHURCH is director of organization and management development at PepsiCo in Purchase, New York. He is also an adjunct professor at Columbia University and a distinguished visiting scholar in the College of Business, Technology, and Professional Programs at Benedictine University. Church is the author of numerous articles in professional journals and coeditor (with David Bracken and Carol Timmreck) of The Handbook of Multisource Feedback (Jossey-Bass, 2001).

JANINE WACLAWSKI is a principal consultant in the Management Consulting Services line of business at PricewaterhouseCoopers, LLP. She is an adjunct professor at Columbia University and has been an instructor at Hunter College of the City University of New York. Waclawski is a past recipient of the American Society for Training and Development's Donald Bullock Memorial Dissertation Award for her research on large-scale organizational change and performance.


Table of Contents

Tables, Figures, and Exhibits xiii

Foreword xvii
Allen I. Kraut

Acknowledgments xxi

The Authors xxv

Introduction 1

What Is a Survey? 4

A Brief History of Surveys 8

Contemporary Use of Surveys 10

Surveys in Contemporary Organizational Life 12

The Seven Steps to Effective Organizational Surveys 17

1. Step One: Pooling Resources 27

Setting Clear Strategic Objectives 31

Obtaining Commitment 38

Overcoming Resistance and Apathy 42

Maintaining Confidentiality 43

Deciding What Information to Collect 45

Balancing Priorities 46

Checklist for Step One 49

2. Step Two: Developing a World-Class Survey 51

Using a Survey Design Team 53

Gathering Preliminary Information 55

Identifying Key Issues 56

Discussing Your Findings 58

Drafting the Initial Survey Document 60

Piloting the Survey 84

Checklist for Step Two 87

3. Step Three: Communicating Objectives 89

The CPR Model of Organizational Communication 91

First Contact with Employees 96

Communicating the Survey 100

Sample Survey Introduction 106

Guidelines for Communicating to Employees 107

Recognizing Informal Systems 109

Checklist for Step Three 111

4. Step Four: Administering the Survey 113

Timing of Administration 114

Working with the Project Plan 116

Sample Versus Census 120

Methods of Administration and Data Collection 122

Paper Versus Electronic Methods: A Comparison 137

Response Rates 143

Learning While Doing 146

Checklist for Step Four 147

5. Step Five: Interpreting Results 149

The Role of Statistics 150

The Importance of Timing 154

Data Entry 159

Data Preparation 162

Item-Level Analysis 172

Conceptual-Level Analysis 178

Comparative Analysis 186

Content Analysis of Write-In Comments 193

Checklist for Step Five 199

6. Step Six: Delivering the Findings 201

Understanding the Roll-Out Process 203

Preparing the Survey Report 207

Balancing Expectations and Reality 225

Checklist for Step Six 227

7. Step Seven: Learning into Action 229

Using Surveys to Create Lasting Change 232

Barriers to the Transfer of Ownership 233

A Commitment to Action 239

Four Approaches to Survey Action Planning 241

Five Critical Factors That Determine the Success of Survey Action Planning 258

The Action Planning Process 259

Linking Survey Results to Other Measures of Performance 267

Building Systems for Evaluating Success 271

The Evolving Role of the Survey Practitioner 272

Checklist for Step Seven 276

References 279

Index 287


First Chapter

Note: The Figures and/or Tables do not appear on the Web.


Welcome to the information age. Although we may not like to think of ourselves as a collection of data points just waiting to be identified, gathered, and quantified in some controlled fashion, in large part that is what we are. Each of us comprises an endless supply of information, from the factual (date of birth, gender, ethnicity, religious background, education) to the attitudinal (preferences, dislikes, opinions). This is how various information systems perceive, understand, and ultimately define our existence.

Of course, the fact that our databases are continually growing and changing over the course of our lifespan makes it all the more difficult to understand ourselves. This process of understanding, however, through information gathering (data collection) and interpretation is one of the primary roles of organizations today. For many organizations these data-- our individual and collective experiences as human beings-- are among the most important sources of information to be harnessed. In fact, many e-businesses and Internet firms (even the more traditional organizations with their eyes on e-commerce) today are basing their corporate strategy around the prevalence of such information. Moreover, these data are the basis and lifeblood of such data gathering strategies as polling (on political preferences, television viewing preferences, and current topics), administering undergraduate and graduate record examinations, conducting the national census, and using targeted marketing for various products. Rest assured that if you are watching a particular program on television, any advertisements shown have been targeted at your segment of the marketplace. In today's burgeoning information society, understanding the effective use of data collection and interpretation through survey methodology in organizations presents one key means of increasing our understanding of the human experience.

In the organizational context, surveys play an important role in helping leaders and managers obtain a better understanding of the thoughts, feelings, and behaviors of their own employees and of their customers. In fact, surveys are among the most widely used techniques in contemporary organizations for gathering data from a large number of people in a short amount of time. More than 70 percent of U.S. organizations today survey their employees, either on an annual or a biannual basis (Paul and Bracken, 1995). The trend toward survey use appears to be going up rather than down (Kraut and Saari, 1999). For example, although the Mayflower Group-- one of the elite survey consortia populated by the top-ranked organizations in their respective industries-- contained only fifteen charter members at its inception in 1971, the group boasted a membership of forty-two in 1995 and a growth rate of 40 percent since 1985 (Johnson, 1996). Similarly, presentations and research at professional conferences on surveys and their applications have been consistently popular over the years. Moreover, in the 1980s and 1990s organizational surveys have moved from being the sole province of institutions that are academic (for example, the Bureau of Applied Social Research at Columbia University) and research-based (for example, the Census Bureau and Gallup) to either internal organization development (OD) and human resource development (HRD) functions or external consulting firms versed in applied research methods such as those made up of industrial-organizational (I-O) psychologists.

The popularity of the survey process in organizations can be traced in large part to two main factors: (1) people usually like surveys and (2) surveys are easy to conduct.

As for the first point, surveys have a broad-based appeal and carry an implied sense of legitimacy. And they are viewed by many people as being a democratic, fair, and typically confidential means of assessing a wide range of opinions. Although prior experience with poorly implemented organizational surveys may have left some people disillusioned (we will return to this issue in a later section), most people like the idea of being asked their opinions, thoughts, and ideas. It is human nature.

As for the second point, surveys do compare favorably with other methods in ease of use and basic effectiveness. The comparative costs of alternatives such as one-to-one interviewing, observation, and focus groups, for example, have not escaped either practitioners or those commanding organizational budgets. Large-scale survey efforts are not cheap, but they are considerably less expensive, and more reliable, than any other approach currently available for gathering data from large numbers of people. Furthermore, depending on the use to which the survey effort is directed, the costs may seem trivial in light of the value of the information obtained.

Despite the inherent popularity and widespread use of survey methods in organizations today, practitioners are still in need of guidance regarding how to effectively implement and manage the entire survey process-- one that continues to grow and develop. In fact, organizational surveys and ways to base action on survey results continue to be popular subjects at professional conferences, workshops, and survey consortium meetings (see, for example, Waclawski and Church, 1999, 2000). In short, although it may appear to be relatively easy to generate some interesting questions, send them to people, and ask for their answers, the survey process is a highly complex and situationally dependent one that is in need of careful management.

Many factors must be considered before a survey can be used as an effective tool for the organization. Some of these are as follows: securing the necessary resources and political backing from the organization, developing questions that appropriately reflect the specific purpose to which the survey effort is directed, understanding the nature and content of the communication process, working through resistance and feedback, interpreting the survey's results in a meaningful and effective manner, and working with those results throughout the organization.

Many good books have been written about the specifics of survey research and its methodology-- its sampling schemes, item construction, response theory, and multivariate analysis. However, these books are not usually designed to provide the organizational practitioner with a clear, concise, and pragmatic working guide for how to go about doing a survey. This book is intended to fill that void. In short, our aim is to supply HRD, I-O, and OD practitioners with an easy-to-use, practical, hands-on guide to conducting successful organizational surveys.

In contrast to the more academic resources, this book was written primarily for the organizational practitioner who wants help in conducting surveys. Therefore, wherever possible we have made use of real situations and learnings from actual large-scale survey efforts conducted in organizational settings to enhance our points. This book should prove most useful for anyone involved in the use or implementation of large-scale organizational surveys, including professionals in human resource management, organization development, communications, training and development, and I-O psychology, as well as leaders and managers working with others to implement surveys (or the results of such efforts) in their own organizations.

First, however, let us define what we mean by the term survey and describe the type of surveys to which we will be referring throughout this book.

What Is a Survey?

A survey can be loosely defined as any process used for asking people a number of questions (general or specific) to gain information. The information can be factual or attitudinal, or it can be designed to assess an individual's beliefs or judgments (Schuman and Kalton, 1985). On the surface this definition seems acceptable. It carries with it the basic elements of a survey, and many different types of data collection could be classified as such. For example, a series of telephone interviews conducted in a particular locality asking homeowners about the quality of their refuse collection would be considered a survey under this umbrella, as would a questionnaire distributed at the copy machine regarding the quality and reliability of its performance. Some authors would agree with such classifications, but this definition of survey is far too broad to be considered useful here. Given this book's concentration on the HRD, I-O, and OD practitioner, we have chosen to define our use of the term organizational survey as follows:

SURVEY: a systematic process of data collection designed to quantitatively measure specific aspects of organizational members' experience as they relate to work.

In most cases, this type of organizational survey involves the use of a standardized questionnaire (either in paper or electronic form) containing a series of items and associated response scales. Such a survey could also be conducted using an automated telephone system, also known as a voice response unit. Although individual interviews could be conducted as well, that would fundamentally defeat the advantages of the large-scale approach. Aside from the obvious emphasis on organizational members and settings contained in the definition, a number of unique elements should be highlighted.

First, we are concerned primarily with a systematic process for conducting surveys. This is not to say that a simple opinion survey could not be quickly thrown together and administered, and yield meaningful findings. It can. More often than not, however, some important element is missed, and the results end up being obtuse or uninterpretable, or the questionnaire is administered to the wrong sample or possibly even contains the wrong items. Any survey effort, whether large or more moderate in scale, should be taken seriously by those administering it. It certainly will be by those being questioned. This means that some type of planned, systematic approach should be adopted, and attention should be paid to effectively managing each of the main phases of the survey process.

It may seem simplistic, but remember that the purpose here is data collection designed to quantitatively measure something. Although we will try not to inflict on the reader all the statistical terminology that normally accompanies such an emphasis, the quality of the specific questions included on the survey instrument and the manner in which they are displayed and administered both have an effect on the quality and quantity of responses returned. For example, the level of detail obtained for a question using a 3-point scale (for example, agree, neutral, disagree) will be vastly different from the same question using a 7-point scale that uses more gradations in meaning. Sometimes these effects simply offset each other and are worth knowing about only to be informed, but in other cases some simple changes can have a drastic impact on the information received. A case in point: a simple wording change in a survey item from "generates creative solutions" to "creates solutions" alters the meaning of the statement completely.

Another issue related to data and measurement is the emphasis placed on quantification. More specifically, although survey efforts can contain many different types of questions (scaled items, write-in comments, forced-choice options), the value of a large-scale survey effort is that it provides a significant quantity of responses from people on the exact same question with the exact same response options. Provided the question is clear, the resulting data are relatively easy to interpret, work with, and analyze. To sum it all up, there is an old adage in the survey (and consulting) business that says, "You get what you measure" or, more crudely put, "Garbage in, garbage out." In the application of organizational surveys, these sayings hold true as well.

The phrase specific aspects of organizational members' experience in the definition refers to the type of information or data to be collected, which brings us back to the purpose of the survey effort itself. The following questions highlight the different kinds of topics that can be assessed in a survey:

Are you interested in knowing whether employees feel empowered in their jobs?

Do you want to know which types of communication systems are most and least effective for communicating different types of messages?

Are managers behaving in ways that reinforce the new mission and vision of the organization?

How satisfied are employees in their jobs?

What are the barriers to enhancing employees' performance?

Do employees at all levels of the organization understand and commit to the stated mission or vision of the company?

What are employees' perceptions about compensation and benefits?

Do employees think that organizational changes are occurring too quickly or not quickly enough?

Is the current organization structure one that facilitates the completion of work?

What is the perception of the organization's senior leadership team?

This list contains only a small sample of the types of questions that can be included in an organizational survey. The potential list is limitless; in fact, it is only constrained by the experience (for example, content knowledge, background, formal education, and prior work with surveys) of the survey developer and the scope of the project.

We discuss ways to identify survey objectives in the first chapter, but the message here is that people behind the survey effort need to be clear about (1) what kind of information they want, (2) how they want to assess it using a questionnaire methodology, and (3) what they intend to do with that information when it has been collected. All this has a significant bearing on the specific aspects of the survey.

Based on this brief list of possible topics for an organizational survey, it should be clear to the reader that surveys, even as we have defined them here, can serve a multitude of purposes in organizations. This is due, in large part, to the variety of sources from which our contemporary approach to conducting and using surveys developed. The following section provides a brief history of these sources and influences, followed by a more detailed discussion of the general uses of organizational surveys in contemporary organizational life.

A Brief History of Surveys

Given the popularity and widespread use of surveys in most organizations today, it may be surprising for some people to note that surveys (as we know them) were not used extensively in organizations until the post--Second World War era. The use of surveys to assess employees' thoughts and opinions, which seems quite straightforward and natural today, evolved as an offshoot of a variety of factors, which we discuss later. Until recently, however, survey techniques and usage resided primarily within the academic, military, and political realms. Nevertheless, the basic premise of survey methodology has existed for a very long time. In fact, if we consider the more general survey definition previously discussed (a survey is any process used for asking people a number of questions, general or specific, to gain information), it is apparent that surveys have existed since the beginnings of formal language. The first recorded use of surveys, for example, dates back to the ancient Egyptians, who are credited with the establishment of the census process for counting the number of inhabitants (Babbie, 1973). We also know that the Romans used crude survey techniques to find out how many people and of what types lived in their great cities.

Despite its ancient heritage, the survey as a formal methodological approach for collecting data in organizations did not gain widespread acceptance until the 1950s. One of the most significant contributors to the contemporary use of survey feedback is the early and groundbreaking work by researchers such as Samuel Stouffer and Paul Lazarsfeld. Their efforts are generally credited for the acceptance, popularity, and, above all, quality of surveys today (Babbie, 1973; Higgs and Ashworth, 1996). These researchers concentrated on developing and refining survey methods and analyses to improve empiricism in the social sciences. Their contribution to the field can be traced to their use of survey methods to examine significant social issues of the time, such as the effects of the Great Depression on people's well-being, the status of blacks in the 1930s, the effects of McCarthyism, and the effects of social factors on the formal presidential voting process. These individuals examined various social, political, and economic factors in America using a large-scale survey technique. Lazarsfeld is also credited with establishing the first academic center for survey research-- the Bureau of Applied Social Research at Columbia University (Babbie, 1973).

Surveys have been used throughout the development of Western civilization to gather many different types of information, from people's socioeconomic status, annual income, and place of residence to their opinions about political leaders, religion, capital punishment, and consumer preferences. Before gaining prominence and widespread exposure in organizations, these tools were used extensively (and still are, for that matter) in three different arenas: political, economic, and social. Table I.1 shows a breakdown of the different types of survey data collected that reflect common emphases and applications for each of these three domains.

Many of these types of surveys are still commonly used today. For example, the U.S. election process (at the city, state, and federal levels) is in fact a very large-scale survey. Most people are eligible to vote, but only a certain sample usually participates (by voting) in the election of a given representative. This response set or sample is then used to determine which official will represent the entire population. Furthermore, at the federal level the construction of the electoral college mandates that each state receive a certain number of electoral votes, depending on the size of its population (based on census data). Similarly, many business organizations, such as IBM, have extensive research functions, with literally hundreds of professionals devoted to the sole purpose of surveying and analyzing market trends among current users and potential buyers of their products. Collecting data on people's responses to new advertising campaigns is also a common practice among such organizations, particularly given the exorbitant costs associated with running such spots on national television and in popular magazines and newspapers. Last but not least, the social sciences are anything but inactive in the area of current survey usage, both with respect to examining social issues (the province of many sociologists) and to the various organizational and related social psychological applications.

Contemporary Use of Surveys

These days a number of organizations base their entire existence on their survey practice. A.C. Nielsen and Arbitron, for example, are two of the largest private sector survey firms that use surveys to estimate television viewing audiences (Rossi, Wright, and Anderson, 1983). Similarly, academic institutions such as the National Opinion Research Center (NORC), as well as popular media surveys like the CBS--New York Times poll and the Gallup organization, have done a great deal to promote surveys in the eyes of the general public. Many management and organization consulting firms today specialize in conducting or providing advice regarding organizations' surveys, and many of these firms were founded or are staffed by HRD, I-O, and OD professionals who previously held internal positions in survey departments.

Many factors have served to increase the acceptance and usage of employee survey approaches in the world, but the birth of the Mayflower Group-- a consortium of forty-two blue-chip companies-- in 1971 (Johnson, 1996) marked a significant turning point in the history of organizational survey research. This professional group was developed specifically to advance the practice of opinion surveying in organizations by sharing best practices and normative data among firms. This highly unusual (in the business world at least) process of sharing information has led to the establishment and maintenance of a database for benchmarking purposes across organizations. The formation and continued existence of this consortium has shown the willingness of (and the trust required for) companies to exchange potentially sensitive information-- a positive trend, given the continued competitiveness of the world marketplace. The participation at various times by highly profitable and well-respected companies such as IBM, Sears, Xerox, 3M, Merck, Johnson & Johnson, GTE, and Du Pont, just to name a few, not only demonstrates the importance attached to survey research in top-tier organizations but serves to place a seal of approval on the survey process in general. It also sets an example for other companies to follow. In fact, in the last few years another group of cutting-edge technology firms, including Cisco, Intel, Dell, Unisys, Gateway, IBM, Microsoft, NCR, Nortel, Sun Microsystems, and SAP, have formed their own consortium called the Information Technology Survey Group (ITSG). The consortium is aimed at sharing leading-edge survey practices in high-tech companies (http://www.itsg.org).

Although the approaches to survey and data-feedback methods are somewhat more advanced in these companies (Waclawski, 2000) compared with many more "typical" organizational approaches, a host of challenges must always be overcome in any survey effort. For example, although speculating about future trends and applications is always questionable, even when employees are responding to surveys on their personal digital assistants (PDAs), cell phones, or some new and as yet unforeseen information and communication tool, many of the central problems inherent in designing, delivering, and using the results of an effective survey will remain the same. In short, it is clear that surveys themselves, both in and out of organizations, are here to stay.

Surveys in Contemporary Organizational Life

Why do surveys continue to be so popular in organizations? One of the likely reasons is the diversity of applications to which the results and even the process of a survey effort can be directed. For the HRD, I-O, and OD practitioner, surveys provide a myriad of possible uses and can sometimes simultaneously serve a number of different objectives. Some of the more significant categories of uses include

  • To understand and explore employee opinions and attitudes
  • To provide a general or specific assessment of the behaviors and attributes inherent in employees' day-to-day work experience
  • To create baseline measures and use these for benchmarking various behaviors, processes, and other aspects of organizations against either internal or external measures
  • To use the data for driving organizational change and development

Each of these applications will be described in greater detail. It should be noted before continuing, however, that the categories are not mutually exclusive. In fact, survey efforts usually involve a combination of these different objectives.

Traditionally, and in their early use in organizations, surveys had been concentrated on assessing the opinions, attitudes, and beliefs of organization members. Early applications of this approach involved attempts to gauge workers' knowledge of and interest in potential unionization efforts, among other topics. However, more contemporary examples of this type of objective include measuring such individual and personal beliefs and feelings as employee satisfaction, empowerment, organizational commitment, autonomy, work-life balance, pride in the company, and perceptions of fairness and equity in standardized policies, systems, and procedures.

Other types of questionnaires have been designed to measure more involved and detailed topics such as rankings of employee assistance programs for desirability (given a list of types from which to choose), opinions of the quality of training and development programs, reactions to various messages and strategic initiatives, and external perceptions of product or service quality gathered directly from customers or clients. All these types of data can be extremely useful for planning at every level, from the most senior strategic perspective to the extremely tactical implementation of a compensation and benefits program at a local department level. The one caveat, especially with the measurement of employee opinions, is that one needs to be prepared to openly acknowledge and ultimately attempt to deal with the issues raised. One of the most difficult problems that survey administrators face is finding a "bad" outcome on an important item (for example, employee motivation or morale), only to realize that no one in HR or senior management wants to take up the issue with employees. This "duck and cover" approach to dealing with survey findings often results in the alignment of many negative forces against any current and future survey efforts that might be undertaken.

A second type of survey objective concerns the assessment or measurement of more specific behaviors and conditions that exist in organizational life. Most survey efforts are a combination of this and the opinion approach; however, assessment surveys differ in that they involve identifying certain observable, behavioral tendencies that can be accurately rated by employees. Because the assessment of behavior involves the observation of various individuals engaging in these behaviors, this type of assessment typically involves questions pertaining to the actions of immediate managers, functional, divisional, or business unit managers, and senior managers and executives. We discuss the details of developing and using different types of items in a later chapter, but here are some sample items of this nature:

Please rate the extent to which . . .
Senior management is consistent in word and deed
Senior management communicates with employees at all levels
Your manager rewards and recognizes people in your work unit
Your manager provides you with the information you need to do your job

These types of data are intended to provide more specific and actionable information than the opinion perspective alone. Attitudes and opinions are helpful; however, they are not as easy to act on at the individual level. You cannot say to a manager, for example, "Make your employees feel more satisfied" without knowing what conditions will lead to employee satisfaction. Through the use of surveys and data analysis, one can identify what types of specific behaviors or working conditions need to be changed or reinforced, which will ultimately lead to that employee feeling more satisfied. This is a somewhat subtle distinction but a very important one nonetheless. In many ways, these two types of survey objectives represent the distinction between a survey that simply captures a picture of the present state and one that can be used for diagnosing problems and effecting significant organizational change.
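The linkage analysis implied here can be sketched in a few lines of code: correlate employees' ratings of a specific manager behavior with their overall satisfaction to see whether that behavior looks like a promising lever for change. This is a hypothetical illustration; the items, data, and use of a simple Pearson correlation are our own invention for the sketch, not results from an actual survey.

```python
from statistics import mean

def pearson(xs, ys):
    # Pearson correlation between two equal-length lists of ratings
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 5-point ratings from ten respondents
manager_informs = [4, 5, 3, 4, 2, 5, 4, 3, 2, 5]  # "My manager provides the information I need"
satisfaction    = [4, 5, 3, 4, 2, 4, 5, 3, 1, 5]  # "Overall, I am satisfied with my job"

r = pearson(manager_informs, satisfaction)
print(f"correlation = {r:.2f}")  # a high value suggests this behavior is worth targeting
```

In practice such analyses involve many items, multivariate methods, and proper significance testing, but the underlying logic is exactly this: tie an actionable behavior to the attitude you want to move.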

Another popular use and objective of survey methodology is its contribution to the benchmarking process. Benchmarking is a means of comparing survey results from one's own organization with some predetermined measure (benchmark) to identify relative strengths and weaknesses. Many practitioners think of benchmarking in terms of external indicators, but it is also acceptable to use survey data as an internal benchmark, both with respect to other functions, divisions, and work processes at the same point in time and over time through the use of repeated survey administrations.

In its simplest form, for example, a company can use an initial effort to establish a baseline measure against which future survey results can be tracked. Therefore, a benchmarking survey can be used to assess improvement or decline over time on the specific areas it has been designed to measure. Similarly, by incorporating a range of responses for the highest- and lowest-performing units (for example, departments or work teams) suitable for comparison with the present level of results, the survey client can gain an immediate understanding of the areas in which he or she excels relative to the rest of the organization and those that could benefit from additional support and learning through the exchange of best practices from within.
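In code, this internal benchmarking logic is little more than comparing each unit's mean score on an item to the baseline established in the first survey wave. The department names, scores, and baseline value below are invented for illustration:

```python
# Hypothetical internal-benchmarking sketch: compare each unit's mean
# on an item against the organization-wide baseline from the first survey.
baseline = 3.6  # org-wide mean on "communication" from the baseline wave (invented)

unit_means = {  # current-wave means by department (invented data)
    "Sales": 3.9,
    "Operations": 3.2,
    "Finance": 3.7,
    "R&D": 3.1,
}

# Rank units and flag each as above or below the baseline
for unit, score in sorted(unit_means.items(), key=lambda kv: kv[1], reverse=True):
    delta = score - baseline
    label = "above baseline" if delta > 0 else "below baseline"
    print(f"{unit:<12} {score:.1f}  ({delta:+.1f}, {label})")
```

The same comparison applied to an earlier wave of one's own data yields the trend-over-time view; applied to the highest-performing internal unit, it identifies candidates for best-practice exchange.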

In addition to internal benchmarking, many organizational members are interested in knowing how their individual and collective data compare with those of other organizations. These external indicators can range from the most competitive firms within their own industry to companies in entirely different industries but with similar types of processes or those facing similar issues. When using competitors within the same industry for benchmarking purposes, survey clients are typically interested in knowing how they rate with respect to their competition on certain areas of organizational functioning. In other words, they want to know how they rate against the competition in the areas that they feel are necessary for success. When using organizations in different industries as points of comparison, the purpose is often to see how one's own company fares outside its industry as an indication of its overall competitiveness, irrespective of industry type. Given the increasing tendency for organizations to span national boundaries and compete in different markets, this approach has more credibility than it once did. The previously mentioned Mayflower Group represents one such type of comparison. Members in this group receive norms on a number of standard items that are classified by and across industry type.

Perhaps of most concern to HRD, I-O, and OD practitioners is the use of organizational surveys for the express purposes of organizational change and development. Practitioners have long acknowledged that data-based feedback is one of the most powerful means of effecting change (Nadler, 1977); recent studies based on individualized multisource feedback methods have supported this contention (Atwater and Yammarino, 1992; Church, 1994a, 1997, 2000; Church and Bracken, 1997; Church and Waclawski, 1998a, 1999; Furnham and Stringfield, 1994; Van Velsor, Taylor, and Leslie, 1993). During the 1990s the use of survey data for organizational change became increasingly popular, and a number of external consulting firms have now begun to specialize in these services. The basis of the questions themselves is often similar to or the same as those described earlier. However, there is a fundamental difference in this approach in that the survey process is seen as only a part of a larger change initiative involving other (complementary) methods. Furthermore, the survey questionnaire is seen as both a means of communicating what is important to the organization, particularly if some aspect of behavior change or a change in strategic direction is required, and a way to identify, link, and leverage key variables that lead to desired end states in the organization (such as increased morale) to those who cause or drive them. This type of analysis, although complex to conduct and interpret, represents perhaps the most powerful and potentially effective application of survey results. In effect, individual opinions and assessments of workers' behaviors are used to identify and drive those organizational changes that will have the greatest impact on future behavior and success. 
Surveys, if concentrated on this objective, can provide HRD, I-O, and OD practitioners and organization leaders with important information about employees' perceptions of change and their readiness for it, as well as many other issues that can affect the success or failure of large-scale change initiatives (Church, Margiloff, and Coruzzi, 1995; Waclawski, 1996a).

The Seven Steps to Effective
Organizational Surveys

If we have made our point so far, it should be clear that although almost anyone can participate in a survey effort, a number of significant issues and complexities must be managed if the outcome is to be a positive one. In the following text you will be introduced to the seven steps, or phases, involved in implementing an organizational survey. Figure I.1 shows the stages. Following is an overview of what each of the steps entails.

Step One (Pooling Resources) focuses on the process of pooling resources in the early stages of a survey. As in any sizeable organizational initiative or intervention, one must always begin by laying the appropriate groundwork with all the right people. Gaining substantive input and cooperation from all key parties is never an easy task, but without the appropriate support and resources, most survey efforts fall short of expectations with respect to impact. In order to satisfy the needs of important constituencies and the ultimate end users of the survey, involvement early on is crucial.

In this chapter we discuss how to set the stage for a successful organizational survey effort, including the following: (1) how to set clear strategic objectives regarding the purpose and uses of the survey process itself, (2) how to obtain commitment from senior management as well as the rank and file of the organization, (3) how to identify and overcome negative energy or apathy due to prior experiences, (4) who should be involved in the data collection effort, (5) what specific types of information are to be collected, (6) what types of additional information (for example, demographics or organizational characteristics) should be collected and at what levels, and (7) how to prepare the organization for the survey effort.

This first stage in the survey process comprises building alliances, support, commitment, and energy for the survey effort before it can really begin. In our experience it is this stage, more than any other, that will determine the ultimate success or failure of the survey effort with respect to its perceived viability as a worthwhile endeavor. In other words, organizational initiatives are often judged (fairly or not) by those people who stand behind them and those who do not.

Step Two (Developing a World-Class Survey) concentrates on the second stage in the survey process-- the fundamentals of the survey instrumentation itself. In this chapter we look at characteristics of the questions, the content, the response options and scales, the layout or presentation, and the formal instructions, to name a few topics. After a brief comparison of the pros and cons of using existing standardized instruments from other sources versus creating a customized survey tool, the discussion turns to the issue of design. Through examples and descriptions of prior research and experience, the survey practitioner is guided on the importance of the following: (1) using teamwork to build high-quality survey instruments; (2) gathering, identifying, and working through key issues that need to be assessed; (3) drafting a survey instrument; and (4) piloting and refining it for final administration. By the end of the chapter, practitioners should have a better understanding of how to write items that (1) concentrate on specific issues, (2) are clear and easy to respond to, (3) avert the typical problems often found in new questionnaires, (4) are free of jargon and cultural biases, (5) are methodologically sound, and (6) are appropriate for the level and type of readership at which they are directed.

The issue of response options or scales is also addressed, as are the pros and cons of using write-in comments to help augment and add spice to the more quantitative data. The use of frameworks, models, or organizing themes for the survey instrument to enhance respondents' understanding of and interest in completing the survey is also discussed, as are the benefits of exploring potential linkages ahead of time with other existing (or impending) organizational measurement or change initiatives.

In Step Three (Communicating Objectives) we turn our attention to one of the most simple-to-understand yet difficult-to-implement concepts in the survey process: communication. In this chapter we start with another old adage: "communicate, communicate, communicate." Many practitioners, managers, and leaders, for that matter, would agree with this sentiment, but few in our experience actually follow through on the edict. And yet it is one of the basic aspects of an effective survey process. In this chapter we look at the importance of communicating the purpose, objectives, and content of the survey initiative clearly and effectively to those involved in the data collection effort-- the employees completing the survey. Of course, this means gaining agreement first among those in power as to what the expected outcomes of the survey effort are. It also means laying the necessary groundwork to ensure that people agree with the survey's objectives and that they understand how the data are to be used, as well as issues of confidentiality. Thus, after a brief introduction and overview of the contents, processes, roles (CPR) model of organizational communication, this section explores each of the phases of communication in a survey effort from first contact to the formal information and messages provided with the survey instrument.

We also discuss general guidelines for communicating to employees, understanding the strengths and weaknesses of various mechanisms for sending messages, and seeing the need to balance the amount of information given so that it is neither too much nor too little. Also covered is the need to manage the informal communication system-- the grapevine. In some organizations, this can be a more powerful means of making or breaking a survey than anything the senior leadership or even the immediate manager says.

Besides these larger issues, the cover letter and the accompanying instructions on the instrument (how all these messages come together for the respondents) are discussed in this section. Although clear communication in and of itself will not save a bad survey effort, poor communication can kill a good one.

Step Four (Administering the Survey) covers what is for some the most mundane and for others the most stressful part of the survey effort: the formal administration process. In this chapter we discuss the details involved in carrying off a successful administration. This includes everything from the importance of establishing a clear, comprehensive, and reasonable project plan with appropriate milestones, checkpoints, and buffer areas for making up lost time when dates begin to slide (as they invariably do) to holding your internal or external clients' hands and allaying their fears and concerns regarding the process and the inevitable glitches.

We also cover the specific methods of and options for the administration itself. For example, what are the pros and cons of mailing the survey to each employee with return envelopes versus having mass administration sessions in large auditoriums with proctors? Who should complete the survey first? What types of data collection methods should be used? Organizational surveys typically conjure images of paper-and-pencil questionnaires with optical scan response forms, but with the advent of the information age (not to mention the popularity of the Internet and the prevalence of local intranets) a variety of alternatives for collecting survey data are now available. Some of these are as follows:

  • Transmission of responses to a computer by pressing keys on a telephone (also known as voice response units)
  • Individual computer-disk-based methods in which a response is made on a computer, and the disk is sent to someone else for processing
  • Various e-mail-based surveys that either have respondents reply directly in the body of the note or via downloadable templates or executables attached to various programs (such as Lotus Notes or Microsoft Excel) that are submitted to a central server once completed
  • Fax-back surveys for which paper responses are scanned automatically
  • Internet, intranet, and related on-line, Web-based response systems that capture the survey data immediately and often interactively

These alternatives are often exciting, particularly to those who are either enamored of computer technology or bored with the more traditional methods. Each method has its own strengths and weaknesses, however, and we highlight these where appropriate, noting their effectiveness and usefulness in certain situations. Although electronic survey methods are increasing in popularity, both in terms of practice and as an applied research topic, pencil-and-paper methods remain the most commonly used in organizations, for reasons that will become clear in our discussion. Step Four also covers the importance of having a continuous learning and process improvement orientation to working with surveys in organizations. This involves making effective use of feedback from the organization, particularly in the early stages of administration, to adjust and adapt the process to make it optimally effective in a given situation.

Next, in Step Five (Interpreting Results) we move to one of the most potentially complex and consequently misunderstood aspects of survey work. Once all the data have been collected, it is time to put them into that black box, as our clients sometimes call it, and analyze the results. Most people, especially those with advanced degrees, can calculate an average value from a series of responses without having done a good deal of previous survey work or having had experience in applied research on large-scale data sets. But it is a more difficult and refined skill to pull all the results together into a cohesive yet statistically supported story about what is occurring in an organization. Many different stories and themes always emerge in any survey of a large population. However, we are concerned here primarily with the first wave of analysis-- the one presented first to the survey client or the senior management of the organization (other types of subsequent analytic work will be discussed in Step Six). Such an effort requires the practitioner to identify the main issues and important relationships among a mass of data in what is often the shortest timeframe of the entire survey effort. This happens because once the survey has been sent out and people start responding, everyone wants to know the results as soon as possible. It can take five months to develop the appropriate questions for use in the questionnaire itself, but the time from final data collection to the first reporting process must be only a few short weeks for the results to be meaningful and have the appropriate credibility upon which to take action.

Moreover, in some organizations results are expected in a matter of days, or even overnight, as in the case of survey efforts at Federal Express. The point is that appropriate timing is everything. Time and time again we have seen a senior management team receive their initial top-level results (the big picture) shortly after final administration, only to be so disturbed by the findings that they asked that we withhold the feedback from employees for several months. In cases like that, the delay makes the results far less relevant and robs them of impact when they are finally communicated, not to mention the damage such actions do to the credibility and utility of future survey efforts. In this section we discuss such topics as how to make a compelling story of a large collection of numbers with and without advanced statistics, how to balance expectations with the realities inherent in the data, how to work with normative and benchmarking data so that they assist in interpreting the results rather than becoming the focal point, and how to use write-in comments to enrich the data and presentation. The use and abuse of the benchmarking process is discussed in greater detail in this section as well.
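As a minimal sketch of what a first-wave, top-level summary might compute, the fragment below reports each item's mean and "percent favorable"-- a common reporting convention meaning the share of 4s and 5s on a 5-point scale. The items and responses are invented for illustration, not taken from any actual survey:

```python
from statistics import mean

# Hypothetical first-wave summary: per-item mean and "percent favorable"
# (responses of 4 or 5 on a 5-point scale), a typical top-level survey report.
responses = {  # item -> list of individual 1-5 ratings (invented data)
    "I understand our strategic direction": [4, 5, 3, 4, 4, 5, 2, 4],
    "I receive useful feedback":            [2, 3, 3, 2, 4, 3, 2, 3],
}

for item, ratings in responses.items():
    favorable = sum(r >= 4 for r in ratings) / len(ratings)
    print(f"{item}: mean {mean(ratings):.2f}, {favorable:.0%} favorable")
```

The numbers are trivial to produce; the refined skill the text describes lies in weaving many such figures, their interrelationships, and the write-in comments into one credible story within a few weeks.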

Step Six (Delivering the Findings) is concerned with the delivery of the survey results to organizational members, both in various forms and throughout different levels. In this chapter the emphasis is on picking a strategy for delivering the feedback to all those involved. This strategy, often described as a roll-out process, is conducted on a gradual management-level-by-management-level basis. It is important to note that the approach advocated here places a greater emphasis on the appropriate delivery of results vis-à-vis helping organizational members work with their data interactively to promote understanding and plan for change rather than on quickly supplying everyone in the organization with a copy of their results without interactive coaching support. The former is primarily an OD approach and has the potential for catalyzing energy for improvement; the latter is what we refer to as "the desk drop" and will only be as effective as the motivation level of the individual to whom the results are given.

In some ways this stage represents the second half of the analysis process, whereby the data are reexamined at lower levels and for specific groups, functions, departments, comparisons, or segments to look for similar or different stories in their specific findings. For example, in a large-scale organizational survey it is possible not only to provide reports for every department with a certain number of people responding but also to provide a report that compares how several different departments rated one another on service quality and cooperation internally. A simple presentation of results can easily be undertaken without the benefit of subsequent interpretation, but to maximize effectiveness the roll-out process should involve some degree of interpretative assistance built into the framework of the delivery vehicle itself. Of course, the timing as well as the complexity of the information delivered must also be carefully managed for the results to be meaningful. Just as waiting too long to provide any feedback is problematic, so too is "dumping" (for example, through a desk-drop approach) the entire results of the survey on all employees in some overly complex and underinterpreted fashion so that no one can understand what it all means. We have seen this happen in organizations' well-intentioned but misconceived attempts to be entirely open in survey communications.

Other issues with respect to feedback delivery to be covered in this section include tips for making formal presentations that have an impact, using organizational models and frameworks for describing linkages or relationships among key variables of interest, resisting when requests for additional data threaten the ethical integrity of the confidentiality norms established at the outset of the survey process, and knowing how to present good and not-so-good data in ways that recipients can accept.

Finally, Step Seven (Learning into Action) centers on the last stage of the survey process. Many clients and practitioners pay little attention to this phase once they reach it, feeling instead that when the survey is done and all the feedback is delivered, it can be forgotten. The fact is that this stage can make or break the survey effort. Even the best planning, the most well-constructed questionnaire, the most appropriately conceived and implemented administration process, the best analysis, and a variety of staged feedback reports will not be enough to make a survey effort valuable to the organization if the data are not used to drive change and improvement in the system or in people's day-to-day behaviors. Thus, in this chapter we discuss the importance of follow-through, including attention to common barriers to effective action planning, as well as a detailed comparison of four approaches to making full use of survey results to drive change in an organization. Regardless of the objectives of the survey effort itself (for example, to gauge employee opinion, to assess behavioral tendencies, to communicate and reinforce the culture, to target areas for change and development initiatives), it is of paramount importance that the results be used by recipients to make decisions and take actions that will ultimately affect the organization's future. We are concerned here with planning for action, identifying areas for intervention and improvement, enlisting and involving others in the process, measuring progress over time through resurvey efforts, and linking survey results to other key measures of organizational performance. If the organization does not take ownership of the results, the data will have no meaning and therefore no impact. This occurs with many surveys in today's organizations.
It can also happen to new survey efforts that fail to receive adequate support from senior management and other key opinion leaders in the organization at the outset; these efforts become lame ducks. It can also happen to survey systems that have been in place for years, where engaging in the survey has created a routine process that employees do not trust, respect, or pay attention to but is nonetheless used by management as a "dipstick" for gauging employee opinion. Step Seven describes how to prevent this type of entropy in the survey process.
