COLLABORATIVE INTELLIGENCE: Using Teams to Solve Hard Problems
By J. RICHARD HACKMAN
Berrett-Koehler Publishers, Inc.
Copyright © 2011 J. Richard Hackman
All rights reserved.
Chapter One: Teams That Work and Those That Don't
It was not all that different from his regular work. Jim, an analyst at the Defense Intelligence Agency (DIA), looked around at the other members of his team. He knew two of them—another analyst from DIA and an FBI agent he had once worked with; the rest were strangers. The team's job, the organizer had said, was to figure out what some suspected terrorists were up to—and to do it quickly and completely enough for something to be done to head it off. Okay, Jim thought, I know how to do that kind of thing. If they give us decent data, we should have no problem making sense of it.
For Ginny, it was quite a bit different from her regular work as a university-based chemist. She had been invited to be a member of a group that was going to act like terrorists for the next few days. Ginny had not known quite what that might mean, but if her day of "acculturation" into the terrorist mindset was any indication, it was going to be pretty intense. She had never met any of her teammates, but she knew that all of them were specialists in some aspect of science or technology. She was eager to learn more about her team and to see what they might be able to cook up together.
Jim and Ginny were participating in a three-day run of a simulation known as Project Looking Glass (PLG). The brainchild of Fred Ambrose, a senior CIA intelligence officer, PLG simulations pit a team of intelligence and law enforcement professionals (the "blue team") against a "red team" of savvy adversaries intent on harming our country or its interests. A "white team"—a group of intelligence and content specialists—plays the role of the rest of the intelligence community. The charge to the red team was to use everything members knew or could find out to develop the best possible plan for doing the greatest possible damage to a target specified by the organizers—in this case, a medium-sized coastal city that was home to a large naval base. Members could supplement their own knowledge by consulting open sources such as the Internet and by seeking counsel from other individuals in their personal or professional networks. But what they came up with was to be entirely the product of team members' own imagination and ingenuity.
To help them adopt the perspectives of those who really are intent on doing damage to our country, red team members spent a day of acculturation. It was like an advanced seminar on terrorism, Ginny thought. Team members heard lectures from both scholars and practitioners on everything from the tenets of radical Islamic philosophy to the strategy and tactics of terrorist recruitment. By the end of the day, Ginny was surprised to find herself actually thinking and talking like a terrorist. Her red teammates seemed to be doing the same.
Ginny and her teammates were aware that the blue team would have access to a great many of their activities—they would be able to watch video captures of some of the red team's discussions, tap into some of their electronic communications and Internet searches, and actively seek other data that might help them crack whatever plot they were hatching. The blue team also had heard lectures and briefings about terrorists, including specific information on the backgrounds and areas of expertise of red team members. Jim found these briefings interesting, but mostly he was eager to get beyond all the warm-up activities and into the actual simulation. And, by the beginning of the second day, the game was afoot.
The start-up of the red and blue teams could hardly have been more different. The red team began by reviewing its purpose and then assessing its members' resources—the expertise, experience, and outside contacts that could be drawn upon in creating a devastating attack on the coastal city. Members then launched into a period of brainstorming about ways the team could use those resources to inflict the greatest damage possible and, moreover, do so in a way that would misdirect members of the blue team, who they knew would be watching them closely.
The blue team, by contrast, began by going around the room, with each member identifying his or her back-home organization and role. Once that was done, it was not clear what to do next. Members chatted about why they had chosen to attend the simulation, discussed some interesting issues that had come up in the previous day's lectures, and had some desultory conversations about what it was that they were supposed to be doing. There were neither serious disagreements nor signs of a struggle for leadership, but also no discernible forward movement.
Then the first video capture of the red team at work arrived. The video made little sense. It showed the team exchanging information about each member's special expertise and experience, but nothing they said was about what they were actually planning to do. Assured that nothing specific was "up," at least not yet, blue team members relaxed a little. But it was frustrating not to have any hard data in hand that they could assess and interpret using their analytic skills and experience.
As blue team members' frustrations mounted, they turned to the white team—the broader intelligence community. To obtain data needed for their analytic work, including information about some of the activities of the red team they had seen on the video, blue team members were allowed to submit requests for information (RFIs) to the white team. Some RFIs were answered, sometimes immediately and sometimes after a delay; others were ignored. It was, Jim thought, just like being back at work.
By early in the second day of the simulation, the red team had turned the corner and gone from exploring alternatives to generating specific plans for a multipronged attack on the coastal city and its environs. Now blue team members were getting worried. They finally realized that they had no idea what the red team was up to, and they became more and more frustrated and impatient—with each other, certainly, but especially with the unhelpfulness of the white team. So the team did what intelligence analysts often do when frustrated: they sought more data, lots and lots of it. Eventually the number of RFIs became so large that a member of the white team, experiencing his own frustration, walked into the blue team conference room and told members that they were acting like "data junkies" and that they ought to slow down and figure out what they actually needed to know to make sense of the red team's behavior.
That did not help. Indeed, as accurate as the accusation may have been, it served mainly to increase blue team members' impatience. As tension escalated, both negative emotions and reliance on stereotypes also increased—stereotypes of their red team adversaries, to be sure ("How could that weird set of people possibly come up with any kind of serious threat?"), but also stereotypes of other blue team members. Law enforcement and intelligence professionals, for example, fell into a pattern of conflict that nearly incapacitated the team: When a member of one group would offer a hypothesis about what might be going on, someone from the other group would immediately find a reason to dismiss it.
Things finally got so difficult for the blue team that members could agree on only one thing—namely, that they should replace their assigned leader, who was both younger and less experienced than the other members, with someone more seasoned. They settled on a navy officer who was acceptable to both the law enforcement and the intelligence contingents, and she helped the group prepare a briefing that described the blue team's inferences about the red team's plans. The briefing would be presented the next day when everyone reconvened to hear first the blue team's analysis, and then a presentation by the red team describing what they actually intended to do.
The blue team's briefing showed that members had indeed identified some aspects of the red team's plan. But blue team members had gotten so caught up in certain specifics of that plan that they had failed to see their adversaries' elegant two-stage strategy. First there would be a feint intended to misdirect first responders' attention, followed by a technology-driven attack that would devastate the coastal city, its people, and its institutions. The blue team had completely missed what actually was coming down.
Participants were noticeably shaken as they reflected together on their three-day experience, a feeling perhaps best expressed during the debriefing by one blue team member who worked in law enforcement: "What we saw here," he said, "is almost exactly the kind of behavior that we've observed among some people we are tracking back home. It's pretty scary."
* * *
The scenario just described is typical of many PLG simulations that have been conducted in recent years. Fred Ambrose developed the idea for this unique type of simulation in response to a congressional directive to create a paradigm for predicting technology-driven terrorist threats. The simulation is an upside-down, technology-intensive version of the commonly used red team methodology, with the focus as much on detecting the red team's preparatory activities as on determining its actual attack plans. Again and again, the finding is replicated: The red team surprises and the blue team is surprised. The methodology has proven to be so powerful and so unsettling to those who participate in PLG simulations that it now is being adopted and adapted by a number of organizations throughout the U.S. defense, intelligence, and law enforcement communities.
What accounts for the robust findings from the PLG simulations, what might be done to help blue teams do better, and what are the implications for those whose real jobs are to detect and counter terrorist threats? We turn to those questions next.
Why Such a Difference between Red and Blue Teams?
How are we to understand the striking differences between what happens in red and blue teams in PLG simulations? Although there is no definitive answer to this question, there are at least four viable possibilities: (1) it is inherently easier to be on the offense than on the defense, (2) red teams are better at identifying and using the special expertise of both their members and outside experts, (3) prior stereotypes compromise the ability of blue teams to take what they are observing seriously and deal with it competently, and (4) red teams develop and use more task-appropriate performance strategies.
OFFENSE VS. DEFENSE. An obstacle that many intelligence teams must overcome is that they are, in effect, playing defense whereas their adversaries are playing offense. Data from PLG simulations affirm the observations of intelligence professionals that offense usually is considerably more motivating than defense. It also is much more straightforward for those on offense to develop and implement an effective way of proceeding. Even though offensive tasks can be quite challenging, they require doing just one thing well. Moreover, it usually is not that difficult to identify the capabilities needed for success. Those on defense, by contrast, have to cover all reasonable possibilities, which can be as frustrating as it is difficult.
The relative advantage of offense over defense is seen not just in intelligence work but also in a wide variety of other activities. A football team on offense need merely execute well a play that has been prepared and practiced ahead of time, whereas the defenders must be ready for anything and everything. A military unit on offense knows its objective and has an explicit strategy for achieving it, whereas defenders cannot be certain when the attack will come, where it will occur, or what it will involve. As physicist Steven Weinberg has pointed out, it is impossible to develop an effective defense against nuclear missiles precisely because the defenders cannot prepare for everything that the attackers might do, such as deploying multiple decoys that appear to be warheads.
Because athletic coaches and military strategists are intimately familiar with the difference between offensive and defensive dynamics, they have developed explicit strategies for dealing with the inherent difficulties of being on the defensive. The essential feature of these strategies is converting the defensive task into an opportunity to take the offense. According to a former West Point instructor, cadets are taught to think of defense as a "strategic pause," a temporary state of affairs that sometimes is necessary before resuming offensive operations. And a college football coach explained that a good defense is one that makes your opponents "play with their left hand." A "prevent" defense, he argued, rarely is a good idea, even when you are well ahead in the game; instead, you always should prefer an "attack" defense. These sentiments were echoed by a military officer: "Good defense is arranging your forces so your adversaries have to come at you in the one place where they least want to."
In the world of intelligence, there is an enormous difference between "How can we cover all the possibilities?" and "How can we reframe our task so that they, rather than we, are more on the defensive?" For all its motivational and strategic advantages, however, such a reframing ultimately would require far better coordination among collection, analytic, and operational staff than one typically sees in the intelligence community. Even with the creation of a single Director of National Intelligence, organizational realities are such that this level of integration may not develop for some time. In the interim, simulations such as PLG offer at least the possibility of helping those whose work involves defending against threats understand more deeply how adversaries think and act. Our observational data, for example, show that analysts who participate in PLG simulations do develop a capability to "think red" that subsequently serves them well in developing strategies that focus on the specific data most likely to reveal what their adversaries are up to.
IDENTIFYING AND USING EXPERTISE. To perform well, any team must include members who have the knowledge and skill that the task requires; it must recognize which members have which capabilities; and it must properly weight members' inputs—avoiding the trap of being more influenced by those who have high status or who are highly vocal than by those who actually know what they are talking about. Research has documented that these simple conditions are harder to achieve than one might suspect. People commonly are assigned to teams based on their organizational roles rather than on what they know or know how to do. Moreover, teams often overlook expertise or information uniquely held by individual members, focusing instead on that which all members have in common. Only rarely do teams spontaneously assess which members know what and then use that information in deciding whose ideas to rely on most heavily.
The challenge of identifying the expertise of team members and using it well is especially critical for those who would mount a terrorist attack since, as Fred Ambrose has pointed out in conversation, "It's not what they have in their pockets that counts most, it's what they have in their heads." The red teams in PLG simulations generally do a great job at using what is in members' heads. The teams are properly composed, to be sure: they consist of individuals who have in abundance the scientific, technical, and engineering skills needed to mount an attack in the setting specified in the simulation scenario. Almost all red teams also take the time to compare credentials early on so that everyone knows who has special expertise in what technical areas, which helps teams mold the details of their plans to exploit members' unique capabilities. And because red teams have both a clear offensive purpose and detailed knowledge of members' capabilities, they generally rely on the right members to address problems that come up as they formulate their plans. Finally, when red teams need knowledge or expertise that their members do not have, they are quick to turn to online sources or to their networks of colleagues to fill the gaps.
Blue teams in PLG simulations also are well composed. They consist of competent professionals from law enforcement, intelligence, and the military who make their livings finding, studying, and heading off individuals and groups who would do harm to the nation. (Red team members, by contrast, generally come from academia, industry, or the national laboratories and are not professionally involved in counterterrorism work.) Blue teams also exchange credentials shortly after they assemble, but these credentials are of a wholly different kind. Typically, blue team start-up involves each member identifying his or her home organization and role in that organization. Perhaps because the team's assigned task—to figure out what the red team is up to—is both defensive and a bit ambiguous, members do not know specifically what capabilities will turn out to be most relevant to the work. So they focus less on what members know how to do and more on the organizations where they work, which increases the salience of both their home organizations' institutional objectives and the methods they rely on to achieve them. Whereas early interactions in red teams pull people together in pursuit of a specific and challenging team purpose, early interactions in blue teams underscore the differences among members and tend to pull them apart.
Excerpted from COLLABORATIVE INTELLIGENCE by J. RICHARD HACKMAN. Copyright © 2011 by J. Richard Hackman. Excerpted by permission of Berrett-Koehler Publishers, Inc. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Table of Contents
Preface
Introduction: The Challenge and Potential of Teams
Part One: Teams in Intelligence
Chapter 1: Teams That Work and Those That Don’t
Chapter 2: When Teams, When Not?
Chapter 3: You Can’t Make a Team Be Great
Part Two: The Six Enabling Conditions
Chapter 4: Create a Real Team
Chapter 5: Specify a Compelling Team Purpose
Chapter 6: Put the Right People on the Team
Chapter 7: Establish Clear Norms of Conduct
Chapter 8: Provide Organizational Supports for Teamwork
Chapter 9: Provide Well-timed Team Coaching
Part Three: Implications for Leaders and Organizations
Chapter 10: Leading Intelligence Teams
Chapter 11: Intelligence Teams in Context
About the Author