Initial Design Decisions
There's an old saying, "Power lies not with those who have all the answers, but with those who have the questions." In building competency-based HR applications, where so many tasks are either new or are being accomplished in a very different fashion, a host of questions arises. The problem is that there may not be specific right or wrong answers to the questions. Instead, there are alternatives in designing the applications. The best that a competency expert can do is bring up the issues, the alternatives, the pros and cons, the resources required, and past results. Valid competency implementations have varied from a simple flat-file database designed in a morning on a PC to complex HR modules in an ERP system requiring millions of dollars and several years to install, and everything in between.
It is the answers to the design questions in this chapter that determine the scope and complexity of an application, and also its ultimate effectiveness. Addressing these issues now will eliminate many problems later on in the project. And it all starts with a very basic question.
Does the Organization Really Mean It?
Anything less than total commitment will doom a competency project. Sometimes "meaning it" is as simple as top management's stating that the organization is seeking ISO 9000 certification, or a national or state quality award. Because employee qualification processes are a standard quality requirement, competency-based HR applications are a mandate driven by audit standards. If workers understand that an ISO certification may be necessary to keep certain customers, and that a quality award helps in marketing the organization's products and services, then there is buy-in from top to bottom.
It is far more difficult in most cases. Competence is seen as a potential tool, but no one is clamoring for another HR program. Why? Because all too often they have been just programs, rather than strategic initiatives that drive business results. For example, the vice president of strategic planning for a large oil company was concerned about misguided HR projects. His biggest fear was that a new competency project would be one of those activities that sound like a good idea to somebody at the top but then become a total time-waster.
Some projects develop a life of their own. They put an organization into what Jerry B. Harvey calls "The Abilene Paradox."1 This is a situation where group agreement is mismanaged and organizations find themselves "on the road to Abilene." On this path, individuals take actions that are different than what they want to do and thereby defeat the purpose they are trying to achieve. Everyone may agree that a project is a waste of time, but no one will make the group deal with it. It is no wonder that workers and leadership alike are initially skeptical of new concepts such as competency-based applications.
The oil company vice president's latest albatross was a bottom-up, formalized strategic planning process requiring mandatory participation by everyone in the organization. Frontline teams met to determine their strategies. They then appointed representatives to meet and develop workgroup strategies, and so on. The result was a gigantic process generating lots of meetings, pages and pages of minutes, hundreds of sets of completed planning forms, and a steady income stream to the consulting group that had sold the project to top management.
The workplace is littered with the wreckage of organizational fads. A perfect example is the quality circles movement. In its eagerness to catch up with the Japanese, American management tried to implement quality improvement teams without the corresponding quality focus, training, and group consensus culture of the Japanese. The results in many companies were meetings where everyone sat around saying nothing. This early failure with quality circles made it much harder in many organizations for its relative, TQM, to succeed years later.
Organizations also have a collection of worthwhile projects that were killed by the "shark pool" of managers and supervisors who were required to execute them, or by the frontline employees who supposedly benefit. Without their buy-in, top management can make all the decisions it wants, but the project will fail.
The result of such false starts is a near-permanent cynicism about the usefulness of new projects. Employee reluctance becomes a self-fulfilling prophecy of project failure that dooms further efforts to get it right. Both management and workers are forever wary of resurrecting what Dilbert cartoonist Scott Adams dubbed "a dead woodchuck." All this can happen to a competency-based project without the proper organizational buy-in.
Some of the best competency-based decisions are those that were not made. There are a myriad of conditions that can derail a project. If the organization does not have total commitment top to bottom, if it cannot see the benefits and does not have the culture to support a competency philosophy, then the process should be halted. For competency-based HR applications, it is better to have never started than to begin and fail.
It is also difficult to dabble in competencies on a small scale. Competency-based HR applications address core organizational processes, such as hiring, development planning, and promotion of employees. With today's mobile workforce and changing job assignments, it is nearly impossible to isolate a single department or workgroup. An organization puts itself at legal risk when its methods for making personnel decisions vary from department to department. The standardization normally found in performance management must be duplicated in the area of competency assessment. Competency-based applications must be universal within the organization.
At the very least, top management has to really mean it. Leadership has to have the vision to drive the process and the stamina to complete it. Benefits have to be so clear that managers and supervisors willingly perform the extra work to be trained on the process and administer it. And frontline workers have to believe that the application will support them in their jobs by helping them:
* Get the development that they need to succeed
* Skip being trained on what they already know
* Verify what they can do
* Become qualified for the next promotion
Without this support at all levels, the development project is seriously compromised. Many experienced HR consultants will bypass business opportunities where top management is not fully supportive of competency applications.
To be effective, executive leadership must project an assumptive attitude about competency-based projects. There is no "if," just "how." Putting competencies into HR processes cannot be a tryout effort. Competence must be at the same strategic level as quality or service. These are not items for discussion. They are no longer competitive advantages. They are required to stay in the game. Every vendor must bring them to the table.
It is the same situation with workforce competence. The organization must go into the project with an attitude of "whatever it takes" to make competency-based applications work. Competence then becomes a condition of employment issue for managers and supervisors. They will complete competence-related leadership activities with subordinates. HR will utilize competency concepts in its training and development processes. Information Technology (IT) will build competence data into its knowledge management data architecture. Frontline workers will be responsible for completing assessments in an accurate and timely fashion. There is no alternative.
Is the Goal Quality or Excellence?
This is a question that often is not asked until late in the development cycle, when people start to wonder about what to assess and how it should be measured. The solution is driven by a question that should have been asked at the start of the project. What is the organization trying to accomplish? While the concept of improving individuals' performance is implicit in competency-based HR applications, it is not enough to proceed. If they are to design useful position competency models and assess qualifications accurately, organizations must understand whether they are striving for quality, for excellence, or for both. The first step is to differentiate these two terms.
Quality is often confused with excellence. Historically, quality has referred to expensive goods and services. A Cadillac was assumed to have more quality than a Chevrolet due to its higher price and luxury features.
Quality has a very different meaning in a TQM context. Quality expert Philip Crosby defines quality as "conformance to requirements." Crosby's quality is an absolute state. A product, service, or activity either meets standards or it does not. Crosby admonishes organizations to do things according to standards, or else change the standards. With this definition, a Chevrolet may represent as much quality as a Cadillac. It depends on how the two models were made in conformance to General Motors' specifications.
Excellence is a relative term. Excellent products or services excel or surpass others. Excellence is the state of being better than something else. It requires a comparison, not to a standard, but to similar items in a category.
One way to think of these terms is that quality is built in; excellence is designed in. Considering durability, the taxicab that is designed to last hundreds of thousands of miles is excellent compared with the household car. Considering comfort and features, the personal auto is excellent compared with the taxi. Yet both may be constructed with equal quality. Quality is determined by how each conforms to its requirements.
What is the goal for competencies, quality or excellence? It is the difference between "good enough" and "how good." As a surgery patient once asked, "I wonder if my doctors were A students, or whether they crammed for exams and sneaked by with Cs?" The patient was far more concerned with excellence than minimum standards.
Will the competency application be designed for quality? This means that individuals will be assessed on whether they meet preestablished standards in a pass/fail approach. The result is the number of workers meeting and not meeting position standards. In the TQM world, this is called counted data, or pass/fail data.
Certification through training is one type of counted competency approach. Schoolteachers are certified when they have satisfactorily completed a specified number of continuing education courses per year. Medical personnel are required to complete a continuing course sequence, or may be required to attend an annual refresher course on CPR. Once the course has been passed, each individual is assumed to possess the minimum competencies, and there is no further competency differentiation between individuals.
Or will the competency application be designed for excellence? This means that individuals will be assessed on their competence levels based on some sort of continuous scale. In TQM terminology, this is called measured data. The goal is to be able to compare the relative competence between two employees in addition to measuring their competence against a standard scale.
School grading systems are examples of the measurement approach. Competency is measured by academic achievement. Other measurement systems include competitive sports, sales quotas, factory piecework incentives, and see-do-master-teach competency continuums.
Setting up applications that assess to standards is much easier than building a competency measurement system. The reason many organizations have set up curriculum-based competency applications is that they require little more than an upgrade to the course attendance administration database.
Competency-level measurement systems are much more difficult to build. They require a thorough understanding of organizational processes and position needs, and they necessitate complex and time-consuming assessment methods.
Some systems are hybrids. Schools issue grades that are measurements (excellence) and give diplomas to students who have met the graduation standards (quality). A driver's license exam is given a grade, and then the license is awarded for achieving a minimum passing score.
Competency-based HR application designers must decide what they want to track: data on meeting minimum standards or measures of individual competency levels. The highest value comes from hybrid systems, measuring the competency levels of individuals, much like the driver's license test. There are minimum standards required, but there is also an assessment of levels of competence.
Ultimately, the capability to perform multilevel assessments, while making the development of an assessment system far more difficult, will provide important benefits later. Then an individual competency can map to various positions while requiring different standards for each. This will facilitate the assessment process, allow a single instrument to cover multiple jobs, and also make it possible to map employees to potential promotion positions.
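The hybrid, multilevel idea above can be sketched in a few lines of code. This is a minimal illustration only: the competency name, the see-do-master-teach levels (borrowed from the continuums mentioned earlier), and the per-position standards are all hypothetical, not taken from any particular organization's model.

```python
# Sketch of a hybrid, multilevel competency assessment.
# Levels form a continuum (measured data / "excellence") ...
LEVELS = ["none", "see", "do", "master", "teach"]

# ... while each position sets its own minimum standard for the same
# competency (counted data / "quality"). All names are hypothetical.
POSITION_STANDARDS = {
    "Technician": {"troubleshooting": "do"},
    "Senior Technician": {"troubleshooting": "master"},
    "Trainer": {"troubleshooting": "teach"},
}

def meets_standard(assessed_level: str, required_level: str) -> bool:
    """Derive a pass/fail result from the measured level."""
    return LEVELS.index(assessed_level) >= LEVELS.index(required_level)

def qualified_positions(assessment: dict) -> list:
    """Map one assessment to every position whose standards it meets."""
    return [
        position
        for position, standards in POSITION_STANDARDS.items()
        if all(
            meets_standard(assessment.get(comp, "none"), required)
            for comp, required in standards.items()
        )
    ]

employee = {"troubleshooting": "master"}
print(qualified_positions(employee))  # → ['Technician', 'Senior Technician']
```

Because the level is measured once on a continuous scale, a single assessment instrument can cover several positions, each with a different cutoff, which is exactly the promotion-mapping benefit described above.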
Is the Development Effort Periodic or Continuous?
In a stable environment, perhaps in service industries such as hospitality, worker competencies have changed little over the years. In these situations, it is possible to design competency models and to administer them over a long period of time. Updates can be completed as needed on a special project basis. HR department and organizational development resources are therefore required only periodically.
In most organizations, work activities, job responsibilities, and personnel assignments are in a constant state of flux. Each change that generates new job titles and/or process activities requires a corresponding adjustment in position competency models. Here is where problems can begin if competency requirements are not planned for.
Where continuous modeling is required, there are two possible outcomes. The first is that change is slowed down to the capacity of HR to keep models up to date. HR competency modeling becomes the final activity of every change project affecting personnel. If HR does not have sufficient resources, overall organizational performance can be hurt. Project cycle time is increased and responsiveness is reduced.
A second possibility is more likely. Change is driven by customer demands and competitive pressures. It has its own pace because it is out of the control of the organization. If HR cannot keep up, the outcome is that competency models will lag behind the real world by an ever-increasing margin. When this occurs, competency-related applications are in real danger of becoming dead woodchucks.
For example, one client implemented a thorough competency modeling application for its headquarters workforce. A model was created for all positions, and an assessment was created and used annually for individual development planning and for organizational training planning. The problem was that the model was not changed over a four-year period, yet the positions were redefined twice during that time. Minor adjustments were made to the competency standards and titling, but the model competencies and assessment questionnaire remained unchanged. At this point, the entire application was only moderately effective and had lost significant credibility with employees and their managers.
This is when HR finds out whether or not management "really means it" concerning competencies. Management must be willing to provide sufficient development resources periodically and then to fund HR permanently for maintenance of the models and assessment instruments. Competency-based applications become part of the HR installed base, similar to IT mainframe computer programs that must be administered, maintained, and updated. An organization can no more use an unchanging competency model for its personnel assessment than it can use the same balanced scorecard over time for its overall performance measurement.
This continuing resource requirement, if needed, is best discussed with management at the start and linked to any changes planned or anticipated in the near future. Agreement should be obtained up front as to the resources required to implement and support a continuing competency-based effort.
Is Assessment a Rolling Process or a Batch One?
The assessment process requires a scheduling decision similar to development. Should it be periodic, or is it possible to allow assessments to be completed on demand? These are issues of both preference and technology.
An important preference indicator is how the organization handles an annual appraisal system. Are appraisals done for everyone at the same time? Or are appraisals completed on employees' hiring anniversary date? Similarly, should assessments be completed annually, or would the organization prefer to have them completed in a rolling process?
The determining factor is likely to be how development resources are scheduled. For example, if an annual learning plan is published for the entire year, then a batch assessment process must be completed in order to create this plan. If development resource needs are continuously monitored and scheduled, then it may be possible to create a rolling assessment application.
Assessment administration is also a major factor. With manual or stand-alone relational database competency applications, it is customary to do the assessments and reporting for an entire workgroup at once. Forms are distributed and returned, responses are batched and entered, reports are generated and distributed, counseling sessions are mass scheduled, and all development resource planning is completed for the period. This is a classic centralized-control approach driven by HR.
Intranet technology is now enabling candidate or employee-driven HR systems. Competency assessment and reporting can be completed over the organization's Intranet at any time for any position. Organizational resource needs are then continuously updated with the results of each individual assessment.
The technology decision depends upon several factors. First, are computing resources and expertise available to the application designers? Second, will the assessment process be voluntary or mandatory? And third, what is the maturity of the competency application?
The batch approach is useful when starting up. Simple stand-alone systems can be created using text files or e-mail for assessments, spreadsheets as input, and a simple relational database program for comparing and reporting. Every individual is assessed at once. Group results are immediately available so that assessment project design decisions can be validated. A picture of an overall workgroup is developed. And the larger number of responses provides a more statistically valid sample for study.
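A stand-alone batch system of the kind described above can be surprisingly small. The following sketch uses Python's built-in csv and sqlite3 modules to stand in for the spreadsheet input and the simple relational database; the file layout, column names, employee names, and 1-5 scoring scale are assumptions for illustration only.

```python
# Minimal sketch of a stand-alone batch assessment system: CSV responses
# in, a small relational database for storage, a simple group report out.
import csv
import io
import sqlite3

# In practice this would be a spreadsheet export; a string stands in here.
RESPONSES_CSV = """employee,competency,score
Alice,planning,4
Alice,budgeting,2
Bob,planning,3
Bob,budgeting,5
"""

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE assessment (employee TEXT, competency TEXT, score INTEGER)"
)
rows = [
    (r["employee"], r["competency"], int(r["score"]))
    for r in csv.DictReader(io.StringIO(RESPONSES_CSV))
]
conn.executemany("INSERT INTO assessment VALUES (?, ?, ?)", rows)

# Group report: average and lowest score per competency, so workgroup
# development needs are visible immediately after the batch is entered.
report = conn.execute(
    "SELECT competency, AVG(score), MIN(score) "
    "FROM assessment GROUP BY competency ORDER BY competency"
).fetchall()
for competency, avg_score, min_score in report:
    print(f"{competency}: group avg {avg_score:.1f}, lowest {min_score}")
```

Because every response for the workgroup is loaded at once, the group picture and the statistically larger sample the text mentions fall out of a single query.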
Interactive assessment on the organization's Intranet requires a significant commitment in customized interactive site programming, which is a scarce resource in many organizations. The programming must integrate the assessment, scoring, reporting, and database updating. This approach is most suitable for larger organizations with more mature competency applications in place, or it can be done more economically on an outsourced basis.
The value of an assessment program lies not with the method of administration but with the accuracy of the competency model(s) and the validity of the assessment instrument. A batch approach the first time around, executed as a stand-alone application or done over the Intranet, can reduce project development time and cost while delivering equivalent results.
Does the Model Reflect What Is or What Should Be?
The generally continuous nature of organizational change creates another question. Chapter 1 provided a brief history of the TQM movement and introduced continuous quality improvement (CQI). The premise of CQI is that processes are always being evaluated and incrementally improved by everyone in the organization. The goal is to deliver an ever-increasing level of quality and customer satisfaction. Similarly, reengineering creates temporary periods of radical change. This environment of periodic large change and continuous small change as shown in Figure 2-1 generates benefits for the organization but creates significant challenges in designing a competency application.
The issue is deciding which processes the competencies will be built to: current activities or the coming improved versions? The primary concern is that competency-related development efforts will document broken or inefficient processes. Even if the process is not broken, the "if it ain't broke, break it" proponents will still worry about locking in current processes. This is because, unless HR is very careful, the competence movement can become a massive impediment to change.
Competency modeling adds another layer to HR processes. It operates alongside the performance management/appraisal system that already takes up significant supervisory and HR resources. Competency modeling also formalizes employee assessments. This makes it more difficult to experiment with alternative processes, titles, and work assignments, and to keep employees who are involved in the trial covered by the existing assessment system.
An example is the competency model for today's marketing professional. Marketing is rapidly changing in an e-commerce world. Table 2-1 gives some examples of how old activities are being replaced. These new Web-based marketing activities still don't appear on many job descriptions, and few, if any, show up in competency models or assessments. So for the organization just getting into e-commerce, how should competencies be written up for marketing professionals? Should current activities be documented, or should Web-based content be covered? And if Internet content is included, who in the organization is qualified to create the model and its corresponding assessment for something the organization isn't doing yet? (It may even be difficult to find outsiders capable of creating appropriate Web-based competency models.)
Finally, who is qualified to assess workers on something they are not yet doing? Management has no experience in setting standards with new processes. Self-assessment will not work, because it asks people to evaluate what they do not yet know. Outside assessment may be possible, but this is not likely to be a viable long-term solution.
The best time to initiate a competency project is during a planned CQI or reengineering of existing work processes. (Installing the human resources module of an ERP package can be just such a time.) The fear that competency-based applications may stifle organizational change is a very real one. Competency-related applications are always at risk of either getting out of synch with actual processes or becoming busywork administrative overhead. By linking competency application development to new projects, change is facilitated rather than slowed.
The concept of competence needs to be an integral part of all design decisions. As alternatives are evaluated, decide whether proposed competency standards are realistic, obtainable, and appropriate. How do new processes drive competency needs, and how do available employee competencies drive process options? How can competencies anticipated for new processes be identified, assessed, and developed?
The process improvement or reengineering team, with assistance from HR, is in the best position to create competency models for upcoming processes. Relatively stable processes can be analyzed and modeled as a continuous responsibility of the human resources department.
What Should the Expectations Be for Competency Project Time?
It is essential that management's expectations concerning competency-based HR applications be realistic. Many operations-oriented managers, inexperienced with these efforts, assume that competency modeling and assessment are activities with classic start and finish dates. Instead, competency is more of a successive approximation application, exhibiting a substantial learning curve with the possibility of a relatively mature state reached only with stable processes.
Experience shows that it takes about three cycles to have an acceptably reliable competency model and assessment. This is particularly true for an organization's first attempt. Later projects can benefit from the experience curve and can meet requirements faster.
Assuming that the assessment is done annually, the first year is taken up with the model development and an initial assessment. The focus here is on getting a selected workgroup or department assessed for development needs and linking those needs to existing development resources. Managers must initially be trained in how to interpret reports and counsel employees on their individual needs. Institutional reports should also indicate what additional developmental resources are required.
The second year typically shows a refinement in the model and assessment. The results also improve. Managers have greater comfort levels and familiarity with the process, having by now gone through one round of employee competency counseling. The development activities identified previously have been completed and actual benefits can be discussed. Managers can review last year's feedback and compare it to this year's requirements. New development activities, identified in the first year as needing to be created, should now be online and available for workers to utilize.
By the third year, unless there are significant changes in processes or in the organization chart, competency-based HR applications should be reaching a level of relative maturity. At this stage, continuing efforts should consist more of maintenance than development. The competency model, assessment, and development resources are not being philosophically and operationally redesigned; they are being tweaked and adjusted as needed due to changes in positions and processes.
Some competency-based HR applications may beat this typical three-year rule of thumb, and some may lag it. But it is important for management to understand that competency-based HR applications are multiyear development projects. There will be immediate benefits the first year, but the organization will typically require several rounds of improvement before reaching acceptable levels of accuracy and results.
Minor dissatisfaction with the first round is not unusual. Weak areas are readily apparent. Most organizations immediately see that there are things that they need to do differently the next time around. This is normal and healthy. The organization's leadership must understand that they need to make the commitment to stay the course for multiple cycles until the desired results are obtained.
How Will the Results Be Used by Management?
Competency-based HR applications have the potential to create fear and skepticism within an organization. Any change can be frightening, and competency is a major shift in the philosophy of assessing employees and identifying required development activities. Even workers in well-led organizations can come up with all sorts of spurious explanations for why management might be interested in looking at competencies.
Perhaps management has just latched onto the next fad. The entire competency effort can be dismissed with a "this, too, shall pass" attitude on the front line and from managers. All it may take is to wait out management's inevitable disillusionment with the current batch of consultants or for the sponsoring leader to move on.
Perhaps executives are looking to downsize the organization. This may be the way management is going to rank employees and sort out the keepers from those who will have their futures freed up. An honest self-assessment may be a form of organizational suicide, or a candid 360-degree rating of a coworker may be murdering someone else's career.
Perhaps this is how raises and promotions will be determined. It is not enough to have generated positive business results; employees are now going to have to show job competence through a series of useless forms.
Perhaps this is a giant power grab by HR. Headquarters staff will own employee assessment standards and the measurement process. Shouldn't qualifications and promotions be determined by line managers?
Such speculations tend to circulate in the absence of a clear message from management at the very start of a project. Executive leadership must make it clear that competency assessment is intended to raise the skills of the organization's employees, and that they will target learning and development activities specifically to those who require them. The end result is that everyone should be better prepared to succeed at work rather than be punished for what they don't know.
Everything depends upon the trust level that exists in the organization and on the quality of individual leaders. The lower the trust level, the lower the willingness to be candid and accurate in assessments. The lower the trust level, the more open and thorough the communications must be on what management intends to do with the data.
This is why it is so important to keep the competency assessment process separate from the performance management process. No matter how much management insists that assessments will only be used for determining development activities, employees will not believe it if the assessment results are handed back along with annual appraisals and raises.
A more comfortable approach for employees is to have management schedule the assessment process to start directly after the appraisal process. This way the assessment does not drive the appraisal rating; rather, the appraisal provides input for development needs. Competency assessments could be completed within two to four weeks of the appraisal, or even earlier with Intranet-based administration systems. This separates the appraisal from the assessment, yet it is early enough in the year to provide input for scheduling development resources.
What Are the Desired Outcomes for the Organization?
This is the classic question involving business results. What added value or positive impact is the organization working to attain with its competency-based HR applications? Possible outcomes include the following.
Meet vendor requirements. The competency application may be a requirement of a quality certification program. For example, an ISO 9000 certification is considered to be essential by many manufacturers in Europe. In the United States, parts suppliers must have their ISO 14000 certification to be qualified as vendors for the major automobile manufacturers. Bidders for U.S. Department of Defense contracts are urged to utilize the People Capability Maturity Model. In these cases, the competency application can be a requirement for staying in business.
Enhance the marketing position. Competency-based applications can also improve the stature of an organization. Winners of the Malcolm Baldrige National Quality Award and related state quality awards are promoted nationally and within their communities. Even without awards, competence efforts can be highlighted in an organization's marketing materials and used to enhance its stature and competitive position.
Hiring effectiveness. Competency-based efforts can have a positive effect on hiring and turnover. Improved recruiting and selection processes deliver employees who are more qualified for their new jobs. This has the potential to decrease turnover, a measure that can be tracked before and after the application is deployed.
Better internal placement. Similarly, positions can be filled more effectively with properly qualified internal candidates. Employees who are ready to move up can be readily identified and promoted. Employees who want to move up but are not qualified can be steered into necessary development activities. The result is more of the right people in the right jobs.
Training/development efficiencies. Organizations that train employees by title or by workgroup may be wasting a lot of productive time. Thirty-year employees may not need to be sitting next to new hires in some training class mandated by management. The goal for developing employees is just-in-time, just-as-needed. Individual assessments deliver an immediate, easy win: fewer people are involved in development they don't need, which means that the costs of developing people may go down.
Increased productivity. Productivity can be improved three ways. First, enhanced selection of employees results in better across-the-board performance on the job. Second, time wasted in unnecessary development activities is converted to productive work time. Third, existing employees receive the development they want and need to be more effective in their jobs.
Better organizational performance. Competency-based HR applications can contribute to the overall performance of the organization, although they are hard to isolate as a direct cause. They can deliver extremely large paybacks by helping organizations identify people who can help capture market share, shorten time to market, raise the level of customer service, be more innovative, improve efficiencies, and make better decisions.
So there are a number of attractive reasons for organizations to implement competency-based HR applications. Development team leaders must have a clear understanding of what business results top management wants to generate with the project. The team can continually ask questions such as:
* "Will this help us eliminate one day of wasted training?"
* "Will this help us identify people who can bring our products to market faster?"
* "Will this help us find the right person for this job?"
* "Will this satisfy quality requirements on personnel qualifications?"
* "Will this drive customer satisfaction and profitability?"
These questions become the fundamental reality check in making process design and content decisions.
Make certain to get a clear message from top management concerning the expected organizational outcomes. Keep those outcomes in focus during the development process, and use them as checkpoints in making decisions in order to stay on target. Having everyone understand what management wants from the project, and why, helps to reinforce the message that management "really means it."
What Are the Desired Outcomes for Employees?
Competency-based HR applications deliver a number of beneficial outcomes for frontline employees. These need to be understood and communicated. Possible outcomes for individuals include:
* Understanding position requirements. Competency applications require a thorough grasp of the processes and skills/knowledge required to meet position performance standards. Employees don't have to wonder what they need to know and what they should be doing on the job.
* Access to needed training. Competency assessments let employees indicate, in a low-risk manner, where they need help in getting their jobs done. They also let employees establish where they meet qualifications and where they need not waste time in unnecessary development activities.
* Easier to show qualifications. When targeted business goals are not being met, the question arises whether the problem is people, processes, or uncontrollable outside factors. Competency assessments, along with the appraisal system, make it easier to show whether individual employees are properly trained and qualified. Where workers are not qualified, the assessment shows that this is an organizational development problem, not an individual performance problem. Either way, the initial look is at competencies and qualifications rather than at individuals' motivation and success.
* Ability to prepare for the new/next job. Performance appraisals require that an employee be in a job for a period of time before the review takes place. Workers can be appraised only for jobs they have held. This is totally inadequate in promotion planning. An advantage of competency assessment over simple performance appraisal is that competencies can be determined for jobs to which an employee aspires. For example, a frontline employee who is interested in moving up can be assessed based upon the qualification standards for a supervisor. Any gaps can be addressed with development activities, if appropriate. Then, when the organization has a supervisory position to fill, employee assessment records can be searched to locate all frontline workers who are qualified for the position. This allows employees to put themselves into a candidate position for the next job. They can prove their competence ahead of time, rather than being judged subjectively or on factors outside their control. Competency applications provide a straightforward way to say objectively, "I'm ready."
* More rational personnel decisions. Using competency assessment information, recruitment, hiring, placement, and promotion decisions are made much more objectively. Employees are hired, assessed, developed, and promoted based upon objective competencies rather than subjective preferences or unrelated factors such as seniority. This helps the truly qualified individual rise to the top, and allows others to become qualified if they are willing to take advantage of available development opportunities.
* More competent coworkers. If everything works as intended, the overall competence of the workforce is enhanced. This is both an organizational benefit and an individual one. Everyone's work is simplified if coworkers are competent and qualified to complete their part of the process. Checking, error correction, and rework are reduced. Internal service is improved and working conditions are enhanced.
* Healthier, more competitive employer. The final individual outcome of competency-based HR applications is one of job security. The healthier the organization is, the more resources there are for employees. The stronger the organization is as a competitor, the safer everyone's position is. And the more the organization is growing, the more job opportunities open up in the form of new positions and available promotions.
Assessment of competence makes it easy for the qualified worker to stand out and difficult for the unqualified worker to hide. Competency-based HR applications help migrate the responsibility for employee qualifications from management down to the individual worker. All employees get the information they need to determine their job qualifications and to fill in their gaps. They also have the opportunity to get themselves prepared ahead of time for the next promotion.
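The promotion-readiness search mentioned above, locating every worker whose assessment records meet a target position's standards, reduces to a simple comparison over the assessment database. The sketch below is a minimal illustration only; the names, competencies, and rating levels are invented for the example, not drawn from any real system.

```python
# Hypothetical qualification standards for a supervisor position,
# expressed as minimum required level per competency.
supervisor_standards = {"scheduling": 3, "coaching": 3, "reporting": 2}

# Hypothetical assessment records for three frontline workers.
records = {
    "dee":  {"scheduling": 3, "coaching": 4, "reporting": 2},
    "ed":   {"scheduling": 2, "coaching": 4, "reporting": 3},
    "fran": {"scheduling": 4, "coaching": 3, "reporting": 3},
}

def qualified_for(standards, assessments):
    """Employees whose assessed level meets or exceeds every standard."""
    return sorted(name for name, levels in assessments.items()
                  if all(levels.get(c, 0) >= req
                         for c, req in standards.items()))

print(qualified_for(supervisor_standards, records))
```

The same comparison, run in the other direction for a single employee, yields the development gaps that must be closed before the employee can "prove competence ahead of time."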
There are strong competence-related payoffs for both the organization and the individual worker. Make certain this "What's in it for me" message gets out to frontline workers as early in the development process as possible. This sell-in is critical, especially if there is an attitude of distrust or skepticism in the organization.
Competency applications can be positioned as a way to address existing concerns about unfair assessments and subjective personnel decisions. Competency concepts bring a management-by-fact approach to a formerly subjective personnel process.
How Will Success Be Measured?
Potential wins abound for competency applications. But they remain only potential unless there is a measurement process in place to determine project success. As the old adage states, "If you can't measure it, you can't manage it." Measurement of competency-related outcomes can be problematic.
A great danger is that projects can move well into their delivery cycle before anyone begins talking about the measurement of results. At this point it is often too late. To determine the delta caused by a change such as competency modeling, there must first be a baseline for comparison. So measurement must start before project implementation, and it is often conducted concurrently with competency application development.
Measurement of competency applications is similar to the measurement approaches used with other HR applications, such as training. These range from simple satisfaction feedback to complex organizational performance measures. It all depends upon the resources an organization wants to put into the effort. Measurement approaches include the following:
Project fulfillment. The simplest measurement approach is a review of the project plan. Possible evaluation questions include:
* "Were all development activities completed?"
* "Were they completed on time?"
* "Were they completed within budget?"
* "Were all promised deliverables provided?"
* "Did they include the promised content?"
* "Were reporting and counseling steps completed by the involved parties?"
* "Were organizational reports generated?"
* "Were they used by the appropriate resource departments to make individual development planning decisions?"
In other words, regardless of effectiveness, did the development team do what it said it would do within the given parameters? This establishes what and how much was done. The remaining challenge is to determine how effective these efforts were. There are a number of additional qualitative measurement approaches.
Anecdotal sampling. Interviews can be conducted with a cross-section of employees who were assessed, and with supervisors and managers who conducted counseling sessions. Their anecdotal feedback can be summarized and reported back to the development team and top management. This is the easiest measurement to conduct and has the least objective validity. The participants are the least likely to possess the competence expertise required to effectively judge the project. They also lack the organizational perspective to evaluate whether management's desired outcomes were achieved.
Project team formal evaluation. The internal development team can review the project's success. This is a formal internal review or debriefing of the project to identify what went well, what should be stopped, and what should be included for the next iteration. Again, the project team may not possess the competence expertise required. It is analogous to being an internal ISO 9000 implementation team member versus being an external ISO 9000 certified lead auditor. The qualification to judge is very different from the qualification to do.
Expert evaluation. Internal or external experts on competence can review the project and its outputs to determine its overall effectiveness. Judgments are typically based on benchmark experience with other organizations and on how this project compares. Some experts may already have developed their own measurement philosophy and approach, one that has been proven valuable in practice. For example, this chapter is a checklist of issues that must be taken into account in designing competency-based HR applications. As such, it can readily be used to evaluate past and present development efforts.
Performance impact. At this stage measurement becomes more objective. Performance improvements compared to baseline measurements are used to show the value of competency applications. For example, the performance impact of selecting employees by competency versus using traditional methods could include:
* Higher average sales per person
* Reduction in departmental turnover
* Fewer errors per 100 lines of program code
* Reduction in training costs
* Higher acceptance rate for job offers
The measures listed above show the impact on performance of the competency-based HR application. But they are still one level removed from business results.
Improvement in business results. A competency application must support the ultimate business goals of profitability, competitiveness, market share growth, rising efficiency, faster time to market, and increasing customer satisfaction. Ideally, competency outcomes can be linked to these business results. For example, reduction in turnover lowers costs, which contributes to profitability. Lowering programming errors increases efficiency and speeds time to market. Higher average sales per person contributes directly to revenue and profitability.
Measuring the results of competency-based HR applications is the only way to prove success to top management. This requires measurement of current performance at the time the project is started, followed by a second measurement after the application is implemented and has had time to affect the workforce. By building measurements into the project from the start, the application can be designed to facilitate the collection of performance data.
Bring up the topic of measurement early in the development cycle. Get baseline performance numbers before making any changes so that there will be a "before" set of numbers from which to work. Make decisions as to the type of measurement data that will be collected. It may be one of the above, or it may include any combination of these elements.
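The before-and-after comparison described above amounts to a simple delta calculation against the baseline. The sketch below is purely illustrative; all metric names and values are hypothetical, chosen only to show the mechanics of reporting percentage change from a baseline.

```python
# Hypothetical baseline metrics captured before the competency
# application was implemented, and the same metrics measured afterward.
baseline = {"avg_sales_per_person": 41000,
            "annual_turnover_pct": 18.0,
            "training_cost_per_employee": 1450}
after = {"avg_sales_per_person": 46500,
         "annual_turnover_pct": 14.5,
         "training_cost_per_employee": 1210}

def deltas(before, current):
    """Percent change for each shared metric, relative to its baseline."""
    return {m: round(100 * (current[m] - before[m]) / before[m], 1)
            for m in before if m in current}

print(deltas(baseline, after))
```

Negative deltas on turnover and training cost are improvements here, which is a reminder that each metric needs an agreed-upon "direction of good" before results are reported to management.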
What Are the Desired Deliverables?
As HR processes begin to be redesigned around competencies, the issue of outputs becomes important. Project deliverables may change frequently during the life of the development effort, but they should be considered as early as possible. Deliverables can include the following.
Competency model. The heart of every competency-based HR application is the position competency model. This typically comprises twenty or more individual competencies that workers must possess to be qualified for the position. Models can also be created for entire workgroups, with more than one job covered.
Position standards. Position standards are the required levels for each competency for each job. Where a workgroup model is being used, standards for single competencies will vary by position.
Assessment instrument. This is the measurement tool that will be used to determine levels of competency. The design decisions concerning assessment approaches and instruments are covered later in this chapter.
Gap reports. The organization is interested in identifying gaps between employee competence and position standards, and then addressing them with related development resources. Gap reports by individuals are necessary for personal development planning. Summary gap reporting can be used to evaluate workgroups or entire organizations.
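A per-employee gap report of this kind reduces to comparing assessed levels against the position standards. The sketch below is illustrative only; the competency names, standards, and ratings are invented for the example.

```python
# Hypothetical position standards: minimum required level (e.g., on a
# 1-5 scale) for each competency in the model.
position_standards = {"routing_knowledge": 3,
                      "customer_service": 4,
                      "safety_procedures": 3}

def gap_report(assessed, standards):
    """Return only the competencies where the assessed level falls
    below standard, with the size of each gap."""
    return {c: standards[c] - level
            for c, level in assessed.items()
            if c in standards and level < standards[c]}

# One hypothetical employee's assessment results.
employee = {"routing_knowledge": 2,
            "customer_service": 4,
            "safety_procedures": 1}
print(gap_report(employee, position_standards))
```

Summing these per-employee dictionaries across a workgroup yields the summary gap report used to evaluate departments or the whole organization.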
Individual development plans. The value of an automated process is in connecting groups of databases. Individual gap reports can link assessment results with the enrollment database and with position curriculums. The result is a customized individual development plan listing resources to address any competency gaps identified by the assessment.
Individual career development plans. Employees can be assessed for positions to which they aspire and can then see what they need to do to become qualified for the promotion. In competency applications in which a single assessment is used for entire sets of positions within a workgroup, this is simply a request for a different report. The data already exists. In applications with unique assessments by position, this means filling out an assessment for each new job.
Resource deployment plans. The aggregate report of all employee gaps provides information on development resources needed. Training departments can create development resource schedules for the coming year and form attendee invitation lists for those who need specific resources to address their individual gaps. This way development is provided only as needed.
Development topic requirements. Initial assessment efforts usually identify some areas of organizational need where there are no existing resources. Where justified, HR can begin the development process to provide resources that will eliminate the gaps observed and formally add them to the course schedule or online learning system.
There are also one-time opportunities, such as when line managers ask HR, "We've got a regional meeting coming up, and there is a half-day of time where we can insert some development. What should we do?" The assessment database can be analyzed for this workgroup to determine what are the most prevalent and important needs for the entire workgroup. Then a customized development solution addressing one or more specific competencies can be provided for such situations.
Resource effectiveness feedback. A major concern in organizations is determining the effectiveness of development resources. Consider training as an example. There are five classic measures of training effectiveness: attendee ratings, end-of-class testing, delayed testing after the attendee has returned to the workplace, interviews with coworkers for observed behavioral changes, and measurement of improvements in business results. An assessment process allows organizations to create a brand new kind of resource effectiveness measure.
Competency assessments can be linked with gap resources and the training administration system. If an employee has a gap for a competency and attending a seminar is indicated, then the organization can track the effect of that seminar on competencies with questions such as: "How many employees have attended this class but are still ranked below standard for any competencies it addresses?" or "How many employees meet standards for this competency without ever having availed themselves of the relevant resources? Are the resources even necessary?"
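Queries like these amount to joining training attendance records with assessment results. A minimal sketch, assuming hypothetical seminar names, competencies, employees, and ratings:

```python
# Hypothetical linkage tables: who attended which seminar, which
# competency each seminar addresses, and the standard for it.
attendance = {"time_mgmt_seminar": {"ann", "bob", "carla"}}
seminar_competency = {"time_mgmt_seminar": "time_management"}
standard = {"time_management": 3}

# Hypothetical current assessment levels for each attendee.
assessed = {"ann":   {"time_management": 4},
            "bob":   {"time_management": 2},
            "carla": {"time_management": 1}}

def still_below_standard(seminar):
    """Attendees of a seminar who still rank below standard for the
    competency the seminar is supposed to address."""
    comp = seminar_competency[seminar]
    return sorted(e for e in attendance[seminar]
                  if assessed[e].get(comp, 0) < standard[comp])

print(still_below_standard("time_mgmt_seminar"))
```

A persistently long result list is the signal that the resource itself, not the attendees, may need attention.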
Assessments give HR a new tool with which to evaluate development resources. Asking these questions and using reports from the assessment administration system, HR can fine-tune its inventory of development activities.
Online learning links. As more organizations move to interactive Intranet-based implementations, a new direct linkage between assessment and development resources can be created. It is relatively simple to link an Intranet-based assessment to online learning materials. When a gap is identified, even during the assessment session, employees can immediately be referred to the appropriate development catalog or to online courseware that addresses the competency content. This is the ultimate in real-time assessment and remediation.
The best way to look at deliverables at this stage is to highlight what the organization wants to be able to do. The detailed reports and screens required will come later in the design. But the overall functionality needs to be considered early on.
Who "Owns" the Process?
Quality concepts teach that every process must have an "owner." A process owner is defined as someone who has responsibility for the overall success of the process, and who is also responsible for its continuous improvement. Process owners can improve processes on their own, or they can request to form a process improvement team if the task is complex enough and the potential benefit great enough.
Competency applications are certainly complex enough to justify a team development approach. But a common problem in many organizations is that there is not clear project ownership. Is HR leading the effort? Is this a line function driven by the management of those being assessed? Certainly, a range of individuals from many departments will be involved. But whose appraisal will be negatively affected when there is an unproductive outcome? The person (or department) with this responsibility must also have the authority to own the process.
The professional knowledge and skills required to own an assessment process are not typically found in line operations. Line managers and frontline workers provide subject matter expertise and are essential in the creation and evaluation of competency models and assessment instruments.
An assessment application development process potentially requires skills in consulting, formal team facilitation, job analysis, assessment authoring, test creation, development resource administration, database programming, and online systems development. Expertise in these areas is likely to be found distributed across many administrative staff functions. Because so many departments are affected in competency application development, the process owner is best centrally located rather than remotely based. Professionals within HR or training often are logical candidates for assessment process ownership.
Which Workgroup(s) Will Be Targeted in the Project?
In their enthusiasm over core competencies, many organizations embarked on wide-scale implementations of a competency process. As discussed in Chapter 1, a core competency approach cannot substitute for individual position competency models. Core competencies are too general to be valid for hundreds or thousands of individual positions. Developing competency-based HR applications is an ideal candidate for a targeted pilot implementation. The organization is experimenting with wide-scale change, adding a process roughly the scope of its performance appraisal system. There are issues of culture and climate to manage. Administrative procedures and integrated computing systems have to be developed. All of this can be better managed and controlled by working with a single department or workgroup at the beginning. Ideal candidate workgroups have the following characteristics:
Multiple levels. The initial application should present an opportunity to develop competency models for jobs at several organization levels. This could include frontline workers, supervisors, and middle managers (those who have supervisors reporting to them). These represent significantly different kinds of work activity. It is also helpful to have a mix of line and staff functions in the group. This creates the opportunity to model administrative and support positions.
Management commitment. It is hard to overemphasize the importance of management support for competency-related development projects. If top management within the workgroup "really means it" and supports the effort enthusiastically, then middle managers and supervisors who are capable of sinking the project are given proper incentives to succeed. The ideal candidate is a leader who volunteers his or her department for the pilot.
Good culture and climate. Choose workgroups that have a well-established culture and a healthy climate. Make certain that there are no leadership dysfunctions or hidden agendas within the department. Workgroups with positive quality climate, satisfaction, or leadership feedback survey results make good candidates for competency projects.
Structured job positions. A common error is to start with a department that has extremely complex, interdependent jobs. This complicates identifying and building the competency model and the assessment instrument, particularly when the development team is in the early portion of its learning curve. Search for workgroups that have relatively structured job positions and duties. This helps shorten the model development time and minimizes issues of assessment validity and accuracy. The goal of the pilot program is to establish organizational processes on competencies. Other departments, models, and assessments can be added relatively easily once the overall system is up and running.
Choose the pilot workgroup with care. The development team can greatly complicate its job by choosing a workgroup whose positions cannot be readily modeled. The result is that the team can do everything right and still come out wrong.
Who Will Be Involved in Development?
The first guideline is that the team should include all the potential stakeholders. This means that groups such as HR, organizational development, strategic planning, industrial engineering, training, information technology, legal, management development, customers, suppliers, and the targeted workgroups may need to be represented.
There are also a number of additional skills that should be represented, some of which currently may not be available in the organization. (The first application of competency concepts is the selection of the development team.) The team must be able to perform job analysis, create assessment instruments and tests, design and automate processes either in batch mode or over the Intranet, and set up assessment counseling processes.
This is a good point to discuss the issue of using professionals versus "talented amateurs." Many organizations make the mistake of assuming that teams can substitute for lack of knowledge or skills. For example, a team might be assembled to build a customer survey. But when none of the members have a professional background in survey design, question writing, or statistical analysis, the resulting survey may have problems with response dimensions, questions, and ultimate reliability.
As one development team member put it, "No matter how many of my neighbors I get together, I still can't build an atom bomb." Numbers are not equivalent to knowledge. One nuclear physicist is more valuable than a block's worth of neighbors. Similarly, would patients rather have a team of business people perform brain surgeries, or just one neurosurgeon?
The analogy holds for competency development projects (and it will hold for self-assessment and 360-degree assessment). Bringing stakeholders together does not guarantee that the development team will have the competencies it needs for the project. The team needs qualified professionals in competency-related development. These can be developed internally, hired, or contracted for with vendors, consultants, or outsourcers.
An early development team activity is to take a realistic look at its own internal competencies, then model the requirements needed for the development project. Team members must make a candid assessment of their capabilities as either professionals or talented amateurs. Gaps must then be filled either through development activities or additional help from inside or outside resources.
Who Will Perform the Assessment and upon Whom?
The most critical factor in the success of a competency-based HR application is the competency assessment. It does not matter how exhaustive the model is, how finely detailed the reports are, or how dutifully HR processes are executed; if the measurement methodology is not accurate, the project is a waste.
Two requirements for an assessment are validity and reliability. Validity means that the competencies measured are the ones necessary to drive desired business results. Reliability means that the measurement instrument accurately captures the true competency levels of employees. (Quality experts would say that the measurement needs to be "free of random error.") In practice, validity and reliability mean that outside factors influencing the assessment must be minimized.
The issue here is who should actually perform the assessment, and who is going to be assessed. There are a variety of approaches ranging from simple self-assessment to complex tests, and everything in between. These can be used singly or in combination to provide results that meet an organization's desired standards of reliability. As with nearly every other aspect of competency-based HR applications, there is a correlation between effort and results. Higher reliability requires more design and implementation sophistication. Following is a range of possible approaches along with the issues they create.
Curriculum completion. The simplest method of assessment is to create training curriculums by position. An employee is considered qualified when the course sequence is completed. Qualification is maintained through periodic refresher courses and new training seminars. Examples include schoolteachers needing continuing education credits, nurses needing seminars to stay licensed, and computer-support professionals needing course attendance to become certified.
Curriculum-based systems are attractive because they are easy to track and administer. Attendance records are kept as a normal part of the training enrollment system, and so it is a minor upgrade to add curriculum information and query capabilities to the database. Many training administration software vendors already offer this as a standard function in their systems.
The major weakness is one of accuracy. The attendee's presence in training is considered to be the measure of competence. (Curriculum-based assessment is also known as the BIS, "bottoms in seats," method.) Attendance does not guarantee any level of competence.
Creating a curriculum and tracking its completion is an easy first step in adding in competency concepts. Many of the required elements are already in place: seminars, interactive courses, learning activities, and formal mentoring or on-the-job development. In order to create course content and sequences, organizations must begin thinking about what employees in various positions have to know and be able to do. It encourages standardization of methods and performance. It gives everyone common terminology and processes. It starts the employee competence database effort. It gets HR professionals and line managers thinking about what development needs employees have with respect to completing the curriculum.
In general, a curriculum-based competency application provides at least a minimal level of functionality and reliability. Better than doing nothing, it is a start that can be improved upon in a continuous quality improvement (CQI) sense.
Self-assessment. This is currently the most popular method of assessment with early adopters, and therefore requires a thorough analysis. With this approach, employees rate their own competencies by completing some sort of assessment instrument. The process is quite attractive. It makes intuitive sense, since employees know their own needs best. Create a competency model, then survey employees as to where they rank on a continuum scale by competency. This is comfortable to implement. Administration is straightforward. The process is quickly automated and the data is readily compiled.
The primary concern with self-assessment is reliability. First, are employees qualified to assess themselves accurately? This approach assumes that employees, based solely on their own work experiences, can reliably rate their level of competence against some absolute scale. The implicit assumption is that every employee naturally possesses the knowledge, ability, and integrity required to assess levels of workplace performance reliably.
The problem is analogous to an organization involved in ISO 9000 quality certification. If individual organizations going through the process were qualified to judge their own status, there would be no need for ISO 9000-certified lead auditors. Similarly, companies could set their own bond quality ratings. Banks could perform their own FDIC audits. There would be no need for colleges to assign grades. Students could announce that they had learned course content at the level they desired and then assign themselves appropriate grades. Gymnasts could assign their own scores as they dismounted the apparatus. Employees are not trained HR professionals. They were probably not involved in the development of the competency model for their position. In essence, the process is "questioning into ignorance."
Employees may not have the experience or the overall workplace perspective to rate their competencies on any beginner/expert style of continuum. Depending upon their own skills, they may not have the insight to understand what truly excellent competence is. (They don't know what they don't know.) This is particularly true of workers at the lower competency levels. They are likely to be the least qualified to provide accurate assessments.
The next problem is consistency. With a self-assessment, particularly when no specialized training in how to complete the assessment has been provided, there is likely to be little consistency among individuals. For example, customer satisfaction survey expert Bob Hayes wrote about asking ten people to tell him the meaning of the word "some," using a number from one to ten. Their answers ranged from "three" to "seven." Imagine employees faced with competency questions such as, "Rate your knowledge of routing and delivery methods: None/Some/Competent/Superior/Expert." Without a thorough understanding of what each word means in general, and what it means for that specific competency, employees with equivalent actual competency often rate themselves quite differently.
A third problem is motivational bias. One cause may be coercive or punitive managers, who are immediately apparent when the results come in. Their subordinates may be reluctant to report competency gaps accurately. The direct reports of one such manager responded to line items only when they could mark "Expert" and left the remaining competencies blank. Another common situation is that of respondents marking "Expert" on every competency, fearful of indicating any perceived weaknesses.
A variation of motivational bias is where employees see the assessment linking to promotion opportunities. They try to "game" the system by figuring out what management wants to see, not only for their current position, but for identifying those ready for promotions. This is prevalent in implementations where management has not clearly communicated exactly what the expected outcomes are for the competency application.
A final problem is perceptual bias. Some individuals cannot accurately see themselves as they really are. One such category is "pronoids," people who overestimate their competence. (Paranoids have delusions of persecution. Pronoids have delusions of acceptability.) Others, due to a lower self-image or higher perceived work standards, underrate their competence. Either way, the assessment responses and resulting development plans are out of line with actual needs.
So self-assessment, while convenient and intuitively logical, has significant potential for reliability problems. At the very least employees must be totally comfortable with the process and its ultimate value to them. They must be clear that the data is going to be used only for development. Assuming that the instrument is properly written (and this is a very large assumption), employees must be formally trained in what all the terms mean and how the instrument should be filled out, and given a large number of examples. Finally, administrators should be alert for any indication of management/supervisor problems that are causing bias in the self-assessments.
Whereas traditional performance appraisal is a top-down process in which supervisors or managers evaluate subordinates, 360-degree feedback refers to seeking input from others in the workplace, including superiors, coworkers, subordinates, and even customers. In a competency application, this means that anyone in the organization who has interacted with an employee could potentially complete an assessment on that individual. 360-degree assessment is often considered superior to self-assessment. As with teams, the belief is that a group view provides a better gauge of an individual's competence. First, the team approach to assessment is thought to eliminate any blind spots or bias existing in self-assessment. Second, it is felt that teams make better decisions than individuals because members synergistically supplement each other's efforts. This is commonly illustrated in training seminars through a type of group decision-making exercise, such as the "Survival" series. These exercises require team members to solve problems individually first, then through group consensus. The resulting answers are compared to experts' correct answers, and decision-quality scores are computed. In almost all cases, the team will generate a better solution than any of its individual members. It is possible for 360-survey approaches to deliver these advantages. Good examples are managerial feedback exercises completed in conjunction with leadership training programs. Differences in self- and 360-degree assessment are readily measured by having both the team and the individual complete assessments. When the results are compared, a leader's self-assessment often shows significant differences from the team's perception. A 360-degree approach shares many of the advantages of self-assessment, and it makes even more sense. Intuitively, if a person's self-analysis is useful, then a group of second-party opinions must be even more valid.
Many organizations are already involved in some form of 360-degree feedback and have achieved a level of workplace acceptance for the process. Administratively, 360-degree assessment is relatively easy to set up, circulate, and tabulate, and it can be automated readily. Yet there are serious concerns. The problem of a respondent not having the qualifications to assess competencies accurately is multiplied. Coworkers have less knowledge about the jobs of others than those performing them. For example, in the case of subordinates assessing managers, the subordinates may not have ever had any managerial experience or training, and they therefore have no real basis upon which to make the assessment. What is the validity of individuals assessing coworkers when the assessors (1) may only interact with the person, (2) may never have actually done the job, (3) don't necessarily understand the processes involved, and (4) have never been trained in assessing others?
Consistency is an even bigger problem with 360-degree assessment because of an increase in the level of complexity. Assessment application designers now have two areas of consistency to worry about. Respondents have to be internally consistent in their own ratings of different coworkers, and they also need to be externally consistent with other respondents in assessing individual coworkers. Multiply this by several coworkers whose specific competencies have to be assessed for their unique positions, and maintaining consistency quickly exceeds the capabilities of a typical person.
360-degree assessment introduces a different kind of motivational bias. The assumption is that respondents altruistically assess their coworkers in order to help them identify developmental needs. This assumption of goodwill may not be justified. Workers are being asked to assess people who may be their competitors for upcoming promotions. This introduces a very real conflict of interest, one in which a respondent may have a personal interest in the outcome of others' assessments. This is a difficult temptation for even the most objective coworker to handle. While there may be no conscious effort to lower a competitor's assessment, the potential for bias cannot be ignored.
Outside the business world, conflict of interest is usually eliminated whenever it is identified. Legislators put their assets into blind trusts so that their voting is not affected by personal gain or loss. Judges recuse themselves when they discover a possible conflict of interest with a case they are trying. Smart parents never umpire their own child's ball game. In these situations, the existence of a potential conflict, while not necessarily affecting the outcome, taints any result, and the same holds true for having potential workplace competitors assess each other. Finally, perceptual bias still exists with 360-degree techniques. The only thing respondents are truly expert on concerning their coworkers is their own feelings. In many 360 systems, respondents are not really assessing competencies, they are merely indicating how satisfied they are with a coworker's interaction with them. Even with careful efforts in the creation of competency models and instruments, a 360-assessment process can quickly deteriorate into little more than a satisfaction survey ("Rate your coworker on communications skills" or "This individual's conflict management skills are...").
360-degree assessment is a common tool for many organizations, but its basic assumptions can be challenged. Without significant support and validation processes built into a competency application, 360-degree assessment can deliver results with only minimal validity. At best, consistency must be continually monitored and managed. In an unhealthy organizational culture, 360-degree techniques can turn the workplace into a war zone where coworkers can take anonymous shots at each other and at management. In either case, these are significant challenges to assessing competencies successfully.
This assessment technique is a conceptual leap from the subjective to the objective. It is a standard method of establishing minimum competencies for many professions, such as automobile drivers, lawyers, real estate agents, professional engineers, insurance agents, brokers, CPAs, pilots, physicians, and actuaries. Yet despite the popularity of testing in professional certification and licensing, it has been slow to catch on internally in many organizations.
Part of the overall reluctance to implement testing may have been due to everyone's lengthy school experience. The work world was seen to be the end of regurgitating facts and the beginning of actual performance. Now, however, testing is becoming more and more acceptable as a means to ascertain workplace knowledge and competence.
Testing addresses many of the concerns about self- and 360-degree assessment. Where assessment by individuals provides an opinion, testing potentially provides a measure of what the respondent knows and can do. The test becomes a standardized tool with an objectively measured result. Motivational and perceptual bias are no longer an issue.
Testing shifts the responsibility for reliability and consistency from the distributed respondent to the centralized test author, because reliability is now totally dependent upon the quality of the instrument. This can cause problems, of course. On the difficult/easy scale, writing valid test questions is very difficult. It is not a skill the typical HR department possesses. It is far easier for HR staff to write general competency questions with a rating dimension for a response.
Further, new factors come into play with testing. Testing gets an organization into the education business, where it is often far behind its K-12 or university cohorts. Unlike the educational system, few organizations have processes in place to assist individuals with special needs. Testing for competence hurts people who are not good test-takers. And some workers may be physical, visual, or auditory learners rather than textual learners.
Similarly, testing tends to unfairly underrate employees with learning and perceptual disabilities. The test, as a verbal instrument, must deal effectively with cultural issues of perception, comprehension, vocabulary, use of words and sentences, and English as a second language. Any disconnect between the construction of the test and the comprehension of the respondents reduces the validity of the results.
Another major concern with testing for competence is in thoroughness. Today's case-worker positions require a wide range of general knowledge in addition to a mastery of position-specific facts and processes. Testing cannot provide the same overall perspective used in self- or 360-degree assessment. An individual or coworker may well understand the overall competence of someone on the job, whereas a test establishes only how much someone knows about what is asked. Testing is a statistical sampling technique where the subset of information tested is assumed to accurately reflect the total body of knowledge and skills a worker possesses.
For example, a network administrator needs to have an extensive general knowledge of personal computing and server software. Study guides for the Microsoft Certified System Engineer (MCSE) exams contain literally thousands of pages full of facts and procedures. Six tests, each approximately one hour long, can cover only a minuscule portion of what MCSEs need to know to be effective. Yet candidates can pass tests by answering as few as thirty questions on something as complex as "Implementing and Supporting Windows NT Servers." This has led to what information technology professionals call "paper MCSEs": people who are certified but cannot do anything useful because they have not had any real experience with networks.
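The sampling concern can be made concrete. If a test is treated as a random sample of questions drawn from a much larger knowledge domain (a simplifying statistical assumption, not how real exams are constructed), a standard normal-approximation confidence interval shows how loosely a thirty-question test pins down true mastery. The function name and the example numbers below are illustrative only.

```python
import math

def mastery_interval(correct, questions, z=1.96):
    """Approximate 95% confidence interval (normal approximation) for the
    true proportion of the knowledge domain mastered, treating the test
    as a random sample of questions from that domain."""
    p = correct / questions
    margin = z * math.sqrt(p * (1 - p) / questions)
    return max(0.0, p - margin), min(1.0, p + margin)

# A hypothetical candidate answering 24 of 30 questions correctly (80%).
low, high = mastery_interval(24, 30)
print(f"Score 80%; estimated true mastery between {low:.0%} and {high:.0%}")
```

With 24 of 30 correct, the interval spans roughly 66 to 94 percent: the score alone cannot distinguish a marginal candidate from a strong one, which is the statistical face of the "paper MCSE" problem.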
Position tests require far more development time and resources to create than do assessment instruments. They demand a higher level of educational and competency professional expertise from the author. Done correctly, test assessments provide a more objective, accurate, and consistent measure of the relevant competencies. Many organizations will shift to testing as their competency assessment applications mature and improve.
The next higher level of competency measurement involves looking at actual business results generated by the individual. The emphasis must be on individual business results rather than the results of workgroups or the organization. The assumption behind this approach is that business results measure competence in people. For example, salespeople who make assigned quotas are considered to be more competent than salespeople who do not meet their goals.
The attraction to management of this approach is that it provides a direct link between competence and desired business outcomes. There is a Field of Dreams-style assumption: "If they are competent, results will come." Since results are the evidence of competence, normal business data becomes the assessment.
The flawed assumption is that there is a direct cause-and-effect relationship between competencies and business results. Employees do not have total control over their processes and are not solely responsible for outcomes. Therefore, using business results as an assessment means that people are being measured based on factors not addressed by their competencies. This is clearly unfair.
An even bigger danger is that this approach appears to be a performance appraisal, not a competency assessment. The definitions in Chapter 1 explained that competency assessment ascertains whether an employee is qualified to do the job. Competency does not mean actual performance. Appraisal and compensation systems are already in place for that purpose.
The effect of competency-based HR applications on business results is best analyzed at levels above the individual. Before-and-after studies of workgroup performance can be completed. For example, researchers studied the effects of using competencies as employee selection criteria rather than traditional methods. As shown in Table 2-2, they found that the more complex the job, the greater the benefits of using competencies to select superior employees. This is a valid use of business results to measure the effects of competency-based HR applications at a group level. Attempting to use business results for individuals moves the application away from competencies and interferes with the existing performance appraisal process.
Many organizations begin with simple curriculum models and course attendance records. They then move to simple individual or 360-degree assessment. Next they incorporate testing, starting with seminars and courseware and, ultimately, assessment instruments.
The pace of this migration is often driven by technology. Batch systems are giving way to interactive applications. A few leading-edge organizations are now putting the entire competency-based HR system on their Intranet. This online availability of applications facilitates the administration of assessments and tests across the entire organization.
The best competency applications should utilize a range of assessment techniques, from individual assessments to testing. The approach may depend upon the competencies being measured. Self- or 360-degree assessment may continue to be used for soft skills. Information and process mastery will require interactive applications, delivered either on internal Intranets or by outside service providers over the Web. Ultimately, the assessment process will close the loop with development in real time. Competency applications will be integrated into the online learning system. When individuals identify a shortfall in required competencies, they will immediately be sent to the appropriate learning content site.
How Are Assessments Validated?
The previous sections covered the concerns about the reliability of self-assessments and 360-degree assessments. If these methods are going to be used, then decisions have to be made about how responses will be validated. Again, there is a range of choices depending upon the degree of reliability desired and the resources available to execute at that level.
The easiest approach is to defer the issue. Employee assessment of competencies is certainly more valid than no assessment at all. Developers may initially wish to concentrate on creating accurate competency models and their associated assessment questions. The decision to include a verification step can be made after the first set of responses is tabulated.
A simple examination of results can provide a basic validity check. One approach is to select ten recognized superstars and ten obvious strugglers and examine their assessment responses in detail. See if their responses reflect their relative competency levels. Look at the variation in responses for each competency. Are results skewed high or low? Is the range wide or narrow? Also examine summaries by manager. Data from groups that have negative environments will be skewed to the high end, or will be incomplete due to questions not answered or assessments not turned in.
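The distribution checks described above can be tabulated with very little code. The following is a minimal sketch, assuming responses are recorded on a 1-to-5 scale with unanswered items stored as `None`; the group data and the `summarize` function are hypothetical illustrations, not part of any particular HR system.

```python
from statistics import mean, pstdev

def summarize(ratings):
    """Basic distribution checks for one competency's responses:
    central tendency, spread, and completion rate."""
    answered = [r for r in ratings if r is not None]
    return {
        "mean": round(mean(answered), 2),
        "spread": round(pstdev(answered), 2),
        "completion": round(len(answered) / len(ratings), 2),
    }

# Hypothetical responses (1 = None ... 5 = Expert); None = unanswered.
group_a = [4, 5, 5, 4, 5, None]   # skewed high, one item left blank
group_b = [2, 3, 4, 1, 3, 6 - 1]  # wide spread, fully completed
print(summarize(group_a))
print(summarize(group_b))
```

A group whose summary shows a high mean, a narrow spread, and a low completion rate (like `group_a`) matches the profile of a negative environment described above and is worth an informal follow-up.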
If there are concerns about the process, have informal conversations with a wide range of people being assessed. Their off-the-record comments will provide good indicators of what is going right or wrong with the entire process. A careful investigation can often deliver as much benefit as a time-consuming and costly formal validation process.
Useful data can be gathered by openly asking employees their opinions about the reliability of the assessment. This is best done shortly after the feedback sessions have been completed and the individuals assessed have had an opportunity to talk to each other about the process. The survey should be confidential, with no employees' responses identified by name. When there are trust issues with management, this survey is best tabulated by third parties outside the organization. Figure 2-2 shows an example of a straightforward survey on reliability.
Another way to provide a check on accuracy is to have the manager or supervisor also assess the employee. Managers can use the same instrument as the employee, or they can fill out a simplified assessment by category rather than by competency. Managers and employees then meet to work through their differences until both agree on the assessment responses.
Similar to managerial validation, 360-degree feedback can be used as validation rather than as the actual assessment. Optionally, where communications and culture are good, the individual and the coworkers who did the validation assessment can work through any variances, usually in a facilitated team meeting. The difference between 360-degree validation and 360-degree assessment is that here the individual still has the ultimate authority to specify the response.
This approach is similar to the Internal Revenue Service's compliance audit program. Here a representative subset of respondents is contacted and asked to verify every line item of their assessment. This can be done either in person or over the phone. The size of the subset can be determined either statistically based on desired confidence levels or behaviorally based upon data closure; i.e., no real change in results is generated from adding more samples.
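For the statistical route, the standard sample-size formula for estimating a proportion, with a finite-population correction, gives a starting point. This is a generic textbook calculation offered as a sketch; the function name and the population figure are illustrative assumptions.

```python
import math

def audit_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Sample size needed to estimate a proportion (e.g., the share of
    accurate assessment responses) within +/- margin at the confidence
    level implied by z, with a finite-population correction.
    p = 0.5 is the worst-case (largest-sample) assumption."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Hypothetical: 500 completed assessments, 95% confidence, +/-5%.
print(audit_sample_size(500))
```

In practice the behavioral stopping rule mentioned above (audit until added samples stop changing the results) often ends well short of the worst-case statistical number.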
The goal is to have the assessment expert/auditor provide the standardization to determine what the true assessment should have been. Errors high or low can be identified, and overall accuracy statistics can be computed. For example, an organization might discover that 85 percent of responses were accurate, with 60 percent of the errors caused by ratings higher than actual and 40 percent of the errors caused by ratings lower than actual.
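Tallying the audit results into accuracy statistics like those in the example is straightforward. The sketch below assumes each audited line item is stored as a pair of ratings, self-reported and auditor-determined; the data and function are hypothetical.

```python
def audit_summary(pairs):
    """Compare self-assessment ratings with auditor ratings.
    pairs: list of (self_rating, auditor_rating) tuples,
    one per audited line item."""
    errors = [(s, a) for s, a in pairs if s != a]
    high = sum(1 for s, a in errors if s > a)  # rated higher than actual
    low = len(errors) - high                   # rated lower than actual
    accuracy = 1 - len(errors) / len(pairs)
    return {
        "accuracy": round(accuracy, 2),
        "pct_high": round(high / len(errors), 2) if errors else 0.0,
        "pct_low": round(low / len(errors), 2) if errors else 0.0,
    }

# Hypothetical audit of 20 line items: 17 accurate, 2 rated high, 1 low.
audited = [(3, 3)] * 17 + [(4, 3), (5, 3), (2, 3)]
print(audit_summary(audited))
```

Tracked assessment over assessment, these three figures show whether instrument changes are actually improving response accuracy.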
The names of those audited are not released. All the organization is concerned about is estimating the accuracy of assessment responses. In an environment of continuous quality improvement (CQI), problem areas can then be addressed in the next assessment instrument.
Organizations may wish to utilize several of these techniques concurrently, particularly in the early stages of a new application. Informal validation is always useful. Managerial validation is an excellent way to gain leadership's involvement and buy-in. Audit sampling involves experts in the assessment process, people who truly understand the model and are qualified to administer it. The audit generates measurable quality data that can be used to track assessment validity over time in order to facilitate improving the instrument continuously. Each of these makes its own unique contributions to the assessment process and has a place in most competency-based HR applications.
How Is the Project Going to Be Communicated to the Organization?
Adding competency-based processes to HR applications is a change of major proportions. Anything this significant and far-reaching is likely to create a large amount of activity on the rumor mill, much of it driven by uncertainty, fear, and anxiety. Perhaps it is too much to hope that everyone will be enthusiastic about new competency-based HR applications, but they should see the value of committing to the effort. It is essential that all present and future stakeholders "mean it" if the development project is to be successful.
Informally, urge development team members to communicate with their peers and departments about the project. Organizationally, use both formal and informal communications tools. Create a positive buzz about the project and its benefits to all participants. Keep focusing on how the applications will make all the HR functions more effective. Provide updates as to the project status.
If warranted, share as much information as possible. One organization placed the first draft of its assessment instrument on the wall, then put an electronic copy on the server and invited everyone in the pilot department to e-mail back comments and suggestions. Show screen layouts around and get future users' reactions to any interactive systems. In general, make the project extremely open and accessible. This is the best way to eliminate concerns about motives, and it also can improve the ultimate solution through early feedback from frontline employees who will be completing assessments.
Sample Project Plan
In 1995, the Anheuser-Busch Learning Center (BLC) embarked on a competency modeling and reporting project for its field sales department. This consisted of personnel who call on Anheuser-Busch (A-B) wholesalers, large national accounts, and retailers across the United States. The department consisted of ten regional offices with supervisors and frontline personnel deployed in home offices throughout the country. Here are the decisions they made in designing a program that was very successful in meeting their desired outcomes.
* Does the organization really mean it? Project approval came directly from the vice president of sales. A-B allocated significant budget and personnel to the effort.
* Is the goal quality or excellence? A-B chose a quality approach. It wanted to help every employee meet position standards. Employees were to be assessed to standards by position and have their competency gaps identified. Then individual development plans were to be created.
* Is the development effort continuous or periodic? Development was periodic. The department had just undergone its first major sales reorganization in nearly twenty years. Time was going to be required for the department to digest the changes and adapt all its processes, so the model was assumed to have some stability for the next several years.
* Is the assessment a rolling process or a batch one? The entire assessment process was periodic. Assessments were completed near the end of the calendar year. The data was then used to create course calendars for the following year, to identify potential attendees, and to provide input for creating or acquiring new development resources.
* Does the model reflect what is or what should be? Due to the recent reorganization, the challenge was to document the new "what is."
* What should the expectations be for the competency project time? Total development time for the initial project was approximately eighteen months. This was on time, and deadlines for the first assessment were met. Continuous improvement was anticipated. (Over a period of four years, the assessment instrument had minor modifications made before each succeeding annual assessment.)
* How are the results going to be used by management? Managers did a superb job of positioning the process and executing to their promises. Assessments were kept separate from well-established performance appraisal and salary administration systems. Assessment results were clearly used by the BLC to provide more targeted development resources.
* What are the desired outcomes for the organization? Sales department employees at A-B were a mix of recent hires and old hands with more than twenty years of experience. A-B wanted to eliminate wasted training and to know what new training to develop. The project was financially justified by the projected elimination of a single wasted day of training per person per year. Any increases in effectiveness were considered to be added value.
* What are the desired outcomes for employees? At the time of the project, performance expectations for the sales positions in the new organizational structure were not clearly defined. A side benefit of creating competency models was the identification of competency standards and tasks for the new positions. Other benefits were curriculums for the new positions so that sales employees would know what courses they should be attending. The summary reports given to individuals included relating gaps to available courses and the creation of a development plan that supported their specific market objectives for the coming year. Finally, employees could ask to be mapped to a position they aspired to so that they could see what was needed to become qualified for that promotion.
* How will success be measured? First, development project goals were met. Assessments were completed, reports were tabulated, printed, and distributed, and counseling sessions with managers were held. The entire process fed the enrollment system and the R&D process for creating new development resources. The only other feedback was informal. Due to the frequency of promotions and moves between regions, the A-B sales department is a highly networked system. The general opinion about a project can usually be readily determined with a few well-placed phone calls. Key managers and frontline individuals were contacted for feedback on the overall attitude and effectiveness of the program.
* What are the desired deliverables? The system generated these reports for individuals: assessment to standards summary, gap report with associated development resources, curriculum status, market plan, and manager's validation summary. The organization utilized customizable reports indicating gaps by competency and development resources needed. All organizational reports could be selected down to individual attendee in order to create invitation lists.
* Who "owns" the process? The project leader was an assistant on the vice president's staff. In the latter stages of development, ownership was shifted to the BLC for administration and any further updates.
* What workgroups will be targeted in the project? The A-B project included all field sales personnel except for administrative assistants in regional offices. The regional structures were identical, and there were ten different professional and managerial positions to model.
* Who will be involved in the development? The project was initially managed by the vice president's staff assistant. The development effort was headed by an experienced project manager in the BLC. The job analysis effort was led by a manager in the A-B corporate organizational development department. Employees from other departments were utilized as needed. An outside consultant was used to help create the assessment instrument, to program the relational database used for tabulating results, to administer the assessment, and to prepare and distribute reports.
* Who will perform the assessment, and upon whom? A-B chose to use self-assessment for this initial implementation. There was some concern about reliability, but this was shown not to be an issue. Instructions were provided with the assessment, and phone support from the consulting firm was available to employees.
* How are assessments validated? The list of competencies was divided up into categories, each containing five to seven line items. Managers were reluctant to fill out complete assessments on each of their people, so a shorter assessment form asking managers to assess subordinates only by category was created and tabulated. This became the basis for discussions when managers met with subordinates to do development planning. At that time, managers went over employees' assessment inputs item by item. A-B found that, in general, bias was not an important issue. Employees generally rated themselves lower than did their managers. Employees said they were excited at the opportunity to receive training and saw the assessment as a way to obtain their fair share. Managers admitted that they did not have enough detailed knowledge of the subordinate's actual competencies. There were a few instances of managers who did not have the right trust environment with their employees. Assessment summary reports quickly highlighted these situations. Employees were contacted, had the process again explained to them, and then filled out another self-assessment.
* How is the project going to be communicated to the organization? A-B has a history of utilizing field sales panels made up of frontline personnel and managers. Membership on this panel is considered to be positive recognition and is assigned on a rotating basis. The field sales panel was kept up to date on the project. In addition, as part of the job analysis, A-B brought in focus groups for each of ten jobs for a three-day analysis and input session. The attendees represented nearly one-sixth of the field sales staff, which meant that the entire sales organization was aware of the effort from the beginning.
A-B has now converted the original periodic, paper-based, batch-mode competency application into an electronic form for its corporate Intranet. This will be discussed in more detail in Chapter 9.
Anheuser-Busch's implementation is not a model for other organizations. It is a sample showing the decisions made by one company in its first implementation, begun before Intranet technology was widely available. The issues covered in this chapter have been identified through experience and can prevent many serious problems later on. Figure 2-3 shows a design decision worksheet for stepping through the questions and options described in this chapter.
There are a multitude of possible paths in working through this chapter. Making decisions as a group will keep the design team focused on the desired outcomes, minimize later problems, and speed project cycle time. Once these decisions are made, the work of building competency models can begin.
* There are important design decisions that must be made at the very start of a competency project in order to eliminate later problems.
* The first implementation should be done right or not at all. This means management has to commit to it, communicate it, fund it, support it, and execute it. Management may also have to change the organization's culture in order to make it work at all.
* All the anticipated outcomes must be clear. This includes quality or excellence, management and individual outcomes, and measures for success.
* Logistics have to be determined, such as selecting the development team, planned project length, periodic or rolling development and administration, current or future process competencies, deliverables, project leadership and involvement, groups to be assessed, and assessment methodology and validation.
1. Jerry B. Harvey, The Abilene Paradox and Other Meditations on Management (San Francisco: Jossey-Bass Publishers, 1996), 13-15.
2. Philip B. Crosby, Quality Is Free (New York: New American Library, 1980), 15.
3. Bob E. Hayes, Measuring Customer Satisfaction: Development and Use of Questionnaires (Milwaukee: ASQC Quality Press, 1991), 51.
4. J.E. Hunter, F.L. Schmidt, and M.K. Judiesh, "Individual Differences in Output Variability as a Function of Job Complexity," Journal of Applied Psychology 75 (1990): 28-42.