SIGCHI Bulletin
Vol.29 No.4, October 1997

Testing for Power Usability: A CHI 97 Workshop

Keith S. Karn, Thomas J. Perry, Marc J. Krolczyk


Usability studies are usually conducted on a compressed time scale (measured in hours) compared with a user's eventual experience with a product (often measured in years). For this reason, typical usability evaluations focus on success during initial interactions with a product (see, for example, Dumas & Redish, 1994, and Nielsen & Mack, 1994). Familiarity with similar products may predict success on initial use of a new product. Are what we call "intuitive" user interfaces really just familiar user interfaces? This familiarity effect can often swamp the usability differences between design alternatives. If usability evaluations continue to emphasize initial success with a product, we may inhibit innovation in user interface design.

There is a tension between initial usability (measured by success at first encounter) and efficiency of skilled performance. Initial learning of a product's user interface, whether for a computer game or a business software application, often produces quite rapid gains in efficiency of use. The time spent with a product once up on a learning plateau typically greatly exceeds the time spent on the steeper part of the learning curve. Thus, traditional usability evaluation techniques that emphasize initial product use may fail to capture the usability problems that affect users the most.

We do not question the importance of testing a product's learnability (see Usability Sciences Corporation, 1994), but we feel that the human-computer interaction community should ensure usability throughout the product life-cycle. A narrow focus on initial usability elevates learnability above efficiency once up the learning curve. While this approach may be appropriate for products targeted primarily at casual or occasional users, it fails to capture the usability issues associated with power users (those with significant experience, training, or a professional orientation to their interaction with the product).
Initial interactions with a product may affect the purchase decision, but usability over the longer term may determine whether a user will become a truly satisfied customer.
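The learning-curve argument can be made concrete with a small sketch using the power law of practice, a standard model of skill acquisition (this is our illustration, not a workshop result; the parameters T1 = 60 seconds and a = 0.4 are arbitrary assumptions):

```python
# Illustrative sketch only: the power law of practice models task time
# on the n-th use of a product as T(n) = T1 * n**(-a).
def task_time(n, t1=60.0, a=0.4):
    """Seconds to complete a task on trial n, for assumed T1 and a."""
    return t1 * n ** (-a)

def total_time(trials, t1=60.0, a=0.4):
    """Total seconds spent over the first `trials` uses."""
    return sum(task_time(n, t1, a) for n in range(1, trials + 1))

# The first 20 trials -- the steep part of the curve that a typical lab
# test observes -- turn out to be a small fraction of the time a regular
# user spends with the product over thousands of uses.
early = total_time(20)
lifetime = total_time(2000)
print(f"share of use observed in a 20-trial test: {early / lifetime:.1%}")
```

Under these assumptions, a lab session covering the first twenty trials samples only a few percent of a regular user's total interaction time, which is the quantitative heart of the argument above.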

Workshop Goals

The goals of the workshop were to exchange and develop techniques to address usability testing of:

  1. New features and incremental design improvements to an existing product with an established, highly experienced, power user population. Example: a new version of a computer aided design program.
  2. Innovative or novel products replacing older technologies used by an established power user population. Example: replacing a command line interface with a graphical user interface.
  3. Entirely new technologies that may include a new and unfamiliar user interface and have no established user population. Example: the first Web TV appliance.

Workshop Preparation

Thirteen highly experienced user interaction designers and usability engineers participated in the workshop (see Figure 2). In extensive pre-workshop e-mail communication, we shared our views of the critical issues in testing for power usability and each described in detail one real-world testing example.

From our electronic correspondence we learned that the products we design and test differ greatly, ranging from hand-held data entry devices to fighter aircraft. We come from many countries and work in different settings: from design groups in multinational corporations to consulting firms to academia. Despite these differences, we were united by a desire to improve the state of the art in design and usability testing of products with an experienced user population and mission-critical requirements for error-free performance and productivity.

Workshop Methodology

The workshop began with each participant presenting a real-world example of testing for power usability. These presentations included the nature of the task and work environment, a description of the product (its revolutionary versus evolutionary nature, i.e., the preexistence of a power user population), the techniques used for usability testing, and lessons learned. During the presentations, all participants recorded brief descriptions of causes and potential solutions related to the problems of testing for power usability. Participants wrote these causes and solutions with large markers on color-coded, self-adhesive paper to facilitate later posting for group interaction.

From the presentations we agreed on a problem statement: Power usability is not adequately addressed in product testing.

We then used an adaptation of the Ishikawa root cause analysis technique (Ishikawa, 1982; see also Kimbler, 1995) to dissect the underlying causes of this problem and identify potential solutions. This analysis technique (also known as a fishbone diagram) is a problem-solving tool used in various quality management programs. Our adaptation elaborated on the fishbone analysis technique (see Figure 1) and included fish-related themes for the four steps in the analysis method:

  1. Fish Bone: We posted the brief descriptions of the underlying causes of the stated problem that we had recorded during the presentations. We roughly clustered these causes on bones of a large model fish skeleton.
  2. Name that Bone: With additional discussions and critiques we refined the arrangement and clustering of the causes and provided a name for each bone. This provided succinct statements of the underlying causes of the problem.
  3. Filet-O-Fish: We posted solutions on the fishbone diagram, aligning each with its associated problems.
  4. Name that Fish: We reflected on the fish and identified conclusions and questions for further work.
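As a rough sketch, the completed diagram can be thought of as a simple data structure: the problem statement at the head, and named bones holding clustered causes with solutions aligned to them. The two bone names below are taken from the appendix; the individual cause and solution entries are invented placeholders, not the workshop's actual findings.

```python
# Hypothetical sketch of a fishbone (Ishikawa) diagram as data: the
# problem at the head, named bones clustering causes, and solutions
# aligned with the causes they address. Entries are illustrative only.
fishbone = {
    "problem": "Power usability is not adequately addressed in product testing",
    "bones": {
        "Barriers to Customer Access": {
            "causes": ["experienced users are busy and hard to recruit"],
            "solutions": ["partner with customer sites for on-site testing"],
        },
        "Resources and Schedule": {
            "causes": ["no budget for longitudinal testing"],
            "solutions": ["plan post-release data collection up front"],
        },
    },
}

# "Name that Bone": each bone name summarizes its clustered causes.
for bone, content in fishbone["bones"].items():
    print(f"{bone}: {len(content['causes'])} cause(s), "
          f"{len(content['solutions'])} solution(s)")
```

Representing the diagram this way makes the Filet-O-Fish step explicit: every solution is stored next to the cluster of causes it addresses.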


The activities generated ten categories (bones) of problems and solutions. The appendix presents the complete list of problems and solutions generated by the fishbone analysis. Note that the list is probably incomplete and that the solutions may not be appropriate to every situation.


The Name That Fish activity produced the following summary thoughts.

Power users differ from casual or infrequent users of a system.

During the course of the discussion, workshop participants identified a partial set of power user attributes.

In addition, we listed some system and environmental factors that often affect power users:

The problems of testing for power usability are not unique.

All of the issues identified are common to any usability test. However, there are differences in emphasis that come with assessing skilled performance.

Power usability must be addressed in initial design.

As with most areas of user-centered design, power usability must be considered in the initial phases of the design to have maximum impact. Here are some issues to consider in this regard.

Some problems won't be found until the product is being used by customers for real work.

Of course, no amount of laboratory testing will reveal all the usability issues with a product. To anticipate this, plan and budget for data collection after the product is introduced. Consider building data capture capabilities into the product for data collection in the user's environment. Focus on identifying long-term use problems to be fixed in the next version of the product. Remember that some problems may be architectural in nature, requiring more major changes and several releases to fix.
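A minimal sketch of such built-in data capture might look like the following (all names, paths, and event fields here are hypothetical): append timestamped UI events to a local log so that long-term usage patterns can be analyzed after release.

```python
# Hypothetical sketch: append-only in-product usage capture, writing
# one JSON object per line so logs can be collected and analyzed after
# the product is in the field.
import json
import time

class UsageLog:
    """Append-only log of timestamped UI events."""

    def __init__(self, path):
        self.path = path

    def record(self, event, **details):
        entry = {"t": time.time(), "event": event, **details}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

log = UsageLog("usage.jsonl")
log.record("command", name="export", via="menu")  # menu vs. shortcut use
log.record("undo", depth=3)                       # possible error recovery
```

Comparing, for example, menu versus keyboard-shortcut invocations of the same command across months of such logs is one way to see whether users ever climb past the novice plateau.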

Questions For Further Inquiry

During discussions, we identified several questions for further inquiry:

Next Steps

Participants in the workshop agreed that there should be broad interest in a workshop on designing for power usability at a future CHI meeting.

References


Dumas, J., Redish, J. (1994) A Practical Guide to Usability Testing. Ablex, Norwood, NJ

Ishikawa, K. (1982) Guide to Quality Control. Asian Productivity Organization, Tokyo

Kimbler, D. (1995) Cause and effect diagram.

Nielsen, J., Mack, R. (1994) Usability Inspection Methods. John Wiley & Sons, NY

Shneiderman, B. (1990) User interface races: Sporting competition for power users. In B. Laurel (ed.), The Art of Human-Computer Interface Design. Addison-Wesley, Reading, MA

Usability Sciences Corporation (1994) Windows 3.1 and Windows 95 Quantification of Learning Time and Productivity.

About the Authors

All workshop participants (Figure 2) contributed to the content of this article. The workshop organizers and editors of this article were:

Keith Karn is a senior user interface designer at Xerox Corporation. His background is in human-machine interaction design and usability evaluation of airplane cockpits and office products. He has an M.S. in industrial engineering and Ph.D. in experimental psychology.

Tom Perry is a senior user interface designer at Xerox Corporation. He is a Certified Professional Ergonomist with an M.A. in Human Factors Psychology from California State University, Northridge.

Marc Krolczyk is a user interface designer at Xerox Corporation. He has worked in the field of graphic design and received his M.A.H. from SUNY at Buffalo in visual communication.

Authors' Address

Keith Karn
Industrial Design / Human Interface Department
Xerox Corporation
Mail Stop 0801-10C
1350 Jefferson Road
Rochester, NY 14623 USA

Telephone: 716-427-1561

Tom Perry
(same address as above)
E-mail: Thomas_
Telephone: 716-422-5524

Marc Krolczyk
(address same as above)
Telephone: 716-427-1879

Appendix: Problems and Solutions

These are the results of the root-cause analysis: a list of causes underlying the problem of inadequate testing for power usability, with related potential solutions. The headings evolved as the bones of the fishbone diagram (see the text for details on the root-cause analysis technique). Problems are marked with a bullet; solutions with a bullet and a "!".

Revolutionary Products (how to test when tasks are uncertain and no users exist yet)

Barriers to Customer Access (both for customer research and for testing)

Resistance to Change (of experienced users and sometimes developers)

Resources and Schedule (insufficient time and funding to do the job right)

Usability Goals (specifying them, specifying them in a testable way)

Identifying Scenarios & Tasks (what do highly experienced users actually do)

Simulating Scenarios & Tasks in Tests (how to duplicate context of real work in lab setting)

Identifying the Power User Profile (what are the attributes of power users)

Selecting Users for Testing (how to ensure that test subjects match the power user profile)

Designing Tests of Power Usability (subject training, metrics, data analysis, other aspects)

Figure 1. Workshop participants clustering related statements of problem causes onto the fishbone diagram.

Figure 2. Workshop Participants posing with the completed fish-bone cause and effect diagram. Left to right: Keith Karn, Xerox Corporation, USA; Tim White, Eastman Kodak Company, USA; Jill Drury, The MITRE Corporation, USA; Marc Krolczyk, Xerox Corporation, USA; Vibeke Jorgensen, Kommunedata, Denmark; Anette Zobbe, Kommunedata, Denmark; Gerard Jorna, Philips Corporate Design, The Netherlands; Tom Perry, Xerox Corporation, USA; Julianne Chatelain, Trellix Corporation, USA; Ronald Baecker, University of Toronto, Canada; Jose Madeira Garcia, Autodesk, USA; Kaisa Vaananen-Vainio-Mattila, Nokia, Finland; Stephanie Rosenbaum, Tec-Ed, Inc., USA; (not pictured: Judith Rattle, Philips Corporate Design, The Netherlands).
