SIGCHI Bulletin
Vol.28 No.4, October 1996

Usability Management Maturity, Part 2
Usability Techniques - What Can You Do?

Thyra Rauch, Susan Kahler, George Flanagan

At the 1995 CHI conference, we held a SIG on Usability Techniques (the follow-on to Usability Management Maturity, Part 1: Self Assessment - How Do You Stack Up?), during which we discussed with the audience, at a high level, some basic usability activities. This session was designed for new practitioners of Human Factors and for people who want to start a Human Factors/Usability group.

Usability Techniques

We began the session by presenting an overview of some activities that we thought were important. We then opened the session up to discussion with the audience so that they had ample time to ask us and the other participants in the room questions about what we had found to be successful, and to share their experiences with each other.

This SIG really evolved over the past year as we've talked to many people in different kinds of companies. Some common themes emerged. Many folks were saying "we'd like to be doing usability, but we just can't", with reasons varying from politics to not understanding where to start. First, we assert that you don't have to be an HF/Usability professional to do these activities. The viewpoint is important, and you can learn the rest. In addition, other backgrounds are valuable, and we should use them to our advantage.

To begin with, we recommend one book as a good, practical overview: A Practical Guide to Usability Testing [1]. It not only discusses all aspects of testing (setting up for the test, writing scenarios, recruiting participants), but it also introduces usability, a focus on users, setting goals, the benefits of usability, cost savings, and establishing a usability program. It's also written in a style that shouldn't intimidate anyone just starting out.

Usability engineering, or user-centered design, guides the production of usable systems. Usability engineering is not surface gloss that can be applied at the last minute. Instead, it starts early in the development cycle, involves the user throughout, and continues to the end of the development cycle. Early usability efforts in many companies started as a tail-end test or as part of quality assurance. The development cycle can be thought of in several stages, and for each stage there is a sample set of activities, tools, and methodologies that can be performed. If you are just starting out, you often lack resources, both money and people, so you may not be able to do much. If you can only do a little, what should that be? We propose a few activities that we think are critical.

Getting Your Foot in the Door

The first activity is one that they generally don't teach you in school. A big part of our job is being able to persuade others of the value of user-centered approaches, or of doing usability in general. Having to do this seems to be the norm rather than the exception. Success sometimes hinges on your personal credibility with the design or development team. This credibility takes time to establish, and you may have to establish it anew with each new team. It takes a strong belief that you have something very valuable to offer. A sense of humor is invaluable, in our experience, as is lots of patience.

We are a relatively new discipline, and sometimes you'll find yourself to be the first HF/Usability person in an organization. You often have to spend much of your time demonstrating the value of a usability approach, or selling yourself and your services, particularly to management. But you must do this, or attempts to conduct later activities will probably be in vain.

Cost Benefits

Information that will help convince management that your services are important focuses on the costs and benefits to your organization. For example, poor usability increases support costs and decreases user satisfaction and productivity. If your organization has high support costs (and most do) and is trying to cut them, suggest to your management that you can help in this process. Studies have shown that the great majority of support calls stem from poor usability.

Increased Market Share

If your organization is trying to increase market share, you might point out that usability is key to end-user satisfaction. For example, in one survey of 500 business computer users, usability was the characteristic most often identified with quality. In addition, surveys show that I/S organizations emphasize usability when making their software purchase decisions. Usability ranks with reliability and performance as the characteristics most often identified as affecting end-user satisfaction. And, if that isn't enough, usability accounted for 56% of the write-in comments across all major product characteristics evaluated in satisfaction surveys. As more and more companies sell overseas, you might also point out that usability is just as important in international markets. For example, usability is the most important factor in the Japanese software market, even more so than in the U.S. And usability is important across all platforms and across all levels of products, from mainframe to midrange to desktop software.

In another interesting finding, there are fewer project failures today than there were five years ago. However, only 16% of all projects come in on time and on budget, and still nearly one-third of all projects fail. When looking at the top five causes of success or failure, they look like near mirror images of each other.

(Table: Top 5 Causes of Success and Top 5 Causes of Failure)

Here are some overall tips that we've collected over the years, often learned the hard way:

An approach we suggest if you can't start at the beginning of a development cycle is to start at the end with a test. Pick a single interface or small product. Run a simple test, perhaps only four users, in one day. Have the team watch the test (very important) and document the usability problems. Discuss those problems with the team. Yes, it will probably be too late to fix those problems now, but now you have your foot in the door to perhaps test earlier next time, or perhaps get the improvements into the next release of the product. Keep backing up the testing and evaluation time earlier and earlier into the cycle.

Another approach is to work with a Beta program. Visit a few Beta sites. Include a survey in the Beta package. Better yet, call the Beta participants to ask them questions. Discuss those findings with your team. Again, you may not be able to incorporate the changes now, but you may be able to convince them to do this work earlier next time, or to use your findings as input to the next version.

OK, you've got some buy-in. Now, given that you actually get to start working on a project at the beginning of the cycle, what do you do first?

Knowing the User

The typical developer today is not a typical user for most products. If we don't understand the typical user, we won't be very likely to develop a product that meets their needs. An understanding of the potential users of your product is key to all the tasks that follow. We can run a great test, gather lots of data, but if the test participants are not representative of the target user population, what have we gained? Knowledge of the users includes things like their experience level, available training, their frequency of use of the product, and their work environment.

How do you get this knowledge of the users? The best way is to observe them doing their key tasks in their own environment. However, you can also obtain this information from interviews, market analyses, and questionnaires. The cost of obtaining it ranges from very inexpensive for a questionnaire to very expensive for site visits, unless you have a site close by. But you might already have some of this information available in your company, where it won't cost you anything; try asking your marketing folks, planners, or information developers.

Doing a Task Analysis

The objective of a task analysis is to find out from the users what tasks they perform, how the current systems they use fit into task performance, and how the tasks could be made easier. Essentially, there are about eight details that we should strive to understand about each task the user performs. Often, you may not have the opportunity to observe the users at work at their job site. One alternative is to ask users to participate in a structured interview. The structured interview should start by asking each user to identify the tasks they perform; then the user can be asked to provide details about each task. The following are typical questions that can be asked to get the task details during the interview:

  1. Why is the user doing the task? Is it necessary?
  2. How often does the user perform the task?
  3. How long does it take the user to complete the task?
  4. What are the steps that the user works through to complete the task?
  5. Does the user work with anyone else while performing the task?
  6. What tools or products does the user use to accomplish the task?
  7. Are there any bottlenecks which make the tasks difficult to perform?
  8. How could the task be made easier?

At the end of the interviews, the users can be asked to rate how important the tasks are to their job and how satisfied they are with the current way of completing the task. These last two details, importance and satisfaction, can be used to help the HF/U person understand how to prioritize the tasks. Those tasks which the user indicates are most important to the job, but receive low satisfaction ratings, are the tasks that should be considered critical in terms of needing improvement.
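The importance and satisfaction ratings lend themselves to a simple calculation. As an illustration only (the task names, the 1-7 rating scale, and the prioritize function are our own, not part of any standard method), a sketch in Python:

```python
# Hypothetical sketch: ranking tasks from structured-interview ratings.
# Tasks the user rates as highly important but unsatisfying to perform
# today float to the top of the improvement list.

def prioritize(tasks):
    """Sort tasks by the gap between importance and satisfaction, largest first."""
    return sorted(tasks, key=lambda t: t["importance"] - t["satisfaction"],
                  reverse=True)

# Illustrative ratings on a 1 (low) to 7 (high) scale.
ratings = [
    {"task": "enter an order",     "importance": 7, "satisfaction": 2},
    {"task": "print a report",     "importance": 3, "satisfaction": 6},
    {"task": "look up a customer", "importance": 6, "satisfaction": 5},
]

for t in prioritize(ratings):
    print(t["task"], t["importance"] - t["satisfaction"])
```

Here "enter an order" (important but frustrating) would come out first, flagging it as the critical task to improve.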

Other task analysis methodologies are discussed in [2], [7].

Goal Setting

During iterative design and development, the HF/U person needs to know if a proposed design meets the needs of the users. One way to address this concern is to set usability goals that are specific and measurable. Usability goals often take the form of time limits, error rates, and satisfaction ratings. The initial criteria levels for goals can be obtained from the task analysis information. For example, it may not be acceptable to the user if the time to complete a specific task exceeds five minutes. These goals are then measured as the user completes tasks during prototype evaluations and usability testing. For further discussion of goal setting, see [1].
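To make "specific and measurable" concrete, here is a minimal sketch; the task name, goal values, and function names are hypothetical, not taken from the article:

```python
# Hypothetical usability goals for one task: a time limit, an error
# ceiling, and a minimum satisfaction rating (1-7 scale).
GOALS = {"complete an order": {"max_time_s": 300,   # the five-minute limit
                               "max_errors": 1,
                               "min_satisfaction": 5}}

def meets_goals(task, time_s, errors, satisfaction):
    """Return True if one participant's measurements meet every goal."""
    g = GOALS[task]
    return (time_s <= g["max_time_s"] and
            errors <= g["max_errors"] and
            satisfaction >= g["min_satisfaction"])

print(meets_goals("complete an order", 240, 0, 6))  # within all three criteria
print(meets_goals("complete an order", 420, 0, 6))  # over the time limit
```

Recording measurements in this form during each prototype evaluation makes it obvious, release after release, whether the design is converging on the goals.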

Prototyping

A very common low fidelity prototyping method is the paper and pencil prototype. This is a low tech approach, and the prototype can be created using basic craft supplies. For example, a piece of construction paper can represent a window, and a sticky note can represent menu options. The "roughness" of the prototype communicates to users that the design is not set in stone and is modifiable. We recommend starting with paper and pencil prototypes before implementing a high fidelity prototype such as a computer mock-up. There's been quite a bit of information on prototyping published recently, so rather than duplicate the implementation details here, we'll refer you to [3], [5], [6].

Testing

Many people believe that user testing is one of the most important stages of design because, no matter how thorough you are, you might overlook some aspect of your user group because you are not one of them. Testing may show up existing problems, and redesign may introduce new ones, so iterative testing and redesign is important. Ideally, you should iterate testing and design until your usability goals are met, but the design never really gets perfect, so you do the most important things first: important to the user, that is. In any case, using real users, with a mix of expertise and backgrounds, is critical; get this profile from your earlier user analysis. How many users is enough? The consensus seems to be that 3-7 users find the greatest number of problems at the least cost, and we've found this to be the case. By the end of such a test, we're hearing the same problems fairly predictably.
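The 3-7 figure is often explained with a problem-discovery model (not presented in the original SIG, but widely cited from Nielsen and Landauer's work): if a single test user uncovers any given problem with probability p, then n users are expected to uncover the proportion 1 - (1 - p)^n of the problems. A quick sketch:

```python
# Problem-discovery curve: proportion of usability problems expected
# to be found by n test users, assuming each user independently hits
# any given problem with probability p (0.31 is a commonly cited
# estimate; your own projects may differ).

def proportion_found(n, p=0.31):
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 7):
    print(n, round(proportion_found(n), 2))
# With p = 0.31, three users find about two-thirds of the problems,
# and seven users find over 90%.
```

The curve flattens quickly, which is why small, cheap tests repeated across iterations beat one large test at the end.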

Formal or informal testing? Formal testing is nice if you have a lab. Setting up a lab is a fairly expensive proposition, so consider it if you will be using it on a frequent basis. Once you have a lab, it's fairly easy to set up and conduct a test. Also, a usability lab gives visibility of usability to the rest of the company. But, you do not have to have any special equipment or location in order to perform a test; an office will do nicely in a pinch.

In our experience, we've come to think of "testing" more and more loosely. Instead of a tail-end test, think about an "evaluation" at any point in the development cycle. You can do evaluations with any part of the product interface: the documentation, the install process, etc. We've found that videotaping the evaluations is often helpful, particularly if you have to sell the results, or if the developers don't come to the evaluation (although, as mentioned earlier, it's much better to have them there). You can use the videotape to capture facial expressions, things you might have missed, and critical problems. We don't normally view the videotapes unless we need backup or want to revisit something, but they're nice to have when that need arises. Generally, we have our users think aloud during the evaluation so that we can follow their thoughts, and we use post-test surveys or interviews to further probe their reactions to the product. Other details on usability testing can be found in [1], [4].

Conclusion

We touched on only a few of the many activities that are possible in a user-centered approach, but at CHI we referred the participants to other, more in-depth sessions that were being presented. In addition, we provided a set of references, divided according to the general activities that we touched on in our session.

Bibliography

[1] Dumas, J.S. and Redish, J.C. (1993).
A Practical Guide to Usability Testing. Norwood, NJ: Ablex Publishing Corporation.
[2] Kirwan, B., and Ainsworth, L.K. (Eds.) (1992).
A Guide to Task Analysis. New York: Taylor and Francis.
[3] Rettig, M. (1994).
Prototyping for tiny fingers. In Communications of the ACM, 37 (4), 21-27.
[4] Rubin, J. (1994).
Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. New York: Wiley.
[5] Rudd, J. and Isensee, S. (1994).
Twenty-two tips for a happier, healthier prototype. In Interactions, 1 (1), 35-40.
[6] Spool, J., and Snyder, C. (1994).
Product usability: Survival techniques. CHI'94 Conference Companion, pages 365-366.
[7] Tudor, L.G., Muller, M.J., Dayton, T., and Root, R.W. (1993).
A participatory design technique for high-level task analysis, critique and redesign: The CARD method. Proceedings of the Human Factors & Ergonomics Society 37th Annual Meeting, 295-299.

About the Authors

Thyra Rauch has a Ph.D. in Experimental Psychology from North Carolina State University. She works at IBM's Software Solutions Lab in RTP, NC. Thyra is responsible for leading a user-centered design approach with a group that produces tools for electronic publishing, browsing, and distribution.

Susan Kahler is a Human Factors Engineer at IBM's Software Solutions Lab in RTP, NC. Susan works in the area of application development tools. She is also working on her Ph.D. in Ergonomics in the Department of Psychology at North Carolina State University.

George Flanagan is an IBM managing consultant with IBM Consulting Group, specializing in HCI engineering.

Please address all correspondence to:

Thyra Rauch
IBM Corporation
3039 Cornwallis Road
Research Triangle Park, NC 27709 USA
E-mail: Thyra@vnet.ibm.com
