
User Autonomy

Batya Friedman and Workshop Participants

Introduction

Autonomy is fundamental to human flourishing and self-development (Gewirth, 1978; Hill, 1991). If we also accept that technology can promote human values (Friedman, 1997; Winner, 1985), then an important question emerges: How can we design technology to enhance user autonomy?

In this workshop, we addressed this question. We built on the organizers' previous framework for understanding user autonomy (1) to analyze participants' research and design experiences of user autonomy in system design, (2) to characterize designs that support user autonomy, and (3) to identify design methods to enhance user autonomy. We report on those activities here.

Keywords

Autonomy, computer system design, design methods, ethics, human values, information systems, social computing, social impact, value-sensitive design.

A Framework for Understanding User Autonomy

(Adapted from Friedman, 1996; Friedman and Nissenbaum, 1997.)

The organizers provided a working definition and conceptualization of autonomy. Briefly, autonomy refers to the capability to act on the basis of one's own decisions and to be guided by one's own reasons, desires, and goals. An autonomous individual is self-governing or self-determining, setting forth goals and determining the means by which to accomplish them. This definition does not preclude the possibility that users may discover goals through their use of the technology or that their goals may be influenced by the larger social context.

To provide a framework for fruitful discussion in this workshop, the organizers then identified four aspects of computer systems that can contribute to user autonomy.

System Capability

User autonomy can be undermined when the system does not provide the user with the technological capability necessary to realize his or her goals. To illustrate this problem, consider a recent multimedia workstation design from a leading hardware/networking company in the United States (Tang, 1997). The design lacked a simple hardware switch for turning off the microphone during video-conferencing sessions, which in turn prevented users from realizing their goals for intermittent moments of audio privacy (and from guarding against audio eavesdropping).

System Complexity

In some instances, systems may provide users with the necessary capability to realize their goals, but such realization in effect becomes impossible because of complexity. This problem can be seen with systems that require users to engage in convoluted actions to obtain simple results, recall long sequences of key presses or mouse clicks, or perform intricate computations for input to the system. The problems of system complexity arise from a mismatch between the demands the system makes and the users' abilities in terms, for example, of skill, memory, attention span, computational ability, or physical ability. Granted, the relationship between good design and training needs to be understood. Autonomy is supported by systems that require a reasonable amount of training -- the specific amount will depend on the type of system (e.g., less for an e-mail program, more for a CAD system). But when the amount of training goes beyond what the user sees as reasonable, autonomy is compromised.

Knowledge About the System

Sometimes, in order to use a system as desired, a user must know how the system goes about its work. When the design of a system does not make this sort of information about its functioning accessible to the user, then the user's autonomy can be undermined. For example, consider a typical trainable agent such as a mail or news filter. These agents may provide the user with important information but hide critical assumptions about how that information was collected and filtered (e.g., On what basis was some mail automatically deleted? Why were some news stories overlooked?). The user has performance results (which may be very good) but no knowledge of or access to the underlying rules and database the agent uses to guide that performance. To their credit, some researchers in the intelligent agent community have explored the use of `explanations' (e.g., indicating another situation in which the agent recommended a similar course of action) as a means for helping users to understand how the agent determined a course of action. However, the rules behind the agent's recommendation still remain hidden from the user, and the user cannot readily assess whether the agent's judgments accurately implement the user's principles (e.g., goals).(1)
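
To make this concrete, consider the following minimal sketch (in Python; the rule names and messages are invented and not drawn from any particular agent product) of a filtering agent that records the rule behind each decision, so the user can ask why a message was handled as it was rather than seeing only the outcome.

    # Minimal sketch: a mail filter that can explain its decisions.
    # Rule names, predicates, and messages are illustrative only.

    class Rule:
        def __init__(self, name, predicate, action):
            self.name, self.predicate, self.action = name, predicate, action

    class ExplainableFilter:
        def __init__(self, rules):
            self.rules = rules
            self.log = []   # (subject, rule name, action) triples

        def process(self, message):
            for rule in self.rules:
                if rule.predicate(message):
                    self.log.append((message["subject"], rule.name, rule.action))
                    return rule.action
            self.log.append((message["subject"], "no rule matched", "keep"))
            return "keep"

        def explain(self, subject):
            # Surface the otherwise hidden basis for the agent's behaviour.
            return [entry for entry in self.log if entry[0] == subject]

    rules = [Rule("bulk sender", lambda m: "promo" in m["from"], "delete")]
    agent = ExplainableFilter(rules)
    agent.process({"from": "promo@example.com", "subject": "Sale"})
    print(agent.explain("Sale"))   # [('Sale', 'bulk sender', 'delete')]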

Misrepresentation of the System

Users can also experience a loss of autonomy when provided with false or inaccurate information about the system. For example, package copy that does not accurately reflect the software can mislead a user who then develops inaccurate expectations for the software's capability to realize his or her goals.

Illustrative Cases of User Autonomy

Prior to the workshop, participants collected cases of computer use which they felt posed problems for user autonomy. These cases spanned a variety of circumstances, from operating systems to interface design to organizational structure. As a group, the cases helped to communicate the significance and depth of considerations of autonomy in the design of systems. We describe some of these cases here.

Case 1: Who Should Control Software Installation? (Bay-Wei Chang)

On the Macintosh, many software packages are installed by running a special installer program. An installer places new files where they are needed, sometimes deleting or moving existing files as well. The process is completely automated. While this is convenient for users who do not wish to know any further details, it makes it difficult to troubleshoot when the installation causes problems: one does not know what files went where. Furthermore, not knowing which files were installed makes it extremely difficult to completely undo the installation later.
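
As a contrast, here is a minimal sketch (in Python, with invented file and manifest names; it does not describe the actual Macintosh installer mechanism) of an installer that records every file it places in a manifest, so the installation can later be inspected or undone.

    # Minimal sketch: an installer that logs what went where.
    import json, os, shutil

    def install(files, destination, manifest_path="install_manifest.json"):
        manifest = []
        for source in files:
            target = os.path.join(destination, os.path.basename(source))
            replaced = os.path.exists(target)      # note whether an existing file is overwritten
            shutil.copy(source, target)
            manifest.append({"installed": target, "replaced_existing": replaced})
        with open(manifest_path, "w") as f:
            json.dump(manifest, f, indent=2)       # the user can see what went where

    def uninstall(manifest_path="install_manifest.json"):
        with open(manifest_path) as f:
            for entry in json.load(f):
                if os.path.exists(entry["installed"]):
                    os.remove(entry["installed"])  # undoing the installation is now straightforward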

Case 2: Autonomous Users of On-line Help (Ise Henin)

In using software, a user needs to be able to decide what is important to know and to find solutions to problems quickly. On-line tutorials and training materials too often support the developer's goal that the user become proficient with the features of the product, irrespective of the user's own goals. The materials take the user through a series of exercises to illustrate a general concept and promote understanding; they provide a complete treatment of the subject but are not conducive to the quick problem solving users require. Instead, on-line help should be context-sensitive and designed to answer the question `How do I?', with the goal of supporting the user in acquiring problem-resolution skills.

Case 3: Training Autonomous Users (Mark Rosenstein)

In one system, as part of a larger educational objective, students had to learn how to light-off (turn on) a steam propulsion plant. At an abstract level, the technician must solve a fairly complex constraint problem, and like many constraint problems it turned out to have multiple sequences that would successfully get up steam. Rosenstein reported that students who worked through the various possibilities developed quite a good understanding of the underlying structure of the problem. Unfortunately, when Rosenstein's group presented a prototype of this system, their clients were unhappy. They wanted students to learn `the procedure' for light-off. In the clients' words, `There will be no free-form light-offs'. In this environment (the US Navy) there are good reasons to have a standard procedure. Many individuals rotate through the positions, so the standard procedure serves as a communication device, novices aren't confused by extraneous alternative steps, and in times of stress a fixed procedure may be more likely to be completed successfully. There are also good reasons (things breaking, all sorts of contingencies) to allow students to develop an understanding of -- and sufficient control over -- the alternatives. In short, autonomy is situated.

Case 4: Inferring User's Intentions with Agent Technology (Bay-Wei Chang)

In an interface programming environment Chang built (named Seity), each underlying object in the system is represented by a single identity on the screen. To display relationships such as inheritance, objects move under their own power, frequently in concert with other objects. For example, objects on one part of the screen might join others on the opposite side of the screen to form a tree and then later return to their former places as the user browses the initial structure. Difficulties can arise when, in moving to create a new structure, the objects destroy the on-screen representation of a previous structure. The independent movement of objects on the part of the system must be carefully crafted to respect the user's expectations and desires. In Seity, Chang attempted to minimize this problem by requiring an action from the user to initiate object movement. For example, the user requests to see a certain inheritance chain, and objects on and off the screen move together to form the graph.
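
The following sketch is not Seity itself; it only illustrates, with invented object names and coordinates, the design rule that objects rearrange themselves solely in response to an explicit user request and that the prior layout is preserved so it can be restored.

    # Minimal sketch: object movement happens only on user request,
    # and the previous layout is remembered rather than destroyed.
    class ObjectView:
        def __init__(self, positions):
            self.positions = dict(positions)   # object name -> (x, y)
            self.saved_layouts = []

        def show_inheritance_chain(self, chain):
            # Triggered by the user's request, never by the system alone.
            self.saved_layouts.append(dict(self.positions))
            for depth, name in enumerate(chain):
                self.positions[name] = (100, 40 * depth)

        def restore_previous_layout(self):
            if self.saved_layouts:
                self.positions = self.saved_layouts.pop()

    view = ObjectView({"Shape": (10, 10), "Circle": (200, 80)})
    view.show_inheritance_chain(["Shape", "Circle"])
    view.restore_previous_layout()
    print(view.positions)   # the original layout is back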

Case 5: What Trade-offs Enhance User Autonomy -- Flexibility vs. Reliability? (Nicole Parrot)

In designing a 3D animation package for the high-end computer animation industry, SOFTIMAGE seeks to provide users with as much autonomy as possible by imposing few constraints on use -- which translates into faster performance and greater flexibility. However, these advantages come at the expense of largely unguarded software that can be vulnerable to system crashes. Notably, some users may be willing to tolerate system crashes in order to have greater control over the system, while other users are willing to relinquish some control in exchange for a more reliable system. What is the right balance between flexibility and reliability for optimizing user autonomy?

Case 6: User Involvement in Design (Pekka Lehtio)

Much of the specification of systems goes on early in the design process. By the time the end user gets involved, all of the major decisions have been made and the end user's autonomy is undermined.

Design Methods to Support User Autonomy

Our discussion of the framework and cases led naturally to a discussion of how our design practice might support user autonomy in future designs. We summarize that discussion below. Many of the methods we describe dovetail with other usability goals.

1. Know Thy User

A good match between user desires and goals and system capability can only be obtained if designers have a reasonable understanding of the users' goals and intentions, and how users go about their activities. As highlighted in Case 3 (Training Autonomous Users) and Case 5 (What Trade-offs Enhance User Autonomy -- Flexibility vs. Reliability?), understanding the users' and organization's goals is key to providing a level of control and system flexibility that optimizes user autonomy.

2. Layer Access to System Capability and Complexity

Our framework on user autonomy points to system capability and system complexity as two key aspects of systems that affect user autonomy. Greater capability and control over the system is typically coupled with greater complexity in the use of the system. As a general rule of thumb, we suggest designers initially provide users with a powerful but small set of capabilities so that users may succeed on a reasonable subset of tasks without being undermined by the system's complexity. Access to other layers of capability should be included but initially hidden from view. This approach can be seen in numerous existing systems. For example, some systems allow the user at the beginning of the work session to set the user level -- novice, intermediate, expert. Other systems allow the user at the beginning of the work session to specify the task -- simple document, desktop publishing. Other systems set default preferences and provide a `hidden' preference menu to allow the user to customize the system. Finally, other systems use a single mode that relies on a working subset of commands for typical tasks but provides additional commands should more complicated tasks be attempted (e.g., the Symbolics LISP machine can be used simply as a LISP machine but allows access to the microcode through special commands should they be required). All of these methods return autonomy to the user by hiding unneeded capability and complexity and revealing that capability and complexity only when the user is better positioned to make use of it.
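
As a simple illustration of layering (the command names and levels below are invented, not taken from any of the systems mentioned above), the full command set exists in the system, but only the layer matching the user's chosen level is offered, and the user decides when to move up.

    # Minimal sketch: capability exposed in layers chosen by the user.
    COMMAND_LAYERS = {
        "novice":       ["open", "save", "print"],
        "intermediate": ["open", "save", "print", "styles", "macros"],
        "expert":       ["open", "save", "print", "styles", "macros",
                         "scripting", "raw-preferences"],
    }

    class LayeredInterface:
        def __init__(self, level="novice"):
            self.level = level

        def visible_commands(self):
            return COMMAND_LAYERS[self.level]

        def set_level(self, level):
            if level in COMMAND_LAYERS:
                self.level = level    # the user, not the system, chooses the layer

    ui = LayeredInterface()
    print(ui.visible_commands())      # ['open', 'save', 'print']
    ui.set_level("expert")
    print(ui.visible_commands())      # the full command set is now exposed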

3. Feedback

Appropriate feedback about the system can help foster user autonomy. In some instances, the system contains information that would be useful to the user (e.g., about the state of the machine, or about the filtering mechanism used by intelligent agents), and such information can be made available to the user (e.g., an icon to indicate the state of the machine, or a display of an intelligent agent's interaction history).
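
A minimal sketch of such feedback follows (the events shown are invented): each agent or system action is appended to an interaction history that the user can display on demand.

    # Minimal sketch: collect and display the system's interaction history.
    import time

    class InteractionHistory:
        def __init__(self):
            self.events = []

        def record(self, actor, action, detail):
            self.events.append((time.strftime("%H:%M:%S"), actor, action, detail))

        def display(self):
            for stamp, actor, action, detail in self.events:
                print(f"{stamp}  {actor:12s} {action:10s} {detail}")

    history = InteractionHistory()
    history.record("news agent", "filtered", "dropped 14 stories matching 'sports'")
    history.record("system", "state", "network connection lost")
    history.display()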

4. Articulate System Designers' Assumptions

In the process of design, designers must make numerous assumptions about the user and context of use. Frequently the designers' assumptions will differ from the user's reality. If designers can make their assumptions visible, then users can make more judicious use of the system.

5. Capability to Explore

Self-determination is a central aspect of autonomy, and the activity of self-directed exploration follows from self-determination. Thus, systems that better support exploration better support user autonomy. Exploration, in turn, requires good feedback in which the results of actions are rapid, incremental, and reversible.
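
A minimal sketch of this idea follows (the document and edits are illustrative only): every action is applied immediately, kept small, and recorded so that it can be undone, letting the user experiment without fear of losing work.

    # Minimal sketch: rapid, incremental, and reversible actions.
    class ReversibleDocument:
        def __init__(self, text=""):
            self.text = text
            self.undo_stack = []

        def apply(self, new_text):
            self.undo_stack.append(self.text)   # remember the previous state
            self.text = new_text                # the change takes effect at once

        def undo(self):
            if self.undo_stack:
                self.text = self.undo_stack.pop()

    doc = ReversibleDocument("hello")
    doc.apply("hello world")
    doc.apply("hello brave new world")
    doc.undo()
    print(doc.text)   # 'hello world' -- the experiment was reversible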

6. Tailorability and Extensibility

While good design requires that designers `know thy user', such knowledge will always be imperfect. Systems that are easily tailorable and extensible provide the means for users to augment capability that may not have initially been built into the system (Lai, Malone, & Yu, 1988; Malone, Lai, & Fry, 1992).
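
A minimal sketch of tailorability follows (the command names are invented): users register their own commands alongside the built-in ones, adding capability the designers did not anticipate.

    # Minimal sketch: an application whose command set users can extend.
    class ExtensibleApp:
        def __init__(self):
            self.commands = {"count-words": lambda text: len(text.split())}

        def add_command(self, name, func):
            self.commands[name] = func          # a user-supplied extension

        def run(self, name, text):
            return self.commands[name](text)

    app = ExtensibleApp()
    app.add_command("count-sentences", lambda text: text.count("."))
    print(app.run("count-words", "One two three."))        # built-in: 3
    print(app.run("count-sentences", "One. Two. Three."))  # user-added: 3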

7. Build Up Systems Incrementally

Drawing on the idea of component software, begin with a small, simple system that allows the user to accomplish basic tasks. As the user's tasks require additional functionality, add in just the functionality (and complexity) that is needed to do this new task. Thus, the system builds up slowly and the user is able to negotiate the incremental increases in complexity.
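
A minimal sketch of this approach follows (the component names are invented): the system starts with a basic component and loads additional components, together with their complexity, only when the user's task requires them.

    # Minimal sketch: a system that grows component by component.
    class ComponentSystem:
        def __init__(self):
            self.loaded = {"editor": "basic text editing"}
            self.available = {"tables": "table layout",
                              "drawing": "vector drawing",
                              "mail-merge": "bulk correspondence"}

        def require(self, component):
            # Load a component only at the moment the user's task needs it.
            if component not in self.loaded and component in self.available:
                self.loaded[component] = self.available[component]
            return sorted(self.loaded)

    system = ComponentSystem()
    print(system.require("tables"))   # ['editor', 'tables'] -- grown as needed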

8. Involve Users Throughout the Process

As noted in Case 6 (User Involvement in Design), much of the specification of systems goes on early in the design process. By the time the end user becomes involved, all of the major decisions have been made and the end user's autonomy is undermined. To address this problem, involve users early on and consistently throughout the design process. The growing literature on participatory design develops this approach (Blomberg, Suchman, & Trigg, 1996; Clement & Van den Besselaar, 1993; Greenbaum & Kyng, 1991; Kuhn, 1996; Schuler & Namioka, 1993).

Conclusions

In this workshop, we examined how to design future systems to promote user autonomy. We also recognized, however, that user autonomy must sometimes be limited. Safety considerations provide one example: here we may need to restrict user autonomy to protect against a user with malicious intentions as well as well-intentioned users guided by poor judgment. Moreover, pursuing a high-level goal can involve constraining lower-level desires and needs. It is here that we can begin to understand how, from the perspective of user autonomy, we may need to balance standardization with individual preferences. Since user autonomy suggests greater flexibility and personalization, it may appear to run contrary to goals for interface standardization. Consider, on the one hand, that some forms of standardization may restrict users in ways that matter to them in relation to their goals. On the other hand, standardization can free users from the burden of relearning interfaces as they switch among systems (or switch among stations running the same system) and thereby, in effect, provide users with greater control over their systems and enhance their overall autonomy.

If we value user autonomy -- as the participants in this workshop do -- then we must design our technology accordingly. This workshop provided initial directions, theoretically and practically.

References

Blomberg, J., Suchman, L., & Trigg, R. H. (1996). Reflections on a work-oriented design project. Human-Computer Interaction, 11(3), 237-265.

Clement, A., & Van den Besselaar, P. (1993). A retrospective look at PD projects. Communications of the ACM, 36(4), 29-37.

Friedman, B. (1996). Value-sensitive design. interactions, III(6), 17-23.

Friedman, B. (Ed.) (1997). Human values and the design of computer technology. New York, NY: Cambridge University Press and CSLI/Stanford University.

Friedman, B., & Nissenbaum, H. (1997). Software agents and user autonomy. Proceedings of the First International Conference on Autonomous Agents (pp. 466-469). New York, NY: Association for Computing Machinery Press.

Gewirth, A. (1978). Reason and morality. Chicago: University of Chicago Press.

Greenbaum, J., & Kyng, M. (1991). Design at work. Hillsdale, NJ: Lawrence Erlbaum.

Hill, T. E., Jr. (1991). Autonomy and self-respect. Cambridge: Cambridge University Press.

Kuhn, S. (1996). Design for people at work. In T. Winograd (Ed.), Bringing design to software (pp. 273-289). Reading, MA: Addison-Wesley.

Lai, K., Malone, T. W., & Yu, K. (1988). Object Lens: A `spreadsheet' for cooperative work. ACM Transactions on Office Information Systems, 6(4), 332-353.

Malone, T. W., Lai, K., & Fry, C. (1992). Experiments with Oval: A radically tailorable tool for cooperative work. In J. Turner & R. Kraut (Eds.), Proceedings of the Conference on Computer-Supported Cooperative Work (pp. 289-297). New York, NY: Association for Computing Machinery Press.

Schuler, D., & Namioka, A. (Eds.) (1993). Participatory design: Principles and practices. Hillsdale, NJ: Lawrence Erlbaum.

Tang, J. C. (1997). Eliminating a hardware switch: Weighing economics and values in a design decision. In B. Friedman (Ed.), Human Values and the Design of Computer Technology. New York: Cambridge University Press and CSLI/Stanford University.

Winner, L. (1985). Do artifacts have politics? In D. MacKenzie & J. Wajcman (Eds.), The Social Shaping of Technology (pp. 26-38). Philadelphia, PA: Open University Press.

Address

Please address all correspondence to: Batya Friedman,
Associate Professor of Computer Science,
Colby College,
Waterville, ME 04901, USA.
E-mail: b_friedm@colby.edu

Organizers

Batya Friedman (Colby College), Helen Nissenbaum (Princeton University)

Participants

Bay-Wei Chang (Xerox PARC), Batya Friedman (Colby College), Ise Henin (University of Victoria), David Kirsh (University of California, San Diego), Pekka Lehtio (University of Turku), Nicole Parrot (SOFTIMAGE-Microsoft), and Mark Rosenstein (Bellcore)


Footnotes

(1)
As noted by workshop participant Nicole Parrot, in some situations it may be possible to hack the system, perhaps by exploiting a system bug or by using additional sources, to obtain the desired information. While this may be technically possible, in doing so the user would have gone beyond what reasonably constitutes the system as presented to the user.
