Report: Minimizing Bias in Computer Systems

A CHI '95 Workshop

Batya Friedman, Eric Brok, Susan King Roth, John Thomas

A few Novembers ago, Harrison (a pseudonym) walked into his familiar voting place in the United States. The same old voting booths on tottering legs with scant curtains greeted him. Inside, however, was something new: a computerized voting card. With a bit of fumbling and careful reading of the directions, he figured the thing out. Cast his vote. Participated in modern democracy. But nationwide computer punch card tallying systems pose serious problems for fair elections. In particular, under-educated groups are more likely not to understand how the computerized system works and, thus, to invalidate their own votes, either by not voting for a position or by voting for more than one person per position (Dagger, 1988). This example begins to illustrate how the interface design of computerized voting systems can favor some groups over others. More generally, the example speaks to the problem of bias in computing technologies.

In this workshop we were concerned with understanding bias in computer systems and developing methods to help minimize bias through the design process. The workshop built on the organizers' previous work which provides a framework for understanding bias in computer systems. In the workshop, we examined this framework and drew on participants' research and design experiences to identify common biases in computer systems and to generate means for minimizing bias. We report on those activities here.

A Framework for Understanding Bias in Computer Systems(1)

In its most general sense, the term bias means simply "slant." Given this undifferentiated usage, bias can describe both moral and nonmoral circumstances. Our discussion, however, focuses on computer technologies whose biases are a source of moral concern and, thus, we use the term bias in a more restricted sense. We say that a computer technology is biased if it systematically and unfairly discriminates against certain individuals or groups of individuals in favor of others. A technology discriminates unfairly if it denies an opportunity or a good, or if it assigns an undesirable outcome to an individual or group of individuals on grounds that are unreasonable or inappropriate. Based on this definition, two points follow. First, systematic errors do not establish bias unless they are joined with an unfair outcome. Second, unfair discrimination does not establish bias unless it occurs systematically.

Drawing on this definition, the organizers had previously examined existing computer systems for bias. From their examination, three overarching categories emerged -- preexisting bias, technical bias, and emergent bias. In summary, the categories are as follows:

Preexisting Bias

Preexisting bias has its roots in social institutions, practices, and attitudes. When computer technologies embody biases that exist independently of, and usually prior to, the creation of the technology, we say that the technology embodies preexisting bias. Preexisting biases may originate in society at large, in subcultures, and in formal or informal, private or public organizations and institutions. They can also reflect the personal biases of individuals who have significant input into the design of the technology, such as the client or system designer. This type of bias can enter a technology either through the explicit and conscious efforts of individuals or institutions, or implicitly and unconsciously, even in spite of the best of intentions. For example, Huff & Cooper (1987) have shown that software designers sometimes unknowingly design software that is better suited to males than to females.

Technical Bias

In contrast to preexisting bias, technical bias arises from the resolution of issues in the technical design. Sources of technical bias can be found in several aspects of the design process, including limitations of computer tools such as hardware, software, and peripherals; the process of ascribing social meaning to algorithms developed out of context; imperfections in pseudo-random number generation; and the attempt to make human constructs amenable to computers -- when, for example, we quantify the qualitative, make discrete the continuous, or formalize the non-formal. The introductory example of computerized voting systems presents a case in point. Physical features of the interface coupled with the need for extensive written directions created unfair difficulties for under-educated voters.

Emergent Bias

While it is almost always possible to identify preexisting bias and technical bias in a design at the time of creation or implementation, emergent bias arises only in a context of use with real users. This bias typically emerges some time after a design is completed, as a result of a change in societal knowledge, user population, or cultural values. For example, much of the educational software developed in the United States embeds learning activities in a game environment that rewards competitive and individualistic playing strategies. When such software is used by students with a cultural background that eschews competition and instead promotes cooperative endeavors, such students can be placed at a disadvantage in the learning process.

Examples of Bias in Computer Systems

Prior to the workshop, participants collected examples of bias in computer systems. To help communicate the depth and breadth of the problem of bias in computer systems, we describe some of these examples here.

Case 1: Gender Bias in Entertainment Software (John Thomas)

I recently bought a computer adventure game that I thought my daughter might like. Well, of course, the first thing that happens is this: you get to choose which one of three basic adventurers you want to be -- a male thief, a male magician, or a male warrior. Nice choice for a young girl, huh?

Case 2: Graphical User Interfaces (GUIs) and the Visually Impaired (Eric Brok)

As user interfaces have evolved from command-line screens to GUIs, using those interfaces has become increasingly difficult for the visually impaired. Consider first a classical text-based screen. All input can be handled by a standard keyboard, and the text on the screen can be scanned line by line with a Braille line reader, which raises metal pins that can be decoded by touch. In contrast, a GUI requires movement across the screen and makes essential visual feedback part of the input process. For example, imagine trying to click on a file in a window when you do not know where the file is positioned. Multimedia can exacerbate this problem for the visually impaired as more and more information content is expressed through a combination of graphics, animation, and video.

It is worth noting, however, that new technologies can also provide the visually impaired with tools to cope with GUIs and multimedia. For example, current technology can "grab" the screen, apply character recognition to the labeled icons, and read the labels out loud. Alternatively, mouse input devices that use an absolute coordinate system can allow users to position the cursor without visual feedback. Even better -- and a possibility for the future -- a mouse could offer physical resistance when it "touches" an object on the screen.
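
To make the screen-reading idea concrete, here is a minimal present-day sketch of the "grab the screen, recognize labels, read them aloud" approach. It assumes the third-party Python packages Pillow, pytesseract (with the Tesseract engine installed), and pyttsx3 are available; the workshop itself did not prescribe any particular tools, so treat this as an illustration rather than a recommended implementation.

    # Hypothetical sketch: capture the screen, recognize visible text
    # (e.g., icon labels), and speak it aloud. Assumes Pillow, pytesseract,
    # and pyttsx3 are installed.
    from PIL import ImageGrab      # screen capture
    import pytesseract             # optical character recognition
    import pyttsx3                 # text-to-speech

    def speak_screen_text():
        screenshot = ImageGrab.grab()                   # grab the full screen
        text = pytesseract.image_to_string(screenshot)  # OCR the captured image
        labels = [line for line in text.splitlines() if line.strip()]
        engine = pyttsx3.init()
        for label in labels:
            engine.say(label)      # queue each recognized label for speech
        engine.runAndWait()        # speak the queued labels

    if __name__ == "__main__":
        speak_screen_text()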

Case 3: Unfamiliar Metaphor (Eric Brok)

Over-reliance on metaphor in interface design can introduce bias into a system when some users are not familiar with the metaphor. For example, Kumyo Nakakoji (personal communication) has called attention to a problem posed by most word processors. Because most word processing software assumes a 'typewriter' culture, people in countries that do not use typewriters -- and especially those that use character sets other than alphabets -- are disadvantaged by the typewriter metaphor.

Case 4: Display Height, Visual Acuity, and Literacy in Voting Machines (Susan King Roth)

A recent study reported three findings that pertain to bias in voting systems (Roth, 1994). (1) Bias against people of short stature: The ballot on the mechanical lever machine presented text well above eye level for some subjects. This design defect affected individuals at the low end of the height range, who tend to be women -- especially elderly women -- and those from populations that tend to be shorter than the average American. (2) Bias against people with reduced visual acuity: The text that described issues on the ballot was set in small type with no spacing between lines, in overly long lines, or entirely in uppercase letters. Each of these typesetting practices is known to make text difficult to read for people with reduced visual acuity, such as many people over sixty. (3) Bias against people with low literacy: The ballot language was unnecessarily formal and complex (e.g., double negatives and other confusing syntax), which limited access to information for voters with lower English literacy levels.
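
As a rough illustration of the typographic issue in finding (2), character legibility is often discussed in terms of visual angle. The sketch below is our own illustration, not part of Roth's study: it computes the physical character height needed to subtend a given visual angle at a given viewing distance, and the 20 minutes of arc used in the example is a commonly cited comfortable minimum rather than a figure from the study.

    # Illustrative only: physical character height needed to subtend a
    # target visual angle at a given viewing distance (same units in,
    # same units out). Values are assumptions, not data from Roth (1994).
    import math

    def char_height_for_angle(distance, arc_minutes):
        """Height that subtends `arc_minutes` of visual angle at `distance`."""
        angle_radians = math.radians(arc_minutes / 60.0)
        return 2 * distance * math.tan(angle_radians / 2)

    # Example: ballot text read at about 500 mm, aiming for 20 arc minutes.
    print(round(char_height_for_angle(500, 20), 1), "mm")  # roughly 2.9 mm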

Case 5: Computer Science Education (Eric Brok)

Traditionally, computer science evolved from electrical engineering and mathematics. As a result -- and largely unexamined -- electronics and mathematics make up most of computer science curricula. Thus, academic and professional work on computer systems has been biased against people who do not have an intuition for, or an interest in, electronics and mathematics. Historically, men more than women have tended toward these subjects; hence the skewed distribution of the sexes in electronics and mathematics has been carried over into computer science. But is it necessary? Probably not. Modern computer science has broadened its scope considerably. The field now includes database design, information management, human-computer interaction, knowledge technology, and so forth. Insight into the effective use of computer technology and its impact on organizations is often considered more valuable than the ability to build the technology itself. At the same time, it is widely acknowledged that most traditional computer science students lack the social skills needed in practice. It has become clear that electronics and mathematics are no longer the most important subjects in computer science. Thus, the field can be equally open to people with various talents and interests.

Methods for Minimizing Bias in Computer Systems

Much of the workshop was spent discussing methods for minimizing bias in computer systems. A summary of what we discussed and in some cases recommended follows.

Develop an Awareness of Common Biases

An awareness of common biases can be useful in helping to minimize potential bias in system design. We identified several common bases for bias, including: physical characteristics (e.g., handedness, age, height), physical disabilities (e.g., color blindness, visual impairment, hearing impairment, sensory-motor impairment, dyslexia), cultural perspective (e.g., language, gender, profession), and knowledge (e.g., literacy, background knowledge).

Hold Early Public Discussion of Designs

Public discussion early in the design process can highlight potential biases that might otherwise be overlooked in the design. By "public discussion" we mean discussion with individuals who represent stakeholders in the system. For example, the stakeholders in an accounting system for employee salaries would include management, business office workers, and employees who have their records in the system; the stakeholders in a Caller ID system would include management, telephone operators, and typical telephone users.

Explicitly Target Potential Users

Most systems assume -- explicitly or not -- target users in particular contexts. Emergent bias can occur when populations not originally targeted by the design make use of the system in contexts not originally anticipated by the system designers. If the characteristics of the target users and contexts of use are made explicit, then, as users and contexts change, new users can be alerted to potential biases in the system. Designers will also be better positioned to make adjustments in the design.
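
One lightweight way to make such assumptions explicit is to record them alongside the design, as in the hypothetical sketch below. The names TargetProfile and check_user, and the particular fields, are our own illustration and were not proposed at the workshop.

    # Hypothetical sketch: record the assumed target users and contexts of
    # use, then flag mismatches when a new user or context falls outside
    # the original assumptions. Field names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class TargetProfile:
        languages: set = field(default_factory=lambda: {"en"})
        min_literacy_level: int = 8        # assumed reading grade level
        assumes_mouse_use: bool = True
        assumes_visual_display: bool = True
        intended_contexts: set = field(default_factory=lambda: {"office"})

    def check_user(profile, language, literacy_level, uses_mouse, sighted, context):
        """Return warnings where a user or context departs from the assumptions."""
        warnings = []
        if language not in profile.languages:
            warnings.append("language outside assumed set")
        if literacy_level < profile.min_literacy_level:
            warnings.append("literacy below assumed level")
        if profile.assumes_mouse_use and not uses_mouse:
            warnings.append("design assumes mouse use")
        if profile.assumes_visual_display and not sighted:
            warnings.append("design assumes visual feedback")
        if context not in profile.intended_contexts:
            warnings.append("context not anticipated by the design")
        return warnings

    # Example: a blind user encountering the system at a public kiosk.
    print(check_user(TargetProfile(), "en", 10, False, False, "public kiosk"))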

Design for Diversity

As a general strategy, designing with diversity in mind can help to minimize unintentional bias, particularly preexisting bias. Diversity can be brought into the design process in several ways, such as: (1) including individuals with diverse backgrounds in the development team, and (2) including individuals who represent the diversity of stakeholders and potential users in the field-testing groups.

Design for Flexibility

On the surface, greater flexibility seems a desirable design goal as more flexible systems can be tailored to specific users; as users and the contexts of use change, the system can be adapted readily to the characteristics of the new users and contexts. However, several tough questions arise: What features are relevant for flexibility of the sort that is needed here? What methods should be used to determine the default settings for these flexible features? How should increased control over the system be balanced with other desirable interface qualities such as efficiency, simplicity, portability, and ease of use?
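
A minimal sketch of what such flexibility might look like in code appears below: documented, adjustable presentation settings with explicit defaults. The settings and default values are illustrative assumptions, not recommendations from the workshop, and the question of how defaults should actually be chosen remains open.

    # Hypothetical sketch: user-adjustable presentation settings with
    # explicit, documented defaults. The particular defaults are assumed
    # for illustration and would need to be validated with real users.
    from dataclasses import dataclass

    @dataclass
    class DisplaySettings:
        font_size_pt: int = 12          # default assumed for typical acuity
        line_spacing: float = 1.5       # extra leading aids low-vision readers
        high_contrast: bool = False     # can be switched on without relayout
        speech_output: bool = False     # redundant audio channel if needed
        language: str = "en"

    def settings_for(user_preferences):
        """Start from the defaults and apply only the overrides a user asks for."""
        settings = DisplaySettings()
        for name, value in user_preferences.items():
            if hasattr(settings, name):
                setattr(settings, name, value)
        return settings

    # Example: an elderly voter who wants larger, high-contrast text.
    print(settings_for({"font_size_pt": 18, "high_contrast": True}))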

Track Bias Throughout the Design Process

Just as we maintain records of bugs that are identified and remedied in the design process, we can maintain records of biases when they are identified and remedied. In the case of technical bias, particular types of technical decisions may be seen to lead to particular biases in the design. In the case of emergent bias, the biases may not be identified until after an initial release of the system. Thus, we can expect that some biases that are identified in early versions of the system will need to be addressed in later releases.
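
By analogy with a bug-tracking record, a bias record might look something like the sketch below. The record format and field names are our own illustration; the three categories, however, are the preexisting, technical, and emergent biases defined earlier, and the example entry is the educational-software case described above.

    # Hypothetical sketch: track identified biases the way bugs are tracked,
    # noting the category from the framework, who is affected, and the
    # releases in which the bias was found and remedied.
    from dataclasses import dataclass

    CATEGORIES = {"preexisting", "technical", "emergent"}

    @dataclass
    class BiasRecord:
        category: str                # preexisting, technical, or emergent
        affected_group: str          # who is unfairly disadvantaged
        description: str             # how the bias shows up in the system
        found_in_release: str
        fixed_in_release: str = ""   # empty until remedied

        def __post_init__(self):
            if self.category not in CATEGORIES:
                raise ValueError(f"unknown bias category: {self.category}")

    # Example: an emergent bias identified only after the first release.
    record = BiasRecord(
        category="emergent",
        affected_group="students from cultures that promote cooperative learning",
        description="game environment rewards only competitive play strategies",
        found_in_release="1.0",
    )
    print(record)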

Set Standards for Freedom From Bias in Computer Systems

Freedom from bias should become part of the standards against which we judge the quality of systems in use in society. For example, there is a growing trend in the software industry to adopt the International Organization for Standardization's standards on system development (ISO 9000). These standards call for companies (1) to have a defined and documented process for requirements gathering, project planning, and testing (among many other things); (2) to ensure that people follow the defined processes; (3) to measure and improve the processes over time; and (4) to provide necessary training. Because many companies are embracing these standards, there is now a special opportunity to build systematic means to minimize bias into the design process. For example, requirements gathering can be structured to ensure that a wide diversity of users is considered; project planning can explicitly include steps to ensure non-biased access; and testing can be configured to include a diversity of users.
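
One way to fold bias review into such a documented process is sketched below: each phase carries a required bias-review artifact, and a release is held back until every artifact exists. The phase and artifact names are illustrative assumptions, not ISO 9000 requirements.

    # Hypothetical sketch: attach a required bias-review artifact to each
    # documented process phase and check completeness before release.
    # Phase and artifact names are assumptions for illustration only.
    REQUIRED_ARTIFACTS = {
        "requirements": "diversity of users considered and documented",
        "planning":     "steps ensuring non-biased access scheduled",
        "testing":      "test group includes a diversity of users",
    }

    def missing_bias_reviews(completed_phases):
        """Return the phases whose bias-review artifacts are still missing."""
        return [phase for phase in REQUIRED_ARTIFACTS
                if phase not in completed_phases]

    # Example: testing has not yet produced its bias-review artifact.
    print("missing bias reviews:", missing_bias_reviews({"requirements", "planning"}))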

Educate Professionals

Stating what by now must be obvious: system designers, product managers, and management need to become aware of bias in system design as well as to become familiar with methods for minimizing bias.

Extending the Framework on Bias in Computer Systems

There was some discussion about extending the framework on bias. For example, in this workshop Eric Brok proposed one such extension as follows: A distinction can be made between bias in the user model -- how the system appears and behaves (which presupposes attributes of the user) -- and bias in the content model -- what the system is about. Consider again an electronic voting machine. Biases of the sort described above by Susan King Roth in Case 4, pertaining to display height, visual acuity, and complexity of language in a voting machine, would concern the user model. In contrast, a bias in the candidate list due to forces outside the design process would concern the content model. Note, however, that discussion during and following the workshop pointed to some ambiguities in the proposed distinction. Nonetheless, as we gain greater understanding of and experience with bias in computer systems, the movement toward extending the framework presented here seems warranted.

Conclusion

Because biased computer systems are instruments of injustice -- though admittedly, their degree of seriousness can vary considerably -- we believe (and as developed elsewhere, Friedman & Nissenbaum, in press) that freedom from bias should be counted among the select set of criteria according to which the quality of systems in use in society should be judged. As with other criteria, such as reliability, accuracy, and efficiency, freedom from bias should be held out as an ideal toward which those who wish to produce good computer systems should strive. As with these other criteria, the ideal might be difficult if not impossible to assure. Nonetheless, in practice we must actively approach the task of minimizing bias in our designs. Furthermore, as a community we must hold our designs accountable to a reasonable degree of freedom from bias against which negligence can be judged.

References

Dagger, R. (1988, November 7).
Annals of democracy. The New Yorker, pp. 40-46, 51-52, 54, 56, 57-58, 61-68, 97-100, 102-108.
Friedman, B., & Nissenbaum, H. (1993).
Discerning bias in computer systems. In S. Ashlund, K. Mullet, A. Henderson, E. Hollnagel, & T. White (Eds.), INTERCHI '93 adjunct proceedings (pp. 141-142). Amsterdam, The Netherlands.
Friedman, B., & Nissenbaum, H. (in press).
Bias in computer systems. ACM Transactions on Information Systems.
Huff, C., & Cooper, J. (1987).
Sex bias in educational software: The effect of designers' stereotypes on the software they design. Journal of Applied Social Psychology, 17, 519-532.
Roth, S. K. (1994).
The unconsidered ballot: How design effects voting behavior. Visible Language, 28, 48-67.

About the Authors

Batya Friedman is Clare Boothe Luce Assistant Professor of Computer Science at Colby College. She received her Ph.D. from the University of California at Berkeley in 1988. Her research interests include the design of computer systems to support human-human interaction, social computing, and the human relationship to technology. Batya is currently editing a book titled Designing Computers for People: Human Values and the Design of Computer Technology.

Eric Brok is on the faculty of the Open University of the Netherlands. His training is in cognitive psychology. Eric is currently conducting research in authoring systems for hypermedia and developing a course on multimedia.

Susan King Roth is Associate Professor in the Department of Industrial Design at The Ohio State University and Adjunct Faculty at the Advanced Computing Center for Arts and Design at OSU. Susan is Consulting Editor of the Journal of Visual Literacy, and Co-director of the Center for Interdisciplinary Studies in Art and Design.

John Thomas is Executive Director, Human Computer Interaction at NYNEX Science and Technology. He received his Ph.D. in Experimental Psychology from the University of Michigan. John has been involved in CHI research and development since 1973. He spent 13 years at IBM, mostly at the T. J. Watson Research Center with a two year stint at corporate headquarters trying to convince IBM to pay more attention to usability. John moved to NYNEX in 1986 to start the Artificial Intelligence Lab. In addition to research on HCI, he has done work in machine vision, speech recognition, expert systems, speech synthesis, and neural networks.

Please address all correspondence to: Batya Friedman, Clare Boothe Luce Assistant Professor of Computer Science, Colby College, Waterville, ME 04901, USA. E-mail: b_friedm@colby.edu.


Footnotes

(1)
Adapted from Friedman & Nissenbaum (1993, in press).
