SIGCHI Bulletin
Vol.30 No.4, October 1998

Persuasive Computing

A CHI 98 Special Interest Group

BJ Fogg, Daniel Berdichevsky, and Jason Tester

Overview

At CHI 98, just over 40 people attended a special interest group meeting on persuasive computing ("captology"). About half the participants came from industry and half from academia. Despite the early hour, the meeting proved a useful 90 minutes for learning, sharing, and networking with others interested in persuasive computing.

Introduction to Persuasive Computing

After an overview and personal introductions, BJ Fogg summarized the field of captology for participants who were new. By definition, captology is the study of computers as persuasive technologies. In general, captology includes the design, analysis, and theory of computer technologies created to change attitudes and behaviors.

A few SIG participants then described examples of persuasive computing currently on the market. These technologies don't always follow the traditional form factors for computers. Two examples shown were Baby Think It Over (www.btio.com) and the Polar Heart Rate Monitor (www.polar.fi), both non-traditional computing devices designed to change attitudes and behaviors. But persuasive technologies also come in traditional forms: Dole 5 A Day Adventures (www.dole5aday.com) and software from Purple Moon (www.purple-moon.com) are persuasive computing technologies that ship on CD-ROM.

Next, Andy Cargile, a program manager at Lexant Corp., highlighted some of the work he and his colleagues are doing that explores and applies captology. Lexant's mission is to help individuals improve their health through the innovative integration of behavior change techniques and information technology. This includes using persuasive computing to help people quit smoking, improve nutrition, manage stress, and so on.

Break-out Sessions

After the general introduction to persuasive computing, the larger group of participants broke into three smaller groups to discuss specific areas:

Applications Group

Led by Erik Neuenschwander and Andy Cargile, about 15 people discussed potential applications for persuasive computing. They focused on identifying domains where behavior or attitude change would benefit individuals and where captology principles could apply. The group brainstormed a list of domains, which included safety, nutrition, education, Internet security, conservation, and others.

Members of the applications break-out session also found it easy to brainstorm domains from their own work experience, many involving some form of behavior change not previously framed in persuasive terms; examples included conserving company resources and discouraging excessive e-mail.

Neuenschwander and Cargile then selected a single domain for focused discussion: avoiding UV rays. The group brainstormed ways computer technology could persuade people to avoid exposure to the sun's damaging rays, and the concepts seemed to grow more innovative as the session continued. Many of the ideas focused on integrating persuasive technology into existing and convenient form factors.
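
As a purely illustrative sketch -- not one of the SIG's actual concepts, and with the dose limit, prompts, and interface all invented -- a wearable device might accumulate UV exposure and nudge its wearer before that exposure becomes harmful:

    # Hypothetical sketch: a wrist-worn device that nudges its wearer to
    # cover up as cumulative UV exposure grows. The dose limit, prompts,
    # and sensor interface are invented for illustration.

    DAILY_UV_DOSE_LIMIT = 100.0  # hypothetical daily dose, arbitrary units

    class UVCoach:
        def __init__(self, dose_limit=DAILY_UV_DOSE_LIMIT):
            self.dose_limit = dose_limit
            self.dose_today = 0.0

        def record_reading(self, uv_index, minutes):
            """Accumulate exposure; return a persuasive prompt when warranted."""
            self.dose_today += uv_index * minutes
            if self.dose_today >= self.dose_limit:
                return "You've had a lot of sun today -- time for shade."
            if self.dose_today >= 0.8 * self.dose_limit:
                return "Approaching your sun limit; consider sunscreen soon."
            return None  # no intervention needed yet

    coach = UVCoach()
    print(coach.record_reading(uv_index=8.0, minutes=12))  # warns near the limit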

Ethics Group

About 12 people joined the small-group discussion on the ethics of persuasive computing. Led by Daniel Berdichevsky, the group began by reviewing the basic terminology essential to an exploration of ethics and captology. Captology, the group noted, was a field of study, no more or less subject to ethical evaluation than, say, psychology, or the broader field of human-computer interaction. The group decided, instead, that it would examine specific applications of persuasive computing technologies, in hopes of establishing preliminary guidelines that could inform captological design.

Each member of the group offered up one example of a technology which they felt represented a "good" or "bad" application of persuasive principles. "Good" applications included computerized simulations of the spread of AIDS, meant to encourage safer sexual practices. "Bad" applications included several that infringed upon privacy, among them a hypothetical doll that would persuade children to divulge secrets, perhaps by first sharing its own, and then report those secrets to parents. Numerous other examples fell in a nebulous ethical realm -- for instance, bathrooms meant to encourage restaurant employees to maintain sanitary standards by locking doors or sounding alarms if they failed to wash their hands.

The group decided to focus on three broad areas of concern to be evaluated in the ethical analysis of a persuasive technology:

Intent

The group concluded that, as in many instances involving the use and abuse of tools, the part of ethics concerned with assigning blame or responsibility cannot focus on the technology itself -- which is not a moral agent -- but must focus on its designers and those who implement it. Computers, it was agreed, could be social actors, but the group was not convinced that they could be moral actors.

Autonomy

The group felt strongly that all forms of persuasion must be scrutinized to make sure that they do not exert persuasive pressures in a way that unfairly limits or contradicts individual liberty. Participants were, of course, divided on their conceptions of liberty and what they believed amounted to a meaningful limitation of it.

Target Population

What is the situation of those being persuaded? Do they have any role in choosing to be persuaded? Many members of the group were troubled by the notion that children, the elderly, and uninformed adults might be easily swayed by persuasive technologies.

All members of the group concurred that before going forward with this exploration it would be important to delineate how the ethics of persuasive technology differed, if at all, from the ethics of persuasion by more traditional means. In closing, they voiced a serious need for more opportunities to consider this domain before drafting preliminary ethical guidelines, and concluded that any future discussion would require a much more significant time allotment.

Theories and Frameworks Group

About 15 SIG participants joined the break-out session on captology theory. Because persuasion has many facets and a long history, the study of computers as persuasive technologies can benefit from adapting and adopting theories and frameworks from other domains.

Led by Shawn Tseng and BJ Fogg, the group identified 28 domains of inquiry that could shed light on persuasive computing. The list included obvious fields, such as psychology, marketing, and organizational behavior, but it also contained less obvious domains, such as architecture and film theory. The group discussed three areas in more detail: rhetoric, anthropology, and credibility research.

Rhetoric

The classical field of rhetoric provides frameworks for thinking about computers as persuasive technologies. Aristotle outlined ethos, logos, and pathos as avenues to persuasion. As RPI doctoral candidate Emilie Gould explained, ethos is about character -- reputation, credibility, status. Logos is about rationality or logic -- facts, data, logical appeals. Pathos is about emotion -- joy, sadness, humor. The group agreed that all three appeals could have counterparts in computer persuasion.
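
To make that mapping concrete, here is a minimal hypothetical sketch -- the message strings are invented, not examples from the session -- of how a persuasive application might frame one request through each of the three appeals:

    # Hypothetical sketch: framing a single persuasive goal (encouraging
    # sunscreen use) through each of Aristotle's three appeals.
    # The message strings are invented for illustration.

    APPEALS = {
        "ethos":  "Dermatologists recommend daily sunscreen.",       # character/credibility
        "logos":  "UV exposure is a leading cause of skin damage.",  # logic/data
        "pathos": "Picture a beach day with no sunburn afterward.",  # emotion
    }

    def frame_message(appeal):
        """Return the message framed with the chosen rhetorical appeal."""
        if appeal not in APPEALS:
            raise ValueError("unknown appeal: " + appeal)
        return APPEALS[appeal]

    for name in ("ethos", "logos", "pathos"):
        print(name + ": " + frame_message(name))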

Anthropology

Another participant then introduced relevant perspectives from anthropology. One established anthropological approach to gaining insight into people's lives and culture is to examine four things: symbols, heroes, rituals, and values. The group discussed how researchers could better understand people and their culture by seeing how computer technology relates to these four areas (e.g., a ritual: Mary checks her e-mail before she goes to bed each night). These insights may have implications for persuasive computing.
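
A small hypothetical sketch shows how observations of technology use might be coded under this framework; only the four category names come from the discussion, and the field notes (beyond Mary's ritual above) are invented:

    # Hypothetical sketch: coding observations of technology use into the
    # four anthropological categories mentioned in the session.

    from collections import defaultdict

    CATEGORIES = {"symbol", "hero", "ritual", "value"}

    def code_observations(observations):
        """Group (category, note) pairs by anthropological category."""
        coded = defaultdict(list)
        for category, note in observations:
            if category not in CATEGORIES:
                raise ValueError("unknown category: " + category)
            coded[category].append(note)
        return dict(coded)

    field_notes = [
        ("ritual", "Mary checks her e-mail before bed each night"),
        ("symbol", "A home page badge signals membership in a community"),
        ("value",  "Prompt e-mail replies are treated as a mark of respect"),
    ]
    print(code_observations(field_notes))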

Credibility research

The group also discussed aspects of credibility research. Many psychologists assert that credibility comprises two dimensions: trustworthiness and expertise. Trustworthy sources are similar to us and have no apparent bias. Expert sources have extensive knowledge or experience and often hold credentials. These dimensions may translate well into thinking about the credibility of computers or computer-delivered information, especially information on the Web.
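
As a minimal sketch -- assuming a simple additive model with equal weights, which the research itself does not prescribe -- the two dimensions might be combined like this:

    # Minimal sketch of the two-dimensional view of credibility discussed
    # above. The additive form and equal weighting are assumptions made
    # for illustration; psychologists do not prescribe a single formula.

    def credibility(trustworthiness, expertise, w_trust=0.5, w_expert=0.5):
        """Combine the two dimensions into a rough 0-1 credibility estimate."""
        for value in (trustworthiness, expertise):
            if not 0.0 <= value <= 1.0:
                raise ValueError("each dimension must fall in [0, 1]")
        return w_trust * trustworthiness + w_expert * expertise

    # A Web source with strong credentials but apparent bias scores lower
    # on trustworthiness than on expertise:
    print(credibility(trustworthiness=0.3, expertise=0.9))  # 0.6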

Planning the Next Steps

After sharing the high points of each break-out session, we discussed how to continue collaborating over the next year. A discussion list now exists for the entire CHI community (send "subscribe CHIcaptology" to majordomo@stanford.edu; see the example below). In addition, participants were invited to contribute content and suggestions for a web site on captology (www.captology.org). Looking toward CHI 99, SIG participants discussed submitting a suite of papers, as well as proposing a panel on the ethics of persuasive computing.
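
For example, the subscription request is an ordinary e-mail message with the command in the body (majordomo ignores the subject line):

    To: majordomo@stanford.edu
    Subject:

    subscribe CHIcaptology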

About the Authors

BJ Fogg is the director of the Stanford Initiative on Persuasive Technologies. His contact e-mail is bjfogg@stanford.edu.

Daniel Berdichevsky is a graduate student at Stanford, where he teaches bioethics. His contact e-mail is dan@demidec.com.

Jason Tester is a Stanford student and the editor of the Persuasive Technology Newsletter. His contact address is jasonjt@leland.stanford.edu.
