SIGCHI Bulletin
Vol.30 No.2, April 1998

Discourse Architecture

Jed Harris and Austin Henderson

Introduction

This paper is a retrospective on the Discourse Architecture Lab, one of four Labs in Apple's Advanced Technology Group (ATG) in 1996 and early 1997.

The Team

Austin Henderson was the Lab manager. The other members were Dave Curbow, Paul Dourish, Tom Erickson and Jed Harris. Other Apple employees who were active participants included Alan Cypher and Don Norman. We also gained a great deal from our consultants Niklas Damiras, Xin Wei Sha and Brian Cantwell Smith.

The ideas in this paper arose from the creative interaction of all of these participants. The errors and infelicities in this presentation are the responsibility of the authors.

The Discourse Architecture Lab staff left Apple in March 1997, and subsequently formed Pliant Research; information on our current activities can be found at http://www.pliant.org/.

Our Focus

Starting with a diversity of interests and projects, the members of the Discourse Architecture Lab found that we all honored both the potential value of technical design and the very different potentials of human activity. As a result we saw technical systems as integral parts of larger socio-technical practices, and `system development' as an endeavor requiring both technical and social creativity. We were interested both in deeper understanding of technical and social interactions, and in building useful systems.

Working together, we gradually understood that our different projects were all sightings on a single underlying concern -- the mismatch between the formality and rigidity of current computing and the richness and dynamism of the activities in which computing takes an ever-more central role.

We addressed this concern by picking problematic examples and working them out in some detail. These worked examples provided the crucial ground for us to bring our strongly held views to bear on each other, and for us to build a coherent synthesis. In roughly eighteen months of talking and writing, we collaboratively produced a fairly detailed critique of computing as we know it today, several worked examples of a different approach to computing, which we called pliant computing, and some directions to take in developing foundations for such a new approach.

In this paper we lay out, using points drawn from one of these examples, the contrast between the standard (formal, rigid) view, and the pliant view, as they emerged by early 1997. We end with the values that we came to believe must underwrite a pliant design stance, and major challenges that face an attempt to build computing on pliant foundations.

The Standard View

If you have designed a complex technical system you recognize the normal process: You talk with users, you think a lot, and then you decide on what entities the system will `see' in its world, and what operations it will provide to manipulate them. You build them into the system, and that is that. The system has that view, and will continue to have it, unless you, the designer, as part of `maintaining' the system, revisit your decisions and change them.

As a result of this design stance each application, operating system, computer language, database, etc. defines a fixed `ontology' -- a fixed way of breaking the world into objects, and a fixed set of operations to manipulate those objects (nowadays conveniently up there on the menus).

Consider the typical calendar application: it `sees' the world as composed of events, which have a start and stop time, a type, and a description. Shared `meeting scheduling' calendars have joint events with lists of people who are to attend. There are lots more details, but in essence, our calendar technology sees the world in this fixed way.
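In code, a fixed ontology like this amounts to a rigid record type. The following is a minimal sketch in Python; the class and field names are our own illustration, not the schema of any actual calendar product:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Event:
    """The fixed ontology: every event has exactly these facets."""
    start: datetime
    stop: datetime
    kind: str          # the event's `type'
    description: str

@dataclass
class SharedEvent(Event):
    """A `meeting scheduling' joint event adds a list of attendees."""
    attendees: List[str] = field(default_factory=list)

meeting = SharedEvent(
    start=datetime(1997, 3, 6, 15, 0),
    stop=datetime(1997, 3, 6, 16, 0),
    kind="meeting",
    description="Lab review",
    attendees=["Jed", "Austin"],
)
```

Note what the type system enforces: a start time is a single definite instant, never `sometime late afternoon'. Anything outside these fields must be smuggled into the description string.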

Anything outside of the calendar's predefined ontology is not going to be handled by the technology; it is going to be addressed -- if it is addressed at all -- by the users in their creative practices of use. For example, if the starting time of a meeting is uncertain, users might capture this by using `3:00?' in the description. Eventually a convention could grow up that would be meaningful to a group of colleagues, but of course not to the calendaring application.

In creating technical systems we `clean up the world'; we distill enough regularity from the richness of our practices so that we can build a rigidly regular technical system that meshes with these practices. Design of such technical systems corresponds to a normative view of practices: things ought to be clearly specified, and anything that doesn't fit is an `exception to' or an `error with respect to' or simply `outside' the specification.

To achieve the regularity enshrined in our technology, we must eliminate subtlety, ambiguity, and vagueness. We must decide whether things are either the same or different, not somehow in between. We must separate different activities, and any interplay between them which cannot be expressed in a regular fashion must be ignored.

Further, this view is `locked down,' not changeable within the fixed frame of the technical system's language for expressing the structure of the world.

Activities may take on complex meanings for participants (e.g., how late one is for a meeting and how long one stays signals status, interest, value of the activity), but such meanings and their implicit values are not addressed by designers of technical systems. They typically see the technology as `neutral', and values as being the province of the users.

However, the ontology chosen for a technical system does carry with it many values, usually those of the designer. One suspects that the designers of the calendar application feel people should be clear about when meetings will start and stop, and that they should get to meetings on time. And surely people working together should have a single consistent schedule (a single truth; everyone singing off the same page) to remove the opportunities for error and misalignment that come from multiple inconsistent schedules.

Figure 1: The standard view of the structure of interaction between people and machines

Figure 1 suggests the basic collaborative configuration from the perspective of the standard technical design stance: two people, each with a machine, the machines unchanging, and interacting with each other through carefully designed, agreed-in-advance protocols, with fixed ontologies. The technical objects are rigidly agreed upon, the objects and relations clearly identified with no ambiguity or texture so that the machines can communicate precisely. The system is closed: nothing `outside' the people and computers is or ever will be relevant. Whatever changes or complexities appear in the users' practices, the system only responds to the chosen regularities in its ontology. The system is defined -- by fiat -- to be that way. And the world it serves is -- by that very act -- equally defined to be that way.

But how else could it be? Well ...

The Pliant View

In spite of our technical ambition to specify ontologies clearly, the world we live in is often closer to a blooming, buzzing confusion. Furthermore it changes, often in a difficult-to-notice `drifting' way. We reflect this richness and drift in our continued creation and evolution of practices, including the practices needed to work around/with our rigid, intransigent technical systems.

Anyone who has used a calendar application has experienced this sort of mismatch: things in the real lived-in world are much more subtle and complex than the way the calendar sees them, and you have to figure out whether and how to capture that richness despite the calendar's view.

For example: The meeting is sometime late afternoon with the start and stop times not known, and perhaps not knowable until it happens, but it shouldn't take more than half an hour.

Another example: you want to ensure that you get three uninterrupted hours before Thursday to complete that memo. It doesn't matter when, but as the days fill up, the pressure on holding space increases dramatically, with the time becoming better and better defined, but not necessarily ever `locked down'.

Yet another: Jed and Austin feel that, between us, we should cover the meeting, that one of us should go, but not both.

Richness is the normal case. Rigidity and regularity are the exception. Indeed, people have to work hard to create the appearance of regularity. Unfortunately, because current systems are so fragile and deal so poorly with ambiguity, uncertainty, and unanticipated change, people have to make the world regular for them all the time.

How Do People Do It?

People can only make the world regular for their systems because they deal very effectively with all this dynamism and uncertainty.

Instead of assuming that one can stand outside the situation and define how it will be and declare it into existence, people take things as they find them, interpret them as best they can, and proceed as if these interpretations are accurate, until something breaks down.

Instead of assuming that things will run right, people monitor how they are doing, notice what's squeaking, and make changes enough to get done what needs to be done.

Instead of these changes having to be adequate for all future situations of this sort (and what sort is that, exactly?), people are satisfied if they address the immediate problem.

Perhaps most important, instead of assuming that everyone shares a single, coherent perspective, people recognize that each participant has their own take on the world.

People recognize that in collaboration we need not settle on one (compromise, and compromised) truth that everyone can live with, but rather that we need to adjust our multiple truths so that they fit with each other well enough for the current task. Instead of getting everyone to sing off the same page, everyone can have their own page, as long as we can pull together a set of pages which are coherent enough for the concert being given.

As a result, when people deal with the world, nothing is locked down. Instead, all relationships are negotiated until they are `good enough for the purposes at hand', using a rich shared context in which to find common ground, and against which to judge adequacy.

Figure 2: The pliant view of the structure of interaction between people and machines

This leads to Figure 2, a pliant version of Figure 1. The people and computers are embodied and particular. The relationships are flexible and complex. And now the dependence of the system on the world is explicit. All relations are negotiated, not abstractly but in the context of the world on which their multiple truths depend.

Sketch of a Worked Example

How can we adopt these ideas, and still build working systems? Attempts to handle messy situations like this typically lead to `agents', `fuzzy rules', various forms of learning, and so on. The overall tendency is to try to make the calendaring system smart enough to `do the right thing'. Such approaches are still trying to distill the key regularities that describe the world, hoping that if they are sufficiently sophisticated regularities they will do the job. Because the systems are intended to function autonomously, they have to be very sophisticated indeed.

We took a different direction. As one of our worked examples, we thought through a pliant approach to calendaring that can be implemented without any technical breakthroughs. We don't have space here to discuss it in detail but we can sketch some aspects of the user experience.

Our example of pliant calendaring provides a calendar `space' and lets users put various forms of material into it to indicate how they plan to spend their time. Some material is crisp and neat, just like current calendaring ontologies. Other material is soft and blurry, expressing vague boundaries and willingness to change. Material isn't intelligent, but any given variety of material does have tendencies and it is more comfortable with some situations than others. Conversely, we found that it was very important that material doesn't act autonomously and that users can always control it.

Pieces of material on the calendar can interact; some varieties of material may accommodate gracefully to other overlapping material, while other varieties may resist and complain. Furthermore, material on one user's calendar can interact with material that another user is attempting to place on multiple calendars; this supports group calendaring.

Material can express complex preferences. One variety expresses our desire (described above) to keep a block of time free for concentrated work. As other appointments eat up the time, this variety of material resists other encroaching material more and more strongly. Of course, users can always override this resistance, but overriding preferences expressed on someone else's calendar may be considered rude. At a minimum such resistance suggests that you ought to ask them first.
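The `resist more and more strongly' behavior can be made concrete. Here is a hypothetical sketch of the hold-time material's resistance; the formula is purely illustrative (our invention, not part of the original design), chosen only to show resistance growing as free time before the deadline shrinks toward the size of the held block:

```python
from datetime import timedelta

def resistance(needed: timedelta, free_before_deadline: timedelta) -> float:
    """Resistance of `hold time for focused work' material, from 0 to 1.

    0 means `plenty of room, accommodate encroaching material gracefully';
    1 means `this is the last usable slot, resist strongly'.
    Users can always override the resistance; it is a preference, not a rule.
    """
    if free_before_deadline <= needed:
        # No slack left at all: maximum resistance.
        return 1.0
    # Slack, measured in multiples of the needed block.
    slack = (free_before_deadline - needed) / needed
    return 1.0 / (1.0 + slack)
```

With three hours needed, twelve free hours give a gentle resistance of 0.25; once only four free hours remain, resistance has risen to 0.75. The point of the sketch is that the material's `tendencies' are simple computed preferences, not autonomous intelligence.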

The flexibility and richness of material helps, but it only gets us part way toward our goal. A fixed range of varieties of material still imposes a `locked down' ontology on users, albeit one that comes closer to their lived experience. We must go further.

Our next step in this thought experiment was to let users acquire new varieties of material, customize them, and trade them with each other. Over time, we would expect such a system to spawn a whole ecology of calendaring material and associated practices, adapted to the specific needs of the particular users.

With sufficient potential for richness in the varieties of material, and with a social ecology to tune this richness to the needs of a community of users, we felt we had an approach that could potentially overcome the mismatch between human practice and systems technology.

However, this comes at a price. As varieties of material proliferate without central control or a `locked down' framework to make them mesh smoothly, interaction between calendars may become more problematic or even break down. We discuss this type of problem in more detail in the `Challenges' section below.

Based on this and other examples, we believe that pliant systems can be implemented with today's technology. Our primary result so far is a design stance for creating such systems, based on the values described in the next section. In addition, we have identified major issues described in the `Pliant challenges' section below, which will require experimentation and innovation in both the social and technical realms.

Pliant Values

In this section, we describe four broad commitments -- values -- that emerged from our work and that underpin our design stance.

Pervasive Purpose

One of our most basic observations was that the larger context of goals and purposes is always relevant because relationships are always being negotiated, and negotiation is only `good enough for the purposes at hand'. So real systems are never `value free'. You can discover the values by asking `What problem would be important enough to cause renegotiation?'

One way to state the shift we are attempting is that in addition to recognizing that values are always involved, we want to lower the threshold for renegotiation, so that many more values can influence the ongoing shaping of systems.

You Can't Get Out of the River

In our perspective, there is no place outside the process to stand and `design'. A designer no longer has the luxury of `standing on the bank', deciding how to structure the system. Instead, design is done in the process of creation, use, (re)negotiation, incremental restructuring, etc.

Of course the designer still has a valuable role to play, as a guide who has navigation skills and has perhaps been down the river -- or similar rivers -- before. But ultimately the goal is to create a system and a community of users that can sustain each other while in flux.

Systematicity

Faced with the realization that we can't `lock down' a system design, we considered whether to abandon the concept of `system design' altogether. We concluded that systematicity is still good and important, but not to the exclusion of other values. We have to find ways to design systems so that they can retain their integrity (as far as needed) while continuing to evolve. They must honor both the regularities that let people cope with their world, and the differences that enable diversity and growth.

Perhaps a helpful analogy here is the life of a city or large town, which is shaped by millions of separate decisions, but which throughout (if it is healthy) maintains a substantial wholeness, or systematicity, that is essential to the happiness and well-being of its citizens.

Co-production

We often found that when we got stuck in our thinking, it was because we were trying to make some part of a system capable of solving a problem all by itself. Usually, if we said `let the user guide the system' we could find a way to resolve our dilemma. Since no piece of the system dictates the ontology, every piece needs `a little help from its friends', especially its user(s).

This perspective leads to a relationship in which the user can `nudge' the system, and in which the system needs to respond gracefully to `nudges'. Of course, to do this the system has to continuously show the user what it is up to and how far along it is, and the user must always be able both to ask for more or less detail, and to intervene and redirect activities without fear of derailing them.

Pliant Challenges

While we did come up with an approach that led us to some satisfying example designs, we also realized that as we extend this approach, we encounter some truly major open issues. This section summarizes the three most serious such challenges. Luckily, we can choose to avoid these in any given design, but ultimately we will have to deal with them if we are to build fully pliant systems.

Flexible Ontologies

If the relationships between entities in our systems are continuously renegotiated, how can we automate them? We cannot assume that these relationships are described in some higher-level `metalanguage' that solves this problem for us; that just pushes the formality one level higher. We have to be able to deal with genuine, unexpected new shifts and divergences in those relationships.

To pick a very simple example, in pliant calendaring, different people may have different versions of `the same' appointment on their calendars. If one person has a conflict with that appointment, it should affect the others' scheduling preferences. But as the situation changes, the appointments may no longer refer to `the same' meeting, and should no longer influence each other. Worse, a user may split an appointment into two; the relationships between the original, single appointment and other appointments may go with one or the other of the new ones, or may need to be split as well. This sort of problem can arise in usage of a system in myriad ways and at every scale.

Here we believe one of the keys is co-production. Trying to build systems that can autonomously maintain relationships as they evolve wouldn't make sense. Instead, the system and its users must work together; the system must get users' attention when relationships fail and help users to fix them up again or establish new ones. This remains challenging but not hopeless.
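To make this concrete, here is a hypothetical sketch of co-production for appointment identity. All names and mechanisms here are our own illustration: appointments on different calendars are connected by explicit `same-meeting' links, and when an appointment changes in a way the system cannot interpret (for instance, a split), the system does not guess -- it marks the affected links broken and queues them for the user to repair:

```python
class Link:
    """A negotiated `same-meeting' relationship between two appointments."""
    def __init__(self, a_id: str, b_id: str):
        self.ends = {a_id, b_id}
        self.broken = False

class Calendar:
    def __init__(self):
        self.links = []
        self.needs_attention = []  # broken links awaiting the user

    def link(self, a_id: str, b_id: str):
        self.links.append(Link(a_id, b_id))

    def split_appointment(self, appt_id: str, new_ids: tuple):
        # The system cannot know which new piece each link should follow,
        # so it flags every link touching the old appointment; the new ids
        # are left for the user to choose between in repair().
        for link in self.links:
            if appt_id in link.ends:
                link.broken = True
                self.needs_attention.append(link)

    def repair(self, link: Link, old_id: str, chosen_id: str):
        # Co-production: the user decides which new appointment the
        # relationship follows, and the link becomes whole again.
        link.ends = (link.ends - {old_id}) | {chosen_id}
        link.broken = False
        self.needs_attention.remove(link)
```

The system's job is bookkeeping and attention-getting; the interpretive work of deciding what is still `the same' meeting stays with the people involved.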

Automating Negotiation

A more general and difficult problem is getting systems to participate fully in renegotiating the mechanisms and assumptions that underlie their interactions -- their protocols, schemas, ontologies, etc. While quite difficult, this is also absolutely essential.

For example, in calendaring again, individual calendar databases will tend to diverge as people idiosyncratically choose to record more and different information about their activities. Even pre-existing aspects of the databases will be reinterpreted over time. As a result, it will become impossible to simply pass around calendar information; instead such information will need to be re-interpreted in the new context, and if the interpretation doesn't work, the assumptions underlying it may need to be renegotiated.
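A first step toward such re-interpretation can be sketched simply. In this hypothetical fragment (the field names are our own), a receiving calendar adopts the parts of an incoming record it understands, and sets the rest aside -- not discarded, but held as material for renegotiation with the sender:

```python
# Fields this calendar's current ontology understands.
KNOWN_FIELDS = {"start", "stop", "kind", "description"}

def interpret(incoming: dict):
    """Split an incoming record into (understood, needs_negotiation).

    Unknown fields are evidence that the sender's ontology has drifted;
    they are preserved so that people (and, eventually, systems) can
    renegotiate the shared assumptions rather than silently losing data.
    """
    understood = {k: v for k, v in incoming.items() if k in KNOWN_FIELDS}
    unknown = {k: v for k, v in incoming.items() if k not in KNOWN_FIELDS}
    return understood, unknown
```

Proceeding with the understood part `as if the interpretation is accurate, until something breaks down' is exactly the human strategy described earlier, applied to the system itself.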

Another example is the ways network protocols have actually been developed. Of course network protocols are intended to work without deep ontological negotiation. But when a protocol specification is implemented by different teams, ambiguities, omissions and unintended consequences in fact do require negotiation. Since the implementations are not competent to negotiate on their own behalf they rely on their implementors to negotiate for them.

Our approach to this issue relies on co-production, of course. In addition we believe that adequately supporting negotiation will require explicit attention to the evolution of the ontologies in the population of interacting systems as a whole. We expect that considerable experimentation will be required before we understand this area.

Scaling

If every relationship is potentially negotiable at any time, how do we maintain coherence in the system as it has to support large groups and/or work over long time spans? In the worst case, it seems like building large pliant systems could be like trying to build a skyscraper with beams made out of jello.

Current systems try to address this by imposing rigid structure on relationships. Up to a point this can succeed. However, often such systems gradually lose relevance as the needs of their user community change, until they are perceived as imposing an unacceptable cost for little benefit. Equally often, several different systems are initially constructed to work together, and then independently revised to remain relevant. In the process, they typically drift apart and become incompatible. So, neither of these approaches based on getting clear specifications really amounts to a solution.

In contrast, human organizations solve this sort of problem all the time. The solutions are messy, expensive, and sometimes have undesirable side-effects, but they are usually better than rigidity. Organizational continuity and coherence can often be maintained through wrenching changes, sometimes even over centuries.

We need to understand the key mechanisms in this sort of resilience and adapt them to system evolution. Clearly this is a very long term goal, but it also offers great potential benefits. Such systems will help teams, organizations and communities become even more resilient and effective.

Netting It Out

The Discourse Architecture Lab had a short life -- only about 18 months. However, it was a very rich experience, addressing an issue central to the difficulties with computing today, and at the heart of the ease of interaction between users and machines for which Apple is so well known.

As members of ATG's Discourse Architecture Lab, we are particularly appreciative of Apple Computer for enabling us to come together, surrounding us with bright and enthusiastic colleagues, and providing us with the time, place and opportunity to focus deeply, if only for a while, on a fundamental issue in the computing of today and tomorrow.

About the Authors

Jed Harris was a system architect at Apple Computer for ten years. He was a co-architect of OpenDoc and founded and ran CI Labs. He is now a founding member of Pliant Research, and a principal at Ricoh Silicon Valley, a corporate venture arm of Ricoh Company, Ltd. His personal web page is http://www.pliant.org/personal/Jed_Harris/

Austin Henderson has done science and design in Human-Computer Interaction for BBN, Xerox, Fitch, and Apple. He is now a founding member of Pliant Research, and runs Rivendel Consulting offering services in strategic, product and interaction design (http://www.rivcons.com). His personal web page is http://www.pliant.org/personal/Austin_Henderson/.

Authors' Addresses

Jed Harris
Ricoh Silicon Valley
Suite 210
2884 Sand Hill Road
Menlo Park, CA 94025, USA

email: jed@rsv.ricoh.com
Tel: +1-650-496-5739

Austin Henderson
Rivendel Consulting
PO Box 334
La Honda, CA 94020, USA

email: henderson@rivcons.com
Tel: +1-650-747-9201
