At CHI '95 a workshop was held on the formal specification of user interfaces. Eighteen people from the United States, Canada, Europe, and New Zealand attended, selected on the basis of submitted position papers. Participants had a wide range of backgrounds spanning academia, industry, and government. This report summarizes the motivation behind the workshop, the interests and research areas of the participants, and the issues that arose in discussions.
As user interfaces become more complex, it becomes more important to communicate the look and functionality of a proposed interface among interface designers, developers, users, and customers. This communication is done through a specification. Interface designers often use informal or ad hoc techniques for defining the interface; the resulting specifications are incomplete and/or ambiguous, causing developers, users, and customers to interpret them differently.
Specifications for user interfaces are often given through prototypes or other non-formal specification techniques. Using a prototype to specify a user interface to a programmer is much like an architect giving a contractor a model of a building instead of blueprints. The results would be unpredictable, and it is unlikely the customers would get what they thought they were getting.
Without a complete, thorough, and understandable specification technique, misunderstandings will develop and incomplete or invalid user interfaces will result. No matter how usable an interface design is, if the design is not well communicated to the developer, the resulting interface may be unusable. A well-specified user interface can also help designers discover inconsistencies and usability problems before the interface is developed, thus saving time and money.
The goals of the workshop were to bring together researchers and practitioners to discuss:
As stated, workshop participants had a wide range of backgrounds, including human factors, mathematics, and software engineering. The following is a list of the participants and a brief summary of their work in the field of specification of user interfaces:
Ramesh Achuthan (Concordia University, Canada) has proposed a framework for the specification and verification of user interfaces based on a three-tiered object-oriented formal model called the Time-Reactive Object Model (TROM) [1]. TROM addresses encapsulation, message passing, modularity, timed constraints, validation, and verification. The top tier of the model consists of user interface configuration specifications, which describe a user interface architecture by succinctly specifying the interaction relationships between the objects in the user interface. The middle tier gives the detailed specification of the objects used in the upper tier by means of class definitions in TROM. The lowest tier specifies the data abstractions of the middle tier using first-order logic. In this way, the design specification framework not only provides an architectural specification of the system but also forms a means for formally specifying the detailed design of the system components.
Mark Apperley (University of Waikato, New Zealand) is interested in describing, capturing, and communicating designs for user interfaces. He has developed a specification/description/analysis technique for menu-based interfaces called Lean Cuisine [2]. This work has led Mark to the view that a successful specification technique will be based on an "evolving" or multi-faceted notation, with facilities for capturing early design concepts, refining these into a more formal specification, and ultimately leading to direct implementation. Such a technique, with relatively seamless progress through the design cycle, could form the basis of an interactive interface builder's workbench. He is currently researching ways in which Lean Cuisine might be combined with other existing approaches (e.g., event response systems and statecharts) to achieve these goals.
Aline Baeck (Northern Telecom, Inc., USA) has found that one of the most challenging phases in designing a GUI is communicating the design to others. She has used two methods for specifications: formal written specifications and prototypes.
She has found both to have comparative advantages and disadvantages. A written specification is more detailed and complete. It can also reflect many levels of a design, such as guidelines, high-level design, and detailed design. A prototype is better than a written specification at showing the flow of a user interface, but it is often inaccurate, in that a complete simulation of the design usually cannot be provided.
The biggest challenges she found were in detailed design documents, which must communicate every level of the design and be an accurate representation of the final product. Yet these documents are often misinterpreted in executing the design. Some root causes for misinterpretation are the diverse audience for a specification, the inaccuracy of window illustrations, and the software interactions, especially for complex systems.
Vivienne Begg (Cadence Design Systems Inc., USA) has found several obstacles to formal specification of user interfaces. The first is that the customer/user is often involved in the specification process and needs commonly understandable descriptions, which is why she uses prototypes. Another is the involvement of multidisciplinary teams whose members come from a wide range of backgrounds. A third is that designs are often not done from scratch but are redesigns, which makes a formal description more difficult since the product is already implemented.
One technique she has found for defining formal specifications while keeping them flexible and adaptable to change is the use of design templates. Design templates are partly visual but mostly natural language. One model she has worked with has templates for the following elements: 1) a Task Analysis document, 2) a User Model document, 3) a paper prototype, 4) a code prototype, and 5) a functional specification.
Roxanne Bradley (Hewlett-Packard, USA) has found that a successful specification technique is to keep the writing to a minimum during the early phases of design. This way engineers focus on the users and their tasks, and are more willing to make changes to the design. To do this they use either a screen capture or a pencil sketch of the proposed screen, with notes about what will happen when a button is pushed, etc. When the interface is more fully defined, they then fully document each screen and the activities that accompany it. They also capture ongoing issues or decisions that need to be made about each screen or area of the interface. Though they do not document the internal design needed to implement the interface, they do capture any issues or ideas about the implementation so these are not lost.
Peter Bumbulis (University of Waterloo, Canada) is investigating an approach to combining formal techniques with prototyping for user interface construction. In this approach he uses a single specification for constructing both prototypes for experimentation and models for formal reasoning.
In traditional approaches to formal specification, an executable formal notation is used to express user interface designs. Typically, these notations are based on existing (usually concurrent) formal notations to take advantage of current tools and methodologies. Prototypes are expressed directly as specifications in the formal notation; these specifications are used as a basis for both experimentation and formal reasoning. His framework provides the user interface designer with a set of primitive components and a dataflow-based formalism for connecting them. User interfaces are described as directed graphs in which nodes represent components and arcs represent the flow of data between them [3].
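To make the flavor of this approach concrete, here is a minimal sketch in Python (not Bumbulis's actual notation; the component names and behaviors are invented for illustration): a user interface is described as a directed graph whose nodes are primitive components and whose arcs carry data between them.

```python
# Hypothetical sketch (not Bumbulis's notation): a user interface described as a
# directed dataflow graph. Nodes are primitive components; arcs carry values
# from a component's output to downstream components' inputs.

class Component:
    def __init__(self, name, transform=lambda value: value):
        self.name = name
        self.transform = transform      # how this component reacts to incoming data
        self.outputs = []               # downstream components (the arcs)

    def connect(self, downstream):
        self.outputs.append(downstream)
        return downstream

    def send(self, value):
        result = self.transform(value)
        print(f"{self.name} -> {result}")
        for node in self.outputs:
            node.send(result)

# A slider whose value flows to a label and to a gauge.
slider = Component("slider")
label = Component("label", transform=lambda v: f"value = {v}")
gauge = Component("gauge", transform=lambda v: "#" * int(v))

slider.connect(label)
slider.connect(gauge)

slider.send(5)
```

In the approach described above, a single such description would serve both as an executable prototype for experimentation and as a model for formal reasoning.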
Luiza M. F. Carneiro-Coffin (Oakland University, USA) is working on a state-machine-based user-interface specification notation called JASMINUM (Joining ADVs and State Machines in a Notation for User-Interface Modeling) [4], a new visual formalism for designing interactive systems. JASMINUM is state-machine and event oriented and is based on earlier research on visual formalisms (Statecharts and Objectcharts). JASMINUM has a number of advantages over other visual formalisms for specifying interactive systems, since it handles aspects of design specific to such systems, including focus of control, the specification of concurrent operation of the components of a user interface, and the relationships among the various user-interface components. Because of these advantages, JASMINUM supports a user-centered design approach to interactive systems in that the appearance of the design strongly resembles the appearance of the user interface.
David Carr (University of Luleå, Sweden) noted that each specification technique addresses only part of the user interface problem and that it is unlikely that one language will meet all the goals of a specification. He believes that the "Holy Grail" of user interface specifications is a language that will go from a user task description to an implementation of a user interface. The language should be powerful enough to generate much of the interface from defaults, yet flexible enough to allow specification of new widgets [5]. The language must also guarantee that a set of usability guidelines is followed.
He believes that a single language is unlikely to be sufficient; what is needed is a set of integrated specification languages. These languages would have to address four levels: 1) the user task description, 2) the user/widget dialog description, 3) the interface layout, and 4) new widget descriptions.
Berardina De Carolis (Università di Bari, Italy) is working on modeling the behavior of user-adapted interfaces. This requires a method that can specify what the system does in the different contexts of user actions, taking into account the parameters of adaptivity. In formally specifying a user-adapted multimedia interface it is important to take into account:
To model the interaction in multimedia adaptive systems, she has extended a formalism to perform a pre-empirical evaluation of an adaptive user interface by coloring Petri Nets (PNs) and by introducing adaptable mappings between Colored PN components and visual objects of the interface [6]. The proposed model combines the Colored Nested PN formalism with time constraints and with a characterization of places and transitions that depends on the interaction style adopted.
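As a rough illustration of the Petri-net machinery involved (this is a much-simplified sketch, not the Colored Nested PN formalism with time constraints described above, and the example net is invented), colored tokens let the same logical transition behave differently depending on, say, the user's experience level:

```python
# Simplified sketch of a coloured Petri net: places hold multisets of coloured
# tokens, and a transition fires only if a token satisfying its guard is
# available, consuming it and producing a token in the output place.
# The net below (a help request adapted to the user level) is illustrative only.

from collections import Counter

places = {
    "waiting":    Counter({"novice": 1, "expert": 1}),   # token colour = user level
    "help_shown": Counter(),
}

def fire(transition, guard, source, target, places):
    """Fire `transition` on one token from `source` that satisfies `guard`."""
    for colour, count in places[source].items():
        if count > 0 and guard(colour):
            places[source][colour] -= 1
            places[target][colour] += 1
            print(f"{transition}: moved '{colour}' token {source} -> {target}")
            return True
    return False

# An adaptable mapping: the same logical step, but the guard (and, in a fuller
# model, the visual object bound to it) depends on the token colour.
fire("show_tutorial",  lambda c: c == "novice", "waiting", "help_shown", places)
fire("show_reference", lambda c: c == "expert", "waiting", "help_shown", places)
```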
Bob Fields (University of York, UK) is working on design methods for interactive systems, with an emphasis on safety critical application areas such as aircraft instrumentation. His work has two main facets: using formal specification techniques to represent tasks, designs and requirements for interfaces and using models of human error during the development process. The use of formal specification not only supports the expression of designs and development of designs from requirements, but also helps to form a bridge between psychological theories of human error and software and systems engineering practice. Both the use of formalism and the error analysis techniques are discussed in more detail in [7].
Joan S. Goddard (IBM, USA) has used a specification methodology derived from a Cleanroom Software Engineering model. The basis of the Cleanroom approach is similar to that of its hardware namesake: prevention of errors before they enter the product. The fundamental components of the Cleanroom process are the following:
Compared to traditional software development processes, a Cleanroom process puts more emphasis on specification and design, and less emphasis on testing, resulting in highly reliable products without loss of productivity during development.
Mike Haverdink's (Iowa State University, USA) research has focused on the development of SPECS-C++, a formal, model-based, interface specification language for C++ classes. SPECS-C++ models classes via discrete structures (e.g., sets, sequences, and tuples) as well as several primitive types (e.g., integer). Class member and friend functions are specified via pre- and postcondition assertions. These assertions are written in first-order predicate calculus over the abstract class instance model. As such, SPECS-C++ is related to other formal, model-based specification languages such as the Vienna Development Method (VDM), Z, and the Larch family of specification languages.
Recently he has been working on applying a technique for the direct execution of assertions to SPECS-C++ specifications, with the goal of producing a prototyping specification compiler. While it is not possible to execute arbitrary assertions, this technique does make a large and useful subset of assertions executable. The compiler will produce linkable object code files directly from SPECS-C++. It represents a big step toward making formal specifications more cost-effective, since formal specifications can be tested in much the same manner that implementations are tested today.
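The following Python sketch illustrates the general idea of executable pre- and postcondition assertions over an abstract model. It is only an analogy: SPECS-C++ assertions are written in first-order predicate calculus over C++ class models, and the bounded-list example and names here are invented.

```python
# Hypothetical sketch of executable pre/postcondition assertions, in the spirit
# of (but not the syntax of) model-based specification languages: each operation
# on an abstract model is checked against its contract at run time.

def contract(pre, post):
    """Wrap a function so its pre- and postconditions are executed as tests."""
    def decorate(func):
        def wrapper(state, *args):
            assert pre(state, *args), f"precondition of {func.__name__} violated"
            old = dict(state)                     # snapshot of the abstract model
            result = func(state, *args)
            assert post(old, state, *args), f"postcondition of {func.__name__} violated"
            return result
        return wrapper
    return decorate

# Abstract model of a bounded selection list: a set of items plus a capacity.
@contract(
    pre=lambda s, item: len(s["items"]) < s["capacity"] and item not in s["items"],
    post=lambda old, new, item: new["items"] == old["items"] | {item},
)
def add_item(state, item):
    state["items"] = state["items"] | {item}

model = {"items": set(), "capacity": 2}
add_item(model, "File")
add_item(model, "Edit")
# add_item(model, "View")  # would raise AssertionError: capacity exceeded
```

Testing a specification this way, before any implementation exists, is what makes the executable-assertion approach attractive from a cost standpoint.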
Robert Jacob (Tufts University, USA) is working on the specification of "Non-WIMP" user interfaces. Non-WIMP user interfaces provide "non-command", parallel, continuous, multi-mode interaction, in contrast to current GUI or WIMP (Window, Icon, Menu, Pointer) style interfaces [8]. This interaction style can be seen most clearly in virtual reality interfaces, but its fundamental characteristics are common to a more general class of emerging user-computer environments, including new types of games, musical accompaniment systems, intelligent agent interfaces, interactive entertainment media, pen-based interfaces, eye movement-based interfaces, and ubiquitous computing. These share a higher degree of interactivity than previous interfaces: continuous input/output exchanges occurring in parallel, rather than one single-threaded dialogue.
Most current WIMP user interfaces are inherently serial, turn-taking ("ping-pong style") dialogues with a single input/output stream. Even where there are several devices, the input is treated conceptually as a single multiplexed stream, and interaction proceeds in half-duplex, alternating between user and computer. Users do not, for example, meaningfully move a mouse while typing characters; they do one at a time. Non-WIMP interfaces are instead characterized by continuous interaction between user and computer via several parallel, asynchronous channels or devices.
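As a rough illustration of the contrast (this is an invented Python/asyncio sketch, not Jacob's specification technique, and the channel names are made up), the coroutines below model two continuous input channels that update shared state in parallel while output is produced concurrently, rather than a single turn-taking event stream:

```python
# Illustrative sketch only: two continuous input channels handled in parallel
# with asyncio, in contrast to a single multiplexed, turn-taking event stream.
# Each channel keeps updating shared state on its own while output continues.

import asyncio

state = {"hand": 0.0, "gaze": 0.0}

async def track(channel, step, period, ticks):
    """Simulate one continuous input device updating shared state."""
    for _ in range(ticks):
        state[channel] += step
        await asyncio.sleep(period)

async def render(ticks, period):
    """The interface responds continuously to the combined state."""
    for _ in range(ticks):
        print(f"hand={state['hand']:.1f}  gaze={state['gaze']:.1f}")
        await asyncio.sleep(period)

async def main():
    # The three coroutines run concurrently: neither input waits for the other,
    # and output is produced while input continues to arrive.
    await asyncio.gather(
        track("hand", 0.1, 0.01, 10),
        track("gaze", 0.3, 0.02, 5),
        render(5, 0.02),
    )

asyncio.run(main())
```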
Timo Jokela (VTT Electronics, Finland) has been working on a technique for modeling specifications of user interfaces. The basic idea that Jokela and colleagues pursue is to model the user's viewpoint of a system primarily through state behavior. In other words, instead of focusing on modeling data, objects, or the like, they model the states of the system (from the user's viewpoint) and the transitions between those states. As a modeling notation, they use statecharts.
One basic rule in their modeling is a complete separation of the logical behavior of the application from what is displayed. They start by modeling the logical behavior without any concern for displays or audible signals, which keeps the size of the models small.
They try to identify and model the main states and sub-states of the application at the logical level. They name the states (which is typically quite easy because "natural" names can be found intuitively) and define the transitions between the states (i.e., from which state to which other state a user can go). Transitions from one state to another are caused by externally generated events (typically user commands), and functions (manipulation of data) are performed during transitions. However, they do not explicitly model the events and functions at an early stage; they leave transitions "unlabeled".
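A toy Python sketch of this modeling style might look as follows (real statecharts add hierarchy and concurrency, and the phone-like application and state names here are invented for illustration). Note that the transitions carry no event or function labels yet, and nothing about displays appears in the model:

```python
# Toy sketch of the modelling style described above: logical states and the
# transitions between them are captured first; events, functions and display
# details come later.

states = {"Idle", "Dialling", "Connected"}

# Transitions are initially 'unlabeled': only which state leads to which.
transitions = {
    ("Idle", "Dialling"),
    ("Dialling", "Connected"),
    ("Dialling", "Idle"),
    ("Connected", "Idle"),
}

def can_go(current, target):
    """Check whether the model allows moving from one logical state to another."""
    return (current, target) in transitions

current = "Idle"
for target in ["Dialling", "Connected", "Dialling"]:
    if can_go(current, target):
        print(f"{current} -> {target}")
        current = target
    else:
        print(f"{current} -> {target} is not allowed by the model")
```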
The key point of Srdjan Kovacevic's (US WEST Advanced Technologies, USA) position statement is that model-based user interface design is a strategy for incorporating formal specifications into UI development [9]. Model-based design uses an equivalent of a formal specification -- an explicit representation capturing all relevant aspects of UI design -- in all the design activities it encompasses. This representation/specification is central to model-based design and is created and maintained as an integral part of the design and development process. The capabilities of a model-based tool are limited by its underlying representation and can benefit from improved specifications.
Model-based user interface design refers to a paradigm in which a developer of an interactive system, instead of writing a large monolithic procedural program, defines a declarative model of facts and a much smaller procedural program. The model captures knowledge about the application semantics and other knowledge needed to specify the appearance and behavior of an interactive system. In that sense, the model-based paradigm can be viewed as a way of making the design decisions, concerning the application functionality and its look and feel, explicit.
Laurence Nigay (IMAG, Grenoble, France) is working on the formal specification of multimodal user interfaces [10]. Laurence and colleagues propose a notation as a simple way of specifying and assessing aspects of multimodal interaction. This notation describes the Complementarity, Assignment, Redundancy, and Equivalence (CARE) properties that may hold between the interaction techniques in a multimodal user interface.
Equivalence expresses the availability of choice between multiple modalities while assignment designates the absence of choice. Redundancy and complementarity go one step further by considering the combined use of multiple modalities under temporal constraints. Redundancy is defined by the existence of redundant information specified along different modalities and complementarity is characterized by cross modality references. The formal expression of the CARE properties relies on the notions of state, goal, modality, and temporal relationships, which are the crucial concepts to formally define multimodality.
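A much-simplified sketch of this classification might look as follows in Python. The formal CARE definitions rely on states, goals, modalities, and temporal windows, which are only hinted at here, and the example goal and modality names are invented:

```python
# Much-simplified sketch of the CARE classification (the formal definitions add
# states, goals and temporal windows). Given which modalities can individually
# reach a goal, and which were actually used together within the temporal
# window, classify the situation.

def classify(capable, used, same_information):
    """capable: modalities that can each reach the goal alone;
       used: modalities actually employed within the temporal window."""
    if len(capable) == 1:
        return "assignment"        # no choice of modality
    if len(used) == 1:
        return "equivalence"       # a choice existed; any one modality suffices
    if same_information:
        return "redundancy"        # same content conveyed on several modalities
    return "complementarity"       # each modality carries part of the content

# "Delete this file": speech alone or pointing alone could do it, but here
# speech says "delete this" while the mouse designates "this file".
print(classify(capable={"speech", "mouse"},
               used={"speech", "mouse"},
               same_information=False))        # -> complementarity
```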
Philippe Palanque (LIS-IHM, University of Toulouse, France) is doing research on an HCI development process that is flexible enough to keep the user in the specification process but that also provides a formal specification which can be used for analysis and verification [11]. Philippe and colleagues propose to accomplish this by using a prototyping tool that allows end users to be involved in the design and specification process, and then abstracting a formal specification from the description language in which the prototype was developed.
The formal specification can then be analyzed to determine shortcomings in the prototype with respect to properties such as ergonomic properties, reactiveness, absence of deadlocks, and conformity. In addition, aspects of the interface can be verified and tests can be generated. This method of abstracting out the specification allows more focus on the dialog properties of a user interface.
Fabio Paternò (CNUCE -- C.N.R., Pisa, Italy) discussed the application of automatic tools to the process-oriented formal specification of interactive systems in order to design them and automatically evaluate their behavior [12]. The idea he and his colleagues pursue is to use formal notations to systematically analyze the dynamic behavior of interactive systems at the different abstraction levels of the refinement process.
Fabio also gave various motivations for using formal notations. These include:
He also gave examples of systems that were specified using English (e.g., GKS and PHIGS). The English specifications led to different interpretations and misunderstandings, so that different implementations showed different behaviors for some aspects, which undermined the purpose of the standards. He also emphasized the need to use automatic tools to get better results and to guarantee that specifications are obtained through the correct application of formal notations.
Christopher Rouff (NASA Goddard Space Flight Center, USA) developed a user interface specification technique called Interface Representation Graphs (IRGs) [13]. IRGs are executable specifications for representing the layout, dynamic behavior, constraints, and data flow to and from the underlying application domain. IRGs are based on statecharts, with additional features for representing constraints, layout, and data flow. The hierarchical structure of statecharts lends itself to the natural nesting and grouping of components of a user interface. With the basic features of statecharts and the added features that make up IRGs, an executable specification of a user interface is possible.
A user interface management system (UIMS) is currently being developed that uses the IRGs for defining the layout and dynamics. To develop a user interface, a designer interactively defines the layout, sequencing, constraints, and application/interface data exchange which can then be executed by an IRG engine.
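The following Python fragment is a hypothetical sketch (not the IRG notation or the UIMS itself; the state, event, and function names are invented) of what executing such a specification involves: an engine steps through states, enforces a constraint, and exchanges data with the underlying application when a transition fires.

```python
# Hypothetical sketch of executing a specification that combines sequencing,
# a constraint, and data flow to the application: the "engine" steps through
# states, checks the constraint, and calls application code when a transition fires.

def application_search(query):          # stand-in for the underlying application
    return [f"result for '{query}'"]

spec = {
    "initial": "Editing",
    # transition: (state, event) -> (next state, constraint, action)
    "transitions": {
        ("Editing", "search"): (
            "Searching",
            lambda ctx: len(ctx["query"]) > 0,          # constraint: non-empty query
            lambda ctx: ctx.update(results=application_search(ctx["query"])),
        ),
        ("Searching", "done"): ("Editing", lambda ctx: True, lambda ctx: None),
    },
}

def run(spec, events, ctx):
    state = spec["initial"]
    for event in events:
        default = (state, lambda c: False, None)
        nxt, constraint, action = spec["transitions"].get((state, event), default)
        if constraint(ctx):
            action(ctx)
            print(f"{state} --{event}--> {nxt}")
            state = nxt
        else:
            print(f"{state}: '{event}' blocked by constraint or undefined")
    return ctx

run(spec, ["search", "done"], {"query": "statecharts"})
```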
During the presentation of position papers many issues and observations concerning specification of user interfaces surfaced. Some are potential research areas and others are practical information that user interface developers should keep in mind during specification. The following is a list of these issues and observations.
After each participant discussed his or her work in user interface specification, we broke into groups to discuss some of the issues that had surfaced during the presentations. Many of the discussions raised more questions than they answered. The following are the results of the group discussions on the above issues.
During the initial presentations, it quickly became evident that the term "formal specification" meant different things to different groups of people. Though there was never agreement as to what made a specification formal, the following traits were supported by different participants:
The mathematicians maintained that a specification was not formal unless it supported reasoning and prediction, while the human factors group maintained that it merely had to be communicable. The software engineers fell somewhere in the middle: they generally agreed that a specification needs to be precise and unambiguous and to support analysis.
There was complete agreement that a formal specification needed to be unambiguous and communicable. There were differences of opinion as to whether a specification needed to be written down, as long as it was communicable.
The basic goals of user interface specifications are the same as for other types of software:
One of the additional challenges in specifying user interfaces is the almost constantly changing nature of the specification, along with the need to communicate the specification to different parties who need different levels of detail.
On the question of which specification technique should be used, the discussion concluded that the appropriate technique depends on the audience and on the task that the specification is to accomplish. It is highly unlikely that a mathematical specification would be understandable by an end user who is signing off on a system, and it would be very difficult to analyze a user interface for completeness, dead ends, etc. using a textual description.
Through discussion we came to the conclusion that it makes sense to specify each part of a user interface in the specification technique most appropriate for the function of the specification. That is, if a user interface for a flight system needs to be checked for completeness, a mathematical specification makes more sense. If the purpose of the specification is to get input from the end customer/user, then a prototype or a text/graphic description makes more sense. This implies that parts of a user interface may be specified in more than one specification technique. A challenge here is making sure that when one specification is changed, the others (where needed) are updated. Otherwise a verification of a life-critical system may be made against an old design, resulting in an unverified user interface.
Another point brought up in this discussion is that the lower the level of a specification (e.g., mathematical, state based, or Petri net), the smaller the audience will be (see Figure 1), and the higher the level (e.g., a prototype or written document), the larger the audience will be. In contrast, the higher the level of the specification, the less analysis can be done (see Figure 2). It is also worth noting that different types of descriptions work better at different levels of detail. For example, natural language can be difficult to analyze due to its free form, and no matter how precisely it is stated it can often be interpreted in different ways. On the other hand, natural language has the advantage that it can be understood by a less technical audience.
Figure 1: Size of audience versus detail of a specification
Figure 2: Amount of analysis possible based on the detail of a specification
Several people in the group were working on executable mathematical specifications (which produce a prototype) to make them more understandable to a wider audience and to keep the specification consistent with the prototype.
Many of the participants thought an interesting study would be to run a usability test on various specification techniques. The techniques could be checked for whether they are precise and unambiguous, whether they support reasoning and prediction, and how well they communicate the requirements to their intended audience.
Next generation user interfaces include non-WIMP user interfaces, such as virtual reality. Some issues related to specification of next generation user interfaces include how to specify:
It was noted that the introduction of standards for describing certain interactions would help reduce the complexity of specifications of next-generation user interfaces (and of current specifications).
The dimensions of a specification can be broken down into the following:
The generality dimension relates back to the other three dimensions.
If there is a holy grail of user interface specification, it is to let users describe what they like or do not like in a language they understand, and to have a working interface produced from that description.
Several participants are working in different areas toward accomplishing this. Some are working on making specifications more understandable to users (human factors), others on producing a prototype or operational system from a specification (software engineers), and others on making sure that a given specification will actually work (mathematicians).
A key to achieving this "holy grail" would be to bridge the gap between the various specification techniques that different groups of designers are using. This would entail linking the various specification techniques together so that when one is changed the others are automatically updated, and so that a single executable prototype can be produced from the different specifications.
One of the key benefits of this workshop was bringing together researchers and practitioners with different backgrounds in specifying user interfaces. This allowed each to hear the unique needs of the other.
An important point brought up in the workshop was that different specifications, or different views of a specification, are needed for different people. End users do not understand mathematical specifications, while natural language specifications cannot be analyzed sufficiently to determine problem areas, to prove correctness, or to generate a prototype. The ability to bridge the gap between different specifications and to keep them consistent in a changing environment appears to be a unique and key need in specifying user interfaces.
A second workshop, titled "Formal Methods in Computer Human Interaction: Comparison, Benefits, Open Questions," was held at CHI '96. More information about this workshop can be found at http://www.cenatls.cena.dgac.fr/~palanque/wschi96.html
Christopher Rouff is a computer engineer at NASA Goddard Space Flight Center. His research interests include specification of user interfaces and user interface management systems. e-mail: Chris.Rouff@gsfc.nasa.gov