SIGCHI Bulletin
Vol.28 No.3, July 1996

HCI in Italy: Reasoning on Gestural Interfaces through Syndetic Modelling

G.P. Faconti

Introduction

Recent advances in user interface development have been driven mainly by technical innovation, based either on new interaction devices and paradigms, or on algorithms for achieving realistic audio/visual effects [Connor92] [Robertson91a] [Robertson91b] [Berners92]. In such a rich environment, the user potentially interacts with the computer through several modalities concurrently. While the technology-driven approach has made it possible to implement systems in specific application areas, it largely lacks an underlying theory. This makes it difficult to assess whether the technology will be effective for users, with the consequence that cognitive ergonomics is becoming an urgent requirement for the design of new interactive systems. Attention has been paid to the psychology of terminal users from the very beginning of human-computer interface research [Martin73]. However, existing established design techniques do not readily accommodate issues such as concurrency and parallelism, or the potential for interaction through multiple interface techniques [Coutaz93].

Recently, work has been undertaken to investigate models and techniques for the analysis and design of interactionally rich systems from a variety of disciplinary perspectives, as can be found in the DSV-IS book series edited by [Paterno94] and [Bastide95]. Formal methods have been one of a number of approaches; others include cognitive user models, design space representations, and software architecture models. Applications of formal methods are well known, see for example [Bowen95] and [Gaudel94]. However, none of the cited applications use formal methods to examine the user interface. One reason is that the factors that influence the design of interactive systems depend mostly on psychological and social properties of cognition and work, rather than on abstract mathematical models of programming semantics. For this reason, claims made through formal methods about the properties of interactive systems must be grounded in some psychological or social theory.

This paper builds on previous work carried out within the ESPRIT Amodeus project and shows how a new approach to human-computer interaction, called syndetic modelling, can be used to gain insight into user-oriented properties of interactive systems. The word syndesis comes from the ancient Greek and means conjunction. It is used to emphasize the key point of this approach: user and system models are described within a common framework that enables one to reason about how cognitive resources are mapped onto the functionality of the system.

Syndetic Modelling and the Role of Interactors

According to [Duke94a] and [Duke95], a syndetic model of an interactive system extends the formal model of the device or interface with a model of the cognitive resources needed for the interaction. The key feature of syndesis is the explicit use of a cognitive model of human information processing as its basis. The system presentation is mapped onto the user's sensors, which receive and transform percepts [Duke93] for processing by the internal cognitive sub-systems. Conversely, user actions, expressed through the user's effectors, are mapped onto the devices for further processing by the system.

The system is described by means of interactors, that is, agents that provide a perceivable presentation of their internal state [Duke94b]. Similarly, the cognitive resources are modelled as an interactor that captures the relevant features of ICS (Interacting Cognitive Subsystems). The key hypothesis is that the structures and principles embodied within ICS can be formulated as an axiomatic model in the same way as any other information processing system.

Interactors have been specified in different ways, for example by [Faconti90] and [Duke93]. Here, the MAL notation [Ryan91] is used, which allows the relationship between the internal system state and its presentation to be described while keeping the specification compact and easily understandable. In MAL, type constructors allow new types to be defined from previous ones, such as the function space (D -> R), the cartesian product (S × S), the finite set (F S) and the sequence (S*). Axioms contain the usual connectives and quantifiers (^ for and, V for or, => for implies, ∃ for exists, ∀ for for-all, ∈ for membership, etc.). For any action A and predicate Q, the modal predicate [A] Q means that Q is required to hold after performing action A. Two deontic operators are used for expressing normative properties: per(A) means that action A is permitted, and obl(A) that it is obliged. In the specification, annotations called percepts, written [| _ |] (for example [|scene|] and [|cursor|]), are used to represent attributes that are perceivable through some human modality.

The next section gives only a simple account of some aspects of ICS; the interested reader may find a more complete description of the model in [Barnard93], [May93] and [Barnard94]. In the rest of the paper, a syndetic model is developed and analyzed with reference to a simple gesture recognition application [Rubine91]. Readers not interested in the formal notation can simply skip the formulas, which are informally explained in the preceding lines of text.

Interacting Cognitive Subsystems

ICS is a comprehensive model of human information processing that describes cognition in terms of a collection of sub-systems that operate on specific mental codes. Although specialized to deal with specific codes, all sub-systems have a common architecture, shown in figure 1.


Figure 1: Common Architecture of ICS Subsystems


Incoming data streams arrive at an input array, from which they are copied into an image record representing an unbounded episodic store of all data received by that sub-system. In parallel with the basic copy process, each sub-system also contains transformation processes that convert incoming data into certain other mental codes. This output is passed through a data network to other sub-systems. If the incoming data stream is incomplete or unstable, a process can augment it by accessing or buffering the data stream via the image record. However, only one transformation in a given processing configuration can be buffered at one moment. Coherent data streams may be blended at the input array of a sub-system, with the result that a process can engage and transform data streams derived from multiple input sources.
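To make this data flow concrete, the following minimal Python sketch models the common sub-system architecture. All names (Subsystem, transforms, receive) are illustrative rather than part of ICS or of the paper's notation; each transformation process is reduced to a function per target code, and the blending of coherent streams is omitted.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Subsystem:
    name: str
    # one recoding process per target mental code, e.g. 'obj': recode to object code
    transforms: Dict[str, Callable[[List[object]], object]]
    image_record: List[object] = field(default_factory=list)  # unbounded episodic store
    buffered: bool = False  # at most one process in a configuration may be buffered

    def receive(self, datum: object) -> Dict[str, object]:
        self.image_record.append(datum)  # the copy process stores every incoming datum
        # a buffered process may augment an incomplete stream via the image record
        stream = self.image_record if self.buffered else [datum]
        return {dst: f(stream) for dst, f in self.transforms.items()}

# e.g. a visual sub-system recoding percepts into object code:
vis = Subsystem("vis", {"obj": lambda xs: ("obj-code", xs[-1])})
print(vis.receive("contour"))  # {'obj': ('obj-code', 'contour')}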

ICS assumes the existence of nine distinct sub-systems, each based on the common architecture described above:

Sensory subsystems
VIS visual: hue, contour etc.
AC acoustic: pitch, rhythm etc.
BS body-state: proprioceptive feedback
Meaning subsystems
PROP propositional: semantic relations
IMPLIC implicational: holistic meaning
Structural subsystems
OBJ object: mental imagery, shapes etc.
MPL morpholexical: words, lexical forms
Effector subsystems
ART articulatory: speech etc.
LIM limb: motion of limbs etc.

The nine subsystems effectively act as communicating processes running in parallel as shown in figure 2.


Figure 2: Overall Architecture of ICS


The overall behaviour of the cognitive system is constrained by the possible transformations and by a number of principles, most of which are beyond the scope of this paper. Although in principle all processes are continuously trying to generate code, only some of them will generate stable output that is relevant to a given task. This collection of processes is called a configuration. The thick lines in figure 2 show the configuration of resources deployed while using a hand-controlled input device to operate on some object within a visual scene. The propositional sub-system (1) is buffering information about the required actions through its image record and using a transformation (written :prop-obj:) to convert propositional information into an object level representation. This is passed over the data network (2), and used to control the hand through the :obj-lim: (3) and :lim-hand: (4) transformations. However, both obj and lim are also receiving information from other sub-systems. The user's view of the rendered scene arriving at the visual sub-system (5) is translated into object code (6) that gives a structural description of the scene; if this is to be blended at the object sub-system with the user's propositional awareness of his hand position (from 2), the two descriptions must be coherent. A propositional representation of the scene is generated by the :obj-prop: transformation and passed to the propositional sub-system (7), where it can be used to make decisions about the actions that are appropriate in the current situation. In parallel with this primary configuration, proprioceptive feedback from the hand (8) is converted by the body-state sub-system into lim code (9) in a secondary configuration.
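As an aside that is reused in the sketches below, the configuration traced by the thick lines of figure 2 can be written down directly as a set of transformations. The following Python fragment is a hypothetical encoding: each :src-dst: process becomes a (src, dst) pair named after the sub-system abbreviations listed above, and the comments match the numbered steps of the figure.

# Primary and secondary configurations of figure 2 as (src, dst) pairs;
# 'hand' stands for the effector driven by the limb sub-system.
GEST_CONF = {
    ("prop", "obj"),  # (1) buffered goal formulation, recoded to object level
    ("obj", "lim"),   # (3) object code drives limb control
    ("lim", "hand"),  # (4) limb code drives the hand
    ("vis", "obj"),   # (6) structural description of the rendered scene
    ("obj", "prop"),  # (7) propositional awareness of the scene
    ("bs", "lim"),    # (9) proprioceptive feedback (secondary configuration)
}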

A Formal Representation of ICS

A simplified model of ICS is directly derived from an understanding of the framework discussed in the previous section. As already said, the key observation underlying syndetic modelling is that structures and principles embodied within ICS can be formulated as an axiomatic model. This means that the cognitive resources of a user can be expressed in the same framework as the behaviour of computer-based interfaces, allowing the models to be integrated directly. To begin this process, we define some sets to represent those concepts of ICS that will be used here.

[sys]
ICS subsystems, e.g. vis, prop, obj, etc.
[repr]
mental representations
tr == sys × sys
transformation processes, e.g. :vis-obj:

Representations consist of basic units of information organized into superordinate structures. Their coherence is captured abstractly in the form of an equivalence relation over representations. In addition, we introduce a further set, code, whose elements are representations that have been labelled by the sub-system in which they were generated.

_ ≈ _ : repr <-> repr
coherence equivalence over representations
code == repr × sys
located representations

In general we write R_sys for the code (R, sys), R* for a code from or to the outside world, and :src-dst: for the transformation (src, dst).

The state of the ICS interactor indicates the source of data for each transformation, and the set of transformations whose output is stable. The codes that are available for processing at a sub-system are identified by a relation @, where c @ s means that code c is available at sub-system s. The attribute buffered indicates which process is being buffered, while config identifies those processes that are active as part of the current processing activity. Finally, coherent contains those groups of transformations whose output in the current state can be blended.

interactor ICS

attributes
source : tr -> F tr
stable : F tr
_ @ _ : code <-> sys
buffered : tr
config : F tr
coherent : F F tr

Four actions are addressed in this model. The first two, engage and disengage, allow a process to modify the set of streams from which it is taking information, by adding or removing a stream. A process can enter buffered mode via the buffer action. Lastly, the actual processing of information is represented by trans.

actions
engage, disengage : tr × tr
buffer, trans
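Before stating the axioms, the interactor state and its actions can be rendered as a minimal Python sketch. The encoding is assumed rather than taken from the paper: the relation @ is stored as a set named available, and the bodies of the three actions anticipate axioms ICS4 and ICS6-ICS8 below.

from dataclasses import dataclass, field
from typing import Dict, FrozenSet, Optional, Set, Tuple

Tr = Tuple[str, str]    # a transformation :src-dst: as (src, dst)
Code = Tuple[str, str]  # a located representation (repr, labelling sub-system)

@dataclass
class ICSState:
    source: Dict[Tr, Set[Tr]] = field(default_factory=dict)       # streams feeding each process
    stable: Set[Tr] = field(default_factory=set)
    available: Set[Tuple[Code, str]] = field(default_factory=set)  # the pairs c @ s
    buffered: Optional[Tr] = None                                  # at most one buffered process
    config: Set[Tr] = field(default_factory=set)
    coherent: Set[FrozenSet[Tr]] = field(default_factory=set)

    def buffer(self, t: Tr) -> None:
        self.buffered = t                        # ICS6

    def engage(self, t: Tr, s: Tr) -> None:
        assert s in self.stable                  # ICS4: engaging is permitted only for stable s
        self.source.setdefault(t, set()).add(s)  # ICS7

    def disengage(self, t: Tr, s: Tr) -> None:
        self.source.setdefault(t, set()).discard(s)  # ICS8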

The principles of information processing embodied by ICS are expressed as axioms over the model defined above. Axiom ICS1 concerns coherence, and states that processes in a set are coherent if and only if they have the same kind of output and the representations they produce are themselves coherent.

[ICS1]
∀ trs : F tr . trs ∈ coherent
<=>
∃ dst : sys . ( ∀ s, t : sys . :s-t: ∈ trs => t = dst
^
∀ s, t : sys; p, q : repr . (
:s-dst: ∈ trs ^ p_s @ dst
^
:t-dst: ∈ trs ^ q_t @ dst
) => p ≈ q
)

The second axiom is that a transformation is stable if and only if its sources are coherent, and either it is buffered or the sources are themselves stable. A configuration then consists of those processes that are generating stable output that is used elsewhere in the overall processing cycle.

[ICS2]
t ∈ stable <=> source(t) ∈ coherent ^ (t = buffered V source(t) ⊆ stable)
[ICS3]
t ∈ config <=> (t ∈ stable ^ ∃ s . t ∈ source(s))
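Operationally, ICS2 and ICS3 amount to a fixpoint computation, since the stability of a process depends on the stability of its sources. The following self-contained Python sketch makes that reading explicit; is_coherent is an assumed oracle standing in for the ICS1 test, which depends on the equivalence over representations.

from typing import Callable, Dict, Optional, Set, Tuple

Tr = Tuple[str, str]  # a transformation :src-dst: as (src, dst)

def stable_and_config(source: Dict[Tr, Set[Tr]],
                      buffered: Optional[Tr],
                      is_coherent: Callable[[Set[Tr]], bool]) -> Tuple[Set[Tr], Set[Tr]]:
    stable: Set[Tr] = set()
    changed = True
    while changed:  # least fixpoint of ICS2
        changed = False
        for t, srcs in source.items():
            if t not in stable and is_coherent(srcs) and (t == buffered or srcs <= stable):
                stable.add(t)
                changed = True
    # ICS3: the configuration is the stable output that is used elsewhere
    config = {t for t in stable if any(t in srcs for srcs in source.values())}
    return stable, config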

A process will not engage an unstable stream. If its own output is unstable, it will either engage a stable stream, disengage an unstable stream, or try to enter buffered mode.

[ICS4]
per (engage(t,src)) => src ∈ stable
[ICS5]
t ∉ stable =>
(
∃ s . s ∈ stable ^ s ∉ source(t) ^ obl (engage(t,s))
V
∃ s . s ∉ stable ^ s ∈ source(t) ^ obl (disengage(t,s))
V
obl (buffer(t))
)

The next three axioms define the effects of the buffer, engage and disengage actions.

[ICS6]
[buffer(t)] buffered = t
[ICS7]
source(t) = S => [engage(t,s)] source(t) = S ∪ {s}
[ICS8]
source(t) = S => [disengage(t,s)] source(t) = S - {s}

The remaining two axioms define the effect of information transfer. ICS9 is the forward rule: if a representation is available at a sub-system, then after a transformation a suitable representation will be available at any other sub-system for which the corresponding process is stable. Conversely, if after a transformation some information were to become available at a sub-system, then there must exist some source system where the information is available, and the corresponding transformation is stable.

[ICS9]
p_x @ src ^ :src-dst: ∈ stable => [trans] p_src @ dst
[ICS10]
∀ p : repr; src, dst : sys . ([trans] p_src @ dst) => ∃ x : sys . p_x @ src ^ :src-dst: ∈ stable
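The forward rule ICS9 can likewise be read as a single propagation step over the stable transformations. In this assumed encoding, a located code p_x @ s is stored as a ((p, x), s) triple.

from typing import Set, Tuple

Tr = Tuple[str, str]                   # a transformation (src, dst)
Located = Tuple[Tuple[str, str], str]  # ((representation, label), sub-system), i.e. p_x @ s

def trans(available: Set[Located], stable: Set[Tr]) -> Set[Located]:
    out = set(available)
    for (p, _x), src in available:        # p_x @ src ...
        for s, dst in stable:
            if s == src:                  # ... and :src-dst: stable
                out.add(((p, src), dst))  # => p_src @ dst, re-labelled by its new origin
    return out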

A Simple System for Gestural Interaction

The example system is based on the work on gesture recognition by [Rubine91]. The example, examined in some detail by [Faconti95] and [Faconti96], is simple enough, and of a suitable size, to illustrate the syndetic approach. The user specifies commands by simple drawings made with the mouse. Figure 3 shows one possible scenario, in which a circle is created and subsequently moved to a new location. We will examine the cognitive load imposed on users by such an interaction technique and provide insights into its strengths and possible limitations.
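For background, Rubine's recognizer computes a fixed vector of geometric features over the captured point sequence and selects the gesture class whose trained linear discriminant scores highest. The Python sketch below is a compressed illustration of that scheme rather than a faithful reimplementation: the three features are a small subset of Rubine's feature set, and the weights argument stands in for coefficients that would be trained from example gestures.

import math
from typing import Dict, List, Sequence, Tuple

Point = Tuple[float, float]

def features(stroke: Sequence[Point]) -> List[float]:
    # a small illustrative subset of Rubine-style geometric features
    path_len = sum(math.dist(stroke[i], stroke[i + 1]) for i in range(len(stroke) - 1))
    xs = [p[0] for p in stroke]
    ys = [p[1] for p in stroke]
    diag = math.dist((min(xs), min(ys)), (max(xs), max(ys)))  # bounding-box diagonal
    ends = math.dist(stroke[0], stroke[-1])                   # first-to-last distance
    return [path_len, diag, ends]

def classify(stroke: Sequence[Point], weights: Dict[str, List[float]]) -> str:
    # per-class linear discriminant: argmax over w0 + w . f(stroke)
    f = features(stroke)
    return max(weights, key=lambda g: weights[g][0] + sum(w * v for w, v in zip(weights[g][1:], f)))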


Three consecutive gestures performed by the user

Final system feedback

Figure 3: Creating, Grouping and Moving Objects


We begin by introducing the types of objects that are of interest. These are primitive types that represent domain concepts:

[Gesture]
recognized gestures, e.g. create_circle, move
[Create]
gestures to create new objects
[M_Posn]
positions in the mouse space
[D_Posn]
positions in the display space

The following relation holds amongst the sets:

(Create ⊆ Gesture) ^ (M_Posn ≠ D_Posn)

We model the gesture system as an interactor. Its state indicates the scene composed of the objects that have been created. These objects result from the recognition of the histories of point sequences traced by the user. cursor indicates the mouse position in display space. Moreover, scene, history and cursor are defined to be perceivable entities through the user's visual sub-system. The form action models the detection of a position that is perceived through proprioceptive feedback. The evaluate action is internal to the system and models the recognition of a complete gesture.

interactor GESTURE
attributes
scene : F Create
recognition : D_Posn* -> Gesture
history : D_Posn*
cursor : D_Posn
actions
form : M_Posn
evaluate : Gesture

The first axiom prescribes that, for a position to be formed, it must be perceived by the body-state sub-system. Similarly, the rendering of the scene and of the cursor must be part of the interactor presentation (written [|form|], [|scene|] and [|cursor|] respectively).

axioms
[GS1]
per (form(p)) => [|form|] in [|GESTURE|]
^ [|scene|] in [|GESTURE|]
^ [|cursor|] in [|GESTURE|]

When a new position is detected, it is either appended to the current history if it extends a trajectory leading to gesture recognition, or the history is reset to this position. In both cases a rendering of the history is visually perceivable. The new position is always the current cursor position.

[GS2]
∀ p : D_Posn; H : D_Posn* . history = H =>
[GS2.1]
(∃ P : D_Posn* . H ⌢ <p> ⌢ P ∈ dom recognition) =>
[form(p)] history = H ⌢ <p> ^ [|history|] in [|GESTURE|] ^ cursor = p
[GS2.2]
¬(∃ P : D_Posn* . H ⌢ <p> ⌢ P ∈ dom recognition) =>
[form(p)] history = <p> ^ [|history|] in [|GESTURE|] ^ cursor = p

As soon as the history indicates a complete gesture, there is an obligation to recognize it. Following recognition, the history is reset, and the gesture is added to the scene if a new object has been created. This means that the created object becomes perceivable as part of the scene (from axiom GS1).

[GS3]
history ∈ dom recognition => obl (evaluate(recognition(history)))
[GS4]
history = H ^ scene = S =>
[evaluate(recognition(H))] history = Ø ^ (recognition(H) ∈ Create => scene = S ∪ {recognition(H)})

For completeness, we note that the effect of a gesture that does not add objects to the scene is left unspecified; this, however, is beyond the scope of this paper.
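Read operationally, axioms GS2-GS4 describe the small state machine sketched below in Python. The encoding is illustrative rather than the paper's: recognition is reduced to a finite dictionary from complete trajectories to gesture names, and the prefix test stands in for the guard of GS2.1.

from typing import Dict, List, Set, Tuple

DPos = Tuple[int, int]  # a position in display space

class Gesture:
    def __init__(self, recognition: Dict[Tuple[DPos, ...], str], creates: Set[str]):
        self.recognition = recognition  # finite stand-in for recognition : D_Posn* -> Gesture
        self.creates = creates          # the gestures that create new objects
        self.scene: List[str] = []
        self.history: List[DPos] = []
        self.cursor: DPos = (0, 0)

    def _extends(self, h: List[DPos]) -> bool:
        # guard of GS2.1: some completion of h lies in dom recognition
        t = tuple(h)
        return any(k[:len(t)] == t for k in self.recognition)

    def form(self, p: DPos) -> None:
        self.cursor = p                                # GS2: the cursor follows the new position
        h = self.history + [p]
        self.history = h if self._extends(h) else [p]  # GS2.1 / GS2.2
        if tuple(self.history) in self.recognition:    # GS3: obligation to evaluate
            self.evaluate(self.recognition[tuple(self.history)])

    def evaluate(self, g: str) -> None:
        self.history = []                              # GS4: the history is reset
        if g in self.creates:
            self.scene.append(g)                       # the created object joins the scene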

Building the Syndetic Model

Having described the basic mechanisms governing ICS and the gesture recognizer, building a syndetic model is almost straightforward. A new interactor is constructed that includes the user and system models, and new axioms are defined that govern the conjoint behaviour of the two interactors. A new attribute, goal, is used to contextualize the generic ICS model to the specific features of the gestural interface. Specifically, a simple representation of the user's goal is the sequence of positions representing the trajectory to follow in performing a gesture.

interactor SYNDESIS
ICS, GESTURE
attributes
goal : D_Posn*

The form action, introduced in the gesture interactor, has already been constrained by considering system properties. Here, we add a further constraint derived from cognitive properties. In order for the user to consciously detect a position, the configuration of ICS sub-systems must be set to transform a propositional representation into hand-control. Axiom SYN1 describes this fact by assuming that the set GestConf contains the processes deployed in figure 2. Axiom SYN2 describes the effect on the goal caused by the detection of a new position.

axioms
[SYN1]
per (form(p)) => ( GestConf ⊆ config ^ buffered = :prop-obj: )
[SYN2]
goal = <p> ⌢ G => [form(p)] goal = G
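A minimal Python rendering of the two constraints, under the same assumed encodings as the earlier sketches, may help fix their operational reading.

from typing import List, Set, Tuple

Tr = Tuple[str, str]
DPos = Tuple[int, int]
# repeated from the earlier sketch for self-containment
GEST_CONF: Set[Tr] = {("prop", "obj"), ("obj", "lim"), ("lim", "hand"),
                      ("vis", "obj"), ("obj", "prop"), ("bs", "lim")}

def form_permitted(config: Set[Tr], buffered: Tr) -> bool:
    # SYN1: forming a position needs GestConf deployed and :prop-obj: buffered
    return GEST_CONF <= config and buffered == ("prop", "obj")

def consume_goal(goal: List[DPos], p: DPos) -> List[DPos]:
    # SYN2: goal = <p> ⌢ G  =>  [form(p)] goal = G
    return goal[1:] if goal and goal[0] == p else goal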

In the next section, we will give an informal account of the analysis of the model. It would be a mistake to evaluate syndetic modelling only on the basis of its support for formal reasoning about interaction. That is an ambitious goal to which a community of researchers is currently contributing in several respects, and it is seen as a long term research activity. However, representing user and system models in a common framework already provides, at the current stage, a starting point for rigorous if informal reasoning about design issues that involve both entities.

Analysis of the Syndetic Model

Standard Mouse Operation

At the propositional sub-system, a corresponding goal is formulated that consists of a sequence of coordinates representing positions along the trajectory the cursor is expected to follow. The goal is fully satisfied when all the positions in the sequence have been removed from the goal, according to axiom SYN2.

For each position in the sequence to be permitted, axioms GS1 and SYN1 must hold. That is, the mouse is perceived by proprioceptive feedback, the scene and the cursor are perceived visually, GestConf is deployed, and the propositional sub-system operates in buffered mode (axiom ICS6). If the :vis-obj: and :prop-obj: transformations are to be part of the configuration, we must also assume, from axiom ICS3, that the output from these sub-systems is stable. A stable :vis-obj: is defined by the conditions set by axiom GS1, and requires that the rendering of the history occurs as specified by axiom GS2. Under these assumptions, the two transformations must also be coherent, since they are both sources of the object sub-system (from axiom ICS2). After a transformation, as modelled by the trans action, a corresponding representation is transferred to the object sub-system according to axioms ICS9 and ICS10.

Using its own encoding of this information, the object sub-system is able to derive propositional information, for example on the distance between objects, and limb information specifying the musculature control needed for the cursor to approach the target position. This information is transformed by the :obj-prop: and :obj-lim: transformations and transferred over two stable streams to the propositional and limb sub-systems respectively, following a trans action and according to axioms ICS9 and ICS10.

The limb sub-system always receives an input stream from the body-state sub-system. According to the definition of the GESTURE interactor, forming a position with the mouse causes proprioceptive feedback from the device. From this feedback, information is derived on the velocity and acceleration of the arm/hand movement as perceived in mouse space. This information is transformed into limb code by the :bs-lim: transformation, according to axioms ICS9 and ICS10. Since this information is defined on a different space than the one received from the object sub-system, we argue that the two data streams are not coherent. Under this assumption, the :lim-hand: transformation becomes unstable according to axiom ICS2. To recover from this situation, the limb sub-system must enter buffered mode (from the third alternative of axiom ICS5), since no alternative stable streams carrying the same kind of information exist.

We already know from axiom SYN1 that the buffer is located at the propositional sub-system. This buffering is necessary to sustain the goal formulation and cannot be transferred to another location. The practical consequence is that forming a sequence of positions along a trajectory causes an oscillation of the cognitive configuration, by continuously transferring the buffer between the :prop-obj: and :lim-hand: transformations. This explains why it is difficult to perform even simple gestures with the mouse, unless the movement becomes `proceduralized' into the :lim-hand: process by repeated rehearsal.
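The oscillation can be made vivid with a toy trace, purely illustrative and built on the assumed encodings above: every formed position drags the single buffer back and forth between the two processes.

# SYN1 demands :prop-obj: buffered to sustain the goal, while ICS5 (third
# alternative) obliges buffering :lim-hand: once its input streams become
# incoherent; only one process may be buffered at a time.
trajectory = [(0, 0), (1, 1), (2, 1), (3, 2)]
buffered = ("prop", "obj")
for p in trajectory:
    print(f"form{p}: buffered = :{buffered[0]}-{buffered[1]}:")
    buffered = ("lim", "hand")  # recover hand control (ICS5)
    print(f"         then buffered = :{buffered[0]}-{buffered[1]}:")
    buffered = ("prop", "obj")  # sustain the goal again (SYN1)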

Performing Gestures with the Mouse

The analysis in the previous section suggests that a system such as the one described by [Rubine91] can be used only by skilled and well trained users. However, this is true only when a gesture must precisely follow an exact trajectory (i.e. draw this circle versus draw a circle). In fact, a user does not need to quantify exactly the geometric attributes and their relationships within a scene in order to draw a generic object; only a mental representation of the object based on qualitative attributes is required. In this case, the user will consciously disengage the :vis-obj: transformation and will not use visual information during gesture performance. The new configuration refers to a different context than the one set by the SYNDESIS interactor, and we need to define it with a NEW_SYNDESIS interactor in which GestConf is substituted with GestConf - :vis-obj: in axiom SYN1, according to axiom ICS8. In the new configuration, the :lim-hand: transformation does not require buffering. The visual information is explicitly excluded and, consequently, :obj-lim: generates a mental representation not bound to the display space. The resulting stream flowing to the limb sub-system is coherent with the one generated by :bs-lim:, so that the two can be blended.

Following this reasoning, the performance of the operation shown in figure 3 is described by switching between the GestConf and the GestConf - :vis-obj: configurations.

The user moves the cursor to the starting point of the gesture in display space by deploying GestConf; this can be performed naturally, following the arguments set out in [Faconti96]. The circle creation gesture is performed by switching to GestConf - :vis-obj:, as explained above. When the user perceives visually that the system has recognized the gesture, by effect of axiom GS4, the GestConf configuration is restored. The newly created circle is acquired by both the object and propositional sub-systems. The :vis-obj: transformation is disengaged to perform the move gesture, and engaged again on gesture recognition to perform the final dragging.

Conclusions

With the advent of new technologies, commonly referred to as multimedia/multimodal, the challenge is to demonstrate their usability rather than merely that systems can be built with them. The role of human cognitive abilities is becoming increasingly important in this respect. In particular, there is a need to be able to reason about usability long before an actual system is built. Established methods in software engineering and formal methods do not currently offer the necessary support for this reasoning. The approach outlined in this paper shows that syndetic modelling is one possible direction to explore for bridging this gap. The underlying combined representation of user and system adds to established methodologies in software development a theoretical background of cognition that can provide a basis for arguing why an interface is successful, for discovering potential problems, and for suggesting solutions.

Acknowledgments

This work was carried out as part of the ESPRIT Basic Research Action 7040 -- AMODEUS project and of the HCM Interactionally Rich Systems Network -- Contract ERBCHBGCT940674, both funded by the European Union. I would like to thank Phil Barnard and Jon May, whose papers and seminars on Interacting Cognitive Subsystems have been of invaluable help. I would also like to express my gratitude to David Duke and David Duce for having spent much time in discussions and for having originally developed the idea of syndetic models.

References

[Barnard93]
P.J.Barnard and J.May 1993. Cognitive modeling for user requirements. In P.F.Byerley, P.J.Barnard and J.May (Eds.), Computers, Communication and Usability: Design Issues, Research and Methods for Integrated Services, Amsterdam, NL, pp. 101--145. Elsevier.
[Barnard94]
P.J.Barnard and J.May 1994. Interaction with advanced graphical interfaces and the development of latent human knowledge. In F.Paterno' (Ed.), Proc. of the 1st Workshop on Design, Specification, Verification of Interactive Systems, Berlin, Germany. Springer Verlag.
[Bastide95]
R.Bastide and P.Palanque (Eds.) 1995. Proc. of the 2nd Workshop on Design, Specification, Verification of Interactive Systems. Springer Verlag, Berlin, Germany.
[Berners92]
T.J.Berners-Lee 1992. Electronic publishing and visions of hypertext. In Physics World, Volume 5.
[Bowen95]
J.P.Bowen and M.J.Hinchey 1995. Applications of Formal Methods. Prentice Hall International, New York.
[Connor92]
D.B.Connor, S.S.Snibbe, K.P.Herndon, D.C.Robbins, and A.van Dam 1992. Three dimensional widgets. In Symp. on Interactive 3D Graphics, Special issue of ACM SIGGRAPH Computer Graphics Journal, pp. 183--188. New York: ACM Press.
[Coutaz93]
J.Coutaz, L.Nigay, and D.Salber 1993. The MSM Framework: A Design Space for Multi-Sensory-Motor Systems. In L.Bass, J.Gornostaev, C.Unger (Ed.), Lecture Notes in Computer Science 753 (EWHCI'93) Selected Papers, Berlin, Germany, pp. 231--241. Springer Verlag.
[Duke95]
D.J.Duke 1995. Reasoning about gestural interaction. In Computer Graphics Forum, Volume 14. Cambridge, UK: NCC/Blackwell.
[Duke94a]
D.J.Duke, P.J.Barnard, D.A.Duce, and J.May 1994. Syndetic models for human computer interaction. ESPRIT BRA 7040 : AMODEUS id_wp35.
[Duke94b]
D.J.Duke, G.P.Faconti, M.D.Harrison, and F.Paterno' 1994. Unifying Views of Interactors. In Proceedings of AVI'94, pp. 143--152. New York, ACM Press.
[Duke93]
D.J.Duke and M.D.Harrison 1993. Abstract interaction objects. In Computer Graphics Forum, Volume 12, pp. 25--36. Cambridge, UK: NCC/Blackwell.
[Faconti90]
G.P.Faconti and F.Paterno' 1990. An approach to the formal specification of the components of an interaction. In C.E.Vandoni and D.A.Duce (Eds.), Proc. of Eurographics'90, Amsterdam, NL, pp. 481--494. North-Holland.
[Faconti95]
G.P.Faconti and A.Fornari 1995. Syndetic Modelling and Gestural Interaction. In Stefanidis C. (Ed.), Proc. of Workshop on User Interfaces for all, Heraklion, Crete. European Research Consortium for Informatics and Mathematics (ERCIM).
[Faconti96]
G.P.Faconti and D.J.Duke 1996. Device Models. ESPRIT BRA 7040 : AMODEUS id_wp57 (submitted for publication).
[Gaudel94]
M.C.Gaudel 1994. A classification of formal methods. In Proc. International Conference on Software Engineering, New York, pp. 112--128. IEEE Computer Society Press.
[Martin73]
J.Martin 1973. Design of Man-Computer Dialogues. Prentice Hall International, New York.
[May93]
J.May, P.J.Barnard, and L.Tweedie 1993. AnimICS Version 6.0B2: An animated tutorial on ICS. ESPRIT BRA 7040 : AMODEUS ftp server in usemod/AnimICS_6.BETA.hqx.
[Paterno94]
F.Paterno' (Ed.) 1994. Proc. of the 1st Workshop on Design, Specification, Verification of Interactive Systems. Springer Verlag, Berlin, Germany.
[Robertson91a]
G.G.Robertson, J.D.Mackinlay, and S.K.Card 1991a. Cone trees: Animated 3D visualizations of hierarchical information. In S.P.Robertson, G.M.Olson and J.S.Olson (Eds.), Proc. of CHI'91: Reaching Through Technology, New York, pp. 189--194. ACM Press.
[Robertson91b]
G.G.Robertson, J.D.Mackinlay, and S.K.Card 1991b. The perspective wall: Detail and context smoothly integrated. In S.P.Robertson, G.M.Olson and J.S.Olson (Eds.), Proc. of CHI'91: Reaching Through Technology, New York, pp. 173--179. ACM Press.
[Rubine91]
D.Rubine 1991. The Automatic Recognition of Gestures. PhD Thesis, Carnegie Mellon University, Pittsburgh, PA.
[Ryan91]
M.Ryan, J.Fiadeiro, and T.Maibaum 1991. Sharing actions and attributes in modal action logic. In T.Ito and A.R.Meyer (Eds.), Theoretical Aspects of Computer Software, Berlin, Germany. Springer Verlag.

Author's Address

CNUCE Institute, National Research Council of Italy,
56126 Pisa, Italy
e-mail: G.Faconti@cnuce.cnr.it
