This paper shows how the concept of user task can be used to drive various phases of the design process. The basic idea is to create a direct correspondence between user tasks and the software components needed to perform them. This leads to results such as the design of functionalities which are easier to use, task-oriented help, and the evaluation of user sessions with respect to their tasks.
Introduction
The traditional models in software engineering for the development and evaluation process (such as the Waterfall, the `V', and the Spiral models) were conceived to support functional requirements and specifications. The increasing importance of guaranteeing the usability of software products has made it necessary to revise these models so as to include the user's point of view in the software development process. In this paper we describe some results in this area obtained by the User Centered Design Group at CNUCE.
Our work is based on two main research areas: formal methods for software specification and development, and the modelling of user tasks in interactive systems.
The main goal is to find a balanced interplay between these two aspects, so as to obtain a rigorous approach to software specification and development which guarantees that user requirements are satisfied by the final implementation.
There is some interesting related work in this area. UAN [HG92] was an important contribution, as it is a formal notation whose concepts are very similar to those of process algebra notations such as LOTOS, CCS and CSP. The main difference is that in UAN the unit of modularization is the task rather than the process, which gives a more user-oriented way to structure and analyse the specification. The external description of the user interface is obtained by associating basic tasks with the user actions and system feedback needed to perform them. The main goal of UAN is to communicate user interface designs in order to discuss possible solutions, whereas we are interested in approaches which support the software development process more directly. We have therefore developed an approach which transforms the task specification into an architectural specification that includes both the perceivable behaviour of a user interface and a description of the software which controls it.
Palanque and Bastide have considered both the use of formal methods (based on Object Petri Nets) and task models [PBS95]. We regard the task model as an abstract specification of the system model, whereas they use the two as interacting models.
At the HCI group in York, research is under way which links tasks and formal models: Fields, Harrison and Wright [FHW95] have developed an approach which considers properties of the relationship between the information presented by the system and that required by the user in order to perform some task. Duke et al. [DBDM95] have proposed the Syndetic model, which tries to integrate a model of human cognitive capabilities with system models. This is a useful contribution, as user cognitive models raise different requirements from task models and it is important to define methods to integrate them.
Our basic idea was to model the software specification using user tasks as the abstract model, rather than starting from functional requirements which do not take into account the user's view of system functionalities. The initial goal was to structure the software implementation in a task-driven way: users can then more easily understand system functionalities, and if tasks have to be modified it is easier to locate the corresponding parts of the software implementation. We soon realised that creating a direct correspondence between user tasks and software components, although insufficient to address all usability issues, opens up further perspectives in the various phases of the design process.
In this paper we briefly describe some of the results obtained.
We consider a task specification to be an indication of the logical actions in the application domain which need to be performed in order to modify the state of an Interactive System or to query it.
In our approach the task specification is performed hierarchically: abstract tasks are described in terms of more refined tasks, using LOTOS operators (such as interleaving, enabling, disabling, synchronization and recursive instantiation) to indicate the temporal relationships among tasks at the same level. For each task, the related objects and actions are indicated. We then developed an algorithm which takes the task specification as input and produces an interactor-based specification of a software architecture in which there is a direct correspondence between tasks and the software interactors used to perform them. The interactor concept [P94] is used to structure the specification: an interactor is a model for software objects which have to interact with users, characterised by its ability to support a bidirectional information flow between the user and the application. Instances of interactors can be composed hierarchically along both the input and the output information flow.
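To make this concrete, the following is a minimal sketch of such a hierarchical task specification represented as a data structure. The notation is Java rather than LOTOS, the operators shown are only a subset of those mentioned above, and the flight-query task names are invented for illustration.

```java
import java.util.List;

// LOTOS-style temporal operators used to relate sibling tasks.
enum TemporalOperator { ENABLING, INTERLEAVING, DISABLING, SYNCHRONIZATION }

// A task is either basic (no subtasks) or refined into subtasks combined
// by a temporal operator.
record Task(String name, TemporalOperator operator, List<Task> subtasks) {
    static Task basic(String name) {
        return new Task(name, null, List.of());
    }
}

class TaskSpecSketch {
    public static void main(String[] args) {
        // "MakeQuery" is refined into sequential (enabling) subtasks: the
        // result can only be shown after the parameters have been specified
        // and the query submitted, while the parameters themselves can be
        // chosen in any order (interleaving).
        Task makeQuery = new Task("MakeQuery", TemporalOperator.ENABLING, List.of(
            new Task("SpecifyParameters", TemporalOperator.INTERLEAVING, List.of(
                Task.basic("SelectAirport"),
                Task.basic("SelectDate"))),
            Task.basic("SubmitQuery"),
            Task.basic("ShowResult")));
        System.out.println(makeQuery);
    }
}
```

Any software architecture derived from such a specification must preserve these temporal relationships: for example, the interactor supporting ShowResult should only become enabled once SubmitQuery has been performed.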
The software architecture is designed top-down, and designers can stop this process at different abstraction levels depending on their purposes. In any case, the software components inherit the temporal constraints of the corresponding tasks.
Note that, in the end, various types of relationship can be obtained between tasks and interactors, depending on the abstraction level at which the refinement stops: for example, a basic task may be supported by a single interactor, while an abstract task is supported by a composition of interactors.
It is very important that the architectural specification satisfies the temporal constraints of the task specification; otherwise the chances of the user making a mistake increase. For example, if there is a sequentiality constraint between two tasks because the second needs information which has to be produced by the first, and the software implementation nevertheless allows the two tasks to be performed in parallel, the user may end up performing the second task without all the information needed.
Just as the software specification must not relax the constraints of the task level, it must not add further constraints either: doing so would introduce limitations which have no justification in the application domain.
The LOTOS specification of the Interactive System software is obtained mainly by associating a LOTOS process with each interactor. Some control processes may be added in order to exercise further control over the dynamic behaviour of the resulting specification. One of the main advantages of LOTOS specifications is the possibility of reasoning about their properties by applying automatic model-checking tools. The LOTOS specification is automatically translated into a corresponding labelled transition system, which represents the model against which user interface properties expressed in the action-based temporal logic ACTL are automatically verified. ACTL is a branching-time temporal logic, which means that it is possible to reason about alternative temporal evolutions of the system considered.
Although the properties we deal with are general ones, they can be tailored to the requirements of the particular system being specified.
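For instance, a requirement such as `whenever the user submits a query, a result presentation eventually follows' could be written in ACTL along the lines of AG [submit_query] A[true {true} U {show_result} true], meaning that in every reachable state, after a submit_query action, every subsequent path eventually contains a show_result action. The action names here are hypothetical, and the exact syntax depends on the verification tool used.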
Interactors have proved a useful concept for structuring formal specifications of the software part of an Interactive System. One problem soon arose when we wanted to turn these specifications into software implementations: most current toolkits for software development follow different models (or no underlying model at all), so there was a mismatch between specifications based on our interactor model and implementations based on other models.
Thus two solutions were available: either to introduce a layer which is in charge of performing the translation from interactor-based specifications into toolkit implementations, or to build a toolkit which follows the same model as the specification. We chose the second option because it allows us to simplify the development process and to gain the benefits of a structured model such as the interactor one at the implementation level too.
In our toolkit each object is obtained by multiple inheritance from three components (Input, Abstraction, Presentation), which represent three dimensions in the design of software objects. In order to stress semantic aspects, the tree of available interactor classes is organised above all around the Input component, which is classified according to the information generated towards the application side; this is the information most closely related to the task that the software object supports. Once an input functionality has been defined, different kinds of presentation can support it, each corresponding to a different subclass. Finally, once an input and a presentation have been identified, various ways can be indicated to give the user feedback on the input generated.
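A rough sketch of this organisation is shown below. The class and method names are invented, and since Java does not support multiple class inheritance, the three components are approximated here by interfaces rather than by the multiple inheritance used in the toolkit.

```java
// The three design dimensions of an interactor in the toolkit.
interface Input {            // information generated towards the application side
    Object inputValue();
}
interface Presentation {     // how the interactor is rendered to the user
    void render();
}
interface Abstraction {      // application-side data the interactor manipulates
    void update(Object applicationData);
}

// A hypothetical concrete interactor: a choice among a set of elements,
// presented as a list with highlighting feedback. Sibling subclasses would
// vary the presentation or the feedback while keeping the same Input
// functionality, and therefore support the same task.
class ListChoiceInteractor implements Input, Presentation, Abstraction {
    private Object[] elements = new Object[0];
    private Object selected;

    public Object inputValue() { return selected; }   // Input dimension
    public void render() { /* draw the list and highlight the selection */ }
    public void update(Object applicationData) { elements = (Object[]) applicationData; }
}
```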
Thus, once the specification has been made, we know for each interactor which task it is supposed to perform. The development environment allows developers to immediately locate the classes which support that task, and with a navigation tool they can analyse the available refinements, which differ in the type of presentation or feedback provided to the user.
The first implementation of the interactor-based toolkit was written in the Sather object-oriented programming language; a second implementation, in Java, is being developed.
The association between user tasks and software interactors is useful at run-time as well: its main advantage is that task-oriented contextual help can be provided. We noticed that the design of help systems often suffers from the same limitation as the design of user interfaces: poor support for semantic aspects. This means that help systems usually provide information about the possibilities of interaction with the perceivable objects, but they do not answer the questions which users most often ask, such as:
These questions are the ones most closely related to the user's view of the system functionalities, and it is important that help systems be able to provide satisfactory answers to them.
To provide these answers, our approach uses a task manager which holds at run-time the information about the association between tasks and interactors and receives information about the actions performed by the user. The help system can be activated in two ways: either by interacting with a disabled interaction object, or by making a query formulated according to one of the four classes of questions mentioned above.
We have designed an algorithm that gathers the information needed to answer such questions by navigating the hierarchy of the task specification. The help system descends from the root node, stepping only through active nodes towards the desired task, until it reaches a subtree which contains the desired task but cannot be entered because it is not active. Two possibilities can then occur: the desired task can be enabled by activating another task first, or it is not possible to move further down because the desired task cannot be activated from the current state. In the latter case the algorithm backtracks, looking for a recursive task (hereafter the X task), that is, a task which, once completed, can be executed again by performing a different set of subtasks. If no recursive task is found, then the desired task can no longer be performed in the current session: the system has been designed with one-way trap doors, which means that it may enter states from which a set of tasks can no longer be accomplished. Otherwise, if a recursive task is found, it is possible to select another set of subtasks which includes the desired task, thus solving the problem. The resulting help message will first indicate the subtasks to perform in order to complete the recursive X task, and then the subtasks which enable the desired task.
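A simplified sketch of this search is given below. The class and method names are invented, and details such as how the remaining subtasks of the recursive task are computed are omitted.

```java
// Sketch of the help search over the task hierarchy: descend through active
// nodes towards the desired task; when blocked, backtrack looking for an
// enclosing recursive task whose completion would enable a new set of
// subtasks including the desired one.
import java.util.ArrayList;
import java.util.List;

class TaskNode {
    String name;
    boolean active;                  // can currently be performed
    boolean recursive;               // once completed, can be instantiated again
    TaskNode parent;
    List<TaskNode> children = new ArrayList<>();
}

class TaskOrientedHelp {

    /** Tasks to suggest in the help message; empty if the desired task
     *  can no longer be performed in the current session (trap door). */
    List<TaskNode> explain(TaskNode root, TaskNode desired) {
        TaskNode frontier = descend(root, desired);
        if (frontier == desired) {
            return List.of(desired);   // the desired task is already enabled
        }
        // Backtrack towards the root looking for a recursive ancestor (the X task).
        for (TaskNode t = frontier; t != null; t = t.parent) {
            if (t.recursive) {
                List<TaskNode> suggestion = new ArrayList<>(remainingSubtasks(t));
                suggestion.add(desired); // once X is completed, the desired task is enabled
                return suggestion;
            }
        }
        return List.of();   // one-way trap door: no recursive task found
    }

    /** Deepest active node on the path from 'node' towards 'desired'. */
    private TaskNode descend(TaskNode node, TaskNode desired) {
        for (TaskNode child : node.children) {
            if (child.active && contains(child, desired)) {
                return descend(child, desired);
            }
        }
        return node;
    }

    private boolean contains(TaskNode subtree, TaskNode target) {
        if (subtree == target) return true;
        for (TaskNode c : subtree.children) {
            if (contains(c, target)) return true;
        }
        return false;
    }

    /** Subtasks still needed to complete task t (details omitted in this sketch). */
    private List<TaskNode> remainingSubtasks(TaskNode t) {
        return List.of();
    }
}
```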
In the evaluation phase, the association between tasks and interactors can give other useful results. We are developing a tool (TASM, Tasks Actions Specification Mapping) which gathers information from the task-driven, interactor-based specification and from the log files which store the events performed by the user, and produces information about how well the user has performed the tasks.
The tool creates an association between physical events and the actions in the LOTOS specification of the application considered. In the specification it is possible to identify the actions which correspond to the completion of tasks and those which are needed to reach them. We can thus compare these with the actions actually performed by the user and identify the errors the user has made, that is, actions which do not contribute to performing the desired task.
More specifically, the tool works on three different levels: the task specification, the software architecture specification, and the user actions. A set of tables contains the information which allows the tool to link the different levels. In the LOGS_LOTOS_TABLE the physical actions are associated with the actions of the LOTOS specification; by exploiting knowledge of the interactor architecture, the INTERACT_TABLE identifies the internal actions of the specification which are activated by the user's physical actions; finally, the LOTOS_TASK_TABLE indicates which actions in the specification are associated with the completion of a task.
Figure 1: Abstraction Levels and Tables in the TASM Tool
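A rough sketch of how these tables might be represented is shown below; the LogEvent type, the map-based representation and the method names are assumptions made purely for illustration.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

// One entry of the user log file: a physical event on an interaction object.
record LogEvent(String interactionObject, String event) {}

class TasmTables {
    // LOGS_LOTOS_TABLE: physical user events -> actions of the LOTOS specification.
    Map<LogEvent, String> logsLotosTable;
    // INTERACT_TABLE: user-level LOTOS action -> internal actions it activates
    // in the interactor architecture.
    Map<String, List<String>> interactTable;
    // LOTOS_TASK_TABLE: LOTOS actions that correspond to the completion of a task.
    Map<String, String> lotosTaskTable;

    TasmTables(Map<LogEvent, String> logsLotosTable,
               Map<String, List<String>> interactTable,
               Map<String, String> lotosTaskTable) {
        this.logsLotosTable = logsLotosTable;
        this.interactTable = interactTable;
        this.lotosTaskTable = lotosTaskTable;
    }

    /** The task (if any) whose completion corresponds to a logged user event. */
    Optional<String> completedTask(LogEvent e) {
        return Optional.ofNullable(logsLotosTable.get(e))
                       .map(lotosTaskTable::get);
    }
}
```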
The tool can give information such as: the actions strictly needed to perform a task; numeric information (such as the number of executions of each task and the number of actions performed); the list of actions and tasks performed; the list of actions and tasks which may be performed; and the user errors which occurred, classified into three categories (minimal, recoverable and unrecoverable).
The evaluation results can be provided following a temporal or a logical (task-dependent) order.
The approach seems promising and, after some initial testing in an evaluation of a Swedish information system [PSL95], it is being further developed and applied to other case studies.
The resulting environment, which is based on the use of various models and tools, is shown in Figure 2.
The starting point is the specification of the user tasks. The task model is then used to drive the modelling and specification of a software architecture. By applying general-purpose automatic tools for formal verification it is possible to check whether the specified system supports the desired properties. Once a satisfactory specification has been obtained, it can be turned into an implementation by using the interactor toolkit. The implementation can support task-oriented, contextual help. Finally, by using information from the architectural LOTOS specification and from the user sessions with the implementation, the TASM tool can provide an evaluation of the user interface.
Figure 2: The Task-Centered Environment
The task-driven specification approach has been applied to various case studies: the user interface for an air traffic controller [PM95a], a multimodal user interface for a flight data base [PM95b], a geographical information system [PSL95], and a mailing system [PP95]. Recently we have been working on integrating task and user models (such as [A83], [BM95], [CMN83]) in the development of multimedia user interfaces. The basic idea is to start with a task specification which considers only constraints belonging to the application domain. When basic tasks have to be associated with the software interaction objects which support their performance, requirements from the user model should be considered in order to identify and design more effective software objects.
Another recent line of work [AMP95] concerns the design of multimedia presentations of the results of multimedia database queries: we use semantic relationships among the data, information about the tasks, and presentation and effectiveness criteria to generate, with the support of an automatic tool currently under development, the design of the query result in a multimedia environment.
I want to thank all those who have participated in the activities of the User Centered Design Group at CNUCE described above: Nicola Aloia, Adolfo Leonardi, Maristella Matera, Menica Mezzanotte, Saverio Pangoli, Sabrina Sciacchitano.
Support for this work came from the Amodeus 2 BRA Esprit Project and the CNR project on Verification of Digital Systems.
CNUCE-CNR, Via S.Maria 36, 56126 Pisa, Italy