SIGCHI Bulletin
Vol.29 No.2, April 1997

Dissertations


Abstracts of Interest

The following are citations selected by title and abstract as being related to Computer-Human Interaction, resulting from a computer search, using Dialog Information Services, of the Dissertation Abstracts Online database produced by University Microfilms International (UMI). Included are the UMI order number, title, author, degree, year, institution, number of pages, Dissertation Abstracts International (DAI) subject category chosen by the author, and abstract. Unless otherwise specified, paper or microform copies of dissertations may be ordered from University Microfilms International, Dissertation Copies, Post Office Box 1764, Ann Arbor, MI 48106; telephone for U.S. (except Michigan, Hawaii, Alaska): 1-800-521-3042; for Canada: 1-800-343-5299; elsewhere: +1-313-973-7007. Web: http://www.umi.com/; e-mail: International_Sales@umi.com. Price lists and other ordering and shipping information are in the introduction to the published DAI. An alternate source for copies is sometimes provided. Dissertation titles and abstracts contained here are published with permission of University Microfilms International, publishers of Dissertation Abstracts International (copyright by University Microfilms International), and may not be reproduced without their prior permission.

Susanne M. Humphrey
Ben Shneiderman
Contributing Editors

The Design of a Computer-Mediated Reading Tool for the Enhancement of Second Language Reading Comprehension through the Provision of On-Line Cues

Brasche, Hartmut Paul; University of Toronto (Canada) Ph.D. 1991, 212 pages; ISBN: 0-315-69393-2; University Microfilms Order Number ADGNN-69393.

Today's second language teaching/learning practice has been criticized for lacking adequate means to equip students with effective methods for reading second language texts. This problem is further compounded by the fact that current development of software for second language learning fails to address it and, instead, inadvertently reinforces the use of inefficient reading strategies. The objective of this thesis is (a) to demonstrate that an adequate theoretical basis exists in the fields of psychology and pedagogy of reading for designing a software program which addresses the teaching/learning problem; (b) to generate the design of a second language reading program that corresponds to the theory and document the programming components of a first prototype version of the tool software; (c) to present the methodology and results of an initial formative evaluation demonstrating that the design meets criteria for utilization in the light of the teaching/learning problem being addressed; and (d) to discuss the implications of the design and the prototype software program in the light of relevant theory. With respect to operational characteristics, the results of the formative evaluation showed that the prototype tool software met the basic criteria for functional utilization, appeared robust, and posed no obstacle to use for further experimentation and research. With respect to promoting behaviours related to expert L2 reading, the data suggest that tool use promoted greater processing in the target language. Pending further research, initial results suggest that tool users were more willing and able to recall the meaning of words encountered in the text. The thesis concludes by specifying a number of areas for further research with the tool.

Print Pathways and Interactive Labyrinths: How Hypertext Narratives Affect the Act of Reading

Douglas, Jane Yellowlees; New York University Ph.D. 1992, 250 pages; University Microfilms Order Number ADG92-37749.

This dissertation examines the ways in which hypertext affects the act of reading. In the new and, as yet, convention-less environment of hypertext space, it is possible to perceive aspects of the transaction between reader and text normally not visible amid the familiar trappings of print environments, enabling us to answer the questions: How do readers make meaning? How do readers negotiate the "blanks" or "gaps" in the text which nearly all theorists claim are endemic to the act of reading? Are endings essential to the process of reading? Do hypertext readers have more autonomy than readers of print narratives? How will the use of hypertext in education affect the definition of learning?

After exploring the ways in which hypertext narratives resemble and differ from traditional and avant-garde print narratives, the dissertation examines the strategies of readers attempting to piece together a short story cut into segments, and how our perceptual inclination toward seeing connections in the world around us enables us to see multiple connections between elements in a text. The study goes on to examine the meaning-making process of two sets of readers, one reading a print short story and one an interactive version based on the print story, and the ways in which the readers' respective processes of coming to understand the text reflect their tendency to arrive at interpretive decisions based upon their perception of the relationship between the significance and "place" of textual nodes in hypertext's virtual, three-dimensional space.

Finally, an exploration of the strategies which readers use when confronting texts which have no physical "ending" uncovers the link between the act of prediction, one of the chief constituents of the process of meaning-making, and our need to anticipate endings, even when the sort of determinate, physical closure inherent in print narratives is deferred or displaced. At the same time, the network of connections and nodes which forms the hypertext can oblige readers to participate in something resembling a game between reader, text, and unseen author, where readers must anticipate authorial intention in order to navigate through the author's "intentional network".

Deriving and Manipulating Module Interfaces

Nord, Robert Louis; Carnegie-Mellon University Ph.D. 1992, 174 pages; University Microfilms Order Number ADG92-38823.

A formal method for systematically integrating general-purpose software modules into efficient systems is presented. The integration is accomplished through adjustment of abstract interfaces and transformation of the underlying data representations. The method provides the software designer with the ability to delay or revise design decisions in cases where it is difficult to reach an a priori agreement on interfaces and/or data representations.

To demonstrate the method, the development of a text buffer for a simple interactive text editor is given. For each basic operation on the text buffer, a natural and efficient choice of data representation is made. This organizes the operations into several "components," with each component containing those operations using the same data representation. The components are then combined using formal program-manipulation methods to obtain an efficient composite representation that supports all of the operations.

This approach provides meaningful support for later adaptation. Should a new editing operation be added at a later time, the initial components can be reused in another combining process, thereby obtaining a new composite representation that works for all of the operations including the new one. There are also ramifications for the application of formal methods to larger-scale systems, as this method can be applied to the manipulation of the interfaces between modules in larger software systems.
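
To make the combining step concrete, here is a minimal Python sketch under stated assumptions: the class names, the toy operations, and the naive index re-derivation are invented for illustration, and the dissertation's formal program-manipulation method is not reproduced. Each component chooses the representation that is natural for its own operation, and the composite supports both operation sets.

    # Hypothetical sketch of combining per-operation representations
    # into one composite text buffer.

    class GapBuffer:
        """Representation chosen for efficient insertion at a position."""
        def __init__(self, text=""):
            self.chars = list(text)
        def insert(self, pos, s):
            self.chars[pos:pos] = list(s)
        def text(self):
            return "".join(self.chars)

    class LineIndex:
        """Representation chosen for efficient line-oriented access."""
        def __init__(self, text=""):
            self.lines = text.split("\n")
        def line(self, n):
            return self.lines[n]

    class CompositeBuffer:
        """Composite representation supporting both operation sets.

        A real derivation would transform the representations formally;
        here we simply keep the line index consistent after each insert.
        """
        def __init__(self, text=""):
            self.gap = GapBuffer(text)
            self.index = LineIndex(text)
        def insert(self, pos, s):
            self.gap.insert(pos, s)
            self.index = LineIndex(self.gap.text())  # naive re-derivation
        def line(self, n):
            return self.index.line(n)

    buf = CompositeBuffer("hello\nworld")
    buf.insert(5, ", there")
    print(buf.line(0))  # hello, there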

Communicating with Graphic User Interfaces: A Comparison of Menu Selection and Menu Bypass Techniques

Hammontree, Monty Lee; Old Dominion University Ph.D. 1991, 228 pages; University Microfilms Order Number ADG92-30320.

The present study was conducted in two phases to determine design tradeoffs relating to command-bar menu and bypass code-based techniques for interacting with computers. Forty-eight subjects participated. In the first phase of the experiment, mouse-, chorded key-, and function key-based menu selection techniques were compared. It was found that menus were accessed much faster with spatially mapped function keys than with chorded key sequences or mouse inputs, and that, relative to mouse inputs, compatible letter keys led to faster command selection times. Further, the function key-based technique yielded the fastest combined access and selection times, the fastest block completion times, and the fewest errors. In the second phase of the experiment, four experimental conditions were produced by crossing two menu input devices (i.e., mouse and keyboard) with two bypass coding structures (i.e., function key-based codes and chorded key-based codes). It was found that the groups which used function key-based codes entered the menu-designating portion of the bypass codes faster than those that used chorded key-based codes. The coding structure based on spatially mapped function keys also yielded faster task completion times. Furthermore, there were fewer command substitution errors with this coding structure. Comparisons between the groups with no prior exposure to the code sequences (i.e., the groups that used the mouse to make menu selections during the first phase) revealed that the function key-based technique also led to fewer command omissions and fewer extraneous command selections. Finally, subjective data showed menus were felt to be easier to learn, less demanding in terms of mental resources, and less anxiety provoking than bypass codes. In contrast, bypass codes were felt to be more natural, more convenient to use, faster in terms of task times, and better in terms of task performance. The findings of this study clearly indicate that both menu- and bypass code-based styles of control should be provided to promote user acceptance. Furthermore, the performance advantages observed for the function key-based technique point to it as the menu selection and bypass technique of choice.

Cognitive Profiling as a Basis for User Models

Durrani, Qaiser Shehryar; The George Washington University D.Sc. 1992, 253 pages; University Microfilms Order Number ADG92-38110.

Experts sometimes make errors due to biases in their judgement. This research was conducted to determine whether the decision-making process of experts can be improved by critiquing their performance using user modeling techniques. The user modeling concept focuses on providing help to users based on individual user capabilities. The goal is to see how a computer critic, through user modeling, can improve an expert's performance in situations prone to bias.

Psychological testing techniques are finding increasing use in areas such as psychopathology, personal attributes, socialization factors, memory studies, and school achievement testing, to achieve better results. In this research, several standard psychological testing techniques are utilized to implicitly assess users' cognitive abilities and to build user profiles. Use of cognitive testing is in contrast to existing user models, which use explicit query-based techniques to build user profiles.

Two empirical studies were conducted to determine the effects of user modeling on expert decision making during task performance. The first study sought to establish a relationship between cognitive abilities and human biases. Three standard psychological tests were administered to a control group to capture user cognitive abilities. The same user group was then given a set of eight tasks to perform. These tasks were designed to measure three of the most common biases that experts exhibit. A matrix of cognitive abilities and human biases was developed based on the task outcomes. The matrix was built using multivariate statistical analysis techniques such as factor analysis, correlational analysis, stepwise multiple linear regression, and discriminant analysis. Cognitive profile rules were then extracted from this matrix so that they could be implemented in the user model.

The second study verified the user model which in turn validated the cognitive profile rules. Here a new group of subjects took the same tests. The psychological tests captured the cognitive abilities of the subjects. Based on these cognitive measures and the matrix developed in the first study, a user model was built for each subject. During the repetition of the eight tasks, the user model predicted the likelihood of biases being committed and triggered different critiquing screens to assist the subjects in overcoming their biases. As the tasks progressed, the deviations from the matrix's predictions prompted the user model to update itself.
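
As a rough illustration of how such a model might drive a critic, the following Python sketch maps cognitive-profile scores to predicted biases and triggers a critiquing message. The profile dimensions, thresholds, and bias names are hypothetical, not the rules derived in the dissertation.

    # Hypothetical sketch of a cognitive-profile-driven critic.

    PROFILE_RULES = [
        # (profile dimension, threshold, bias predicted when score is below it)
        ("working_memory", 0.4, "anchoring"),
        ("field_independence", 0.5, "confirmation"),
    ]

    def predict_biases(profile):
        """Return the biases the model expects this user to exhibit."""
        return [bias for dim, thresh, bias in PROFILE_RULES
                if profile.get(dim, 1.0) < thresh]

    def critique(profile, task_outcome):
        """Trigger a critiquing message when a predicted bias appears."""
        for bias in predict_biases(profile):
            if bias in task_outcome.get("biases_observed", []):
                print(f"Critic: your answer may reflect {bias} bias; "
                      f"consider re-examining the evidence.")

    user = {"working_memory": 0.3, "field_independence": 0.8}
    critique(user, {"biases_observed": ["anchoring"]})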

User Interface Development and Tailoring Tools

Hesketh, Richard Laurence; University of Kent at Canterbury (United Kingdom) Ph.D. 1992, 243 pages; University Microfilms Order Number ADGDX-97627.

Available from UMI in association with The British Library.

Modern graphical user interfaces have opened up the use of computers to an ever-increasing range of users. Creating interfaces suited to this wide range of user skills has proved to be difficult. Many software tools have been created to aid designers in the production of these interfaces. However, designers still cannot produce interfaces and applications that are suitable for all end-users. Compared with designers, users have a better understanding of the tasks they wish to complete. It is therefore vital that end-users are involved in the development of interfaces and applications. One such involvement is through the tailoring of interfaces by the end-users themselves. A system is tailorable if it allows end-users to modify its appearance and/or functionality. Through these modifications end-users can produce interfaces and applications more suited to their requirements. In this thesis we examine tailorable systems for graphical workstations. We define a model of tailoring and investigate existing systems. From this investigation an example framework for inherently tailorable applications has been devised. To test the ideas presented, a number of software products have been designed and implemented. These products have been designed to ease the difficulty found both by application developers, who wish to create tailorable systems, and by end-users, who wish to take advantage of them. The conclusion is drawn that, through the careful design of underlying mechanisms, tailorable systems can be created efficiently by designers.
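
The flavour of an inherently tailorable component can be suggested with a small Python sketch (not Hesketh's actual framework; all names here are invented): the widget consults a user-editable settings table instead of hard-coding its appearance and behaviour.

    # Illustrative sketch of a widget whose appearance and behaviour
    # are looked up in a user-editable table rather than hard-coded.

    DEFAULTS = {"label": "Save", "colour": "grey",
                "action": lambda: print("saving")}

    class TailorableButton:
        def __init__(self, user_overrides=None):
            # End-user modifications shadow the designer's defaults.
            self.settings = {**DEFAULTS, **(user_overrides or {})}
        def draw(self):
            print(f"[{self.settings['colour']}] {self.settings['label']}")
        def press(self):
            self.settings["action"]()

    # An end-user retitles the button and replaces its behaviour
    # without touching application code.
    btn = TailorableButton({"label": "Keep",
                            "action": lambda: print("keeping a copy")})
    btn.draw()
    btn.press()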

Using Probabilistic Finite State Models to Evaluate Human-Computer Interfaces to CASE Systems

Weiss, William Samson; The University of Connecticut Ph.D. 1992, 384 pages; University Microfilms Order Number ADG93-00959.

This project involves the development of a technique for modeling and analytically evaluating human-computer interfaces. The modeling technique is supported by an extensive set of theorems and algorithms which describe the various model-building phases. The theorems and algorithms show that the representations of each human-computer interface converge to a single probabilistic finite state model which accurately represents the human-computer interaction. The resulting probabilistic finite state models are then used as predictors to generate data which can be statistically analyzed to yield a comparison between alternative human-computer interfaces.

The technique uses an approach in which probabilistic finite state models of the prospective human-computer interfaces are constructed. Specifications of the human-computer interfaces provide for the partial definitions of initial probabilistic finite state models. Initially, the missing components of the probabilistic finite state models are the time distributions which specify how much time is spent in each state and the probabilities associated with the state transitions. Experiments are required to determine the time distributions and state transition probabilities. For this project, a set of experiments was performed using simulated human-computer interfaces to a CASE system component which calculates time performance of software for parallel architectures. The data collected during the experiments were used to complete and refine the probabilistic finite state models. The refined models were then exercised as predictors and the generated data were statistically analyzed to compare the interfaces. The results demonstrated that the modeling and evaluation technique can be used effectively in the design and refinement of human-computer interfaces.
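
A minimal Python sketch of the prediction step, assuming invented states, exponential dwell times, and transition probabilities (the dissertation fits these quantities from experimental data and does not prescribe particular distributions):

    # Using a probabilistic finite state model as a predictor.
    import random

    # state -> (mean dwell time in seconds, [(next state, probability), ...]);
    # all values are invented for illustration.
    MODEL = {
        "menu":   (1.2, [("dialog", 0.7), ("menu", 0.3)]),
        "dialog": (3.0, [("done", 0.6), ("menu", 0.4)]),
    }

    def choose(transitions):
        """Pick the next state according to the transition probabilities."""
        r, acc = random.random(), 0.0
        for nxt, p in transitions:
            acc += p
            if r <= acc:
                return nxt
        return transitions[-1][0]  # guard against floating-point round-off

    def simulate(model, start="menu", end="done"):
        """One simulated interaction; returns the predicted total time."""
        state, t = start, 0.0
        while state != end:
            mean, transitions = model[state]
            t += random.expovariate(1.0 / mean)  # exponential dwell time
            state = choose(transitions)
        return t

    times = [simulate(MODEL) for _ in range(10_000)]
    print(f"predicted mean task time: {sum(times) / len(times):.2f} s")

Running many such simulated sessions for each candidate interface yields distributions of predicted task times that can then be compared statistically, which is the spirit of the evaluation described above.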

The Effects of Progressive Levels of Interactivity in an Interactive Video Lesson on Achievement, Attitude and Peer Interaction

Bailey, Margaret Lynn; Kansas State University Ph.D. 1992, 117 pages; University Microfilms Order Number ADG92-29230.

This study compared the effects of three variations in interactivity in an interactive video lesson on achievement of lesson concepts, attitudes regarding the learning activity, and frequency and type of peer interaction among undergraduate physics students.

Interactivity was defined as the level of opportunity for interaction between the user and the interactive video program. Three variations of the same software program were created with the following programmed variations in level of interactivity: No Interactivity (no opportunities for interaction), Low Interactivity (opportunities for interaction limited to control of pace, embedded questions, feedback and remediation on an incorrect response) and High Interactivity (increased opportunities for interaction including control of sequence, pace, videodisc controls, number of rope experiments, embedded questions, feedback and choice for remediation).

Subjects (N = 52) were volunteers from an undergraduate course in physics. Subjects were matched with a peer and the pairs were randomly assigned to one of three treatment groups. Subjects were audiotaped for transcription of verbal interaction behavior. After completing the interactive video lesson, subjects independently completed a criterion-referenced posttest and attitude scale.

Group means for recall and transfer posttest scores, attitude scores, overall frequency of verbal interaction, and frequency of specific types of verbal interaction (task-related, procedure-related, socio-emotional, and off-task) were tested for significance with a one-way analysis of variance. The analyses revealed no significant difference between groups on overall posttest scores. The High Interactivity group, however, scored significantly lower on the recall portion of the posttest than did the No and Low Interactivity groups. In addition, the analyses revealed significantly lower attitude scores for the No Interactivity group when compared to the Low and High Interactivity groups.

Analyses regarding peer interaction behavior revealed that the No Interactivity group had significantly less frequent peer interactions than the Low and High Interactivity groups. Further, the No Interactivity group had significantly less frequent task-related and socio-emotional peer interaction than the Low and High Interactivity groups. The High Interactivity group had significantly more frequent procedure-related interaction than did the No or Low groups. Finally, groups did not significantly differ in the frequency of off-task interaction.

Computer Anxiety and the Computerized Writing Classroom: A Qualitative and Quantitative Study

Gos, Michael Walter; Purdue University Ph.D. 1992, 200 pages; University Microfilms Order Number ADG93-01301.

Beginning in the mid-1980s, there has been a move toward using computers in writing classrooms at all levels. While the reviews of their effectiveness are mixed, computers continue to play a larger role in the teaching of composition as time goes on, possibly because today, and in the foreseeable future, computers are the way we write at work.

Traditionally, students have been excluded from literacy, and hence, empowerment, because of economics and social class. But today, with the predominance of computerized writing, both in the classroom and at work, we are finding that a new exclusionary factor is surfacing: computer anxiety.

This study, structured in two phases, looks at computer anxiety in the composition classroom in an effort to find ways to deal with the problem so students can succeed at computerized writing. Phase one consisted of a multiple case study of two computer anxious students and preliminary quantitative studies of six other computer anxious students. Phase two examined 185 subjects with respect to prior experience, and eight computer anxious subjects with respect to various personality traits.

Findings show that computer anxiety is strongly correlated not with the amount of prior experience, but rather with the pleasantness or unpleasantness of that experience (r = .75954). Subjects in the study who had no previous experience with computers were also without anxiety. Further, computer anxiety may actually be programming anxiety in disguise. Students who were computer anxious often talked about bad programming experiences as the genesis of their problem.

Students who did prior planning and were adventuresome and/or self-reliant had a better chance of overcoming computer anxiety than did their less adventuresome and self-reliant counterparts.

Task avoidance, composing with pen and paper, and editing on screen may all predispose the computer anxious student to failure in overcoming the problem.

The results of this study suggest that instructors in computerized composition classes should identify computer anxious students when possible, strongly discourage absences, especially early in the course, pay special attention to keeping the students on-task as much as possible, and encourage them to write on-line but edit on hard copy.

Effects of Animation and Manipulation on Adult Learning of Mathematical Concepts

Hsieh, Feng-Jui; Purdue University Ph.D. 1992, 269 pages; University Microfilms Order Number ADG93-01313.

This study investigated the instructional effectiveness and motivational appeal of animation and manipulation on adults' learning of mathematical concepts in a computer-based lesson.

The subjects were 54 college students who participated in two CBL sections as part of their mathematics class. They were randomly assigned to receive instruction with either animation or no animation, as well as manipulation or no manipulation. The computer-based lesson was developed by the researcher and introduced the concept of Venn diagrams.

Achievement was measured immediately after the two CBI lessons by both paper-and-pencil tests and tests on the computer. One week later, a paper-and-pencil test was distributed to evaluate students' retention. Continuing motivation was assessed through a questionnaire.

Findings included: (1) Animation enhanced adults' retention when the learning tasks required high-level cognitive processes such as analysis or synthesis; (2) Animation did not help adults' learning or retention when the learning tasks required mainly the comprehension of mathematical concepts; (3) Animation increased continuing motivation; (4) Manipulation helped the transfer of mathematical concepts learned through a computer to paper-and-pencil tests; (5) Manipulation did not promote intrinsic motivation. Recommendations for further studies are also provided.

Visual Search and VDUs

Scott, Derek; University of Durham (United Kingdom) Ph.D. 1991, 380 pages; University Microfilms Order Number ADGD-97867.

Available from UMI in association with The British Library. Requires signed TDF.

This wide-ranging study explored various parameters of visual search in relation to computer screen displays. Its ultimate goal was to help identify factors which could result in improvements in commercially available displays within the 'real world'. Those improvements are generally reflected in suggestions for enhancing the efficiency with which information can be located, through an acknowledgment of the visual and cognitive factors involved.

The thesis commenced by introducing an ergonomics approach to the presentation of information on VDUs. Memory load and attention were discussed. In the second chapter, literature on general and theoretical aspects of visual search (with particular regard for VDUs) was reviewed.

As an experimental starting point, three studies were conducted involving locating a target within arrays of varying configurations. A model concerning visual lobes was proposed.

Two text-editing studies were then detailed showing superior user performances where conspicuity and the potential for peripheral vision are enhanced. Relevant eye movement data was combined with a keystroke analysis derived from an automated protocol analyser.

Results of a further search task showed icons to be more quickly located within an array than textual material. Precise scan paths were then recorded and analyses suggested greater systematicity of search strategies for complex items.

This led on to a relatively 'pure' search study involving materials of varying spatial frequencies. Results were discussed in terms of verbal material generally being of higher spatial frequencies, and of how the ease of resolution and the greater cues available in peripheral vision can result in items being accessed more directly.

In the final (relatively applied) study, differences in eye movement indices were found across various fonts used.

One main conclusion was that eye movement monitoring was a valuable technique within the visual search/VDU research area in illuminating precise details of performance which otherwise, at best, could only be inferred.

Allocation of Inspection Functions Between Humans and Computers

Hou, Tung-Hsu (Tony); State University of New York at Buffalo Ph.D. 1992, 233 pages; University Microfilms Order Number ADG93-01855.

The objectives of this research are to compare hybrid systems to human and automated inspection systems, to demonstrate the feasibility of hybrid inspection systems, and to develop a framework for allocating humans and computers in an inspection system under various conditions.

Based on a search-decision model of inspection (Drury, 1978), two hybrid human-computer inspection systems were developed. Their performance in inspecting surface mount device images was compared to that of human inspection and two automated inspection systems. Missing components, wrong-sized components, and misaligned components were used as the three fault types in this investigation. Experimental results showed that both hybrid systems were better in inspection accuracy than the two automated inspection systems, and one hybrid system performed better than human inspection. The feasibility of using both humans and computers in inspection has been clearly demonstrated.

The research also indicated that humans were not significantly affected by contrast while the performance of the systems involving computers deteriorated with decreased contrast levels. It was also shown that humans were better at detecting missing components and wrong-sized components while computers were better at detecting misaligned components.

Based on the above findings, a procedure was proposed for the allocation of inspection functions under different conditions. A neural net model was applied to learn the relationships between the conditions and the performance of each alternative inspection system. The neural net was superior to a random selection process in predicting the right system design. The results demonstrate the feasibility of the framework for allocating humans and computers in an inspection system.
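
The allocation step could look roughly like the following Python sketch, in which a tiny perceptron-style learner (standing in for the dissertation's neural net, whose architecture and data are not given here) maps invented task conditions to a recommended inspection configuration.

    # Hypothetical sketch: learn which inspection configuration to
    # recommend from task conditions.
    import random

    SYSTEMS = ["human", "automated", "hybrid"]
    # features: (contrast level, 1 if fault is misalignment else 0)
    DATA = [((0.9, 0), "automated"), ((0.3, 0), "human"),
            ((0.9, 1), "automated"), ((0.3, 1), "hybrid")]

    weights = {s: [random.uniform(-0.1, 0.1) for _ in range(3)]
               for s in SYSTEMS}

    def score(s, x):
        return weights[s][0] + weights[s][1] * x[0] + weights[s][2] * x[1]

    def predict(x):
        return max(SYSTEMS, key=lambda s: score(s, x))

    for _ in range(200):  # simple perceptron updates
        for x, label in DATA:
            guess = predict(x)
            if guess != label:
                for i, xi in enumerate((1.0, x[0], x[1])):
                    weights[label][i] += 0.1 * xi
                    weights[guess][i] -= 0.1 * xi

    print(predict((0.35, 1)))  # expect a hybrid recommendation at low contrast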

Building User Interfaces with Lightweight Objects

Calder, Paul Robert; Stanford University Ph.D. 1993, 189 pages; University Microfilms Order Number ADG93-26437.

Computer applications with graphical user interfaces are difficult to build because application programmers must deal with many low-level details. One promising solution to this problem is an object-oriented toolkit, which offers predefined components that serve particular user interface needs. However, the components that most toolkits provide are complex and costly in their use of computer resources. User interface programmers cannot use these components for many kinds of applications because the resulting implementation would be awkward and inefficient.

This dissertation describes a new way of building user interfaces from small, simple components that programmers can use in large numbers to define the appearance of application views. Foremost among these components is a lightweight graphical component called a glyph. By using glyphs and other predefined components, programmers can assemble powerful applications with substantially less effort than with other techniques.

To show that the components are simple and effective, I built a prototype toolkit, named InnerViews, and used it to implement a document editor that uses a glyph for each character in the document. The editor's performance is comparable to that of similar editors built with current tools, but its implementation is much simpler. I used the editor to prepare and publish this dissertation.

The success of InnerViews in the text and graphics domains suggests that similar implementation benefits might be seen in building applications that support other media such as sound, video, and animation. Many of the techniques that make glyphs practical should also be valuable in designing and implementing lightweight components for these new domains.
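
The glyph idea corresponds to what is now widely known as the flyweight pattern: one shared object per character, so a document of n characters holds n cheap references rather than n heavyweight widgets. A hedged Python sketch (class names invented; the dissertation's toolkit is not reproduced here):

    # Flyweight-style sketch of the glyph idea.

    class CharGlyph:
        """Immutable, shareable appearance object for one character."""
        _cache = {}
        def __new__(cls, ch):
            if ch not in cls._cache:
                g = super().__new__(cls)
                g.ch = ch
                cls._cache[ch] = g
            return cls._cache[ch]
        def draw(self, x, y):
            print(f"draw {self.ch!r} at ({x}, {y})")

    class Document:
        def __init__(self, text):
            self.glyphs = [CharGlyph(c) for c in text]  # shared instances
        def render(self):
            for i, g in enumerate(self.glyphs):
                g.draw(i * 8, 0)  # position is extrinsic, not stored in glyph

    doc = Document("abba")
    doc.render()
    print(doc.glyphs[0] is doc.glyphs[3])  # True: the 'a' glyph is shared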

Performance Aspects of Computers with Graphical User Interfaces

Gupta, Aloke; University of Illinois at Urbana-Champaign Ph.D. 1993, 119 pages; University Microfilms Order Number ADG93-29049.

Graphical interfaces and windowing systems are now the norm for computer-human interaction. Also, advances in computer networking have given computer users access to immense distributed resources accessible from anywhere on the network. In this setting, the desktop, or personal, computer plays the role of a user-interface engine that mediates access to the available resources. Interface paradigms, such as the "desktop metaphor" and "direct manipulation," provide the user with a consistent, intuitive view of the resources. Traditional computer research has focused on enhancing computer performance from the numerical processing and transaction processing perspectives. In the research described in this thesis, a systematic framework is developed for analyzing and improving the performance of window systems and graphical user interfaces. At the system level, a protocol-level profiling strategy has been developed to profile the performance of display-server computers. A sample protocol-level profiler, Xprof, has been developed for applications under the X Window System. At the microarchitecture level, the memory access characteristics of windowing programs are studied. Cache tradeoffs for a frame-buffer cache are presented. A cache organization is proposed to improve frame-buffer performance.
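
In the spirit of protocol-level profiling (though not Xprof's actual implementation or output format, which the abstract does not describe), one can tally counts and service time per request type from a captured trace, as in this hypothetical Python sketch:

    # Tally counts and time per protocol request type from a trace.
    from collections import defaultdict

    # (request type, milliseconds spent servicing it); values invented.
    trace = [("CreateWindow", 2.1), ("PolyText8", 0.4), ("PolyText8", 0.5),
             ("CopyArea", 1.7), ("PolyText8", 0.4)]

    counts = defaultdict(int)
    total_ms = defaultdict(float)
    for req, ms in trace:
        counts[req] += 1
        total_ms[req] += ms

    for req in sorted(total_ms, key=total_ms.get, reverse=True):
        print(f"{req:14s} n={counts[req]:3d} total={total_ms[req]:5.1f} ms")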

Layout Appropriateness: Guiding User Interface Design with Simple Task Descriptions

Sears, Andrew Lee; University of Maryland College Park Ph.D. 1993, 217 pages; University Microfilms Order Number ADG93-27491.

Layout Appropriateness is a design philosophy that can result in interfaces that are not only faster, but are preferred by users. Layout Appropriateness refers to the concept of designing an interface that is in harmony with the users' tasks. Using descriptions of the sequences of actions users perform and how frequently each sequence is applied, interfaces can be organized to be more efficient for the users' tasks.

Simple task descriptions have proven to be useful both for designing new interface widgets and for organizing widgets within an interface. The benefits of simple task descriptions were demonstrated in two applications. First, simple task descriptions were used to help designers develop more efficient interfaces. Once designers select the widgets necessary for an interface, they must decide how to organize these widgets on each individual screen. To aid in this process, a task-sensitive metric, Layout Appropriateness (LA), was developed. Given the widgets to be used and the simple task description, designers can use LA to evaluate the appropriateness of an interface layout. The effectiveness of LA in predicting user performance and preferences was tested in a controlled experiment with eighteen subjects. As predicted, interfaces with better LA values were reliably faster than interfaces with poorer LA values. In addition, interfaces with better LA values were preferred by users.
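
A hedged sketch of the underlying computation: weight the pointer travel between widgets by how often each action sequence occurs. The layout coordinates, sequences, and frequencies below are invented, and Sears' full metric additionally normalizes this cost against an optimal layout.

    # Layout-Appropriateness-style cost: frequency-weighted travel.
    from math import dist

    layout = {"open": (0, 0), "edit": (0, 40), "save": (200, 200)}

    # (sequence of widgets used, relative frequency)
    tasks = [(["open", "edit", "save"], 0.8),
             (["open", "save"],         0.2)]

    def layout_cost(layout, tasks):
        cost = 0.0
        for seq, freq in tasks:
            for a, b in zip(seq, seq[1:]):
                cost += freq * dist(layout[a], layout[b])
        return cost

    print(f"weighted pointer travel: {layout_cost(layout, tasks):.1f}")
    # Moving 'save' nearer to 'edit' lowers this cost, i.e. the layout
    # becomes more appropriate to the task mix.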

Second, simple task descriptions were applied to the task of organizing items within a pull-down menu. Considering the benefits of both traditional (alphabetical, numerical, etc.) and frequency-ordered menus led to the creation of a more efficient organization called split menus. In split menus three to five frequently selected items are moved to the top of the menu. Field studies at the NASA Goddard Space Flight Center and the University of Maryland demonstrated the potential of split menus. Selection times were reduced by between 17 and 47%, and 90% of the users preferred the split menus. A controlled experiment with thirty-six subjects was conducted to evaluate the efficacy of split menus. A theoretical model accurately predicted when split menus would be faster than alphabetic menus. Split menus were also preferred over both alphabetical and frequency-ordered menus.

Information-Seeking in Hypertext: Multiple Access Methods in a Full-Text Hypertext Database

Liebscher, Peter; University of Maryland College Park Ph.D. 1993, 267 pages; University Microfilms Order Number ADG93-27456.

This study focussed on the interaction between users, the system, and tasks during information retrieval in a full-text hypertext system. It drew on mental model theory to examine users' views of information retrieval systems and how these views affected their information-seeking behavior in a hypertext system offering multiple access methods. The access methods provided for this study were an alphabetical index, a hierarchical subject index, Boolean string search, and a network browser. Eleven undergraduate participants and one expert in hypertext systems were given a number of tasks and were observed intensively and individually over five sessions, each lasting approximately 2 to 2.5 hours. Access methods were used individually for the first four sessions. For the final session, all four access methods were available. Participants were given a total of 22 information retrieval tasks. Because the study was exploratory, no hypothesis testing was done. Eleven research questions served to guide the design of the protocols and the observations themselves. Results indicate that undergraduate users of new hypertext information systems use elements of existing mental models for information systems as they interact with the new system. Benefits are quick learnability of system syntax and the ability to apply known information-seeking strategies in the new environment. Drawbacks are inefficient or failed searches due to misapplication of mental models for familiar systems. String search proved to be the overwhelming choice of access method for hands-on retrieval tasks. Selection of string search was independent of task. Principal reasons for selecting string search were a desire to do word, rather than conceptual, searches and to minimize the amount of text scanned or read. Hypertext features, such as text-embedded highlighted links, were also used to minimize reading. User characteristics, such as subject knowledge, computer experience, and gender, rather than task characteristics, were factors in the selection of access method. Overall, participants were successful in their retrieval tasks, but more successful using string search and the network browser than using the indexes. However, success was often attained through serendipitous browsing, which indicates that the relatively small size of the database was also a factor.
