Copyright 1998. David Gilmore, Elizabeth Churchill, & Frank Ritter

These lecture notes were not written as a course handout, but as a resource for lectures. Therefore, references and comments will not always be complete.

Task Analysis (TA)

Task analysis encompasses a multitude of techniques for representing what people do when they perform a particular task.

There are many different techniques, each focussing on different aspects of the task.

The goals of task analysis vary too.

However, 'task' is a very nebulous term and needs to be set in the context of the many other things that people do.

Most TA techniques ignore the work and activities (remember activity theory?) part of people's lives and focus on tasks, operations and actions. Few would argue that this is a good thing -- all TA needs to be done with an understanding of the general context of work and activities.

For example, consider coffee making. As an isolated task, it may appear very straightforward, but when one considers a wider activity context (the spatial layout, who occasionally shares the physical space, what other tasks occur concurrently, and so on) the analysis can become much more complex. The activity context for coffee making may be making breakfast, getting the kids ready for school, and not being late for work, or it may be a quite different one of welcoming important business visitors and making a good impression.

It is especially common in work places to discover that the activity context produces changes in the way people do tasks from how they would ideally do them, or from the way they are supposed to do them.

Another important distinction is between the focus of different analyses. We can study

Many would argue that a good task analysis would include analyses of each of these.

Historically

Time studies (Taylor) and motion studies (Gilbreth) came first (1920s) -- studying patterns of behaviour as tasks were executed. (Back to lecture 1 on the historic roots of HF.)

Given the nature of psychological knowledge at the time, the absence of concern for users' cognition is not surprising: psychology was focussed primarily on observable behaviour. Most of the systems initially studied were simple enough that observed behaviour was adequate for understanding how to improve them.

But time and motion studies looked purely at sequences of behaviour. This level of analysis breaks down when people juggle multiple tasks or are interrupted. Furthermore, time and motion studies do not address the fact that even sequences of behaviour have underlying structures.

This led to a focus on the goals and subgoals of behaviour (hierarchical task analysis) at the expense of sequencing information. This is still the dominant form of task analysis.

More recently people have been trying to move beyond goals towards understanding "activities" in which sequencing, multiple concurrent tasks, interruptions and other aspects are all taken into account -- but no formal methods as yet.

The Use of Task Analysis in Human Factors

Task analysis has two key roles in human factors

  1. to enable us to describe and understand behaviour - especially given that actual and supposed behaviour in organisations differ.
  2. to enable us to understand how well a new system 'fits' the old one. Given a task analysis of how people currently do a task, or would conceive of doing it, we can ask how much that overlaps with the formal analysis of how it should be done in the new system.

Furthermore, we can take two types of task analysis and look at their relationship -- for example, is the relationship between goals/subgoals and the chronology of actions simple or complex? Where there is a complex relationship between goal structure and action structure, we can assume the interface may be hard to use.

Main Problems

The major problems of task analysis are

  1. Hard to accommodate tasks completed by more than one individual, except in very simple cases.
  2. Representation of a task analysis is complex, even when a simple task is studied.
  3. Complex tasks (e.g. air traffic control) can be analysed, but the analyses become very unwieldy very fast, and their interpretation is often limited to those who conducted the analysis.


Terminology

A great deal of different terminology is used, and much of it in ways that seem designed to confuse the novice.

For example, goal and task are used interchangeably by some and to mean importantly different things by others. Here's a glossary that you can use:

GOALS
are external tasks, or mental entities, or our perception of the states of the world that we wish to bring about. Goals are external to the devices being used to bring them about. We achieve goals.
 
TASKS
are those processes applied through some device in order to achieve the goals. Tasks are internal to the system / computer / device being used. Tasks usually involve a sequence of steps (often a controlled sequence of steps, where conditionals and loops might occur). We perform tasks.
 
ACTIONS
are tasks with no problem-solving and no internal control structure. We do actions.

These distinctions are in part influenced by experience, since expertise can change the way we perceive the changes we can effect in the world (i.e. our goals). Expertise can also make tasks become simple actions. So, even a Goal-Task-Action analysis would need to be performed with other assumptions, such as the skill-level of users in mind.

METHODS
are sequences of actions that achieve a task. They are distinct from the task in that the task may be performable by more than one method -- renaming a file, for example, might be done through a menu command or by editing the name directly.
 
SELECTION
is the term often used to indicate the choice (or knowledge used to choose) of a method for performing a task.
 
OBJECTS
are the focus of (the entity involved in) any action.

Although TA assumes that tasks may be performable in different ways and tries to represent these choices, the more complex mapping between goals and tasks is never clearly addressed (users may have multiple, conflicting goals, and may not be certain whether a given task will help them achieve their goal).

Hierarchical Task Analysis

Involves decomposing tasks into subtasks and representing the order and structure through structure diagrams.

Tasks / subtasks are represented down the page and sequencing is represented across from left to right.

Performing HTA is a time-consuming process with many choices. It is fussy, and takes effort when you could be having fun coding or chatting to users.

In essence it is a representational device rather than a technique.

Read p. 416 of Preece for more details of the technique.
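The decomposition itself can be captured as a nested structure in code. The following is a minimal sketch (the coffee-making task names are invented, and this textual rendering indents subtasks rather than using the graphical structure-diagram notation):

```python
# A hierarchical task decomposition as a nested (name, subtasks) tuple.
# Task names are illustrative only; a real HTA would also record plans
# governing the order in which subtasks are carried out.
hta = ("make coffee", [
    ("boil water", [
        ("fill kettle", []),
        ("switch kettle on", []),
    ]),
    ("put coffee in cup", []),
    ("pour water into cup", []),
])

def render(task, depth=0):
    """Return one line per task/subtask, indented by level in the hierarchy."""
    name, subtasks = task
    lines = ["  " * depth + name]
    for sub in subtasks:
        lines.extend(render(sub, depth + 1))
    return lines

print("\n".join(render(hta)))
```

Sequencing here is implicit in the order of the subtask lists; a fuller treatment would attach explicit plans to each node.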

Cognitive Task Analysis

Cognitive task analysis focuses on what the user knows and frequently is concerned with the quality of the mapping between a representation of the user's knowledge and that required by the system.

There are few established techniques or representations, though many claim to pursue this approach.

Logical Analysis (e.g. Command language grammar)

Sometimes one wants to analyse what the system as a whole must do, not just what the user must do or just what the computer must do.

This can be called a logical task analysis (though it is also called conceptual design).

This can be especially useful if one is designing a new way to do a familiar task, since the logical description should apply equally to the before and after systems.

This can enable one to look at skill and work requirements. A logical representation can be overlaid with indicators of which bits the person does and which bits the computer does -- a comparison of the before and after indicates changes in work practices as a result of the technology.
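As a rough illustration of such an overlay (the steps and allocations below are entirely invented), one can list the logical steps once and record who performs each step before and after the new system:

```python
# A logical task description overlaid with who does each step,
# before and after a new system. Steps and allocations are invented
# for illustration.
steps = ["gather data", "check data", "file record", "notify customer"]

before = {"gather data": "person", "check data": "person",
          "file record": "person", "notify customer": "person"}
after  = {"gather data": "person", "check data": "computer",
          "file record": "computer", "notify customer": "person"}

# The steps whose allocation changed indicate changes in work practice.
changes = [s for s in steps if before[s] != after[s]]
print(changes)
```

The logical description stays fixed; only the person/computer overlay changes, which is what makes the before/after comparison possible.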

GOMS

Goals, Operators, Methods and Selection rules (Card, Moran & Newell, 1980; 1983)

There is a whole family of GOMS variants (NGOMSL, CPM-GOMS, the KLM) - most are far from easy to apply and use. Whereas HTA represents task structure and sequence, GOMS represents cognitive structure and sequence.

Cognitive structure is reflected in starting from goals, not tasks, and in the explicit use of selection rules to indicate how methods are chosen for goals.

Furthermore, the direct mapping from goals to methods avoids some of the issues in the goal-task mapping mentioned earlier.

Goals
Goals are defined as desired states of affairs. They are brought about by the application of methods and operators.
 
Operators
Elementary perceptual, motor or cognitive actions. Intended either to change the world (a normal key-press) or to change our knowledge about the world (e.g. reading).
 
In practice this is not quite true -- operators are those subgoals whose methods of solution we choose to analyse no further. The choice of appropriate operators is critical to a GOMS analysis, yet there are no clear guidelines for making it; most of the time it is not difficult, however, and the problems are uniform across possible designs, so comparisons remain fair.
 
Methods
Methods describe procedures for achieving goals. They contain a sequence of subgoals and/or operators, with conditions potentially attached to each part of the method. These conditions relate to the current task environment (e.g. repeat until no tasks left).
 
Selection Rules
Selection rules are the basic control structure of the model. Where multiple methods are available the selection rules enable the user to choose between the methods.
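A minimal sketch of how these components fit together, in Python. The goal, methods, operators and selection rule below are all invented for illustration; they are not taken from any published model:

```python
# Methods are procedures that, given the task context, expand into a
# sequence of operators. Both methods achieve the same 'delete-word' goal.

def backspace_method(context):
    # Operators: one backspace keystroke per character of the word.
    return ["press-backspace"] * context["word_length"]

def select_and_cut_method(context):
    # Operators: select the word with the mouse, then delete it.
    return ["point-to-word", "double-click-mouse", "press-delete"]

METHODS = {
    "delete-word": {
        "backspace": backspace_method,
        "select-and-cut": select_and_cut_method,
    },
}

def select_method(goal, context):
    """Selection rule (invented): backspacing wins for short words;
    otherwise select the word with the mouse and delete it."""
    if goal == "delete-word":
        name = "backspace" if context["word_length"] <= 3 else "select-and-cut"
        return name, METHODS[goal][name](context)
    raise ValueError("no method known for goal: " + goal)

print(select_method("delete-word", {"word_length": 8}))
```

Note how the selection rule consults only the current task context -- exactly the restriction criticised in the commentary below.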

Commentary

In essence, a GOMS model is no different from any other hierarchical task analysis examining goals and subgoals. The main differences are that it has formalised its components, and that it only claims to describe expert, error-free behaviour. Thus, for example, GOMS analyses presume that methods are known beforehand, not calculated during performance.

Since there is no such thing as expert, error-free performance, many people have questioned the utility of such analyses. However, even if flawed, they are better than no task analysis at all.

Problems are that there is no clear specification of what can be used in selection rules -- there is an implication that it should be the current task context, but real behaviour undoubtedly allows for selection based on previous selections (for example).

Keystroke-level modelling

When the operators are analysed down to elementary perceptual, motor and cognitive actions (e.g. keystrokes), then by classifying the keystrokes it is possible to make time predictions for expert, error-free performance.

The execution of a unit-task requires operators of (basically) 4 types:

  1. Keystrokes - unit time based on typing speed (0.08 - 1.2 s)
  2. Pointing - moving the mouse to a target (clicking is a keystroke) (generally 1.1 s)
  3. Homing - moving the hand to / from mouse and keyboard (generally 0.4 s)
  4. Drawing - dragging the mouse in straight-line segments (0.9n + 0.16l s, where n = number of segments and l = total length of the segments)

To these should be added some number of mental operators (1.35 s each) and, where it limits the user's task performance, an estimate of the system's response time.

The number of mental operators comes from a set of rules -- basically one is placed between all operators, except those linked through existing knowledge or skill (e.g. the keystrokes within a single word, or a point-then-click with the mouse).

Where there are selection rules governing the choice of methods then it is up to the analyst to decide whether to go for best or worst case time predictions.

Example

Method

  1. Delete 3rd clause and H [mouse] PK PK M K [D]
  2. Insert it in front of 1st clause. PK M K [l] K [ESC]
  3. Replace ": o" with "O". PK M K [R] K [SHIFT] H [keyboard] 2K [O ESC]
  4. Replace "T" by ": t". H [mouse] PK M K [R] H [keyboard] 4K [: SPACE t ESC]
  5. Delete 3rd clause and H [mouse] PK PK M K [D]
  6. Insert it in front of 2nd clause PK M K [l] K [ESC]
  7. Find next task. M K [F]

Time Predictions

T_execute = [24 t_K + 8 t_P + 5 t_H] + 7 t_M
= [24(0.15) + 8(1.03) + 5(0.57)] + 7(1.35)
= 14.7 + 9.4
= 24.1 s
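The prediction can be reproduced with a short script. This is a minimal sketch using the operator times from the worked example above (note these differ slightly from the rule-of-thumb figures listed earlier):

```python
# Keystroke-level model time prediction, using the operator times
# from the worked example (an average typist is assumed).
OPERATOR_TIMES = {
    "K": 0.15,  # keystroke
    "P": 1.03,  # pointing with the mouse
    "H": 0.57,  # homing between keyboard and mouse
    "M": 1.35,  # mental operator
}

def klm_time(counts):
    """Predicted execution time, given operator frequencies."""
    return sum(OPERATOR_TIMES[op] * n for op, n in counts.items())

# Operator counts from the seven-step method above.
print(round(klm_time({"K": 24, "P": 8, "H": 5, "M": 7}), 1))  # 24.1
```

Best- and worst-case predictions under different selection rules amount to running the same calculation with different operator counts.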

Commentary

Although closely related to GOMS, note how keystroke-level modelling is really closer to time-and-motion (chronological) analysis than goal/subgoal analysis. It assumes a concentration on one task at a time, no interleaving of goals, no interruptions, a single method, and so on.

Indeed, KLM analyses can be carried out without worrying about goals and selection rules.

Quite a considerable effort has gone into trying to make them more usable - particularly by building computer tools to apply them (Nichols & Ritter, 1995; Beard, Smith, & Denelsbeck, 1996). The tools themselves, however, may be like the cobbler's children: not well provided with usability themselves.

One of these potential tools is Soar, an implementation of a problem-solving architecture. You write the goals, methods, operators and selection rules in a production-rule syntax, and the Soar architecture runs the model, learns skills and can provide timing predictions.

Within their limitations, these approaches attempt to support usability testing without real users. They do not (yet) readily offer up information for formative evaluations; they will not spot systems that encourage errors; and they will not evaluate visibility / feedback differences between systems (users are presumed to have all the required knowledge). But they are the way forward.

References

Dismal, example applications and tool for applying the KLM model

Reference to Soar Frequently Asked Questions (FAQ) list

Kieras's notes on the KLM in the library.

Richard Young works in this area as well, and has an interesting web site.

Beard, D. V., Smith, D. K., & Denelsbeck, K. M. (1996). Quick and dirty GOMS: A case study of computed tomography interpretation. Human-Computer Interaction, 11, 157-180.

Card, S. K., Moran, T. P., & Newell, A. (1980). The keystroke-level model for user performance time with interactive systems. Communications of the ACM, 23(7), 396-410.

Card, S., Moran, T., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Gray, W. D., John, B. E., & Atwood, M. E. (1993). Project Ernestine: Validating a GOMS analysis for predicting and explaining real-world task performance. Human-Computer Interaction, 8(3), 237-309.

John, B. E., Vera, A. H., & Newell, A. (1994). Towards real-time GOMS: A model of expert behavior in a highly interactive task. Behavior and Information Technology, 13, 255-267.

John, B. E., & Kieras, D. E. (1996). Using GOMS for user interface design and evaluation: Which technique? ACM Transactions on Computer-Human Interaction, 3(4), 287-319.

Kieras, D. E. (1988). Towards a practical GOMS model methodology for user interface design. In M. Helander (Ed.), Handbook of Human-Computer Interaction. North-Holland: Elsevier Science.

Nichols, S., & Ritter, F. E. (1995). A theoretically motivated tool for automatically generating command aliases. In CHI '95, Human Factors in Computer Systems. 393-400. New York, NY: ACM.

Olson, J. R., & Olson, G. M. (1990). The growth of cognitive modeling in human-computer interaction since GOMS. Human Computer Interaction, 5(2 &3), 221-265.

