
A Generic Approach for Assessing Compatibility Between Task Descriptions and Interactive Systems: Application to the Effectiveness of a Flight Control Unit

Camille Fayollas, Célia Martinie, David Navarre and Philippe Palanque
Published: December 1, 2015
From the journal i-com, Volume 14, Issue 3

Abstract

Task models are a very powerful artefact for describing users’ goals and activities, and they contain a wealth of information that is extremely useful for designing usable interactive applications. Indeed, task models are one of the very few means for ensuring the effectiveness of an application, i. e. that the application allows users to reach their goals and perform their tasks. This paper presents a tool-supported framework for exploiting task models throughout the development process and even once the interactive application is deployed and used. To this end, we introduce a framework for connecting task models to an existing, executable, interactive application. The main contribution of the paper lies in the definition of a systematic correspondence between the user interface elements of the interactive application and the low-level tasks in the task model. Depending on whether the code of the application is available and on whether the application was prepared at programming time for such integration, we propose different tool-supported alternatives for establishing this correspondence. This task-application integration allows the exploitation of task models at run time, bringing the benefits of task models to any interactive application. The approach, the tools and the integration are presented on a case study of a Flight Control Unit (FCU) used in aircraft cockpits. This paper extends the article entitled ‘A Generic Tool-Supported Framework for Coupling Task Models and Interactive Applications’, which was presented at the 7th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS 2015). In this expanded version, a detailed description of the correspondence between annotations in the program of the interactive application and interactive tasks in the task models has been added. The complete version of the case study has also been integrated so that the application of each step of the proposed validation process is presented.

1 Introduction

Task models are considered by many as a cornerstone in the development process of usable interactive applications. Indeed, they provide a unique means for gathering information about users’ roles, goals and activities, whether for an extant or an envisioned system. However, at the same time, task models are also considered cumbersome, expensive to build and mainly useful in the early phases of the development process. When used throughout the development process as well as at operation time (when the system has been deployed and is currently used), task models bring many benefits, such as:

  1. Support the assessment of the effectiveness factor of usability by identifying which tasks are supported by the interactive application and which ones are not;

  2. Support the assessment of task complexity in terms of the perception, analysis, decision and motor actions required of users in order to reach a goal (Fayollas et al. 2014), and the assessment of operators’ performance in reaching a goal (Swearngin et al. 2013), which can lead to predictive workload assessment (O’Donnell and Eggemeier 1986);

  3. Support the construction of training material and training sessions of operators of complex systems (Martinie, Palanque, Navarre et al. 2011);

  4. Support the structuring and the construction of user documentation (Gong and Elkerton 1990);

  5. Support the heuristic evaluation of usability of interactive applications (better than when task models are not used) not only for single user applications (Cockton and Woolrych 2001) but also for multi user applications (Pinelle, Gutwin and Greenberg 2003);

  6. Support the identification of user errors and their impact on the overall performance for reaching the goals (Palanque and Basnyat 2004) as well as preventing those user errors (Paternò and Santoro 2002);

  7. Support the identification of tasks that are good candidates for migration towards an automation of the system (Martinie, Palanque, Barboni et al. 2011) but also towards other users in the context of collaboration (van Welie and van der Veer 2003);

  8. Make it possible to provide users with contextual help, i. e. explicit information about how (which tasks to perform) to reach the goal, both at design time (Pangoli and Paternò 1995) and from the current state of interaction while interacting with the system (Palanque and Martinie 2011);

  9. Support the redesign of the extant system by analyzing extant task models and producing task models for the future system, as promoted in the ADEPT framework (Wilson and Johnson 1996).

It is important to note that most of the potential benefits listed above require that the interactive application and the information represented in the task model are compatible. Such compatibility can be seen at the lexical level (each interface object corresponds to a low-level task in the task model and reciprocally), the syntactic level (the task structure and temporal operators are conformant with the availability of interface objects, i. e. compatible with the dialogue of the UI) and the semantic level (the interactive application allows users to reach the goals identified in the task model) (Palanque, Bastide and Sengès 1995).

This paper argues that it is possible to ensure compatibility between a task model and an interactive application. It goes beyond previous work in which compatibility was ensured by generating the application from the task model (as promoted in (Wilson and Johnson 1996)) or by “connecting” a model of the application with the task model (as promoted in (Palanque, Bastide and Sengès 1995) and (Barboni et al. 2010)). The proposed contribution is wider in the sense that it allows coupling task models with any existing interactive application (without having a model of it). Through this coupling, the paper provides support for taking advantage of the benefits of task-model-based approaches (listed above), even without following a model-based development of the interactive application.

In this expanded version of the article presented at the 7th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS 2015), a detailed description of the correspondence between annotations in the program of the interactive application and interactive tasks in the task models has been added. The complete version of the case study has also been integrated so that the application of each step of the proposed validation process is presented. In particular, the complete description of the following phases of the process has been integrated: task analysis and modelling, task models completeness and consistency, interactive application instrumentation, editing of correspondences between tasks and widgets, and testing task models consistency using co-execution.

The remainder of the paper is structured as follows. The next section provides a quick overview of related work on compatibility aspects between task models and interactive applications. Section III presents a stepwise process for extracting information about an interactive application and for connecting it to a task model to ensure consistency or to identify gaps. Section IV presents a co-execution environment making it possible to embed task models at execution time. Section V presents the application of the process and the use of the co-execution environment on a case study in the domain of interactive cockpits of large civil aircraft. The last section summarizes the contributions of the paper, makes their benefits and limitations explicit, and highlights future work.

2 Related Work on Task – Application Compatibility

There have been mainly two ways of addressing the issue of compatibility between task models and interactive applications: by generation of the application from the task model, and by defining a correspondence between a model (formal or not) of the application and the task model.

2.1 Generation of Application From a Task Model

From the seminal work of ADEPT (Wilson et al. 1993) and its design process based on task and domain models, many authors have followed and refined that approach, for instance in the CAMELEON framework (Calvary et al. 2003). The basic assumption of these approaches is that task models are the primary source of information and that it is possible to generate an interactive application from such information (while adding other ingredients such as UI guidelines, for instance). The main claim is that with such an approach it is possible to generate user interfaces for different platforms, thus reducing development costs. The main drawback is that it is difficult to integrate design and craft knowledge in such processes, ending up with stereotyped user interfaces far away (in terms of design and interaction techniques) from leading-edge applications.

2.2 Correspondence at Models Level

Another approach is to perform the integration with a (possibly formal) model of the interactive system. Such an approach was first introduced in (Palanque, Bastide and Sengès 1995). However, it requires a description technique able to encompass all the elements of the interactive application, including input device information, interaction techniques, as well as the non-interactive part (functional core) of the application. Another drawback is the high development cost of constructing the application and interaction models, usually limiting its use to safety-critical applications. Besides, such approaches are very different from current practice in interactive application development, where Rapid Application Development (RAD) toolkits are common. Lastly, task models and system models must be consistent, which is another cumbersome task to perform, as presented in (Navarre et al. 2001), where such compatibility was assessed through scenarios extracted from the task models and executed on the system model.

2.3 Issue of Having the Task Model at Run Time

An orthogonal issue with respect to the two approaches presented above lies in the fact that some of the advantages (such as contextual help) can only be available if the task models are embedded during the use of the application. Embedding task models at run time while executing the models of an interactive application was proposed in (Barboni et al. 2010), where the benefits were made available but again at the cost of building the interactive application via the construction of system models. This capability has also been used to modify the system behavior at run time. In (Blumendorf, Lehmann and Albayrak 2010), the current state of execution of the task models is used to modify the distribution of user interfaces over several devices. Lastly, automated testing of GUIs has been proposed in (Nguyen et al. 2014), but it is based on test cases (sequences of low-level GUI events) and not on task models.

3 A Systematic and Explicit Approach to Ensure Consistency Between User Tasks and Interactive Applications

The proposed approach overcomes (as much as possible) the limitations identified above while keeping the potential benefits. In order to reach this goal, our approach consists of:

  1. A process based on the synergistic use of task models and interactive application.

  2. A technique for instrumenting an existing application in order to be able to co-execute it with task models at run time.

3.1 Proposed Process

The process takes as input an existing interactive system and its software interactive applications. This process, illustrated in Figure 1, starts with two phases that can be carried out concurrently: task modelling and instrumentation of the interactive application. The task analysis and modelling phase consists in understanding user tasks with the interactive system and in describing them in task models. The particularity of this phase is that task models of the low-level interactions with the applications have to be produced, as these descriptions of the interactive tasks will be put in correspondence with the interactive application. The number of tasks to be described can be very large, so there may be several models describing tasks at various abstraction levels. All these models have to be linked to each other; this is depicted as the “integration of task models” phase in Figure 1. In this phase the models are simulated in order to check their completeness and consistency with regard to operational scenarios.

Figure 1: Process for validating the effectiveness of an interactive application.

On the interactive application side, the software is instrumented to enable the connection with task models. Then, a synergistic module automatically extracts widgets from the interactive application and interactive tasks from the task models. Interactive tasks are then put in correspondence with the widgets and their associated events (phase “Editing correspondences between tasks and widgets” in Figure 1). When trying to link interactive tasks to widgets and events, if tasks do not match widgets or events, incompleteness is detected and the task models have to be mended (return loop in Figure 1). Once the task models are complete, the instrumented interactive application can be co-executed with the task models. During this co-execution, inconsistencies between the task models and the interactive application can be detected, for example if the task model describes the use of one widget when another one should actually be used. In this case, the task models have to be corrected (return loop from the “testing via co-execution” box in Figure 1).

The required framework to support such correspondence edition and to support co-execution of the interactive application with task models is discussed in the next section.

Once the task models are complete and consistent with the instrumented interactive application, the co-execution framework can be exploited in order to provide support for training programs (Martinie, Palanque, Navarre et al. 2011), contextual help (Palanque and Martinie 2011) and user centered redesign. In the illustrative example, we describe how this co-execution framework provides support for assessing effectiveness of interactive applications, which can be exploited for the re-design of an interactive application.

3.2 Approaches for Co-Execution of Interactive Application and Task Models

Co-execution of system models and task models can be used to provide support for the development of single user applications (Barboni et al. 2010) and collaborative applications (Martinie et al. 2014). In this paper we propose a framework that enables the co-execution of an interactive application with its associated task models. For this purpose, a synergistic module, which consists of a correspondence editor and a co-execution controller (or simulation controller), has been built in order to be able to interface task models with an interactive application programmed with a textual programming language. Following the work proposed in (Barboni et al. 2010) and (Martinie et al. 2014), connecting task models and system models in a software development environment requires the following steps to be performed:

  1. On task modelling side:

    1. A set of interactive tasks (both input and output) must be extracted from the task specification (as they represent actions performed by the user and the feedback the system provides the user with).

    2. The task models simulation environment notifies the synergistic module of the evolution of the scenario under construction using a dedicated API (sending data from the simulator).

  2. On system modelling side:

    1. The activation and rendering functions have to be extracted as they represent the system inputs and outputs. They contain information about the widgets, the list of related events (relevant for correspondence), the activation rendering methods for each widget-event pair (i. e. how enabled / disabled widget statuses are rendered) and lastly the rendering methods (graphical representation of data within widgets).

    2. The system modelling execution environment executes the models and notifies the synergistic module of any change that happens using a dedicated API.

In the proposed approach, these mechanisms remain the same on the task models side, but on the system side, working without models requires building a co-execution framework with equivalent mechanisms, i. e.:

  1. For editing correspondences, the synergistic module of the framework needs to acquire what is provided by both activation and rendering functions: a list of widgets, the list of related events, the activation rendering methods and the rendering methods.

  2. For co-execution of the interactive application and associated task models, the synergistic module of the framework needs to be notified of any change: activation status of the widget (enabled / disabled) and rendering changes. The synergistic module also has to be capable of raising events (for example, to force widgets to trigger a particular event).

Providing such features depends on how the application has been built, i. e. whether or not it has been set up for co-execution.
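To make these two requirements concrete, the following is a minimal Java sketch of the kind of contract the synergistic module could expose; all names are hypothetical illustrations, not the actual API of the framework.

    import java.util.List;

    // Hypothetical contract between the instrumented application and the
    // synergistic module; names are assumptions for illustration only.
    interface SynergisticConnection {

        // Correspondence editing: what activation and rendering
        // functions would normally provide.
        List<String> widgets();
        List<String> events(String widget);
        List<String> renderingMethods(String widget);

        // Co-execution: notifications sent by the application...
        void notifyActivationChanged(String widget, boolean enabled);
        void notifyRenderingChanged(String widget, String property, Object value);

        // ...and events the synergistic module can raise on widgets
        // (e.g. to force a button to fire its selection event).
        void raiseEvent(String widget, String event);
    }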

3.3 Instrumentation of an Existing Interactive Application

When preparing the setup for co-execution, the effort required by the developer depends on the choice of technology or architecture made:

  1. A first possibility would be to ask developers to provide the activation and rendering functions as explained in the previous section, by implementing a dedicated API. This solution requires knowledge about the synergistic module and modifies the structure of the interactive application.

  2. Another possibility is to ask developers to instrument the code of the existing application with means to allow access to the pertinent widgets of the application, for both user inputs and system graphical outputs. Such a solution requires less effort from developers but requires precisely defining how the required features would be exposed (widgets, events, rendering methods…). With Java[1] for instance (since JDK 1.5), “annotations” are special software code elements that can be used both at compilation time and at run time in order to point out objects, attributes or methods, and to provide access to them.

The option of providing developers with a library of widgets already prepared for correspondence editing is a special case. Such a solution introduces a bias, as it requires the developer to work with this predefined set of widgets and prevents them from using new widgets. If new widgets are nevertheless needed, the developer will have to provide extra code to use them within the framework (which corresponds to the first two possibilities presented above).

Table 1 provides an overview of the options the application developer faces when connecting an interactive application to task models.

Table 1

Work to be done on the interactive application developer’s side.

| Technique | Correspondence editing | System driven simulation | Task driven simulation |
| Dedicated API | Provide: a list of widgets, a list of events, a list of rendering events | Notify user event occurrences; notify rendering events | Allow triggering of user events; notify rendering events |
| Code instrumentation | Provide access to widgets dedicated to user inputs and to system graphical outputs | Nothing (if the notification mechanism is embedded within the widgets) | Nothing (if the event triggering mechanism is embedded within the widgets) |
| Resource introspection | No work to be done | No work to be done | No work to be done |
| Runtime environment introspection | No work to be done | No work to be done | No work to be done |
| Modification of runtime environment | No work to be done | No work to be done | No work to be done |

3.4 Design and Development of the Synergistic Module

If the application is not instrumented for co-execution with task models, the effort will have to be assumed by the developers of the correspondence module. The complexity of such an approach thus depends on the technology used to develop the application:

  1. The technology used may provide mechanisms (at design time or at runtime) to discover interactive widgets and their features. These can be resource files describing the widgets and their layout (such as in Microsoft Visual Studio, or Java FX fxml files, Qt qml files…) or mechanisms provided by the runtime environment, as with Java Swing applications (at runtime it is possible to explore the widget tree of any Java Swing application).

  2. Another possibility is to modify the runtime environment and to enhance widgets with dedicated mechanisms (as described in the previous section). A good example of this possibility is the architecture of Java Swing, where it is possible to change the Look&Feel manager at runtime for any application, even if it has not been set up for it (for each Swing component, the Look&Feel manager defines both the rendering and how user events are handled).

For these two possibilities, however, there are two important limitations:

  1. Widgets that are not predefined within the framework but are used in the application cannot easily be used for correspondence editing, as they are difficult to detect and to analyze (for instance, to provide the list of produced events). Fixing this point thus depends on the technology used.

  2. Widgets that appear at run time cannot be detected at startup.
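As an illustration of the runtime introspection possibility mentioned above, here is a minimal sketch of exploring the widget tree of a running Java Swing application, using only the standard AWT/Swing API:

    import java.awt.Component;
    import java.awt.Container;
    import java.awt.Window;
    import java.util.ArrayList;
    import java.util.List;

    final class WidgetTreeExplorer {

        // Collect every component reachable from the currently open windows.
        static List<Component> collectWidgets() {
            List<Component> widgets = new ArrayList<>();
            for (Window window : Window.getWindows()) {
                collect(window, widgets);
            }
            return widgets;
        }

        private static void collect(Component component, List<Component> out) {
            out.add(component);
            if (component instanceof Container) {
                for (Component child : ((Container) component).getComponents()) {
                    collect(child, out);
                }
            }
        }
    }

Note that, consistent with the two limitations above, such a traversal only sees widgets instantiated at the time of the call and gives no semantic information about unknown widget types.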

In any case (except when a dedicated API for the connection has been implemented in the application), a set of functionalities has to be developed in the synergistic module in order to be able to adapt what is provided by the interactive application and to connect it to task models. Table 2 provides an overview of the work that has to be done by the synergistic module developer depending on the way the interactive application has been prepared for correspondence and co-execution. In this article, we present a solution based on code instrumentation as it: a) provides full support for connecting an interactive application with its associated task models; b) provides support for modularity of the framework; c) does not modify the architecture of the interactive application; d) is the most consistent with recent advances in software engineering.

Table 2

Work to be done on the synergistic module developer’s side.

| Preparation | Technique | Correspondence editing | System driven simulation | Task driven simulation |
| Prepared for correspondence and co-execution | Dedicated API | Nothing (except editing the correspondence itself) | Nothing | Nothing |
| Prepared for correspondence and co-execution | Code instrumentation | Translate application enhancements into compatible features (done once per feature) | Adapt notification mechanisms if they exist or create a pull mechanism (done once per feature) | Adapt notification and firing mechanisms if they exist or create a pull mechanism (done once per feature) |
| Not prepared for correspondence and co-execution | Resource introspection | Parse resource files; for each known widget, prepare the list of features for correspondence editing | Adapt notification mechanisms if they exist or create a pull mechanism (done once per feature) | Adapt notification and firing mechanisms if they exist or create a pull mechanism (done once per feature) |
| Not prepared for correspondence and co-execution | Runtime environment introspection | Get and explore the widget tree; for each known widget, prepare the list of features for correspondence editing; requires means to graphically identify widgets to build the correspondence (does not work for unknown widgets; difficult if widgets are dynamically instantiated) | Adapt notification mechanisms if they exist or create a pull mechanism (done once per feature) | Adapt notification and firing mechanisms if they exist or create a pull mechanism (done once per feature) |
| Not prepared for correspondence and co-execution | Modification of runtime environment | Modify the runtime environment to make it possible to retrieve a widget list; for each known widget, prepare the list of features for correspondence editing; requires means to graphically identify widgets to build the correspondence | Adapt notification mechanisms if they exist or create a pull mechanism (done once per feature) | Adapt notification and firing mechanisms if they exist or create a pull mechanism (done once per feature) |

4 An Integrated Environment Supporting the Co-Execution of Task Models and Java Applications

The proposed process for validating the effectiveness of an interactive application presented in the previous section is supported by a modelling and simulation CASE tool for engineering user tasks and interactive applications.

4.1 Task Modeling With HAMSTERS

Task models support gathering and structuring data from the analysis of users’ activities, and recording, refining and analyzing information about those activities. Several notations are available to describe tasks, with varying levels of expressiveness depending on the targeted analysis. HAMSTERS (Human-centered Assessment and Modeling to Support Task Engineering for Resilient Systems) is a tool-supported graphical task modeling notation for representing human activities in a hierarchical and ordered way. At the highest abstraction level, goals can be decomposed into sub-goals, which can in turn be decomposed into activities. The output of this decomposition is a graphical tree of nodes, where nodes can be tasks or temporal operators.

Tasks can be of several types (Figure 2) and contain information such as a name, information details and a criticality level. Only the single-user high-level task types are presented here, but they can be further refined. For instance, cognitive tasks can be refined into analysis and decision tasks (Martinie, Palanque, Barboni et al. 2011) and collaborative activities can be refined into several task types (Martinie et al. 2014).

Figure 2: High-level Task Types in HAMSTERS.

Temporal operators (presented in Table 3) are used to represent temporal relationships between sub-goals and between activities. Tasks can also be tagged by temporal properties to indicate whether or not they are iterative, optional or both.

Table 3

Temporal Ordering Operators in HAMSTERS.

Operator type Symbol Description
Enable T1>>T2 T2 is executed after T1
Concurrent T1|||T2 T1 and T2 are executed at the same time
Choice T1[]T2 T1 is executed OR T2 is executed
Disable T1[>T2 Execution of T2 interrupts the execution of T1
Suspend-resume T1|>T2 Execution of T2 interrupts the execution of T1; T1 execution is resumed after T2
Order Independent T1|=|T2 T1 is executed then T2 OR T2 is executed then T1

HAMSTERS’ notation and tool provide support for task-system integration at the tool level (Martinie, Palanque, Navarre et al. 2011) by:

  1. Structuring a large and complex set of tasks by introducing the mechanisms of subroutines (Martinie, Palanque and Winckler 2011) and generic components (Forbrig et al. 2014). These structuring mechanisms enable the breakdown of a task model into several ones. Subroutines are similar to functions or procedures. Generic components offer a set of properties and a set of functions and can be instantiated several times in several task models.

  2. Describing data that is required and manipulated (Martinie et al. 2013) in order to accomplish tasks. Figure 3 recapitulates the notation elements to represent data. Information (“I:” followed by a text box) may be required for execution of a system task, but it also may be required by the user to accomplish a task. The notation element “Physical Object” (“Phy O:” followed by a text box) supports describing whether a user or a system task requires a particular physical object to be accomplished. The notation element “Object” (“O:” followed by a text box) is dedicated to the description of a software data object that is required by the system in order to accomplish a task. The notation element “Software Application” (“Sw A:” followed by a text box) provides support for the description of a software application that is required in order to accomplish a task.

Figure 3: Representation of Objects, Information and Knowledge with HAMSTERS Notation.

4.2 Interactive Application Instrumentation and Annotations Processing

As stated above, we present an approach where the developers of the interactive application instrument their code with annotations so that the interactive application can be integrated within the synergistic framework. Our framework and the interactive application presented in the case study are fully implemented using Java technology. In particular, we propose a solution based on the Java type annotation mechanism (available since Java SE 8). An annotation in Java is a form of metadata (prefixed with ‘@’) providing data about a program that is not part of the program itself. Annotations have no direct effect on the code instructions they annotate, but they are used by the compiler to detect errors, at compilation time they provide information to generate code, and at run time they provide support for examining the annotated code. We use this last possibility in our framework. There are many possible usages of such annotations:

  1. For instance, the following annotation may be used at run time to periodically schedule a method call.

    @Schedule(dayOfWeek="Fri", hour="23")
    public void doPeriodicCleanup() { ... }

  2. The following one is used at compilation time to detect a possible assignment of a null value (and raise a compilation error if so):

    @NonNull String str;

The proposed approach is based on such annotations to point out, within the code, the widgets that may be used to build the correspondence. There are two kinds of annotations: one for widgets related to user inputs and the other for graphical outputs from the system, any widget possibly being of both kinds:

  1. Example of a simple button

    @EventSource(name="Validate", event="actionPerformed")private JButton btn1;

  2. Example of a label

    @Renderer(name="Display", property="text")private JLabel lbl1;

  3. Example of a text field

    @EventSource(name="Name", event="actionPerformed")@Renderer(name="Name", property="text")private JTextField txt1;

Both types of annotation may be enhanced with extra data to provide information for correspondence editing. In both cases this comprises a readable name for the correspondence (independent of the attribute name) and another piece of information (the name of the related event or the name of the widget property that may change while using the application). Table 4 summarizes the different annotations used in this approach and the corresponding interactive tasks.

Table 4

Annotations used and corresponding interactive tasks

| Interactive application | Corresponding task | Annotation declaration | Annotation definition |
| Activation: event produced by user action on a widget | Input task | @SynergyEventSource | Defined by an event source name and an event name, e.g. @SynergyEventSource(name="button1", event="a661EvtSelection") |
| Rendering: change in the rendering of a widget | Output task | @SynergyRenderer | Defined by a renderer name and a widget property name, e.g. @SynergyRenderer(name="label1", property="labelString") |
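A minimal sketch of how such annotation types could be declared in Java follows; the attribute names mirror the examples above, but the concrete declarations in the framework are assumptions here:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Retained at run time so the synergistic module can discover
    // annotated widget fields through reflection.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface SynergyEventSource {
        String name();   // readable name used during correspondence editing
        String event();  // name of the event the widget may trigger
    }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface SynergyRenderer {
        String name();      // readable name used during correspondence editing
        String property();  // widget property whose changes are rendered
    }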

When editing the correspondence between the interactive application and task models, the synergistic framework uses the introspection mechanism of Java to find any annotated widget and build the list of widgets available for editing.
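A minimal sketch of this introspection step, reusing the hypothetical @SynergyEventSource declaration sketched above:

    import java.lang.reflect.Field;
    import java.util.ArrayList;
    import java.util.List;

    final class AnnotationScanner {

        // Collect all widget fields of the application object that carry
        // the event source annotation.
        static List<Field> findEventSources(Object application) {
            List<Field> sources = new ArrayList<>();
            for (Field field : application.getClass().getDeclaredFields()) {
                if (field.isAnnotationPresent(SynergyEventSource.class)) {
                    field.setAccessible(true); // widget fields are typically private
                    sources.add(field);
                }
            }
            return sources;
        }
    }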

4.3 Architecture of the Framework for Coupling Task Models and Instrumented Interactive Application

The architecture of the synergistic software environment that supports both correspondence editing and co-execution is presented in Figure 4.

Figure 4: Architecture of the synergistic environment.

This architecture is composed of two modules for task edition and simulation (on the left side), the interactive application (on the right side), and modules for connecting and co-executing task models with the interactive application (grey shaded in the center of Figure 4). Light grey elements are those that have been either added or deeply redesigned to extend co-execution to applications not designed following a model-based approach (Barboni et al. 2010). The module in charge of interfacing the interactive application with the correspondence editor and simulation controller is called the “Annotation processor module”. This module is responsible for processing the annotations, for providing information about widgets, events and rendering to the correspondence editor, and for sending notifications to the simulation controller (as described in section “Approaches for co-execution of interactive application and task models”). The main differences with the architecture presented in (Barboni et al. 2010) are that the architecture presented here is not exclusively model-based and that the information about widgets and events is provided by the annotation processor module instead of the PetShop CASE tool.

5 Illustrative Example From the FCU Back Up Case Study

This section presents how to apply the process described in the third section and illustrated in Figure 1 on a case study extracted from an interactive application in the avionics domain.

This section is divided into six subsections. The first one is dedicated to the presentation of the case study. The five others illustrate five steps of the proposed process. Firstly, we describe the task analysis and modelling step. Secondly, we describe the instrumentation of the existing application. Thirdly, we describe the editing of the correspondences between tasks and widgets. Fourthly, we describe the co-execution of the interactive application with task models and the checking of task models consistency. Finally, we describe how the co-execution framework can be exploited, using the example of how it provides support for assessing the effectiveness of interactive applications.

5.1 Presentation of the FCU Backup

In interactive cockpits, the Flight Control Unit (FCU) is a hardware panel composed of several electronic devices (such as buttons, knobs, displays…). It allows crew members to interact with the Auto-Pilot and to configure the flying and navigation displays. The FCU Backup is an interactive application designed to recover all FCU functions in case of FCU failure; it is used in mutual exclusion with the FCU. It is composed of two interactive pages:

  1. EFIS_CP: Electronic Flight Information System Control Panel for configuring piloting and navigation displays.

  2. AFS_CP: Auto Flight System Control Panel for the setting of the autopilot state and parameters.

In the Airbus A380, this application is displayed on two of the eight cockpit LCD screens, one for the Captain and the other for the First Officer. The crew members can interact with the application via the Keyboard and Cursor Control Units, which gather a keyboard and a trackball in a single hardware component.

In this paper, we will focus on the EFIS_CP page depicted in Figure 5. The left panel is dedicated to the configuration of the Primary Flight Display, while the two right panels are dedicated to the configuration of the Navigation Display (illustrated in Figure 6). The top right panel of the EFIS page enables the display of several items of navigation information, while the bottom panel enables the choice of the display mode and scale.

Figure 5: EFIS control panel (w / o WPT button activated).

Figure 6: Navigation display (w / o waypoints displayed).

More particularly, we will focus on two different activities that can be performed using the EFIS_CP page: the activities that have to be performed in order to insert waypoints in the current route and the activities that are related to the configuration of the barometer settings.

5.2 Task Analysis and Modelling

5.2.1 Inserting a Waypoint in the Current Route

During the flight, crew members may ask permission or be asked to modify the current route of the aircraft. If the air traffic controller agrees, a clearance is issued to the pilot to insert a waypoint.

Figure 7 details the tasks that have to be performed to reach this goal. First, thanks to the VHF radio device, the crew member receives a clearance from the air traffic controller indicating which waypoint has to be added to the current route (interactive output task “Receive a clearance from ATM for inserting a waypoint in flight plan” and perceptive task “Perceive waypoint insertion request” in Figure 7). Then, s / he decides to check the waypoints (cognitive decision task “Decide to check waypoints”) and has to perform a set of activities to display waypoints on the navigation display (abstract task “Display waypoints on ND” in Figure 7). In order to display waypoints on the navigation display, the crew member has to display the EFIS page and to configure the display options. The subroutine “Configure ND display options” describes the tasks that have to be performed to configure the navigation display options. The part of this subroutine task model that is relevant to our illustrative example is depicted in Figure 8. It shows that the crew member can iteratively configure the display of navigation information (iterative abstract task “Configure display of navigation information”) until (temporal ordering operator “disable”) s / he decides that the ND is set up correctly (cognitive task “Decide that ND setup is ok”). To configure the display of navigation information, the crew member can choose between several actions to perform.

Figure 7: Task models for Handle waypoint insertion.

Figure 8: Subpart of “Configure ND display options” subroutine task model.

In order to configure the display of waypoints, s / he will press the WPT button widget.

Figure 9 details the component “Press button widget to configure the display of [information, button]”. This component is a generic task model that details what has to be performed by the pilot to configure the display of a piece of information by pressing the corresponding button widget. First, the crew member has to locate the corresponding button widget (user task “Locate <button>” in Figure 9), then to perceive the current state of this button (perceptive task “Perceive current status of <button>” in Figure 9). Then, s / he decides whether the current status of the button is the right one according to the targeted status (temporal ordering operator “choice”). If the crew member analyses that the current status of the button does not match the target status, s / he clicks on it (interactive input task “Click on <button>” in Figure 9). The new status of the button is displayed (interactive output task “Display <button> status” in Figure 9). The crew member can then perceive the new status of the button and decide that it is the right one (user tasks “Perceive current status of <button>” and “Decide that status of <button> is OK for targeted display configuration of <information>”). Figure 10 details the component “Press button widget to configure the display of [information, button]” instantiated for the information “waypoints” and the corresponding button “WPT button”.

Figure 9: Task model for component “Press button widget to configure display of [information, button]”.

Figure 10: Task models for Press button widget to configure display of waypoints.

5.2.2 Configure Barometer Settings

Before landing, crew members may be asked to configure the barometric pressure according to the one reported by the airport. The barometric pressure is used by the altimeter as an atmospheric pressure reference in order to correctly compute the plane’s altitude. Figure 11 details the tasks that have to be performed by the pilot in order to configure the barometric settings. In order to change the barometric pressure (abstract task “Change atmospheric reference”), the pilot first has to select the QNH mode (component “Select checkbutton in radiobox for configuration of [atmospheric reference, QNH checkbutton, atmospheric reference radiobox]”). S / he can then configure the pressure unit by choosing to display it in hPa (component “Press button widget to configure the display of [pressure unit, InHg to hPa button]”) or in InHg (component “Press button widget to configure the display of [pressure unit, hPa to InHg button]”). Then, the pilot can set the atmospheric reference in InHg (component “Set value in editboxnumeric [InHg pressure, InHg editboxnumeric]”) or in hPa (component “Set value in editboxnumeric [hPa pressure, hPa editboxnumeric]”). Figure 12 details the component “Set value in editboxnumeric [information, editboxnumeric]” instantiated for the information “InHg pressure” and the editboxnumeric “InHg editboxnumeric”. First, the pilot has to locate the corresponding editboxnumeric widget (user task “Look at <InHg editboxnumeric>”), then to perceive the currently displayed value (perceptive task “Perceive value of <InHg pressure> in <InHg editboxnumeric>”).

Figure 11: Task models for Configure baro settings.

Figure 12: Instantiation of “set value in editboxnumeric” component task model for InHg pressure.

Figure 13: Simplified task model for Configure flight instruments.

Figure 14: Excerpt from the FCU Backup widget declarations.

Then, s / he decides whether the current value is the right one according to the targeted value. If s / he decides that the value has to be changed, s / he clicks on the corresponding editboxnumeric (interactive input task “Click in <InHg editboxnumeric>”) and edits the value (interactive input tasks “Type <InHg pressure> value in <InHg editboxnumeric>” and “Type on Enter key”). The new value is then displayed in the editboxnumeric (interactive output task “Display final value of <InHg pressure> in <InHg editboxnumeric>”). The pilot can then perceive the new value and decide that it is the right one.

5.2.3 Configure Flight Instruments

The task models presenting the two activities on the FCU Backup that we focus on in this case study (inserting a waypoint, presented in Figure 7, and configuring barometer settings, presented in Figure 11) are composed of numerous components and subroutines. In order to simplify the explanations of the following steps of the process, we bring the two FCU Backup activities together in a single simplified task model, presented in Figure 13. This task model focuses on the interactive tasks that will be used for the co-execution process. It is divided into two goals, corresponding to the two activities and to two different abstract tasks: “Configure baro settings” and “Configure display of waypoints”.

When the pilot is asked to enter a new value for the pressure reference, s / he first chooses the QNH mode (interactive input task “Click on QNH”). Then s / he configures the pressure unit by choosing hPa (interactive input task “Click on InHg to hPa button”) or InHg (interactive input task “Click on hPa to InHg button”). S / he can then choose to edit the value in hPa (interactive input task “Enter value in hPa” followed by interactive output task “Display updated value in hPa”) or in InHg (interactive input task “Enter value in InHg” followed by interactive output task “Display updated value in InHg”).

When the pilot is asked to insert a waypoint, s / he decides to display the waypoints on the Navigation Display by configuring their display using the FCU Backup (abstract task “Configure display of waypoints” in Figure 13). To do so, s / he has to configure the associated button status (abstract task “Configure WPT button status”) by clicking on it (interactive input task “Click on WPT button” followed by interactive output task “Display new status of WPT button”). The waypoints are displayed on the Navigation Display when the WPT button displays three green lines, as illustrated in Figure 5-b.

5.2.4 Task Models Completeness and Consistency

Scenarios can be produced by the execution of task models, as initially proposed in CTTE (Mori, Paternò and Santoro 2002). Each time the task model is simulated (for instance, in order to test that the model corresponds to the activity of the operators), the execution is recorded as a scenario. Execution of interactive systems by means of scenarios produced from a task model has been proposed in (Navarre et al. 2001), where scenarios (such as the ones described right before) are used as input for the autonomous execution of the system (no user input is required as this information is already in the scenario). This makes it possible to assess the consistency between the information described in the task model and the actual behavior of the interactive system.
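For illustration, here is a minimal sketch of such scenario-driven execution, assuming a scenario is a recorded sequence of interactive input task names and that the correspondence maps each of them to a widget event trigger; all names are hypothetical:

    import java.util.List;
    import java.util.Map;

    final class ScenarioPlayer {

        // Task name -> action that raises the corresponding widget event.
        private final Map<String, Runnable> correspondence;

        ScenarioPlayer(Map<String, Runnable> correspondence) {
            this.correspondence = correspondence;
        }

        // Replays the scenario autonomously: no user input is required,
        // since the interactions are already recorded in the scenario.
        void replay(List<String> scenario) {
            for (String taskName : scenario) {
                Runnable trigger = correspondence.get(taskName);
                if (trigger == null) {
                    throw new IllegalStateException(
                        "No widget event matches task: " + taskName);
                }
                trigger.run();
            }
        }
    }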

5.3 Interactive Application Instrumentation

As presented in the previous section, the proposed approach is based on the Java type annotation mechanism and proposes two kinds of annotations: @SynergyEventSource, used as an activation annotation, and @SynergyRenderer, used as a rendering annotation. Considering the two activities we focus on in this case study, we are interested in seven different widgets. Figure 14 presents the declarations of these widgets: one picturepushbutton to configure the display of waypoints (ppbWPT), two checkbuttons to configure the pressure mode (chk_QNH and chk_STD), two editboxnumerics to edit the pressure reference (ebnHPA and ebnINHG) and two picturepushbuttons to choose the pressure unit (ppbHPA and ppbINHG).

Figure 15: Excerpt from the FCU Backup widget declarations with annotations.

Figure 15 presents the same widget declarations with the corresponding annotations. For each widget, the annotations (that are highlighted in bold in this Figure) are added before its declaration. For example, the annotations for the picturepushbutton ppbWPT notify that this widget (named wpt) is likely to send one event (named a661EvtSelection) and to change its display upon the change of the picture property.

Figure 16: Illustration of the editing of correspondences between tasks and widgets.

5.4 Editing of Correspondences Between Tasks and Widgets

Once the task models are defined and the application is instrumented, the synergistic environment automatically extracts widgets from the interactive application (following the annotations) and interactive tasks from the task model. To illustrate correspondence editing, we use the task model presented in Figure 13 and the FCU Backup application annotated as presented in Figure 15. The synergistic environment extracts seven interactive input tasks and three interactive output tasks from the task model. It extracts seven widget events and three rendering actions from the FCU Backup application. To edit the correspondences between tasks and widgets, the developer uses the correspondence editor as illustrated in Figure 16. This tool may be divided into three parts:

  1. The left bottom part presents two tabs. The first one, displayed in Figure 16, enables the developer to choose among the available task models the one that s / he will use in the correspondence edition. The second tab (that is not displayed in Figure 16) enables the developer to choose, among the available ones, the application that will be used in the correspondence edition.

  2. The top part presents two correspondence tables. The upper one defines the input correspondences: it enables the developer to put interactive input tasks in correspondence with widget event handlers. The lower correspondence table defines the output correspondences: it enables the developer to put interactive output tasks in correspondence with rendering actions. For interactive input and output tasks, event handlers and rendering actions alike, the correspondence editor offers the developer a drop-down list of all available ones (as illustrated in Figure 16 for the event handler of the last line of the input correspondences table).

  3. The right bottom part is divided into two panels. The top one presents the correspondence coverage, pointing out the number of unused interactive input and output tasks, event handlers and rendering actions. This enables the developer to check the completeness of the correspondence edition. As an example, the illustration in Figure 16 presents a correspondence coverage of 55 %, due to the fact that only 5 of the 7 interactive input tasks and 4 of the 7 event handlers are currently used. Finally, the bottom panel enables the developer to set the co-execution properties (main task model and main application for the co-execution) and to launch the co-execution.

Figure 17: Illustration of a co-execution step between task models and FCU Backup application.

5.5 Co-Execution between Task Models and FCU Backup Application

Figure 17 presents one step of co-execution in the tool suite presented in this paper. The left part (resp. the right part) corresponds to the execution of the models and application before (resp. after) clicking the WPT button to display waypoints, using a task-driven co-execution.

Figure 18: Illustration of a co-execution step between task models and FCU Backup application.

A particular focus on the left part of the figure allows the understanding of the tool suite (which is a set of modules of the NetBeans IDE[2]). This tool may be divided into four parts:

  1. The top part is a set of classical IDE menu bars and tool bars buttons.

  2. The left part provides means to navigate amongst the project files (java sources, HAMSTERS and correspondence models).

  3. The right part shows the properties of the sources or models (bottom part) and tools to modify the currently selected model (in the top part a specific toolbox appears depending on the kind of model).

  4. The center part allows the editing and execution control of both the sources and models. The layout of this part is fully reconfigurable as illustrated by the layout difference of the left and right part of the figure.

To illustrate the synergistic exploitation of both the task models and the FCU Backup application, we use the model presented in the previous sections (cf. Figure 13), focusing on the task-driven simulation.

Following the five numbered steps presented in the figure, the behaviour of the task-driven simulation is as follows:

  1. A set of available tasks is provided by the HAMSTERS environment; a task can be selected within the associated list box.

  2. The selected task is connected to a widget event (the click on the WPT button) by the correspondence editing.

  3. Performing the task thus acts as if the user had clicked on the corresponding button.

  4. As the button is considered clicked, a rendering may occur, and this rendering may be related to an output HAMSTERS task through the output correspondence edition (in our example, the button status changes to engaged and this property change is related to the task “Display <WPT button> status”).

  5. When an output task is selected, a dedicated panel appears at the bottom of the tool, showing whether a rendering occurred and whether it corresponds to an output correspondence (panel “Feedback on task performance”). In this panel it is possible to indicate whether the rendering was effectively correct and perceived (for logging purposes).

5.5.1 Testing Task Models Consistency Using Co-Execution

Figure 18 presents another step of this co-execution. In this configuration, the user has already clicked on the QNH checkbutton and must now choose the pressure unit by performing the interactive input task “Click on InHg to hPa button” or the interactive input task “Click on hPa to InHg button” (Figure 18-1 and Figure 18-2). However, as shown by the orange colour highlighting in Figure 18-1 and by the fact that it is not highlighted in green in Figure 18-2, the interactive input task “Click on hPa to InHg button” is not accessible to the user. This is due to the fact that only the “InHg to hPa” button is displayed on the FCU Backup (cf. Figure 18-3). Therefore, the “hPa to InHg” button is not accessible to the user, and neither is the corresponding task.

This inconsistency draws our attention to the fact that the task model is faulty and must be corrected to correspond to the application’s use (step “Mending of task models” in the process). The corrected task model is presented in Figure 19. In this case, the pilot must click on the “InHg to hPa” button before editing the pressure in hPa. If s / he prefers to edit the pressure in InHg, s / he can then click on the “hPa to InHg” button.

Figure 19: Corrected simplified task model for Configure baro settings.

5.6 Analysis of the Effectiveness of the FCU Backup Application With Regards to Crew Members’ Tasks

Thanks to the correspondence and co-execution, we have verified that the task models of the “Handle waypoint insertion request” activities are complete and fully consistent with regard to the FCU Backup interactive application. From the task models, we can see that the crew member has to move from one application to another in order to be able to modify the flight plan. S / he has to display the EFIS page to be able to configure the ND display options (“FCU Backup EFIS page” software application object and “FCU Backup FMS page” software application object in Figure 7).

Thus, we have counted 10 articulatory tasks that deal only with moving from one page to another (5 subtasks in the “Display EFIS_CP page” subroutine and 5 tasks in the “Display FMS page” subroutine in Figure 7).

Furthermore, this articulatory task load adds to the crew member’s activities, since s / he has to switch to another application and then come back to her / his current activity. This analysis shows that the interactive application, as currently designed, reduces the crew members’ efficiency. One potential re-design solution could be to integrate the Navigation Display panel and the FMS panel into one single interactive application.

6 Conclusion

The paper has proposed a tool-supported process for embedding task models in extant interactive applications. The systematic, stepwise application of this process has been described in the case study. The use of this process allows users to benefit from the information available in the task model while interacting with the actual system. We have demonstrated that this process can be more or less time consuming depending on the application considered and on whether or not it has been prepared for such an integration at development time. We believe such a framework can be of great help for increasing the use of task models in interactive system development, as it provides benefits even for existing and already deployed applications. Of course, the usual benefits of using task models, such as assessing work complexity and operators’ workload or identifying areas for improvement, are still present and are even enhanced by the possible storage of information about the actual use of the application under consideration in its real context. One limitation of the approach is that it currently focuses on WIMP interactions (within the specific scope of Java Swing), but moving to other platforms does not raise issues more difficult to solve than the ones already addressed. Another limitation is that moving to more sophisticated interaction techniques raises non-trivial issues requiring extensions to task models that provide more detailed representations of interaction, as, for instance, in (Jourde, Laurillau and Nigay 2010).


This paper is a revised and expanded version of a paper entitled ‘A Generic Tool-Supported Framework for Coupling Task Models and Interactive Applications’ presented at the 7th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS), Duisburg, Germany – June 23–26, 2015.


About the authors

Dr. Camille Fayollas

C. Fayollas is an assistant professor in Computer Science at National Polytechnic Institute (University of Toulouse, INP-ENSEEIHT). Her research involves techniques, notations and tools to analyse dependability and fault-tolerance for interactive critical systems.

Dr. Célia Martinie

C. Martinie is an assistant professor in Computer Science at University Toulouse 3 and researcher at the IRIT lab. Her research interests are techniques, notations and tools to analyze, design and develop interactive critical systems.

Dr. David Navarre

D. Navarre is Lecturer in Computer Science at University Toulouse 1. His research deals with notations and tools for the specification, prototyping, validation and implementation of safety critical interactive systems.

Prof. Philippe Palanque

P. Palanque is a professor in Computer Science at University Toulouse 3. His research deals with formal description techniques to support design of resilient interactive systems considering usability, safety and dependability.

References

Barboni, E., J-F. Ladry, D. Navarre, P. Palanque and M. Winckler. 2010. Beyond modeling: an integrated environment supporting co-execution of tasks and systems models. EICS’10: 165–174. doi: 10.1145/1822018.1822043

Bernhaupt, R., D. Navarre, P. Palanque and M. Winckler. 2007. Model-Based Evaluation: A New Way to Support Usability Evaluation of Multimodal Interactive Applications. In: Maturing Usability: Quality in Software, Interaction and Quality. Springer Verlag, pp. 96–122. doi: 10.1007/978-1-84628-941-5_5

Blumendorf, M., G. Lehmann and S. Albayrak. 2010. Bridging models and systems at runtime to build adaptive user interfaces. Proceedings of EICS 2010, pp. 9–18. doi: 10.1145/1822018.1822022

Calvary, G., J. Coutaz, D. Thevenin, Q. Limbourg, L. Bouillon and J. Vanderdonckt. 2003. A Unifying Reference Framework for multi-target user interfaces. Interact. Comput. 15(3): 289–308. doi: 10.1016/S0953-5438(03)00010-9

Cockton, G. and A. Woolrych. 2001. Understanding inspection methods: Lessons from an assessment of heuristic evaluation. People and Computers, Springer Verlag, pp. 171–192. doi: 10.1007/978-1-4471-0353-0_11

Fayollas, C., C. Martinie, P. Palanque, Y. Deleris, J-C. Fabre and D. Navarre. 2014. An Approach for Assessing the Impact of Dependability on Usability: Application to Interactive Cockpits. EDCC: 198–209. doi: 10.1109/EDCC.2014.17

Forbrig, P., C. Martinie, P. Palanque, M. Winckler and R. Fahssi. 2014. Rapid Task-Models Development Using Sub-models, Sub-routines and Generic Components. Proceedings of HCSE, pp. 144–163. doi: 10.1007/978-3-662-44811-3_9

Gong, R. and J. Elkerton. 1990. Designing minimal documentation using the GOMS model: A usability evaluation of an engineering approach. Proceedings of CHI 90. New York, ACM DL. doi: 10.1145/97243.97261

Jourde, F., Y. Laurillau and L. Nigay. 2010. COMM notation for specifying collaborative and multimodal interactive systems. EICS: 125–134. doi: 10.1145/1822018.1822038

Martinie, C., P. Palanque, E. Barboni and M. Ragosta. 2011. Task-Model Based Assessment of Automation Levels: Application to Space Ground Segments. Proceedings of IEEE SMC, Anchorage. doi: 10.1109/ICSMC.2011.6084173

Martinie, C., P. Palanque, D. Navarre, M. Winckler and E. Poupart. 2011. Model-Based Training: An Approach Supporting Operability of Critical Interactive Systems: Application to Satellite Ground Segments. Proceedings of EICS, pp. 141–151, ACM DL. doi: 10.1145/1996461.1996495

Martinie, C., P. Palanque, M. Ragosta and R. Fahssi. 2013. Extending Procedural Task Models by Explicit and Systematic Integration of Objects, Knowledge and Information. Proceedings of the European Conference on Cognitive Ergonomics, pp. 23–33. doi: 10.1145/2501907.2501954

Martinie, C., E. Barboni, D. Navarre, P. Palanque, R. Fahssi, E. Poupart and E. Cubero-Castan. 2014. Multi-models-based engineering of collaborative systems: application to collision avoidance operations for spacecraft. Proceedings of EICS, pp. 85–94. doi: 10.1145/2607023.2607031

Martinie, C., P. Palanque and M. Winckler. 2011. Structuring and Composition Mechanism to Address Scalability Issues in Task Models. Proceedings of IFIP TC 13 INTERACT, LNCS, Springer Verlag. doi: 10.1007/978-3-642-23765-2_40

Mori, G., F. Paternò and C. Santoro. 2002. CTTE: Support for Developing and Analyzing Task Models for Interactive System Design. IEEE Trans. Softw. Eng. 28(8): 797–813. doi: 10.1109/TSE.2002.1027801

Navarre, D., P. Palanque, F. Paternò, C. Santoro and R. Bastide. 2001. A Tool Suite for Integrating Task and System Models through Scenarios. DSV-IS: 88–113. doi: 10.1007/3-540-45522-1_6

Nguyen, B., B. Robbins, I. Banerjee and A. Memon. 2014. GUITAR: an innovative tool for automated testing of GUI-driven software. Automat. Softw. Eng. 21(1): 65–105. doi: 10.1007/s10515-013-0128-9

O’Donnell, R. D. and F. T. Eggemeier. 1986. Workload Assessment Methodology. In: Handbook of Perception and Human Performance (Vol. II, Cognitive Processes and Performance, pp. 42-41–42-49). Wiley & Sons.

Palanque, P. and S. Basnyat. 2004. Task Patterns for Taking Into Account in an Efficient and Systematic Way Both Standard and Erroneous User Behaviours. HESSD: 109–130. doi: 10.1007/1-4020-8153-7_8

Palanque, P., R. Bastide and V. Sengès. 1995. Validating interactive system design through the verification of formal task and system models. EHCI: 189–212. doi: 10.1007/978-0-387-34907-7_11

Palanque, P. and C. Martinie. Contextual Help for Supporting Critical Systems’ Operators: Application to Space Ground Segments. Activity in Context Workshop, AAAI Conference on Artificial Intelligence.

Pangoli, S. and F. Paternò. 1995. Automatic Generation of Task-Oriented Help. ACM Symposium on UIST: 181–187. doi: 10.1145/215585.215971

Paternò, F. and C. Santoro. 2002. Preventing user errors by systematic analysis of deviations from the system task model. Int. J. Hum.-Comput. Stud. 56(2): 225–245. doi: 10.1006/ijhc.2001.0523

Paternò, F., C. Santoro and L. D. Spano. 2009. MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact. 16(4): 30 pages. doi: 10.1145/1614390.1614394

Pinelle, D., C. Gutwin and S. Greenberg. 2003. Task Analysis for Groupware Usability Evaluation: Modeling Shared-Workspace Tasks with the Mechanics of Collaboration. ToCHI 10(4): 281–311. doi: 10.1145/966930.966932

Swearngin, A., M. Cohen, B. E. John and R. Bellamy. 2013. Human performance regression testing. IEEE International Conference on Software Engineering (ICSE): 152–161. doi: 10.1109/ICSE.2013.6606561

van Welie, M. and G. C. van der Veer. 2003. Groupware task analysis. In: Handbook of Cognitive Task Design, LEA, NJ, pp. 447–476. doi: 10.1201/9781410607775.ch19

Wilson, S. and P. Johnson. 1996. Bridging the Generation Gap: From Work Tasks to User Interface Designs. CADUI: 77–94.

Wilson, S., P. Johnson, C. Kelly, J. Cunningham and P. Markopoulos. 1993. Beyond hacking: A model based approach to user interface design. Proceedings of HCI, University Press, BCS HCI, pp. 217–223.

Published Online: 2015-12-01
Published in Print: 2015-12-01

© 2015 Walter de Gruyter GmbH, Berlin/Boston
