
Generating Interactive Prototypes from Query Annotated Discourse Models

  • Filip Kis

    Filip Kis is a PhD student at the Department of Media Technology and Interaction Design (MID), School of Computer Science and Communication (CSC) at KTH Royal Institute of Technology, Stockholm, Sweden. His research into User Interface development is focused on enabling evolutionary prototyping and end-user development. He is particularly interested in evolutionary prototyping in the context of startup environments where the emergence of lean approaches is changing the development processes and technology.

    and Cristian Bogdan

    Cristian Bogdan is Associate Professor at the Department of Media Technology and Interaction Design (MID), School of Computer Science and Communication (CSC) at KTH Royal Institute of Technology, Stockholm, Sweden. His background is in Computer Engineering (Timisoara, Romania), his PhD is from KTH on Human-Computer Interaction, and he did his post-doc at TU Wien on model-based UI development grounded in human communication theories. His research interests revolve around the translucency of technology for its users, which he has examined e. g. for semantic technologies and robotics. Currently, translucency is investigated in the areas of energy technologies, electric vehicles, and voting systems. Relatedly, Cristian is pursuing a quest to involve users, domain experts and interaction designers in the development of interactive artifacts. His main approach in this quest is the design and evaluation of evolutionary prototyping technologies.

Published/Copyright: December 1, 2015

Abstract

Model Based User Interface Development offers the possibility to design User Interfaces without being concerned about the underlying implementation. This is achieved by devising models at a high level of abstraction, thus creating the potential for involving users or domain experts to achieve a user-centered design process. Obtaining a running interactive application from such models usually requires several model transformations. One of the current problems is that while a user interface is generated after these transformations, other parts of the interactive system such as the application logic need to pre-exist or they must be written manually before the interface can be tested in a realistic scenario. This leaves the domain experts dependent on programmers and increases the time between iterations. In this paper we work with Query Annotations, which were previously used only for modeling at low levels and for generating fully functional interfaces, and we aim to generalize them for the high-level modeling approach called Discourse Modeling. The direct expected benefit of this generalization is the possibility to generate complete, readily testable interactive prototypes, rather than just their user interfaces. In addition, Query Annotations can serve as the mapping between the various levels of abstraction and bring to the domain experts a better understanding of the transformation process, as well as the possibility to modify the interfaces and models directly.

1 Introduction

Model Based User Interface Development (MBUID) provides a method of developing user interfaces by working with higher-level interaction models. Such models are used to create interfaces that can be easily adapted to various context of use (i. e. different platforms, modalities and environments). Higher-level models can be transformed and adapted automatically to different contexts of use, requiring less time and ultimately driving down the cost of development process.

By using MBUID approaches, domain experts and other stakeholders not skilled at programming can be involved in the development process, as there is an indication that they can understand higher-level models more easily than they would understand software code. For this reason, MBUID approaches and tools are used in close cooperation with domain experts and are considered to be part of a user-centered design process (Trætteberg 2008).

Our aim is to support user-centered design processes. Therefore, allowing the domain experts to prototype by using MBUID is an important goal. By prototyping we mean rapid, iterative production of design alternatives with tools that allow the domain experts to focus on the model-generated interactive concepts rather than having to think about programming details (Beaudouin-Lafon and Mackay 2003).

In using currently existing MBUID approaches for prototyping, the domain experts could encounter a number of potential difficulties. Firstly, even if they have composed a high-level interaction model and generated user interfaces from it, other models (and software code) may be necessary before they can run and test the generated interfaces (García Frey et al. 2012). Secondly, the domain experts may not understand the mechanisms of transformation from the high-level model they have made to the generated interfaces, and may thus experience frustration due to lack of control (Myers, Hudson and Pausch 2000). Thirdly, even if the domain experts are allowed to modify the generated interfaces, such changes may be lost when the high-level model is modified and the user interfaces are re-generated (the “round trip” problem) (Mann and Thompson 1988).

In this paper we address these challenges by introducing Query Annotations to a discourse-based MBUID approach. Query Annotations present familiar information to domain experts because they are formulated in SQL-like syntax that resembles natural language and that uses elements in the query definitions that originate from the domain. Furthermore, we will show how Query Annotations can be used in higher-level discourse models for data-binding (i. e. to connect interaction models to the domain models) and how they can be propagated all the way to the generated user interface. As such they can present to the domain experts a mapping between the higher level models and the generated interfaces, thereby facilitating an understanding of the generation mechanisms.

The most important purpose of applying Query Annotations to the discourse-based modeling approach is to generate the basic CRUD[1] business logic code from such annotations. The result will be automatically generated, fully interactive prototypes that are ready to be tested by the domain experts.
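As an illustration of how basic CRUD logic could be derived from a single annotation, the following sketch (our own simplification, not the authors' generator; the parsing approach and statement templates are assumptions) turns a FROM/WHERE annotation into the four statement skeletons:

```python
# Illustrative sketch: deriving the four basic CRUD statement templates
# from a single SQL-like 'FROM <Entity> <alias> [WHERE ...]' annotation.
import re

def crud_from_annotation(annotation):
    """Derive CRUD statement templates from a query annotation string."""
    m = re.match(r"FROM\s+(\w+)\s+(\w+)(?:\s+WHERE\s+(.+))?$", annotation.strip())
    if not m:
        raise ValueError("unsupported annotation: " + annotation)
    entity, alias, where = m.groups()
    where_clause = f" WHERE {where}" if where else ""
    return {
        "create": f"INSERT INTO {entity} (...) VALUES (...)",
        "read":   f"SELECT {alias}.* FROM {entity} {alias}{where_clause}",
        "update": f"UPDATE {entity} SET ...{where_clause}",
        "delete": f"DELETE FROM {entity}{where_clause}",
    }

ops = crud_from_annotation("FROM Product p WHERE p.category = pc")
print(ops["read"])  # SELECT p.* FROM Product p WHERE p.category = pc
```

A real generator would of course also resolve variables such as $selectedPc against the communication state before executing such statements.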

Discourse-based modeling (Falb et al. 2006) has been chosen as the underlying MBUID approach over the more widespread task-based modeling (Paterno 2013) because it represents a more self-contained approach with regard to its models and tools. Task-based modeling consists of various models and appropriate tools that need to be orchestrated together to achieve the full transformation path from the highest-level models to the final user interfaces. Having to go through a number of steps and modeling languages in order to complete an iteration cycle is counter-productive for a prototyping-oriented process. Our choice of discourse-based MBUID is further supported by recent arguments regarding the duality of task- and discourse-based approaches (Popp 2014).

The remainder of this paper is structured as follows: Section 2 describes the work related to our goals. The solution of Discourse Modeling combined with Query Annotations is introduced in Section 3. The use of the combination is illustrated by means of modeling an on-line shop application in Section 4. Section 5 demonstrates how a fully interactive prototype application is generated and how the domain experts can modify this application. Section 6 presents conclusion and future work.

2 Related Work

The authoritative framework for conceptualizing user interface modeling research is the Cameleon Reference Framework (Calvary et al. 2003), which defines models at various levels of abstraction. Most Model Based User Interface Development approaches use task-based interaction models, with MARIA (Paternò, Santoro and Spano 2009) and UsiXML (Limbourg et al. 2005) as primary examples. An alternative to task-based approaches are discourse-based approaches (Falb et al. 2006, Popp, Raneburger and Kaindl 2013), in which high-level concepts are modeled according to the principles of communication instead of tasks.

There is a duality of task-based and discourse-based models (Popp et al. 2014) in the sense that they can be used to specify the same type of interaction, but through different modeling concepts. Task-based models focus on the tasks that the user and the system need to accomplish, while discourse-based models focus on the communication between the user and the system. Earlier work (Horacek et al. 2011) has also looked closely into the differences between the task- and discourse-based approaches. The more widespread task-based approaches have a large variety of tools available to work with models at various levels of abstraction. However, they require more orchestration of various tools and transformation models to go from higher-level models to the final user interface (García Frey et al. 2012, Limbourg and Vanderdonckt 2009). Discourse-based models, on the other hand, provide a fully automated process of generating final user interfaces (excluding business logic) from high-level interaction models.

Discourse models have also been used to facilitate prototyping by supporting the exploration of design alternatives (Raneburger et al. 2014). In task-based modeling, most attempts at prototyping have been done in the form of transforming early-stage prototypes (sketches) into task-based models (Coyette et al. 2004). Both task-based and discourse-based prototyping approaches, however, require the existence of business logic (often in the form of Web services) before the full interaction of the interfaces with real data can be tested. This points to an overall tension between the exploratory nature of prototyping and the formal nature of model-based development (Trætteberg 2008).

Data-binding (i. e. linking the user interfaces with the business logic) in discourse-based approaches has so far been achieved through the Action-Notification Model (Popp, Kaindl and Raneburger 2013). The Action-Notification Model is built on top of the Model-View-Controller (MVC) pattern and supports any business logic for which an appropriate adapter is created. Currently there is no automatic way of generating the business logic from the Action-Notification Model, although the authors of (Popp, Kaindl and Raneburger 2013) indicate that it could in theory be possible. Task-based approaches mostly focus on connecting the task models to annotated web services (Paternò, Santoro and Spano 2010). A notable exception is (Tran et al. 2009), where the authors use UsiXML to generate the user interface and corresponding code for generic database functions. However, the code generation process is complicated, with many transformations, and includes several steps where user (developer) intervention is needed. Query Annotations (Kis and Bogdan 2013), which are used in this paper, also create the needed business logic code for database access. However, instead of mapping only to simple database functions, Query Annotations allow the domain experts to express complex queries.

Metawidget (Kennard and Leaney 2010) allows retrofitting UI generation into existing applications by generating only database access widgets instead of whole applications. WebRatio (Brambilla 2008), a web modeling tool used in industry, provides support for modeling web interfaces and for generating the underlying functional core (or linking to an existing one) from the higher-level models. However, WebRatio and other modeling tools used in industry target a single platform (e. g. web or mobile) and do not share the MBUID goal of addressing multiple contexts of use.

Addressing the round-trip problem has been an ongoing challenge in MBUID research. There is an extensive body of work, summarized in Sanchez Ramon et al. (2013), on reverse engineering approaches that are capable of generating higher-level models from the final interfaces or from lower-level models and resources. However, these approaches work from a concrete level to an abstract level only, thus overriding the abstract-level model based on a concrete model. The challenge of the full round-trip problem is to synchronize modifications at the lower-level models with the information present in the higher-level ones. The round-trip problem has been addressed most fully by FLERP (Hennig et al. 2011), where the authors propose a semi-automatic solution for tackling various mapping problems when moving between abstract and concrete UI models. At the Concrete UI level, FLERP uses the CAP3 (Bergh, Luyten and Coninx 2011) model to define the platform-independent UI. Based on the original Canonical Abstract Prototypes (Constantine 2003), CAP3 has similarities to the approach proposed here, since both give a graphical, although generic, representation of UI components, and both provide visual cues to the link between these components and the domain model elements. However, the generic representation has its limitations in giving the domain experts an understanding of the resulting UI look. As demonstrated in their research, the approach is more suitable for critical systems where proper functionality is much more important than the UI design.

3 Query Annotated Discourse Modeling

We will introduce Query Annotated Discourse Modeling by way of the diagrams in Figure 1. The diagrams represent previous Query Annotations (top) and Discourse Modeling (middle) approaches, as well as the solution, proposed in this paper, in which Query Annotations are applied to Discourse Modeling (bottom). The processes are explained in the context of the Cameleon Reference Framework (CRF) that defines four abstraction levels in MBUID: high-level Task & Concepts, modality independent Abstract User Interface (AUI), platform independent Concrete User Interface (CUI) and Final User Interface (FUI) at the lowest level.

Figure 1 
          The modeling process in the low-level Query Annotation approach (top), the Discourse Modeling approach (middle), and the Query Annotated Discourse Modeling (bottom).

3.1 Query Annotations

Currently, Query Annotations are employed in a low-level modeling process, where they annotate concrete UI models (Kis and Bogdan 2013) and provide data-binding. The Template User Interface consists of concrete user interface components that are annotated with queries, as shown in Figure 2. Query Annotations automatically generate the program code for database access and modification (the CRUD operations), which results in an interactive prototype. Furthermore, they are also optionally shown in the generated user interface. The presence of Query Annotations both in the Template User Interface and in the generated one enables the domain experts to develop a mapping between the action of changing the queries and the resulting modifications in the generated UIs.

Figure 2 
            Query Annotated Template UI, previously composed manually of concrete UI components and annotated with queries. Such interfaces will be automatically generated through the modeling process presented in this paper. Query annotations (in red) indicate the domain information that should be present in the final user interface, and serve for generating business logic that makes the final user interface functional.

Query Annotations have been used in real-world applications (Bogdan and Mayer 2009), where queries allowed non-programmers (domain experts) to continuously modify the user interfaces by changing queries that are understandable to them and represent the domain knowledge. This enabled the domain experts to quickly see the result of their low-level modeling work and supported an iterative prototyping process, without the need for involving developers.

The main limitation of Query Annotations is that they have so far been present only at the low-level models. Therefore, if the user interfaces for different contexts of use (i. e. different platforms, modalities and environments) were to be prototyped, a different Template Interface for each context would need to be manually created and annotated with queries.

3.2 Discourse Modeling

Discourse Modeling, shown in the middle part of Figure 1, is a mature MBUID approach that has been applied in previous research to generate UIs with different modalities ranging from robots (Kaindl et al. 2011) and mobile applications to web applications (Popp, Raneburger and Kaindl 2013). It describes the high-level interaction between a user and a system as a form of discourse. The discourse is modeled using elements of Speech-Act Theory (Luff, Gilbert and Frohlich 1990) and Rhetorical Structure Theory (Mann and Thompson 1988).

To achieve data-binding between the Discourse Model and the application logic (i. e. the propositional content of the discourse) Discourse Modeling uses the Action-Notification Model (ANM) (Popp, Kaindl and Raneburger 2013). The ANM is designed to be comprehensive in terms of what kind of application logic it is modeling and therefore the ANM can be linked to any system that implements appropriate adapters.

The Discourse Model and the Action-Notification Model that are present at the Tasks & Concepts level are automatically transformed to Abstract UI and Concrete UI models. The CUI model (Structural Model) can be modified by the domain experts to fine-tune the UI. Finally, the Final UI source code can be generated from CUI. However, the FUI code only contains the navigational logic and stub methods for the business logic. Before the FUI can be fully tested it requires a developer to write the actual business logic.

Compared to task-based MBUID approaches, Discourse Modeling provides a more self-contained solution for moving from higher-level models to the final application. It has also recently been applied in the interaction design process to generate various user interface alternatives (Raneburger et al. 2014), thereby demonstrating the suitability of discourse-based MBUID for prototyping processes.

3.3 Query Annotated Discourse Modeling

To enable the generation of fully interactive prototypes from discourse-based models, we have decided to combine them with Query Annotations. The resulting approach, Query Annotated Discourse Modeling (QADM), is shown in the bottom part of Figure 1. As can be seen, at the Task & Concepts level the Action-Notification Model (ANM) has been replaced with Query Annotations of the Discourse Model.

The motivation for this is two-fold. Firstly, the ANM is more generic than Query Annotations, which makes it harder (if not impossible) to generate business logic from it; its ambition is merely to connect to existing business logic. Query Annotations, on the other hand, already provide the possibility to generate business logic for database access, which is suitable for prototyping purposes. Secondly, the ANM notation employs concepts such as classes, types, variables, etc. which are not easy for non-programmers to understand. Query Annotations use an SQL-like syntax, which is designed to resemble natural language and is therefore easier to understand.

The Query Annotation approach also introduces Template Interfaces, present at the Concrete UI level. However, compared to previous Query Annotations, in Query Annotated Discourse Modeling the Template Interfaces will be automatically generated from Discourse Models. The Template Interface enables the domain experts to explore the appearance of the generated UI and its connection to the Query Annotated Discourse Model. Furthermore, the application logic is now removed as a separate element of the process, as it gets generated directly from the CUI level. Finally, Query Annotations can be used in combination with Template Interfaces to modify the original model and possibly to allow the domain experts to generate new transformation rules, as will be shown in later sections.

The main expected outcome of this solution is the direct testability of the generated user interface during the prototyping process, without the need for writing implementation code (application logic). Furthermore, instead of interacting only with a more abstract Structural Model at the CUI level, the domain experts now have the opportunity to explore the Template Interface which resembles the expected final UI.

4 Modeling Example

In order to illustrate the Query Annotated Discourse Modeling approach introduced in this paper, we will use an example of a simple on-line shop interactive system. In the system the user should see existing product categories and, when a category is selected, the user should see the list of products in that category (provided that the category contains products). Next, the user should be able to add the selected products to a list (shopping cart). Finally, the user may decide to proceed to checkout, where the selected products are shown and the user needs to fill in the credit card information for the bill to be generated.

The domain experts start modeling the on-line shop system by defining the domain model shown in Figure 3. The domain model defines a Product entity that has a reference to ProductCategory. There is also a Bill entity that has a one-to-many reference to Products (i. e. the products that the user decides to buy) and a reference to the CreditCard entity with the details (attributes) of the user’s credit card.
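For concreteness, the domain model can be sketched as plain data classes. This is only our reading of Figure 3; the concrete attribute names (name, price, description, etc.) are assumptions:

```python
# A minimal sketch of the on-line shop domain model as Python dataclasses.
# Attribute names beyond the entity references are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProductCategory:
    name: str

@dataclass
class Product:
    name: str
    price: float
    description: str
    category: ProductCategory  # Product references its ProductCategory

@dataclass
class CreditCard:
    number: str
    holder: str

@dataclass
class Bill:
    credit_card: CreditCard = None                          # reference to CreditCard
    products: List[Product] = field(default_factory=list)   # one-to-many reference

books = ProductCategory("Books")
p = Product("Example Book", 9.99, "A sample product", books)
bill = Bill()
bill.products.append(p)
```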

Figure 3 
          Domain model of the on-line shop example.

4.1 Discourse Modeling: Communicative Acts

The next step is to model the communication at a high level using the discourse-based modeling approach. We will illustrate this approach by examining the Discourse Model of the on-line shop example shown in Figure 4. All the constructs used in the model are listed in Table 1. Discourse Models define high-level communication between a user and a software system by representing the communication as a discourse according to the principles of human communication. The basic elements of discourse are Communicative Acts (CAs), derived from Speech-Act Theory (Searle 1969) and used to describe utterances (question, answer, informing, etc.) in the discourse. Such CAs describe utterances either from the system (providing information, asking for input) or made by the user.

Figure 4 
            Discourse Model for the on-line shop example showing Communicative Acts (leaves), Adjacency Pairs (diamond nodes), RST Relations (square nodes), conditional statements (tree branch annotations), and Query Annotations (inside the tree nodes). The meaning of each construct is explained in Table 1.

Table 1

Modeling constructs of the Query Annotated Discourse Model.

Construct Description Representation
Communicative Acts (leaves) Describe discourse utterances (e. g. question, answer, information). The green (darker) ones indicate discourse uttered by the system (i. e. prompting the user for input or displaying information) while yellow (lighter) indicate discourse uttered by the user (i. e. providing input).
Adjacency Pairs (diamond nodes) Group together Communicative Acts indicating turn-taking in the discourse. For example, one such pair is asking a question and providing an answer.
RST Relations (square nodes) Define the overall relations of elements of the discourse in a hierarchical tree structure. For example, the Alternative relation indicates parts of discourse that are mutually exclusive. When two branches of a relation are of different importance, the more important one is called Nucleus and the less important one Satellite. For example, Elaborate indicates additional details that the Satellite branch provides for the Nucleus. In our example this is selecting a category (Nucleus) and a product from that category (Satellite).
Conditional statements (tree branch annotations) Further extend RST Relations to enable or disable parts of the discourse tree depending on the condition that is typically related to the domain information.
Query Annotations (inside the tree nodes) Represent propositional content of the discourse or, in other words, define which domain information is present in communicative acts uttered by the system or the user.

In Figure 4 CAs are represented as the leaf elements of the tree. The user action of selecting the product category is modeled by a SelectProductCategory ClosedQuestion CA, uttered by the system, and by an Answer to that question, uttered by the user. The leftmost Informing CA is uttered by the user and it serves to inform the system to proceed to checkout. The CAs that represent turn-taking in the discourse (e. g. question-answer) are paired according to the Conversation Analysis (Luff, Gilbert and Frohlich 1990) in Adjacency Pairs (diamonds in Figure 4).

4.2 Discourse Modeling: Rhetorical Structure Theory

To model the relative importance of different communication aspects, Discourse Modeling uses Rhetorical Structure Theory (RST) (Mann and Thompson 1988). RST Relations are represented by the square nodes in the tree in Figure 4 and define the relations between the Adjacency Pairs. For example, the topmost Alternative Relation means that the left and/or right side of the communication can take place. If we move further down the tree we can see the Elaboration Relations, which signify that the nucleus side of the relation (asking for the product category) can be further elaborated on with the satellite side (asking for a product and providing category information), and so on.

RST defines the relations between certain elements of communication, however, not all communication should be possible at all times. For example, the user should not be shown the product list before a product category is chosen. For this purpose, Discourse Model tree branches are annotated with conditional statements. For the mentioned example the count(…)>0 statement on the right side of Elaboration specifies that the respective part of communication should only happen when the selected category has some products in it. Similarly, the topmost Alternative has conditions on both branches: they determine that either its left or its right side will be active depending on whether the user has proceeded to checkout or not.
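The branch conditions just described can be read operationally. The following sketch (our own representation, not part of the Discourse Modeling notation: conditions become predicates over a communication-state dictionary) shows how a runtime might decide which parts of the discourse are active:

```python
# Hedged sketch: evaluating the branch conditions of the on-line shop
# Discourse Model. Representing conditions as Python lambdas over a state
# dict is an illustrative assumption, not the authors' notation.
def active_branches(branches, state):
    """Return the names of discourse branches whose condition holds in `state`."""
    return [name for name, condition in branches.items() if condition(state)]

branches = {
    "shopping": lambda s: not s["checkout"],                  # left side of Alternative
    "checkout": lambda s: s["checkout"],                      # right side of Alternative
    "products": lambda s: len(s["products_in_category"]) > 0  # count(...) > 0 on Elaboration
}

state = {"checkout": False, "products_in_category": ["book"]}
print(active_branches(branches, state))  # ['shopping', 'products']
```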

4.3 Query Annotated Discourse Modeling

The remaining step in establishing the high-level model is to define the propositional content of the communication, meaning which elements from the Domain Model will be present in Communicative Acts (CAs) and how they are related. The steps demonstrated in the previous sections are the standard steps of defining a high-level model in Discourse Modeling. In this step we introduce our modification to the Discourse Modeling approach by using Query Annotations for defining the propositional content. The SelectProductCategory ClosedQuestion is annotated with the query fragment FROM ProductCategory pc. This query fragment defines that the options of the ClosedQuestion should come from the ProductCategory domain element. The action of providing input (answering the question) is defined by the corresponding Answer CA. For this reason the Answer has the SET $selectedPc annotation, which means that the selected option from the ClosedQuestion will be saved in the $selectedPc variable (the $ sign marks internal elements representing the state of the communication). The ProceedToCheckout Informing CA, uttered by the user, specifies that when it is uttered, the state variable $checkout should be set to TRUE.
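The FROM and SET annotations above can be given a simple operational reading. This is our own minimal interpreter sketch, not the project's tooling: a ClosedQuestion whose options are supplied by a FROM annotation, and an Answer whose SET annotation stores the selection in a state variable:

```python
# Illustrative sketch: interpreting a FROM-annotated ClosedQuestion and a
# SET-annotated Answer. The function shape is our assumption.
def ask_closed_question(options, chosen_index, state, set_variable):
    """Present `options`; record the chosen one under `set_variable` in `state`."""
    state[set_variable] = options[chosen_index]
    return state

categories = ["Books", "Music", "Games"]  # result of: FROM ProductCategory pc
state = {}
ask_closed_question(categories, 0, state, "$selectedPc")
print(state)  # {'$selectedPc': 'Books'}
```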

4.4 Hierarchy of Query Annotations

When modeling communication with hierarchical models such as the Graphical User Interface (GUI) tree, hierarchical queries can be useful in simplifying the representation of models by moving repeating queries higher up the model tree. This keeps the annotations brief and also helps the domain experts organize the annotations better and understand or recall them more easily. One such example in Figure 4 is the Background Relation annotated with:

FROM ProductCategory pc WHERE pc = $selectedPc

With this, the domain experts specify that a ProductCategory instance, represented by the pc label and matching the WHERE clause, will be available to all the subtree nodes. In other words, in the Query Annotations of the SelectProduct and SelectedCategory CAs the domain experts can refer to the pc object and know that it represents the ProductCategory selected by the SelectProductCategory CA, since this was the WHERE condition of the Background’s query. Therefore, the query used in its subtree, FROM Product p WHERE p.category = pc, expands into:

FROM ProductCategory pc, Product p WHERE pc = $selectedPc AND p.category = pc

Previous work (Bogdan and Mayer 2009) determined that such hierarchical query representation is intuitive for domain experts. When we generalized query annotations and their hierarchy to the Tasks and Concepts level, we learned that they map to the generated interface as well, i. e. the above query hierarchy and query expansion will take place at the Concrete User Interface (CUI) level of abstraction as well. To anticipate, the CUI-level query hierarchy is visible at the bottom right of Figure 7.
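The query expansion described above can be sketched mechanically, under the simplifying assumption that annotations are plain "FROM ... [WHERE ...]" strings:

```python
# Sketch of hierarchical query expansion: a child query is merged into its
# ancestor's FROM and WHERE clauses. String-based handling is a simplification.
def expand(parent, child):
    """Merge a child query annotation into its ancestor's FROM and WHERE clauses."""
    def split(q):
        q = q.strip()
        if " WHERE " in q:
            frm, where = q.split(" WHERE ", 1)
        else:
            frm, where = q, None
        return frm[len("FROM "):], where

    p_from, p_where = split(parent)
    c_from, c_where = split(child)
    merged_from = f"FROM {p_from}, {c_from}"
    conditions = [w for w in (p_where, c_where) if w]
    return merged_from + (" WHERE " + " AND ".join(conditions) if conditions else "")

parent = "FROM ProductCategory pc WHERE pc = $selectedPc"
child = "FROM Product p WHERE p.category = pc"
print(expand(parent, child))
# FROM ProductCategory pc, Product p WHERE pc = $selectedPc AND p.category = pc
```

The output reproduces the expanded query shown above, which is also how the hierarchy reappears at the CUI level.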

4.5 Actions as Query Annotations

So far, we have shown how query annotations can be used to get access to data to supply propositional content to communicative acts. In order to specify actions that allow for changing data, action-depicting query annotations can be used. The ADD TO $productsInCart Query Annotation in the AddToCart Answer means that the selected option for the closed question will be appended to the $productsInCart variable.

The query FROM Bill b WHERE b = new(), which annotates the Joint RST Relation, defines via its WHERE clause a creation of the new instance of Bill object instead of selecting the already existing one. The new instance, represented by the b label, will then be available throughout the subtree of the relation.

Another such object creation annotation refers to the OpenQuestion on the right side of the tree marked EnterPaymentDetails, which specifies the creation of a CreditCard object. Its Submit answer has a twofold action on the user response. First, the newly created CreditCard object c, filled with the result of the input, will be assigned to the creditCard field of the new Bill object b. This is expressed by the constraint b.creditCard = c. Second, the products added to the shopping cart will also be assigned to the same Bill, which is specified by the constraint b.product = $productsInCart.
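The two action forms of this section can likewise be read operationally. In this sketch (our own simplification; a plain dict stands in for a Bill instance), ADD TO appends a selection to a state list, and a new() WHERE clause creates a fresh instance instead of selecting an existing one:

```python
# Hedged sketch of interpreting action-depicting Query Annotations.
def apply_add_to(state, variable, value):
    """Interpret 'ADD TO $variable': append the selected value to a state list."""
    state.setdefault(variable, []).append(value)

def interpret_where_new(entity_factory):
    """Interpret 'WHERE b = new()': create a fresh instance rather than query one."""
    return entity_factory()

state = {}
apply_add_to(state, "$productsInCart", "Example Book")

bill = interpret_where_new(dict)             # stand-in for a new Bill instance
bill["products"] = state["$productsInCart"]  # constraint b.product = $productsInCart
print(bill)  # {'products': ['Example Book']}
```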

5 Automatic Generation of Interactive Prototype

The first step in operationalizing the Discourse Model is to generate a Behavioral Model at the Abstract UI level. This step involves the creation of the Partitioning State Machine (PSTM), shown in Figure 5. The PSTM consists of three possible states: selection of product category (S1), list of products in the selected category (S2), and checkout (S3). Each state corresponds to a Discourse Model subtree and represents one of the screens of the UI. The process of PSTM generation is described in detail in Kis et al. (2014).
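The three-state PSTM can be sketched as a simple transition table. The event names here are our reading of the Discourse Model conditions (category selection, checkout), not part of the generated model:

```python
# Minimal sketch of the Partitioning State Machine of the on-line shop:
# S1 = category selection, S2 = product list, S3 = checkout.
# Event names are illustrative assumptions.
TRANSITIONS = {
    ("S1", "category_with_products_selected"): "S2",
    ("S2", "category_deselected"): "S1",
    ("S2", "proceed_to_checkout"): "S3",
}

def step(state, event):
    """Move to the next screen/state, or stay put if the event does not apply."""
    return TRANSITIONS.get((state, event), state)

state = "S1"
state = step(state, "category_with_products_selected")  # to S2 (product list)
state = step(state, "proceed_to_checkout")              # to S3 (checkout)
print(state)  # S3
```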

Figure 5 
          Partitioning State Machine generated according to Kis et al. (2014) for the on-line shop Discourse Model. S1 – selection of product category (no category is selected, or the selected one has no products); S2 – showing products from the selected category and adding them to the shopping cart; S3 – checkout, showing the products in the shopping cart and selecting to enter credit card information.

The Concrete User Interface (CUI) model is generated by applying transformation rules on the Discourse Model. As described in Kavaldjian, Falb and Kaindl (2009), the transformation rules match the communication elements of the Discourse Model (and their propositional content) with CUI templates. In the on-line shop example, the ClosedQuestion named SelectProduct and its Answer tagged AddToCart trigger the rule shown in Figure 6. When all the rule matching is done, the CUI Model is complete.
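The rule-matching idea can be sketched as a lookup from Discourse Model patterns to CUI templates. The following Python fragment is a rough conceptual sketch; the pattern encoding and template identifiers are our own assumptions, not the representation used by the toolbox of Kavaldjian, Falb and Kaindl (2009).

```python
# Conceptual sketch: a transformation rule pairs a Discourse Model pattern
# (here reduced to question type and answer tag) with a CUI template.

RULES = [
    (("ClosedQuestion", "AddToCart"), "single-choice-list-with-add-button"),
    (("OpenQuestion", "Submit"), "input-form-with-submit-button"),
]

def match_rule(question_type, answer_tag):
    """Return the CUI template triggered by the given adjacency pair."""
    for pattern, template in RULES:
        if pattern == (question_type, answer_tag):
            return template
    return None

template = match_rule("ClosedQuestion", "AddToCart")
```

In the real transformation, the matched pattern is a Discourse Model subtree together with its propositional content, and the matched template is instantiated into concrete widgets; once every communicative element has been matched, the CUI Model is complete.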

Figure 6 
          The Discourse Model to CUI transformation rule, part of work and toolbox by Kavaldjian, Falb and Kaindl (2009) (left), rule-triggering adjacency pair (top right), and the generated query-annotated Template UI fragment (bottom right).

During the CUI generation process, the Query Annotations from the Discourse Model are copied to the generated Graphical UI (GUI) widgets. Figure 6 shows, at the bottom right, the widgets generated from the associated CUI template and their Query Annotations. The container element in the generated widgets is annotated with the query (FROM Product p WHERE p.category = pc), while the child elements get the respective query projections (p.<attribute>). The widgets generated from the transformation rule template element marked with IdOrFirst are annotated with p.name, while the widgets generated from the transformation rule element marked with RestAttributes are annotated with the query expressions p.price and p.description. Finally, the action ADD TO $productsInCart of the AddToCart Answer is assigned to the button underneath the Product Choice. The transformation rule keeps a connection to the original adjacency pair, so our interpreter can detect that the ADD TO button action refers to the element selected in the Product Choice. This relation between the button and the chooser could also be made visible to the domain experts.

The top of Figure 7 shows the screen that corresponds to the state S2, rendered in Java Swing and obtained from the CUI model. In the approach described in Kis and Bogdan (2013) this interface is called a Template Interface. The information populating the interface is only example data, as the queries have not yet been executed. The example data is obtained from the domain model definition. Such a Template Interface gives the domain experts a quick and intuitive preview of the UI, while still not being at the FUI level.

Figure 7 
          The Template User Interface for S1 (top left) and S2 (top right) with respective Query Annotations (bottom).

As part of the toolset presented to the domain experts, at the top of each screen there are two check boxes, which give the domain experts an option to further explore the generated interface. The annotate check box displays the Query Annotations as shown at the bottom of Figure 7. Displaying the annotations directly in the Template User Interface provides the domain experts with a better understanding of the connection between the high-level Discourse Model, the Domain Model, and the generated interface.

5.1 Query Execution and UI Population with Data: A Testable UI

The second option in the toolset is the data option. This option will trigger the execution of the queries annotating the user interface and will populate the interface with data from the data-source. For each result of the executed query the child elements of the container that the query annotates are duplicated and, if needed, populated with data. The result for the S2 screen data population is shown in Figure 8, e. g. for each Product or ProductCategory the radio button and its respective labels are duplicated and populated with data.
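The duplicate-and-fill step can be sketched as follows. This Python fragment is only a conceptual illustration of the population logic, with names of our own choosing; the actual implementation duplicates Swing widgets rather than building value lists.

```python
# Sketch: for each result of the container's query, the template children are
# duplicated and filled with the values selected by the query projections.

class Product:
    def __init__(self, name, price, description):
        self.name, self.price, self.description = name, price, description

def populate(projections, results):
    """Duplicate the template row once per query result and fill it with
    the attribute values named by the projections (e.g. p.name)."""
    rows = []
    for p in results:
        rows.append([getattr(p, expr.split(".", 1)[1]) for expr in projections])
    return rows

products = [Product("Laptop", 999, "Thin laptop"),
            Product("Tablet", 499, "Touch-screen tablet")]
rows = populate(["p.name", "p.price", "p.description"], products)
```

Each resulting row corresponds to one duplicated widget group, e. g. one radio button with its name, price, and description labels.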

Figure 8 
            The Template User Interface for S2 filled with actual data (top) and with Query Annotations (bottom).

Switching between the product categories shown on the left of Figure 8 (and pressing the Select button) triggers the query execution again, and the interface is populated with new data. Finally, when the Proceed To Checkout button is pressed, the domain experts see the S3 screen shown at the right in Figure 9. Continuing in data mode, the domain experts can fill in the credit card information and press the Submit button, which will create new Bill and CreditCard objects in the data-source. The domain experts could also choose to switch to the annotated, non-data mode (Figure 9 left) to examine only the query annotations at CUI level.

Figure 9 
            The Template Interface for S3 with Query Annotations (left) and data (right).

As our focus is on prototyping by domain experts, calling the application logic that actually does the purchase is beyond the scope of this paper, and is not strictly needed to test the generated UI. What is important for us is that the domain experts are able to navigate through the whole interface and test it without having to work at the application logic level where they might not have the necessary competence.

5.2 Domain Experts Work With Transformation Rules

We have shown how, in our implemented proof of concept, the domain expert can explore the generated user interface in several modes: the template (Figure 7 top), the query-annotated template (Figure 7 bottom), the data-populated interface (Figure 8 top, showing the FUI), and the query-annotated FUI (Figure 8 bottom). As shown in Kis and Bogdan (2013), the domain experts could use an interface builder to work with such query-annotated UIs, but such work would be lost if the CUI is re-generated from the Discourse Model in the way we have illustrated here.

To address this, we elaborated conceptually on query-annotated transformation rules that can be worked upon by the domain experts and that can package their work for future use. To illustrate this, the left part of Figure 10 shows a domain expert-oriented representation of a variant of the transformation rule in Figure 6, created specifically for adding to sets. The adjacency pair has a ClosedQuestion annotated with FROM <T> o WHERE … (meaning objects of a parametrized type T) and an Answer annotated with ADD TO <set of T>. The generated UI is represented in the query-annotated template form, showing the o.name expression for each result of the query FROM <T> o WHERE …. It should be noted that in this case the rule specifies that only the name attribute of the T objects will be shown.
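The type-parametric character of such a rule can be illustrated by a small sketch. The function below instantiates the generic "add to set" rule for a concrete element type; the dictionary representation is our assumption, used only to make the parametrization concrete.

```python
# Sketch: instantiating the type-parametric "add to set" rule for a concrete
# element type T and a concrete target set variable.

def make_add_to_set_rule(element_type, target_set_name):
    """Instantiate FROM <T> o WHERE ... / ADD TO <set of T> for a given T."""
    return {
        "question_query": f"FROM {element_type} o WHERE ...",
        "answer_action": f"ADD TO {target_set_name}",
        "shown_projection": "o.name",   # the rule shows only the name attribute
    }

# The actual form used in the on-line shop example:
rule = make_add_to_set_rule("Product", "$productsInCart")
```

Showing the rule in both the formal form (with <T> and <set of T>) and such an instantiated form is what we propose to ease the domain experts' work.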

Figure 10 
            Domain expert-oriented representation of an addition transformation rule.

The domain experts can work on this transformation rule to change the generated layout by, for example, using a list box instead of radio buttons, or by showing fields other than name for various possible values of T. Furthermore, the domain experts can create a completely new CUI matching the same discourse subtree, for example with the interaction approach shown in Figure 11. Figure 11 displays the possible candidates at the left – using a list widget rather than radio buttons – and the constructed set at the right, with an Add button in between. This new rule can also define that the button label is a left-to-right arrow. To show that the ADD TO action of the button refers to the choice made in the left list, a dotted line can show their connection, dictated by the adjacency pair.

Figure 11 
            CUI side of an alternative “Add-to-set” transformation rule.

In a modeling toolset that involves the domain experts at the transformation rule level, the transformation rules in Figures 10 and 11 will be offered as alternatives, since they are triggered by the same discourse model subtree (Figure 10 left). Furthermore, the rules can be shown both in the formal, type-parametric form (using <T> and <set of T>), and in the actual form (using Product and $productsInCart) to ease the domain experts’ work.

6 Conclusions and Future Work

The possibility to include designers, domain experts, and even end users in development is an important factor in user-centered design processes. Enabling early exploration of readily testable interfaces and iterating quickly over their variations helps bring non-programmers into the development process early, when prototyping is needed. However, most Model Based UI Development approaches require a significant understanding of various models and transformations, and often even require writing software code before such a testable interface can be reached. This increases the time between iterations and hinders the non-programmers’ understanding of the transformation process, thereby slowing down and discouraging prototyping.

This paper addresses these issues by modifying an existing MBUID approach. The inspiration for using Query Annotations on top of the existing approach comes from previous results showing that queries specified for data-binding are easy for domain experts to understand. These results, described by Bogdan and Mayer (2009), reflect a decade of experience with non-programmers producing query-annotated GUIs. The use of Query Annotations also motivated the choice of Discourse Modeling as the underlying MBUID approach for the proof of concept presented here.

By employing the queries to generate a simple data-binding application logic, we have addressed the issue of the multiple models and implementation-language code needed before the domain experts can test their generated UI. Even if this application logic does not make it into the final application, it is still very useful for the domain experts exploring and judging the qualities of the generated user interface. This enables the domain experts to focus on the results of the various model changes at various levels, rather than having to wait for a programmer to produce the necessary application logic.

The issue of domain expert understanding of the UI generation mechanisms has been addressed at two levels. Firstly, the Query Annotation present in the generated UI enables the domain experts to understand the internal mechanisms of the generated user interface, which helps them understand the transformation from a model to a UI. The query annotations also help to establish a connection between Task & Concepts and CUI levels. Secondly, the transformation rules can be better represented by using the Query Annotations, in a “language” that the domain experts already understand from their exploration of the Query-annotated models.

Finally, in our conceptual exploration we address the round-trip issue by letting the domain experts change the Template User Interface and, at the same time, extracting the corresponding changes to the transformation rules involved. Furthermore, we allow the domain experts to propose alternative partial Template UIs for a certain rule matching a Discourse Model subtree, thereby providing for the creation of new transformation rules. By saving the domain experts’ work at the Template UI level as transformation rules, we ensure that this work will survive changes of the Discourse Model that do not break the matching Discourse Model subtree pattern. If the pattern is broken, it may be natural that its generated partial Template UI is no longer needed. If it is still needed, the domain experts can retrieve it from the saved transformation rule. The domain experts can thus keep a repertoire of transformation rules and share them with other domain experts.

In future work we plan to apply the same principles of Query Annotation to other MBUID approaches, especially the commonly used ones based on Task Models. Our initial estimate is that the underlying Query Annotation principles can equally be applied to other MBUID approaches as long as their UI models have a hierarchical structure, which is the case in most approaches. Therefore the potential difficulties of applying Query Annotations to other approaches mostly depend on the adaptability of the implementation and the tools available for these approaches.

To expand Query Annotations beyond data-binding and CRUD operations, we are considering exploring how other types of application logic could be expressed in a declarative, query-like form. Some examples of interest here are authentication and authorization, as common elements of software applications that also have an impact on the generated UIs.

About the authors

Filip Kis

Filip Kis is a PhD student at the Department of Media Technology and Interaction Design (MID), School of Computer Science and Communication (CSC) at KTH Royal Institute of Technology, Stockholm, Sweden. His research into User Interface development is focused on enabling evolutionary prototyping and end-user development. He is particularly interested in evolutionary prototyping in the context of startup environments where the emergence of lean approaches is changing the development processes and technology.

Cristian Bogdan

Cristian Bogdan is Associate Professor at the Department of Media Technology and Interaction Design (MID), School of Computer Science and Communication (CSC) at KTH Royal Institute of Technology, Stockholm, Sweden. His background is in Computer Engineering (Timisoara, Romania), his PhD is from KTH on Human-Computer Interaction, and his post-doc at TU Wien in model-based UI development based on human communication theories. His research interests revolve around the translucency of technology for its users, which he looked at e. g. for semantic technologies and robotics. Currently, translucency is investigated in the areas of energy technologies, electric vehicles, and voting systems. Relatedly, Cristian is pursuing a quest to involve users, domain experts and interaction designers in the development of interactive artifacts. His main approach in his quest is the design and evaluation of evolutionary prototyping technologies.

Acknowledgment

We are grateful to our colleagues working on various UI generation approaches who have helped our efforts with their advice and have generously shared tools and source code.

References

Beaudouin-Lafon, M. and W. Mackay. 2003. Prototyping tools and techniques. In: (J. A. Jacko and A. Sears, eds) The Human Computer Interaction Handbook, L. Erlbaum Associates Inc., Hillsdale, NJ, USA, pp. 1006–1031.

Bergh, J. V. D., K. Luyten and K. Coninx. 2011. CAP3: Context-Sensitive Abstract User Interface Specification. Expertise Centre for Digital Media, pp. 31–40.

Bogdan, C. and R. Mayer. 2009. Makumba: the Role of Technology for the Sustainability of Amateur Programming Practice and Community. In: Proceedings of the Fourth International Conference on Communities and Technologies. C&T ’09, ACM Press, New York, NY, USA, pp. 205–214. doi:10.1145/1556460.1556490

Brambilla, M., S. Comai, P. Fraternali and M. Matera. 2008. Designing Web Applications with WebML and WebRatio. In: (G. Rossi, O. Pastor, D. Schwabe, and L. Olsina, eds) Web Engineering: Modelling and Implementing Web Applications, Human-Computer Interaction Series, Springer London, pp. 221–261. doi:10.1007/978-1-84628-923-1_9

Calvary, G., J. Coutaz, D. Thevenin, Q. Limbourg, L. Bouillon and J. Vanderdonckt. 2003. A Unifying Reference Framework for multi-target user interfaces. Interact. Comput. 15(3): 289–308. doi:10.1016/S0953-5438(03)00010-9

Constantine, L. L. 2003. Canonical Abstract Prototypes for Abstract Visual and Interaction Design. In: (J. A. Jorge, N. J. Nunes, and J. F. e Cunha, eds) Interactive Systems. Design, Specification, and Verification. LNCS, pp. 1–15. doi:10.1007/978-3-540-39929-2_1

Coyette, A., S. Faulkner, M. Kolp, Q. Limbourg and J. Vanderdonckt. 2004. SketchiXML: towards a multi-agent design tool for sketching user interfaces based on USIXML. In: Proceedings of the 3rd Annual Conference on Task Models and Diagrams. ACM, pp. 75–82. doi:10.1145/1045446.1045461

Falb, J., H. Kaindl, H. Horacek, C. Bogdan, R. Popp and E. Arnautovic. 2006. A discourse model for interaction design based on theories of human communication. In: CHI ’06 Extended Abstracts on Human Factors in Computing Systems. CHI EA ’06, ACM Press, New York, NY, USA, pp. 754–759. doi:10.1145/1125451.1125602

García Frey, A., E. Céret, S. Dupuy-Chessa, G. Calvary and Y. Gabillon. 2012. UsiCOMP: an Extensible Model-Driven Composer. In: Proceedings of the Fourth ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS 2012). EICS ’12, ACM Press, New York, NY, USA, pp. 263–268. doi:10.1145/2305484.2305528

Horacek, H., R. Popp and D. Raneburger. 2011. Automated Generation of User Interfaces – A Comparison of Models and Future Prospects. In: (I. Maurtua, ed) Human-Machine Interaction – Getting Closer, InTech, Rijeka, Croatia, pp. 3–22. doi:10.5772/28625

Hennig, S., J. V. D. Bergh, K. Luyten and A. Braune. 2011. User Driven Evolution of User Interface Models – The FLERP Approach. In: INTERACT ’11 Proceedings of the IFIP TC13 International Conference on Human-Computer Interaction. INTERACT ’11, pp. 610–627. doi:10.1007/978-3-642-23765-2_41

Kaindl, H., R. Popp, D. Raneburger, D. Ertl, J. Falb, A. Szép and C. Bogdan. 2011. Robot-Supported Cooperative Work: A Shared-Shopping Scenario. In: Proceedings of the 44th Hawaii International Conference on System Sciences. HICSS ’11, IEEE Computer Society, Washington, DC, USA, pp. 1–10. doi:10.1109/HICSS.2011.366

Kavaldjian, S., J. Falb and H. Kaindl. 2009. Generating content presentation according to purpose. In: Proceedings of the 2009 IEEE International Conference on Systems, Man and Cybernetics. SMC ’09, IEEE Computer Society, San Antonio, TX, USA, pp. 2046–2051. doi:10.1109/ICSMC.2009.5346348

Kennard, R. and J. Leaney. 2010. Towards a general purpose architecture for UI generation. J. Syst. Software 83(10): 1896–1906. doi:10.1016/j.jss.2010.05.079

Kis, F. and C. Bogdan. 2013. Lightweight Low-Level Query-Centric User Interface Modeling. In: Proceedings of the 2013 46th Hawaii International Conference on System Sciences. HICSS ’13, IEEE Computer Society, Washington, DC, USA, pp. 440–449. doi:10.1109/HICSS.2013.384

Kis, F., C. Bogdan, H. Kaindl and J. Falb. 2014. Towards Fully Declarative High-level Interaction Models: An Approach Facilitating Automated GUI Generation. In: Proceedings of the 47th Hawaii International Conference on System Sciences. HICSS ’14, IEEE Computer Society, Washington, DC, USA, pp. 412–421. doi:10.1109/HICSS.2014.59

Limbourg, Q. and J. Vanderdonckt. 2009. Multipath Transformational Development of User Interfaces with Graph Transformations. In: (A. Seffah, J. Vanderdonckt and M. Desmarais, eds) Human-Centered Software Engineering, Human-Computer Interaction Series, Springer London, pp. 107–138. doi:10.1007/978-1-84800-907-3_6

Limbourg, Q., J. Vanderdonckt, B. Michotte and L. Bouillon. 2005. USIXML: A Language Supporting Multi-path Development of User Interfaces. In: (R. Bastide, P. Palanque, J. Roth, eds) Engineering Human Computer Interaction and Interactive Systems, Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. 200–220. doi:10.1007/11431879_12

Luff, P., N. G. Gilbert and D. Frohlich. 1990. Computers and conversation. Academic Press, London, UK.

Mann, W. C. and S. A. Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text – Interdisciplinary Journal for the Study of Discourse 8(3): 243–281. doi:10.1515/text.1.1988.8.3.243

Meixner, G., F. Paternò and J. Vanderdonckt. 2011. Past, Present, and Future of Model-Based User Interface Development. i-com 10(3): 2–11. doi:10.1524/icom.2011.0026

Myers, B. A., S. E. Hudson and R. Pausch. 2000. Past, present, and future of user interface software tools. ACM Transactions on Computer-Human Interaction 7(1): 3–28. doi:10.1145/344949.344959

Paternò, F. 2013. End User Development: Survey of an Emerging Field for Empowering People. International Scholarly Research Notices 2013: 1–11. doi:10.1155/2013/532659

Paternò, F., C. Santoro and L. D. Spano. 2009. MARIA: A Universal, Declarative, Multiple Abstraction-Level Language for Service-Oriented Applications in Ubiquitous Environments. ACM Transactions on Computer-Human Interaction 16(4): 1–30. doi:10.1145/1614390.1614394

Paternò, F., C. Santoro and L. D. Spano. 2010. Exploiting web service annotations in model-based user interface development. In: Proceedings of the 2nd ACM SIGCHI Symposium on Engineering Interactive Computing Systems. ACM, pp. 219–224. doi:10.1145/1822018.1822053

Popp, R., H. Kaindl, S. Badalians, D. Raneburger and F. Paternò. 2014. Duality of task- and discourse-based interaction design for GUI generation. In: 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, pp. 3316–3321. doi:10.1109/SMC.2014.6974439

Popp, R., H. Kaindl and D. Raneburger. 2013. Connecting Interaction Models and Application Logic for Model-Driven Generation of Web-Based Graphical User Interfaces. In: 2013 20th Asia-Pacific Software Engineering Conference (APSEC). vol. 1, IEEE, pp. 215–222. doi:10.1109/APSEC.2013.38

Popp, R., D. Raneburger and H. Kaindl. 2013. Tool Support for Automated Multi-device GUI Generation from Discourse-based Communication Models. In: Proceedings of the 5th ACM SIGCHI Symposium on Engineering Interactive Computing Systems. EICS ’13, ACM, New York, pp. 154–150. doi:10.1145/2494603.2480334

Raneburger, D., H. Kaindl, R. Popp, V. Šajatović and A. Armbruster. 2014. A process for facilitating interaction design through automated GUI generation. In: Proceedings of the 29th Annual ACM Symposium on Applied Computing – SAC ’14. ACM Press, New York, NY, USA, pp. 1324–1330. doi:10.1145/2554850.2555053

Sánchez Ramón, O., J. Vanderdonckt and J. García Molina. 2013. Re-engineering graphical user interfaces from their resource files with UsiResourcer. In: Research Challenges in Information Science (RCIS), 2013 IEEE Seventh International Conference on, pp. 1–12. doi:10.1109/RCIS.2013.6577696

Searle, J. R. 1969. Speech Acts: An Essay in the Philosophy of Language. Cambridge University Press, Cambridge, England. doi:10.1017/CBO9781139173438

Trætteberg, H. 2008. A Hybrid Tool for User Interface Modeling and Prototyping. Computer-Aided Design of User Interfaces V: 215–230. doi:10.1007/978-1-4020-5820-2_18

Tran, V., J. Vanderdonckt, M. Kolp and S. Faulkner. 2009. Generating user interface from task, user and domain models. 2nd International Conference on Advances in Human-Oriented and Personalized Mechanisms, Technologies, and Services – CENTRIC 2009, pp. 19–26. doi:10.1109/CENTRIC.2009.24

Published Online: 2015-12-01
Published in Print: 2015-12-01

© 2015 Walter de Gruyter GmbH, Berlin/Boston
