
Model-driven latency analysis of distributed skills in automation networks with robots in the AI.Factory

  • Dominik Hujo-Lauer, Birgit Vogel-Heuser, Cedric Wagner, Jingyun Zhao and Josua Höfgen

Published/Copyright: May 7, 2025

Abstract

In future factories, flexible processes are dynamically deployed to heterogeneous automation technology and robotic systems. Designing such distributed systems is a significant challenge that often relies on a trial-and-error approach to software-hardware integration. This paper presents a model-based documentation of the latency of distributed skills in automation networks, analyzing their suitability for a given use case. The approach considers latencies caused by the execution of control code as well as by communication. Differing skill definitions in robotics and automation complicate the integration of robots into automation systems. This work therefore focuses on synchronizing these definitions to enable a holistic Input/Output latency analysis in heterogeneous systems. As proof of concept, the approach is used to model and analyze a use case in which a production plant and a robotic arm cooperate.

Zusammenfassung

Die Fabrik der Zukunft verlangt nach flexiblen Prozessen, bei denen Aufgaben dynamisch auf heterogene Systeme, bestehend aus Automatisierungstechnik und Robotik, verteilt werden. Die Gestaltung von verteilten Systemen stellt eine erhebliche Herausforderung dar, selbst in herkömmlichen statischen Szenarien, und stützt sich häufig auf einen Trial-and-Error-Ansatz bei der Integration von Hardware und Software. Ein modellbasierter Ansatz wird vorgestellt, der das Latenzverhalten von verteilten Skills in Automatisierungsnetzwerken dokumentiert, um deren Eignung für einen bestimmten Anwendungsfall zu analysieren. Der Ansatz berücksichtigt sowohl Latenzen, die durch die Ausführung von Steuerungscode verursacht werden, als auch solche, die durch die Kommunikation entstehen. Da abweichende Definitionen von Skills in der Robotik und der Automatisierung die Integration und Analyse von hybriden Steuerungsnetzwerken erschweren, liegt der Fokus des Ansatzes auf der Synchronisation dieser Definitionen. Damit soll eine ganzheitliche I/O-Analyse der hybriden Systeme ermöglicht werden. Als Machbarkeitsnachweis wird der entwickelte Ansatz verwendet, um einen Anwendungsfall an einem Demonstrator zu modellieren und zu analysieren, der die Zusammenarbeit zwischen einer Produktionsanlage und einem Roboterarm abbildet.

1 Introduction

The increasing demand for customized products and ever-growing market requirements amplify the need for digital, flexible, and innovative production automation. Integrating autonomous robots into classical industrial automation is beneficial for achieving a high degree of flexibility while retaining the capability to produce a vast range of products with high throughput. Although the integration of autonomous robots is promising for the flexibility of an automated production plant, its implementation is challenging and needs to be generalized. Various approaches for a plug-and-play integration solution exist, including integrating the Internet of Things (IoT) within an industrial context or using Multi-Agent Systems (MAS). While promising IoT solutions have already been implemented and continue to be explored in consumer sectors, building automation, and the energy industry, in automated manufacturing IoT often remains either a theoretical concept or is only applied in preliminary forms. Thus, detailed descriptions of industrial IoT implementations are rare [1]. Although examples demonstrate how MAS can be integrated into industrial production facilities, e.g., in [2], these implementations are primarily limited to academic applications. Additionally, the industrial automation domain inherently imposes specific challenges, such as timing, determinism, reliability, availability, and security. According to a 2021 McKinsey study [3], IoT holds a value potential of up to $13 trillion by 2030, with $3.3 trillion attributed to the manufacturing industry alone. This potential remains underutilized in production environments due to technological fragmentation and a lack of interoperability. Model-based design support can overcome these challenges arising from the heterogeneous technologies used in industrial automation systems by addressing several aspects of the integration of autonomous robots into demanding industrial environments. Among others, one critical aspect is securing the system’s real-time capability while using heterogeneous technologies in distributed networked control systems. To achieve this real-time capability, the requirements arising from the speed of the technical process must be matched with the latency properties of the system components. This system latency results from the execution of software skills, communication between controllers, and the latency of sensors, actuators, and network equipment [4].

This paper addresses design support for integrating distributed heterogeneous control systems by examining the execution and timing behavior of skills in production plants and robotic systems. Based on a domain-specific language (DSL) from [4], [5], the proposed concept enables the modeling of skill and communication latency from industrial automation and robotics in a unified way. The skill descriptions used are taken from the Capability, Skill, Service (CSS) approach [6], which defines capabilities, skills, and services in the context of industrial automation, and from [7], where robotic skills for manufacturing are defined. The main contribution of this paper is the adaptation of a unified skill description to the modeling approach in [5], enabling a model-based problem description of distributed heterogeneous control systems, which allows the system to be analyzed at runtime and even before it is set up. The analysis is achieved through modeling and the subsequent calculation of system latency based on established metrics, visualizing the currently active skills and analyzing the communication of their outputs. The DSL aims to support application engineers in deploying identified control functions to the different available controllers while considering requirements from the technical process, thereby achieving faster design phases and lower costs in control technology. A representative use case was selected to analyze the differences between the domains, which is further discussed in Section 2.

The remainder of the paper is structured as follows. First, an overview of common skill definitions in robotics and industrial automation from state-of-the-art literature is given, and a combined use case is presented. Then, Section 3 presents the state of the art in skill modeling and the timing of distributed systems. The main contribution is made in Section 4, where a combined skill representation and a modeling approach for the problem description of the latency analysis are presented. Section 5 gives the results of the modeling and analysis of the individual use cases, which are then discussed in Section 6. The paper concludes with a summary and an outlook in Section 7.

2 Skills and capabilities in robotics and industrial automation

The definition of a Skill varies between domains. In the robotics domain, the Task/Skill/Primitive framework (cp. Figure 1) [7] is considered, where a Skill is defined as a combination of primitives that are accessible to the user and can be executed by robots. In the automation domain, the Capabilities/Skills/Services (CSS) model (cp. Figure 2) [6] is used, where skills include details at the level of the implementation and the invocation of automation functions. The detailed definitions are described in the following.

Figure 1: Class diagram of robotic skills adapted from [7].

Figure 2: Class diagram of a skill from industrial automation adapted from [6].

2.1 Domain-specific definition of skills

The definition of a robotic Skill from [7] is shown in Figure 1; it is visualized and adapted so that the two definitions can be compared directly. The Skill provides a SkillInterface for its Preconditions and Parameters. The Preconditions are input from the CurrentState (the state of the system) and the Parameters from the overlying Tasks. At the center of the robotic Skill is the Execution, which is parametric: the same Execution occurs every time the Skill is executed, and only some parameters can be adjusted, affecting its characteristics but not the behavior of the Skill. A simple example is the drilling of a hole: the Execution of the Skill drilling stays the same, while, e.g., the depth can be adjusted by an input Parameter. A unique feature of robotic Skills is the constant monitoring of the Skill’s Execution. The Execution is continuously evaluated (ContinuousEvaluation) and checked to determine whether the Pre- and Postconditions and the Prediction are fulfilled. The Skill’s Execution results in a StateChange. In the robotic domain, a Skill is composed of DevicePrimitives, and Tasks are composed of Skills.
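To make this structure more tangible, the following C++ sketch mirrors the class diagram of Figure 1. It is purely illustrative: the class and method names paraphrase the diagram (DevicePrimitive, checkPreconditions, continuousEvaluation, etc.), and the trivial bodies are assumptions rather than an implementation from [7].

```cpp
#include <functional>
#include <utility>
#include <vector>

// Illustrative sketch of the Task/Skill/Primitive structure from Figure 1.
// All names are paraphrased from the class diagram, not taken from an implementation.
struct CurrentState { bool robotIdle = true; };          // state of the system (Precondition input)
struct Parameters   { std::vector<double> values; };      // provided by the overlying Task

struct DevicePrimitive {
    std::function<void(const Parameters&)> run;           // device-level primitive executed by the robot
};

class Skill {
public:
    explicit Skill(std::vector<DevicePrimitive> prims) : primitives_(std::move(prims)) {}

    bool checkPreconditions(const CurrentState& s) const { return s.robotIdle; }
    bool checkPostconditions(const CurrentState&) const { return true; }

    // Parametric Execution: the same primitive sequence every time; only
    // characteristics (e.g., a drilling depth) change with the Parameters.
    void execute(const Parameters& p, CurrentState& s) {
        for (const auto& prim : primitives_) {
            prim.run(p);
            continuousEvaluation(s);                       // ContinuousEvaluation during Execution
        }
        s.robotIdle = true;                                // the Execution results in a StateChange
    }

private:
    void continuousEvaluation(const CurrentState&) {}      // monitors Pre-/Postconditions and Prediction

    std::vector<DevicePrimitive> primitives_;              // a Skill is composed of DevicePrimitives
};

struct Task { std::vector<Skill> skills; };                 // a Task is composed of Skills

int main() {
    Skill drill({DevicePrimitive{[](const Parameters&) { /* move spindle */ }}});
    CurrentState state;
    if (drill.checkPreconditions(state)) {
        drill.execute(Parameters{{12.0 /* depth in mm */}}, state);
    }
    return drill.checkPostconditions(state) ? 0 : 1;
}
```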

Similar to the Skill definition from the robotics domain, a Skill in the automation domain consists of two different inputs, a StateMachine and Parameters, which are in this case exposed by a defined interface (cp. Figure 2). At first glance, differences such as the lack of pre- and postcondition checks and of continuous evaluation can be seen. These points are not as explicitly defined for industrial automation Skills as they are in the robotics domain. One of the biggest differences compared to the robotics domain, however, is the integration of Skills into the surrounding architectures. While in the robotics domain the Skill merely leads to a not further specified StateChange, the industrial automation Skill focuses directly on the Process: the Skill, which is provided by a specific resource, controls a technical process. There are further definitions in industrial automation, such as that of Köcher et al. [8], who link Capabilities, Skills, and Resources to a number of industry standards. Likewise, da Silva et al. [9] have already compared this concept with the Skill descriptions of heterogeneous robot systems. However, to enable the modeling and analysis of distributed heterogeneous control networks with collaborating robots, a uniform way of defining both robotic and industrial automation Skills is needed, containing the information required to match software to controller platforms.

2.2 Hybrid use case description

A use case is designed to demonstrate the communication and information exchange between the systems and to serve as the application example for the modeling and analysis approach to be developed and tested. The use case was chosen because of its hybrid nature, combining a six-axis robot with a typical example of a production plant. It comprises two scenarios: normal behavior and fault recovery. In the normal behavior scenario, the system operates as expected, with no interruptions or errors. If a fault occurs in the fault recovery scenario, a central agent intervenes by rescheduling tasks and reallocating responsibilities. This ensures the continuity of the process and highlights the system’s resilience and adaptability. The scenarios combine a typical industrial automation use case with a six-axis robotic arm, focusing on material handling operations.

The use case is a yogurt production plant demonstrator that consists of a process part and an intralogistics part. The investigation focuses on the intralogistics of the plant, shown in Figure 3 (conveyors C1 to C4 and switches S1 to S3). The resources comprise four conveyors, each with a light barrier at its beginning and end, and three switches that can rotate 360° to route yogurt bottles to a specific target. The resources are hierarchically ordered in three Conveyor Groups (CG): CG1 consists of conveyor C1 and switch S1, CG10 consists of conveyors C2 and C3 and switch S2, and CG9 consists of switch S3 and conveyor C4. The skills involved in this use case are listed in Table 1 and further described in Section 4. An empty bottle enters the system at the beginning of C1 and is detected by the first light barrier. The bottle is then transported to switch S1, where the skill DirectionControl_CG1 decides in which direction the bottle should be delivered. After the switch has turned in this direction, the bottle continues to move through the transportation system in the same way until it reaches the output slot at the end of C4. This is the normal operation. The fault recovery case within the hybrid system is a broken switch S3: the bottle would either get stuck or be redirected. The use case additionally involves a robot that handles yogurt bottles from the yogurt plant’s storage and places them into the logistics system, cp. Figure 3. Three agents are involved in handling the process: the Central.AI, the MyJoghurt plant, which consists of the storage and logistics system, and a Franka Emika robot. The scenarios (normal behavior and fault recovery) are described in the sequence diagram in Figure 4. The first ten messages are the same for the normal behavior and the fault recovery scenario. The Central.AI provides the recipe to the MyJoghurt plant and the coordinates the Franka robot needs to know the insertion and clearance point as the interaction point. Afterward, the MyJoghurt plant and the Franka robot communicate directly, resulting in the Franka providing a bottle at the entrance.

Figure 3: Yogurt production with Franka and logistics system of MyJoghurt.

Table 1: Skills with attributes from the MyJoghurt use case. Each entry lists the skill name (with the number of instances in parentheses), a description, the interface (state machine and parameters), and the I/O (inputs and outputs).

Driver_JVL (7)
  Description: Velocity, position, and acceleration control of JVL motors
  State machine: INIT, RUN_pos, RUN_vel, STOP
  Parameters: Skill init.: "Dinterface.Initialized"; Skill mode: "Dinterface.Mode"; Motor op. mode: "Dinterface.iOp"; M. pos.: "Dinterface.Pos_ist"; M. state: "Dinterface.error"; M. target pos.: "Dinterface.Pos_soll"; M. target vel.: "Dinterface.Speed_soll"; M. target acc.: "Dinterface.Acc_soll"
  Inputs: G.iC3_*X*M, G.iS3_*X*M
  Outputs: G.oC3_*X*M, G.oS3_*X*M

CE_Conveyor (4)
  Description: Controls a conveyor movement based on the target of a bottle
  State machine: INIT, STOP, RUN
  Parameters: Skill mode: "CGinterface.MODES"; Skill init: "CGinterface.C_init"; LB left: "CGinterface.C_LBL"; LB right: "CGinterface.C_LBR"; Direction: "CGinterface.C_move"; M. init: "Dinterface.INITIALIZED"
  Inputs: G.C3_*X*LBl, G.C3_*X*LBr
  Outputs: n.a.

CE_Switch (3)
  Description: Controls a switch movement based on the target of a bottle
  State machine: INIT_Auto, Run_Auto, Manual, STOP
  Parameters: Skill init: "CGinterface.S_Init"; Skill mode: "CGinterface.MODES"; Switch ready: "CGinterface.S_Ready"; Switch direction: "CGinterface.S_Dir"; "CGinterface.mSwitchData.reset"; LB status: "CGinterface.S_LB"
  Inputs: G.S3_*X*LB, G.S3_*X*D
  Outputs: n.a.

DirectionControl_CG (3)
  Description: Defines the direction of the bottle movement based on the bottle target for the CG1
  State machine: AUTO, CLEAR, STOP, MANUAL
  Parameters: Skill init: "CSinterface.CG1_Init"; Skill mode: "CSinterface.Mode"; CG ready: "CSinterface.Group1_ready"
  Inputs: n.a.
  Outputs: n.a.

MoveToSafePose
  Description: Move robot to (predefined) safe position
  State machine: Control (Franka control loop)
  Parameters: n.a.
  Inputs: n.a.
  Outputs: n.a.

MoveCartesian
  Description: Move robot to target position (x, y, z)
  State machine: Control (Franka control loop)
  Parameters: Target coords: "targetCoordinates"; Time: "movementTime"
  Inputs: n.a.
  Outputs: n.a.

MoveGripper
  Description: Move gripper fingers to specified width
  State machine: Move (Franka control loop)
  Parameters: Width: "width"; Speed: "speed"
  Inputs: n.a.
  Outputs: n.a.

Grasp
  Description: Controls the gripping sequence after moving into grasp distance
  State machine: Grasp (Franka control loop)
  Parameters: Width: "graspWidth"; Speed: "graspSpeed"; Force: "graspForce"
  Inputs: n.a.
  Outputs: n.a.

HomeGripper
  Description: Homing after changing grippers
  State machine: Homing (Franka control loop)
  Parameters: n.a.
  Inputs: n.a.
  Outputs: n.a.

MoveCartesianWHGT
  Description: Move robot to target position with (half) turn of gripper
  State machine: Control (Franka control loop)
  Parameters: Target coords: "targetCoordinates"; Time: "movementTime"
  Inputs: n.a.
  Outputs: n.a.

MitigateSwitch
  Description: Clear bottle from failing position at switch
  State machine: Sequence of skills (execution)
  Parameters: n.a.
  Inputs: n.a.
  Outputs: n.a.

ProvideBottles
  Description: Provide bottles from predefined positions
  State machine: Sequence of skills (execution)
  Parameters: n.a.
  Inputs: n.a.
  Outputs: n.a.

ClearBottles
  Description: Clear bottles to predefined positions
  State machine: Sequence of skills (execution)
  Parameters: n.a.
  Inputs: n.a.
  Outputs: n.a.
Figure 4: Sequence diagram of the hybrid use case, including normal behavior and fault recovery scenario.

In the normal scenario (Figure 4, alt: [Scenario: normal]), the Franka robot retrieves empty yogurt bottles from the MyJoghurt bottle storage area and places them onto the logistics system’s input conveyor (C1). The bottles are transported along a predefined conveyor path, C1 → S1 → C2 → S2 → C3 → S3 → C4, which is controlled by the switches at various points. As the process technology part of the plant is not considered here, the filling of yogurt into the bottles is assumed to take place between S1 and S2. Once the bottles reach the output conveyor (C4), the robot transfers the filled bottles to the storage area for finished products. The Central.AI, which operates on a separate computer, manages the data from the other two agents, including layout and real-time sensor information.

In the fault recovery scenario (cp. Figure 4, alt: [Scenario: mitigation of failed switch]), switch S3 is defective and unable to transport the yogurt bottles to C4. The Central.AI is informed of the fault by MyJoghurt and orchestrates a mitigation plan, instructing the robot to retrieve the bottles from C3 and place them in the finished product storage area, ensuring that the production process continues without interruption. The unique feature here is that the communication between the MyJoghurt system and the Franka robot can continue as usual. The Central.AI sends the Franka robot new fixed coordinates for the pick-up point of the bottle, which is now at the light barrier at the end of C3. Central.AI uses the information from MyJoghurt, i.e. the bottle position at the end of C3, to calculate a six-dimensional pose for the pick-up point of the Franka robot.

3 State of the art in model-based skill definition and latency analysis in distributed heterogeneous control systems

Several relevant topics must be considered to contextualize model-based skill and latency analysis. In industrial automation and other domains, description languages exist to characterize latencies; however, these are often not sufficient to efficiently identify the appropriate attributes across domains. Equally important are the precise definition and documentation of skills and their properties, and the design of benchmarks for software and communication.

3.1 Modeling of timing behavior in distributed CPS and robots

An example of modeling and analyzing real-time systems is the MARTE [10] standard developed by the Object Management Group (OMG). MARTE extends the Unified Modeling Language (UML) to provide broad-spectrum coverage of real-time system modeling. In the automotive domain, Mubeen et al. [11] have introduced an approach to evaluate end-to-end timing behavior in control networks. By developing plug-ins for the AUTOSAR standard, they enable early-stage design analysis. Nevertheless, due to the high degree of standardization and cloud connectivity in automotive systems, this approach has limitations that hinder its direct applicability to industrial automation. A timing-aware, model-driven development approach for automotive software is presented by Bucaioni et al. [12]. This method integrates design and implementation by establishing a detailed relationship between EAST-ADL [13] and a commercial development tool. It supports early-phase timing analysis through the automatic allocation of software to hardware and facilitates the phases of design, deployment, and verification. Expanding on this, Marinescu and Enoiu [14] extend the EAST-ADL approach to include the modeling of functions, execution timing, and resource usage. In another effort, Eder et al. [15] explore automated hardware design solutions by deploying software tasks through a DSL. Their methodology incorporates three key perspectives: modeling, language design, and exploration automation. These viewpoints address the heterogeneous technologies inherent in modern automation systems, highlighting the need for a more cohesive framework to bridge design and implementation in industrial contexts. These works collectively highlight advancements in timing-aware modeling and analysis across various domains. However, the transition of these methodologies from automotive to industrial automation systems remains an ongoing challenge, primarily due to differences in system architectures and domain-specific requirements.

3.2 Modeling of skills in industrial automation and robotics

The definition of skills is a well-researched field, especially in the robotics domain and for different use cases. A brief overview of skill definitions was given in Section 2. Two of the approaches on which this work is based are the CSS model from [6] and the definition of robotic skills in manufacturing from [7]. Köcher et al. [8] link the definition of capabilities and skills to a number of industry standards. Likewise, da Silva et al. [9] have already compared this concept with the skill descriptions of heterogeneous robotic systems. A formalized description of skills is presented by Lesire et al. [16] to derive operational models; the focus of this work is specifically on robotic skills. In the field of Cyber-Physical Production Systems (CPPS), a skill definition is given by Gonnermann et al. [17]; their approach focuses on sensorial skills to enable the automated setup of process monitoring. A multitude of works thus address skills. However, they lack a software-oriented view of skills in heterogeneous systems that would enable a timing analysis of controller and communication load.

3.3 Latency analysis in distributed controlled networks of robot systems and CPS

Latency analysis for software can be conducted at differing granularity levels, depending on the level of detail required. Coarse-grained approaches measure execution times at the function level, while fine-grained methods can capture the execution time of individual code statements [18], [19]. During runtime, time monitoring can be categorized into non-intrusive and intrusive methods [19]. Non-intrusive methods primarily focus on network-related parameters such as congestion, while intrusive methods involve embedding measurement instructions directly into the source code, enabling real-time monitoring of software states [19]. Tools like Code-TEST [20] provide time-monitoring functionalities in addition to the internal timing mechanisms of devices. However, such tools are often vendor- or operating-system-specific, which restricts their portability [21]. In the domain of machine learning (ML) and edge computing benchmarks, existing approaches like Tiny Benchmark [22], AIBench [23], and AIoT Bench [24] predominantly rely on operating-system-based timing functions. For Programmable Logic Controllers (PLCs), Iqbal et al. [25] propose a benchmarking methodology aligned with the IEC 61131-3 standard. This approach leverages vendor-specific system counters to measure program execution times, capturing metrics such as the last, minimum, and maximum PLC scan times. Some approaches estimate Worst-Case Execution Times (WCETs) using probability distributions derived from collected measurements [26]. A vendor-independent and generalizable benchmark applicable across different devices was developed by Rupprecht et al. [27]. The measurement of latency between agents in the context of automation and robot cooperation was carried out by Vogel-Heuser et al. in [28] and [29]. In summary, most benchmarks employ specific time-measurement methods without thoroughly examining their characteristics or ensuring their portability across different devices. Furthermore, the details of the measurement techniques are often not explicitly documented, and they typically exclude PLCs, limiting compatibility across benchmarks.

4 Integrating heterogeneous definitions of skills in a model-based analysis approach

To comprehensively represent the characteristics of skill-to-hardware allocation, three essential perspectives are captured: the hardware perspective, which documents attributes of control technologies; the functional perspective, which describes the capabilities; and the software perspective, which maps the skills. In this section, a consolidated skill concept from robotics and industrial automation is initially developed and then integrated into the software view of the DSL based on Hujo et al. [5].

4.1 Consolidated skill concept

A consolidated representation of skills is to be developed to enable the efficient modeling of heterogeneous systems and to standardize the representation of code snippets for uniform analysis. To achieve this, two code examples – one from the field of robotics and another from industrial automation – will first be analyzed and compared with the two previously outlined skill models. Two types of skills are considered in this context to ensure generalizability. Since, in most cases, skills are not merely arranged in a purely sequential manner but can also be represented hierarchically, both a basic skill and a composite skill are examined.

A basic skill from the industrial automation use case is a motor driver for one of the actuators in the MyJoghurt system. This skill represents the most elementary level of operation and directly controls the motor outputs. In the structure of the function block (cp. Figure 5), the states serving as the interface for the skill are evident. These states are INIT, RUN_pos, RUN_vel, and STOP. The commonly encountered functions INIT and STOP align with the definition of the OMAC state machine [30], as specified in PackML, which is widely utilized in industrial automation. However, it becomes apparent that the state machine is not consistently applied across all levels, as the states RUN_pos and RUN_vel are customized states. This discrepancy does not play a significant role at the abstract level of skill representation, as it is clear that these non-standardized code snippets can still be associated with the interface state machine. The interface variables and the code modules within the states RUN_pos and RUN_vel control the motor based on either the target position or the target velocity. Parameters are passed to these states in compliance with the definition of the parameter interface. A more granular analysis reveals that additional state machines exist within the skill, which process individual operations step-by-step. These state machines implement a form of pre- and postcondition verification through their transition conditions. This description applies to both robotic skills and industrial automation skills. However, these internal state machines do not impact the modeling of latency times within the system. For this reason, this part is represented as the execution layer and is not further considered regarding pre-/postconditions or additional skill evaluation. If it becomes necessary to distribute the continuous evaluation of the skill to other skills, this can be easily exposed through the interface and optionally modeled as a validation interface. In this case, a prediction state is not present. In the context of industrial automation, such a state could ideally be invoked externally and, in the sense of a digital twin, facilitate the comparison between the predicted and the actual execution of the skill.

Figure 5: Control code of Driver_JVL skill from the use case.
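Figure 5 shows the actual IEC 61131-3 function block. As an illustration of the interface-driven state-machine pattern it follows, the sketch below renders one cyclic call of such a basic skill in C++; the structure and variable names only mirror the Driver_JVL interface listed in Table 1, while the logic inside the states is a placeholder assumption, not the plant code.

```cpp
#include <cstdint>

// Hypothetical C++ rendering of the interface-driven skill pattern of Figure 5.
// The real Driver_JVL skill is an IEC 61131-3 function block; names below only
// mirror the parameter interface from Table 1.
enum class DriverMode : std::uint8_t { INIT, RUN_pos, RUN_vel, STOP };

struct DriverInterface {                       // "Dinterface" in Table 1
    bool initialized   = false;                // Dinterface.Initialized
    DriverMode mode    = DriverMode::INIT;     // Dinterface.Mode
    double posActual   = 0.0;                  // Dinterface.Pos_ist
    double posTarget   = 0.0;                  // Dinterface.Pos_soll
    double velTarget   = 0.0;                  // Dinterface.Speed_soll
    bool error         = false;                // Dinterface.error
};

struct MotorOutputs { double command = 0.0; }; // maps to G.oC3_*X*M / G.oS3_*X*M

// One cyclic call of the skill body: only the outer state machine is modeled;
// the step chains inside RUN_pos/RUN_vel (the execution layer) are omitted.
void driverSkillCycle(DriverInterface& itf, MotorOutputs& out) {
    switch (itf.mode) {
    case DriverMode::INIT:
        itf.initialized = true;                // initialize the motor, then report readiness
        break;
    case DriverMode::RUN_pos:                  // customized state: position control
        out.command = itf.posTarget - itf.posActual;
        break;
    case DriverMode::RUN_vel:                  // customized state: velocity control
        out.command = itf.velTarget;
        break;
    case DriverMode::STOP:
        out.command = 0.0;
        break;
    }
}

int main() {
    DriverInterface itf; MotorOutputs out;
    itf.mode = DriverMode::RUN_vel; itf.velTarget = 0.5;
    driverSkillCycle(itf, out);                // one 10 ms cycle worth of skill execution
    return out.command == 0.5 ? 0 : 1;
}
```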

A hierarchically higher-level skill, such as controlling an entire conveyor group in industrial automation, serves as an example and is shown in Figure 6. The interfaces also contain a state machine (cp. Figure 6, CS_Interface: Mode), but the primary distinction from the basic skill lies in the parameters. The I/O connections for sensor and actuator integration are defined at this level. The state machine perspective is primarily used as a parameter-passing mechanism rather than a fully implemented interface. Therefore, the definition that a state machine must exist at this level is only partially accurate. Instead, it is more appropriate to say that the individual skills operate within the execution layer, while the higher-level skill only indirectly features a state machine. The parameters passed to this skill, in addition to the current state, include initialization conditions for all units of the transport system, the bottle number being requested, and three arrays for each bottle present. These arrays specify the target for the bottle, the position of the bottle, and the desired route for the bottle. Specific variables are defined for communication with the robotic system. For instance, Group1_ready indicates whether the MyJoghurt system is ready to receive a bottle, and G10_request signals whether a bottle is ready for pickup by the robot. If the bottle is removed by the robot, the clearance can be communicated via the variable bottleRemoved. The remaining parameters are utilized for a special state in which the system is initially cleared of bottles; in this case, these parameters have no further impact on the operating state. It can be stated that the hierarchically higher-level skill, although composed of basic skills that are state machines and thus inherently possess pre- and postconditions, does not explicitly reflect these conditions at the higher skill level. For this reason, it cannot be assumed that higher-level skills necessarily implement pre- and postcondition checks or adhere to a state machine interface. Nevertheless, when the skill is displayed in an expanded representation, it can serve as a container with defined inputs. If it is presented in a collapsed form, the state machine and parameters should be inherited in a specific manner.

Figure 6: Control code of the Conveyor_Group skill from the industrial automation example.

In contrast to the IEC 61131-3 conform implementations of the automation skills, the skills of the robotic arm are written in C++ and are structured differently. Figure 7 shows an example of a basic skill for the Franka robot. The skill depicted here is used to move the robot within a Cartesian coordinate system and is equivalent to a function. Input parameters include the variables targetCoordinates, provided as a vector, and movementTime. Unlike explicit state machine control, the interface is not directly available but is invoked by the higher-level code within a specific state. The concept of continuous monitoring during the execution of a skill is generally applicable to the Franka robot. However, it is implemented in our example only at the lower-level skills, which control the drivers of the Franka robot. Therefore, the representation of continuous monitoring can be achieved by inheriting the properties of the lower-level skills into the higher-level ones or by modeling this feature as optional in the system design. A check for a postcondition is nevertheless implemented here, where the timer variable – and thus the execution time of the skill – is monitored. Upon completing the skill, the state change is communicated back, ensuring proper synchronization. For copyright reasons, the individual driver libraries cannot be displayed here, as these are not custom-written drivers, in contrast to our industrial automation scenario. However, the structure differs slightly from that of PLC programming. In this context, an output stream is defined for communication with the lower-level robot controller, whereas in PLC programming, variables are directly mapped to the controller outputs. Nevertheless, the basic skills can be assigned here so that it is clearly defined when a skill actuates an actuator or reads a sensor. This abstraction layer can thus be effectively utilized for system modeling.

Figure 7: Base skill MoveCartesian of the Franka robot.
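Since the original driver libraries cannot be reproduced here, the following simplified C++ sketch illustrates the pattern visible in Figure 7: a function-like basic skill taking targetCoordinates and movementTime as parameters, streaming setpoints to the lower-level controller, and checking the elapsed time as a postcondition. The helper sendCartesianTarget() is a hypothetical placeholder, not the Franka control interface.

```cpp
#include <array>
#include <chrono>
#include <thread>

// Simplified sketch of a MoveCartesian-style basic skill (cf. Figure 7).
// sendCartesianTarget() is a hypothetical stand-in for the lower-level
// robot-controller interface; the actual skill runs in the Franka control loop.
static void sendCartesianTarget(const std::array<double, 3>& /*xyz*/) { /* stream setpoint */ }

bool moveCartesian(const std::array<double, 3>& targetCoordinates, double movementTime) {
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();

    // Execution: stream setpoints toward the target until the movement time has elapsed.
    while (std::chrono::duration<double>(clock::now() - start).count() < movementTime) {
        sendCartesianTarget(targetCoordinates);
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }

    // Postcondition check: the monitored timer must have reached the requested time.
    const double elapsed = std::chrono::duration<double>(clock::now() - start).count();
    return elapsed >= movementTime;                      // state change reported back to the caller
}

int main() {
    return moveCartesian({0.4, 0.0, 0.3}, 0.05) ? 0 : 1; // 50 ms dummy movement
}
```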

Figure 8 proposes a consolidated version of the skill descriptions, combining both approaches from [6], [7]. The illustration is based on the CSS definition from [6], with all deviations from this standard highlighted in blue. The CSS framework was utilized to maintain the relationship between Resource, Capability, and Technical Process, as these elements are of significant importance in the combined consideration of industrial automation and robotics. The goal of this consolidation is not to redefine skills for implementation within generalized programming frameworks or concepts, but rather to abstract the definition in such a way that, irrespective of the actual implementation – such as those depicted in Figures 5–7 – the classification of control code within the analysis concept can be achieved.

Figure 8: Class diagram of the consolidated skill description.

From this objective, the first modification of the model emerges: the introduction of an interface linking the skill definition to the analysis. This interface enables the inclusion of skill parameters that can be utilized not only during the operation phase but also for analysis, automated deployment, or other applications. The Analysis is enabled by several elements of the skill, including the Parameters that are input from a Property, the newly introduced class Metrics, and the likewise newly introduced I/O-Connections. The I/O-Connection class was introduced to address the unclear interaction (controls) between skills and processes. While the I/Os are generally determined by their assignment to a resource, they are not yet explicitly defined enough to enable a distributed analysis. For instance, if a skill is located on a controller that is not directly connected to the I/O variable, additional latency arises due to the communication required with the corresponding resource. The Metric class was introduced to analyze the code embedded within a skill, focusing on properties such as execution time, memory usage, and other characteristics. Metrics can be defined and extended as needed for the specific application under analysis. Additionally, any number of metrics can be associated with a single execution, allowing for a flexible and comprehensive evaluation. The Execution concept was adopted from the robotic skill domain, recognizing that, particularly in robotics, not every skill is associated with a StateMachine. For this reason, the StateMachine is implemented as an optional component. While optional, the StateMachine can still serve as an interface and be used to support the continuous evaluation of robotic skills – an aspect not explicitly defined in CSS. It can also be employed for evaluating specific conditions, such as preconditions, postconditions, and predictions, thereby enhancing the flexibility and applicability of the framework. For this purpose, the preconditions, postconditions, and predictions defined in the robotic skill are transformed into StateChangeConditions. This ensures that the evaluation of skill execution remains consistently described and seamlessly integrated into the framework. The final modification lies in the composition of skills. As demonstrated in the example of IEC 61131-3 code, an overarching skill can be composed of multiple skills. Since the terms task and primitive have slightly different meanings in PLC programming, the model instead allows a skill to be defined as a composition of multiple skills. This adaptation ensures compatibility and reflects the hierarchical structure commonly observed in industrial automation scenarios.
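To make the consolidated description more concrete, the following C++ sketch expresses Figure 8 as a data model. The class and attribute names follow the diagram (Parameter, Metric, I/O-Connection, StateChangeCondition, optional StateMachine, Execution, skill composition); the concrete types and the example instances in main are illustrative assumptions only.

```cpp
#include <optional>
#include <string>
#include <vector>

// Data-model sketch of the consolidated skill description (Figure 8).
// Class names follow the diagram; concrete types are illustrative assumptions.
struct Parameter            { std::string name, source; };        // input from a Property
struct Metric               { std::string name; double value; };   // e.g., execution time, memory usage
struct IOConnection         { std::string variable; bool isInput; };// explicit link between skill and process I/O
struct StateChangeCondition { std::string expression; };            // pre-/postconditions and predictions

struct StateMachine {                                               // optional interface element
    std::vector<std::string> states;                                // e.g., INIT, RUN, STOP
    std::vector<StateChangeCondition> changeConditions;
};

struct Execution {                                                  // adopted from the robotic skill domain
    std::vector<Metric> metrics;                                    // any number of metrics per execution
};

struct Skill {
    std::string name;
    std::vector<Parameter>    parameters;                           // exposed via the interface
    std::vector<IOConnection> ioConnections;                        // enables a distributed latency analysis
    std::optional<StateMachine> stateMachine;                       // optional: not every robotic skill has one
    Execution execution;
    std::vector<Skill> composedOf;                                  // a skill may be composed of skills
};

int main() {
    Skill driver{"Driver_JVL",
                 {{"Skill mode", "Dinterface.Mode"}},
                 {{"G.oC3_*X*M", false}},
                 StateMachine{{"INIT", "RUN_pos", "RUN_vel", "STOP"}, {}},
                 {}, {}};
    Skill conveyorGroup{"DirectionControl_CG1", {}, {}, std::nullopt, {}, {driver}};
    return conveyorGroup.composedOf.size() == 1 ? 0 : 1;
}
```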

Based on the definition presented in Figure 8, Table 1 identifies and categorizes the existing skills of the use case. Column 1 also includes the number of instances of each skill type, as a skill can appear multiple times as instances without any code modifications. This is particularly evident in the example of a driver skill: the skill Driver_JVL (cp. Table 1) always performs the same task, controlling the motor based on the inputs from the skills CE_Conveyor or CE_Switch; only the inputs, outputs, and interface parameters differ. The table includes skill names, descriptions, interfaces, and I/Os. The state change conditions are not explicitly listed, as they do not play a significant role in the analysis of timing behavior in this specific case. A small example can, however, be found in the Franka basic skill MoveCartesian, where a continuous time query is performed.

4.2 Integration of the consolidated skill concept

Based on the presented consolidated skill concept, an integration into the existing approach by Hujo et al. [5] is carried out. Figure 9 provides an overview of the key symbols used to represent software properties, particularly with regard to timing behavior in PLCs and other cyclically operating controllers. The DSL consists of skills, controller boxes, a communication annotation element, and three different types of connectors that can interlink the skills. The idea here is to represent the distribution of skills across different controllers and their communication with one another, whether within a single controller or between two controllers, which introduces additional latency. Figure 10 shows the revised skill symbol, incorporating all elements listed in Figure 8 (derived from the skill description given in Table 1), which can be used for modeling. It is important to note that some fields, as described in the abstract definition of the skill, are optional. An example of this is the ChangeCondition, which can be assigned to a state. An excerpt of the skill implementation for the use case, based on these symbols, is presented in the following section, explaining how they are used.

Figure 9: List of elements for the DSL from [5].

Figure 10: Skill element based on the proposed concept.

5 Modeling and analysis of the presented use cases

In this section, the experimental setup for quantifying the latencies caused by control code execution is presented, and the model of the use case is created using the symbols given in Figure 9, extended by the skill symbol from Figure 10. Rather than just being executed sequentially, skills may be arranged in hierarchies in order to implement capability requirements on different levels of abstraction of distributed heterogeneous control networks, as mentioned in the previous sections. Especially on lower levels of the hierarchy, skills are often coupled to hardware I/Os and control logic, offering potential for generalization that leads to highly modularized implementations and increased code reusability. As a result, such skills can be instantiated multiple times without further code modifications, being controlled and parametrized for their isolated task within the system via the interface of the skill. The MyJoghurt system from the use case serves as a representative example of an application from the industrial automation domain with a hierarchical implementation of skills. The skills presented in Table 1 are ordered from lower-level to higher-level implementation logic, showing the following tendency: lower-level skills are instantiated multiple times due to their higher generalizability, whereas higher-level skills, even when sharing large parts of their implementation, serve more specific requirements in the system and therefore have to be modeled and implemented as unique skills. This can exemplarily be seen in Figure 11, where the Driver_JVL skill is used several times with only changed parameters, while the CGroup skill is built from different skills for each group individually.

Figure 11: Excerpt of the modeled use case including one conveyor group and the provided skill of the Franka robot.

The areas of interest for this case study are derived based on these considerations: evaluation of the execution time for skills from different levels of the hierarchy, and evaluation of the execution time for instances of the same skill, sharing identical code with different parametrization. Figure 11 shows an excerpt of the modeled use case from Section 2. A CX2040 PLC is used with cyclic execution of the control code (cp. Figure 11, first ControllerBox), while the PC controlling the Franka robot operates with sequential execution (cp. Figure 11, second ControllerBox). Through the unified representation of skills, it is now possible to depict both the cyclic and sequential processes together, as well as identify which skills need to communicate with each other to initiate the process. In the PLC, the times required for each skill per cyclic call were measured and documented, while in the Franka control system, the execution time of the individual skills was recorded. The measured skill execution times are analyzed and discussed in Section 6. The measurement of the time to estimate the latency of the control code execution is implemented for each skill in the same way and requires several steps. First, timestamps for the exact start and end time of the skill execution must be generated. For this purpose, a built-in function of the PLC is used that returns the current time from a distributed clock with a resolution of 100 ns. Since the skills are implemented as state machines that are controlled and parameterized through interfaces, the methods responsible for executing a skill’s functionality are invoked solely from within the state machine of the corresponding function block. Each skill is, therefore, encapsulated as an individual function block, which is called cyclically by another function block acting as a container. This design prevents the use of nested function blocks for skills. Consequently, the execution time of an (isolated) skill can be estimated by measuring the start and end times around the body of its associated function block, which primarily consists of the state machine. Second, the time difference is calculated and added to a buffer in each cycle, i.e., once every 10 ms. The case study experiment consists of 10,000 consecutive cycles. The execution time of the Franka control system, on the other hand, is measured by wrapping each basic skill with a C++ built-in timing function and corresponds to the total time required for executing the skill. Due to total execution times of higher-level skills in the order of magnitude of minutes, each of these skills – consisting of a sequence of basic skills – has been measured 50 times. In order to be able to evaluate the basic skills within their execution context, i.e., skill parameters and the state of the system defined by preceding skills, each basic skill has been assigned a unique ID, consisting of the higher-level skill name, the basic skill name, and a number representing the order within the sequence.
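The Franka-side instrumentation described above can be sketched as follows: each basic skill call is wrapped with a std::chrono timing measurement and stored under its unique ID. The function and variable names are illustrative and not taken from the actual measurement code; the PLC-side measurement relies on the controller's built-in distributed clock and is therefore not reproduced here.

```cpp
#include <chrono>
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Sketch of the Franka-side timing instrumentation: each basic skill call is
// wrapped with std::chrono and stored under a unique ID of the form
// <higher-level skill>.<basic skill>.<sequence number>. Names are illustrative.
std::map<std::string, std::vector<double>> executionTimes;   // seconds per skill ID

double timedSkillCall(const std::string& skillId, const std::function<void()>& basicSkill) {
    const auto start = std::chrono::steady_clock::now();
    basicSkill();                                             // e.g., MoveCartesian, MoveGripper, ...
    const auto end = std::chrono::steady_clock::now();
    const double dt = std::chrono::duration<double>(end - start).count();
    executionTimes[skillId].push_back(dt);                    // one of the 50 repetitions per skill
    return dt;
}

int main() {
    // Dummy basic skill standing in for a robot motion executed via the Franka control loop.
    auto dummyMove = [] { /* execute motion */ };
    for (int run = 0; run < 50; ++run) {
        timedSkillCall("ProvideBottles.MoveCartesian.2", dummyMove);
    }
    std::cout << executionTimes["ProvideBottles.MoveCartesian.2"].size() << " samples\n";
    return 0;
}
```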

The latency analysis should show whether the requirements derived from the technical process and the latency properties of the system match and whether the process is feasible to operate. On the one hand, it can be analyzed whether a PLC task can handle the number of skills executed. This analysis focuses on the cyclic execution of control code. On the left side of Figure 11, an example of this cyclic controller execution is given. The skills included within this controller are benchmarked, and the maximum execution time for one cycle is given. The connectors state whether the skills on the controller are executed in parallel (Parallel Skill Connector or Parallel Skill Illustrator) or in sequential order (Sequential Skill Connector). This is important for calculating the overall code execution time within one cycle. If skills are modeled in sequence, they are not executed within the same cycle and, therefore, must not be added to the overall per-cycle execution time. If a skill is modeled with parallel execution, its time is added to the overall execution time. Communication between skills is not explicitly measured. The communication between the two shown controllers was measured at ca. 420 µs, as stated in the octagonal annotation element on the sequential connector between the skills DirectionControl1 and HomeGripper.
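This per-cycle aggregation rule can be sketched as follows: only skills linked by parallel connectors contribute to the load of a single cycle, and the resulting sum is compared against the configured cycle time. The helper structure is purely illustrative; the example values correspond to the model excerpt evaluated in Section 6.

```cpp
#include <string>
#include <vector>

// Illustrative aggregation rule for the cyclic controller (left side of Figure 11):
// skills connected in parallel are executed within the same cycle and their
// benchmarked maximum times are summed; sequentially connected skills span
// several cycles and are not added to the per-cycle load.
enum class Connector { Parallel, Sequential };

struct ModeledSkill {
    std::string name;
    double maxInnerCycleTimeNs;   // benchmarked metric from the skill symbol
    Connector connector;
};

double perCycleLoadNs(const std::vector<ModeledSkill>& skills) {
    double sum = 0.0;
    for (const auto& s : skills) {
        if (s.connector == Connector::Parallel) sum += s.maxInnerCycleTimeNs;
    }
    return sum;
}

int main() {
    const std::vector<ModeledSkill> excerpt = {
        {"CE_S31",     2500.0, Connector::Parallel},
        {"CE_C32",      200.0, Connector::Parallel},
        {"Driver_S31",  300.0, Connector::Parallel},
        {"Driver_C32",  500.0, Connector::Parallel},
    };
    const double cycleTimeNs = 10e6;                       // 10 ms PLC cycle
    return perCycleLoadNs(excerpt) < cycleTimeNs ? 0 : 1;  // 3,500 ns << 10 ms
}
```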

Another analysis can be done for the sequentially executing controller. In order to ensure a high throughput of the machine, the overall interaction between the two systems, the CX2040 and the Franka control, should not take longer than 20 s. If the process takes longer than 50 s, the queue on the conveyor of the MyJoghurt plant can also overflow. This leads to two requirements whose fulfillment can be calculated based on the skill execution and the communication between the CX2040 and the Franka control. Therefore, the sensor variable that triggers the process is identified (cp. Figure 11, Sensors (within a skill)), and the signal flow is followed until the process is finished. All the latencies that occur in between are then added. Additionally, the hardware view of the DSL is used for the full description of latency in the system, which is not demonstrated in this paper. The calculation of the two analyses is given in further detail in Section 6.

6 Discussion of the results

This section gives an overview of how the analysis of the modeled use case is carried out. The latency of the control code execution of the skills during the operating state, i.e., the AUTO/RUN mode of the corresponding skill state machines, is the focus of this case study. Therefore, a subset of around 7,000 consecutive cycles (∼70 s) is selected for evaluation, covering the production cycles of three bottles and excluding the first production cycle that directly followed the INIT sequence. The numerical results are shown as summary statistics and box plots in Figures 12 and 13, respectively. It becomes evident that the order of magnitude of the control execution latency is tightly coupled to the skill (implementation), whereas the execution times of parametrized skill instances are in a similar range. However, the instance “CE_S32” of skill “CE_Switch” seems to be an exception to this assumption, having almost double the average execution time compared to the other two instances of that skill. Regarding the spread of values, it should be noted that within the scope of 7,038 cycles, all skills contain a certain amount of outliers, defined as values exceeding the third quartile by more than 1.5 times the interquartile range. It can also be observed that for all skills with average execution times below 200 ns, the minimum time difference has been measured as 0 ns.
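For reference, this outlier criterion corresponds to the upper whisker convention of a box plot, Q3 + 1.5 · IQR; the following minimal sketch shows that threshold calculation (illustrative only, not the evaluation script used for this study).

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Upper outlier threshold as used for the box plots: Q3 + 1.5 * IQR.
// Simple nearest-rank quantiles; not the evaluation script used for the study.
double quantile(std::vector<double> v, double q) {
    std::sort(v.begin(), v.end());
    const std::size_t idx = static_cast<std::size_t>(q * (v.size() - 1));
    return v[idx];
}

double upperOutlierThreshold(const std::vector<double>& executionTimesNs) {
    const double q1 = quantile(executionTimesNs, 0.25);
    const double q3 = quantile(executionTimesNs, 0.75);
    return q3 + 1.5 * (q3 - q1);   // every value above this is counted as an outlier
}

int main() {
    const std::vector<double> samples = {100, 100, 200, 200, 300, 2500};  // dummy values in ns
    return upperOutlierThreshold(samples) > 300 ? 0 : 1;
}
```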

Figure 12: Box plot of the measured execution times of the automation’s skill instances.

Figure 13: Box plot of the difference between measured skill execution time and the average per column of the robot skills; values correspond to dt[s]. The average actual execution time is shown as filled black rectangles, corresponding with the values of t[s].

The following discusses observations and explanations for the latency behavior described in the previous section. First of all, the precision of the time measurement is limited by the distributed clock, which updates in ticks of 100 ns. In cases where the execution time is shorter than one tick, the measured start and end times reference the same timestamp, resulting in the observed values of 0 ns execution time. Because higher-level skills control lower-level skills via skill interfaces instead of nesting them in their skill state machine, skill latency can be measured in an isolated manner. As a result, higher-level skills do not necessarily have higher execution times than the lower-level skills they monitor and control, as the comparison of “CE_Conveyor” instances with instances of “Driver_JVL” shows. Although no definite cause for the large deviation of “CE_S32” in comparison with the other instances of skill “CE_Switch” could be identified, it can still be concluded that even skills with identical code implementation can contribute differently to the control execution latency, depending on the skill parameters and the code execution context, including the mapped hardware I/Os.

The numerical results for the execution times of the Franka control system are summarized in Figure 13. In contrast to the cyclic evaluation of the MyJoghurt control code, the total times for the skill execution have been measured. As the figure shows, the execution times vary and mainly depend on the skill and its parameters. Thus, the difference between each run and the average of 50 runs is shown as a box plot for each unique skill ID, serving as an indicator for evaluating the stability and reproducibility of skill execution. For most of the basic skills, the deviation lies within fractions of a second, in the order of magnitude of µs to ms. Still, the visualization reveals deviations of up to almost 200 ms for the basic skills HomeGripper, Grasp, and MoveGripper, which all relate to the control of the gripper. This demonstrates that measuring the distribution and deviation of skill execution times is a crucial and complementary step in the modeling process, providing estimates for the real-world characteristics of complex CPS or even indicating underlying problems and unexpected behavior. What might seem unexpected is an execution time close to zero of the basic skill MoveToSafePose during the init sequence (sequence number 0). However, the same skill is called in the last sequence step as well, leaving the robot in the target position, which leads to the postcondition already being fulfilled at the beginning of the execution during init. Still, the call in the init sequence is not redundant and cannot be left out, since the higher-level skill does not make assumptions about the state of the system and first transfers the robot into a well-defined initial state. A significant result derived from Figure 13 is that the execution time of a single skill can vary based on the use case in which it is operated.

By modeling the problem in Figure 11, the measured times of skills and communication can be summed up, enabling an evaluation of the use case. The MaxExecutionTimes of the robotic skills from Figure 13 and the MaxInnerCycleTime of the automation skills from Figure 12 are stated in the model as metrics. For the cyclic behavior of the PLC, it can be analyzed whether the defined cycle time is still being met or if a redistribution is necessary (MaxExecutionTimes). The experimental study focused on measuring the latency of skill execution within the system. While inter-skill communication latency between subsystems was not explicitly captured in the experiments, the latency introduced by the non-deterministic behavior of the MQTT protocol, used for communication between the PLC and the robot, has been considered in the model. To account for the impact of this latency, an estimate for the order of magnitude has been included in the visual representation of the use case (cp. Figure 11). When defining system requirements, such as a minimum reaction time, the feasibility of the approach can be determined, as described in [5], by summing the corresponding communication times, the sequentially executed skills on the Franka robot, and the cyclically executed PLC skills. The result of the consolidated model is a DSL in which a holistic overview of the latency within and between the different systems can be captured for a problem description.

For the cyclic execution of the PLC (cp. Figure 11, ControllerBox CX2040), the summed-up execution time of the shown excerpt is CE_S31 (2,500 ns) + CE_C32 (200 ns) + Driver_S31 (300 ns) + Driver_C32 (500 ns) = 3,500 ns or 0.0035 ms, which is far smaller than the cycle time of 10 ms. This result was expected, as the excerpt of the control code is small compared to a complete control program. The analysis of the sequence behavior is done by summing up all skills involved in the task execution, from the triggering of a sensor in the PLC skills to the full execution of the robotic skills. This includes two cycles of the PLC task (20 ms, worst-case execution time), the MQTT communication (420 µs), once the skill HomeGripper (6.05 s), once MoveToSafePose (5.23 s), four times MoveCartesian (4.01 s each), once MoveGripper (1.27 s), once Grasp (1.47 s), and once MoveCartesianWHGT (4.01 s). This adds up to approximately 34.1 s. As discussed in Section 5, the process should not take longer than 20 s to be efficient and not more than 50 s to avoid safety issues. The safety requirement can be met, but the robot skills are executed too slowly for efficiency. In future work, this analysis should be done automatically by a tool that considers all possible input and output combinations of the process to provide a supportive overview of the system behavior.
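The end-to-end calculation can be reproduced directly from the quoted skill and communication times; the following sketch (values copied from the text above, structure illustrative) checks the sum against the 20 s efficiency requirement and the 50 s overflow limit.

```cpp
#include <iostream>

// End-to-end latency of the sequence described in the text, in seconds.
// All values are taken from the measured metrics quoted above.
int main() {
    const double plcCycles     = 2 * 0.010;   // two PLC cycles (worst case), 10 ms each
    const double mqtt          = 0.00042;     // MQTT communication, ca. 420 us
    const double homeGripper   = 6.05;
    const double moveToSafe    = 5.23;        // MoveToSafePose
    const double moveCartesian = 4 * 4.01;    // four MoveCartesian calls
    const double moveGripper   = 1.27;
    const double grasp         = 1.47;
    const double moveCartWHGT  = 4.01;

    const double total = plcCycles + mqtt + homeGripper + moveToSafe
                       + moveCartesian + moveGripper + grasp + moveCartWHGT;

    std::cout << "total = " << total << " s\n";   // ~34.1 s
    const bool efficient = total <= 20.0;          // efficiency requirement: violated
    const bool safe      = total <= 50.0;          // overflow/safety requirement: fulfilled
    return (safe && !efficient) ? 0 : 1;
}
```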

7 Conclusion and outlook

Designing distributed heterogeneous networked control systems is a challenging task in itself, while real-world production scenarios offer very high potential for optimization in the context of Industry 4.0 and IoT. The combination of technologies from various domains further complicates this process. This paper presents an approach for the equivalent modeling and analysis of skills from the domains of robotics and industrial automation to support the design of such integrated systems. The developed approach builds on an existing domain-specific language. It incorporates hardware, software, and functional perspectives, enabling a semi-formal problem description and supporting the analysis of system designs prior to the commissioning phase, thereby aiming to save time during testing and to facilitate system validation. The DSL supports the design phase of distributed control system architectures by making system design requirements and properties formalizable and comparable. The main contribution of this paper is the detailed definition of skills and their integration into the DSL of the referenced latency modeling approach. This semi-formal definition extends the approach from [5] to enable the modeling and integration of heterogeneous software skills from different domains. It formulates a skill’s internal structure and its relation to other system components at an abstraction level that allows the modeling of both robotic skills and skills from industrial automation. Integrating skills from different domains is essential when modeling modern CPPS, as they use various technologies partly adapted from consumer domains. The proposed concepts were evaluated through a use case combining robotics and industrial automation. Different scenarios were tested under normal operating conditions as well as in fault recovery states, providing comprehensive insights into their applicability.

Future work will focus on exploring suitable metrics to describe the interaction between hardware and software in order to further automate the latency analysis of CPPS. Additionally, enabling dynamic and automated skill distribution within the overall distributed networked control system will be a key objective. The long-term goal is an integration platform based on the developed DSL, enabling the design and ramp-up of a plant’s distributed control system based on requirements derived from the defined technical process.


Corresponding author: Dominik Hujo-Lauer, Lehrstuhl für Automatisierung und Informationssysteme, Technische Universität München, Boltzmannstr. 15, 85748 Garching bei München, Germany, E-mail: 


  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interest: The authors declare no conflict of interest.

  6. Research funding: The authors would like to thank the Bavarian State Ministry of Economic Affairs, Regional Development and Energy (StMWi), Lighthouse Initiative KI.FABRIK (Phase 1: Infrastructure and R&D Programme, funding code DIK0249), for funding.

  7. Data availability: Not applicable.

References

[1] H. Boyes, B. Hallaq, J. Cunningham, and T. Watson, “The industrial internet of things (IIoT): an analysis framework,” Comput. Ind., vol. 101, pp. 1–12, 2018. https://doi.org/10.1016/j.compind.2018.04.015.

[2] B. Vogel-Heuser, M. Seitz, S. L. Cruz, F. Gehlhoff, A. Dogan, and A. Fay, “Multi-agent systems to enable industry 4.0,” Automatisierungstechnik, vol. 68, no. 6, pp. 445–458, 2020. https://doi.org/10.1515/auto-2020-0004.

[3] M. Chui, M. Collins, and M. Patel, “IoT value set to accelerate through 2030: where and how to capture it,” 2021. https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/iot-value-set-to-accelerate-through-2030-where-and-how-to-capture-it [accessed: Dec. 18, 2024].

[4] B. Vogel-Heuser, E. Trunzer, D. Hujo, and M. Sollfrank, “(Re-)Deployment of smart algorithms in cyber-physical production systems using DSL4hDNCS,” Proc. IEEE, vol. 109, no. 4, pp. 542–555, 2021. https://doi.org/10.1109/jproc.2021.3050860.

[5] D. Hujo, B. Vogel-Heuser, and L. Ribeiro, “Towards a graphical modelling tool for response-time requirements based on soft and hard real-time capabilities in industrial cyber-physical systems,” J. Emerg. Sel. Top. Ind. Electron. (JESTIE), vol. 3, no. 3, pp. 13–22, 2022. https://doi.org/10.1109/jestie.2021.3093248.

[6] C. Diedrich, et al., Information Model for Capabilities, Skills & Services: Definition of Terminology and Proposal for a Technology-independent Information Model for Capabilities and Skills in Flexible Manufacturing, Berlin, Plattform Industrie 4.0, 2022.

[7] M. R. Pedersen, et al., “Robot skills for manufacturing: from concept to industrial deployment,” Robot. Comput.-Integr. Manuf., vol. 37, no. 1, pp. 282–291, 2016. https://doi.org/10.1016/j.rcim.2015.04.002.

[8] A. Köcher, C. Hildebrandt, L. M. da Silva, and A. Fay, “A formal capability and skill model for use in plug and produce scenarios,” in 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), vol. 1, IEEE, 2020, pp. 1663–1670. https://doi.org/10.1109/ETFA46521.2020.9211874.

[9] L. M. V. da Silva, A. Köcher, and A. Fay, “A capability and skill model for heterogeneous autonomous robots,” at-Automatisierungstechnik, vol. 71, no. 2, pp. 140–150, 2023. https://doi.org/10.1515/auto-2022-0122.

[10] B. Selic, “UML profile for MARTE: modeling and analysis of real-time embedded systems (version 1.1),” Object Management Group (OMG), 2011. https://www.researchgate.net/publication/283209151_UML_Profile_for_MARTE_Modeling_and_Analysis_of_Real-Time_Embedded_Systems_Version_11 [accessed: Dec. 18, 2024].

[11] S. Mubeen, E. Lisova, and A. V. Feljan, “Timing predictability and security in safety-critical industrial cyber-physical systems: a position paper,” Appl. Sci., vol. 10, no. 9, 2020, Art. no. 3125. https://doi.org/10.3390/app10093125.

[12] A. Bucaioni, et al., “MoVES: a model-driven methodology for vehicular embedded systems,” IEEE Access, vol. 6, pp. 6424–6445, 2018. https://doi.org/10.1109/access.2018.2789400.

[13] EAST-ADL Association, “EAST-ADL,” https://www.east-adl.info [accessed: Dec. 18, 2024].

[14] R. Marinescu and E. P. Enoiu, “Extending EAST-ADL for modeling and analysis of system’s resource-usage,” in Proc. Int. Comput. Software Appl. Conf., 2012, pp. 532–537. https://doi.org/10.1109/compsacw.2012.99.

[15] J. Eder, A. Bahya, S. Voss, A. Ipatiov, and M. Khalil, “From deployment to platform exploration,” in Proc. 21st ACM/IEEE Int. Conf. Model Driven Eng. Lang. Syst., 2018, pp. 438–446. https://doi.org/10.1145/3239372.3239385.

[16] C. Lesire, D. Doose, and C. Grand, “Formalization of robot skills with descriptive and operational models,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 7227–7232. https://doi.org/10.1109/IROS45743.2020.9340698.

[17] C. Gonnermann, J. Weth, and G. Reinhart, “Skill modeling in cyber-physical production systems for process monitoring,” Procedia CIRP, vol. 93, pp. 1376–1381, 2020. https://doi.org/10.1016/j.procir.2020.03.095.

[18] D. Stewart, “Measuring execution time and real-time performance,” in Embedded Systems Conference, 2001.

[19] L. Gao, M. Lu, L. Li, and C. Pan, “A survey of software runtime monitoring,” in Proceedings of 2017 IEEE 8th International Conference on Software Engineering and Service Science, IEEE, 2017, pp. 308–313. https://doi.org/10.1109/ICSESS.2017.8342921.

[20] Freescale Semiconductor, Inc., “NXP CodeTEST software analysis tools for Freescale StarCore MSC8100 family.” https://www.nxp.com/docs/en/supporting-information/CW_starcoreintro_SI.pdf [accessed: Dec. 18, 2024].

[21] J. J. S. Costa, R. S. de Oliveira, and L. F. Arcaro, “Use of measurements in worst-case execution time estimation for real-time systems,” in 2021 XI Brazilian Symposium on Computing Systems Engineering (SBESC), IEEE, 2021, pp. 1–8. https://doi.org/10.1109/SBESC53686.2021.9628230.

[22] C. Banbury, et al., “MLPerf tiny benchmark,” arXiv preprint arXiv:2106.07597, 2021. https://doi.org/10.48550/arXiv.2106.07597.

[23] T. Hao, et al., “Edge AIBench: towards comprehensive end-to-end edge computing benchmarking,” in Benchmarking, Measuring, and Optimizing, ser. LNCS Sublibrary, vol. 11459, C. Zheng and J. Zhan, Eds., Springer International Publishing, 2019, pp. 23–30. https://doi.org/10.1007/978-3-030-32813-9_3.

[24] C. Luo, et al., “AIoT Bench: towards comprehensive benchmarking mobile and embedded device intelligence,” in Benchmarking, Measuring, and Optimizing, ser. LNCS Sublibrary, vol. 11459, C. Zheng and J. Zhan, Eds., Springer International Publishing, 2019, pp. 31–35. https://doi.org/10.1007/978-3-030-32813-9_4.

[25] S. Iqbal, S. A. Khan, and Z. A. Khan, “Benchmarking industrial PLC & PAC: an approach to cost effective industrial automation,” in 2013 International Conference on Open Source Systems and Technologies, IEEE, 2013, pp. 141–146. https://doi.org/10.1109/ICOSST.2013.6720620.

[26] A. Prabhakara, B. Steinwender, and W. Elmenreich, “Statistical analysis of execution time profile for temporal validation of a distributed hard real-time system,” in 2021 22nd IEEE International Conference on Industrial Technology (ICIT), IEEE, 2021, pp. 1188–1192. https://doi.org/10.1109/ICIT46573.2021.9453493.

[27] B. Rupprecht, B. Vogel-Heuser, and E. Neumann, “Measurement methods for software execution time on heterogeneous edge devices,” in 21st IEEE International Conference on Industrial Informatics (INDIN’23), Lemgo, Germany, 2023, p. 6. https://doi.org/10.1109/INDIN51400.2023.10218136.

[28] B. Vogel-Heuser, et al., “Delay modelling and measurement of multi-agent systems with digital twins in a gear assembly use case,” in 19th International Conference on Automation Science and Engineering (CASE), Cordis, New Zealand, 2023, p. 8. https://doi.org/10.1109/CASE56687.2023.10260673.

[29] B. Vogel-Heuser, et al., “Modellierung und Validierung von Latenzzeiten in industriellen Agenten-Systemen mit Digitalen Zwillingen der KI.Fabrik” [Modeling and validation of latencies in industrial agent systems with digital twins of the KI.Fabrik], Automatisierungstechnik, vol. 70, no. 10, pp. 958–979, 2024. https://doi.org/10.1515/auto-2023-0221.

[30] ISA-TR88.00.02, Machine and Unit States: An Implementation Example of ISA-88, Durham, USA, International Society of Automation, 2008.

Received: 2024-12-18
Accepted: 2025-03-18
Published Online: 2025-05-07
Published in Print: 2025-05-26

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
