The digital twin implementation for linking the virtual representation of human-based production tasks to their physical counterpart on the factory floor

ABSTRACT Production systems empowered by digital simulation tools can be improved in a time- and cost-effective manner. Enriching digital simulations with sensor data can enhance their realism and improve the accuracy of their results. Hence, this study proposes an implementation of the digital twin approach as part of a wider cyber-physical system (CPS) to enable the optimisation of the planning and commissioning of human-based production processes using simulation-based approaches. This is achieved through a) sensor data fusion and motion recognition of human activities on the factory floor and b) a knowledge management mechanism for capturing the implicit knowledge of the task execution. A case study from the intra-factory logistics operations of the white goods industry demonstrates the feasibility of the proposed approach by enriching the simulation of manual assembly operations with the operator's knowledge in the form of spatiotemporal constraints.


Introduction
Digital representations constitute a major challenge in improving the accuracy of existing and future simulation tools. A consistent digital representation, that is, a digital twin, may bridge the gap between physical and virtual experimentation by improving the added value of industry's simulation tools. A digital twin model, as part of a wider cyber-physical system, comprises three main parts: a) the real world, b) the virtual world, and c) the connections of information associating the virtual with the real world, with the digital twin serving as a digital controller of the real-world manufacturing system.
Currently, most of the published work related to applying the digital twin in manufacturing applications puts more focus either on representing entities (Grieves 2014a; Söderberg et al. 2017; Tao and Zhang 2017), such as machines, production lines and robots, or on modelling and simulating human processes by means of digital manikins (De Magistris et al. 2015; Karmakar, Sanjog, and Patel 2014). However, to the best of the authors' knowledge, there is no related work that holistically considers all the separate aspects of the digital twin concept based on the original definition (Grieves 2014).
In this paper, a digital twin concept for human-centred operations on the factory floor is presented and studied as part of an overall cyber-physical system (CPS), while its application to manufacturing is analysed and evaluated with respect to warehouse operations. Human motions on the shopfloor are recorded using a combination of low-cost sensors (optical, force and torque). Motion recognition and analysis algorithms are used for acquiring semantic information, thus making the worker's knowledge an intrinsic part of the generated simulations. Afterwards, those parameters can be used for the generation of realistic simulations and their assessment. The enrichment of the digital twin simulations with sensor data is proposed for enhancing the realism of digital simulations and improving the accuracy of their results. Thus, human factors (e.g. ergonomics) and production aspects (e.g. cycle time) of shopfloor operations can be improved in a cost-effective and rapid manner that is safe for the human operator. A software prototype has been implemented to verify the proposed approach with respect to warehouse operations. The main contributions of this paper are: (1) the application of the digital twin concept for modelling human tasks in manufacturing, together with a proposal of how this modelling concept fits as part of an overall production CPS model, (2) the analysis and identification of the spatiotemporal parameters that are important to manual assembly operations from the worker's perspective, and (3) the utilisation of these parameters for the generation of realistic simulations.
Manufacturing companies increasingly integrate digital technologies into their operations, which, in turn, improves the flexibility of the overall production system. State-of-the-art technologies, such as predictive analytics, the industrial internet of things and cloud computing, are already being integrated into the manufacturing industry towards optimising the performance of production systems in terms of time, cost, quality and flexibility (Westkämper 2007). Digital manufacturing has been considered a promising set of technologies, not only for enabling cost-effective and efficient production but also for addressing the emerging demands for customised production (Chryssolouris et al. 2009). A survey of modelling techniques in production planning is provided by Jeon and Kim (2016). Simulation technologies, allowing cost-effective experimentation and validation of manufacturing and product solutions, constitute a key enabling technology in the field of digital manufacturing (Mourtzis et al. 2015). As product development processes become more and more complex, insight can be gained through simulation modelling and analysis approaches (Mourtzis, Doukas, and Bernidaki 2014). Virtual representation and experimentation, enabling early product and process verification, provide promising potential savings in time and cost (Curran et al. 2007; Zhang and Li 2012; Lawson et al. 2015). In this context, it becomes clear that modern automation solutions are essential for production (Pedersen et al. 2016).
However, manual operations are still dominant in low-volume and complex manufacturing processes (Alkan et al. 2016). Human operators still play a major role in the performance of production systems (Longo and Monteil 2011). Typical examples of operations performed by humans include maintenance, inspection and complex assembly tasks. In this context, ergonomic factors, such as the workers' fatigue, become significant (Chryssolouris 2006). Ergonomic evaluation has traditionally been associated with the creation of physical models and their analysis throughout different real-world experiments, a procedure that increased production cost as well as time-to-market (Mavrikios et al. 2007a). An initial study on the computer-aided ergonomic evaluation of workplace design is presented in Feyen et al. (2000). There are numerous studies referring to digital human modelling and simulation approaches for the ergonomic evaluation of human-centred operations, such as Mavrikios et al. (2006), Mavrikios et al. (2007), Jung, Kwon, and You (2009), Lawson et al. (2015) and De Magistris et al. (2013). A comparative study of digital human model (DHM) simulations is provided by Lämkull, Hanson, and Roland (2009) and in Polášek, Bureš, and Šimon (2015), while possible applications include risk assessment for an aging workforce (Case et al. 2015), increasing customer satisfaction (Lawson et al. 2015), enhancing the ergonomics of operations (Bjorkenstam et al. 2016), as well as human-product and human-process ergonomic suitability (Alexopoulos, Mavrikios, and Chryssolouris 2012; Karmakar, Sanjog, and Patel 2014). In Yang et al. (2007), digital human representation is analysed, and an implementation of a digital human environment is presented.
In conclusion, in an industrial environment, human simulation can facilitate the assembly process design, accelerate and improve product development in a cost-effective manner and improve the ergonomics of human-centred operations, through the analysis of various aspects of the production at an early phase.
There are several state-of-the-art systems in the field of digital human simulation, some examples of which include Siemens Tecnomatix Jack®, Dassault Delmia Human® and RAMSIS® (Longo and Monteil 2011). Digital human simulations usually include a virtual representation of the workplace, the DHM and its interactions with the virtual space (Ziolek and Kruithof 2000). However, this kind of simulation may provide inaccurate results, since humans are modelled as technological equipment; for example, inverse kinematics is used for modelling a human's motion similarly to the approach used in robotics, disregarding the fact that human behaviour is not similar to a machine's. The inclusion of real human behaviour, encapsulating the actual knowledge and experience in the way that a worker performs manual operations, can enable modern DHMs to accurately simulate and assess the impact of human factors. Towards this direction, Motion Capture (MoCap) systems have been used to capture real worker movements, edit them and animate them. An overview of MoCap technologies is provided in Ribeiro (2016), while tracking technologies and techniques for virtual environments are discussed in Dineshh et al. (2017). In Mohamed and Ali (2013), there is a presentation of the approach and application of the motion recognition field. Two detailed and recent studies on vision-based approaches, along with the related challenges, are discussed in Kale and Patil (2016) and Herath, Harandi, and Porikli (2017).
At the time digital human simulation was introduced and began to be studied, data acquisition and processing methods were scarce and sensors limited. However, in the last decade, the innovations achieved in ICT have provided rich data sources. Low-cost sensors and embedded processors, along with advanced storage capabilities, enable the creation of new rich representations and realistic models of the physical world. An approach towards improving the accuracy of simulation results of virtual tools through the integration of sensor data is presented in Cai et al. (2017). The benefits of integrating real-world heterogeneous data into digital models, investigating their behaviour through advanced simulation tools and changing the initial physical system based on the results have been the subject of extensive research in recent years (Giordano, Spezzano, and Vinci 2014; Konstantinov et al. 2017; Stark, Kind, and Neumeyer 2017; Thiede et al. 2017; Vachalek et al. 2017). This rich digital representation of real-world objects/subjects and processes, including data transmitted by sensors, is known as the digital twin model (Figure 1).
The digital twin can be conceived as the component that fulfils the cyber-physical production system envisioned in Industry 4.0 (Uhlemann, Lehmann, and Steinhilper 2017). One of the greatest challenges in enabling realistic simulations is the acquisition and combination of real-world heterogeneous data into the digital models, thus creating digital representations of real-world situations with high accuracy. The typical approach for applying the digital twin concept to the simulation and analysis of human activities is to use advanced motion tracking systems to capture the human motion and then reconstruct the captured data in a DHM software (such as blender.org (2015)). Although the motion reconstruction approach for creating the digital twin is widely used in some industries, such as the entertainment industry, there are some limitations when it is applied to the manufacturing domain, more specifically: (1) Motion capturing systems are usually expensive to acquire and install, and are intrusive; thus, they cannot be easily deployed in a manufacturing environment. (2) For realistic motions to be generated, many sensors (i.e. markers) need to be attached to the human body, making them uncomfortable for the operators while increasing the data volume to be processed. (3) The approach does not provide any insight into the semantics of the activities performed by the human, such as whether a motion is related to an object picking or placing activity. (4) It does not always consider constraints such as the learning effect, fatigue, production volume, the field of view, environmental constraints, a worker's knowledge, etc. These constraints play a critical role in shaping human behaviour and, thus, its realistic representation towards accurate simulations, which, in turn, may increase productivity and improve ergonomics in a system's design (Battini et al. 2011).
Although points (1) and (2) are not the focus of this study, the approach presented in this work adopts low-cost optical sensors (Rietzler et al. 2016), as well as wearable and tool sensors, as presented in detail in Section 3.2.1. Furthermore, the motion recognition algorithm of this paper allows for the reduction of the data that need to be processed, as described in the aforementioned section. Additionally, addressing point (3), sensor fusion enables the correlation of the human motions recognised by the optical sensors with shopfloor objects, as measured by the tool and inertial measurement unit (IMU) sensors attached to the tracked objects, as presented in 3.1.3.1. Addressing point (4), the tacit knowledge of an operator performing a manual task is acquired through the motion capture system and linked to a specific assembly process step via a knowledge management mechanism, as presented in 3.2.2.

Approach
The proposed method aims at the implementation of the digital twin concept for human-centred operations on the factory floor. The enrichment of the simulations with sensor data is proposed to enhance their realism and improve the accuracy of their results. The main work presented in this paper is the analysis and identification of the spatial and temporal parameters that are important to manual assembly operations from the worker's perspective. Afterwards, those parameters can be used for the generation of realistic simulations. Concerning the manual process itself, the digital twin can be viewed as the means for achieving cost-effective, digitally enabled optimisation, through a digitally enabled closed-loop control of the physical system.

CPS model
The CPS model consists of a physical and a virtual system, possible controllers and the communication mechanisms of the CPS.

Physical system
The physical system is approximated in this paper as a discrete-event linear and time-invariant (LTI) system. It consists of a production station/area, which receives an input r[k], denoting the product to be assembled with the expected production metrics, with k∈ℤ, and generates an output y[k], which denotes the evaluated outcome of the assembly, such as the ergonomic score. A schematic block diagram of the physical system is presented in Figure 2, where the actuator refers to the operator(s), along with the tools/equipment that may be required to perform assembly operations with certain parts, p. Additionally, potential disturbances from the environment to the assembly system are denoted as d. The physical controller denotes the function that transforms the input r of the system, along with the feedback signal y_m coming from the feedback compensation block, to the system input u.
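As an illustration, the discrete-time LTI approximation above can be sketched as follows; the gain value and the interpretation of the signals as single scalars are illustrative assumptions for this sketch, not parameters identified in the study.

```python
# Toy sketch of the discrete-event LTI approximation of the physical
# station: input u[k] (the reference r[k] after the physical controller),
# disturbance d[k], output y[k] (e.g. an evaluated ergonomic score).
# The gain is an illustrative placeholder, not a measured value.

def physical_station(u, d, gain=0.8):
    """LTI plant: evaluated assembly outcome for input u and disturbance d."""
    return gain * u + d

def run_station(r, d):
    """Drive the station over k = 0..K-1 with inputs r[k] and disturbances d[k]."""
    return [physical_station(r[k], d[k]) for k in range(len(r))]
```

Linearity and time invariance can be checked directly on such a model: scaling the input scales the output, and the response does not depend on the time index k.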

Cyber system
The implementation of a physical closed-loop control in a manufacturing system relies mostly on trial-and-error approaches and the experience of senior engineers. This has a significant impact on the implementation, running and improvement cost during the entire lifecycle of the system. The transformation of the control loop to a digital form would require an appropriate conversion of the physical inputs and signals to their digital counterparts but, most importantly, an accurate digital representation of the physical system, thus an accurate digital twin model. The cyber system suggested for replacing the physical system's feedback loop is presented in Figure 3, where its input, y_dig, is the digital representation of the output of the physical system and its output, y_sim, is the result of the simulation through the digital twin. In particular, the vector y_sim includes the evaluation of the simulation in terms of the optimisation metric(s) and the updated station layout. The output is then imported to the cyber controller along with the expected target/optimised output, y_opt (e.g. the targeted ergonomic score), for reconfiguring the input y_dig, based on the generated error y_e, until y_sim converges to y_opt.

Cyber-physical system
The aim of the digital twin is to serve as a virtual controller of the physical system. The integration of these two parts, physical and cyber, yields a full closed-loop control system, the CPS, in which the physical system is controlled by the virtual one through the digital twin. The schematic block diagram of the generated CPS is provided in Figure 4, with the output of the physical system, y, after its digitisation, serving as one of the two inputs of the cyber system. The second input is the error y_e of the closed-loop control. The error y_e is the result of the evaluation of the generated output of the digital twin, y_sim, against the aimed optimal output, y_opt. The output of the digital twin, y_sim, is converted to its physical counterpart and imported as an input y_m to the physical system. When y_sim has converged to y_opt, y_m will contain the optimal configuration of the assembly steps and the ideal station layout for the assembly of a specific product. Hence, the CPS, as a composition of discrete-time LTI systems, is also an LTI system.
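A minimal sketch of this closed loop is given below; a quadratic placeholder cost stands in for the full digital twin simulation, and the optimal layout parameter, step size and tolerance are assumptions made purely for illustration.

```python
# Hedged sketch of the CPS closed loop: the cyber controller adjusts the
# digital configuration y_dig until the twin's output y_sim converges to
# the target y_opt. simulate() is a placeholder for the digital twin; a
# real twin would run a full motion simulation of the station.

def simulate(y_dig):
    # Placeholder twin evaluation: a quadratic penalty around a
    # hypothetical optimal layout parameter of 3.0.
    return (y_dig - 3.0) ** 2

def close_the_loop(y_dig, y_opt=0.0, step=0.1, tol=1e-4, max_iter=10_000):
    """Iteratively reconfigure y_dig until |y_sim - y_opt| < tol."""
    for _ in range(max_iter):
        y_sim = simulate(y_dig)
        if abs(y_sim - y_opt) < tol:
            break
        # Finite-difference sensitivity of the twin to the configuration.
        grad = (simulate(y_dig + 1e-6) - y_sim) / 1e-6
        y_dig -= step * grad
    return y_dig, simulate(y_dig)
```

The loop mirrors the block diagram: the error y_e = y_sim - y_opt drives the reconfiguration of y_dig until convergence.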

Analog-to-digital conversion (ADC)
Timestamped sensor data that capture manual operations are used to bridge the gap between the physical and virtual parts of the CPS. In this study, three types of sensors have been considered to capture temporal and spatial parameters of the human-centred assembly process: optical, non-optical (force/pressure and IMUs) and tool (torque) sensors (Figure 5). Captured data are temporarily stored in a local database. A set of moving average and least squares fitting algorithms is used to pre-process the acquired data. Real-world information captured by the sensors is transferred to the cyber part through web services, which thus serve as the communication channel between the physical and cyber parts. A Controlled Natural Language (CNL) approach has been adopted for the representation of manual operations for the virtual system (Busemann, Steffen, and Herrmann 2016). The controlled natural language, wrapped by a set of optimisation constraints and condition-action rules, supports the execution of the same assembly process in different variations, under a different set of motion constraints, but for the same set of tasks.
The approach presented in Geiselhart, Otto, and Rukzio (2016) and in Rietzler et al. (2016) has been followed for acquiring temporal and spatial information of the human-centred operations. Multiple depth cameras have been adopted to cover a wide area of the shopfloor and to better capture skeletal information. To compensate for the fact that optical sensors cannot directly measure linear or angular accelerations, or the forces applied by a worker to an object, force and pressure sensors embedded in a wearable device (the e-glove by Emphasis Telematics) have been used. IMUs have been used for capturing the displacement and acceleration of objects.
Moreover, prior to applying the sensor data to the digital model, preprocessing is required for the transformation of the data into a global coordinate system, as well as for filtering and smoothing them. The generated sensor data signals are incorporated into a unified data model that is then further analysed to generate key spatial and temporal motion parameters, which define the motion constraints for the digital twin model of the human operator. The digital model of the shopfloor environment, created and reconfigured by the 3D scene editor, accompanies the DHM in the cyber part of the system.
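A minimal sketch of this pre-processing step is given below, assuming a known rigid transform per camera; the extrinsics and window size are illustrative values, not calibration results from the study.

```python
# Sketch of the pre-processing stage: raw joint positions from one depth
# camera are (a) transformed into a global coordinate system via a rigid
# transform p_global = R @ p_local + t, and (b) smoothed with a moving
# average. R and t would normally come from camera calibration.

def to_global(points, rotation, translation):
    """Apply a rigid transform (3x3 rotation, 3-vector translation)."""
    return [
        tuple(
            sum(rotation[i][j] * p[j] for j in range(3)) + translation[i]
            for i in range(3)
        )
        for p in points
    ]

def moving_average(signal, window=3):
    """Moving average over a 1-D signal (the window shrinks at the edges)."""
    half = window // 2
    out = []
    for k in range(len(signal)):
        lo, hi = max(0, k - half), min(len(signal), k + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# Illustrative usage with an identity camera pose shifted by 2.5 m in x.
global_pts = to_global([(0.2, 1.1, 0.8)],
                       [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
                       [2.5, 0.0, 0.0])
smoothed = moving_average([0.42, 0.44, 0.43, 0.47, 0.45])
```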

Digital-to-analog conversion (DAC)
The conversion of the output of the cyber system, y_sim, to its physical counterpart, y_m, is achieved with human intervention by the responsible design and production engineer. The purpose of y_sim is to provide them with the appropriate set of layout modifications and assembly operation details to improve the assembly process for a given product under a desired set of optimisation constraints, such as reduced cycle time, improved ergonomics, reduced risk of collisions with surrounding objects, etc.

Digital twin
The digital twin is enabled by transferring key parameters of the physical system through the ADC component and additional subsystems to the digital model. Thus, the digital model represents a digital replica of the physical environment along with the operator. This model constrains the behaviour of the twin towards replicating the actions of the physical system's actuators. This is achieved mainly by integrating the operator's timestamped spatial and temporal motion constraints, derived from the sensors, into the digital twin, through the presented motion recognition and motion constraints evaluation algorithm. The recording of real-world operational data and the identification of their key parameters may reveal certain points in the execution of an operation which can be used for enhancing the realism of the simulations. Hence, the digital twin concept of this study includes (1) the motion recognition, (2) the motion constraints generation, (3) the provided digital model(s) and (4) the simulation subsystem, as presented in Figure 6.

Human motion recognition
The recognition of motions and the generation of parameters/constraints for the virtual representation of human-based production tasks are based on values provided by low-cost capture devices. These data correspond to the coordinates of the human joints in a three-dimensional space. The joint values are complemented by additional sensor data, which are required for the recognition of certain operations, such as screwing. The recognition of an operation is performed through the fulfilment of a pre-defined set of condition-action rules of the form 'If...Then...' (Pintzos et al. 2016). By adjusting the rules in the existing script, several motion types can be recognised in a similar way. This approach allows for the recognition of human motions by processing only a limited subset of the captured data and not the entire fused set. A high-level overview of the transition from sensor data to motion constraints, for the digital twin, is presented in Figure 7.
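The rule-based recognition can be sketched as follows; the joint names, sensor fields and thresholds below are illustrative assumptions, not the actual rules of Pintzos et al. (2016).

```python
# Hedged sketch of condition-action ('If...Then...') motion recognition:
# each rule inspects only a small subset of the fused data (a few joint
# coordinates plus optional tool/wearable readings) and emits a label.
# All field names and threshold values are made-up examples.

RULES = [
    # (motion label, condition over one sensor frame)
    # picking: right hand below ~shoulder height (y < 0.9 m) with grip closed
    ("picking",  lambda f: f["right_hand"][1] < 0.9 and f["grip_pressure"] > 0.5),
    # screwing: tool torque sensor reports significant load
    ("screwing", lambda f: f["tool_torque"] > 2.0),
    # walking: hip joint moving faster than a small threshold
    ("walking",  lambda f: abs(f["hip_velocity"]) > 0.3),
]

def recognise(frame):
    """Return the first motion whose rule fires, else 'idle'."""
    for name, condition in RULES:
        if condition(frame):
            return name
    return "idle"
```

New motion types can be recognised simply by appending further (label, condition) pairs, mirroring the script-adjustment approach described above.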

Motion constraints generation
After the recognition of the motions/operations performed on the shopfloor, the parameters are evaluated from the real-world data in order to be transferred to the virtual model. The parameters identified from the sensor data are used to constrain and control the behaviour of the virtual system and the simulated execution. A list of possible parameters, constraining the behaviour of the virtual system based on the acquired real-world knowledge, is provided in Table 1.
As a next step, the identified motion parameters/constraints are evaluated against the pre-stored parameters of the motion clips that will be used for the simulation and the replication of the simulated scenario and the real-world operations. A motion constraints managing algorithm is responsible for the comparison of the real-world motion constraints with the corresponding pre-recorded motion clips. The implemented version compares the trajectory and keyframe values of the consecutively recorded motions with the CNL-planned ones.
The trajectory parameters are evaluated in terms of smoothness, described by a set of rules, and provide a set of feasible control points in the 3D space. Next, the 3D points must match the trajectory values of the next operation in the motion sequence; consequently, the entire walking path, for example, will be a curve generated by having considered these trajectory control points.
In order to achieve the motion constraints generation, first, a cubic Catmull-Rom spline is created to interpolate the trajectory control points of the hips joint from the recorded motion clips. The generated spline is evaluated in terms of smoothness by calculating the tangent at consecutive points. In case the smoothness is not satisfactory, a cubic B-spline (Manns and Martin 2015) is created. If the smoothness target is still not achieved, the knots defining the non-smooth area are identified and their values are iteratively changed until either a maximum number of iterations has been reached or the smoothness target has been achieved or approximated. Otherwise, a higher-degree spline is created and the aforementioned steps are repeated. A similar approach is followed for the keyframe values, defining the start and stop time instances of the operations, i.e. start_walking, start_pick_contact, etc. The steps of the algorithm are presented in Figure 8.
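The first step of the algorithm can be sketched as below, assuming 2-D (floor-plane) hip control points; the turning-angle smoothness measure and sampling density are illustrative choices, not the exact criteria of the implemented system.

```python
import math

# Sketch of Catmull-Rom interpolation of recorded hip control points and a
# simple tangent-based smoothness check: the spline is sampled and the
# largest turning angle between successive tangents is measured.

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the 2-D Catmull-Rom segment between p1 and p2 at t in [0, 1]."""
    return tuple(
        0.5 * (2 * p1[i]
               + (-p0[i] + p2[i]) * t
               + (2 * p0[i] - 5 * p1[i] + 4 * p2[i] - p3[i]) * t ** 2
               + (-p0[i] + 3 * p1[i] - 3 * p2[i] + p3[i]) * t ** 3)
        for i in range(2)
    )

def sample_spline(points, samples_per_segment=8):
    """Sample the spline through all control points (endpoints clamped)."""
    pts = [points[0]] + list(points) + [points[-1]]
    out = []
    for s in range(len(pts) - 3):
        for k in range(samples_per_segment):
            out.append(catmull_rom(pts[s], pts[s + 1], pts[s + 2], pts[s + 3],
                                   k / samples_per_segment))
    out.append(points[-1])
    return out

def max_turning_angle(path):
    """Largest angle (radians) between consecutive tangent vectors."""
    worst = 0.0
    for a, b, c in zip(path, path[1:], path[2:]):
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 and n2:
            cosang = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)))
            worst = max(worst, math.acos(cosang))
    return worst
```

If `max_turning_angle` exceeds a chosen smoothness threshold, the algorithm above would fall back to a cubic B-spline and then to iterative knot adjustment.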

Simulation
The motions can be simulated via a data-driven motion synthesis approach (Manns et al. 2018). Motion capture as well as sensor data are stored in a hierarchical skeleton model, along with parameters of the motion models representing short motion clips, as presented in Min and Chai (2012). The data-driven motion synthesis, in combination with sensor data, can correlate high-quality motion captures with real-world assembly operations. Through the identification of key motion constraints between the physical and virtual operations, realistic simulations can be generated. This results in the improvement of the assembly process through data-driven simulations towards satisfying a set of optimisation constraints and/or criteria that define the desired outcome of the virtual system, y_opt.
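As an illustration of this correlation step, a recognised operation and its recorded timing can select and time-scale the closest pre-recorded clip; the clip library, names and durations below are hypothetical, not real MoCap data.

```python
# Hedged sketch of data-driven clip selection: for a recognised operation,
# pick the pre-recorded motion clip of the same type whose duration is
# closest to the recorded one, and derive a time-scaling factor so the
# synthesised motion matches the recorded keyframe timing.

CLIP_LIBRARY = [
    {"op": "walk", "name": "walk_slow", "duration": 2.0},
    {"op": "walk", "name": "walk_fast", "duration": 1.0},
    {"op": "pick", "name": "pick_shelf", "duration": 1.2},
]

def best_clip(operation, recorded_duration, library=CLIP_LIBRARY):
    """Nearest-duration clip of the requested operation, with a time scale."""
    candidates = [c for c in library if c["op"] == operation]
    clip = min(candidates, key=lambda c: abs(c["duration"] - recorded_duration))
    return {"name": clip["name"],
            "time_scale": recorded_duration / clip["duration"]}
```

A full motion synthesis system would additionally blend clip transitions and respect the spatial constraints generated in the previous step.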

Implementation
The key requirements for the implementation of the presented approach are: (1) the definition of the assembly scenario to be carried out, (2) the capture and analysis of operational data towards the identification of key parameters of the manual assembly process, (3) the creation of digital twin models, with the integration of key motion parameters and operational constraints, modelling the behaviour of the physical assets, and (4) the simulation of the assembly process and its optimisation according to a set of optimisation constraints. In this section, the implementation of the approach discussed in subsections 3.2.1 and 3.2.2 is presented. The software discussed in the following paragraphs has been implemented in Java. Its purpose is to provide an instantiation of the proposed concept and principles with specific components and I/Os. The overall system architecture is presented in Figure 9.
In the presented implementation, r refers to a sequence/set of shopfloor operations to achieve a specific outcome y_m (e.g. the manual assembly of a product) of the physical system. This output is aimed to be optimised through the digital twin to a target output y_opt (cycle time and ergonomic score).
Real-world data acquired from the shop floor are stored in the data layer. Each sensor network (optical, wireless and tools) provides its own data format and timeline, while being supported by its own software infrastructure and communication services. The network time protocol (NTP) is used for achieving time synchronisation across all sensor nodes. For a basic capture setup, without occluding objects in the scene, a circular arrangement of six depth cameras (RGB-D), the Kinect v2 in our case, is suggested, while the setup for the other sensors can vary according to the number of tracked objects and actors. Motion Capture (MoCap) is a recording of the movement captured by the optical and pressure sensors. The human model, and thus the identification of human motion constraints, is based on the Biovision model for motion synthesis (Meredith, Maddock, and Road 2001).
Shop floor data are accumulated in a common JavaScript Object Notation (JSON) format, serving as the wrapper of the generated motion constraints that constitute a portion of the y_dig information of the virtual system. In particular, the JSON message follows the schema {OpticalData: [<Data>], WirelessData: [<Data>], ToolData: [<Data>]}, where the <Data> tag includes sensor data in the proper format for each sensor network. The y_dig information is complemented by the station layout, which is digitally represented in this implementation using the XML3D format, and by the digital representation of the shopfloor operations, which are described using the CNL approach mentioned in Section 3.1.3.1.
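An illustrative construction of this wrapper message is sketched below; the top-level field names follow the schema in the text, while the payload values are placeholder examples of each sensor network's native format.

```python
import json

# Sketch of assembling the common JSON wrapper for shopfloor data.
# OpticalData/WirelessData/ToolData follow the schema described above;
# the individual sample fields are made-up examples.

def wrap_shopfloor_data(optical, wireless, tool):
    return json.dumps({
        "OpticalData": optical,    # e.g. skeleton joint frames
        "WirelessData": wireless,  # e.g. IMU / e-glove samples
        "ToolData": tool,          # e.g. torque readings
    })

msg = wrap_shopfloor_data(
    optical=[{"joint": "right_hand", "xyz": [0.4, 0.7, 1.2], "t": 1010.2}],
    wireless=[{"imu": "part_07", "accel": [0.0, 0.0, 9.81], "t": 1010.2}],
    tool=[{"torque_Nm": 2.4, "t": 1010.3}],
)
```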
The real-world data are stored in the data layer. This layer provides an Application Programming Interface (API) for Create, Read, Update and Delete (CRUD) operations in the form of RESTful services. The physical layer is implemented using Apache Cassandra, a NoSQL database capable of handling and querying JSON data. A generic schema has been implemented in the Cassandra database for enabling the data management of different scenarios. The schema consists of the actual data to be stored and a 'Meta-Model'. The 'Meta-Model' contains three properties that allow the quick retrieval and correlation of data by indexing these specific fields, namely: • 'id': Representing each data unit's unique identifier. This is an auto-generated value that ensures uniqueness.
• 'domain': A field representing the domain the data comes from. It is used to distinguish data generated by sensors (sensor domain), data generated by the closed-loop control (constraints and cost function domains) and data generated for the digital twin layer (control and execution domains).
• 'group_id': A field that correlates data from different domains generated within the same scenario/project. This value is also used as the scenario/project identifier.
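The 'Meta-Model' indexing described above can be sketched with a minimal in-memory store; this is a hedged illustration of the id/domain/group_id scheme, not the actual Cassandra implementation.

```python
import uuid

# In-memory sketch of the 'Meta-Model': every stored record carries an
# auto-generated 'id', a 'domain' and a 'group_id'; the latter two are
# indexed so records can be retrieved per domain or per scenario quickly.

class MetaModelStore:
    def __init__(self):
        self._records = {}
        self._by_domain = {}
        self._by_group = {}

    def create(self, domain, group_id, payload):
        rec_id = str(uuid.uuid4())  # auto-generated unique identifier
        record = {"id": rec_id, "domain": domain,
                  "group_id": group_id, "data": payload}
        self._records[rec_id] = record
        self._by_domain.setdefault(domain, []).append(rec_id)
        self._by_group.setdefault(group_id, []).append(rec_id)
        return rec_id

    def read_by_group(self, group_id):
        return [self._records[i] for i in self._by_group.get(group_id, [])]

    def read_by_domain(self, domain):
        return [self._records[i] for i in self._by_domain.get(domain, [])]
```

In the real system these lookups would map to indexed queries against the Cassandra schema, exposed through the RESTful CRUD API.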
The use of Camunda business process management (BPM) engine has enabled the implementation of the 'closed-loop control' layer (by integrating the relevant services) along with the 'digital twin' layer's control function into a single workflow resulting in the 'digital twin' simulated execution.
The output of the cyber system, y_sim, includes the result of the digital twin simulation in terms of production-wise performance measurements (e.g. cycle time, process time) and 3D space requirements (e.g. working space, material supply areas, walking paths). The deviation of the simulation output, y_sim, e.g. the achieved ergonomic score, from the provided target output, y_opt, is the compensation error, y_e, which is to be minimised through the digital twin simulations. The error is minimised iteratively through changes in the input y_dig (e.g. 3D working space reconfiguration, changes in the shopfloor operations). In the current work, the output y_sim includes the ergonomic assessment of the simulations within the scope of the EAWS and NIOSH standards, allowing the creation of ergonomic landscapes, the evaluation of working conditions and the visualisation of ergonomic risks.

Industrial pilot case
The case study presented in this paper involves a pick-and-place process of warehouse components from a rack to a trolley. The purpose of this pilot case is (a) to record real-world information about the way the operations are performed by an operator, (b) to transfer this information to the digital twin and (c) to evaluate the ergonomics and cycle time of the current process, along with identifying potential improvements through consecutive reconfigurations of the digital twin. It should be noted that the implemented links enabling the digital twin are only a part of the presented software application platform, which integrates all the functionalities required to enable the evaluation of the scenario.
The evaluated scenario comprises an industrially relevant environment of approximately 10 square metres, representing the corresponding warehouse. The environment was arranged to have two racks of different heights, between which the actors would pick and transfer objects. The performed motions were tracked by two Kinect v2 sensors as well as one e-glove. The pilot case setup, the warehouse environment and its 3D model are presented in Figure 10.
The scenario was executed five times by three actors with different physical characteristics. Table 2 demonstrates the results of the proposed motion recognition approach for each actor, as evaluated in the presented use case.
From the recognised motions, the motion constraints corresponding to the data recorded from the physical environment are generated. The time required for the transformation of the real-world recordings into motion constraints information, including the coordinate transformation and time synchronisation, was approximately 5-10 min. Then, through these constraints, the motion synthesis model can simulate the behaviour of the actual operator through the 3D human model in the virtual environment. An example of walk motion constraints, represented as blue dots on the floor, is provided in Figure 11. Different motion and environmental/scene constraints result in different simulations. The spaghetti chart, as well as the ergonomic evaluation of the aforementioned scenario, are presented in Figure 12. By following the presented approach iteratively, i.e. evaluating the simulation results, reconfiguring the digital twin and running new simulations, the ergonomics of the warehouse operations can be improved/optimised within a timeframe of hours through the digital twin simulations instead of days. After the simulation has generated the desired working space configuration and the characteristics of the assembly steps, the output can be transferred to the physical system with reduced/minimum cost and effort.

Conclusions and discussion
The method presented in this paper describes an approach for transferring real-world information of a human-based manufacturing system to its cyber counterpart, where the data can be further processed and evaluated in a cost-effective manner. Sensor recordings are used for improving the ergonomics of warehouse operations, as well as for the reconfiguration of the station towards improved ergonomics, allowing such experimentation to occur without interrupting the real production process. Hence, the digital twin concept demonstrates the potential to be used as a controller of a manufacturing system, closing the control loop with it. The closed-loop system can provide a digital testbed for empowering production-wise applications such as human resources planning, station design and reconfiguration, as well as rapid prototyping.
Future work will focus on extending the set of sensors used for the recording of real-world operations, as well as on improving the accuracy of the data processing approaches for recognising operations and generating motion constraints from them. Moreover, the evaluation of alternative approaches for parameter extraction and motion synthesis, with the support of additional sensor systems, will be investigated towards increasing the accuracy of the digital twin.