Targeted muscle training with a hybrid body-machine interface

Abstract—Studies have shown that motor recovery after neurological injuries depends on functional reorganization. In particular, engaging muscles in skilled activities triggers a process of remodeling that could improve functional outcomes. Here, we propose a novel approach for engaging targeted muscles in skilled activities while operating assistive interfaces based on wearable sensors. To enforce the contribution of specific muscles to the control output of a movement-based assistive interface, we introduce a signal dependent on muscle activation as a replacement for a highly correlated signal dependent on limb kinematics, as measured by a set of inertial sensors. The kinematic signals are weighted against the EMG contribution and sent as input to a linear map projecting them onto a 2D screen. Modulating the weighting factor allows switching from a kinematic-only (assistive) mode to a hybrid (rehabilitative) mode by increasing or decreasing the EMG contribution to the operation of the interface.


I. INTRODUCTION
Loss of independence after neurological injuries commonly results from the inability to voluntarily control muscle activation to the degree required to complete functional activities without assistance. Generally, assistive devices employ interfaces that bridge the inputs still available to their users (e.g., movement of unaffected limbs, the activity of certain muscles) and the output of a device as a means to bypass the disability [1]–[3]. Learning to skillfully operate these devices triggers a process of neural adaptation and leads to the gradual consolidation of specific coordination strategies [4].
To this end, it has been proposed that assistive interfaces could also serve as a means to retrain motor functions. This is the case for body-machine interfaces (BoMIs), which allow their users to perform skillful tasks by artificially re-mapping available movements into a suitable control space [5]. Interfaces based on movement have, however, the major drawback of not allowing the selective involvement of specific muscles in the operation. Accordingly, only the overall effect of coordinated muscle activations can be controlled and observed. Myoelectric interfaces, on the other hand, directly map the activity of a selected group of muscles into control inputs for the device, but they suffer from recording instabilities, are sensitive to electrode placement, and inject more uncertainty into the control due to their lower signal-to-noise ratio [6].
Here, we propose a hybrid interface design that exploits the advantages of movement-based interfaces while also allowing a direct and selective contribution of muscles to the control. In particular, we tested the ability of the interface to increase triceps, but not biceps, activation with practice. The EMG activity of the biceps and triceps muscles is initially recorded during operation of the interface in movement mode. The observed activation is then remapped onto the movement inputs such that the causality between limb kinematics and muscle activity is maximized. We introduced a weighting coefficient to adjust the contribution of movement and muscle signals over time and to allow intuitively switching from an assistive mode (movement-based) to a training mode (hybrid).

II. METHODS

A. Hybrid interface design
In a typical body-machine interface, the interface map is a linear transformation from the input space of body movement signals, generated by inertial measurement units (IMUs), to the output space of the device, for instance the coordinates of a 2D cursor. To include muscle activity in the interface map, we started from the assumption that specific patterns of muscle activity are observed while individuals control the BoMI, and that these patterns consolidate with time [7]. Moreover, since muscle activity and joint movement are biomechanically coupled, we hypothesized that it is possible to observe patterns of movement–muscle coordination during interface use that are specific to the interface map. Hence, our approach did not aim at modifying the interface map, but rather at modifying the input to the map so that it accounts for both movement and EMG contributions to the output.

To do so, we identified the movement signal that exhibited the highest correlation and minimal delay relative to the targeted muscle activity as the desired coordination element (Fig. 1). We then constructed an IMU-equivalent signal q̃ from the target EMG envelope e by imposing an equivalence between the z-scores of the two signals:

q̃ = μ_q + c λ σ_q (e − μ_e) / σ_e   (1)

where μ_e, σ_e and μ_q, σ_q are, respectively, the estimated mean and standard deviation of the EMG envelope and of the IMU orientation recorded at baseline, and c = {−1, 1} accounts for the sign of their cross-correlation. The gain λ is used to modulate the effective amplitude of the muscle activation during training so as to induce a muscle to contract more (λ < 1) or less (λ > 1) than at baseline. The hybrid input h to the interface is then computed as a linear combination of the original IMU signal q and the reconstructed signal q̃, weighted by a coefficient 0 ≤ α ≤ 1 controlling the amount of EMG contribution to the interface (α = 0 is the baseline condition), as in Eq. 2:

h = (1 − α) q + α q̃   (2)
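As a concrete illustration, the hybrid-input computation described above can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: all function and variable names are hypothetical, and the channel-pairing step uses a simple zero-lag correlation in place of the full cross-correlation and delay analysis described in the text.

```python
import numpy as np

def best_paired_channel(imu_channels, emg_env):
    """Pick the IMU channel with the largest |correlation| to the EMG envelope.

    Simplified to zero lag; returns the channel index and the sign c of
    the correlation (Eq. 1 uses c to preserve the direction of coupling).
    """
    r = [np.corrcoef(q, emg_env)[0, 1] for q in imu_channels]
    k = int(np.argmax(np.abs(r)))
    return k, np.sign(r[k])

def imu_equivalent(emg_env, mu_e, sd_e, mu_q, sd_q, c=1.0, lam=1.0):
    """Reconstruct an IMU-equivalent signal from the EMG envelope (Eq. 1).

    The EMG z-score, scaled by the gain lam and the correlation sign c,
    is mapped back into IMU units. With lam < 1 the same contraction
    yields a smaller output, so the user must contract more.
    """
    return mu_q + c * lam * sd_q * (emg_env - mu_e) / sd_e

def hybrid_input(q_imu, q_emg, alpha=0.0):
    """Blend the measured IMU signal with its EMG reconstruction (Eq. 2).

    alpha = 0 reproduces the purely kinematic (baseline) interface;
    alpha = 1 fully replaces the selected IMU channel with EMG.
    """
    return (1.0 - alpha) * q_imu + alpha * q_emg
```

In practice, the baseline statistics (mu_e, sd_e, mu_q, sd_q) would be estimated from the recording in movement mode before training begins.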
B. Experimental protocol

Ten unimpaired volunteers practiced a reaching task with the BoMI using their dominant arm and forearm for one session of about 1 hour. Biceps and triceps muscle activities, as well as arm and forearm movements, were recorded using three IMU/EMG sensors (Delsys Trigno), placed as in Fig. 1. The pitch and roll angles of two sensors (IMU1 and IMU2) were used as the movement input to the interface. The BoMI map was calibrated for each participant by extracting the first two principal components of movement variance during 30 seconds of random arm motions. The roll and pitch of each sensor were then mapped by the BoMI to obtain the x, y position of a cursor displayed on a screen. Participants had to control the cursor to reach targets appearing at random positions on the screen in three conditions: i) baseline, using only IMU inputs; ii) α modulation, with α increasing by 0.25 every 20 trials; iii) λ modulation, with α = 1 and the gain λ decreasing every 20 trials. Before and after each condition, a reaching test to 4 targets (20 trials), as in Fig. 1, was used to evaluate control performance across conditions. The practice started and ended with a test in the reference condition α = 0. The target muscle was the triceps brachii.
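The calibration step (first two principal components of 30 s of free arm motion, then projection of the four roll/pitch angles onto cursor coordinates) could be sketched as follows. This is an illustrative reconstruction under assumed conventions (PCA via SVD, no gain normalization across participants); the names are hypothetical.

```python
import numpy as np

def calibrate_bomi_map(calib_data):
    """Build a 2D BoMI map from calibration movements.

    calib_data: (n_samples, 4) array of [roll1, pitch1, roll2, pitch2]
    recorded during ~30 s of random arm motion. The map projects the
    four IMU angles onto the first two principal components of the
    movement variance.
    """
    mu = calib_data.mean(axis=0)
    centered = calib_data - mu
    # PCA via SVD: rows of vt are principal axes sorted by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    W = vt[:2]                       # 2 x 4 projection matrix
    return mu, W

def to_cursor(sample, mu, W, gain=1.0):
    """Map one 4-D IMU sample (or hybrid input) to (x, y) cursor coordinates."""
    return gain * (W @ (sample - mu))
```

During training, the hybrid input of Eq. 2 would simply replace the corresponding IMU channel in `sample` before the projection, leaving the calibrated map untouched, which is the central point of the proposed design.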

III. RESULTS
All participants learned to perform the reaching task across conditions (Fig. 2, left). As shown in Fig. 2, right, the time to reach a target during the test significantly decreased with practice and was minimally affected by the change in control condition. Triceps activity increased significantly compared to baseline with α > 0.25 and further increased with λ < 1 (t-test, block vs. baseline). In contrast, the activity of the biceps did not change significantly with training (result not shown). The triceps activity modulation was not retained when returning to baseline.

IV. DISCUSSION AND CONCLUSIONS
We propose a novel approach for engaging targeted muscles in skilled activities while operating assistive interfaces based on wearable sensors. A recent work explored the advantages of integrating EMG and kinematics for controlling body-machine interfaces [6]. The authors suggest that the parameters of a non-linear regression could be tuned to modulate muscle contributions to the control. Here, we implemented an intuitive way to provide targeted muscle training that exploits i) the stability and ease of control of a kinematic interface and ii) the natural biomechanical causality between muscle activation and movement. The proposed approach can easily be extended to multiple muscles, for instance to include antagonist muscles to help regulate co-activation. The upper bound on the number of muscles that can be included is set by the natural motor redundancy encoded in the BoMI map. Furthermore, we speculate that the proposed hybrid interface could be used to compare the effect of intuitive vs. non-intuitive EMG-kinematics mappings to study abstract learning in human-machine interfaces [8].