User integration with assistive devices or rehabilitation protocols is a key principle for developers seeking to truly optimize gains in movement function. Better integration may entail customizing the operation of devices and training programs according to several user characteristics during execution of functional tasks. These characteristics may include physical dimensions, residual capabilities, restored sensory feedback, cognitive perception, and stereotypical actions.
1. Introduction: User ‘feel’ for rehabilitation devices
How then is the full potential of ‘do’ realized with movement rehabilitation? The Nike slogan would imply it is a matter of better ‘feel’. What defines that ‘feel’, and how devices and protocols might be designed to provide it, is the focus of this chapter.
Integration can also be described on multiple levels. In the case of a device, the user may come to see the device truly as an extension of themselves. Certainly, the ultimate objective for integration would be to provide means that maximize biomimicry, including allowing the user to feel a sense of embodiment with the device [12–14]. Not only should the device be anthropomorphic, but it should also provide the user with sensory feedback, including touch and kinesthesia. Finally, the device should reflect user intent. In cognitive neuroscience, this would entail having a sense of ‘agency’, or the belief that one is the true author of one’s movements. Understanding how better to integrate the user with a device may lead to better performance of a specific movement function. Therefore, it may be critical to identify the design specifications of a rehabilitation device that improve user integration to better execute the desired movement. With movement execution, feedback describing system kinematic states [17, 18] is fundamental at a device level. But allowing users to ‘feel’ more integrated with operation may be the key to better performance of user-driven devices.
Areas of advancement to improve device ‘feel’ include: (1) using novel modes of feedback, (2) computational intelligence for processing control and identification, (3) targeted user-based command strategies to initiate and modulate control of actions, and (4) identifying optimal movements to be executed. For novel modes of feedback, not only could new sensory modalities be biophysically restored, but device controllers and rehabilitation protocols may better leverage how users employ intact sensory feedback capabilities. Users may rely on intact visual, audio, and haptic feedback to negotiate movements and modulate their control of a device or actions during a rehabilitation training protocol. New control structures are taking advantage of increasingly powerful computational capabilities and soft computing approaches. These approaches may require less accurate descriptions of system identification and can readily update parameters to adapt online in the presence of new performance data. Driving operation of assistive devices according to user commands for better performance has typically involved selecting desirable user cues, tuning operational sensitivity to those cues, and facilitating heuristic learning. Simple cues may allow users to invoke switch-type control that initiates operation with a mechanical trigger such as a button push. Extension to biomimetic operation would rely on more graded control and bioelectric signaling from the user, such as electromyographic (EMG) or electroencephalographic (EEG) recordings [23, 24]. Subsequent refinement and tuning between developers and users for customization can largely be expected to generate satisfactory empirical results. Thus, customizing devices and protocols to each user is an important design consideration to produce the best movement results for the user. Certainly, considering physical dimensions to better fit a device to someone is required for basic assimilation.
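As an illustration of switch-type command, below is a minimal sketch of a threshold trigger on a rectified, smoothed EMG envelope; the hysteresis scheme and the on/off levels are hypothetical design choices, not taken from any particular device:

```python
def make_threshold_trigger(on_level, off_level):
    """Switch-type control with hysteresis: the device turns on when the
    EMG envelope rises above on_level and turns off only when it falls
    below off_level, so noise near a single threshold cannot chatter
    the device state."""
    state = {"on": False}

    def trigger(envelope_sample):
        if not state["on"] and envelope_sample >= on_level:
            state["on"] = True
        elif state["on"] and envelope_sample <= off_level:
            state["on"] = False
        return state["on"]

    return trigger
```

Feeding successive envelope samples to the trigger yields a clean on/off command stream suitable for initiating device operation.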
However, the residual capabilities of the user may not be as well considered except to define a functional baseline from which the user is expected to improve with practice. This approach leaves the onus largely on the user to learn how to better operate the device or diligently engage in a protocol.
2. System view of user-centered rehabilitation
There are three major components to consider for a system that restores or rehabilitates movement function (Figure 1, top). First, there is the biomechanical plant, which is composed of the physical body of the person executing a movement task objective within an environment, including objects to be manipulated. In this formulation, we interpret the fundamental movement goal as a fixed property of the system plant. The physical body includes the size and shape characteristics of the limbs to be moved, perhaps the entire body as with ambulation. Second, a device may assist in executing the movement or provide constraints and cues to the user on how better to perform the function. Third, the user volitionally commands themselves and/or the device to drive movement operation of the entire system. The extent of this drive depends on the residual capabilities of the user, contingent on the disorder, the capabilities of the device, and the user-device interface. For example, an individual walking after stroke may largely drive the movement with their own residual capabilities. They may utilize an assistive device such as a spring-loaded ankle-foot orthosis (AFO) for a propulsive boost during gait push-off. The user may further govern execution of gait with their vision and the haptic feedback they receive at their feet, or even with an additional assistive device such as a walking cane.
All three major system components (plant, device, user) must act in concert to successfully maintain or execute movement. The synergistic interactions between the components represent areas of research investigation to be developed to better restore movement performance. In Figure 1 (bottom), areas 1, 2, and 3 are specifications for each fundamental component previously described. Area 4 signifies how the device utilizes its onboard sensing capabilities to receive feedback about actions of the plant, and in turn utilizes its actuation capabilities to manipulate the plant. Depending on the level of assistance the device provides, it likely requires proportionate information about the states of the plant. For movement, these feedback states are likely kinematics. The device may utilize potentiometers to measure joint angles, or inertial sensing units that integrate accelerometers, gyroscopes, and magnetometers to derive real-time estimates of global orientation. Load sensors may be utilized to discern pressure or force information between plant and environment, such as ground-foot contact or hand grasp loads onto objects. Device actuation may be more passive, as with AFO assistance from spring loads applied proportional to displacement à la Hooke’s law (F = −kΔx).
Area 5 similarly denotes how the user may rely on his or her own residual sensory capabilities (such as vision or touch) to interpret states of the plant and can enact residual motor capabilities to further manipulate the plant, independently or with the device. For the purposes of rehabilitation, user-volitional capabilities alone are not sufficient to achieve the desired levels of function. Thus, a device or training program to enhance overall function is necessary to desirably improve movement performance. The relative user-versus-device contribution to function can vary considerably depending on application. For gait following stroke, the user typically drives most of the action, which is only enhanced by the device. Alternatively, it may be the device that makes movement possible at all, as with functional neuromuscular stimulation (FNS) or powered exoskeletons to restore standing and walking following complete spinal cord injury (SCI).
Area 6 is the fundamental link between user and device. Often, the device operates in a pre-programmed manner according to distinct phases of the movement (e.g., gait cycle), or the user will command operation of the device (e.g., button push, tilting action, signal threshold exceeded). In the latter case, the onus is on the user to monitor the plant and select when the device should intervene. Ultimately, the user responds not only to his or her own observations of the plant but must react to the actions of the device as well. The device may also provide information to the user about the plant that the user does not otherwise have. In this case, the device may restore sensory feedback (touch or movement) to the user about body-environment-task interactions [31, 32]. The device may do so through neural activation to restore basic sensory capabilities such as vision, hearing, touch, or kinesthesia. Or the device may provide supplementary sensory cues to alert the user to events or cautionary indicators related to the movement. For example, vibrational cues have been used for training individuals on how better to use a device, and audio cues have been provided for assisting individuals in balance function.
Moving forward, it is research initiatives in Area 7, which design simultaneous synergies between plant, user, and device, that may truly optimize rehabilitation for better functional performance. Considering how best to integrate the user may be most key to this end. Mechanical descriptions of the plant and target movement are necessarily done when designing the functional specifications of an assistive device. But integrating the user is comparatively more challenging without quantitative descriptions of user perception, intent, and residual capability.
3. Enacting device control from user actions
A biomechanical rehabilitation system with great potential for better incorporating user actions into enacted device operation is gait restoration by powered exoskeletons. To overcome clinical barriers to translation, powered lower-limb orthoses for gait following SCI should be lightweight, cosmetic, and well-integrated with standard wheelchairs. With these design constraints, however, joint degrees-of-freedom and torque magnitudes are typically limited. Users must then utilize walking aids such as crutches or canes for balance and support. While exoskeleton-assisted gait is achievable, the observed motions are staggered and energy-inefficient for both user and orthosis. Feedback control schemes may be better designed to synergistically combine the functional capabilities and actions of the user and worn exoskeleton to produce gait patterns that are more natural and energy cost-effective.
The linear quadratic regulator (LQR) is a classical optimal control approach to regulate the dynamics of a linear system. The LQR can also be utilized on non-linear dynamical systems, such as bipedal human gait, by approximating them as linear time-varying systems. Extending such an approach could provide a foundation for optimal control of powered gait devices such as exoskeletons. Exoskeletons to restore or augment gait function have previously employed time-varying proportional-derivative (PD) control. However, gains and setpoints for closed-loop joint control were empirically fitted to human angle-moment data and consequently neither unique nor optimal toward specific performance objectives. This is unlike an optimal control approach, where parameters are uniquely identified by minimizing a specified cost function, such as controller effort, while better tracking desired motion trajectories.
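As a sketch of the optimal control machinery involved, the discrete-time LQR gain can be computed by iterating the Riccati recursion to a fixed point; the single-joint double-integrator model, time step, and weighting values below are illustrative assumptions, not parameters from the cited exoskeleton work:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati
    recursion: P <- Q + A'P(A - BK), with K = (R + B'PB)^-1 B'PA."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Illustrative single-joint (double integrator) model at 10 ms steps;
# states are joint angle error and angular velocity error.
dt = 0.01
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt]])
K = dlqr(A, B, Q=np.diag([100.0, 1.0]), R=np.array([[0.01]]))
```

The tracking-versus-effort trade-off is set entirely by Q and R, so the resulting closed-loop dynamics A − BK are optimal for that specific cost rather than empirically tuned.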
Bipedal walking can largely be simulated with a sagittal plane model. A minimal representation may consist of seven rigid bodies connected by six bilateral joints at the hips, knees, and ankles. For a full-state description, there would be nine kinematic generalized coordinates: global anterior-posterior and superior-inferior position coordinates of the hip joint, torso orientation, right/left hip angles, right/left knee angles, and right/left ankle angles. In this model, contact between feet and ground can be modeled with spring-damper elements uniformly distributed along each foot sole, and some passive joint moments can be included to provide realistic joint motions and limits. Reasonable passive moments include basic stiffness (1 N-m/rad) and damping (1 N-m-s/rad) through normal ranges of motion, and then high stiffness (1000 N-m/rad) near range-of-motion limits. Tools to generate equations of motion (e.g., Autolev by Online Dynamics Inc., Sunnyvale, CA) can be readily integrated with MEX functionality to compute the system dynamics in the standard form M(q)q̈ = f(q, q̇, τ), where q is the vector of generalized coordinates, M(q) is the mass matrix, and τ contains the applied joint moments.
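The ground-contact and passive-moment elements described above can be sketched as simple functions. The passive joint coefficients follow the values stated in the text, while the contact stiffness and damping values are placeholders:

```python
def contact_force(y, ydot, k_contact=5.0e4, c_contact=1.0e3):
    """Normal ground-reaction force from one spring-damper sole element,
    active only while the point penetrates the ground (y < 0). The
    stiffness/damping values here are illustrative placeholders."""
    if y >= 0.0:
        return 0.0
    f = -k_contact * y - c_contact * ydot   # spring pushes up, damper resists
    return max(f, 0.0)                      # contact cannot pull the foot down

def passive_moment(q, qdot, q_min, q_max,
                   k_passive=1.0, c_passive=1.0, k_limit=1000.0):
    """Passive joint moment: light stiffness (1 N-m/rad) and damping
    (1 N-m-s/rad) through the normal range, plus a stiff restoring
    moment (1000 N-m/rad) past the range-of-motion limits."""
    m = -k_passive * q - c_passive * qdot
    if q > q_max:
        m += -k_limit * (q - q_max)
    elif q < q_min:
        m += -k_limit * (q - q_min)
    return m
```

Summing these element forces and moments over the sole points and joints provides the external and passive terms of the dynamics above.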
The diagonal weighting matrices Q and R of the LQR cost function penalize, respectively, deviations of the plant states from the desired trajectory and the control effort expended by the device. Larger entries in Q demand tighter tracking of the corresponding kinematic states, while larger entries in R penalize actuator torques.
To uniformly vary weighting toward tracking over effort, only the Q matrix need be scaled by a single factor (e.g., Q → ρQ with ρ > 1) while R is held fixed.
For “exoskeleton walking,” which typically requires volitional upper-body support, feedback states would need to include kinematics of the torso and upper extremities as well. The upper extremities could include extensions of the crutches that make ground contact, converting the model into a type of quadruped. Alternatively, if arm support actions were more suspensory, as with a walker, user ‘control’ could be equivalently modeled as loads at the shoulders [41, 42]. However, crutch walking would allow controller developers to utilize volitional motion of the upper extremities of the user to signal gait intentions. From these gross kinematic actions, desirable operation of the powered exoskeleton may be better identified. An LQR controller could be defined for the entire plant (including upper and lower extremities). A family of LQR controllers for different sets of optimal whole-body kinematic trajectories could be generated offline. Each set could represent a different gait speed or various couplings between upper and lower body motions. In any case, the sets of trajectories should comprehensively cover the spectrum of potential motion behaviors for a given user. But to parse out upper extremity actions to trigger lower extremity control, two distinct control loops would be necessary. First, if the user (human operator model) has the freedom to modulate gait from their upper extremities, then one loop needs to map the expected or desired lower extremity control actions. These actions would necessarily be coupled to the observed/measured upper extremity feedback states. Some mapping structure (e.g., an artificial neural network) is necessary to identify, from the upper extremity kinematics, the corresponding desired kinematic states of the lower extremities; a second loop would then apply LQR feedback to drive the exoskeleton joints toward those states.
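A minimal sketch of the first control loop follows, with an ordinary least-squares linear map standing in for the artificial neural network named in the text; the synthetic upper-to-lower body coupling and state dimensions are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 6 upper-extremity states (e.g., shoulder and
# crutch-arm angles/velocities) paired with 4 desired lower-extremity
# states, related here by an assumed linear coupling plus sensor noise.
W_true = rng.normal(size=(4, 6))
X_upper = rng.normal(size=(200, 6))
Y_lower = X_upper @ W_true.T + 0.01 * rng.normal(size=(200, 4))

# Fit the mapping by ordinary least squares; an artificial neural
# network would replace this linear stand-in for nonlinear couplings.
W_fit, *_ = np.linalg.lstsq(X_upper, Y_lower, rcond=None)

def desired_lower_states(upper_kinematics):
    """Map measured upper-extremity states to the desired lower-extremity
    states that the exoskeleton's feedback loop should track."""
    return upper_kinematics @ W_fit
```

The second loop would then track the mapped states with the offline-generated LQR gains.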
4. Optimizing user agency over device
User-driven devices are often judged by their measured ability to restore or add functional capabilities following neuromuscular disorder or injury. Limb loss has a profound impact on an individual’s capability to perform activities of daily living. Amputations of the upper limb can be especially debilitating to one’s ability to interact with the environment and feel physically engaged with the world. Prostheses for limb loss are evaluated at a first level by simple physical fit and comfort. But further criteria for artificial limbs are that they feel natural and operate as real limbs do, based on embodiment or brain-level control. Applied neurosciences are also creating prostheses that provide a better sense of interaction with the external world and seamlessly take actions according to user intent. Neuro-applications have been developed to elicit motor and sensory signals at a peripheral neurophysiological level for novel electromyograms and restoration of touch [1, 8]. But more complete user integration with the device may reside at a cognitive level. The user should have a better ‘feel’ for the device, not just by literal touch restoration, but a sense that they are controlling the device intuitively. These devices should be operated with a sense of agency, or the perception that the user is the true author of the device actions [14, 16]. In this section, we propose a framework to identify processes that optimize sensory-feedback hand prostheses according to agency. Such an approach may provide insight into design formulations that better integrate users with their devices, facilitate clinical acceptance, and ultimately produce skillful performance control that is more intuitive to the user.
For a user to inherently recognize the prosthesis as an extension of self, device operation should facilitate a sensorimotor basis of self-awareness. A key aspect of self-awareness is recognition of being the author of one’s own voluntary actions and their related consequences. This phenomenon is known as a sense of agency. It has been shown that cognitive processes underlying agency may be abstracted even in the presence of intermediaries such as prostheses. Thus, a sense of agency may not require direct physiological embodiment, or body ownership, and can exist across novel instrumental associations. Both agency and body ownership can be present with a surrogate prosthesis, as shown with the rubber hand illusion [12, 44]. The sense of agency may be reduced by altered embodiment, but distortions in the causal chain that produce end-effector actions incongruent with user intention reduce agency more notably. There is evidence that with increased agency there is greater ability to perform functional tasks.
There are innovative approaches to restore sensory and motor pathways for better control of functional hand prostheses after upper-limb amputation. One such approach is targeted muscle reinnervation (TMR), where nerves that once controlled the lost hand are surgically reassigned to target sites upon denervated pectoral muscles. Resultant EMG recordings from these sites represent motor commands to the missing hand that can be reliably utilized to drive a motorized prosthesis. In turn, tactile manipulation of skin at these sites stimulates afferent pathways that elicit sensory feedback of the missing hand to the amputee. The biological neural-machine interface created by TMR is a unique platform for investigating control of multi-sensory prostheses [7, 31, 32] and potential cognitive frameworks of agency. Specifically, agency considerations may enhance the EMG motor command interface and augment how sensory feedback is perceived by the user. One could utilize a computer-generated virtual hand to represent multi-joint prosthesis dynamics driven by descending motor commands recorded at TMR muscle sites. Tactile pressure and vibratory stimulation of reinnervated skin areas can activate feedback pathways that elicit illusions of touch and kinesthesia perceived as superimposed upon the computer hand. Procedures for perceptual mapping to restore robotic touch can be used to identify locations and parameters for pressure and vibratory stimulation. The virtual hand could perform functional grasp tasks such as matching dynamic grip-load profiles [49, 50] with concurrent visual feedback of the hand. The hand could be driven by EMG command with concurrent sensory illusions provided according to hand motion and contact with virtual objects. The end of the grip loading profile would represent task completion, after which tones would be sounded at variable intervals to detect intentional binding as a measure of agency.
Intentional binding is the perceptual compression of the time interval between a movement event and its sensory consequence when the action is voluntary, i.e., with greater agency. Ultimately, patterns of EMG and visual illusions for touch and movement can be preferentially selected based on agency. Efficacy of such an approach may be reflected in improved performance at various levels of prosthesis development. These levels include EMG classification accuracy of user motor intent, user ability to interpret sensory feedback, and user functional performance.
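As a sketch of how intentional binding could be quantified from interval-estimation trials, the score below takes the compression of perceived action-tone intervals relative to a baseline condition; the data values are hypothetical:

```python
def binding_score(baseline_estimates_ms, active_estimates_ms):
    """Intentional binding as compression of perceived action-tone
    intervals: mean estimate in a baseline (involuntary) condition minus
    mean estimate in the voluntary-action condition. Larger scores
    indicate stronger binding, i.e., a greater sense of agency."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(baseline_estimates_ms) - mean(active_estimates_ms)

# Hypothetical interval estimates (ms) from one participant
baseline = [420, 450, 430, 460]   # passive/involuntary trials
active = [330, 360, 340, 350]     # voluntary EMG-driven grasp trials
score = binding_score(baseline, active)
```

Comparing scores across feedback conditions would then indicate which EMG and sensory-illusion patterns yield the greatest agency.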
Myoelectric control has been a major focus area for improved hand prosthesis function, aiming to create user command interfaces that are intuitive and effective [21, 52, 53]. Notable work has demonstrated the advantages of EMG pattern recognition over amplitude-based command to provide intuitive prosthesis control. While these methods are largely based on prosthesis operation, the motor command interface could be designed to co-maximize function and sense of agency. Typically, EMG data from target muscle sites are collected to identify specific grasp patterns based on feature extraction [55–57]. Several of these methods produce similar identification results provided an appropriate selection of features. Subsequently, the real-time effectiveness of the pattern classifier is evaluated in testing environments that observe functional metrics such as task times and success rates under proportional control. We postulate that collecting EMG-driven prosthesis grasping data within an intentional binding paradigm may facilitate improved classification and performance given user-preferred grasping patterns. We outline the collection of data and implementation of a hand prosthesis based on agency in Figure 3.
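A toy sketch of the feature-extraction and pattern-classification pipeline follows: standard time-domain EMG features feed a nearest-centroid classifier standing in for the classifiers cited above; the grip labels and synthetic signals used to exercise it are illustrative assumptions:

```python
import numpy as np

def emg_features(window):
    """Common time-domain EMG features: mean absolute value (MAV),
    zero-crossing count (ZC), and waveform length (WL)."""
    w = np.asarray(window, dtype=float)
    mav = np.mean(np.abs(w))
    zc = np.sum(np.signbit(w[:-1]) != np.signbit(w[1:]))
    wl = np.sum(np.abs(np.diff(w)))
    return np.array([mav, zc, wl])

class NearestCentroidGrips:
    """Toy grip classifier: one feature centroid per grip class."""

    def fit(self, windows, labels):
        X = np.array([emg_features(w) for w in windows])
        self.classes_ = sorted(set(labels))
        self.centroids_ = {c: X[np.array([l == c for l in labels])].mean(axis=0)
                           for c in self.classes_}
        return self

    def predict(self, window):
        f = emg_features(window)
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(f - self.centroids_[c]))
```

Restricting the training windows to high-agency trials, as proposed, would change only which windows are passed to fit.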
It may be possible that preferential selection of electromyography data for pattern classification training from trials where the subject indicates greater agency will produce higher classification success rates for various functional grips (cylinder, flat, and precision). For amputee subjects, training data collection can be done as part of a grip-matching paradigm between the intact hand and the missing (imagined) hand. Using a standard classifier plus standard proportional control, users could undergo functional EMG-driven trials to execute open-to-close grips of the virtual hand according to visual cues for target speed and position. For each trial, the sensory illusion and the virtual hand being visualized could be systematically altered. With each trial, the intentional binding metric with interval estimation would be evaluated to assess sense of agency. Optimal grip configurations for posture and movements may be identified that maximize agency and subsequently grasp performance. It is then reasonably hypothesized that subjects will more reliably generate EMG patterns for conditions in which they have greater agency. Ultimately, the EMG data used to train and test the pattern classifier, the grip kinesthetic illusions, and the grip configuration illusions while having tactile feedback could all be preferentially selected according to user agency.
5. Identifying optimal movements from individual physiology
Observable biomechanical characteristics (such as kinematics, external forces, masses, lengths, and inertias) can be well assessed for function, but the underlying physiological patterns may not be. Muscle force patterns are often estimated from optimization with the criterion of minimal muscle effort following an inverse analysis to determine joint moments. We hypothesize there is value in customizing the minimization criterion based on observable physiological responses for an individual. In turn, a model-based approach could be developed for rehabilitation paradigms to identify movements that are optimal for an individual based on other criteria, such as minimizing injury stresses. For the remainder of this section, we propose an example methodology to modify a musculoskeletal modeling environment based on person-specific physiological responses to identify optimal movements that potentially minimize shoulder injury for those using wheelchairs.
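The minimal-muscle-effort criterion can be made concrete for a single joint: minimizing the sum of squared activations subject to a net joint-moment constraint has a closed-form Lagrange solution when all moment arms share the sign of the moment. The moment arms and maximum forces below are hypothetical:

```python
def min_effort_forces(moment, moment_arms, f_max):
    """Distribute a net joint moment across muscles by minimizing
    sum((F_i / F_i,max)^2) subject to sum(r_i * F_i) = M.
    Setting the gradient of the Lagrangian to zero gives
    F_i = lambda * r_i * F_i,max^2, with lambda fixed by the constraint.
    Valid when every moment arm shares the sign of the moment, so all
    forces come out non-negative."""
    denom = sum(r * r * f * f for r, f in zip(moment_arms, f_max))
    lam = moment / denom
    return [lam * r * f * f for r, f in zip(moment_arms, f_max)]

# Two hypothetical elbow flexors sharing a 30 N-m flexion moment
forces = min_effort_forces(30.0, moment_arms=[0.04, 0.02],
                           f_max=[1000.0, 500.0])
```

In the proposed methodology, person-specific MRI data would reweight this criterion rather than treating all muscles uniformly.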
Disease or injury affecting mobility commonly requires afflicted individuals to excessively recruit the upper body when ambulating with walking aids, executing seated transfers, or propelling a wheelchair. The repetitive and unnatural loading of the arms creates abnormal joint stresses that often precipitate shoulder pathologies. The spectrum of shoulder injury includes shoulder impingement syndrome, rotator cuff tears, glenohumeral instability, avascular necrosis, acromioclavicular joint degeneration, biceps tendonitis, and distal clavicle osteolysis. The etiology of shoulder pathology for individuals with movement disabilities is typically associated with increased mechanical loading at the shoulder complex and abnormal internal stresses at the shoulder joints. Higher magnitudes of forces at the shoulder, and poorly directed forces with inefficient motion, can be detrimental to surrounding muscles and bones. It has been suggested that proper technique may reduce progression of shoulder pathology despite high usage. Wheelchair athletes have fewer hospitalizations and no greater incidence of shoulder pathology, and focal strengthening of target muscles and optimal maneuver techniques can be vital in preventing shoulder injury. While increased mechanical stressing of muscles and bones of the shoulder is well correlated to pathology, methodologies to identify user-specific movement training patterns that mitigate injury progression are lacking. Methodologies that utilize person-specific physiological responses with computational methods to identify movements that are optimal in function while minimizing potential injury would be of high clinical value. This would provide platforms to identify optimal movements and develop customized training protocols for those movements. Integrating medical imaging, multi-scale musculoskeletal modeling, and advanced apparatuses could provide such a platform.
As an example, we describe such a methodology for wheelchair propulsion to identify optimal movements for minimizing functional stresses at the shoulder. The developmental flow of the proposed approach is shown in Figure 4. The proposed approach exploits unique advantages of imaging, modeling, and a training apparatus. The first step would be to create a custom testing apparatus with specialized instrumentation, computer-controlled actuation, and visualization facilities to simulate advanced propulsion functions. Feedback control of wheel resistance could serve to systematically simulate ridges, harsh surfaces, and inclines. This advanced biomechanical testing apparatus would facilitate well-controlled investigation of wheelchair propulsion, yielding consistently repetitive and functionally specific data for an individual.
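A minimal sketch of the wheel-resistance feedback control is given below, assuming a proportional-integral law driving a first-order brake model toward a commanded resistance profile; the gains and plant time constant are illustrative assumptions:

```python
def make_pi_controller(kp, ki, dt):
    """PI controller for a wheel-resistance brake: drives the measured
    resistive torque toward a commanded profile (e.g., a simulated
    incline or harsh surface)."""
    state = {"integral": 0.0}

    def step(target, measured):
        err = target - measured
        state["integral"] += err * dt          # accumulate integral of error
        return kp * err + ki * state["integral"]

    return step

# Simulate a first-order brake (time constant 0.1 s) tracking a 10 N-m
# resistance command over 20 s of 10 ms control steps.
ctrl = make_pi_controller(kp=2.0, ki=5.0, dt=0.01)
tau = 0.0
for _ in range(2000):
    u = ctrl(10.0, tau)
    tau += 0.01 * (u - tau) / 0.1
```

Commanding a time-varying target instead of a constant would reproduce ridge or incline profiles trial after trial.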
The second step would be to identify injury stresses at the shoulder by integrating high-resolution imaging and multi-scale musculoskeletal modeling. Advanced medical imaging is typically used for physiological diagnosis of disease progression or structural characterization. Magnetic resonance imaging (MRI) can quantify activity-dependent muscle usage from contrast detection of edema patterns following exercise [73, 74]. After a session of wheelchair propulsion, the motion and MRI data for that person could be input into a musculoskeletal model to compute shoulder stresses. The model would be multi-scale, with global rigid-body dynamics used to compute muscle forces and local finite element models (FEM) to quantify the local bone and muscle stresses that precipitate injury. The MRI data could be used not only to create geometries for the FEM models but also to bias the objective function being minimized for optimality for the specific person. The optimization can have a multi-objective minimization function of the general form J = w1·(muscle effort) + w2·(shoulder stress), with the weights reflecting person-specific injury risk.
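The weighted-sum form of such a multi-objective function can be sketched as follows; the candidate propulsion patterns, their effort and stress values, and the weights are entirely hypothetical:

```python
def multi_objective_cost(effort, stress, w_effort=1.0, w_stress=4.0):
    """Weighted sum of total muscle effort and peak shoulder stress;
    person-specific MRI edema data would bias these weights toward an
    individual's at-risk tissues."""
    return w_effort * effort + w_stress * stress

# Hypothetical candidate propulsion patterns: (total effort, peak stress)
candidates = {"pumping": (12.0, 3.5),
              "semicircular": (14.0, 2.0),
              "arcing": (11.0, 4.5)}
best = min(candidates, key=lambda k: multi_objective_cost(*candidates[k]))
```

In the full methodology, the candidates would be generated by the musculoskeletal optimization rather than enumerated by hand.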
Finally, a rehabilitation paradigm could be generated from this methodology to provide customized training to individuals for performing the optimal movements to minimize injury. The optimization routine should identify propulsion patterns that efficiently distribute functional requirements across the arms and torso while reducing stresses at the shoulders based on person-specific data. A proposed rehabilitation protocol may utilize visual feedback targets to train its users to modify their cyclic motor actions toward the biomechanical patterns that minimize injury. Such training results could provide new perspectives into sensorimotor adaptations that occur when balancing natural motor tendencies versus performance targets to minimize injury. Such an approach to identify sensorimotor principles of model-based learning for injury prevention is novel and readily extensible to other exercise science, sports performance, and motor control studies.
In this chapter, we outlined the importance of user integration for the next major advancement in the feel and performance of movement rehabilitation devices and protocols. We presented three case examples of user integration: (1) optimizing device operation according to user actions, (2) designing a prosthesis controller according to cognitive agency, and (3) developing musculoskeletal modeling platforms based on person-specific physiology. There is great potential to further develop such ideas for new applications and in combination for improved rehabilitation. In any case, user-centered approaches for movement rehabilitation offer the greatest promise for user acceptance of such devices and programs, beyond further technological breakthroughs in sensing, actuation, and biological interfacing.