Optimizing User Integration for Individualized Rehabilitation

User integration with assistive devices or rehabilitation protocols to improve movement function is a key principle developers must consider to truly optimize performance gains. Better integration may entail customizing the operation of devices and training programs according to several user characteristics during execution of functional tasks. These characteristics may include physical dimensions, residual capabilities, restored sensory feedback, cognitive perception, and stereotypical actions.


Introduction: User 'feel' for rehabilitation devices
#FeelBetterDoBetter was a recent campaign slogan by apparel giant Nike, suggesting how sportswear can provide a 'feel' that leads to better athletic performance. Does such a concept apply to prosthetic devices and improved rehabilitation? Is feel a matter of user perception, product customization, or literally the sensation of touch? Great technological strides have been made to develop assistive devices and paradigms that restore motor capabilities of individuals with physical impairment. Augmenting function with prosthetic limbs after amputation [1] and neuroprostheses following spinal cord injury [2] are clear examples where the biological-technological interface has notably advanced. Smart prosthetics are continually being equipped with greater computational processing and actuation methods to accurately control movements involving several degrees of freedom [3]. Electromotor prosthetic limbs can now sufficiently power desired motor actions while maintaining compact form for increasingly biomimetic implementation [4]. Neuroprostheses for motor restoration now include greater access and finer control of paralyzed muscles activated by advanced stimulation electrodes [5,6]. While further advancements in augmenting the capacity of movement for those with motor dysfunction are still sought, it is clear there has been notable progress in what assistive device users can 'do'.
How then is the full potential of 'do' realized with movement rehabilitation? The Nike slogan would imply it is a matter of better 'feel'. What defines how a user feels when using a rehabilitation device? A literal interpretation may be engineering novel sensations of touch or body movement delivered to the user through the device interface itself. Such an example is a prosthetic hand following amputation that connects mechanical sensors to transduce physical sensations into neural activations and allow users to perceive touch or movement [7,8]. As with intact physiological systems interfacing with external objects, such sensory feedback is meant to facilitate the identification of object shape, size, weight, and texture [9]. This information should allow better manipulation of objects with appropriate motor planning, online adaptations, and final goal achievement [10]. Another interpretation of feel is how a user perceives their ability to use the device. This may initially be reflected in usability surveys [11], which could address ease-of-use or sufficient capability to perform activities of daily living. For the former, the perception of ease-of-use can be highly subjective. For the latter, the user's perception of value could be in terms of what new tasks they can perform that they otherwise could not, or at what new level they can perform certain tasks. Is perception of ability simply a matter of effort, or in the case of devices that restore functions in a biomimetic way, is it truly a matter of user integration?
Integration can also be described on multiple levels. In the case of a device, the user may see the device truly as an extension of themselves. Certainly, the ultimate objective for integration would be to provide means that maximize biomimicry, including allowing the user to feel a sense of embodiment with the device [12][13][14]. Not only should the device be anthropomorphic, but it should also provide sensory feedback to the user, including touch and kinesthesia [15]. Finally, the device should reflect user intent. In cognitive neuroscience, this would entail having a sense of 'agency', or the belief that one is the true author of one's movements [16]. Understanding how better to integrate the user with a device may lead to better performance of a specific movement function. Therefore, it may be critical to identify the design specifications of a rehabilitation device that improve user integration to better execute the desired movement. With movement execution, feedback describing system kinematic states [17,18] is fundamental at a device level. But allowing users to 'feel' more integrated with operation may be the key to better performance of user-driven devices.
Areas of advancement to improve device 'feel' include: (1) using novel modes of feedback, (2) computational intelligence for processing control and identification, (3) targeted user-based command strategies to initiate and modulate control of actions, and (4) identifying optimal movements to be executed. For novel modes of feedback, not only could new sensory modalities be biophysically restored, but device controllers and rehabilitation protocols may better leverage how users employ intact sensory feedback capabilities. Users may rely on intact visual, audio, and haptic feedback to negotiate movements and modulate their control of a device or actions during a rehabilitation training protocol [19]. New control structures are taking advantage of increasingly powerful computational capabilities and soft computing approaches [20]. These approaches may require less accurate descriptions of system identification and can readily update parameters to adapt online in the presence of new performance data. Driving operation of assistive devices according to user commands for better performance has typically involved selection of desirable user cues, tuning operational sensitivity to those cues, and facilitating heuristic learning [21]. Simple cues may allow users to invoke switch-type control to initiate operation with a mechanical trigger like a button push [22]. Extension to biomimetic operation would rely on more graded control and bioelectric signaling from the user, such as electromyographic (EMG) or electroencephalographic (EEG) recordings [23,24]. Subsequent refinement and tuning between developers and users for customization can be largely expected to generate satisfactory empirical results. Thus, customizing devices and protocols to each user is an important design consideration to produce the best movement results for the user. Certainly, considering physical dimensions to better fit a device to someone is required for basic assimilation.
However, the residual capabilities of the user may not be as well considered except to define a functional baseline from which the user is expected to improve with practice. This approach leaves the onus largely on the user to learn how to better operate the device or diligently engage in a protocol.
In this chapter, we will suggest rehabilitation approaches, with examples, that consider 'feel' in the development of devices and protocols that better integrate users for improved performance. In Section 2, we outline the major components in a user-driven rehabilitation device or protocol and their bases of interaction. In Sections 3-5, we discuss alternative methods and considerations to better incorporate the user into assistive devices or rehabilitation protocols. We first postulate an example of controlling a gait-restoration exoskeleton that could optimally relate volitional actions of the user with computation of device control parameters that optimize performance. We then discuss methods that consider how user perception of agency over a device may affect performance. The discussion context will be a proposed experiment to optimize an individual's ability to perform grasping actions with a virtual hand prosthesis based on cognitive agency. Finally, we suggest a rehabilitation paradigm that minimizes injury by considering individualized physiological responses to movement activity.

System view of user-centered rehabilitation
There are three major components to consider for a system that restores or rehabilitates movement function (Figure 1, top). First, there is the biomechanical plant which is composed of the physical body of the person executing a movement task objective within an environment, including objects to be manipulated. In this formulation, we interpret the fundamental movement goal as a fixed property of the system plant. The physical body includes the size and shape characteristics of the limbs to be moved, perhaps the entire body as with ambulation. Second, a device may assist in executing the movement or provide constraints and cues to the user on how better to perform the function. Third, the user volitionally commands themselves and/or the device to drive movement operation of the entire system. The extent of the drive depends on residual capabilities of the user, contingent on the disorder, the capabilities of the device, and the user-device interface. For example, an individual with stroke performing walking may largely be servicing the movement with their own residual capabilities. They may utilize an assistive device such as a spring-loaded ankle-foot orthosis (AFO) for propulsive boost during gait push-off. The user may further govern execution of gait with their vision and the haptic feedback they receive at their feet or even an additional assistive device such as a walking cane. All three major system components (plant, device, user) must act in concert to successfully maintain or execute movement. The synergistic interactions between each component represent areas of research investigation to be developed to better restore movement performance. In Figure 1 (bottom), areas 1, 2, 3 are specifications for each fundamental component previously described. Area 4 signifies how the device utilizes its onboard sensing capabilities to receive feedback about actions of the plant, and in turn utilizes its actuation capabilities to manipulate the plant. 
Depending on the level of assistance the device provides, it likely requires proportionate information about the states of the plant. For movement, these feedback states are likely kinematics. The device may utilize potentiometers to measure joint angles or inertial sensing units that integrate accelerometers, gyroscopes, and magnetometers to derive real-time estimates of global orientation [25]. Load sensors may be utilized to discern pressure or force information between plant and environment, such as ground-foot contact [26] or hand grasp loads onto objects [27]. Device actuation may be more passive, as with AFO assistance from spring-loads applied proportional to displacements per Hooke's law (F = −kx) and mediated through well-executed clutching [28]. Device actuation may be more active, such as functional neuromuscular stimulation (FNS) to activate paralyzed musculature and restore standing balance capabilities [29], or electromotor actuation to restore walking [30] following spinal cord injury (SCI).
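As a minimal illustration of the passive actuation case, the spring-load AFO torque can be sketched as a clutch-gated Hooke's-law element (an illustrative Python sketch; the stiffness value and the clutch interface are assumptions, not parameters of any specific device):

```python
def afo_torque(theta, theta0, k=1.5, clutch_engaged=True):
    """Passive AFO assistance torque (N-m) per Hooke's law,
    tau = -k * (theta - theta0).

    theta: current ankle angle (rad); theta0: neutral angle set when the
    clutch engages (rad); k: spring stiffness (N-m/rad, illustrative value).
    With the clutch disengaged (e.g., during swing), no torque is applied.
    """
    if not clutch_engaged:
        return 0.0
    return -k * (theta - theta0)
```

For example, 0.2 rad of dorsiflexion past neutral with k = 2.0 N-m/rad yields a restoring torque of −0.4 N-m, which the clutch can release as a propulsive boost at push-off.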
Area 5 similarly denotes how the user may rely on his or her own residual sensory capabilities (such as vision or touch) to interpret states of the plant and can enact residual motor capabilities to further manipulate the plant independently or with the device. For the purposes of rehabilitation, user-volitional capabilities alone are not sufficient to achieve the desired levels of function. Thus, a device or training program to enhance overall function is necessary to desirably improve movement performance. The relative user-versus-device contribution to function can vary considerably depending on the application. For gait following stroke, the user typically drives most of the action, which is only enhanced by the device. Alternatively, it may be the device that makes movement possible at all, such as with FNS or powered exoskeletons to restore standing and walking following complete SCI.
Area 6 is the fundamental link between user and device. Often, the device operates in a preprogrammed manner according to distinct phases of the movement (e.g., gait cycle), or the user will command operation of the device (e.g., button push, tilting action, signal threshold exceeded). In the latter case, the onus is on the user to monitor the plant and select when the device should intervene. Ultimately, the user responds not only to his or her own observations of the plant but must react to the actions of the device as well. The device may also provide information to the user about the plant that the user does not otherwise have. In this case, the device may restore sensory feedback (touch or movement) to the user about body-environment-task interactions [31,32]. The device may do so through neural activation to restore basic sensory capabilities such as vision, hearing, touch, or kinesthesia [33]. Or the device may provide supplementary sensory cues to alert the user to events or cautionary indicators related to the movement. For example, vibrational cues have been used for training individuals on how better to use a device [34], and audio cues have been provided for assisting individuals in balance function [35].
Moving forward, it is research initiatives in Area 7, which design simultaneous synergies between plant, user, and device, that may truly optimize rehabilitation for better functional performance. Considering how best to integrate the user may be most key to this end. Mechanical descriptions of the plant and target movement are necessarily developed for designing the functional specifications of an assistive device. But integrating the user is comparatively more challenging without a priori indication of their responses (mechanical, physiological) to using a device. It is often expected that the user will adapt and train with the device, and device parameters can be tuned by developers according to empirical patterns, ad hoc adjustments, and anecdotal feedback from the user. These approaches are understandable and eminently practical, but are they ultimately optimal? Rehabilitation approaches that better consider user integration in the early stages of development may better produce the synergy between plant, user, and device that maximizes performance. In the next three sections, we postulate examples of rehabilitation approaches to maximize movement performance by considering: device operation according to user actions, optimizing user sense of agency over a device, and identifying optimal movements from user-specific physiological responses.

Enacting device control from user actions
A biomechanical rehabilitation system with great potential for better incorporating user actions into enacted device operation is gait-restoration by powered exoskeletons. To overcome clinical barriers to translation, powered lower-limb orthoses for gait following SCI should be light-weight, cosmetic, and well-integrated with standard wheelchairs. With these design constraints, however, joint degrees-of-freedom and torque magnitudes are typically limited. Users must then utilize walking aids such as crutches or canes for balance and support. While exoskeleton-assisted gait is achievable, the observed motions are staggered and expend energy inefficiently for the user and orthosis. Feedback control schemes may be better designed to synergistically combine functional capabilities and actions of the user and worn exoskeleton to produce gait patterns that are more natural and energy cost effective.
An optimal control structure would consider full body dynamics in generating exoskeleton motion patterns that resemble able-bodied gait while also operating at minimal energy cost [36]. The motion patterns may be limited according to constraints on actuation by both the user and the device. These actuation constraints may be in terms of how many degrees of freedom are actively under control and the maximum torques that can be generated. It is, in part, due to these constraints that users must support and balance the actions of the device with upper-body walking aids such as crutches. The user typically must reactively respond to the actions of the device after the user initially triggers the device on a step-by-step basis. Upper-body actions have previously been used for command triggering the next step of exoskeleton operation [22]. Ideally, controllers would modulate operation of the exoskeleton online according to proactive user actions on the support aids to generate the most efficient (optimal) walking patterns. An optimal feedback controller could continuously vary feedback gains over the walking cycle to synergistically combine actions of the user upper-body and lower-limb orthosis. This could provide better performance and a more natural command interface. This user-centered objective in controller development would be to utilize user-commanded crutch motions in real-time to efficiently regulate exoskeleton actuation. In Figure 2, we show a block-diagram schematic of how such an exoskeleton control system may be considered.
The linear quadratic regulator (LQR) is a classical optimal control approach to regulate the dynamics of a linear system. The LQR can also be utilized on non-linear dynamical systems such as bipedal human gait by approximating them as linear time-varying systems. Extending such an approach could provide a foundation for optimal control of powered gait devices such as exoskeletons. Exoskeletons to restore or augment gait function have previously employed time-varying proportional-derivative (PD) control [30]. However, gains and setpoints for closed-loop joint control were empirically fitted to human angle-moment data and consequently were neither unique nor optimal toward specific performance objectives. This is unlike an optimal control approach, where parameters are uniquely identified by minimizing a specified cost function, such as controller effort while better tracking desired motion trajectories.
Bipedal walking can largely be simulated with a sagittal plane model. A minimal representation may consist of seven rigid bodies, connected by six bilateral joints of the hips, knees, and ankles [37]. For full-state description, there would be nine kinematic generalized coordinates: global anterior-posterior and superior-inferior position coordinates of the hip joint, torso orientation, right/left hip angles, right/left knee angles, and right/left ankle angles. In this model, contact between feet and ground can be modeled with spring-damper elements uniformly distributed along each foot sole [37], and passive joint moments can be included to provide realistic joint motions and limits. Reasonable passive moments include basic stiffness (1 N-m/rad) and damping (1 N-m-s/rad) through normal ranges of motion, and then high stiffness (1000 N-m/rad) near range-of-motion limits. Codes that generate equations of motion (e.g., Autolev by Online Dynamics Inc., Sunnyvale, CA) can be readily integrated with MEX functionality [38] to compute system dynamics in the form:

ẋ = f(x, u) (1)

where x is the full set of states, consisting of the nine generalized position coordinates and the corresponding nine velocities, and u is the set of six joint torques. The model dynamics equations should be formulated as twice differentiable to facilitate use of gradient-based optimization, linearization, and implicit solvers [39]. For a linear system in general state-space form in discrete time, x_(i+1) = A_i·x_i + B_i·u_i, an LQR formulation would provide an optimal feedback gain (k) solution, given a control law of u_i = −k_i·x_i, that minimizes the quadratic cost function:

J = Σ_i (x_i^T·Q·x_i + u_i^T·R·u_i) (2)

as the optimal control problem. The diagonal weighting matrices Q and R quantify how to relatively minimize deviations from desired states and controls, respectively. The matrices serve as control design parameters to specify the relative desired effects for better tracking (smaller errors in states) versus less effort (lower joint torques).
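The gain solution for this finite-horizon problem can be sketched with a backward Riccati recursion (a minimal numpy sketch; a toy double integrator stands in for the 18-state gait model, whose time-varying A_i and B_i would come from linearizing the dynamics about the optimal trajectory):

```python
import numpy as np

def lqr_gains(A_seq, B_seq, Q, R):
    """Finite-horizon discrete LQR via backward Riccati recursion for a
    time-varying linear system x_(i+1) = A_i x_i + B_i u_i, minimizing
    J = sum_i (x_i' Q x_i + u_i' R u_i). Returns the time-varying gain
    sequence k_i for the control law u_i = -k_i x_i."""
    P = Q.copy()                                 # terminal cost-to-go weight
    gains = [None] * len(A_seq)
    for i in reversed(range(len(A_seq))):
        A, B = A_seq[i], B_seq[i]
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal k_i
        P = Q + A.T @ P @ (A - B @ K)            # Riccati cost-to-go update
        gains[i] = K
    return gains

# Toy stand-in for the linearized gait model: a discrete double integrator
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
gains = lqr_gains([A] * 200, [B] * 200, np.eye(2), np.eye(1))
```

For a periodic task such as gait, the recursion would instead be solved as a periodic Riccati problem over one stride, as discussed below.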
Each diagonal term corresponds to a respective state (i) or control (j) and normalizes its contribution to the objective function by the variance of the respective optimal trajectory. For this normalization, the matrix diagonal terms are set using the standard deviation (σ) values of the optimal trajectory for each state or control. These σ values allow quadratic normalization of each state and control term for the units and magnitudes of variation during gait as follows:

Q_ii = 1/σ_xi², where i = 1 to 18 states (3)

R_jj = 1/σ_uj², where j = 1 to 6 joint-torque controls (4)

To uniformly vary weighting toward tracking over effort, only the Q_ii terms need be subsequently multiplied by a single Q/R ratio, ϕ. This ratio serves as the single controller design parameter. It is assumed that with ϕ = 1 there is equal consideration of tracking and effort in controller performance. The optimal feedback gain matrix, k, can then be solved across time using a discrete, periodic Riccati equation solver [40].
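A minimal sketch of this variance-based normalization, with the single ratio ϕ scaling tracking against effort (the σ values used below are illustrative placeholders, not measured gait data):

```python
import numpy as np

def weighting_matrices(sigma_x, sigma_u, phi=1.0):
    """Diagonal LQR weights normalized by optimal-trajectory variability:
    Q_ii = 1/sigma_x[i]**2 and R_jj = 1/sigma_u[j]**2, with the Q terms
    then scaled by the single Q/R design ratio phi (phi = 1 weights
    tracking and effort equally)."""
    Q = phi * np.diag(1.0 / np.asarray(sigma_x) ** 2)
    R = np.diag(1.0 / np.asarray(sigma_u) ** 2)
    return Q, R

# 18 gait states and 6 joint-torque controls, with illustrative sigmas
Q, R = weighting_matrices([0.1] * 18, [10.0] * 6, phi=1.0)
```

Raising ϕ above 1 then uniformly biases the controller toward tighter trajectory tracking at the expense of larger joint torques.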
For "exoskeleton walking," which typically requires volitional upper-body support, feedback states would need to include kinematics of the torso and upper extremities as well. The upper extremities could include extensions of the crutches that make ground contact, converting the model to a type of quadruped. Alternatively, if arm support actions were more suspensory, as with a walker, user 'control' could be equivalently modeled as loads at the shoulders [41,42]. However, the upper extremities with crutch walking would allow controller developers to utilize volitional motion of the upper extremities of the user to signal gait intentions. From these gross kinematic actions, desirable operation of the powered exoskeleton may be better identified. An LQR controller could be defined for the entire plant (including upper and lower extremities). A family of LQR controllers for different sets of optimal whole-body kinematic trajectories could be generated offline. Each set could represent a different gait speed or various couplings between upper and lower body motions. In any case, the sets of trajectories should comprehensively cover the spectrum of potential motion behaviors for a given user. But to parse out upper extremity actions to trigger lower extremity control, two distinct control loops would be necessary. First, if the user (human operator model) has the freedom to modulate gait from their upper extremities, then one loop needs to map the expected or desired lower extremity control actions. These actions would necessarily be coupled to the observed/measured upper extremity feedback states. Some mapping structure (e.g., artificial neural network) is necessary to identify from the upper extremity kinematics the corresponding desired kinematic states (x_o), gains (k_o), and controls (u_o) for the exoskeleton. Beyond controller actions, the remaining actuating elements on the plant are the user (i.e., human operator) and external disturbances.
Such a block diagram system for user-driven exoskeleton walking could be simulated if stereotypical actions for the human operator and environmental (external) disturbances could be reasonably modeled.
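The mapping structure above could be prototyped as a small feedforward network. The sketch below uses untrained random weights purely to show the intended interface; the input/output dimensions and the split into desired states and feedforward torques are illustrative assumptions, and a real model would be fit offline to the family of optimal whole-body trajectories:

```python
import numpy as np

# Hypothetical mapping: observed upper-extremity (crutch) kinematics ->
# desired exoskeleton states (x_o) and feedforward joint torques (u_o).
rng = np.random.default_rng(0)
n_in, n_hid = 6, 16            # e.g., 6 measured crutch/arm coordinates
n_states, n_ctrl = 12, 6       # desired lower-limb states and joint torques

W1 = rng.normal(size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(size=(n_hid, n_states + n_ctrl)); b2 = np.zeros(n_states + n_ctrl)

def map_upper_to_lower(upper_kin):
    """One-hidden-layer network: upper-extremity kinematics -> (x_o, u_o)."""
    h = np.tanh(upper_kin @ W1 + b1)
    out = h @ W2 + b2
    return out[:n_states], out[n_states:]

x_o, u_o = map_upper_to_lower(rng.normal(size=n_in))
```

The selected (x_o, u_o) pair would then index into the offline-generated family of LQR gain schedules for the lower-limb loop.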

Optimizing user agency over device
User-driven devices are often judged by measured ability to restore or add functional capabilities following neuromuscular disorder or injury. Limb loss has a profound impact on an individual's capability to perform activities of daily living. Amputations of the upper-limb can be especially debilitating to one's ability to interact with their environment and feel physically engaged with the world. Prostheses for limb loss are first-level evaluated by simple physical fit and comfort [43]. But a further criterion for artificial limbs is that they feel natural and operate as real limbs, based on embodiment [44] or brain-level control [45]. Applied neurosciences are also creating prostheses to provide a better sense of interaction with the external world and to seamlessly take actions according to user intent. Neuro-applications have been developed to elicit motor and sensory signals at a peripheral neurophysiological level for novel electromyograms and restoration of touch [1,8]. But more complete user integration with the device may reside at a cognitive level. The user should have a better 'feel' for the device, not just by literal touch restoration, but through a sense that they are controlling the device intuitively. These devices should be operated with a sense of agency, or the perception that the user is the true author of the device actions [14,16]. In this section, we propose a framework to identify processes that optimize sensory-feedback hand prostheses according to agency. Such an approach may provide insight into design formulations that better integrate users with their devices, facilitate clinical acceptance, and ultimately produce skillful performance control that is more intuitive to the user.
For a user to inherently recognize the prosthesis as an extension of self, device operation should facilitate a sensorimotor basis of self-awareness. A key aspect of self-awareness is recognition of being the author of one's own voluntary actions and the related consequences.
This phenomenon is known as a sense of agency. It has been shown that cognitive processes underlying agency may be abstracted even in the presence of intermediaries such as prostheses [14]. Thus, a sense of agency may not require direct physiological embodiment, or body ownership, and can exist across novel instrumental associations. Both agency and body ownership can be present with a surrogate prosthesis, as shown with the rubber hand illusion [12,44]. But the sense of agency may be reduced by altered embodiment. However, distortions in the causal chain that produce end-effector actions incongruent with user intention reduce agency more notably [14]. There is evidence that with increased agency there is greater ability to perform functional tasks [46].
There are innovative approaches to restore sensory and motor pathways for better control of functional hand prostheses after upper-limb amputation. One such approach is targeted muscle reinnervation (TMR), where nerves that once controlled the lost hand are surgically reassigned to target sites upon denervated pectoral muscles [1]. Resultant EMG recordings from these sites represent motor commands to the missing hand that can be reliably utilized to drive a motorized prosthesis. And tactile manipulation of skin at these sites stimulates afferent pathways that elicit sensory feedback of the missing hand to the amputee. The biological neural-machine interface created by TMR is a unique platform for investigating control of multi-sensory prostheses [7,31,32] and potential cognitive frameworks of agency. Specifically, agency considerations may enhance the EMG motor command interface and augment how sensory feedback is perceived by the user. One could utilize a computer-generated virtual hand [47] to represent multi-joint prosthesis dynamics to be driven by descending motor commands recorded at TMR muscle sites. Tactile pressure and vibratory stimulation of reinnervated skin areas can activate feedback pathways that elicit illusions of touch [48] and kinesthesia [33] to be perceived as superimposed upon the computer hand. Procedures for perceptual mapping to restore robotic touch [12] can be used to identify locations and parameters for pressure and vibratory stimulation. The virtual hand could perform functional grasp tasks such as matching dynamic grip-load profiles [49,50] with concurrent visual feedback of the hand. The hand could be driven by EMG command with concurrent sensory illusions provided according to hand motion and contact with virtual objects. The end of the grip loading profile would represent task completion, after which tones would be sounded at variable intervals to detect intentional binding as a measure of agency.
Intentional binding is the perceptual compression of the time interval between a movement event and its sensory consequence when the action is voluntary [51], i.e., with greater agency. Ultimately, patterns of EMG and illusions of touch and movement can be preferentially selected based on agency. Efficacy of such an approach may be reflected in improved performance at various levels of prosthesis development. These levels include EMG classification accuracy of user motor intent, user ability to interpret sensory feedback, and user functional performance.
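As a sketch of how the agency measure could be quantified, the intentional binding effect can be computed as the compression of judged action-tone intervals in voluntary trials relative to a baseline condition (a minimal illustration; the units and trial structure are assumptions, not the protocol of a specific study):

```python
def binding_effect(baseline_estimates_ms, active_estimates_ms):
    """Intentional binding score: mean judged action-tone interval in a
    baseline (involuntary/observation) condition minus the mean judged
    interval in voluntary-action trials. A larger positive value indicates
    stronger perceptual compression, i.e., greater sense of agency."""
    baseline = sum(baseline_estimates_ms) / len(baseline_estimates_ms)
    active = sum(active_estimates_ms) / len(active_estimates_ms)
    return baseline - active
```

Per-trial scores of this kind could then serve as the selection criterion for which EMG trials and illusion settings are retained.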
Myoelectric control has been a major focus area for improved hand prosthesis function, creating user command interfaces that are intuitive and effective [21,52,53]. Notable work has demonstrated the advantages of EMG pattern recognition over amplitude-based command to provide intuitive prosthesis control [54]. While these methods are largely based on prosthesis operation, the motor command interface could be designed to co-maximize function and sense of agency. Typically, EMG data from target muscle sites are collected to identify specific grasp patterns based on feature extraction [55][56][57]. Several of these methods produce similar identification results provided an appropriate selection of features [58]. Subsequently, the real-time effectiveness of the pattern classifier is evaluated in testing environments that observe functional metrics such as task times and success rates [59] under proportional control [60]. We postulate that collection of EMG-driven prosthesis grasping data with an intentional binding paradigm [16] may facilitate improved classification and performance given user-preferred grasping patterns. We outline this collection of data and the implementation of an agency-based hand prosthesis in Figure 3.
It may be possible that preferential selection of electromyography data for pattern classification training, from trials where the subject indicates greater agency, will produce higher classification success rates for various functional grips (cylinder, flat, and precision) [61]. For amputee subjects, training data collection can be done as part of a grip-matching paradigm between the intact hand and the missing (imagined) hand [62]. Using a standard classifier plus standard proportional control [63], users could undergo functional EMG-driven trials to execute open-to-close grips of the virtual hand according to visual cues for target speed and position.
For each trial, the sensory illusion and virtual hand being visualized could be systematically altered. With each trial, the intentional binding metric with interval estimation would be evaluated to assess sense of agency. Optimal grip configurations for posture and movements may be identified that maximize agency and subsequently grasp performance. It is then reasonably hypothesized that subjects will more reliably generate EMG patterns for conditions in which they have greater agency. Ultimately, the EMG data used to train and test the pattern classifier, the grip kinesthetic illusions, and grip configuration illusions while having tactile feedback could all be preferentially selected according to user agency.
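The classification step could be sketched as time-domain feature extraction plus a simple classifier fitted only to trials retained under the agency criterion. The nearest-centroid rule below stands in for the standard pattern classifiers cited above; the features are common EMG descriptors, and the grip labels are illustrative:

```python
import numpy as np

def emg_features(window):
    """Common time-domain EMG features for one analysis window:
    mean absolute value, waveform length, and zero-crossing count."""
    mav = np.mean(np.abs(window))
    wl = np.sum(np.abs(np.diff(window)))
    zc = np.sum(np.diff(np.sign(window)) != 0)
    return np.array([mav, wl, zc], dtype=float)

class NearestCentroid:
    """Minimal grip classifier: one feature centroid per grip class."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {c: np.mean([x for x, yi in zip(X, y) if yi == c],
                                      axis=0) for c in self.classes_}
        return self

    def predict(self, x):
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(x - self.centroids_[c]))
```

In the proposed paradigm, the training set (X, y) would be restricted to trials whose intentional binding score exceeded a subject-specific criterion before fitting.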

Identifying optimal movements from individual physiology
Observable biomechanical characteristics (such as kinematics, external forces, masses, lengths, and inertias) can be well assessed for function, but the underlying physiological patterns may not be. Muscle force patterns are often estimated from optimization with the criterion of minimal muscle effort [64] following an inverse analysis [65] to determine joint moments. We hypothesize there is value in customizing the minimization criterion based on observable physiological responses for an individual. In turn, a model-based approach could be developed for rehabilitation paradigms to identify movements that are optimal for an individual based on other criteria, such as minimizing injury stresses. For the remainder of this section, we propose an example methodology to modify a musculoskeletal modeling environment based on person-specific physiological responses to identify optimal movements that potentially minimize shoulder injury for those using wheelchairs.
Disease or injury affecting mobility commonly requires afflicted individuals to excessively recruit the upper body when ambulating with walking aids, executing seated transfers, or propelling a wheelchair. The repetitive and unnatural loading of the arms creates abnormal joint stresses that often precipitate shoulder pathologies. The spectrum of shoulder injury includes shoulder impingement syndrome, rotator cuff tears, glenohumeral instability, avascular necrosis, acromioclavicular joint degeneration, biceps tendonitis, and distal clavicle osteolysis [66]. The etiology of shoulder pathology for individuals with movement disabilities is typically associated with increased mechanical loading at the shoulder complex [67] and abnormal internal stresses at the shoulder joints [68]. Higher magnitudes of forces at the shoulder and poorly directed forces with inefficient motion can be detrimental to surrounding muscles and bones [69]. It has been suggested that proper technique may reduce progression of shoulder pathology despite high usage: wheelchair athletes have fewer hospitalizations and no greater incidence of shoulder pathology [70], and focal strengthening of target muscles [71] and optimal maneuver techniques [72] can be vital in preventing shoulder injury. While increased mechanical stressing of the muscles and bones of the shoulder is well correlated to pathology, methodologies to identify user-specific movement training patterns that mitigate injury progression are lacking. Methodologies that combine person-specific physiological responses with computational methods to identify movements that are optimal in function while minimizing potential injury would be of high clinical value: they would provide platforms to identify optimal movements and to develop customized training protocols for those movements. Integrating medical imaging, multi-scale musculoskeletal modeling, and advanced apparatuses could provide such a platform.
As an example, we describe such a methodology for wheelchair propulsion to identify optimal movements that minimize functional stresses at the shoulder. The developmental flow of the proposed approach is shown in Figure 4. The proposed approach exploits the unique advantages of imaging, modeling, and a training apparatus. The first step would be to create a custom testing apparatus with specialized instrumentation, computer-controlled actuation, and visualization facilities to simulate advanced propulsion functions. Feedback control of wheel resistance could serve to systematically simulate ridges, harsh surfaces, and inclines. Such an advanced biomechanical testing apparatus would facilitate well-controlled investigation of wheelchair propulsion, yielding consistently repetitive and functionally specific data for an individual.
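The resistance-control idea might be sketched as a simple PI feedback loop driving a first-order actuator model toward a commanded torque profile (here, a step simulating a transition onto an incline). The gains, time constant, and profile values are illustrative assumptions, not parameters of any specific apparatus:

```python
import numpy as np

def simulate_resistance_control(target, kp=2.0, ki=8.0, tau=0.05, dt=0.01):
    """PI feedback loop driving a first-order resistance actuator (time
    constant tau) toward a commanded resistance-torque profile."""
    torque = np.zeros_like(target)
    integ = 0.0   # integral of tracking error
    y = 0.0       # actuator output (applied resistance torque)
    for k in range(1, len(target)):
        err = target[k - 1] - y
        integ += err * dt
        u = kp * err + ki * integ      # PI control command
        y += dt * (u - y) / tau        # first-order actuator dynamics (Euler step)
        torque[k] = y
    return torque

t = np.arange(0, 2, 0.01)
target = 5.0 + 3.0 * (t > 1.0)   # N*m: level surface, then a simulated incline
achieved = simulate_resistance_control(target)
```

The integral term removes steady-state error, so the applied resistance settles onto each commanded level; richer profiles (ridges, rough surfaces) would simply be different `target` arrays.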
The second step would be to identify injury stresses at the shoulder by integrating high-resolution imaging and multi-scale musculoskeletal modeling. Advanced medical imaging is typically used for physiological diagnosis of disease progression or structural characterization. Magnetic resonance imaging (MRI) can quantify activity-dependent muscle usage from contrast-detection of edema patterns following exercise [73,74]. After a session of wheelchair propulsion, the motion and MRI data for that person could be input into a musculoskeletal model to compute shoulder stresses. The model would be multi-scale, with global rigid-body dynamics used to compute muscle forces and local finite element models (FEM) [75] to quantify the local bone and muscle stresses that precipitate injury. The MRI data could be used to create outlines of the FEM models but also to bias an objective function being minimized for optimality for the specific person. The optimization can have a multi-objective minimization function of the form

J = K_1 \sum_i b_i a_i^2 + K_2 \sum_j \sigma_{muscle,j}^2 + K_3 \sum_k \sigma_{bone,k}^2

The K-terms pre-multiply each summation based on experimenter discretion to relatively weight the contribution of each criterion and ensure robust solution convergence. The first summation represents muscle activation levels (a_i) at the shoulder. Minimization of this term desirably reduces muscle activity at the shoulder for locomotor efficiency [64]. The bias coefficients (b_i) weight the distribution of the relative muscle edema to consider the strain results from MRI and to shift task requirements away from the shoulder muscles that the subject relatively overuses. The second and third summations are for explicit minimization of the injury mechanisms of intrinsic muscle and bone stresses, respectively.
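A minimal evaluation of such a K-weighted, edema-biased cost could look like the following. The function name, array values, and unit choices are hypothetical stand-ins for outputs of the musculoskeletal and FEM analyses:

```python
import numpy as np

def propulsion_cost(act, muscle_stress, bone_stress, b, K=(1.0, 1.0, 1.0)):
    """Multi-objective cost described in the text:
       J = K1 * sum_i b_i * a_i**2 + K2 * sum_j s_mj**2 + K3 * sum_k s_bk**2
    act:           shoulder muscle activations a_i (0..1)
    muscle_stress: FEM intrinsic muscle stresses
    bone_stress:   FEM bone stresses
    b:             MRI-edema bias coefficients penalizing overused muscles
    K:             experimenter-chosen weights for the three criteria
    """
    K1, K2, K3 = K
    return (K1 * np.sum(b * act**2)
            + K2 * np.sum(muscle_stress**2)
            + K3 * np.sum(bone_stress**2))

# Illustrative values for one candidate propulsion pattern.
act = np.array([0.4, 0.7, 0.3])
b_edema = np.array([1.0, 2.5, 1.0])   # muscle 2 shows the most post-exercise edema
m_stress = np.array([12.0, 30.0])     # e.g. MPa at FEM muscle sites
b_stress = np.array([8.0, 15.0])      # e.g. MPa at FEM bone sites
J = propulsion_cost(act, m_stress, b_stress, b_edema)
```

An optimizer searching over candidate propulsion kinematics would evaluate this cost for each candidate and retain the pattern with the lowest J for that individual.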
Finally, a rehabilitation paradigm could be generated from this methodology to provide customized training for performing the optimal movements that minimize injury. The optimization routine should identify propulsion patterns that efficiently distribute functional requirements across the arms and torso while reducing stresses at the shoulders based on person-specific data. A proposed rehabilitation protocol may utilize visual feedback targets to train users to modify their cyclic motor actions toward the biomechanical patterns that minimize injury. Such training results could provide new perspectives on the sensorimotor adaptations that occur when balancing natural motor tendencies against performance targets that minimize injury. Such an approach to identifying sensorimotor principles of model-based learning for injury prevention is novel and readily extensible to other exercise science, sports performance, and motor control studies.
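As a first-order stand-in for the proposed training protocol, the sketch below models error-driven adaptation of a cyclic force profile toward the optimal target shown as visual feedback. The profiles and adaptation rate are hypothetical, chosen only to illustrate session-by-session convergence:

```python
import numpy as np

def train_toward_target(initial, target, rate=0.3, n_sessions=10):
    """Simple error-driven adaptation model: each session, the user's cyclic
    pattern moves a fixed fraction of the remaining error toward the
    biomechanically optimal target presented as visual feedback."""
    pattern = np.array(initial, dtype=float)
    history = [pattern.copy()]
    for _ in range(n_sessions):
        pattern += rate * (np.asarray(target) - pattern)
        history.append(pattern.copy())
    return np.array(history)

phase = np.linspace(0, 2 * np.pi, 50)
natural = np.sin(phase)                  # habitual push-rim force profile (illustrative)
optimal = 0.8 * np.sin(phase + 0.3)      # pattern minimizing modeled shoulder stress
hist = train_toward_target(natural, optimal)
```

Fitting the `rate` parameter (or a richer adaptation model) to measured session data is one way the protocol could quantify how quickly a user trades natural motor tendencies for the injury-minimizing targets.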

Conclusions
In this chapter, we outlined the importance of user integration for the next major advancement in the feel and performance of movement rehabilitation devices and protocols. We presented three case examples of user integration: (1) optimizing device operation according to user actions, (2) designing a prosthesis controller according to cognitive agency, and (3) developing musculoskeletal modeling platforms based on person-specific physiology. There is great potential to further develop such ideas for new applications, and in combination, for improved rehabilitation. In any case, user-centered approaches to movement rehabilitation offer the greatest promise for user acceptance of such devices and programs, beyond further technological breakthroughs in sensing, actuation, and biological interfacing.