MODELING OF HEAD AND HAND COORDINATION IN UNCONSTRAINED THREE-DIMENSIONAL MOVEMENTS
Kyung Han Kim
A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Industrial and Operations Engineering) in The University of Michigan 2005
Doctoral Committee: Associate Professor Bernard J. Martin, Chair Associate Professor Susan Holly Curwin Brown, Cognate Professor Don B. Chaffin Assistant Professor Richard Brent Gillespie
© Kyung Han Kim All rights reserved 2005
TABLE OF CONTENTS
DEDICATION
LIST OF FIGURES
LIST OF TABLES
ABSTRACT

CHAPTER
1. INTRODUCTION
   1.1 Applied Problem
   1.2 Theoretical Problem
   1.3 Specific Aims of the Thesis
   1.4 Thesis Organization
2. HEAD MOVEMENT STRATEGY IN UNCONSTRAINED VISUAL TARGET LOCALIZATION
   2.1 Abstract
   2.2 Introduction
   2.3 Methods
   2.4 Results
   2.5 Discussion
3. MEASUREMENT OF THE HEAD MOVEMENT CONTRIBUTION RATIO
   3.1 Abstract
   3.2 Introduction
   3.3 Methods
   3.4 Results
   3.5 Discussion
4. EYE AND HEAD ORIENTATION FOR MINIMUM SENSORIMOTOR ERROR
   4.1 Abstract
   4.2 Introduction
   4.3 Methods
   4.4 Results
   4.5 Discussion
5. CONTRIBUTION OF HEAD MOVEMENTS TO WHOLE BODY BALANCE CONTROL
   5.1 Abstract
   5.2 Introduction
   5.3 Experiments
   5.4 Simulation
   5.5 Discussion
6. MODELING THE NEGOTIATED CONTROL OF THE HAND AND HEAD ON TORSO MOVEMENTS USING DIFFERENTIAL INVERSE KINEMATICS
   6.1 Abstract
   6.2 Introduction
   6.3 Methods
   6.4 Results
   6.5 Discussion
7. MULTI-PHASIC COORDINATION IN VISUALLY-GUIDED REACH MOVEMENTS
   7.1 Abstract
   7.2 Introduction
   7.3 Experiments
   7.4 Modeling
   7.5 Simulation Results
   7.6 Discussion
8. SUMMARY AND CONCLUSION
   8.1 Principal Contributions
   8.2 Future Research Directions
BIBLIOGRAPHY
LIST OF FIGURES
General organization of the dissertation
The configuration of the target arc
Sample categories of head movement velocity profiles
Velocity profiles normalized for magnitude and time scale, and superimposed together
Reconstruction of an initial velocity profile using a spherical linear interpolation method
Amplitudes of the IIHM as a function of target eccentricity
Amplitude of the IIHM versus final head aiming azimuth
Distribution of peak velocity as a function of A) target eccentricity and B) the amplitudes of the IIHM
Number of corrections measured by the occurrence of the peaks in the velocity profile after the initial acceleration phase
A) Horizontal target array; B) Vertical target array; C) Vertical target arrays set at four different azimuths
A) Representation of head orientation angles; B) Definition of the head reference frame
Distribution of head orientation angle as a function of target eccentricity
Models of head orientation as a function of target azimuth and elevation
Simulation of head orientation
The configuration of the target arc and the definition of aiming error
Example of distributions of finger aiming errors as a function of target eccentricity with respect to the head
Surface plot of the model prediction of finger aiming error as a function of head azimuth with respect to the torso and target eccentricity with respect to the head
Schematics of the minimum-error optimization problem for a target at 60°
Correlation between actual and predicted head movement contribution ratios
Distributions of head aiming errors as a function of target eccentricity
Locations of the optical markers (hollow squares) and electromagnetic sensors (filled circles) placed on the subject's body
A) Configuration of the visual targets; B) Definition of head posture
Illustration of the four experimental conditions
Trajectories of head movements in a sagittal plane as a function of time
Link segment model used in the simulation
A) Simulated location of the CoM in the sagittal plane; B) Simulated ankle moment
Multi-link composition of 9-dof manual subsystem (A) and 8-dof visual subsystem (B)
Configuration of targets in a rear view (A) and top view (B)
Torso angle simulations
A) Simulation of visual subsystem movement; B) Simulation of manual subsystem movement; C) Simulation of the negotiated control (manual + visual); D) Actual movement recording
Configuration of the target arc array
A) Placement of motion sensors and estimated position of body landmarks; B) Estimated joint center locations
A) Definition of torso − clavicle − upper arm − forearm − hand links (12 degrees of freedom); B) Definition of neck − head links (6 degrees of freedom)
A) Movement onset; B) Lift-off phase; C) Transport phase; D) Landing phase
A) Definition of the lift-off phase; B) Head and elbow movements during the lift-off phase
Direction and magnitude of fingertip movement in the lift-off phase
Definition of the transport phase
Hand trajectory and field of view during the landing phase
Relative timing of head-hand-torso movements
Distribution of hand precedence index (HPI)
Framework of the movement control for the lift-off phase
Movement generator model for the lift-off phase
Performance of the direction-based feed-forward movement control model for a two-link planar movement
Model of movement evaluation and decision for a phase transition from the transport phase to the landing phase
Model of sequencing for unconstrained reach movements
Simulation of unconstrained reach movements using Jack™ digital human modeling software
Trajectories of predicted and measured movements in a perspective (A) and top view (B)
Angle time profiles for predicted (A: left panels) and measured (B: right panels) movements
Hand-head coordination by gaze redirection constraint conditions
Reach movements with gaze constrained to redirect to the initial fixation point
Simulation of gaze-constrained reach movements
Dispersion of fingertip pointing positions
LIST OF TABLES
2.1. Model coefficients for the reconstructed velocity profiles
3.1. Model parameter estimates and r² coefficients
4.1. Coefficients and significance of the aiming error regression models
4.2. Measured and predicted head movement contribution ratio (HMCR)
4.3. Coefficients and significance of the regression models of head aiming error
5.1. Design of experiments
5.2. Means and standard deviations of head flexion/extension angle across all subjects in the static holding situation
5.3. Means and standard deviations of head elevation across all subjects in dynamic lifting situations
6.1. Joint composition of the manual and visual subsystems
6.2. RMS error of the joint angles predicted by each model
6.3. Error of the joint angles of the end posture predicted by each model
7.1. Frequency, timing, and angular displacement of initial elbow flexion for all targets
7.2. Time of the peak fingertip velocity as a proportion of normalized movement time
ABSTRACT

Visual information is crucial for the representation of the space in which the hand and body move. The acquisition of visual information is achieved by eye and head movements, which are affected by concurrent hand and whole-body movements. This thesis investigates the interaction and control of head and hand movements in the context of unconstrained, visually-guided aiming and pointing tasks in a three-dimensional space. Measurements of head movements for target localization tasks indicate that head movement kinematics is composed of an initial component weakly correlated to target position, followed by multiple corrections. Since the eyes are estimated to aim at the target when the corrections occur, it is suggested that a goal of head movements is to achieve a desired final orientation (posture). This hypothesis is supported by experiments showing that 1) the proportional contribution of head movement in gaze displacement, which corresponds to 68% and 43% (r² = 0.95 and 0.65) of target azimuth and elevation, respectively, is consistent in spite of the variability in corrective movement kinematics; and 2) the final head orientation corresponds to an optimal posture for a given target position and task requirement that minimizes the error of the visuo-spatial representation in an egocentric reference frame associated with eye and head orientation. Furthermore, head posture and movement reflect the influence of the concurrent tasks performed by the whole body or hands, as indicated by experiments showing that: 1) forward displacement of the center of mass induced by hand movement can be compensated by head elevation; 2) the head and hand movement controllers should negotiate the control of common links to achieve both global and segment-specific goals. Based on the above observations, a coordination model of unconstrained 3D reach movements, including multiple phases with specific controllers, was developed.
A supervisory system coordinates appropriate control modes by discrete sampling of movement outcomes. The model suggests that
unconstrained 3D movements are effectively controlled on-the-fly within a context, and may not require optimization schemes for coordination. The implementation of the model demonstrates its capacity to accurately simulate visually guided reach movements while preserving their dynamic characteristics.
CHAPTER 1 INTRODUCTION
1.1 Applied Problem

Biomechanical and neural control models of movements enhance the understanding of the complex behavior of the central nervous system (CNS) and musculoskeletal system, since the factors determining movements and postures can be simplified and systematically manipulated. From an engineering perspective, human movement models can be used to reduce the prototyping time and cost of ergonomic job analysis and product evaluation. In addition, since musculoskeletal work capacity can be rationally interpolated and extrapolated through models, a proactive approach can be applied to job and product design (Chaffin, 1999).
The significance of head movements is strongly related to the acquisition of visual information, which is crucial for the calibration of the human movement systems and interaction with the environment. The orientation and position of the head affect the line of sight and the visual field. In particular, the mechanical range of motion of the eyes (±55° for horizontal rotation: Guitton & Volle, 1987) can be extended by head mobility (±64°: Sherk, 1989), which makes the effective range of gaze (= eye + head angle) cover up to ±109°. Since the size of the peripheral visual field is ±90° (Haines & Gilliland, 1973), visual information can be gathered over a whole range of ±180° from the mid-sagittal plane without moving the trunk or the whole body. Even though visual information provides an accurate representation of the environment over a large area, it has been suggested that the accuracy and reliability of the information degrade with target eccentricity (Jeannerod, 1988; Wickens, 1992).
Hence it has been suggested that critical information should be presented, and controls should be located, within a relatively small area of 10−15° from the mid-sagittal plane, assuming that the head is in the neutral position with respect to the torso (Sanders & McCormick, 1993), in order to maintain minimal reaction times to visual stimuli (Haines & Gilliland, 1973). In a vehicle-driving situation, the duration of gaze deviation from the road increases with target eccentricity from the straight-ahead line of sight (Dukic, Hanson et al., 2005). In addition, it has been suggested that the accuracy of hand movements for the manipulation of controls degrades with visual eccentricity (Jeannerod, 1988), and hand reach movements to a control require visual guidance, as addressed by Woodworth (1899). The importance of accurately predicting the head position of a vehicle or machine operator has been acknowledged in display and interface design (Millodot, 1986). For example, in the SAE Recommended Practice J941, the spatial arrangement of vehicle interior displays is expressed in terms of visual angle with respect to a reference point at the mid-eye (nasion) position. Knowing the position and orientation of the head is also crucial in designing the workspace: the preferred distance of a computer monitor from the head was reported to be within the range of 43 to 90 cm (Jaschinski, 2002), and the preferred elevation of a computer monitor to be 22 to 27° below the ear-eye plane of the head (Burgess-Limerick, Plooy, & Ankrum, 1998). While the position and orientation of the head affect the acquisition of visual information, the location and properties of visual targets also determine head and neck posture. In peg-insertion tasks simulating manual assembly work, the location of the peg holes and the difficulty associated with visual and manual task requirements affect head/neck flexion angles (Li & Haslegrave, 1999).
It was also reported that a high level of precision in manual assembly tasks induces postural changes in the head and neck, the head typically moving closer to the hand during the work (Wartenberg et al., 2004). However, few attempts have been made to develop a general framework of head movement models as a function of the visual and manual aspects of task requirements. For example, in the case of SAE J941 described above, it was assumed
that the head position and orientation are determined only by the vehicle seating reference point, seat-track length, and seat-back angle (Manary et al., 1998), while visual task requirements and target properties were neglected. Also, changes in neck and head posture as a function of concurrent whole-body and hand movements interacting with the environment have not been investigated or modeled systematically. In principle, manual work in a seated posture can be considered as a comprehensive and interactive function performed by the eye, head, torso, arm, and hand. For visually guided tasks in general, the quantification of head movements, the definition of head orientation in space, and the associated visual function capacity should be taken into account.
1.2 Theoretical Problem

1.2.1 Models of Movement Control

One of the key aspects of movement control models is how the CNS plans and generates the kinematics of reach movements. A number of modeling attempts have been made to explain movement generation as a process optimizing various cost functions, including jerk in the end-effector trajectories (Flash & Hogan, 1985), joint torque changes (Uno, Kawato, & Suzuki, 1989), joint effort (Hasan, 1986), or a weighted sum of joint velocities (Zhang, Kuo, & Chaffin, 1999; Wang, 1999). In these models, the final end-effector position is constrained to the target position, and an optimization algorithm is used to resolve the redundant degrees of freedom and to determine the time-dependent changes of joint angles that minimize the specified cost functions. In the models described above, it was assumed that the time-dependent changes in joint angles or the end-effector trajectories are the CNS's primary concern, while the final posture is simply the end result of joint movements. However, it has been reported that even when the target position is perturbed immediately before movement onset, the final posture remains invariant (Desmurget & Prablanc, 1997). The final posture and movement trajectories may be independent domains on which separate adaptive controllers of the CNS act (Scheidt, Mussa-Ivaldi, & Ghez, 2004). Hence, it has been proposed that a predetermined final posture may be a goal that the CNS tries to achieve
through movement (Desmurget & Prablanc, 1997; Rosenbaum, Meulenbroek, & Vaughan, 2001). The arguments for a postural goal of movement control are also associated with the type of coordinate system in which the CNS encodes the external world and programs the movement. Specifically, even though visually represented target locations may be encoded in an allocentric reference frame (task space), the control signals generating muscle tension and joint moment, and the proprioceptive feedback information about the current joint configuration, are encoded in an egocentric reference frame (joint space). Studies have indicated that the final posture and movement controllers use different coordinate systems (Ghez, Dinstein, Cappell, & Scheidt, 2004), and that three-dimensional reach movements are not controlled in a task space (Desmurget et al., 1995; Rosenbaum et al., 1995). Hence three-dimensional movements may be essentially controlled to achieve a desired posture using an egocentric reference frame; however, to date neither rigorous quantifications nor integrative modeling attempts have been made on three-dimensional movements from these perspectives.

1.2.2 Head Movements as a Postural Response to Visual Stimuli

If a desired posture is what the CNS tries to achieve through movement, and if it remains unaffected by disturbances applied before or during the movement, this may be because the final posture is an optimal set of joint angles for the given task requirements (Rosenbaum et al., 2001). The CNS may keep a repertoire of final postures that are continuously updated and optimized through learning and experience (Massion, 1992). A similar approach assumes that sets of movement representations can be retrieved and modified to generate new movements customized for a new context (Park, Chaffin, & Martin, 2004). If the final posture is optimal for given task requirements, it may be necessary to model postures separately from movements.
With regard to gaze movements, in which the eye line of sight is the end-effector, the final posture should be described and modeled by the relative contributions of head and eye orientation angles, which vary as a function of target location. Several studies have modeled head orientation for visual tasks (Stahl, 1999; Hin & Delleman, 2000; Delleman et al., 2001). However, these models have been
limited to two-dimensional (either horizontal or vertical) head orientation. In addition, the experimental tasks have included only stepwise gaze movements (Delleman, 2000; Delleman et al., 2001), or visual targets have been presented only within a restricted area (Stahl, 1999). Hence a head posture model should be developed for targets distributed over large eccentricities, and it should be based on measurements of gaze movements simulating regular daily activities. It is not clear to date whether and how final head postures are optimized. In other words, the specific cost functions determining head posture for given task requirements have not yet been explicitly identified. A potential clue can be found in the degradation of hand pointing accuracy when head movement is constrained or prevented (Biguer, Prablanc, & Jeannerod, 1984; Roll, Bard, & Paillard, 1986). Hence it has been suggested that the postural response of the head is optimized for target encoding and movement accuracy, which tend to degrade with increased eccentricity from the neutral position (Roll et al., 1986; Vanden Abeele et al., 1993; Rossetti et al., 1994). A model of head orientation optimizing target position encoding error was developed in an early study (Rossetti et al., 1994); however, that model was not based on a task and reference frame compatible with the programming of head movements as described above, and it did not fully explain the pattern of head posture. Hence, a new model, based on task requirements compatible with head movement control and rigorously parameterized, is needed. Even though head posture models can predict the final head orientation accurately, it is still necessary to determine and understand how the CNS programs movements in order to establish the final posture.
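In its minimal form, a head posture model of the kind discussed above can be a proportional gain relation, using the head movement contributions quoted in the thesis abstract (about 68% of target azimuth and 43% of target elevation). The sketch below is purely illustrative: the function name is hypothetical, the gains are treated as constants, and the models actually fitted in Chapter 3 may include intercepts, saturation, and subject effects.

```python
def predicted_head_orientation(target_azimuth_deg, target_elevation_deg,
                               k_az=0.68, k_el=0.43):
    """Proportional head-contribution model (illustrative sketch).

    The head is assumed to cover a fixed fraction of target eccentricity
    (gains from the abstract: 0.68 for azimuth, 0.43 for elevation),
    the eyes covering the remainder of the gaze displacement.
    Returns (head_azimuth_deg, head_elevation_deg).
    """
    return k_az * target_azimuth_deg, k_el * target_elevation_deg

# e.g. a target at 100 deg azimuth, 50 deg elevation:
# predicted head azimuth = 68.0 deg, head elevation = 21.5 deg
```

A model of this shape makes the abstract's claim concrete: the head's share of gaze displacement is consistent across targets, even when the corrective kinematics that produce the final orientation vary.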
Since head movements play an important role in the visual acquisition of targets of large eccentricity, and accurate information about the target position may not be available before a movement is initiated, because the spatial resolution of the retina degrades with eccentricity, the CNS needs to plan a movement to an "unknown" location. This singular problem is likely to require an on-the-fly programming strategy that would give rise to kinematic features differing from those of pre-programmed time-optimal movements, which show a classical bell-shaped velocity profile (Morasso, 1981). To the best of our knowledge, the kinematics of head movements participating in gaze movements to targets of large eccentricity has not been investigated.
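The "classical bell-shaped velocity profile" of pre-programmed time-optimal movements has a well-known closed form in the minimum-jerk model of Flash and Hogan (1985). The sketch below (plain NumPy; an illustration of that standard model, not an implementation from this thesis) evaluates the position and velocity profiles for a single coordinate and shows that, for a fixed movement duration, the peak velocity scales linearly with movement amplitude.

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=101):
    """Minimum-jerk trajectory from x0 to xf in time T.

    Position: x(t) = x0 + (xf - x0) * (10*tau**3 - 15*tau**4 + 6*tau**5),
    with tau = t / T.  The velocity profile is symmetric and bell-shaped,
    peaking at 1.875 * (xf - x0) / T at mid-movement.
    """
    t = np.linspace(0.0, T, n)
    tau = t / T
    x = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    v = (xf - x0) / T * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)
    return t, x, v

# e.g. a hypothetical 20 deg head rotation completed in 0.5 s
t, x, v = minimum_jerk(0.0, 20.0, 0.5)
```

For this example the velocity peaks at 1.875 × 20/0.5 = 75°/s at mid-movement, illustrating the amplitude-peak velocity relation that Chapter 2 exploits when reconstructing the initially intended head movement.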
1.2.3 Head Movements as an Integral Part of Whole-Body Movements

The control of head movements cannot be completely independent of the context of whole-body movements and/or posture. For example, when hand reach movements require a change in head orientation, torso movements accompanying the reach may affect gaze differently from what would be expected from "isolated" head movements. Similarly, the movement of the head can affect the movement of the whole body. The head is intrinsically unstable, and its behavior can be explained with an inverted pendulum model (Gillies et al., 1998). The complicated activation-coactivation patterns in neck muscle groups have to achieve two objectives simultaneously: 1) to move the head toward the target of interest, and 2) to maintain the stability of the head and cervical spinal complex (Ouerfelli et al., 1999; Bogduk & Mercer, 2000). Head movements have primarily been studied from one of these perspectives alone, and the two aspects of head movement control (visual and postural) have not been integrated for movement analysis or modeling. From the above perspective, it can be hypothesized that head posture is optimized not only to serve visual functions, but also to integrate the context of movement, including whole-body posture and the requirements of concurrent tasks. The movements of both the head and the whole body need to be coordinated so that all movement components and controllers can be organized in synergy with one another. For example, it has been found that hand movement onset is delayed until gaze is released from the previous task and available to provide visual guidance for subsequent hand navigation (Pelz, Hayhoe, & Loeber, 2001). Hence a general framework should be developed to take into account this coordination issue concerning multiple movement controllers and their interactions.
1.3 Specific Aims of the Thesis

The present work will attempt to model the movements of the head in the context of unconstrained visually guided movements, and the interaction and coordination of head and whole-body movements. The direct outcome of the present work will be a functional description of head movements with respect to visual targets, task
requirements and context, which can be readily implemented in digital human modeling software. In addition, it is expected that the present work would contribute to an enhanced understanding of how the CNS plans, organizes, and executes movements, especially when the task requires visual guidance and the coordination of multiple systems and movement components.
The specific aims of the present dissertation are as follows:
1) To investigate and model final head posture and movement kinematics as a function of target location and task requirements;
2) To identify the factors determining the postural response of the head, and to investigate the origin and necessity of optimality in head posture, in terms of sensorimotor capacity and concurrent whole-body posture;
3) To investigate the interaction between head and whole-body movements, and to develop a model to simulate coordinated movements of the head and hand performing visual and manual functions;
4) To investigate the structure of unconstrained three-dimensional movements of the head and hand, using a modeling approach.
1.4 Thesis Organization

This thesis is composed of six studies addressing different aspects of head movements. The chapters are organized so that the static versus dynamic aspects of head movements, and gaze control versus interaction with whole-body movements, can be addressed and contrasted (Figure 1.1). The first chapter (the present chapter) states the theoretical and applied problems in light of the current literature. The second chapter (Chapter 2) describes how head movement kinematics are planned and executed to achieve a goal posture. Chapter 3 concerns the development of a model of head orientation as a function of target location for large eccentricities. Chapter 4 attempts to explain how head posture is optimized, from the perspective that head orientation is determined to minimize movement errors. While these three chapters deal with head movements associated with visual functions only, the next three chapters focus
on the interaction of head and whole-body movements while performing lifting or reaching tasks. Specifically, Chapter 5 shows that head orientation may be affected by whole-body balance requirements and that head posture is an integral part of whole-body posture. Chapter 6 suggests that in visually guided reach movements, the head and hand must cooperate in order to achieve their respective goals and their common goal simultaneously. Finally, Chapter 7 develops a comprehensive model of visuomanual coordination for head and hand reach movements. As illustrated in Figure 1.1, Chapters 3, 4, and 5 deal with posture, while Chapters 2, 6, and 7 address dynamic movement kinematics.
Figure 1.1. General organization of the dissertation
CHAPTER 2 HEAD MOVEMENT STRATEGY IN UNCONSTRAINED VISUAL TARGET LOCALIZATION
2.1 Abstract

While targets within the visual field can be localized immediately, visual search over large eccentricities involves serial scanning driven by a cognitive map of the environment. The kinematics of visually guided head movements should therefore reflect the corresponding strategies of visual target localization. Measurements of head movements while directing gaze to horizontally distributed targets at eye level were used to reconstruct the initially intended head movement (IIHM), based on the assumption that the initial head movement has the characteristics of a pre-programmed time-optimal movement. The reconstructed kinematics indicated that the amplitude of the IIHM reaches an asymptote of 20.3° on average, even for target eccentricities up to 120° azimuth. The peak velocity of the IIHM was linearly correlated with IIHM amplitude rather than with target eccentricity. Hence it can be assumed that the initial head movement is programmed and controlled in a feed-forward mode to place the head in an intermediate, predetermined optimal location that allows the eyes to reach any expected target. The slow-phase sub-movement components following the initial movement are likely driven by proprioceptive feedback. The existence of these subsequent corrections suggests that they are used to displace the head to a specific location. They support the hypothesis that the goal of head movement control may include the achievement of a certain combination of eye and head orientation, which may be programmed in an egocentric reference frame.
2.2 Introduction

Gaze movements displace the image of the object of interest from peripheral to foveal vision in order to generate an accurate representation of the task space where subsequent actions take place. In general, gaze movements accurately displace the line of sight onto the target (Guitton & Volle, 1987). Since vision is suppressed during the saccadic phases of eye movements (Bridgeman et al., 1975), it has been assumed that gaze movements are pre-programmed before initiation. Furthermore, the control of eye movements has long been explained by a local feedback system, which assumes error correction based on neural output signals (Robinson, 1975; Jürgens et al., 1981; Laurutis and Robinson, 1986). The information about target location required to program gaze movements originates from the retina. As the spatial resolution of the retina is highest at the fovea and low at the periphery, the accuracy of this information degrades with eccentricity from the foveal line of sight (Paillard & Amblard, 1985; Bock, 1993). Indeed, studies have reported that the outcome of motor programs for eccentric targets may not be as accurate as for foveated targets. Saccades triggered to localize targets typically exhibit impaired accuracy and degraded kinematic efficiency when visual targets are presented in a peripheral region of the retina (Dick et al., 2004). The proportion of erroneous saccades increases with target eccentricity in visual target localization tasks (Viviani & Swensson, 1982). When target eccentricity exceeds 35° from the mid-sagittal plane, the direction of saccades is incorrect more frequently (Kalesnykas & Hallett, 1994). In contrast, targets within the visual field can be localized both instantly and efficiently (Carrasco, Evert, Chang & Katz, 1995). Studies have suggested that the localization of targets within the visual field is driven by a sensorimotor map and essentially enables parallel processing.
However, when the targets are located beyond the visual field, target-localizing tasks are performed in a serial search mode typically characterized by a fast transition of gaze across regions followed by scanning within a region. Hence it was suggested that visual target localization beyond the initial field of view is performed in a cognitive control mode,
rather than in a sensorimotor control mode (Paillard, 1987; Cave & Wolfe, 1990; Wolfe, Cave & Franzel, 1989; Wolfe, 1994). Visual targets presented in peripheral locations are likely to induce head movements for gaze displacement (Gauthier et al., 1986; Guitton & Volle, 1987; Fuller, 1992; Stahl, 1999). The patterns of eye-head coordination also differ depending on target eccentricity. Specifically, the eyes move prior to the head when targets are displayed within the visual field (Biguer et al., 1984). However, the head tends to move earlier than the eyes when the target is not in sight from the initial fixation point (reviewed by Netelenbos & Savelsbergh, 2003). Early initiation of head movements, and the associated change in head movement kinematics, should therefore be of cognitive origin. Indeed, studies have indicated that the cognitive aspects of visually guided tasks have a significant influence on head movement kinematics. When a subject reads the same text several times, head movement amplitude increases while eye movement amplitude decreases (Lee, 1999). Head movements are also made more often, and their amplitude becomes larger, in conditions in which future gaze direction can be anticipated (Oommen et al., 2004). Previous studies have indicated that head movement kinematics are generally characterized by a predictable velocity profile composed of a smooth bell-shaped curve when targets are presented within the visual field (Tweed et al., 1995; Freedman & Sparks, 2000). When a target is presented beyond the visual field, however, it is expected that head kinematics will show the features of a sequential target localization movement, characterized by a fast feed-forward transition followed by corrective scanning, rather than a single time-optimal movement with the so-called "bell-shaped" velocity profile.
Gaze is obtained using a combination of head and eye orientation; hence the contribution of the head is less than 100% of target eccentricity (Biguer et al., 1984; Guitton & Volle, 1987; Fuller, 1992; Stahl, 1999). However, it is not clear whether final head orientation is merely an end outcome of the overall sequence of head movements. Conversely, it can be suggested that the overall sequence is generated in order to achieve a desired head orientation. This issue is related to the general question of whether the control goal of the CNS is movement, or posture (Feldman, 1986).
The objectives and hypotheses of the present study are as follows:

Objectives
1) To characterize the kinematics of head movements used to locate targets of large eccentricity
2) To identify the strategy of unconstrained head movements in visual target localization tasks and determine the contribution of cognitive and sensorimotor mapping functions in head movement control
3) To determine the relationship between the final orientation (posture) and movements in head movement control

Hypotheses
1) Head movements are composed of multiple components:
   • A pre-programmed initial movement accounts for a gaze displacement to an estimated potential target position
   • Subsequent error corrections are based on proprioceptive feedback updating the spatial representation of the environment
2) For targets beyond the initial visual field, a cognitive mapping mode is initially used for head movement control; for targets within the visual field, a sensorimotor mapping mode is initially used
3) The goal of head movement is to achieve a desired orientation through a sequence of multiple movement components
2.3 Methods

2.3.1 Subjects
Five male and five female subjects participated in the experiments as paid volunteers. Subjects were students from the University of Michigan recruited through in-class advertisements or email announcements. The mean age of the subjects was 22.3 years (SD: 1.8). All subjects were free from any known musculoskeletal or neurological disorders, and had normal vision (20/20 or better) without corrective lenses.
Mean stature and body weight were 170.9 cm (SD: 12.0) and 67.2 kg (SD: 16.3), respectively. All subjects were right-handed.

2.3.2 Equipment
Visual targets were placed on an arc (radius = 115 cm, arc length = 300 cm) set horizontally in front of the subject. The elevation of the target arc was set at the eye level of the seated subject (Figure 2.1). The visual targets were distributed every 10° up to a maximum azimuth of 120° of visual angle in the right hemisphere from the mid-sagittal plane. Each visual target was composed of either alphanumeric characters (0-9, A, C, E, F, H, L, U, or P) or a horizontal bar (-) displayed on seven-segment LEDs whose visual angle was approximately 0.25°. The subject was seated on a chair (seat pan height = 40 cm, seat pan width = 50 cm, back support height = 58 cm) throughout the experiments. The seat position and arc height were adjusted so that the center of the display coincided with the center of rotation of the head (atlanto-occipital joint) and the individual subject's nasion aimed at the 0° target location (Figure 2.1). The entire room was dimly illuminated during the experiments.
Figure 2.1 The configuration of the target arc

2.3.3 Movement recordings
An electromagnetic motion capture system (Flock of Birds™, Ascension Technology) with a sensor that measures movements with six degrees of freedom was used to record head movements. The sensor was placed on the forehead, and a calibration procedure was carried out to determine the location of anatomical landmarks (nasion, tragion, infraorbitale, etc.) in the sensor's local reference frame. The motion of each landmark in the global reference frame was then calculated as follows:

p^{global}(t) = T^{global}_{sensor}(t) p^{sensor}    (Eq. 2.1)

where p^{sensor} denotes the location of a landmark in the sensor-attached reference frame, and T^{global}_{sensor}(t) represents the recorded homogeneous transformation matrix describing the orientation and location of the sensor-attached reference frame as a function of time. The movements were recorded at a 25 Hz sampling frequency, and the trajectory of each landmark was smoothed off-line using a zero-phase-shift second-order Butterworth low-pass filter with a 6 Hz cutoff frequency. The horizontal head orientation angle was measured as the included angle between the global forward vector and the naso-occipital axis vector projected onto the horizontal plane, where 0° indicates the head aiming forward and positive increments indicate rightward aiming.

2.3.4 Procedure
Each subject was asked to sit on the chair at the center of the target display arc. As described above, the locations of anatomical landmarks were measured during the calibration procedure. The subject was given pilot trials until the experimental protocol became familiar. For each trial, the subject was asked to look at the initial home display located in the sagittal plane until it disappeared (duration = 1 s), and then redirect gaze to an eccentric target illuminated for 2 s. Home and target display illuminations were accompanied by distinct 100 ms tones of 500 Hz and 2000 Hz, respectively. The subject was asked to fixate on the illuminated display until the tone signaled the appearance of the next display (either home or target), at which point the subject was required to redirect gaze. Throughout the experiments the head was free to move. No specific instructions were provided regarding head movements.
A trial was composed of the sequential presentation of a home and a target display, and a block was composed of 24 trials (12 target eccentricities × 2 replications). The target locations were randomized and balanced within a block. A total of two blocks, separated by a five-minute rest period, were performed by each subject. Within one block the visual displays, for both home and eccentric targets, were alphanumeric characters, which the subject was asked to read aloud. In the other block, a horizontal bar was presented for the home and eccentric targets, and the subject verbally reported the ordinal position of its appearance. The order of the blocks was balanced and randomized across subjects. The procedures were reviewed and approved by the University of Michigan Health Sciences Institutional Review Board for compliance with the appropriate guidelines and state and federal regulations.

2.3.5 Data Analysis
The times of initiation and termination of head movements were determined during off-line analysis of the recorded angles. The initiation of head movement was defined as the time at which the head had been immobile for the previous 120 ms (3 consecutive sampling frames) and engaged in active rotation for the next 120 ms. The threshold for active head rotation was set to an angular velocity ≥ 25°/s. Likewise, the termination of head movement was defined as the time at which the head returned to a stationary position for at least 280 ms (7 consecutive sampling frames).
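The initiation/termination criteria above can be sketched as follows. This is a minimal illustration under our own naming (the study does not publish its analysis code), using the stated 25 Hz sampling rate, 25°/s velocity threshold, and 3-/7-frame windows:

```python
import numpy as np

FS = 25.0          # sampling frequency (Hz): 3 frames = 120 ms, 7 frames = 280 ms
VEL_THRESH = 25.0  # angular velocity threshold for "active rotation" (deg/s)
STILL = 3          # immobile/active window at initiation (frames)
HOLD = 7           # stationary window at termination (frames)

def movement_onset_offset(angle_deg):
    """Return (onset, offset) frame indices for one head-angle trace.

    Onset: first frame preceded by >= 3 immobile frames and followed by
    >= 3 frames of active rotation. Offset: first frame after onset from
    which the head stays below threshold for >= 7 consecutive frames.
    """
    vel = np.gradient(np.asarray(angle_deg, float)) * FS   # deg/s
    active = np.abs(vel) >= VEL_THRESH
    onset = offset = None
    for i in range(STILL, len(active) - STILL):
        if not active[i - STILL:i].any() and active[i:i + STILL].all():
            onset = i
            break
    if onset is not None:
        for j in range(onset, len(active) - HOLD + 1):
            if not active[j:j + HOLD].any():
                offset = j
                break
    return onset, offset
```

In practice this would be applied to each recorded horizontal head-angle trace after the low-pass filtering described in Section 2.3.3.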
2.4 Results

2.4.1 Head Movement Kinematics
The majority of velocity profiles of head movements observed in the present experiments were categorized as follows (Figure 2.2):
• No head movement (eye movements only): 7% of total trials for all subjects
• A bell-shaped curve with small or no asymmetry across the acceleration and deceleration phases (Figure 2.2A): 16% of total trials
• A large acceleration phase followed by an extended deceleration phase (Figure 2.2B): 23% of total trials
• A large peak velocity followed by a small additional peak (Figure 2.2C): 36% of total trials
• A large peak velocity followed by multiple combinations of decelerations and accelerations (Figure 2.2D): 18% of total trials
When all of the velocity profiles obtained for each subject are normalized and superimposed (Figure 2.3), it can be observed that the component common to these categories is the large acceleration phase in the initial part of the movement.
Figure 2.2 Sample categories of head movement velocity profiles. A) A single bell-shaped curve; B) A large acceleration phase followed by an extended deceleration phase; C) A large peak followed by a small additional peak; D) A large peak followed by multiple combinations of accelerations and decelerations.
Figure 2.3 Velocity profiles normalized for magnitude and time scale, and superimposed together. The thick line represents the initial movement component common to all velocity profiles (thick solid line: “observed” common component; thick dotted line: “reconstructed” common component). Data from one subject.
2.4.2 Reconstruction of the Initially Intended Head Movement
The first acceleration phase appears to be common to all movements (the thick solid line in Figure 2.3), so it is of interest to recover the full velocity profile of this component (truncated by subsequent subcomponents) by numerically reconstructing the deceleration phase (the thick dotted line in Figure 2.3) using a mirror-flipped acceleration profile. The recovery process is based on the assumption that the non-truncated initial phase of the movement has the symmetric velocity profile of a pre-programmed movement, in spite of some variability (Nagasaki, 1989; Freedman & Sparks, 2000). To reconstruct each velocity profile of the initially intended head movement (IIHM) component, the following procedures were applied (Figure 2.4). First, the completion of the initial acceleration phase was determined by the first zero crossing of the first-order time derivative of head angular velocity. Second, based on the assumption that the identified initial acceleration phase corresponds to the initial half of a sigmoidal curve, the three parameters of the least-squares fitted sigmoidal curve (mode position, dispersion, and scaling factor) were estimated. The times of movement initiation, peak velocity, and completion were then estimated from the sigmoidal curve. Third, head orientation in a global reference frame was represented by a time-dependent quaternion, and the head-aiming direction at the end of the reconstructed velocity profile was estimated by extrapolation from the head orientations at movement initiation and at peak velocity. Specifically, the extrapolation used spherical linear interpolation of quaternions (Eq. 2.2), assigning normalized movement time 0.0 to the head orientation at movement initiation and 0.5 to the orientation at peak velocity, and estimating the head orientation at instant 1.0 of the normalized movement time.

s(t; p, q) = [sin((1 - t)θ) p + sin(tθ) q] / sin(θ)    (Eq. 2.2)

where
t: normalized movement time for interpolation or extrapolation
p, q: quaternion representations of the initial and final head orientation, respectively
θ: angular distance between p and q
Figure 2.4 Reconstruction of an initial velocity profile using a spherical linear interpolation method
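Eq. 2.2 can be implemented directly. On our reading of the procedure above, the orientations at normalized times 0.0 (initiation) and 0.5 (peak velocity) are known, so estimating the orientation at time 1.0 amounts to evaluating the slerp of these two quaternions at t = 2, i.e., extrapolating one full step beyond the peak-velocity orientation. A minimal sketch (names and example values are ours, not from the study):

```python
import numpy as np

def slerp(t, p, q):
    """Spherical linear interpolation of unit quaternions (Eq. 2.2).

    t = 0 returns p, t = 1 returns q; values of t outside [0, 1]
    extrapolate along the same great circle."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))  # angular distance
    if np.isclose(theta, 0.0):
        return p                                          # identical orientations
    return (np.sin((1.0 - t) * theta) * p + np.sin(t * theta) * q) / np.sin(theta)

# Hypothetical example: head at identity at initiation, rotated 30 degrees
# about the vertical axis at peak velocity; extrapolate to movement end.
p_init = np.array([1.0, 0.0, 0.0, 0.0])                        # (w, x, y, z)
q_peak = np.array([np.cos(np.pi / 12), 0.0, 0.0, np.sin(np.pi / 12)])
q_end = slerp(2.0, p_init, q_peak)                             # ~60 degree rotation
```

For a symmetric velocity profile the head covers equal angular distance in the acceleration and deceleration halves, which is why the end orientation lies one further slerp step beyond the peak-velocity orientation.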
2.4.3 Amplitude of the Initially Intended Head Movement (IIHM)
The reconstructed velocity profiles indicate that the amplitude of the IIHM increases with target eccentricity and reaches an asymptotic value (Figure 2.5). The relationship was modeled using an exponential curve (Eq. 2.3):

ŷ = b1 (1 - exp(-x / b2))    (Eq. 2.3)
where
x: target eccentricity
ŷ: estimated amplitude of the IIHM
b1, b2: model parameters estimated by least-squares fitting
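Fitting Eq. 2.3 by least squares can be sketched as follows. The data here are synthetic (generated from an assumed asymptote of 20° and deflection point of 25°, in the order of magnitude the study reports), not the experimental measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_exp(x, b1, b2):
    """Eq. 2.3: IIHM amplitude rises with eccentricity toward asymptote b1.

    At x = b2 the curve reaches b1 * (1 - 1/e), i.e. ~63% of the asymptote,
    which is the 'deflection point' used in Section 2.4.4."""
    return b1 * (1.0 - np.exp(-x / b2))

# Synthetic, noiseless data standing in for (eccentricity, IIHM amplitude)
ecc = np.linspace(10.0, 120.0, 12)         # target eccentricity (deg)
amp = saturating_exp(ecc, 20.0, 25.0)      # assumed b1 = 20 deg, b2 = 25 deg

(b1_hat, b2_hat), _ = curve_fit(saturating_exp, ecc, amp, p0=(15.0, 30.0))
```

On noiseless data the fit recovers the generating parameters; on real measurements b1 estimates the per-subject asymptote reported in Table 2.1.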
Figure 2.5 Amplitudes of the IIHM as a function of target eccentricity. The trend curve was obtained by least-squares fitting (Eq. 2.3). Example from subject 2.

The asymptotes estimated from the fitted curves for each subject are listed in Table 2.1. The mean asymptote is 20.3° with a standard deviation of 3.9° across subjects, indicating that the maximum amplitude of the IIHM does not exceed about 20.3° for target eccentricities up to 120°. The head aiming direction did not reach the target orientation: on average, the head aiming direction was 72% (SD: 0.04%) of target eccentricity. The data showed that the amplitude of the IIHM was much smaller than the final head aiming direction (Figure 2.6).
Figure 2.6 Amplitude of the IIHM versus final head aiming azimuth. The diagonal line represents the hypothetical equality of IIHM amplitude and head aiming azimuth. Subject 4.
2.4.4 Peak Velocity
The peak velocity of the IIHM (reconstructed movement) increases with target eccentricity and reaches an asymptotic value (Figure 2.7A), as observed for the amplitude of the IIHM; an increase of peak velocity with eccentricity is also a characteristic behavior of head movements (Goossens & van Opstal, 1997). The relationship between peak velocity and target eccentricity was likewise modeled by an exponential function (Eq. 2.3). A deflection point of the model was defined as the target eccentricity at which the peak velocity reaches approximately 63% of its asymptotic value; from Eq. 2.3, this corresponds to the estimated value of the model parameter b2, beyond which peak velocity increases only at a slow rate with increasing target eccentricity. The mean deflection point for all ten subjects is 31.6° with a standard deviation of 10.7° (Table 3.1). In contrast to the nonlinear relationship between IIHM peak velocity and target eccentricity, IIHM peak velocity can be described as a linear function of IIHM amplitude (Figure 2.7B). The relationship was modeled using a simple linear function of the amplitude of the IIHM (Eq. 2.4). The mean slope across all subjects is 10.19, with a standard deviation of 4.02.

ŷ = a1 + a2 x    (Eq. 2.4)

where
x: amplitude of the IIHM
ŷ: estimated peak velocity
a1, a2: model parameters estimated by least-squares fitting
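The linear fit of Eq. 2.4 is an ordinary least-squares problem. A minimal sketch with synthetic stand-in data (an assumed intercept and slope, not the study's measurements):

```python
import numpy as np

# Synthetic (IIHM amplitude [deg], peak velocity [deg/s]) pairs generated
# from an assumed linear law v = a1 + a2 * amp with a1 = -5, a2 = 12
amp = np.array([5.0, 8.0, 10.0, 14.0, 17.0, 20.0])
vel = -5.0 + 12.0 * amp

a2_hat, a1_hat = np.polyfit(amp, vel, 1)   # polyfit returns highest degree first
```

On exactly linear data the fit recovers the generating slope and intercept; applied per subject, this yields the a1 and a2 coefficients reported in Table 2.1.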
Figure 2.7 Distribution of peak velocity as a function of A) target eccentricity and B) the amplitudes of the IIHM. Example from subject 7.
Table 2.1. Model coefficients for the reconstructed velocity profiles (one value per subject, in subject order; rightmost entry: mean (SD) across subjects)

Intended amplitude vs. target eccentricity (Eq. 2.3)
b1:  12.53  17.52  19.53  23.67  18.98  55.55                                          20.29 (3.86)
b2:                                                                                    26.45 (14.54)
R²:  0.39  0.30  0.45  0.70  0.38  0.50  0.50  0.41  0.52  0.42

Peak velocity vs. target eccentricity (Eq. 2.3)
b1:  175.01  205.66  276.88  326.90  199.73  213.16  182.27  212.81  223.38  206.45    222.22 (45.91)
b2:  45.13  15.37  41.06  55.38  56.46  31.73  72.78  26.01  48.26  77.73              46.99 (19.67)
R²:  0.26  0.50  0.60  0.83  0.62  0.62  0.52  0.49  0.51  0.58

Peak velocity vs. intended amplitude (Eq. 2.4)
a1:  -49.65  12.38  36.49  -13.24  -15.84  -0.44  -13.76  11.43  6.40  -6.16           -3.24 (22.78)
a2:  18.18  12.41  11.67  16.29  14.97  13.63  14.20  14.88  14.26  14.75              14.52 (1.84)
R²:  0.83  0.61  0.70  0.90  0.80  0.77  0.79  0.85  0.86  0.82
2.4.5 Subsequent Corrections Following the initial head aiming movement, subsequent corrections were made to displace the gaze onto the target. The number of corrective movements, measured by the occurrences of local maxima in the velocity profiles after the initial head peak velocity, is illustrated in Figure 2.8. The number of corrections seemed to increase with target eccentricity, although a large variability was observed.
Figure 2.8 Number of corrections measured by the occurrence of the peaks in the velocity profile after the initial acceleration phase (subject 2).
2.5 Discussion

2.5.1 Cognitive Versus Sensorimotor Control
Head movement kinematics observed in the present study indicate that, overall, the movements can be characterized by a fast feed-forward transition movement followed by multiple corrections until a final position is reached. By assuming that the initial head movement has the characteristics of a pre-programmed time-optimal movement (Zangemeister & Stark, 1981; Stark, Zangemeister & Hannaford, 1988; Tweed et al., 1995; Freedman & Sparks, 2000), velocity profiles corresponding to the intended movements were reconstructed. This model shows that the IIHM would reach an asymptote if uninterrupted, indicating that the head initially moves to a predetermined position regardless of target eccentricity beyond a certain range. Remarkably, IIHM amplitude was very similar across subjects. Corrective movements were iteratively generated until the head was stabilized in a final position and the gaze (eye + head direction) was aimed at the target. The relationship between the initial peak velocity and target eccentricity was nonlinear, while a linear relationship was found between peak velocity and the amplitude of the initial head movement. Consequently, it is suggested that initial head movements are programmed for an intended destination largely independent of target eccentricity. These movements are most likely intended to carry the head to an intermediate optimal location, estimated from cognitive mapping based on the visuospatial representation of the environment known to the subject, allowing the eyes to reach any expected target. Cognitive mapping and control of the IIHM must be assumed, since the representation of the target location initially lacks accuracy because of its eccentricity.
If visual search for targets beyond the visual field induces a reiteration of transition-and-scanning processes, it can also be assumed that the subsequent scanning components are controlled by on-line feedback corrections, perhaps based on sensorimotor mapping. This strategy, a multi-phasic sequence of feed-forward and consecutive corrective components, is in agreement with the classic view that reaching movements consist of a fast initial phase that is primarily ballistic, followed by a slower adjustment phase under the guidance of sensory feedback
(Woodworth, 1899; Jeannerod, 1988). In addition, Brown and Cooke (1981) found that a brief perturbation immediately prior to movement onset altered only the late phase of the initial agonist burst of biceps activity, while the early phase was unaffected. Nevertheless, the presence of multiple corrective movements of the head is puzzling, since the head is apparently not required to aim in a specific direction.
2.5.2 Eye-Head Relationship
In the present case, secondary corrective movements may be employed to displace the head to a specific location, which can be described as a constant proportion of target eccentricity (Fuller, 1992; Freedman & Sparks, 2000). Based on previous studies, it can be estimated that gaze is already on the target by the time the corrective movements take place (Uemura et al., 1980; Biguer et al., 1984; Carnahan & Marteniuk, 1991; Vercher et al., 1994). As the movement is not constrained, the secondary adjustments may compensate for a conservative underestimation of the head movement and reduce the cost of initiating a fast, large head movement that would be truncated anyway. Head inertia is not negligible, and it has been shown that head movement peak acceleration and velocity decrease when head inertia is increased (Gauthier et al., 1986; Martin et al., 1986). Consequently, it is proposed that the goal of head movement control may include the achievement of a certain combination of eye and head orientation with regard to target position. This hypothesis is also indicative of the use of an egocentric coordinate system, as opposed to the allocentric coordinate system that could be used in the first feed-forward phase of the motion to place the head in an intermediate spatial location. Maintaining a constant relative eye-head position, easier to achieve in an egocentric coordinate system, would help to better estimate both the eye-in-space and eye-in-torso positions, which otherwise require a significant amount of reference frame transformation to provide a visuospatial representation. Since the control of head movement cannot rely on allocentric coordinates, which are normally derived from visual information about the end-effector position, it is likely that proprioceptive information originating from both neck and extraocular muscles provides the feedback necessary for corrective movements.
This hypothesis is supported by the
similarity of adjustment patterns of aiming movements using only proprioceptive feedback (Adamo, Martin & Brown, 2004). Proprioceptive information is known to be used to control movement in joint space coordinates, while visual information plays an important role in controlling movements in global coordinates (Sober & Sabes, 2003).
2.5.3 Range of Motion
A potential alternative explanation for the limited amplitude of the IIHM is the mechanical limitation imposed by the range of motion of the head and neck joints, since the maximum target eccentricity used in the present study (120°) is much larger than the 59° range of motion of the head and neck (Melzer & Moffitt, 1997). However, the amplitude of the IIHM is only 20° on average, far smaller than the range of motion of the head-neck joint. In addition, it has been reported that the normalized comfort score decreases monotonically with horizontal head/neck rotation below 75°, while an abrupt decrease is observed beyond 75° (Kee & Karwowski, 2001). Hence it is unlikely that range of motion or effort constraints determine the amplitude of the IIHM.
2.5.4 Optimality in Head Movement Programming
The asymptotic behavior of the IIHM amplitude and the subsequent corrective movements lead to the hypothesis that the CNS selects an optimal amplitude of the IIHM when the target is beyond the initial visual field. This optimal amplitude may be related to the task requirement of visual target localization for all potential target positions known to the subject. In the present experiment, for example, the combination of the 90° visual field (Haines & Gilliland, 1973) and the ~20° asymptotic amplitude of the IIHM would make almost all targets (eccentricity ≤ 120°) visible, even without eye movements.
2.5.5 Conclusion
The present study demonstrated that head movements are composed of multiple phases: an IIHM followed by corrective movements. This suggests that the IIHM is controlled by a cognitive map, while the corrective movements are controlled by sensorimotor feedback. Although the amplitude of the IIHM is only weakly related to target eccentricity, the present study indicates that the goal of both the IIHM and the corrections is to achieve a certain desired orientation of the head. Head orientation should therefore be further investigated as a function of visual target position in order to address why the CNS attempts to achieve a specific orientation.
CHAPTER 3 MEASUREMENT OF THE HEAD MOVEMENT CONTRIBUTION RATIO
3.1 Abstract

Since eye movements generally make the primary contribution to the visual acquisition of a target, head movement amplitude is limited to a fraction of the angular distance to the target. In this chapter, the proportion of head orientation relative to target orientation, named the head movement contribution ratio (HMCR), was quantified as a function of target eccentricity. Subjects oriented their gaze to randomly presented visual targets distributed along an arc placed horizontally (elevation: 0, −20, +20° from eye level) or vertically (azimuth: 0, 20, 40, 90° to the right of the mid-sagittal plane). A nonlinear regression model based on the measured head orientations showed that the horizontal and vertical HMCRs were approximately 68% and 43% of the target azimuth and elevation angles, respectively, and were significantly affected by the interaction of target azimuth and elevation. The model was implemented in digital human modeling software and its performance was evaluated.
3.2 Introduction

Head orientation is closely related to visual target locations (Guitton and Volle, 1987; Lestienne et al., 1995). The more eccentric a target is from the sagittal plane, the more the head rotates to carry the eyes. However, the head does not directly aim at the target, but travels only a proportion of the angular distance to the target (Guitton and Volle, 1987; Fuller, 1992; Lestienne et al., 1995; Stahl, 1999). The magnitude of head movements in response to a visual target can be quantified by the head movement
contribution ratio (HMCR), which is defined as the ratio of the head orientation angle to the angular distance to the target from the initial head orientation. For instance, if the head rotates 15° in response to a visual target located 30° away from the sagittal plane, the corresponding HMCR is 50%; the remaining 50% of the rotation necessary to acquire the target is accomplished by eye movements. Previous studies have used ranges of target presentation limited to ±80° (Guitton and Volle, 1987), ±20° (Fuller, 1992), ±90° (Stahl, 1999), and ±25° (Freedman and Sparks, 2000), with targets presented in the horizontal plane only. It is not yet clear whether the HMCR is a linear or nonlinear function of target eccentricity. Delleman et al. (2001) measured head movement contribution for visual targets distributed from 0° (straight forward) to 180° (straight backward) in the horizontal plane. They found an HMCR for horizontal gaze of approximately 84%, showing a remarkably linear correlation with a Pearson's correlation coefficient of 0.98. However, Stahl (1999) used visual targets subtending ±50° and found that the HMCR was less than 10% within a region around the sagittal plane (35.8 ± 31.9°), with distinct boundaries between small (eye-only range) and large (eye-head range) head movement contribution regions. Furthermore, head movement measured in previous studies has been restricted to a single dimension, that is, either horizontal (Fuller, 1992; Stahl, 1999; Delleman et al., 2001) or vertical head movements (Hin and Delleman, 2000). However, since the range of motion of a joint varies depending on the position and orientation of the adjacent joints (Webb Associates, 1978), it is likely that head orientation may differ when the two directions (horizontal and vertical) interact, such as in oblique movements.
Hence, it can be expected that head orientation will be better predicted when all associated degrees of freedom are considered together. In Chapter 2, it was suggested that a goal of head movement in visually guided tasks was to achieve a desired proportional contribution of the head to gaze. This perspective called for modeling the final head orientation as a postural response to visual target presentation, since it was assumed that the goal posture (orientation) was specified as a function of task requirement, context, and biomechanical constraints independently
from movement dynamics. Hence in the present study the HMCR was modeled as a function of target location. The specific objectives and hypotheses were as follows:

Objectives
1) To investigate the influence of target location on head orientation over a large range of target eccentricity distributed in a three-dimensional space
2) To develop a model of head orientation as a function of target location

Hypotheses
1) The HMCR is affected by the interactions of multiple joint mobility.
2) The HMCR is a nonlinear function of target locations.
3) The final head orientation represents an optimal posture for the task requirement and movement context.
3.3 Methods
3.3.1 Subjects
Five male and five female subjects participated in the experiments as paid volunteers. Subjects were students from the University of Michigan recruited through in-class advertisements or email announcements. The mean age of the subjects was 22.3 years (SD: 1.8). All subjects were free from any known musculoskeletal or neurological disorders and had normal vision (20/20 or better) without corrective lenses. Mean stature and body weight were 170.9 cm (SD: 12.0) and 67.2 kg (SD: 16.3), respectively. All subjects were right-handed.
3.3.2 Equipment Visual targets were placed on an arc (radius = 115cm, arc length = 300cm) set either horizontally or vertically in front of the subject. In the horizontal configuration, the elevation of the target arc was set at −25°, 0°, or +25° from the eye level. In the vertical configuration, the azimuth of the target arc was set to 0°, 20°, 45°, or 90° to the right of the sagittal plane (Figure 2.1). The horizontal arc corresponded to 120° of visual angle from the mid-sagittal plane, while the vertical arc corresponded to ±50° from the
horizontal plane at eye level. Each visual target was composed of either alphanumeric characters (0 – 9, A, C, E, F, H, L, U, or P) or a horizontal bar (–) displayed on seven-segment LEDs whose visual angles were approximately 0.25°. The displays were placed along the arc at intervals corresponding to 10° of visual angle along the arc. The subject was seated on a chair (seat pan height = 40cm, seat pan width = 50cm, back support height = 58cm) throughout the experiments. The seat position and arc height were adjusted so that the center of the display coincided with the center of rotation of the head (atlanto-occipital joint) and the individual subject’s nasion aimed at the 0° target location (Figure 3.1 A and B). The entire room was dimly illuminated during the experiments.
Figure 3.1. A) Horizontal target array. B) Vertical target array. C) Vertical target arrays set at four different azimuths.
3.3.3 Movement Measurement An electromagnetic motion capture system (Flock of Birds™, Ascension Technology) with a sensor that measured movements with six degrees of freedom was used to record head movements. The sensor was placed on the forehead, and a calibration procedure was carried out to determine the location of anatomical landmarks (nasion, tragion, infraorbitale, etc.) in the sensor’s local reference frame. The landmarks’ motions in a global reference frame were then calculated as follows:
p^{global}(t) = T^{global}_{sensor}(t) \, p^{sensor}

where p^{sensor} denotes the location of a landmark in the sensor-attached reference frame and T^{global}_{sensor}(t) represents the recorded homogeneous transformation matrix describing the orientation and location of the sensor-attached reference frame as a function of time. The movements were recorded at a 25Hz sampling frequency, and the trajectory of each landmark was smoothed in an off-line process using a zero-phase-shift second-order Butterworth low-pass filter with a 6Hz cutoff frequency.
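The landmark transformation and the zero-phase filtering described here can be sketched as follows, assuming NumPy and SciPy are available; the function names are illustrative and not part of the original analysis code:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def landmark_global(T, p_sensor):
    """Map a landmark from the sensor-attached frame to the global
    frame using the recorded 4x4 homogeneous transform T."""
    p_h = np.append(p_sensor, 1.0)  # homogeneous coordinates
    return (T @ p_h)[:3]

def smooth_trajectory(x, fs=25.0, fc=6.0, order=2):
    """Zero-phase second-order Butterworth low-pass filter with a 6 Hz
    cutoff: filtfilt runs the filter forward and backward, cancelling
    the phase shift, as in the off-line smoothing described above."""
    b, a = butter(order, fc / (fs / 2.0))
    return filtfilt(b, a, x)
```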
3.3.4 Procedure Each subject was asked to sit on the chair at the center of the target display arc. As described above, the locations of anatomical landmarks were measured during the calibration procedure. The subject was provided with pilot trials until the experimental protocol became familiar. For each trial, the subject was asked to look at the initial home display located in the sagittal plane until it disappeared (duration = 1 s), and then to redirect the gaze to an eccentric target illuminated for 2 s. Home and target display illuminations were accompanied by 100 ms distinct tones of 500Hz and 2000Hz, respectively. The subject was asked to fixate on the illuminated display until the tone signaled the appearance of the next display (either home or target), at which the subject was subsequently required to redirect the gaze. Throughout the experiments the head was free to move. No specific instructions were provided regarding head movements. A trial was composed of the sequential occurrence of a home and target presentation, and a block was composed of 24 trials (12 target eccentricities × 2 replications) performed on a single arc configuration. The target locations were randomized and balanced within a block. A total of 14 blocks, separated by a 5-minute rest period, were performed by each subject. Within one block the visual displays, for both home and eccentric targets, were alphanumeric characters, which the subject was asked to read aloud. In the other type of blocks, a horizontal bar was presented for the home and eccentric targets, and the subject verbally reported the ordinal value of the appearance. The order of the blocks was balanced and randomized across the subjects. The duration of the entire experiment was approximately 1.5 hours per subject.
The procedures were reviewed and approved by the University of Michigan Health Sciences Institutional Review Board for compliance with the appropriate guidelines and state and federal regulations.
3.3.5 Data Analysis
The timings of initiation and termination of head movements were determined during the off-line analysis of the recorded angles. The initiation of head movement was defined as the time when the head had been immobile for the previous 120 ms (3 consecutive sampling frames) and engaged in active rotation for the next 120 ms. The threshold for active head rotation was set to an angular velocity ≥ 25°/s. Similarly, the termination of the head movement was defined as the time when the head returned to a stationary position for at least 280 ms (7 consecutive sampling frames). Target locations were represented by the azimuth and elevation angle from the mid-sagittal plane and horizontal plane at eye level, respectively. Head orientation angles were represented by three Euler angles including horizontal rotation (+: left), flexion/extension (+: up), and cyclotorsion (+: CW), which corresponded to the sequential rotations about the head-attached z- (+: up), x- (+: right), and y-axis (+: forward) when using the right-hand rule (Figure 3.2A). Head orientation angles were calculated from the rotation matrix with respect to the global reference frame as follows:

R = \begin{bmatrix}
\cos\alpha_z\cos\alpha_x - \sin\alpha_z\sin\alpha_y\sin\alpha_x & -\sin\alpha_z\cos\alpha_y & \cos\alpha_z\sin\alpha_x + \sin\alpha_z\sin\alpha_y\cos\alpha_x \\
\sin\alpha_z\cos\alpha_x + \cos\alpha_z\sin\alpha_y\sin\alpha_x & \cos\alpha_z\cos\alpha_y & \sin\alpha_z\sin\alpha_x - \cos\alpha_z\sin\alpha_y\cos\alpha_x \\
-\cos\alpha_y\sin\alpha_x & \sin\alpha_y & \cos\alpha_y\cos\alpha_x
\end{bmatrix} \quad (Eq. 3.2)
where αz, αx, and αy represent the head orientation angles about the z-, x-, and y-axes (zhead, xhead, and yhead, respectively) in the head-centered reference frame. The transformation matrix representing the head orientation was determined by 1) placing the origin of the head reference system (xhead - yhead - zhead) at the origin of the sight vector (nasion); 2) aligning the yhead axis with the sight vector and the xhead axis with the vector
from the left to the right tragion; and 3) computing the zhead axis orientation by the cross product of xhead and yhead (Figure 3.2B).
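Under the stated conventions, the head-frame construction and the recovery of the three Euler angles from the rotation matrix of Eq. 3.2 can be sketched as follows (NumPy assumed; function names are illustrative):

```python
import numpy as np

def head_frame(sight_dir, tragion_left, tragion_right):
    """Head reference frame per Section 3.3.5: y_head along the sight
    vector, x_head from the left to the right tragion, z_head their
    cross product. Columns of the result are x_head, y_head, z_head."""
    y = sight_dir / np.linalg.norm(sight_dir)
    x = tragion_right - tragion_left
    x = x / np.linalg.norm(x)
    z = np.cross(x, y)
    return np.column_stack([x, y, z / np.linalg.norm(z)])

def euler_angles(R):
    """Recover (alpha_z, alpha_x, alpha_y) in radians from the
    rotation matrix of Eq. 3.2, reading off its closed-form entries."""
    a_y = np.arcsin(R[2, 1])
    a_x = np.arctan2(-R[2, 0], R[2, 2])
    a_z = np.arctan2(-R[0, 1], R[1, 1])
    return a_z, a_x, a_y
```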
Figure 3.2. A) Representation of head orientation angles. 1: horizontal rotation (αz); 2: vertical flexion/extension (αx); 3: cyclotorsion (αy). Each arrow indicates the positive direction of the corresponding joint rotation. B) Definition of the head reference frame.
3.4 Results
3.4.1 Head Movement Contribution Ratio (HMCR)
To determine the most appropriate form of the HMCR model, a preliminary analysis was performed on head movements for the horizontal target arc at eye level and the vertical arc at 0° azimuth. The HMCR increased nonlinearly with target eccentricity (Figure 3.3A). Specifically, for small target eccentricities (approximately < 10°), the horizontal HMCR remained close to 0% (eye motion region). For target eccentricities greater than 10°, the HMCR was 72% (eye + head motion region). The eye motion region of the vertical HMCR varied significantly between subjects. In addition, the vertical HMCR was smaller (10%) for targets below eye level than for targets above eye level (49%) in the eye + head motion region (Figure 3.3B). The magnitude of the head rotation (φhead) can be estimated as follows:
φhead = 0, for |φtarget| < 10°
φhead = λ φtarget, for |φtarget| ≥ 10°

where φtarget represents the target eccentricity in degrees and λ represents the HMCR.
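A minimal sketch of this piecewise rule, with the 10° eye-motion region and the 72% horizontal HMCR reported above used as illustrative defaults:

```python
def head_rotation(target_deg, hmcr=0.72, dead_zone=10.0):
    """Piecewise head-rotation model: the head stays still inside the
    eye-motion region and contributes a fixed proportion (the HMCR)
    of target eccentricity beyond it. Defaults follow the horizontal
    eye-level arc observations (72% HMCR, 10 deg eye-motion region)."""
    if abs(target_deg) < dead_zone:
        return 0.0
    return hmcr * target_deg
```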
Figure 3.3. Distribution of head orientation angle as a function of target eccentricity. A) Horizontal target arc at eye level; B) Vertical target arc at 0° azimuth.
The magnitudes of the eye motion region and the HMCR varied greatly between subjects. Among all subjects, the size of the eye motion region ranged between 0° and 40° for the horizontal target array at eye level. The smallest horizontal HMCR was 59% and the largest was 82%.

3.4.2 Modeling of Head Orientation
The head orientation angles were modeled using piecewise functions of target azimuth and elevation angles in order to distinguish 1) the “eye alone” and “eye + head” regions of the horizontal head rotation and cyclotorsion angles; and 2) the HMCRs for the below and above eye level regions for vertical head orientation. The observations from all subjects were pooled together, and the fitting function parameters (Eq. 3.5 – Eq. 3.7) were estimated by nonlinear regression models using a Levenberg-Marquardt method (Bates & Watts, 1988). Based on the preliminary observations, the nonlinear prediction functions of horizontal rotation (αz), vertical flexion/extension (αx), and cyclotorsion (αy) angles were determined as follows: For horizontal rotation (Eq. 3.5), the HMCR was specified by the parameter β1, and the size of the eye-motion region (Figure 3.3) was expressed as the dead zone threshold β2. In addition, the interaction between target azimuth and elevation was represented by β3.
αz = β1 sgn(θ) max(|θ| − β2, 0) + β3 θ ϕ² (Eq. 3.5)

where θ : target azimuth (°) from the mid-sagittal plane (Right: +)
ϕ : target elevation (°) from the horizontal plane at eye level (Up: +)
For vertical flexion/extension (Eq. 3.6), it was assumed that the HMCR was represented by β5 + β7 or β5 when target elevation was above or below a specified level (β6), respectively. In addition, it was assumed that there was a preferred level of baseline inclination angle (β4) and an interaction between the target azimuth and elevation (β8).
αx = β4 + β5 (ϕ − β6) + β7 max(ϕ − β6, 0) + β8 θ² ϕ (Eq. 3.6)
The cyclotorsion angle (Eq. 3.7) is expressed as a function of target azimuth including a dead zone with a threshold (β10).
αy = β9 sgn(θ) max(|θ| − β10, 0) (Eq. 3.7)
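The fitting approach described in this section (nonlinear regression with a Levenberg-Marquardt method) can be sketched for Eq. 3.5 as follows; `scipy.optimize.curve_fit` is a stand-in for whatever software was actually used, and the starting values are illustrative. Note that the dead-zone term makes the model non-smooth at |θ| = β2, so a smooth approximation may be preferable in practice:

```python
import numpy as np
from scipy.optimize import curve_fit

def eq_3_5(X, b1, b2, b3):
    """Horizontal head rotation (Eq. 3.5): b1 is the HMCR, b2 the
    dead-zone (eye-motion region) threshold, and b3 the
    azimuth-elevation interaction."""
    theta, phi = X
    return (b1 * np.sign(theta) * np.maximum(np.abs(theta) - b2, 0.0)
            + b3 * theta * phi**2)

# With pooled observations theta_obs, phi_obs and responses alpha_z_obs:
# params, cov = curve_fit(eq_3_5, (theta_obs, phi_obs), alpha_z_obs,
#                         p0=[0.7, 5.0, 0.0], method='lm')
```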
The estimated parameters for each regression model, with the corresponding r² coefficients, are listed in Table 3.1. The results indicate that the horizontal HMCR is approximately 68% of target azimuth, and the eye-motion region is ±3° from the mid-sagittal plane (r² = 0.95). The vertical HMCR is 71% or 16% when the targets are presented above or below 19° of elevation, respectively (r² = 0.95). The cyclotorsion (tilt) is null for target azimuths less than 57° and is 7% of target azimuth beyond 57°. The small r² coefficient (0.13) indicates that a large variation exists. The response surfaces of the developed model, described as a function of target azimuth and elevation, are illustrated in Figure 3.4.

Table 3.1. Model parameter estimates and r² coefficients

Model                           Parameter   Estimate    95% confidence interval
Horizontal rotation (Eq. 3.5)   β1           0.68684    [0.67215, 0.70153]
                                β2           3.10255    [1.67323, 4.53187]
                                β3          −0.00002    [−0.00003, −0.000008]
Flexion/extension (Eq. 3.6)     β4           7.89010    [5.58150, 10.19870]
                                β5           0.16720    [0.11649, 0.21791]
                                β6          18.92262    [14.70743, 23.13781]
                                β7           0.53943    [0.44053, 0.63834]
                                β8           0.00002    [0.000014, 0.000025]
Cyclotorsion (Eq. 3.7)          β9           0.07629    [0.04638, 0.10620]
                                β10         56.78490    [41.21422, 72.35559]
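Using the point estimates of Table 3.1, the three model equations can be evaluated directly; a sketch in Python (function and variable names are illustrative):

```python
import numpy as np

# Point estimates from Table 3.1 (b1..b3: Eq. 3.5; b4..b8: Eq. 3.6;
# b9, b10: Eq. 3.7).
B = dict(b1=0.68684, b2=3.10255, b3=-0.00002, b4=7.89010, b5=0.16720,
         b6=18.92262, b7=0.53943, b8=0.00002, b9=0.07629, b10=56.78490)

def predict_head_orientation(theta, phi, p=B):
    """Predicted head orientation angles (deg) for a target at azimuth
    theta and elevation phi: horizontal rotation, flexion/extension,
    and cyclotorsion per Eq. 3.5 - Eq. 3.7."""
    az = (p['b1'] * np.sign(theta) * max(abs(theta) - p['b2'], 0.0)
          + p['b3'] * theta * phi**2)
    ax = (p['b4'] + p['b5'] * (phi - p['b6'])
          + p['b7'] * max(phi - p['b6'], 0.0)
          + p['b8'] * theta**2 * phi)
    ay = p['b9'] * np.sign(theta) * max(abs(theta) - p['b10'], 0.0)
    return az, ax, ay
```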
Figure 3.4. Models of head orientation as a function of target azimuth and elevation. A) horizontal head orientation; B) vertical head orientation; C) cyclotorsion (tilt)
3.4.3 Model Implementation
Examples of model predictions implemented in a digital human modeling software (Unigraphics Jack™) are presented in Figure 3.5 for two target locations (θ = −120° azimuth, ϕ = 0° elevation, Figure 3.5A; θ = −60°, ϕ = 70°, Figure 3.5C). The model performance was contrasted with the predictions of the inverse kinematics algorithms built into the Jack™ software for identical target locations (Figure 3.5B and D). The results showed that the present model produced more natural appearances than the built-in algorithm for the head and neck joint angles.
Figure 3.5. Simulation of head orientation. Proposed model (left panels A & C) compared to built-in Jack™ model (right panels B & D).
Figure 3.5 (Continued). Proposed model (left panels E & G) compared to built-in Jack™ model (right panels F & H). In general, the new model generated a less exaggerated cyclotorsion (tilt) angle for a visual target at the same location.
3.5 Discussion
3.5.1 Nonlinearity in HMCR
Head orientation was modeled as a nonlinear function of eccentricity for visual targets distributed in a three-dimensional space. Specifically, nonlinearity could be characterized by 1) an eye-only region of approximately ±3° for horizontal head rotation and ±57° for cyclotorsion; 2) a vertical HMCR differing for elevations above and below 19° of eye level; and 3) an HMCR affected by the interaction of target azimuth and elevation. These findings were consistent with the results of some studies (Fuller, 1992; Stahl, 1999), while the results of other studies suggested linearity of the HMCRs (Hin and Delleman, 2000; Delleman et al., 2001). The apparent inconsistency can be partially explained by differences in target configuration and experimental tasks. In the studies showing linearity in HMCRs, the tasks consisted of sequentially moving the gaze in successive 15° increments along the hemispherical range (Hin and Delleman, 2000; Delleman et al., 2001). Since target eccentricity with respect to the head (visual eccentricity) was in fact 15° regardless of the eccentricity with respect to the torso (geometric eccentricity), the resulting head movements would be expected to show a linear HMCR. It should be noted that 15° of target eccentricity is smaller than the asymptotic amplitude of the initially intended head movement (Chapter 2); hence it is likely that the control of head movement was dominantly based on the sensorimotor mapping. In studies showing nonlinearity in HMCRs (Zangemeister, Jones, & Stark, 1981; Fuller, 1992; Guitton, 1992; Goldring et al., 1996; Stahl, 1999), the experimental task consisted of displacing the gaze from the sagittal plane to a randomly presented eccentric target and returning back to the sagittal plane, as in the present study.
Hence it is suggested that the nonlinearity of the HMCR is related to target eccentricity with respect to the head (visual origin) rather than torso (mechanical origin). As the HMCR presented a discontinuity, Stahl (1999) suggested that describing the HMCR by a regression equation is difficult even with high order polynomial terms. In the present study, the HMCRs were modeled using nonlinear piecewise regression
models in order to reflect the behavior of the HMCR changing by regions. This behavior may also have resulted from the large inter-subject variability, also acknowledged in other studies (Goldring et al., 1996; Fuller, 1992). As observed by Fuller (1992), in the present study some subjects moved the head to the 10° target (head movers), while others did not (non-head movers). Hence, on average, the model included a small eye-only region of approximately 3°. The non-head movers also presented a smaller HMCR than the head movers.

3.5.2 Range of Motion and Eye-Only Region
The full range of motion (ROM) of the horizontal eye rotation is ±55° (Guitton & Volle, 1987). Hence, if a target is presented beyond the eye ROM, visual gaze can be achieved only with the contribution of neck-head and/or torso motion. Since the ROM of horizontal neck-head rotation is ±64° (Sherk, 1989), the resultant ROM of the gaze, which is equivalent to the sum of the eye-in-the-head and head-in-the-torso angles, reaches up to ±109°. Stahl (1999) reported that the average functional ROM of the eyes (COMR: Customary Oculo-Motor Range), defined as the region within which the eyes fall with a frequency of 90% for all head orientations, was 44 ± 23.8°. Hence the functional ROM of the eyes was normally smaller than the mechanical ROM. Based on this result, it was suggested that the HMCRs are either scaled or gated so as to make the final eye eccentricity remain within the limits of the COMR. In fact, the COMR, rather than the mechanical ROM of the eyes, seemed to be a useful indication of an individual’s head movement propensity. However, the size of the eye-only region measured in this study was 3°, which was significantly smaller than the COMR (44 ± 23.8°) and the full range of eye movements (55°), suggesting that the functional limit of the eye-only region may be neurally programmed rather than mechanically constrained for gaze displacement tasks.
Furthermore, the limitation of eye movements compensated by head movements and the head corrective movements described in Chapter 2 support the hypothesis that the combination of eye-head movement is organized for a specific purpose.
3.5.3 Effect of Three-Dimensional Target Locations The model equations proposed in the present study (Eq. 3.5 – Eq. 3.7) indicated that both target azimuths and elevations are required to predict the horizontal rotation and vertical flexion/extension angles. As described previously, this is due to the interactions between adjacent joints that impose limitations on joint mobility (Webb Associates, 1978). Specifically, the present model reflects the interaction between horizontal and vertical head rotations: 1) the amplitude of horizontal head rotation decreases for targets of very high and very low elevation; 2) the amplitude of vertical head rotation increases for very large horizontal head rotations; 3) cyclotorsional angle to the right increases with horizontal head rotation to the right hemisphere and vice versa.
CHAPTER 4 EYE AND HEAD ORIENTATION FOR MINIMUM SENSORIMOTOR ERROR
4.1 Abstract Gaze movements include eye-in-the-head and head-in-the-torso movements. It was hypothesized that errors related to the sensory representation and movement control of each component constitute a cost function, and that the contribution of each component to gaze movements is programmed in a way that minimizes the unweighted sum of two cost functions associated with their respective errors. Seated subjects aimed with the right fingertip at the remembered position of a target presented in complete darkness, while the head was fixed at either 0 (forward), 15, 30, 45 or 60° (right) from the torso mid-sagittal plane. Target eccentricities were 15–90° with respect to the head-aiming azimuth. The results indicated that finger-aiming error increases with target eccentricity with respect to the head (= eye-in-the-head angle), and head-aiming azimuth with respect to the torso (= head-in-the-torso angle). Second-order polynomial regression models were used to describe the finger-aiming error as a function of head and target azimuths. A quadratic optimization model, whose objective function was to minimize the unweighted sum of errors associated with eye-in-the-head and head-in-the-torso angles, was developed to predict the optimal set of head and eye contributions for a given target eccentricity. In this context, the prediction model indicates that head orientation would be approximately 60% of target eccentricity on average, which is in agreement with the measured head movement contribution ratio (HMCR) of 72% for horizontal head rotations. In addition, the model-predicted HMCRs are positively correlated with the HMCRs measured from individual subjects (r = 0.65). These results support the
hypothesis that final postures are optimal sets of joint contributions programmed in an egocentric reference frame for the achievement of task goals.
4.2 Introduction Head movements contribute to gaze displacement by expanding the accessible field of view limited by the range of motion (ROM) of the eyes. As described in Chapter 3, the angular distance the head contributes to gaze is typically smaller than target eccentricity (Freedman & Sparks, 1997; Stahl, 1999; Guitton & Volle, 1987; Fuller, 1992), but the reasons for this limited displacement are not clearly known. Undershooting patterns are not unique to head movements; they are also found for eye, finger, elbow and shoulder movements, particularly when visual feedback of the corresponding limb segment is not provided (Kapoula, 1985; Helsen, Elliot, Starkes & Ricker, 2000). The undershooting of head movements can thus be understood within the context of general movement control strategies, especially with regard to the manipulation of redundant degrees of freedom. Stahl (1999) suggested that head movement amplitude is controlled to maintain the eye within the functional range of motion and that this control is related to a neural rather than a mechanical issue. A large intrasubject variability has also been found for head movement amplitude (Fuller, 1992; Bard et al., 1992; Goldring et al., 1996). Assuming that the final position of the end-effector is the goal of the CNS for movement control, it has been suggested that the variability in the relative contributions of individual limb segments paradoxically enables the accuracy of the end-effector position. Specifically, Todorov and Jordan (2002) indicated that as long as the performance of the main task (= final position of the end-effector, for example), which is the primary concern of the CNS, is not compromised, the movements of individual segments are only minimally supervised and thus may show significant variability. This viewpoint is also supported by the reduction in finger pointing variability associated with variability in the elbow and shoulder (Helsen et al., 2000).
Since the primary task of the central nervous system may be to control gaze, head movements alone may be minimally supervised in the same context.
However, as described in Chapter 2, even after the gaze (which can be considered the end-effector for visual target localization movements) is aimed at the target, the head makes corrective movements until the desired set of proportional contributions of eye and head orientation (= HMCR) is achieved. In addition, the HMCR was consistent for a given target position (Chapter 3). While the HMCR does not vary with the mode of target presentation (visual, auditory, or visual + auditory), a large inter-subject variability was observed (Goldring et al., 1996). As described in previous chapters, it is therefore hypothesized that the relative orientation of the eye and head, which can be seen as a postural response of the gaze control system, is a goal that the CNS tries to achieve, since the desired posture is optimized for a given task requirement and movement context. However, the specific cost function for the optimization of posture is largely unknown. A potential clue comes from a study showing that individual body segments are controlled in such a way that the sum of the signal-dependent noises, which is proportional to the magnitude of the control signal itself, can be minimized (Wolpert, Ghahramani & Jordan, 1995). This means that in a situation where only a single segment (e.g., the head) is allowed to move to fulfill the task requirements (e.g., gaze displacement), a large displacement amplitude of the corresponding joint would be necessary, further corresponding to an increased level of noise in the control signal. Hence, in the context of large movements, the movements of a single segment are more prone to noise than are the movements of multiple segments, as in the latter case the displacement can be redistributed over several joints (e.g., the head and eye). In addition, the optimal limb segment angles may be related to attempts by the CNS to minimize errors in the sensory representations of target and body segment positions.
For the eyes, the accuracy of target position information degrades with the angular distance from the foveal line of sight (Paillard & Amblard, 1985; Bock, 1993). Likewise, the accuracy of proprioception in extraocular muscles and neck muscles is best when the corresponding body segments are in neutral (resting) position (Biguer, Prablanc & Jeannerod, 1984; Roll, Velay & Roll, 1991; Abeele, Delreux, Crommelinck & Roucoux, 1993; Rossetti et al., 1994). The variability of finger pointing movements to the remembered position of a visual target increases in complete darkness, when gaze
displacement is made either by head or eye movements alone (Biguer et al., 1984). It has been claimed that without visual feedback, target position representation, which can only be estimated from eye-in-the-head and head-in-the-torso angles, would be deteriorated by the larger proprioceptive errors resulting from the large deviation of an individual segment from the neutral position. Similarly, the aiming error of forearm movements based primarily on proprioceptive information increases with displacement amplitude (Adamo et al., 2003). Rossetti et al. (1994) have proposed a model of eye and head orientation in which the contribution of the individual components should be optimized in such a way that the combined error (eye and head proprioceptive information) for target position information can be minimized. However, it is still not clear whether head orientation/displacement amplitude is necessarily associated with the dispersion or variability of the finger pointing position. A number of studies have shown that the undershooting of finger pointing movements increases with target eccentricity (Fookson et al., 1994; de Graaf et al., 1994; Darling et al., 1996; Adamovich et al., 1998; Chieffi et al., 1999; Becker & Saglam, 2001). The model proposed by Rossetti et al. (1994), which quantified the endpoint dispersion of finger pointing movements, predicted that the optimum contribution of the head would be 90% of the target eccentricity, while other studies (Fuller, 1992; Guitton & Volle, 1987; Chapter 3) have shown that the HMCR is approximately 65-75%. Another limitation of the model proposed by Rossetti et al. (1994) concerns whether the control of head movement can effectively be modeled from the measurement of finger pointing/reaching movements.
Specifically, head movements are likely to be represented and programmed in a spherical coordinate system (azimuth and elevation), while pointing/reaching movements are represented in a Cartesian coordinate system (x−y−z coordinates). Furthermore, vector-coded target representations (Bock & Eckmiller, 1986; De Graaf et al., 1996) and point-coded target representations (Polit & Bizzi, 1979) are expected to lead to different movement strategies. Since gaze movements are intended to align the line of sight with the target, it can be assumed that head and eye movements are controlled using a vector-coded target representation, while finger reaching/pointing movements are based on a point-coded target representation.
The objectives and hypotheses of the present study are as follows:

Objectives
1) To measure hand aiming error, in the absence of visual feedback of the limb segment, as a function of head eccentricity from the torso and target eccentricity from the head
2) To determine the role of sensorimotor error in the relative contributions of eye and head movements to gaze
3) To develop an empirical model describing head movement contribution based on error measurement and optimization methods
4) To investigate the optimality of head posture from the perspective of visual target acquisition, task requirements, and subsequent hand movements

Hypotheses
1) Sensorimotor error can be more explicitly quantified by the systematic
undershooting magnitude of finger-aiming direction, rather than the dispersion of the finger pointing/reaching end points
2) The head movement contribution can be determined by the minimization of the unweighted sum of two types of errors:
• The error in encoding target eccentricity with respect to the head-aiming direction (= eye-in-the-head angles, assuming that gaze is on the target). This error would increase with target eccentricity
• The error in encoding head-aiming direction, which increases with head eccentricity (= head-in-the-torso angle)
3) Head posture is optimized to improve/enhance the accuracy of subsequent hand and whole-body movements
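The optimization idea in Hypothesis 2 can be sketched numerically: with quadratic stand-ins for the two error terms, the head contribution that minimizes their unweighted sum can be found by a simple grid search. The coefficients in the usage comment are illustrative, not the fitted values:

```python
import numpy as np

def optimal_head_contribution(target_deg, eye_err, head_err):
    """Grid-search the head angle h in [0, target] minimizing the
    unweighted sum of two quadratic error terms:
      - eye error, a function of the eye-in-the-head angle (target - h)
      - head error, a function of the head-in-the-torso angle h
    eye_err and head_err are (a, b, c) coefficients of a*x**2 + b*x + c,
    standing in for second-order polynomial error models."""
    h = np.linspace(0.0, target_deg, 1001)
    quad = lambda c, x: c[0] * x**2 + c[1] * x + c[2]
    cost = quad(eye_err, target_deg - h) + quad(head_err, h)
    return h[np.argmin(cost)]

# With equal curvatures the optimum splits the gaze shift evenly; a
# larger eye-error curvature pushes the optimum toward a larger head
# contribution, the direction of the ~60% prediction reported later.
```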
4.3 Methods
4.3.1 Subjects
Five male and five female subjects participated in the experiments as paid volunteers. Subjects were students from the University of Michigan recruited through in-class advertisements or email announcements. The mean age of the subjects was 22.3 years (SD: 1.8). All subjects were free from any known musculoskeletal or neurological disorders, and had normal vision (20/20 or better) without corrective lenses. Mean stature and body weight were 170.9 cm (SD: 12.0) and 67.2 kg (SD: 16.3), respectively. All subjects were right-handed.

4.3.2 Equipment
Visual targets were placed on an arc (radius = 115cm, arc length = 300cm) set horizontally in front of the subject. The elevation of the target arc was set at the eye level of the seated subject. The arc corresponded to 150° of visual angle in the right hemisphere from the mid-sagittal plane. Each visual target was composed of alphanumeric characters (0 – 9, A, C, E, F, H, L, U, or P) displayed on seven-segment light-emitting diode (LED) displays whose visual angle was approximately 0.25°. The displays were placed along the arc at intervals corresponding to 15° of visual angle along the arc. The subject was seated on a chair (seat pan height = 40 cm, seat pan width = 50 cm, back support height = 58 cm) throughout the experiments. The seat position and arc height were adjusted so that the center of the display coincided with the center of rotation of the head (atlanto-occipital joint), and the individual subject’s nasion aimed at the 0° (forward) target location. The entire room was completely dark during the experiments, but illuminated during the rest periods between blocks to prevent dark adaptation.

4.3.3 Procedure
The subject was initially seated in a reference posture, with the head aligned with the torso mid-sagittal plane. The head was then horizontally rotated to set the naso-occipital axis azimuth at either 0 (forward), 15, 30, 45 or 60° (rightward) for each block (Figure 4.1A).
The subject was asked to bite a dental impression bar attached to a fixed frame in order to secure the head orientation during the trials, while the torso was secured to the chair with a harness. Each subject performed training trials until the experimental protocol became familiar. In each trial, the subject was asked to look at and read aloud
the initial fixation display (duration = 2 s) aligned with the naso-occipital axis. A target, whose eccentricity was randomly chosen between 15 and 90° azimuth, was then displayed for 2 s. The subject was asked to redirect the eye gaze and to read the illuminated target display aloud until it disappeared. The illumination of the initial fixation and the eccentric target was accompanied by a 100 ms tone at 500 Hz or 2000 Hz, respectively. The rate of change of the displayed alphanumeric characters, for both the initial fixation and the eccentric targets, was one per second. Once the eccentric target was turned off, the subject was asked to extend the right arm horizontally and aim with the right index finger at the remembered position of the target as closely as possible in the absence of visual feedback (Figure 4.1A). Once the finger-aiming direction was perceived as being close enough to the target, the subject was asked to depress a button switch held in the left hand. Two seconds after the button depression, a 100 ms audio tone at 500 Hz signaled the subject to return the right hand to the initial position (on the right lap) and rest for 5 seconds. A trial consisted of the presentation of a home and a target display followed by an aiming task. A block was composed of twenty-four trials (6 target eccentricities × 4 replications). The target locations were randomized and balanced within each block. Each subject performed a total of four blocks, each of which corresponded to one of the four head azimuths. A rest period of 5 minutes was provided between blocks. The order of blocks was balanced and randomized across subjects. Five subjects performed an additional experiment in order to measure the aiming accuracy of the head.
In this experiment, the head was free to move and the subject was asked to align the head with the memorized position of the displayed target, and to depress the button when the head-aiming direction was perceived to be close enough to the target position. The procedure was otherwise identical to the one described above, except that the initial fixation was always in the mid-sagittal plane of the torso, and the eccentric targets were randomly chosen from six locations within a range of 15-90° (rightward) from the torso mid-sagittal plane (Figure 4.1B). One block of twenty-four trials (6 target eccentricities × 4 replications) was performed by each subject.
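The block composition described above (24 trials per block, randomized yet balanced) can be sketched as follows. The specific eccentricity set (15-90° in 15° steps) is inferred from the 15° target spacing given in the Equipment section, and the function name is for illustration only:

```python
import random

def make_block(eccentricities=(15, 30, 45, 60, 75, 90), reps=4, seed=None):
    """One block of trials: each target eccentricity (deg) is replicated
    `reps` times and the order is shuffled, so target locations are
    randomized yet balanced within the block (24 = 6 eccentricities x 4)."""
    rng = random.Random(seed)
    trials = [ecc for ecc in eccentricities for _ in range(reps)]
    rng.shuffle(trials)
    return trials

block = make_block(seed=1)
assert len(block) == 24
assert all(block.count(e) == 4 for e in (15, 30, 45, 60, 75, 90))
```

Shuffling a fully replicated list (rather than sampling with replacement) is what guarantees the "randomized and balanced" property within each block.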
The procedures were reviewed and approved by the University of Michigan Health Sciences Institutional Review Board for compliance with appropriate guidelines and state and federal regulations.
Figure 4.1. The configuration of the target arc and the definition of aiming error. A) Finger aiming tasks; B) Head aiming tasks.
4.3.4 Movement recordings

An electromagnetic motion capture system (Flock of Birds™, Ascension Technology) with five six-degree-of-freedom sensors was used to record torso, upper extremity and head movements. The sensors were placed on the forehead, C7, right upper arm, right hand and low back (midpoint between the left and right posterior superior iliac spines). A calibration procedure was carried out to determine the location of anatomical landmarks (nasion, tragion, infraorbitale, etc.) in each sensor's local reference frame. A splint was wrapped onto the right hand to maintain the index finger posture throughout the experiments, so that the coordinates of the six-degree-of-freedom sensor on the right hand could be used to estimate the fingertip position. The movements of all landmarks were recorded at a 25 Hz sampling frequency, and the trajectory of each landmark was smoothed off-line with a zero-phase-shift second-order Butterworth low-pass filter (6 Hz cutoff frequency). The Cartesian coordinates and
orientations of the sensors were used to estimate the joint center locations (Reed et al., 1999).

4.3.5 Data Analysis

The independent variables manipulated in the present study were the head-in-the-torso angle (θhead) and the eye-in-the-head angle (θeye). The head-in-the-torso angle (= head aiming azimuth) was measured as the azimuth of the initial fixation with respect to the mid-sagittal plane (+: rightward rotation), and was preset for each block. Although eye gaze directions were not measured, it was assumed that the eye was fixating the target at the time of gaze completion, since the visual angle of a target was very small (< 0.25°) and the subject was required to read the displayed character, which changed intermittently. The eye-in-the-head angle (= eye aiming azimuth) was measured as the target eccentricity with respect to the head-aiming direction. The dependent variable was the magnitude of the angular error between the finger aiming direction and the target direction (Figure 4.1A). The finger aiming direction was defined as the projection onto the horizontal plane of the vector from the head center of rotation (mid-tragion) to the tip of the right index finger, while the target direction was defined as the projection onto the horizontal plane of the vector from the head mid-tragion to the target center. The included angle between the two projected vectors was used to measure the error magnitude. In the head aiming experiment, the independent variable was the target direction (= θeye) with respect to the torso mid-sagittal plane, while the dependent variable was the magnitude of the error between the head aiming direction and the target direction (Figure 4.1B).
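The error measure defined above (the included angle between the horizontal-plane projections of the mid-tragion-to-fingertip and mid-tragion-to-target vectors) can be sketched as follows. The coordinate convention (x = forward, y = rightward, z = up) is an assumption for illustration:

```python
import math

def horizontal_azimuth(origin, point):
    """Azimuth (deg) of the vector origin -> point projected onto the
    horizontal x-y plane (x = forward, y = rightward; assumed convention)."""
    dx, dy = point[0] - origin[0], point[1] - origin[1]
    return math.degrees(math.atan2(dy, dx))

def aiming_error(mid_tragion, fingertip, target):
    """Included angle (deg) between the horizontal-plane projections of the
    mid-tragion -> fingertip and mid-tragion -> target vectors (Figure 4.1A).
    The difference is wrapped to (-180, 180] before taking its magnitude."""
    diff = (horizontal_azimuth(mid_tragion, fingertip)
            - horizontal_azimuth(mid_tragion, target) + 180.0) % 360.0 - 180.0
    return abs(diff)

# A 5 deg undershoot: target at 60 deg azimuth, finger aimed at 55 deg.
head = (0.0, 0.0, 1.2)   # mid-tragion; height (z) is ignored by the projection
target = (math.cos(math.radians(60)), math.sin(math.radians(60)), 1.2)
finger = (math.cos(math.radians(55)), math.sin(math.radians(55)), 0.9)
print(round(aiming_error(head, finger, target), 1))  # 5.0
```

Because both vectors are projected onto the horizontal plane first, differences in elevation between fingertip and target do not affect the azimuthal error, consistent with the definition above.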
4.4 Results

4.4.1 Regression models of finger aiming error

The overall mean finger aiming error across all subjects was 8.21° (SD 7.07°), which underlines a significant systematic undershoot with respect to target directions (Student's t-test, p < 0.01), while no overshoot was observed. A preliminary analysis of variance on the combined subjects' data identified significant effects of head azimuth with
respect to the torso (p < 0.01) and of target eccentricity with respect to the head (p < 0.01) on the magnitude of the error. Specifically, the aiming error (i.e., magnitude of undershoot) increases with both head azimuth (head-in-the-torso angle) and target eccentricity (eye-in-the-head angle). However, a chi-square test indicated that the dispersion of the aiming error for replicated trials does not vary with either head azimuth or target eccentricity. Based on the preliminary findings, the following model was used to carry out a regression analysis for each subject:

ε̂(θeye, θhead) = β1 + β2·θeye² + β3·θhead²     (Eq. 4.1)
where ε̂ is the predicted aiming error, θeye and θhead are the eye and head directions, respectively, and β1, β2, β3 are least-squares fit coefficients.

Examples of the aiming error distribution and model predictions are illustrated in Figure 4.2. Overall, the adjusted r² ranged between 0.17 and 0.86 (p < 0.01 for all subjects). The least-squares fit coefficients indicate that for eight subjects the eye-in-the-head angle (θeye) contributes more than the head-in-the-torso angle (θhead) to the finger aiming error, while the opposite is observed for two subjects (Table 4.1). A surface plot of the prediction model (Figure 4.3) indicates that the aiming error is smallest when both the eye and head azimuths are in the neutral position (segment local angle = 0°), and that it increases with deviation from neutral.
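A least-squares fit of Eq. 4.1 can be sketched as follows. This is a minimal pure-Python illustration using the normal equations, not the statistical package actually used in the study; the usage example regenerates noiseless data from subject 5's coefficients in Table 4.1 and recovers them:

```python
def fit_aiming_error(theta_eye, theta_head, error):
    """Least-squares fit of Eq. 4.1: err_hat = b1 + b2*eye^2 + b3*head^2.
    Solves the 3x3 normal equations (X'X)b = X'y by Gaussian elimination."""
    X = [[1.0, te ** 2, th ** 2] for te, th in zip(theta_eye, theta_head)]
    A = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    v = [sum(row[i] * e for row, e in zip(X, error)) for i in range(3)]
    for col in range(3):                       # forward elimination with pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
            v[r] -= f * v[col]
    b = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                        # back substitution
        b[i] = (v[i] - sum(A[i][j] * b[j] for j in range(i + 1, 3))) / A[i][i]
    return b  # [b1, b2, b3]

# Noiseless data regenerated from subject 5's coefficients (Table 4.1)
eyes = [e for h in (0, 15, 30, 45) for e in (15, 30, 45, 60, 75, 90)]
heads = [h for h in (0, 15, 30, 45) for _ in range(6)]
errs = [5.8689 + 0.0027 * e ** 2 + 0.0014 * h ** 2 for e, h in zip(eyes, heads)]
b1, b2, b3 = fit_aiming_error(eyes, heads, errs)
print(round(b2, 4))  # 0.0027
```

Fitting the squared angles as regressors keeps the model linear in the coefficients, so ordinary least squares applies even though the error surface itself is quadratic in θeye and θhead.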
Table 4.1. Coefficients and significance of the aiming error regression models

Subject      β1        β2       β3      R²      F       p
   1      -3.6438   0.0031   0.0029   0.86   726.13   0.00
   2       0.0743   0.0012   0.0018   0.36    67.13   0.00
   3       0.9311   0.0009   0.0006   0.31    51.19   0.00
   4       1.6861   0.0020   0.0007   0.81   506.68   0.00
   5       5.8689   0.0027   0.0014   0.69   269.00   0.00
   6      -0.1567   0.0009   0.0011   0.49   110.66   0.00
   7      -0.6686   0.0013   0.0010   0.51   125.50   0.00
   8      -1.1593   0.0023   0.0019   0.72   295.08   0.00
   9       3.7410   0.0008   0.0004   0.17    22.97   0.00
  10       2.3106   0.0014   0.0001   0.48   110.74   0.00
Figure 4.2. Example of distributions of finger aiming errors as a function of target eccentricity with respect to the head. Each panel corresponds to a head azimuth setting with respect to the torso. Solid lines represent model predictions. Data points for negative target eccentricities are estimated by mirror-flipping directions (for illustration purposes only). Data from subject 5.
Figure 4.3. Surface plot of the model prediction of finger aiming error as a function of head azimuth with respect to the torso and target eccentricity with respect to the head. Data from subject 5.
4.4.2 Minimum-Error Model

The proposed minimum-error model assumes that the relative contribution of eye and head orientations to gaze displacement is determined in such a way that it minimizes the unweighted sum of the errors associated with the deviation of each segment from its neutral position. It is also assumed that the eye and head errors, whose origins have yet to be determined, are linearly correlated with the magnitude of finger aiming error and can be estimated from the prediction equation (Eq. 4.1). The objective function that minimizes the finger aiming error can be defined as follows:
Find θ* = (θ*eye, θ*head) such that ε(θ) is minimized:

min ε(θ) = β1 + β2·θeye² + β3·θhead²     (Eq. 4.2)

subject to

θhead + θeye = θtarget     (Eq. 4.3)
lb(θeye) < θeye < ub(θeye)     (Eq. 4.4)
lb(θhead) < θhead < ub(θhead)     (Eq. 4.5)

where lb(θi) and ub(θi) are the lower and upper bounds of the i-th segment imposed by its respective range of motion.
The first constraint (Eq. 4.3) indicates that the direction of gaze, which is the sum of the head-in-the-torso and eye-in-the-head angles, should be on target at the time of gaze completion. From here on it will be assumed that the eye-in-the-head angle is equivalent to the target azimuth with respect to the head. The constraints in Eqs. 4.4 and 4.5 indicate that both eye and head angles should be within their respective ranges of motion. A sequential quadratic programming method was used to solve the constrained nonlinear optimization problem (Fletcher & Powell, 1963). Specifically, the set of eye-head direction combinations that satisfy the gaze constraint in Eq. 4.3 (represented by the diagonal line across the horizontal plane in Figure 4.4) is first restricted by the range-of-motion constraints (the shaded boxes in Figure 4.4). The
restricted set of angle combinations (feasible solutions) is then evaluated for the associated error function (ε(θ)) in order to find the optimal combination that minimizes the error (θ*).
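Because the equality constraint (Eq. 4.3) eliminates one variable, this particular quadratic program also admits a closed-form solution, which allows a compact sketch of the optimization (the study itself used SQP). The ±55° eye range of motion follows the Guitton and Volle (1987) value cited elsewhere in this thesis; the ±90° head range is an assumed placeholder:

```python
def optimal_head_angle(theta_target, b2, b3,
                       eye_rom=(-55.0, 55.0), head_rom=(-90.0, 90.0)):
    """Minimize Eq. 4.2 subject to Eqs. 4.3-4.5 for one horizontal target.
    Substituting theta_eye = theta_target - theta_head (Eq. 4.3) leaves a
    one-variable quadratic whose minimum is clipped to the feasible interval."""
    # Feasible head angles implied by both segments' ranges of motion
    lo = max(head_rom[0], theta_target - eye_rom[1])
    hi = min(head_rom[1], theta_target - eye_rom[0])
    # Unconstrained minimum of b2*(t - h)^2 + b3*h^2 (set derivative to zero)
    h = b2 * theta_target / (b2 + b3)
    return min(max(h, lo), hi)

# Subject 5's coefficients (Table 4.1) for a 60 deg target
h_star = optimal_head_angle(60.0, b2=0.0027, b3=0.0014)
print(round(h_star, 1))  # 39.5
```

Note that the unconstrained optimum grows linearly with θtarget at slope β2/(β2 + β3), which is why the model predicts a linear head contribution, i.e., a constant HMCR; for subject 5 this slope is 0.0027/0.0041 ≈ 0.66, close to the predicted value of 0.65 reported in Table 4.2.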
Figure 4.4. Schematics of the minimum-error optimization problem for a target at 60°.
4.4.3 Head Movement Contribution Ratio

The optimization model yielded eye-head combinations in which the head contribution increases linearly with target eccentricity; the slope of this relationship is equivalent to the head movement contribution ratio (HMCR, see Chapter 3) (Table 4.2). The predicted HMCR was 0.60 on average across all subjects (SD: 0.15), while the measured HMCR for horizontal gaze movements of the same pool of subjects was 0.72 on average (SD: 0.04). As illustrated in Figure 4.5, the predicted and actual HMCRs show a positive correlation (Pearson's r = 0.65).
Table 4.2. Measured and predicted head movement contribution ratio (HMCR).

Subject            1     2     3     4     5     6     7     8     9    10   Mean (SD)
Actual HMCR†    0.69  0.69  0.75  0.77  0.65  0.68  0.74  0.72  0.73  0.78  0.72 (0.04)
Predicted HMCR  0.52  0.40  0.59  0.74  0.65  0.45  0.56  0.54  0.68  0.90  0.60 (0.15)

† The HMCR was measured from gaze movements to visual targets distributed horizontally at eye level, with a maximum azimuth of 120° (right) from the mid-sagittal plane.
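The reported correlation can be reproduced directly from the per-subject values in Table 4.2:

```python
import math

# Per-subject HMCRs from Table 4.2
actual    = [0.69, 0.69, 0.75, 0.77, 0.65, 0.68, 0.74, 0.72, 0.73, 0.78]
predicted = [0.52, 0.40, 0.59, 0.74, 0.65, 0.45, 0.56, 0.54, 0.68, 0.90]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson_r(actual, predicted), 2))  # 0.65
```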
Figure 4.5. Correlation between actual and predicted head movement contribution ratios
4.4.4 Head Aiming Error

The systematic undershooting pattern was also observed in the head aiming error (Figure 4.6). Specifically, the undershoot of the head with respect to the memorized target positions was 19.52° on average (SD: 10.15°). In addition, the head aiming error increased with target eccentricity (analysis of covariance, p < 0.01 for all subjects). Linear regression models were developed to describe the head aiming error as a function of target eccentricity and to estimate the corresponding HMCR. The adjusted r² of the regression models for individual subjects ranged between 0.74 and 0.93. No systematic trend was observed for the dispersion of the head aiming error as a function of target eccentricity. For this subset of five subjects, the mean actual and predicted HMCRs were 0.73 (SD: 0.04) and 0.64 (SD: 0.04), respectively.
Table 4.3. Coefficients and significance of the regression models of head aiming error

Subject      β1        β2      R²      F       p     Actual HMCR   Predicted HMCR
   1      -3.7333   0.3017   0.89   174.01   0.00      0.68            0.70
   2      -5.3283   0.3897   0.93   274.99   0.00      0.74            …
   3      -2.0667   0.3562   0.92   270.95   0.00      0.72            …
   4      -4.8301   0.3648   0.74    60.82   0.00      0.73            …
   5      -0.2333   0.3764   0.94   355.89   0.00      0.78            …
Mean (SD)                                           0.73 (0.04)    0.64 (0.04)
Figure 4.6. Distributions of head aiming errors as a function of target eccentricity. Solid lines correspond to errors predicted by regression models.
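A minimal sketch of the linear regression described above, using subject 1's coefficients from Table 4.3 to regenerate noiseless data. The final line shows one interpretation consistent with the table (predicted HMCR ≈ 0.70 for subject 1): reading the slope of the achieved head azimuth as the predicted HMCR. This mapping is an assumption for illustration, not a procedure stated in the text:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit of y = b1 + b2*x (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b2 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    return my - b2 * mx, b2

# Noiseless head-aiming errors regenerated from subject 1's row of Table 4.3
targets = [15, 30, 45, 60, 75, 90]
errors = [-3.7333 + 0.3017 * t for t in targets]
b1, b2 = linear_fit(targets, errors)
# The achieved head azimuth is t - error(t), whose slope is 1 - b2; reading
# that slope as the predicted HMCR gives ~0.70 for subject 1 (an assumed
# interpretation that happens to match Table 4.3).
print(f"{1 - b2:.2f}")  # 0.70
```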
4.5 Discussion

4.5.1 Source of Error

It was assumed that finger aiming error stems from at least two different sources, the misjudgment of the eye-in-the-head angle and the misjudgment of the head-in-the-torso angle, since both angles are needed to estimate target direction. In the present experiments, the error increased in a quadratic manner with both the eye-in-the-head angle and the head-in-the-torso angle, and the error increased faster with the former than the latter. Consequently, the present results support the idea that finger aiming error is related to the respective deviations of the eye and the head from their neutral positions. The error in eye-in-the-head angle estimates is likely due to the degraded accuracy of retinal information for images away from the fovea (Bard et al., 1990), since movement accuracy is enhanced by foveal vision (Temprado et al., 1996). In addition, the error originates from the degraded accuracy of extraocular proprioception with eye deviation from the neutral position (Gauthier, Vercher & Blouin, 1995). This is particularly true in the absence of a visuo-spatial reference (Bock, 1986; Prablanc, Pélisson & Goodale, 1986). Similarly, the accuracy of the head-in-the-torso angle estimate is impaired by the degraded resolution and accuracy of neck proprioception with increasing head azimuth. In the present case, vestibular information may not play a role, since the head was immobile for more than 3 minutes before the beginning of the experiment. It is likely that the fluid
flow in the semicircular canals is below the threshold level that gives rise to otolith activation (Israel & Berthoz, 1989). Unlike the eye-in-the-head angle estimates, signal-dependent noise in motor commands may not contribute significantly either, since the head was immobilized. It should be noted that neck proprioception in the present study might have drifted due to the sustained passive static posture of the head (Sainburg et al., 2003). Furthermore, no information can be derived from the efference copy, since the head position was maintained passively (Farrer, Franck, Paillard & Jeannerod, 2003). Memory decay may be another source of error (Chieffi, Allport & Woodin, 1999). In the present study, no delay was imposed before movement initiation in order to minimize memory decay, but it is still likely that the time needed to raise and extend the arm during the pointing movement could have contributed to a loss of information. In the present experiments, the aiming tasks were performed with the arm extended in the horizontal plane. Since the target eccentricity ranged up to 150° (60° head deviation + 90° eye deviation), the range of motion of the shoulder (mean 151° ± SD 14°; Webb Associates, 1978) was only marginally sufficient to cover the entire target range. Indeed, the data show that the magnitude of undershoot was as large as 45° for a 150° target eccentricity relative to the torso. Nevertheless, an undershooting pattern was still observed for small target eccentricities (15 to 30°), suggesting that mechanical limitation is not a prevalent issue and may only minimally interfere with arm motion in the present context of target eccentricities. Finger aiming tasks toward visually presented targets require multiple transformations of reference frames, including eye-to-head, head-to-torso, and torso-to-shoulder, which inevitably introduce noise and computation errors (Harris, Zikovitz & Kopinska, 1998).
Hence, the proposed model may also reflect trade-offs between the error associated with the reference frame transformations and extreme limb positions. Since finger aiming tasks in the present experiments might have introduced further sources of errors, an additional aiming experiment was performed with the head. This experiment verified that the source of error is not confined to eye-to-hand transformation noise or mechanical limitations of the shoulder but is instead associated with proprioception, vision and motor commands in general.
4.5.2 Modeling the HMCR by Finger Aiming Error The undershooting error observed in the present study is in agreement with previous studies that have used visual targets (Chieffi et al., 1999; Adamovich et al., 1998) and auditory/tactile targets (DeGraaf et al., 1994). The proposed model attempts to explain the relative contribution of the head and eye by sensorimotor error reflected in the
undershooting of finger aiming movements in three-dimensional space. This model indicates that the predicted HMCR was approximately 60% on average, which correlates well with the measured HMCR (mean: 72%). However, a previous study developed a similar model based on the dispersion of finger pointing movements in a two-dimensional space, with a predicted HMCR of 90% (Rossetti et al., 1994). A finger
pointing movement is essentially based on a point-coding of target position in a Cartesian coordinate system, while a finger aiming movement is based on a vector-coding in a spherical coordinate system. Since a visually guided head movement is more likely to be based on vector-coding in a spherical coordinate system, a finger aiming task in 3D space, such as that used in the present study, should more accurately reflect the underlying process of head movement control than a finger pointing task in 2D space. Furthermore, since the purpose of the present study is to model the systematic undershooting pattern of head movements (Freedman & Sparks, 1997; Stahl, 1999; Guitton & Volle, 1987; Fuller, 1992; Chapters 2 & 3), the sensorimotor errors the CNS attempts to minimize for head movement control should be analyzed in terms of their effect on the undershoot rather than on the dispersion magnitude.

4.5.3 Validity of the Proposed Model

The structure of the model developed in the present study, based on error measurements, is comparable in principle to that of the model proposed by Rossetti et al. (1994). However, the present study employed more rigorous quantification and explicit optimization modeling methods. The magnitude of error as a function of the eye-in-the-head and head-in-the-torso azimuths was modeled by a second-order polynomial regression equation, which can be split into two parts corresponding to the two sources of finger aiming error, plus a constant baseline error. A second-order polynomial function was selected for the regression model for the sake of simplicity and the prevention of
overfitting problems. Since the regression function for each subject attempts to achieve the global minimum fitting error over the combined data for all target eccentricities and head azimuths, some fitting curves (the solid line within each panel of Figure 4.2) may not accurately describe the corresponding data subset (for example, the graph corresponding to a head azimuth of 45° in Figure 4.2). In addition, one of the model's shortcomings is that it does not accommodate a flat region near the mid-sagittal plane where only eye movements are made to displace the gaze. The size of this flat region was found to be approximately ±3° (Chapter 3). Another study (Roll et al., 1986) suggested that head orientation begins to affect hand-pointing accuracy for target eccentricities greater than 20°; however, a flat region near the sagittal plane and multiple boundaries of nonlinear increments would unnecessarily complicate the deterministic prediction of the optimization model. The optimization model proposed in this study assumed that the contribution of head movements to gaze displacement is determined by the CNS to minimize sensorimotor errors, as evidenced by the outcome of finger aiming tasks. It is remarkable that the predicted HMCR is well correlated with the actual head movement contribution ratios measured for horizontal gaze. However, head movements typically show a large inter-subject variability (Fuller, 1992; Goldring et al., 1996; Chapters 2 & 3) whose origin and functional outcome have not been identified. A future study should analyze individual strategies for head movement and posture.
CHAPTER 5 CONTRIBUTION OF HEAD MOVEMENTS TO WHOLE BODY BALANCE CONTROL
5.1 Abstract

The purpose of the present chapter was to investigate whether head movements can be counted as an integral part of whole-body posture, in addition to their traditionally acknowledged role in gaze displacements, which would support the hypothesis that whole-body posture is also a constraint on head movement control. Standing subjects directed the gaze at one of three targets (+35, 0, −35° from eye level) distributed vertically in the sagittal plane. The task was performed while standing in three conditions: 1) with the arms along the sides of the torso; 2) holding a load with both hands in a static condition; 3) lifting a hand-held object (heavy versus light) to one of three predetermined elevations indicated by the target positions (+35, 0, −35° from eye level). When the subjects looked at the upper and lower targets, the head orientation was shifted up by 2.16° and 1.84°, respectively, in the hand-loaded conditions when compared to the reference condition (unloaded hands). Similarly, when the subjects lifted the hand-held object, heavy hand-load conditions induced an upward shift of the head by 2.32° and 1.22° for the upper and lower targets, respectively, when compared to light hand-load lifting conditions. The effect of the center of mass (CoM) displacement on head orientation in the static holding condition was investigated through simulations of link segment models whose angles were manipulated in the sagittal plane. The results showed that when holding a load in the hand, approximately 18% of the load effect on the CoM position and ankle torque could be compensated by head extension. The findings indicate
that when a load is carried in the hands, head movements may be used to compensate for shifts in the CoM and are part of the associated reorganization of posture.
5.2 Introduction

Visually guided tasks generally depend on both head and eye movements for foveal acquisition of targets. The range of ocular movements is approximately ±55° in humans (Guitton and Volle, 1987). In spite of this oculomotor capacity, head movements are used to carry the eye toward the target even when the target eccentricity from the initial line of sight is within the range of eye movements alone (Bahill, 1975; Stahl, 1999). Head movements, which modify the eye-in-head and eye-in-space positions, are strongly related to the spatial location of the visual targets (Gauthier, Martin and Stark, 1986; Guitton and Volle, 1987; Guitton, 1988), as was also evidenced in Chapters 2 and 3. However, the location of a visual target is not the only factor determining head orientation. As Massion (1992) indicated, the central nervous system pursues two objectives simultaneously in dynamic voluntary tasks: the first is to control the primary task, and the second is to maintain balance by compensating for the perturbations imposed by the primary task. In order to maintain whole-body balance, all body segment locations and associated muscle activities have to be adjusted in correspondence with the perturbation. From this perspective, the head should also pursue the simultaneous achievement of two major goals: one is to carry the eye for the stable vision necessary for the primary task, and the other is to contribute to whole-body balance. The head contains important sensors involved in spatial representation, such as the eyes and the vestibular organs, which are crucial for whole-body posture and balance. Hence the CNS needs to control head movements with respect to whole-body perturbations. Nashner (1985) suggested two modes of head movement control strategies: the “head-stable-in-space” strategy primarily focuses on maintaining the head stable in space so that the gaze is directed at the object of interest.
Conversely, the “head-locked-to-the-trunk” strategy attempts to maintain the head position on the torso in order to take advantage of the vestibular inputs in the head for keeping the torso, not the head,
upright. It has been suggested that the “head-locked-to-the-trunk” strategy plays an important role as a voluntary reflex mechanism to compensate for predictable perturbations and maintain balance (Keshner, 1997; Viviani & Berthoz, 1975). As the perturbation becomes unpredictable, a mechanical resonance emerges to dampen the perturbation and keep the head position stabilized in space. Another aspect of head movements, which has been largely neglected, is that any displacement of the head can be viewed as the motion of a mass that will either need to be stabilized by a reorganization of the body posture or contribute to the stability of a posture. The head is intrinsically unstable, and its behavior can be described by an inverted pendulum model. The weight of the head and neck segments corresponds to approximately 8.4% of the total body weight, which is comparable to the combined weight of both arms and hands (10.2% = both upper arms + lower arms + hands; Webb Associates, 1978). The head/neck range of motion in the sagittal plane is 54-72° in flexion and 39-93° in extension (Melzer & Moffitt, 1997). As described above, head movements are involved in visual gaze displacements. Hence it is of interest to determine whether the head position is integrated into postural adjustments for whole-body balance when head movements are also visually constrained by the primary task.
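The mechanical point made above can be quantified with a first-order sketch: using the 8.4% head/neck mass fraction (Webb Associates, 1978) and an assumed 12 cm lever arm from the neck pivot to the head/neck CoM, pitching the head shifts the whole-body CoM by a few millimeters:

```python
import math

def com_shift_from_head_motion(head_pitch_deg, pivot_to_head_com=0.12,
                               head_mass_fraction=0.084):
    """Horizontal whole-body CoM shift (m) produced by pitching the head/neck
    segment about its pivot. The 8.4% mass fraction is from the text
    (Webb Associates, 1978); the 0.12 m lever arm is an assumed
    illustrative value, not a measured quantity."""
    # Horizontal displacement of the head/neck CoM for the given pitch
    dx_head = pivot_to_head_com * math.sin(math.radians(head_pitch_deg))
    # Whole-body CoM shift is the segment shift weighted by its mass fraction
    return head_mass_fraction * dx_head

# Extending the head backward by 20 deg shifts the whole-body CoM slightly
shift = com_shift_from_head_motion(-20.0)
print(round(shift * 1000, 1))  # -3.4  (mm, backward under this convention)
```

Even a few millimeters of CoM shift matters at the ankle, since the restoring torque scales with body weight times the horizontal CoM offset, which is consistent with the partial compensation role reported for head extension in this chapter.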
The objectives and hypotheses of the present study are as follows:

Objectives
1) Investigate whether the mechanical constraints imposed by the whole-body posture play a role in the determination of head movements.
2) Investigate the interaction between the whole-body posture and the visual gaze function in determining head posture.

Hypotheses
1) Head movements are modulated by whole-body balance requirements in load holding and lifting tasks combined with gaze requirements.
2) A posture is an outcome of optimization for a given visual target position, the task requirements, and the context of movement.
5.3 Experiments

5.3.1 Methods

The experiment focused on the simultaneous contribution of head position to visual target acquisition and postural stability. The subjects were required to direct their gaze and point to targets distributed vertically in the sagittal plane, with and without a load in their hands.

Subjects

Four male and six female subjects participated in this study. Subjects were recruited from the local area through advertisements in local newspapers. All subjects were free from any known musculoskeletal or neurological disorders. Their ages ranged from 21 to 64 years (median = 46). The mean stature and body weight were 165.79 cm (SD: 7.57 cm) and 71.37 kg (SD: 16.82 kg), respectively.

Movement Recording

An optical motion analysis system (MacReflex™, Qualisys Inc.) with six cameras and an electromagnetic motion analysis system (Flock of Birds™, Ascension Technology Corp.) were employed to measure the movements of the head and other body segments. The two systems were combined to avoid the occlusion problems inherent to the tasks. The optical markers and electromagnetic sensors were placed on the subjects' bodies at the locations indicated in Figure 5.1. The measurement accuracy was 1.4 mm for the optical system and 0.2 mm for the electromagnetic system. The movements of the subjects were sampled at 25 Hz and stored in a computer for off-line analysis. Head orientation was defined as the angle of the naso-occipital axis, projected on the mid-sagittal plane, from the horizontal plane (up: +).
Figure 5.1. Locations of the optical markers (hollow squares) and electromagnetic sensors (filled circles) placed on the subject’s body. A) frontal view; B) sagittal view
Equipment

A vertical fixture supporting three vertically arranged LED targets was placed facing the subject (Figure 5.2). The height of the fixture was adjusted so as to place the center target at the eye level of each subject in a standing posture. Target locations corresponded to viewing angles of +35° (upper target), 0° (center target), and −35° (lower target). The visual angle of each target was smaller than 0.5°. Each subject was asked to hold a cylindrical bar (width: 40 cm) with both hands. The load weight corresponded to either 340 g (empty bar) or, when the bar was filled, 40% of the subject's maximum voluntary shoulder flexion strength with the arm horizontally extended. The average weight of the filled load across all subjects was 3.28 kg.
Figure 5.2. A) Configuration of the visual targets; B) Definition of head posture
Procedure

In the static holding conditions (Figure 5.3A & B), a trial was composed of an initial fixation on the eye-level target (0°, 3 s), presented with a 500 Hz/0.2 s signal tone, followed by an upper or lower target display (±35°, 3 s) presented with a 2000 Hz/0.2 s signal tone. The subject was asked to look at the illuminated target and make “natural” head movements in two conditions: 1) with the arms and hands hanging along the sides of the body (Figure 5.3A: unloaded hands condition), or 2) with the forearms flexed horizontally while holding the load with both hands (Figure 5.3B: loaded hands condition). Each condition consisted of a block of 8 trials (2 targets × 4 replications), and the order of targets and blocks was randomized and balanced for each subject. In the dynamic lifting conditions (Figure 5.3C & D), a signal tone of 500 Hz/0.2 s indicated the trial initiation, and the subject was asked to keep the gaze on the initial fixation point at 0° (without illumination). Then a target at either 0, +35, or −35° was illuminated with a signal tone of 2000 Hz/0.2 s. The subject was asked to lift the hand-held bar, initially placed on a home shelf at elbow height, with both hands; upon reaching the target, the subject was required to depress the target LED with the bar
to activate a microswitch. The size of the microswitch was approximately the same as the diameter of the hand-held bar; hence visual occlusion was prevented even when the hand-held bar was nearing the target. The hand-held bar was either empty (Figure 5.3C: light load conditions) or filled with lead shot (Figure 5.3D: heavy load conditions). After a delay of 2 s, the visual target was turned off and a tone of 500 Hz/0.2 s signaled the subject to return the hand-held bar to the home shelf. The subject was instructed to make movements at a comfortable speed, but no specific instructions were provided regarding head movements. Each condition consisted of a block of 12 trials (3 targets × 4 replications). The order of targets and blocks was randomized and balanced for each subject. The procedures were reviewed and approved by the University of Michigan Health Sciences Institutional Review Board for compliance with the appropriate guidelines, state and federal regulations.
Table 5.1. Design of experiments

Static holding condition / Unloaded hands
  Posture: standing with both arms along the sides of the body.
  Task: look at the illuminated target (T1, T2, or T3).

Static holding condition / Loaded hands
  Posture: standing and maintaining the lower arms horizontal while holding the filled load.
  Task: identical to the unloaded hands condition.
  Load: adjusted for each subject to 40% of the shoulder maximum voluntary contraction level (2.7-7.6 kg; median = 3.28 kg).

Dynamic lifting condition / Light load
  Posture: standing.
  Task: lift the load from the home location (location of the support of the load at rest) to the illuminated target and touch the target with the load.
  Load: 340 g for all subjects.

Dynamic lifting condition / Heavy load
  Posture: standing.
  Task: identical to the light load condition.
  Load: adjusted for each subject to 40% of the shoulder maximum voluntary contraction level (2.7-7.6 kg; median = 3.28 kg).
Figure 5.3. Illustration of the four experimental conditions. A) Static-holding / unloaded hand condition; B) Static-holding / loaded hand condition; C) Dynamic lifting / light load condition D) Dynamic lifting / heavy load condition.
5.3.2 Results
Static Holding Condition
Figure 5.4 illustrates head flexion/extension angle profiles from a representative subject in the static holding condition. The unloaded and loaded hand conditions are represented by the dotted and solid lines, respectively. Upward deflections correspond to head/neck extension (upward) movements. The range of head elevation did not extend to the full range of target eccentricity: in this example, head elevation (in flexion or extension) reached only approximately 8–29° even though the targets were located at ±35°.
Figure 5.4. Trajectories of head movements in the sagittal plane as a function of time. Upward and downward deflections correspond to head/neck extension and flexion, respectively. Only the dynamic phases of the movements are shown. Typical data obtained from one subject.
More importantly, the magnitude of head extension angles toward the upper target (T1) was larger in the loaded (solid line) than in the unloaded condition (dotted line), while the magnitude of head flexion angles toward the lower target (T3) was smaller in the loaded than in the unloaded condition. The means and standard deviations of head displacements across all subjects are listed in Table 5.2. The results show that the mean absolute head elevation was larger for upper targets and smaller for lower targets in the loaded than in the unloaded hand condition. A repeated measures analysis of variance (ANOVA) indicated that the mean absolute head orientation was shifted significantly upward in the loaded compared with the unloaded hand condition (F = 37.755, p < .01). No significant interaction effect between target and load conditions was observed.
Table 5.2. Means and standard deviations of head flexion/extension angle across all subjects in the static holding situation (degrees).

           Upper Target (T1)    Lower Target (T3)
           Mean      SD         Mean      SD
Unloaded   9.54      0.35       −9.85     0.33
Loaded     11.70     0.43       −8.01     0.44
Dynamic Lifting Condition
As in the static holding condition, the magnitude of head movements toward the upper target (T1) was larger in the heavy load than in the light load condition, and the magnitude of head movements toward the lower target (T3) was smaller in the heavy load than in the light load condition. The means and standard deviations obtained for all subjects are presented in Table 5.3. The mean absolute head elevation was significantly larger in the heavy load than in the light load condition (F = 25.507, p < .01). Once again, this result implies that the mean absolute head elevation was significantly larger for upper targets and smaller for lower targets when the hands were more heavily loaded. The mean head elevation was also significantly larger in the dynamic lifting condition than in the static holding condition (F = 55.069, p < .05).
Table 5.3. Means and standard deviations of head elevation across all subjects in dynamic lifting situations (degrees).

             Upper Target (T1)   Center Target (T2)   Lower Target (T3)
             Mean      SD        Mean      SD         Mean      SD
Light load   15.18     0.51      −0.21     0.27       −8.94     0.40
Heavy load   17.50     0.64      0.47      0.52       −7.72     0.66
5.4 Simulation
5.4.1 Methods
Material
The influence of head elevation on the center of mass (CoM) displacement in static holding conditions was investigated through a simulation study. An eight-link segment system representing the head-neck, trunk, upper arm, forearm, hand, upper leg, lower leg, and foot was used for the simulation (Figure 5.5). The linkage system was two-dimensional and assumed to act in the sagittal plane. The stature, body weight, and load weight of the model were 176.8 cm, 78.43 kg, and 3.28 kg, respectively. These anthropometry and load weight parameters were derived from a representative participant. The proportional length and mass of each body segment were estimated from actual movement recording data.
Figure 5.5. Link segment model used in the simulation. A) Unloaded hand condition; B) Loaded hand condition
Procedure
All body segment and hand load locations were initially obtained from measurements during task performance for both loading conditions. For each hand load condition, the flexion/extension angle of the head/neck segment was manipulated continuously between −35° (flexion) and +35° (extension) at the C7/T1 joint, while the other body segment angles were kept constant. The whole-body center of mass (CoMwb) and the moment around the ankle were calculated and compared as dependent variables. The CoM location was obtained from the following equation:
C_{body} = \frac{\sum_{i=1}^{n} C_i W_i + C_{load} W_{load}}{\sum_{i=1}^{n} W_i + W_{load}} \quad (Eq. 5.1)
where C_i and W_i denote the CoM location and mass of the i-th segment, respectively, and C_{load} and W_{load} those of the hand-held load. The antero-posterior displacement of C_{body} was used to compute the CoM projection on the ground, which represents the effect of head orientation and the hand load. The moment around the ankle was calculated using the following equations (Chaffin, Andersson & Martin, 1999):
\sum \vec{R} = 0: \quad \vec{R}_j = \vec{R}_{j-1} + \vec{W}_L

\sum \vec{M} = 0: \quad \vec{M}_j = \vec{M}_{j-1} + [{}_{j}C_L (\cos\theta_j) W_L] + [L_{j,j-1} (\cos\theta_j) R_{j-1}]

where
\vec{R}_j: reactive force at joint j;
\vec{R}_{j-1}: reactive force at the previous joint adjacent to joint j;
\vec{W}_L: weight of link L;
{}_{j}C_L: distance from joint j to the CoM of link L;
\theta_j: angle of link L at joint j with respect to the horizontal axis;
L_{j,j-1}: segment link length measured from joint j to the adjacent joint j−1.
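Eq. 5.1 and the static ankle-moment balance above can be sketched numerically. The following is a minimal planar sketch under assumed segment data; the values below are illustrative, not the dissertation's anthropometric parameters:

```python
import numpy as np

# Illustrative planar data: antero-posterior CoM location of each lumped
# segment relative to the ankle (m) and its mass (kg) -- assumed values.
segment_com_x = np.array([0.02, 0.01, 0.05, 0.00])  # head-neck, trunk, arms, legs
segment_mass = np.array([5.0, 35.0, 4.0, 30.0])
load_com_x, load_mass = 0.30, 3.28                   # hand-held load
G = 9.81                                             # gravitational acceleration (m/s^2)

def whole_body_com_x(seg_x, seg_m, load_x, load_m):
    """Eq. 5.1: mass-weighted average of segment and load CoM locations."""
    return (np.sum(seg_x * seg_m) + load_x * load_m) / (np.sum(seg_m) + load_m)

def ankle_moment(seg_x, seg_m, load_x, load_m, g=G):
    """Static moment about the ankle produced by segment and load weights."""
    return g * (np.sum(seg_x * seg_m) + load_x * load_m)

com_x = whole_body_com_x(segment_com_x, segment_mass, load_com_x, load_mass)
moment = ankle_moment(segment_com_x, segment_mass, load_com_x, load_mass)
```

In the static case the ankle moment is simply the total body-plus-load weight acting at the whole-body CoM lever arm, which is why a forward CoM shift directly changes the moment requirement.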
Assumptions
The following assumptions were made when computing the position changes of the CoM: 1) all movements occur only in the sagittal plane; 2) the head center of rotation is uniquely located at the C7/T1 joint center; 3) all simulated postures are static, and therefore the effect of dynamic components such as inertia is negligible; and 4) the ground projection of the origin of the CoM (neutral posture) coincides with the ground projection of the ankle joint center, and the displacement of the CoM along the anterior/posterior axis is described with reference to that point. The origin of the coordinate system was set to the location of the ankle. Displacements in the forward direction and moments in the counterclockwise direction were represented by positive numbers.
5.4.2 Results
Effects of Head Movements on Whole-Body CoM Displacements
In the unloaded hand condition (Figure 5.5A), when the head was set at 0°, the CoMwb projection on the horizontal plane of the foot support was 4.82 cm forward of the ankle projection point. When the head was flexed to −35°, the CoMhead shifted 8.92 cm forward; when the head was extended to +35°, the CoMhead projection shifted 12.52 cm backward. These changes in CoMhead location were accompanied by shifts of the CoMwb to 5.75 cm (forward shift) and 3.95 cm (backward shift), respectively (dotted line in Figure 5.6A).
In loaded hand conditions, the upper arms and forearms are flexed to hold the load (Figure 5.5B). Thus, the weight of the load, upper arms, forearms, and hands displaces the CoMwb location 1.05 cm forward for a head elevation of 0° (solid line in Figure 5.6A). When the head was flexed from 0 to −35°, the CoMwb moved forward by 0.90 cm (from 5.87 cm to 6.78 cm). When the head was extended from 0 to +35°, the CoMwb moved backward by 0.93 cm (from 5.87 cm to 5.02 cm). Thus, the CoMwb shifted by −0.026 cm per degree of head rotation, which implies that the CoMwb shift produced by a 3.28 kg hand load and the accompanying change in arm-hand posture could be completely compensated by a head extension of 40.84°.
Effects of Head Movements on Ankle Moment
As the CoMwb shifted with head elevation, the moment around the ankle was also expected to vary accordingly in order to maintain body balance. First, the effect of the change in head posture was investigated in unloaded conditions (dotted line in Figure 5.6B). With the whole body in the neutral posture, the initial moment around the ankle is 13.55 N-m (counterclockwise). However, when the head is flexed from 0 to −35°, the CoMwb shifts forward, and the moment around the ankle required to maintain the whole body upright is reduced by 2.48 N-m (13.55 to 11.07 N-m).
In the same way, an increase of 2.64 N-m of clockwise moment around the ankle is required when the head is extended from 0 to +35° (13.55 to 16.19 N-m). Second, in loaded hand conditions, the CoMwb is displaced forward by the hand-held load and the accompanying arm-hand posture change. Consequently, the initial moment around the
ankle is increased by 3.42 N-m (solid line in Figure 5.6B); the ankle moment in this latter case was estimated at 16.97 N-m in the counterclockwise direction. When the head was flexed from 0 to −35°, the ankle moment increased from 13.55 N-m to 16.19 N-m; it decreased from 13.55 N-m to 11.07 N-m when the head was extended from 0 to +35°. Thus, head extension has a compensatory effect on the ankle moment. Since a change of −0.073 N-m in ankle moment corresponds to a change of 1° in head extension angle, the magnitude of the hand load effect (3.42 N-m) could be compensated by a 46.81° change in head elevation toward extension.
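The compensation angle quoted above follows from dividing the load-induced moment offset by the per-degree sensitivity of head extension; a quick check using the simulation values (the small difference from the reported 46.81° reflects rounding of the −0.073 N-m/° sensitivity):

```python
# Load-induced increase in ankle moment (N-m) and change in ankle moment
# per degree of head extension (N-m/deg), both taken from the simulation.
load_moment_offset = 3.42
moment_per_degree = 0.073

compensating_extension_deg = load_moment_offset / moment_per_degree
print(round(compensating_extension_deg, 1))  # 46.8 degrees of head extension
```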
Figure 5.6. A) Simulated location of the CoM in the sagittal plane; B) Simulated ankle moment
5.5 Discussion
Head movements are most likely dependent on visual and mechanical constraints. The present data support the hypothesis that visual constraints alone are not sufficient to predict head position.
5.5.1 Head Movements for Visual Acquisition of Target
Visual guidance helps to determine the orientation of the head with respect to the object of interest. However, since the eyes move independently, the head does not need to
aim directly at the target. Furthermore, mechanical constraints such as the weight of the head and the joint range of motion also contribute to the determination of head orientation. Nevertheless, the head, like other body segments, can be used to stabilize the body in space. As observed in other studies (reviewed by Vercher et al., 1994), the acquisition of visual targets generally requires head movements, and the final orientation of the head is to some extent a function of the spatial location of the target (Zangemeister and Stark, 1982; Guitton and Volle, 1987; Vercher et al., 1994). As in Chapters 2 and 3, it was also observed that the final orientation of the head does not aim directly at the target. Other studies (Fuller, 1992; Stahl, 1999) indicate that head orientation is not completely explained by target eccentricity alone. Hence the present results suggest that head orientation is also strongly constrained by factors that are not necessarily related to visual requirements, as observed in Chapters 2, 3, and 4.
5.5.2 Head Contribution to Balance
The spatial location of the target remains one of the important factors determining the amplitude of the head movement. However, the present results indicate that the movement of the head is also affected by mechanical constraints, such as gravity and overall body posture. Indeed, head elevation in the sagittal plane is strongly influenced by the magnitude of the load held in the hands: head extension increases and head flexion decreases when the hand load increases. These data suggest that head position contributes to the adjustment of balance in response to changes in posture required by the weight and motion of the load. As supported by the simulation data obtained in loaded and unloaded hand conditions, head flexion and extension contribute a non-negligible excursion of the CoMwb and therefore change the ankle moment requirement (Figure 5.6B).
The systematic alteration of the head elevation when the hands are loaded (compared to unloaded) can be viewed as one of the compensatory mechanisms triggered by the forward displacement of the CoM of the whole body. When the CoM of the head moves in the rearward direction, this displacement contributes to the balance of the entire body with less effort, as less torso extension is required. Conversely, if the CoM of the head moves in the forward direction, more torso extension is required as both the head
CoM and the body CoM are displaced forward. Moving the head is probably chosen as the primary strategy to maintain balance because it constitutes a simple reorganization of posture that does not require coordination of the whole-body multi-link system, even though head displacement alone may not be sufficient. Head movements alone cannot compensate for the entire displacement of the CoM and assume the whole reorganization of balance. As shown in the simulation, the ankle moment does not reach zero within the manipulated range of head elevation (Figure 5.6B). It was also estimated that the ankle moment obtainable by head movements within the valid range of motion was at most 18.80 N-m (head flexed to 72°) and at least 6.75 N-m (head extended to 93°), which is still larger than zero. Hence, balance must also be achieved by the coordinated movements of multiple body segments. However, the absence of significant changes in torso or knee angles, which could have contributed to a modification of head orientation, tends to support the proposed hypothesis. It should also be noted that an increase in ankle dorsiflexion is likely to play an important role in maintaining posture and balance in that case. Finally, head movements were larger in extension than in flexion when looking/aiming at the upper and lower targets, respectively. This result is also in agreement with the hypothesis of a head contribution to body stability. In addition, the head position in the loaded-hand condition may be an optimal posture setting given the main task requirement (gaze displacement toward the target) and the movement context (maintaining whole-body balance). Hence, as long as the performance of the main task is not compromised, the CNS selects the posture and movement plan that also satisfies the movement constraints and achieves the concurrent task goals simultaneously.
5.5.3 Static Holding versus Dynamic Lifting Task
Changes in head orientation were larger in the dynamic lifting than in the static holding situation. This difference may result from the fact that balance control is more challenging in dynamic lifting: in this situation the subjects had to fully extend both forearms and upper arms to reach the target with the load in the hands, which means
more perturbation of balance. Consequently, the contribution of head orientation becomes more critical in dynamic lifting conditions.
5.5.4 Implications for Movement Modeling
The experimental and simulation results indicate that the movements of the head are not independent of the context in which body posture is determined, since they vary for identical visual requirements. Consequently, eye-head coordination is also affected by that context and is not determined purely by posture or the visual properties of the given task. Previous studies have indicated that head posture is affected by the visual properties of the task, but the variability of head position is relatively large and depends on task requirements (Li & Haslegrave, 1999). Traditionally, studies on eye-head coordination modeling have not fully appreciated whole-body posture as an influencing factor. However, as reported in the present study, the orientation of the head toward the same visual target varies as a function of the load held in the hands and the CoM displacement. The difference of 3° in head movement due to the hand load condition is modest, but it represents almost 9% of the total eccentricity of the targets (±35°) and almost 20% of the head movement toward the targets. It is suggested that the hand load conditions, and thus the displacements of the CoM, are not negligible factors in determining head orientation. Therefore, the context in which visual tasks are performed must be taken into account to estimate/predict head orientation and its influence on the field of view.
CHAPTER 6 MODELING THE NEGOTIATED CONTROL OF THE HAND AND HEAD ON TORSO MOVEMENTS USING DIFFERENTIAL INVERSE KINEMATICS
6.1 Abstract
Hand reach movements for manual work, vehicle operation, and manipulation of controls are planned and guided by visual information acquired through the combined movements of the head and eyes. It is hypothesized that reach movements are based on the negotiation of multiple subsystems that simultaneously pursue common and individual goals, including visual gaze and manual reach. In the present study, the simultaneous control of multiple subsystems was simulated for seated reach movements using a differential inverse kinematics model. An 8-DOF model represented the torso-neck-head link (visual subsystem), and a 9-DOF model represented the torso-upper limb link (manual subsystem). Joint angles were predicted in the velocity domain via a pseudo-inverse Jacobian that weighted each link for its contribution to the movement. A secondary objective function was introduced to enable both subsystems to achieve their corresponding movement goals in a synergistic manner by manipulating redundant degrees of freedom. Simulated motions were compared to motion recordings from ten subjects performing right-hand reaches in a seated posture. Joint angles were predicted with and without the contribution of the negotiation function, and model accuracy was determined using the RMS error and differences in end-posture angles. The results showed that prediction accuracy was generally better when negotiated control was included. This improvement was significantly more pronounced for low and eccentric targets, as they required a greater contribution of the joints shared by the visual and manual subsystems.
6.2 Introduction
Movement prediction models of lifting, reaching, or pointing tasks have commonly been based on a multiple-body-link system with a single end-effector (Dysart & Woldstad, 1996; Zhang, Kuo, & Chaffin, 1999; Wang, 1999). However, even simple reaching movements may include multiple task components other than moving the hand toward the goal target. For example, reaching involves movements of the head and the eyes to capture images of the environment and build an internal representation of the space in which hand movements are planned and guided. It has been shown that head and/or eye movements are modulated by the movements of the whole body and the hand (Delleman, Huysmans, and Kujit-Evers, 2001; Tipper, Howard, and Paul, 2001; Chapter 5). Furthermore, whole-body and/or hand movements are also adjusted to accommodate visual perception of the environment (Peterka & Benolken, 1995; Cohn, DiZio, & Lackner, 2001; van der Kooij, Jacobs, Koopman, & van der Helm, 2001). Hence it may be suggested that the CNS, while planning and executing a movement, simultaneously controls multiple subsystems that pursue individual and shared goals (guiding the hand, displacing the gaze, etc.) in order to achieve the general aim of the task (reaching for the target). Seated reach movements include the movements of the visual subsystem, which is in charge of the gaze and guides hand movements, and the manual subsystem, which is in charge of moving the arm and hand to the target. It can be assumed that the body segments constituting the visual subsystem include the eye, head, neck, and torso, while the manual subsystem is composed of the finger, hand, forearm, upper arm, clavicle, and torso. Depending on target location and postural requirements, both systems may move synergistically (in the same direction) or antagonistically (in different directions).
Since the visual and manual subsystems share a common link (the torso), it is hypothesized that the two subsystems negotiate the control of this common link involved in the motion of their respective end-effectors. Hence, in planning and controlling reach movements, the CNS should consider the requirements of both subsystems and allocate the use of the common link in such a way that both subsystems can achieve their individual goals. One potential method for simultaneously achieving multiple goals is to manipulate the redundant degrees of freedom and produce internal movements in each subsystem.
Differential inverse kinematics is a method for solving problems in which the number of degrees of freedom exceeds the dimension of the task space. Closed-form solutions for such redundant systems are difficult to obtain due to the highly nonlinear relationships between joint-space and task-space variables (Sciavicco & Siciliano, 1996). However, differential inversion enables linear mappings in the velocity domain, although special care must be taken when inverting the Jacobian matrices due to kinematic redundancy. Human reach movements have been modeled using differential kinematics, generally assuming preprogrammed end-effector trajectories (Zhang, Kuo, & Chaffin, 1998; Wang, 1999; Komura, Shinagawa, & Kunii, 2001).
The objectives and hypotheses of the present study are as follows:
Objectives
1) Construct a multibody link system representing the visual and manual subsystems.
2) Develop differential inverse kinematics models to simulate the movements of the subsystems separately and jointly.
3) Quantify the benefit of incorporating multiple subsystems by comparing simulated and actual movements.
4) Investigate the interaction between head and hand movements in the control of head, hand, and whole-body movements.
Hypotheses
1) Visually guided reach movements can be explained by the activities of the visual and manual subsystems.
2) The two subsystems negotiate the use of the common links to achieve the subsystem-specific goals.
3) The movements predicted by the differential inverse kinematics models show better accuracy when the negotiation/coordination of both subsystems is taken into account.
6.3 Methods
6.3.1 Differential Inverse Kinematics
A manual subsystem (finger-hand-forearm-upper arm-clavicle-torso; Figure 6.1A) and a visual subsystem (eye-head-neck-torso; Figure 6.1B) were modeled to represent a human subject performing a seated reach task. The manual and visual subsystems were composed of nine and eight revolute joints, respectively (Table 6.1). It should be noted that the three torso joints (q1 through q3) are shared by the manual (qm1 through qm3) and visual (qv1 through qv3) subsystems.

Table 6.1. Joint composition of the manual and visual subsystems

Manual subsystem joints                 Visual subsystem joints
qm1 Torso extension (+)                 qv1 Torso extension (+)
qm2 Torso lateral bending (+ left)      qv2 Torso lateral bending (+ left)
qm3 Torso axial rotation (+ ccw)        qv3 Torso axial rotation (+ ccw)
qm4 Clavicle horizontal (+ forward)     qv4 Neck vertical (+ up)
qm5 Clavicle vertical (+ up)            qv5 Neck horizontal (+ left)
qm6 Shoulder flexion (+)                qv6 Head extension (+)
qm7 Shoulder abduction (+)              qv7 Head tilt (+ left)
qm8 Upper arm axial rotation (+ ccw)    qv8 Head axial rotation (+ ccw)
qm9 Elbow flexion (+)
The joint angles of the two subsystems are combined into a single vector:

q = \begin{bmatrix} q_c \\ q_m \\ q_v \end{bmatrix}

where
q_c = (q_1, q_2, q_3): torso joint angles, common to both the manual and visual subsystems;
q_m = (q_{m4}, q_{m5}, q_{m6}, q_{m7}, q_{m8}, q_{m9}): clavicle-shoulder-elbow angles of the manual subsystem;
q_v = (q_{v4}, q_{v5}, q_{v6}, q_{v7}, q_{v8}): neck-head angles of the visual subsystem.
Figure 6.1. Multi-link composition of 9-dof manual subsystem (A) and 8-dof visual subsystem (B). The arrow extending from each joint indicates the positive direction of joint rotation in a right-hand rule.
The position of the manual subsystem end-effector (p_m) is represented in Cartesian coordinates:

p_m = f_m(q) \quad (Eq. 6.1)

where f_m denotes the direct kinematics function for hand reach movements. In contrast, the end-effector of the visual subsystem (p_v) is defined in a two-dimensional image coordinate system (Hashimoto, 1999):

p_v = f_v(q) = \frac{k}{p_z} \begin{bmatrix} p_x \\ p_y \end{bmatrix} \quad (Eq. 6.2)

where f_v and k denote the direct kinematics function for the visual subsystem and a spatial scaling factor, respectively; p_x, p_y, and p_z represent the x-, y-, and z-coordinates of the target in a head-centered reference frame.
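The image-plane mapping of the visual end-effector can be sketched as follows. The perspective scaling k/p_z is an assumed form following standard visual-servoing formulations (cf. Hashimoto, 1999), since the text defines k and p_z but the exact expression is not fully reproduced here:

```python
import numpy as np

def visual_end_effector(p_head, k=1.0):
    """Map a 3-D target position in head-centered coordinates to the
    2-D image-plane coordinates used as the visual subsystem's task
    space. The k/p_z perspective scaling is an assumption based on
    standard visual-servoing models."""
    px, py, pz = p_head
    return (k / pz) * np.array([px, py])

# A target twice as far along the gaze axis yields half the image-plane
# displacement for the same lateral offset.
near = visual_end_effector([0.2, 0.1, 1.0])
far = visual_end_effector([0.2, 0.1, 2.0])
```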
In general, the velocity of the end-effector, \dot{p}, can be obtained from:

\dot{p} = \frac{\partial f(q)}{\partial q}\dot{q} = J(q)\dot{q} \quad (Eq. 6.3)

where J is the Jacobian matrix. Eq. 6.3 applies to the manual and visual subsystems separately:
\dot{p}_m = J_m\dot{q}, \qquad \dot{p}_v = J_v\dot{q} \quad (Eq. 6.4)
where J_m and J_v represent the Jacobian matrices of the manual and visual subsystems, respectively. In the present study, it was assumed that the end-effector trajectories are preprogrammed prior to movement execution. Hence it is necessary to obtain \dot{q} as a function of \dot{p}; however, due to the redundant degrees of freedom of the multi-link systems (the number of columns of J exceeds the number of rows), the ordinary inverse of J cannot be obtained. Alternatively, a weighted pseudo-inverse of J (denoted J^{\dagger}) may be used (Zhang et al., 1999):

\dot{q} = W^{-1}J^T(JW^{-1}J^T)^{-1}\dot{p} = J^{\dagger}\dot{p} \quad (Eq. 6.5)
where W is a weighting matrix that characterizes the instantaneous contribution of each joint. In the present study, the weighting matrices were obtained from statistical regression models of the peak joint velocities in motion recordings as a function of target position for both subsystems. This solution attempts to satisfy the primary objectives of 1) obtaining the joint angles that place the end-effector at the desired position of the given trajectory at a given time; 2) manipulating the joint angles so that the weighted sum of squared joint velocities is minimized; and 3) setting the relative contribution of each joint as determined by the weighting matrix. Since both subsystems have redundant degrees of freedom, a secondary objective can be introduced that reconfigures the joint angles of the linkage system without changing the end-effector position, using the matrix (I − J^{\dagger}J), which projects an arbitrary vector \dot{q}_0 into the null space of J. Hence, for the manual subsystem:
\dot{q} = J_m^{\dagger}\dot{p}_m + (I - J_m^{\dagger}J_m)\dot{q}_0 \quad (Eq. 6.6)
Multiplying both sides by J_v and solving for \dot{q}_0:

J_v\dot{q} = J_vJ_m^{\dagger}\dot{p}_m + J_v(I - J_m^{\dagger}J_m)\dot{q}_0

\dot{q}_0 = [J_v(I - J_m^{\dagger}J_m)]^{\dagger}(J_v\dot{q} - J_vJ_m^{\dagger}\dot{p}_m) \quad (Eq. 6.7)
Substituting \dot{q}_0 in Eq. 6.6 with Eq. 6.7:

\dot{q} = J_m^{\dagger}\dot{p}_m + (I - J_m^{\dagger}J_m)[J_v(I - J_m^{\dagger}J_m)]^{\dagger}(J_v\dot{q} - J_vJ_m^{\dagger}\dot{p}_m) \quad (Eq. 6.8)
and simplifying, noting from Eq. 6.4 that J_v\dot{q} = \dot{p}_v, yields Eq. 6.9 with a gain term (α) scaling the secondary objective function (Maciejewski & Klein, 1985):

\dot{q} = J_m^{\dagger}\dot{p}_m + \alpha[J_v(I - J_m^{\dagger}J_m)]^{\dagger}(\dot{p}_v - J_vJ_m^{\dagger}\dot{p}_m) \quad (Eq. 6.9)
Then q can be obtained by numerical integration of \dot{q}, and the cumulative error of the end-effector position predictions is reduced by a feedback control algorithm (Chiacchio, Chiaverini, Sciavicco, & Siciliano, 1991).
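Eqs. 6.5–6.9 can be sketched with NumPy. This is a minimal illustration using a toy 4-joint system; for the negotiated solution W = I is used for simplicity, and the Jacobians and weights below are arbitrary, not the dissertation's regression-derived models:

```python
import numpy as np

def weighted_pinv(J, W):
    """Weighted pseudo-inverse of Eq. 6.5: J† = W⁻¹ Jᵀ (J W⁻¹ Jᵀ)⁻¹.
    W is a positive-definite weighting matrix; J must have full row rank."""
    W_inv = np.linalg.inv(W)
    return W_inv @ J.T @ np.linalg.inv(J @ W_inv @ J.T)

def negotiated_q_dot(J_m, J_v, p_dot_m, p_dot_v, alpha=1.0):
    """Eq. 6.9 with W = I for simplicity: the manual task is primary,
    and the visual task is resolved in the null space of J_m so the
    hand velocity is not disturbed."""
    n = J_m.shape[1]
    Jm_dag = weighted_pinv(J_m, np.eye(n))
    N_m = np.eye(n) - Jm_dag @ J_m               # null-space projector of J_m
    secondary = np.linalg.pinv(J_v @ N_m) @ (p_dot_v - J_v @ Jm_dag @ p_dot_m)
    return Jm_dag @ p_dot_m + alpha * secondary

# Toy 4-joint system: joints 1-2 drive the "hand", joints 1 and 4 the
# "gaze" (joint 1 is shared, like the torso). Arbitrary example matrices.
J_m = np.array([[1.0, 1.0, 0.0, 0.0],
                [0.0, 1.0, 1.0, 0.0]])
J_v = np.array([[1.0, 0.0, 0.0, 1.0]])
p_dot_m = np.array([0.2, -0.1])
p_dot_v = np.array([0.05])
q_dot = negotiated_q_dot(J_m, J_v, p_dot_m, p_dot_v)
```

Because the secondary term lies entirely in the null space of J_m, the hand velocity is reproduced exactly; with enough redundancy (as in this toy example) the visual velocity is met as well, which mirrors the negotiation idea in the text.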
6.3.2 Movement Recording
Subjects
Five male and five female subjects participated in the experiments as paid volunteers. Subjects were students from the University of Michigan recruited through in-class advertisements or email announcements. The mean age of the subjects was 22.3 years (SD: ±1.8). All subjects were free from any known musculoskeletal or neurological disorders and had normal vision (20/20 or better) without corrective lenses. Mean stature and body weight were 170.9 cm (SD: ±12.0) and 67.2 kg (SD: ±16.3), respectively. All subjects were right-handed.
Equipment
Visual targets were placed on an arc (radius = 115 cm, arc length = 300 cm) set horizontally in front of the subject. The arc position was adjusted so that the mid-point of the arc coincided with the mid-sagittal plane of the subject. The elevation of the arc was set either at eye level or at 50 cm below eye level (Figure 6.2A). The horizontal forward distance between the arc and the subject's sternum was either 100% or 155% of the individual arm reach distance. Reach distance was measured as the length between the right acromion process and the tip of the right index finger while the upper arm, forearm, and hand were extended horizontally at shoulder level. The mean distance across all subjects was 65 cm and 95 cm from the sternum for the 100% and 155% reach distance configurations, respectively (Figure 6.2B).
Each target was composed of alphanumeric characters (0–9, A, C, E, F, H, L, U, or P) displayed on a seven-segment LED whose visual angle was approximately 0.50°. Five targets were placed in each hemisphere. In the 100% reach distance condition, the interval between targets was approximately 15°, and the leftmost and rightmost target positions were approximately −75 and +75° of azimuth with respect to the mid-sagittal plane, respectively. In the 155% reach distance condition, the target interval was approximately 10° and the most eccentric positions corresponded to ±50°. A separate LED display, used as the initial fixation point, was placed in the mid-sagittal plane at eye level for all arc elevation settings. The subject was seated on a chair (seat pan height = 40 cm, seat pan width = 50 cm, back support height = 58 cm) throughout the experiment. A pad equipped with a microswitch was placed on the subject's right lap and served as the home position of the right index finger. The position of the lap switch was adjusted so that the elbow-included angle was approximately 90° when the index finger was on the button. The entire room was dimly lit during the experiment.
Figure 6.2. Configuration of targets in a rear view (A) and top view (B). Although multiple arcs were illustrated within a figure, only one arc was set at a time facing the subject for each block, as illustrated in panel C.
Movement Recordings
An electromagnetic motion capture system (Flock of Birds™, Ascension Technology) with sensors placed on the subject's forehead, upper torso (C7), lower torso (L5), upper arm, and right hand was used to record the movements of the head, neck, clavicle, torso, upper arm, forearm, and hand. A splint was placed on and wrapped around the right hand to maintain the index finger posture throughout the experiments, so that the coordinates of the six-degree-of-freedom sensor on the right hand could be used to estimate the fingertip position. The Cartesian coordinates and orientations of the sensors were used to estimate the joint center locations (Reed et al., 1999). The movements were recorded at 25 Hz, and the obtained trajectories of each landmark were smoothed off-line using a second-order Butterworth low-pass filter with a 6 Hz cutoff frequency.
Procedures
When the initial fixation point was illuminated, accompanied by a signal tone of 500 Hz/0.1 s, the subject was asked to align the nasion with the initial fixation point and depress the switch on the right lap pad with the right index finger. After a delay of 2 seconds, the initial fixation display was turned off and a randomly selected eccentric target was displayed. A 2000 Hz/0.1 s tone signaled the subject to initiate the reach movement. The subject was asked to point just below the target with the right index finger, which activated the microswitch placed on the right index fingertip. The alphanumeric characters presented on all targets changed at a rate of one per second, and the subject was asked to read each character aloud throughout the trials. The eccentric target was turned off 2 seconds after fingertip contact. Each block of target presentations was composed of twenty trials (10 target locations × 2 replications). Target locations were randomized and balanced within a block. A total of four blocks, each corresponding to one of four target arc locations (2 distances × 2 arc elevations), was recorded for each subject. A five-minute rest period was provided after each block. The order of blocks was balanced and randomized across the subjects.
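The off-line smoothing step described above (second-order Butterworth low-pass filter, 6 Hz cutoff, 25 Hz sampling) can be sketched as follows; the zero-phase `filtfilt` application is an assumption, as the text does not specify the filtering direction:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 25.0  # sampling rate of the motion capture system (Hz)
FC = 6.0   # low-pass cutoff frequency (Hz)

def smooth_trajectory(x, fs=FS, fc=FC, order=2):
    """Second-order Butterworth low-pass filter applied zero-phase,
    as in the off-line smoothing of each landmark trajectory."""
    b, a = butter(order, fc, btype="low", fs=fs)
    return filtfilt(b, a, x)

# Example: a slow 1 Hz movement contaminated with 10 Hz noise.
t = np.arange(0, 4, 1 / FS)
raw = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 10.0 * t)
smoothed = smooth_trajectory(raw)
```

At a 6 Hz cutoff, the 1 Hz movement component passes nearly unchanged while the 10 Hz component is strongly attenuated, which is the intent of the smoothing step.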
The procedures were reviewed and approved by the University of Michigan Health Sciences Institutional Review Board for compliance with the appropriate guidelines, state and federal regulations.
6.4 Results
Movements were simulated using a model of the manual subsystem alone (Eq. 6.5), a model of the visual subsystem alone (Eq. 6.5), or the incorporated manual and visual subsystems in negotiation (Eq. 6.9). Examples of torso angle profiles generated by each model and the corresponding motion recordings are illustrated in Figure 6.3. For the torso angles, the model of the manual subsystem alone (Figure 6.3B) made an accurate prediction for q2 (LB: lateral bending), while the model of the visual subsystem alone (Figure 6.3C) was better at predicting q1 (FE: flexion/extension).
Figure 6.3. Torso angle simulations. The definition of torso angles and the corresponding reach postures are illustrated in the right panels.
This observation is in agreement with a direct kinematics model (Figure 6.4) indicating that lateral bending of the torso may not contribute significantly to head rotation in the direction of an eccentric target (Figure 6.4A: visual subsystem simulation), while it may be of primary importance for moving the torso and the hand toward the target (Figure 6.4B: manual subsystem simulation). Hence it is suggested that the model with negotiated control, which benefits from both individual models (Figure 6.4C: visual and manual negotiated control), provides a better “combined” accuracy for all three torso angles (Figure 6.3D).
Figure 6.4. A) Simulation of visual subsystem movement; B) Simulation of manual subsystem movement; C) Simulation of the negotiated control (manual + visual); D) Actual movement recording
In general, the accuracy for eight of the fourteen joint angles of the combined visuo-manual linkage system was significantly improved by the use of a negotiated control model (Table 6.2). Even though q1 (torso flexion/extension angle) did not show a significant difference as a main effect when the models with and without a negotiation function were contrasted, the negotiation × target height interaction effect indicated that the negotiation model made significantly more accurate predictions for the low target positions (p < 0.05), where downward flexion movements are required for visual gaze and manual reach. Negotiation × target eccentricity interaction effects indicated that the prediction accuracy of the negotiation model for q2 (torso lateral bending) increases with target eccentricity. These results indicate that the negotiation function improves model prediction accuracy when the body segments are effectively involved in a motion. Likewise, the accuracy of the negotiation model is better for reach movements to targets away from the sagittal plane for all joint angles.
Table 6.2. RMS error of the joint angles predicted by each model. Shaded rows correspond to significant improvements in prediction accuracy by the negotiation model.

Joint | Without Negotiation (mean ± se°) | With Negotiation (mean ± se°) | Significance
q1  |  3.5 ± 0.3 |  4.2 ± 0.4 | p < 0.05
q2  |  2.3 ± 0.2 |  3.0 ± 0.3 | Non-significant
q3  |  7.7 ± 0.9 |  5.1 ± 0.5 | p < 0.05
qm4 |  6.2 ± 0.7 |  8.3 ± 1.0 | p < 0.05
qm5 |  8.1 ± 1.1 |  7.1 ± 1.0 | p < 0.05
qm6 | 25.4 ± 2.8 | 19.5 ± 2.3 | p < 0.05
qm7 | 14.2 ± 1.6 | 11.8 ± 1.1 | p < 0.05
qm8 | 39.1 ± 4.2 | 35.2 ± 3.7 | p < 0.05
qm9 | 14.7 ± 0.9 | 11.8 ± 1.0 | p < 0.05
qv4 |  8.9 ± 1.0 |  5.8 ± 1.0 | p < 0.05
qv5 |  4.0 ± 0.5 |  5.1 ± 0.6 | p < 0.05
qv6 |  6.5 ± 0.6 |  5.3 ± 0.5 | p < 0.05
qv7 |  7.9 ± 0.3 |  7.6 ± 0.4 | Non-significant
qv8 |  4.1 ± 0.5 |  4.0 ± 0.4 | Non-significant
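The RMS error statistic summarized in Table 6.2 compares a predicted joint-angle trajectory against the recorded one, sample by sample. A minimal sketch in Python (the helper name `rms_error` is illustrative, not from the dissertation):

```python
import numpy as np

def rms_error(predicted, recorded):
    """Root-mean-square error (in degrees) between a predicted and a
    recorded joint-angle trajectory sampled at the same time instants."""
    d = np.asarray(predicted, dtype=float) - np.asarray(recorded, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))
```

The per-joint values in the table would then be averages of this statistic across trials and subjects.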
The prediction of end-posture angles was also generally improved by the introduction of a negotiation model (Table 6.3). Most remarkably, the prediction error for torso axial rotation (q3) was reduced by 85% when the negotiation model was used. In addition, the prediction accuracy for q1 (torso flexion/extension) and q2 (torso lateral bending) increases with target height and eccentricity, respectively, which is consistent with the observations about the RMS error statistics above.
Table 6.3. Error of the joint angles of the end posture predicted by each model. Shaded rows correspond to significant improvements in prediction accuracy by the negotiation model.

Joint | Without Negotiation (mean ± se°) | With Negotiation (mean ± se°) | Significance
q1  |   0.8 ± 1.0  |   2.4 ± 0.9  | p < 0.05
q2  |   0.7 ± 0.9  |  -2.4 ± 1.0  | p < 0.05
qm3 |  11.5 ± 1.5  |   1.7 ± 1.2  | p < 0.05
qm4 |   7.9 ± 2.0  |  13.0 ± 2.1  | p < 0.05
qm5 |  13.8 ± 1.8  |  12.2 ± 1.8  | p < 0.05
qm6 | -41.0 ± 4.4  | -31.1 ± 3.7  | p < 0.05
qm7 |  24.7 ± 3.4  |  14.3 ± 3.6  | p < 0.05
qm8 |  53.3 ± 10.7 |  46.3 ± 10.7 | p < 0.05
qm9 |  25.8 ± 2.0  |  19.7 ± 2.4  | p < 0.05
qv4 | -13.6 ± 1.9  |  -7.9 ± 1.8  | p < 0.05
qv5 |   2.0 ± 1.2  |   6.2 ± 1.1  | p < 0.05
qv6 |  -8.2 ± 1.4  |  -3.6 ± 1.3  | p < 0.05
qv7 |   8.2 ± 1.2  |   5.7 ± 1.4  | p < 0.05
qv8 |   3.4 ± 1.0  |  -2.0 ± 0.9  | p < 0.05
6.5 Discussion
In visually guided reach movements, the visual and manual subsystems act to locate the target and move the hand to the target, respectively (Vercher, Magenes, Prablanc, & Gauthier, 1994; Kim & Martin, 2002). The present model proposes a method of incorporating multiple subsystems with individual end-effectors using a framework of differential inverse kinematics. In this process of incorporation, negotiation is required in order to share common resources among subsystems that are each dedicated to manipulating their respective end-effector. The common links may be controlled exclusively by one of the subsystems, while the other subsystem’s control is restrained. Movement accuracy can be viewed as the result of this negotiation/coordination. Dominance of one subsystem over another may be a function of task requirements in a specific context. It was found that, in general, the accuracy of the predicted joint angle trajectories was better when negotiation was introduced as a secondary cost function in the differential inverse kinematics algorithms. Accordingly, the results of this model suggest that 1) the central controller takes the constraints of each subsystem into account to find an optimal set of joint angles; hence coordination and movement synergy can be viewed as the negotiation over the shared control between multiple subsystems involved in a
movement; 2) the advantage of the negotiation model is more prominent for reach movements to low and eccentric targets. This latter effect shows that the accuracy of the model increases with the effective contribution of a joint to a visually guided reach movement. Hence, the accuracy of the negotiation model may be better than it appears when only the average RMS errors pooled over all target locations are considered. The statistically significant interaction effects support this hypothesis. It should be noted that the proposed model assumes that the multiple goals of all participating subsystems are pursued simultaneously at any given time. However, as the number of subsystems the CNS has to consider at a time increases, simultaneous control of multiple subsystems would no longer be an efficient solution. Hence, as an alternative method, the activities of subsystems can be divided and sequenced, which is consistent with the multiple phases observed in joint angle and end-effector trajectories (Paillard & Amblard, 1985; Jeannerod, 1988). It would also be difficult to verify that end-effector trajectories are predetermined prior to movement onset and that joint angles are manipulated on-line to keep the end-effector on the desired trajectory (reviewed by Sergio & Scott, 1998; Todorov & Jordan, 2002). However, the proposed model suggests that goal-directed human movements are accomplished by multiple subsystems in charge of different aspects of the movement goal, and that the CNS should coordinate the multiple subsystems, particularly for movements performed in three-dimensional space.
CHAPTER 7 MULTI-PHASIC COORDINATION IN VISUALLY-GUIDED REACH MOVEMENTS
7.1 Abstract
Coordination can be understood as the organization of the cooperation among multiple subsystems involved in movement control, with different individual goals achieved through different principles. The purpose of the present study is to develop a kinematic model of coordinated movements of the head and upper extremities in three-dimensional unconstrained seated reach tasks. Observation of subjects performing reach movements to visual targets indicates that three distinct phases can be identified in reach movement kinematics: 1) lift-off (fast head movement followed by a preparatory hand displacement); 2) transport (compensatory head movement to maintain aiming direction, accompanied by hand displacement to near the target); and 3) landing (slow approach to the target, mostly along the line of sight). It was assumed that the movements within each phase are controlled by phase-specific modes: feed-forward direction-based, feed-forward posture-based, and feedback inverse kinematics modes, respectively. Movement coordination was modeled using nodes and connections that represent the movement components and the transitions from one phase to another. It was also assumed that a transition can be made depending on the evaluation of each component’s outcome. The simulation results showed that the model can generate the natural pattern of coordinated reach movements, and furthermore that the connections among movement component nodes can be reorganized to simulate the kinematic variations induced by different movement strategies and task constraints. The proposed model supports the hypothesis that even though individual movement components may
have been optimized for the phase-specific objectives, coordination among these components under minimal task constraints can exhibit random behavior whose variability is a function of the global task goal.
7.2 Introduction
In the previous chapters, it was suggested that one of the goals of head movement control might be to achieve a preplanned posture in order to ensure accurate gaze orientation, spatial representation of the target position, and subsequent hand movements to a target (Chapters 2, 3, and 4). Furthermore, the head movement controller takes the whole-body posture and movement into account when other tasks, such as reaching and lifting, are performed (Chapters 5 and 6). More specifically, the visual subsystem, composed of the head/neck and eyes, needs to negotiate the use of common links with the manual subsystem, engaged in reach movements, in order to achieve the common and respective goals of these otherwise independent subsystems. Coordination can therefore be understood as organized interaction among multiple subsystems that have different goals achieved through different principles. A number of attempts have been made to explain movement control using a single unifying principle, including minimum jerk (Flash & Hogan, 1985), minimum weighted joint velocity (Zhang et al., 1999; Wang, 2002), equilibrium point (Feldman, 1966; Latash, 1993; Bizzi & Hogan, 1992), and minimum variance of the end-effector positions (Wolpert et al., 1995). However, in many cases the prediction validity of these models has been limited to a small range around the initial configuration under specific constraints, and models based on optimization criteria have not been verified for unconstrained three-dimensional movements. As suggested in Chapter 6, unconstrained three-dimensional movements may result from the cooperation of multiple subsystems and controllers having different movement goals, frames of reference, and control laws. Coordination is therefore necessary to organize this cooperation.
In Chapter 6, it was assumed that the CNS continuously takes the goals of the visual and manual subsystems into account, but as the number of subsystems and controllers being considered increases, it may become difficult
to find solutions to satisfy all system requirements simultaneously throughout the course of movements. Hence it is suggested that the CNS may prioritize the role of subsystems and their controllers, and sequence the activities over time in a coordinated manner. In addition, the sequenced movement component performed by each subsystem would compose a distinct phase, exhibiting unique kinematic features in joint angles and endeffector trajectories.
The objectives and hypotheses of the present study are as follows:
Objectives
1) To identify the multiple phases that constitute visually-guided, unconstrained reach movements in three-dimensional space
2) To investigate the aspects of coordination among multiple movement phases
3) To develop a model of sequencing and coordination based on the observations
4) To investigate the integration and organization of multiple subsystems (head and hand), control modes (posture and movement), and reference frames (egocentric and allocentric), with respect to the global task goal
Hypotheses
1) Three-dimensional unconstrained reach movements are composed of multiple phases with unique goals and kinematic patterns, each controlled in a specific mode.
2) The activities of multiple phases show a pattern of sequential coordination rather than synchronous movement onset and offset of subsystems.
3) The coordination of multiple phases and components can be performed by a supervisory controller that evaluates the outcome of each component and selects the appropriate mode of the subsequent movement component.
4) Coordination can be achieved without optimization, and exhibits variability.
7.3 Experiments
7.3.1 Methods
Subjects
Five male and five female subjects participated in the experiments as paid volunteers. Subjects were students from the University of Michigan recruited through in-class advertisements or email announcements. The mean age of the subjects was 22.3 years (SD: ±1.8). All subjects were free from any known musculoskeletal or neurological disorders, and had normal vision (20/20 or better) without corrective lenses. Mean stature and body weight were 170.9 cm (SD: ±12.0) and 67.2 kg (SD: ±16.3), respectively. All subjects were right-handed.
Equipment
Visual targets were placed on an arc (radius = 115 cm, arc length = 300 cm) set horizontally in front of the subject. The arc position was adjusted so that the mid-point of the arc coincided with the mid-sagittal plane of the subject. The elevation of the arc was set either at eye level or at 50 cm below eye level. The distance between the subject and the arc was set as follows:
• Normal reach situation: The distance between the mid-point of the arc and the subject’s sternum was set to 100% of the extended arm and hand length of each subject (Figure 7.1A). The mean distance was 60 cm.
• Extended reach situation: The distance was set to 155% of the extended arm and hand length (Figure 7.1B). The mean distance was 95 cm.
Figure 7.1. Configuration of the target arc array. A) Normal reach situation; B) Extended reach situation
Two visual conditions were imposed for the normal reach distance situation. The subject was instructed either to direct gaze to the target when it appeared and then maintain the gaze in that direction until the target disappeared, or to look at the target and then return gaze to the initial fixation direction as early as possible. The intent of the latter condition was to measure the effect of imposing an additional task (e.g., maintaining the front view for driving) that must be performed concurrently with a reach task (e.g., manipulating a control). The protocol for each visual condition was as follows:
• Gaze remaining on the target: The subject was asked to look at the initial fixation point in the mid-sagittal plane with the hand at the home location on the right lap. When an eccentric target was illuminated, the subject was required to reach the target with the right index finger and to maintain the final head and whole-body posture until a tone indicated to return the gaze to the initial fixation point and the hand to the home location.
• Gaze constrained to return: The initial fixation point was kept illuminated throughout the trial, even when the eccentric target was turned on. The subject was asked to look at the eccentric target but was required to redirect the gaze as early as possible to the initial fixation point. The hand was returned to the home location after a tone signal.
Each target was composed of alphanumeric characters (0 – 9, A, C, E, F, H, L, U, or P) displayed on a seven-segment LED whose visual angle was approximately 0.50°. Five
targets were placed in each hemisphere. In the normal reach situation, the interval between the targets was approximately 15°, and the leftmost and rightmost target positions were approximately −75 and +75° of azimuth with respect to the mid-sagittal plane, respectively. In the extended reach situation, the target interval was approximately 10° and the most eccentric positions corresponded to ±50°. A separate LED display, used as the initial fixation point, was placed in the mid-sagittal plane at eye level for all arc elevation settings. The subject was seated on a chair (seat pan height = 40 cm, seat pan width = 50 cm, back support height = 58 cm) throughout the experiment. A pad equipped with a micro switch was placed on the subject’s right lap and served as the home position of the right index finger. The position of the lap switch was adjusted so that the elbow-included angle was approximately 90° when the index finger was on the button. The entire room was dimly lit during the experiment.
Movement Recording
An electromagnetic motion capture system (Flock of Birds™, Ascension Technology) with five sensors that measure movements with six degrees of freedom was used to record torso, upper extremity, and head movements. The sensors were placed on the forehead, C7, the right upper arm, the right hand, and the low back (midpoint between the left and right posterior superior iliac spines). A calibration procedure was carried out to determine the location of anatomical landmarks (nasion, tragion, infraorbitale, etc.) in each sensor’s local reference frame (Figure 7.2A). The motions of the landmarks in a global reference frame were then calculated as follows:

p^global(t) = T_sensor^global(t) p^sensor

where p^sensor denotes the location of a landmark in a sensor-attached reference frame, and T_sensor^global(t) represents the recorded homogeneous transformation matrix describing the orientation and location of the sensor-attached reference frame as a function of time.
Figure 7.2. A) Placement of motion sensors and estimated position of body landmarks; B) Estimated joint center locations
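The landmark reconstruction described above amounts to multiplying a 4 × 4 homogeneous transform by the landmark position expressed in homogeneous coordinates. A minimal sketch assuming NumPy (`landmark_global` is an illustrative name, not part of the original analysis code):

```python
import numpy as np

def landmark_global(T_sensor_to_global, p_sensor):
    """Map a landmark from the sensor-attached frame to the global frame.

    T_sensor_to_global: 4x4 homogeneous transformation matrix recorded at time t.
    p_sensor: 3-vector, fixed landmark position in the sensor-attached frame.
    """
    # Append 1 to form homogeneous coordinates, transform, then drop the 1
    p_h = np.append(np.asarray(p_sensor, dtype=float), 1.0)
    return (np.asarray(T_sensor_to_global, dtype=float) @ p_h)[:3]
```

Applying this at every recorded time step yields the landmark trajectory in the global frame.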
A splint was placed and wrapped in the right hand to maintain the index finger posture throughout the experiments, so that the coordinates of the six degrees of freedom sensor on the right hand could be used to estimate the fingertip position. The movements were recorded at a 25 Hz sampling frequency, and the trajectory of each landmark was smoothed in an off-line process using a zero-phase-shift second-order Butterworth low-pass filter with a 6 Hz cutoff frequency. The Cartesian coordinates and orientations of the sensors were used to estimate the joint center locations (Reed et al., 1999; Figure 7.2B).
Procedure
When the initial fixation point was illuminated, accompanied by a signal tone of 500 Hz/0.1 second, the subject was asked to align the nasion with the initial fixation point and depress the switch on the right lap pad with the right index finger. After a delay of 2 seconds, the initial fixation display was turned off and a randomly selected eccentric target was displayed. A 2000 Hz/0.1 second tone signaled the subject to initiate the reach movement. The subject was asked to point just below the target with the right index finger, which activated the micro switch placed on the right index fingertip. The
alphanumeric characters presented on all targets were changed at a rate of one per second. The subject was asked to read each alphanumeric character aloud throughout the trials. The eccentric target was turned off 2 seconds after the fingertip contact. Each block of target presentations was composed of twenty trials (10 target locations × 2 replications). Target locations were randomized and balanced within a block. A total of four blocks, each corresponding to one of the four target arc array locations (2 distance situations × 2 arc elevations), were recorded for each subject. A five-minute rest period was provided after each block. The order of blocks was balanced and randomized across subjects. The procedures were reviewed and approved by the University of Michigan Health Sciences Institutional Review Board for compliance with the appropriate guidelines and state and federal regulations.
Data Analysis
The calculated positions of the joint centers (Figure 7.2B) were used to compose a seven-link-segment biomechanical model consisting of the head, neck, torso, right clavicle, upper arm, forearm, and hand. The link segments, with corresponding degrees of freedom, are illustrated in Figure 7.3. The time-dependent joint angles for each degree of freedom were calculated from the joint center positions, and a total of eighteen joint angles were used to describe the movements.
Figure 7.3. A) Definition of torso – clavicle − upper arm − forearm − hand links (12 degrees of freedom); B) Definition of neck − head links (6 degrees of freedom). The arrow extending from each joint indicates the positive direction of joint rotation in a right-hand rule.
7.3.2 Results
Composition of Reach Movements
Fingertip trajectories were not linear between the initial and final positions, and presented several inflection points. Prominent inflection points occurred immediately after movement onset and immediately before movement completion. Based on this observation, it can be suggested that unconstrained visually-guided reach movements comprise a sequence of multiple phases: 1) lift-off; 2) transport; and 3) landing (Figure 7.4). The lift-off phase takes place at the onset of the hand/fingertip movement (Figure 7.5 A & B). The head starts to move earlier than the hand.
Figure 7.4. A) Movement onset; B) Liftoff phase; C) Transport phase; D) Landing phase
Figure 7.5. A) Definition of the lift-off phase B) Head and elbow movements during the lift-off phase
In the present experiment, the elbow had to extend to move the hand toward the target, since the target arc was located either at 100% or 155% of hand/arm reach distance, and the elbow remained flexed before the onset of hand movement. However, it was observed that the elbow initially flexed, and then reversed to extend (Figure 7.5B). More specifically, 81% to 94% of the trials exhibited a movement pattern of initial elbow flexion. The elbow then started to extend at about 22% to 31% of the normalized movement time, on average. The initial elbow flexion was 25% to 30% of the final elbow extension angle (full extension angle is defined as 180°), on average (Table 7.1).
Table 7.1. Frequency, timing, and angular displacement of initial elbow flexion for all targets

Proportion of trials exhibiting initial elbow flexion, mean (SD) across subjects:
100/0: 81% (26%); 100/−50: 85% (14%); 155/0: 94% (11%); 155/−50: 94% (13%)
(per-subject proportions in the 100/0 condition, subjects 1–6: 61%, 94%, 67%, 100%, 100%, 94%)

Normalized movement time of the fingertip when elbow reversal occurs, mean (SD), subjects 1–10 followed by the overall mean:
100/0: 0.32 (0.07), 0.38 (0.16), 0.16 (0.08), 0.43 (0.09), 0.35 (0.06), 0.26 (0.05), 0.35 (0.08), 0.15 (0.05), 0.27 (0.10), 0.35 (0.06); mean 0.31 (0.13)
100/−50: 0.32 (0.07), 0.32 (0.09), 0.27 (0.13), 0.41 (0.07), 0.29 (0.05), 0.24 (0.05), 0.32 (0.06), 0.13 (0.03), 0.20 (0.06), 0.31 (0.07); mean 0.29 (0.10)
155/0: 0.21 (0.06), 0.28 (0.07), 0.13 (0.03), 0.26 (0.05), 0.20 (0.06), 0.18 (0.05), 0.30 (0.05), 0.12 (0.03), 0.27 (0.10), 0.24 (0.05); mean 0.22 (0.08)
155/−50: 0.19 (0.04), 0.30 (0.06), 0.16 (0.07), 0.26 (0.07), 0.20 (0.05), 0.21 (0.05), 0.34 (0.05), 0.14 (0.05), 0.24 (0.10), 0.20 (0.05); mean 0.22 (0.08)

Initial elbow flexion angle in proportion of the final elbow extension angle (θflex/θext), mean (SD), subjects 1–10 followed by the overall mean:
100/0: 23% (15%), 51% (38%), 11% (12%), 28% (12%), 54% (17%), 15% (13%), 47% (44%), 10% (8%), 9% (4%), 28% (10%); mean 30% (28%)
100/−50: 15% (10%), 23% (9%), 45% (45%), 28% (13%), 26% (10%), 7% (4%), 45% (29%), 12% (5%), 10% (7%), 26% (10%); mean 25% (22%)
155/0: 13% (6%), 24% (7%), 32% (20%), 33% (11%), 40% (12%), 8% (6%), 54% (21%), 11% (4%), 19% (15%), 37% (10%); mean 28% (18%)
155/−50: 13% (6%), 28% (10%), 30% (24%), 28% (7%), 35% (15%), 8% (4%), 42% (18%), 14% (5%), 34% (39%), 32% (8%); mean 27% (19%)

100/0: target arc at 100% of the extended arm and hand length, at eye level
100/−50: target arc at 100% arm/hand length, 50 cm below eye level
155/0: target arc at 155% arm/hand length, at eye level
155/−50: target arc at 155% arm/hand length, 50 cm below eye level
The azimuth of fingertip displacement during the lift-off phase exhibits a positive correlation with the target azimuth, with r2 coefficients ranging between 0.69 and 0.93 (Figure 7.6A). Neither the elevation nor the distance of the fingertip movement in the lift-off phase shows a correlation with the corresponding parameter of the target (Figure 7.6 B & C). However, in spite of a large inter-subject variability, it was observed that the fingertip travel distance in the lift-off phase increased with the absolute magnitude of the target azimuth (Figure 7.6D). Second-order polynomial regressions indicate low (0.04) to moderate (0.46) r2 coefficients.
Figure 7.6. Direction and magnitude of fingertip movement in the lift-off phase. A) Lift-off azimuth versus target azimuth; B) Lift-off elevation versus target elevation; C) Lift-off distance versus target distance; D) Lift-off distance versus target azimuth. Filled circles: target arc at 100% reach distance at eye level. Hollow circles: target arc at 100% reach distance, 50 cm below eye level. Crosses: target arc at 155% reach distance at eye level. Squares: target arc at 155% reach distance, 50 cm below eye level. Data from subject 4.
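The second-order polynomial regressions reported above (lift-off travel distance versus target azimuth) can be reproduced with a least-squares quadratic fit and its coefficient of determination. A minimal sketch assuming NumPy (`quadratic_r2` is an illustrative name):

```python
import numpy as np

def quadratic_r2(x, y):
    """r^2 of a second-order polynomial regression of y on x,
    e.g., lift-off travel distance versus target azimuth."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Least-squares fit of y = a*x^2 + b*x + c, then evaluate the fit
    y_hat = np.polyval(np.polyfit(x, y, 2), x)
    ss_res = np.sum((y - y_hat) ** 2)          # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)       # total sum of squares
    return float(1.0 - ss_res / ss_tot)
```

An r2 near 1 indicates that the quadratic captures nearly all of the variance; the values of 0.04 to 0.46 reported above indicate a weak to moderate fit.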
During the transport phase following the lift-off, the head-aiming direction is maintained/stabilized near the target. At the end of the transport phase the fingertip trajectory shows an inflection at the estimated boundary of the foveal field of view (Figure 7.7). It is not clear whether the hand moves faster during the transport phase than during the lift-off phase. The relative timing of the fingertip peak velocity with respect to lift-off completion (= elbow reversal) varied from subject to subject. For at least nine subjects, the fingertip peak velocity occurred after lift-off phase completion in the 155% reach-distance situation (Table 7.2), while in the 100% reach-distance condition the fingertip peak velocity occurred earlier than lift-off phase completion for at least five subjects. For all target arc settings, the transport phase accounted for the longest fingertip travel distance of the three phases.
Table 7.2. Time of the peak fingertip velocity in proportion of normalized movement time, mean (SD), subjects 1–10 followed by the overall mean:
100/0: 0.35 (0.13), 0.36 (0.17), 0.31 (0.09), 0.48 (0.08), 0.30 (0.08), 0.30 (0.13), 0.31 (0.15), 0.29 (0.11), 0.20 (0.05), 0.39 (0.11); mean 0.34 (0.13)
100/−50: 0.33 (0.17), 0.38 (0.10), 0.34 (0.12), 0.43 (0.12), 0.31 (0.10), 0.30 (0.13), 0.28 (0.14), 0.22 (0.07), 0.32 (0.09), 0.38 (0.08); mean 0.33 (0.12)
155/0: 0.36 (0.09), 0.41 (0.13), 0.26 (0.10), 0.40 (0.06), 0.36 (0.12), 0.31 (0.09), 0.32 (0.08), 0.18 (0.08), 0.34 (0.13), 0.31 (0.07); mean 0.33 (0.12)
155/−50: 0.40 (0.11), 0.47 (0.14), 0.31 (0.09), 0.35 (0.06), 0.30 (0.09), 0.43 (0.17), 0.45 (0.11), 0.19 (0.06), 0.37 (0.16), 0.25 (0.04); mean 0.35 (0.14)
100/0: target arc at 100% of the extended arm and hand length, at eye level
100/−50: target arc at 100% arm/hand length, 50 cm below eye level
155/0: target arc at 155% arm/hand length, at eye level
155/−50: target arc at 155% arm/hand length, 50 cm below eye level
Shaded cells indicate that the hand peak velocity occurs significantly later than elbow reversal (p < 0.05)
Figure 7.7. Definition of the transport phase.
The landing phase, concluding a reach movement, is characterized by slow fingertip movements. In this phase, the fingertip is usually located within the foveal field of view aimed at the target (Figure 7.8 A & B). It was observed that when the fingertip velocity dropped below 50% of its peak level, the fingertip spent approximately 55% to 80% of the remaining movement time within a visual angle of 5° around the line of sight (Figure 7.8C & D).
Figure 7.8. Hand trajectory and field of view during the landing phase. A) Initial and final postures of a reach movement. The field of view was estimated from the target and head position. B) The hand does not move linearly to the target. An inflection point can be observed at the boundary of the foveal field of view. The visual angle α denotes the angle between the head-to-fingertip vector and the head-to-target vector. C) During the landing phase the fingertip moves along the line of sight, as evidenced by the minimal visual angle α. D) Distribution of visual angle (α) as a proportion of the remaining movement time after the fingertip velocity drops 50% below the peak level.
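The visual angle α in Figure 7.8 is the angle between the head-to-fingertip and head-to-target vectors. It can be computed directly from the recorded landmark positions; the following is a minimal sketch assuming NumPy (`visual_angle` is an illustrative name):

```python
import numpy as np

def visual_angle(head, fingertip, target):
    """Angle (degrees) between the head-to-fingertip vector and the
    head-to-target vector, as used to test whether the fingertip stays
    within the foveal field of view during the landing phase."""
    u = np.asarray(fingertip, dtype=float) - np.asarray(head, dtype=float)
    v = np.asarray(target, dtype=float) - np.asarray(head, dtype=float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against floating-point values slightly outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
```

With the 5° criterion used above, a landing-phase sample would count as "on the line of sight" when `visual_angle(...) <= 5.0`.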
Transition between Phases The data indicate that the onset of hand movement coincides with the peak velocity of head movement. Similarly, the onset of torso movement coincides with the first peak velocity of hand movement (Figure 7.9).
Figure 7.9. Relative timing of head-handtorso movements.
The temporal relationship between hand movement onset and head movement peak velocity was described by a hand precedence index (HPI), defined as follows:

HPI = t(hand onset) − t(head peak velocity)    (Eq. 7.2)
Hence, a null HPI represents the coincidence of the two events, while a positive HPI indicates that the hand movement starts later than the head peak velocity, and vice versa. A positive HPI describes a “conservative” behavior, while a negative HPI corresponds to a “risky” behavior (Figure 7.10). The HPI distribution for each subject indicated that six out of ten subjects showed a conservative behavior, one subject showed a risky behavior, and three subjects exhibited a neutral behavior. The categorization was based on the skewness of the individual HPI distributions.
Figure 7.10. Distribution of hand precedence index (HPI). A) Neutral behavior, where the HPI distribution is symmetric; B) Conservative behavior (right-skewed); C) Risky behavior (left-skewed).
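The HPI of Eq. 7.2 and the skewness-based categorization of Figure 7.10 can be sketched as follows. This is an illustration, not the analysis code of the dissertation; in particular, the skewness threshold of 0.5 is an assumed value, since the text does not specify the cutoff used:

```python
import numpy as np

def hand_precedence_index(t_hand_onset, t_head_peak_velocity):
    """HPI (Eq. 7.2): positive = hand starts after head peak velocity."""
    return t_hand_onset - t_head_peak_velocity

def classify_behavior(hpi_samples, threshold=0.5):
    """Label a subject's HPI distribution by its sample skewness.

    threshold is an assumed cutoff for illustration only.
    """
    x = np.asarray(hpi_samples, dtype=float)
    skew = np.mean(((x - x.mean()) / x.std()) ** 3)  # population skewness
    if skew > threshold:
        return "conservative"   # right-skewed distribution
    if skew < -threshold:
        return "risky"          # left-skewed distribution
    return "neutral"            # approximately symmetric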
7.4 Modeling
7.4.1 Model Components
Lift-Off Phase
The results of the present experiment indicate that during the initial phase, the hand makes a preparatory movement characterized by “elbow reversal”. This observation is in agreement with an earlier study (Desmurget & Prablanc, 1997). The reversed movement direction leads to the assumption that the lift-off phase relies on a different control mode than the subsequent transport phase. As previously described, the movement direction during the lift-off phase is correlated with the target direction, while the travel distance of the fingertip during the lift-off phase is not correlated with the target distance (Figure 7.6). Similar results were also found for head movement kinematics (Chapter 2), in which the amplitude of the initial head movement is weakly correlated with target eccentricity. Hence it is suggested that the lift-off phase corresponds to a direction-based, rather than a position-based, control mode. Trajectory (Desmurget & Prablanc, 1997) and electromyographic (Brown & Cooke, 1981) data have indicated that a target position perturbation immediately before movement onset does not induce corrections during the initial phase of the movement. In
addition, since the head and gaze are moving toward the target during the lift-off phase, it can be assumed that accurate visual feedback of the hand and fingertip positions may not be available. However, proprioceptive information and a feed-forward prediction of the system state are available. Hence it is proposed that a fast feed-forward control, rather than a slow visual feedback control, is predominantly used during the lift-off phase. The framework of the proposed model, utilizing direction-based feed-forward control, is illustrated in Figure 7.11 for the simple case of two-joint (shoulder and elbow) movements in the sagittal plane. In the given joint configuration and target location, an instant rotation of the elbow joint results in a vertical displacement of the end-effector (fingertip), as denoted by the vector v_elbow, which is orthogonal to the target vector (v_target). In contrast, when the shoulder rotates, the end-effector moves toward the target (v_shoulder). This suggests that the shoulder joint rotation should be facilitated by a large weighting factor, while the elbow joint rotation should be minimized by a small weighting factor. The weighting factor can be obtained by the projection of the end-effector displacement vector (v_shoulder or v_elbow) onto the target direction vector (v_target).
Figure 7.11. Framework of the movement control for the lift-off phase. A) When the elbow joint (θ_elbow) rotates, the end-effector (fingertip) instantaneously moves in a direction orthogonal to the target vector (v_target). B) When the shoulder joint (θ_shoulder) rotates, the instantaneous movement of the end-effector resulting from the shoulder rotation (v_shoulder) is in the same direction as the target vector.
For a given target position (x_target) and the time-dependent end-effector position (x_ee), the target vector (v_target) is defined as follows (Figure 7.12):

$$\mathbf{v}_{target} = \mathbf{x}_{target} - \mathbf{x}_{ee} \tag{7.3}$$
The displacement of the end-effector resulting from an instantaneous rotation of the i-th joint can be described by the vector v_i, as defined in Eq. 7.4. It should be noted that the vector v_i is equivalent to the corresponding column of the Jacobian matrix.

$$\mathbf{v}_i = \frac{\partial \mathbf{x}_{ee}}{\partial \theta_i} \tag{7.4}$$
The projection length of the vector v_i onto the vector v_target (Eq. 7.3 and 7.4) defines the weighting factor (w_i) for the i-th joint, which can be obtained by the dot product of the two vectors:

$$w_i = \mathbf{v}_{target} \cdot \mathbf{v}_i \tag{7.5}$$
Since the projection length also varies with the length of the vector v_target, the projection lengths for all joints are pooled together and normalized into a unit-length vector (W):

$$\mathbf{W} = \frac{[w_1, w_2, \cdots, w_n]}{\left\| [w_1, w_2, \cdots, w_n] \right\|} \tag{7.6}$$
The instantaneous joint velocity vector for all joints ($\dot{\boldsymbol{\theta}}$) is calculated from the weighting vector scaled by the gain factor α:

$$\dot{\boldsymbol{\theta}} = \alpha \mathbf{W} \tag{7.7}$$
In the present study, a time-dependent Gaussian function representing a joint velocity impulse was used for α. The duration of an impulse was set to 250 ms, which corresponds to twice the mean duration of the initial acceleration phase of the measured hand and head movements. The peak height of an impulse was scaled to a magnitude corresponding to 1% of the length of the vector v_target (value estimated on a trial-and-error basis).
Figure 7.12. Movement generator model for the lift-off phase.
It was also assumed that each impulse is generated every 40 ms (the sampling interval of the movement measurement), and that the velocity profile of the predicted movements is built by the accumulation of successive impulses. The scheme of generating successive impulses until the error between the desired and current directions of the end-effector is reduced to a minimum was derived from a saccadic gaze control model, in which a hill of neural impulses moves along the topologically mapped error space in the superior colliculus of the midbrain (Guitton, Munoz, and Pelisson, 1993; Matsuo, Bergeron, and Guitton, 2004). In the present model, movement direction was specified by the vector from the predicted future position to the desired position of the end-effector. The predicted position of the end-effector was calculated from the current joint angles derived from proprioceptive feedback and the future joint angles derived from the feed-forward prediction (Eq. 7.8):

$$\hat{\mathbf{x}}_{ee}(t+\Delta) = f\!\left( \boldsymbol{\theta}(t) + \int_{t}^{t+\Delta} \hat{\dot{\boldsymbol{\theta}}}(\tau)\, d\tau \right) \tag{7.8}$$
where $\hat{\mathbf{x}}_{ee}(t+\Delta)$ is the estimated position of the end-effector at time t+Δ, f(·) is the forward kinematics function of the joint angles, $\boldsymbol{\theta}(t)$ is the vector of joint angles at time t, and $\hat{\dot{\boldsymbol{\theta}}}(\tau)$ is the estimated future joint velocity.
The combination of feedback and feed-forward information has been suggested by a number of movement control studies (Sabes, 2000; Todorov, 2004), indicating that
the CNS uses an optimal estimation of the system state based on the efference copy, proprioception, and visual information. Each time-dependent joint angle profile was calculated by the integration of joint velocities for each degree of freedom. The calculated joint angles were bounded within the valid range of motion (Eq. 7.9):

$$\boldsymbol{\theta}(t+\Delta) = \min\!\left( \max\!\left( \boldsymbol{\theta}(t) + \int_{t}^{t+\Delta} \dot{\boldsymbol{\theta}}(\tau)\, d\tau,\; \boldsymbol{\lambda}_{lowerbound} \right),\; \boldsymbol{\lambda}_{upperbound} \right) \tag{7.9}$$
where λ_lowerbound and λ_upperbound denote the predefined lower and upper bounds of θ, respectively. This range of motion constraint was applied to all movement phases. The performance of the proposed model for a simplified two-link system in the sagittal plane indicates that the model by itself can enable the end-effector to reach the target (Figure 7.13A). The proposed model generates the "elbow reversal" pattern typically observed in the lift-off phase (Figure 7.13B). However, one of the shortcomings of the model is that the "efficiency" of the controller degrades as the reach movement progresses. This is because the vectors associated with the end-effector movement (v_shoulder or v_elbow in Figure 7.11) eventually become nearly orthogonal to the target vector (v_target) before the end-effector arrives at the target, particularly when the target is far from the initial end-effector position. Hence, a different control mode should take over joint manipulation in order to achieve the movement goal efficiently.
Figure 7.13. Performance of the direction-based feed-forward movement control model for a two-link planar movement. A) Superimposed stick figures of the upper arm and forearm moving from the initial posture (90° elbow flexion) to reach the target denoted by T. B) Shoulder and elbow flexion/extension angles. It should be noted that the elbow angle shows a pattern of “reversal” approximately at time unit 15, and only negligible progression is made after time unit 40.
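To make the lift-off controller concrete, Eqs. 7.3–7.7 (with the joint-range clipping of Eq. 7.9) can be sketched for the two-link planar case of Figure 7.13. This is a minimal illustration, not the original Matlab implementation: the link lengths, joint bounds, target location, and the replacement of the Gaussian impulse train by a simple gain equal to 1% of the remaining target distance are all assumptions made here.

```python
import numpy as np

L1, L2 = 0.30, 0.35                                    # illustrative upper-arm/forearm lengths (m)
LB, UB = np.array([-0.5, 0.0]), np.array([3.0, 2.6])   # hypothetical joint range of motion (rad)

def fk(theta):
    """Fingertip position of a planar shoulder-elbow chain (shoulder at the origin)."""
    s, e = theta
    return np.array([L1 * np.cos(s) + L2 * np.cos(s + e),
                     L1 * np.sin(s) + L2 * np.sin(s + e)])

def jacobian(theta):
    """Columns are the vectors v_i of Eq. 7.4 (end-effector motion per unit joint rotation)."""
    s, e = theta
    return np.array([[-L1 * np.sin(s) - L2 * np.sin(s + e), -L2 * np.sin(s + e)],
                     [ L1 * np.cos(s) + L2 * np.cos(s + e),  L2 * np.cos(s + e)]])

def weighting_vector(theta, x_target):
    v_target = x_target - fk(theta)                    # Eq. 7.3
    w = jacobian(theta).T @ v_target                   # Eq. 7.5: w_i = v_target . v_i
    n = np.linalg.norm(w)
    return w if n < 1e-12 else w / n                   # Eq. 7.6: unit-length weighting vector

theta = np.array([0.2, np.pi / 2])                     # 90 deg elbow flexion, as in Figure 7.13
target = np.array([0.55, 0.25])
d0 = np.linalg.norm(target - fk(theta))
for _ in range(1000):
    alpha = 0.01 * np.linalg.norm(target - fk(theta))      # simplified gain, ~1% of |v_target|
    theta_dot = alpha * weighting_vector(theta, target)    # Eq. 7.7
    theta = np.clip(theta + theta_dot, LB, UB)             # Eq. 7.9 range-of-motion bound
```

Because the weighting vector is the normalized gradient of the end-effector error, the fingertip approaches the target, but progress slows as the Jacobian columns become nearly orthogonal to v_target, consistent with the controller degradation noted above.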
Transport Phase

The transport phase is the largest component of the reach movement. Previous studies have suggested that three-dimensional reach movements are not controlled in an allocentric reference frame (Flanders et al., 1992; Rosenbaum et al., 1995). Hence movement planning could be based on an egocentric reference frame (joint space) during the transport phase. The consistency of the HMCR, regardless of the variability in head movement kinematics, and the presence of corrective sub-movements presented in Chapters 2 and 3 suggest that posture is one of the goals of movement control defined in an egocentric reference frame. This perspective is also supported by other results showing that the final posture is invariant as long as the movement context is identical, even though the end-effector trajectory may vary (Desmurget & Prablanc, 1997). It is also assumed that visual feedback information may not be available during the transport phase, since gaze is aimed at the target while the end-effector and most upper extremity segments are out of the foveal field of view. The CNS is thus likely to rely on an egocentric reference frame derived from proprioceptive rather than visual information (Chapter 4). It is therefore proposed that a posture-based feed-forward control mechanism could be used during the transport phase.
In the present study, the model-predicted desired posture was achieved using an interpolation method. An Euler angle-based interpolation method was introduced by Rosenbaum et al. (2001); however, a quaternion-based method was used in the present study in order to avoid the problems associated with non-commutative sequential rotations (Mukundan, 2002). For the manual subsystem (Figure 7.3), twelve Euler angles representing the time-dependent link segment configuration were transformed into five sets of quaternions, each of which represented the orientation of the torso, clavicle, upper arm, forearm and hand, respectively. The spherical linear interpolation equation (Eq. 7.10) was evaluated using the quaternions describing the initial and final postures to generate a time-dependent change of the link orientation. Specifically, the interpolated quaternion s at time proportion ν ∈ [0, 1] can be calculated as follows:

$$\mathbf{s}(\nu; \mathbf{p}, \mathbf{q}) = \frac{\sin((1-\nu)\theta)\,\mathbf{p} + \sin(\nu\theta)\,\mathbf{q}}{\sin(\theta)} \tag{7.10}$$
where p and q denote the quaternions describing the initial and final postures, respectively, and θ represents the angular distance between quaternions p and q. The spherical linear interpolation method can be understood as the generation of an intermediate sequence of rotations along the great circle connecting two points on a sphere. Since this method is equivalent to a linear interpolation in time, the time interval between samples was scaled using a sigmoid function in order to obtain a bell-shaped velocity profile, which is characteristic of time-optimal human motion in a feed-forward control mode (Morasso, 1981). In the actual implementation, the current and future joint configurations were continuously estimated and used to update the quaternion representing the initial posture (p) throughout the transport phase (Eq. 7.11):

$$\mathbf{p}(t) = g\!\left( \boldsymbol{\theta}(t) + \int_{t}^{t+\Delta} \hat{\dot{\boldsymbol{\theta}}}(\tau)\, d\tau \right) \tag{7.11}$$
where g(·) is the transformation function from Euler angles into a quaternion, $\boldsymbol{\theta}(t)$ is the vector of joint angles at time t, and $\hat{\dot{\boldsymbol{\theta}}}(\tau)$ is the estimated joint velocity.
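A minimal numerical sketch of Eq. 7.10, together with a sigmoid time-warp standing in for the velocity-profile scaling described above, might read as follows; the steepness value and the linear fallback for nearly identical quaternions are implementation assumptions:

```python
import numpy as np

def slerp(p, q, nu):
    """Spherical linear interpolation between unit quaternions p and q (Eq. 7.10)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    dot = np.clip(np.dot(p, q), -1.0, 1.0)
    if dot < 0.0:                      # take the shorter arc on the unit sphere
        q, dot = -q, -dot
    theta = np.arccos(dot)             # angular distance between p and q
    if theta < 1e-8:                   # nearly identical postures: plain linear interpolation
        s = (1.0 - nu) * p + nu * q
    else:
        s = (np.sin((1.0 - nu) * theta) * p + np.sin(nu * theta) * q) / np.sin(theta)
    return s / np.linalg.norm(s)

def sigmoid_time(t, steepness=10.0):
    """Warp uniform time t in [0, 1] so that sampling nu = sigmoid_time(t) yields a
    bell-shaped angular velocity profile; rescaled so that nu(0) = 0 and nu(1) = 1."""
    raw = 1.0 / (1.0 + np.exp(-steepness * (t - 0.5)))
    lo = 1.0 / (1.0 + np.exp(steepness * 0.5))
    hi = 1.0 / (1.0 + np.exp(-steepness * 0.5))
    return (raw - lo) / (hi - lo)
```

For example, interpolating from the identity quaternion to a 90° rotation about the vertical axis at ν = 0.5 yields the 45° rotation halfway along the great circle.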
The desired posture (q) was obtained from the data recorded in the experiment. It was assumed that the CNS does not store all possible postures, and relies on a finite repertoire (Massion, 1992). The discrepancies between the end-effector and target positions at the end of the transport phase must therefore be reduced during the subsequent landing phase. In order to simulate the effect of the limited number of postures available in the repertoire, the maximum proportion of the final posture to be achieved through the interpolation (ν in Eq. 7.10) was randomly reduced to a value between 0.9 and 1.0 (0.0: the interpolated posture is the same as the initial posture; 1.0: the interpolated posture is the same as the final posture).

Landing Phase

Since the landing phase takes place when the fingertip is near the target location and gaze is simultaneously aimed at the target, it can be assumed that visual feedback information is available to guide fingertip movements. It should be noted that control of the landing phase might be based on an allocentric reference frame, while the preceding phases are based on an egocentric reference frame. This assumption is supported by previous studies indicating that foveal vision is used for the end-point (position) control that takes place in the final phase of movements (Paillard, 1996), and that visual information about the target and the hand can provide an allocentric (task space) reference frame (Sober & Sabes, 2003). Hence the "desired trajectory" can be virtually drawn in the allocentric task space to guide the end-effector to the target. This was confirmed by the results of the present experiment, which showed that the direction of the movement in the landing phase closely follows the line of sight. The proposed mode of control for the landing phase consists of a visual feedback-based inverse-kinematics control (Dysart & Woldstad, 1996; Zhang, Kuo, and Chaffin, 1999; Wang, 1999).
While feed-forward movement control estimates the end-effector position from a forward kinematics function of the joint angles (lift-off and transport phases), visual feedback control takes the actual end-effector positions in a global coordinate frame into account to enhance accuracy (landing phase). The future position of the end-effector should still be estimated from the feed-forward information of the joint angles and the internal model of the hand movement (Eq. 7.12).
$$\hat{\mathbf{x}}_{ee}(t+\Delta) = \mathbf{x}_{ee}(t) + f\!\left( \int_{t}^{t+\Delta} \hat{\dot{\boldsymbol{\theta}}}(\tau)\, d\tau \right) \tag{7.12}$$
The present differential inverse kinematics controller was based on the vectorial error between the end-effector position and the target position (Eq. 7.13):

$$\dot{\boldsymbol{\theta}} = \mathbf{J}^{T} (\mathbf{J}\mathbf{J}^{T})^{-1} (\dot{\mathbf{x}}_{target} - \hat{\dot{\mathbf{x}}}_{ee}) \tag{7.13}$$
where J is the position Jacobian matrix:

$$\mathbf{J} = \frac{\partial \mathbf{x}_{ee}}{\partial \boldsymbol{\theta}}$$
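As an illustration of the resolved-rate form of Eq. 7.13, the right pseudoinverse J^T(JJ^T)^{-1} can be applied to a redundant planar three-link chain. The chain geometry, the step gain, and the use of the position error as the desired end-effector velocity (in place of the velocity difference of Eq. 7.13, and without the feed-forward estimate of Eq. 7.12) are simplifying assumptions of this sketch:

```python
import numpy as np

L = np.array([0.30, 0.30, 0.30])                 # illustrative link lengths (m)

def fk(theta):
    """End-effector position of a planar three-link serial chain."""
    phi = np.cumsum(theta)                       # absolute link orientations
    return np.array([np.sum(L * np.cos(phi)), np.sum(L * np.sin(phi))])

def jacobian(theta):
    """Position Jacobian J = d x_ee / d theta (2 x 3, hence redundant)."""
    phi = np.cumsum(theta)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(phi[i:]))
        J[1, i] =  np.sum(L[i:] * np.cos(phi[i:]))
    return J

def landing_step(theta, x_target, dt=0.1):
    """One differential inverse-kinematics step: theta_dot = J^T (J J^T)^-1 * error."""
    J = jacobian(theta)
    err = x_target - fk(theta)                   # visually sensed end-effector error
    theta_dot = J.T @ np.linalg.inv(J @ J.T) @ err
    return theta + dt * theta_dot

theta = np.array([0.3, 0.4, 0.5])
target = np.array([0.5, 0.4])
for _ in range(300):                             # iterate until the error is negligible
    theta = landing_step(theta, target)
```

Among all joint velocity vectors achieving the commanded end-effector velocity, the right pseudoinverse selects the minimum-norm one, which matches the minimum joint velocity cost function of the differential inverse kinematics algorithm discussed in Chapter 6.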
For the control of the visual subsystem, where the controlled variable is the eye gaze direction, an orientation Jacobian matrix (Eq. 7.14) and angular velocity vectors were used as control inputs:

$$\mathbf{J}_{\phi} = \frac{\partial \boldsymbol{\phi}_{ee}}{\partial \boldsymbol{\theta}} \tag{7.14}$$

where φ_ee is the orientation vector of the end-effector (eye).

7.4.2 Movement Component Outcome Evaluation and Phase Transition Decision
Even though certain types of feedback information about the system state are always available, the outcome of the current action does not have to be evaluated continuously throughout the entire course of a movement, particularly when visual feedback is taken into account, since the processing delay can be as long as 100−250 ms (Bizzi, Kalil, Morasso and Tagliasco, 1972; Robinson, Gordon, and Gordon, 1986; Goossens and Van Opstal, 1997). In the present study, it was assumed that the expected outcome of a movement in terms of goal subcomponents is sampled and evaluated only at specific time intervals. The experimental observations described above indicate that the initiation of the hand movement coincides with the peak velocity of the head movement, and torso movement initiation coincides with the peak velocity of the hand movement (Figure 7.9). It is likely, then, that in movement phases that rely on a feed-forward control mode (lift-off and transport phases), the predicted outcome of the ongoing movement component is evaluated against the phase-specific goal when the limb of primary concern reaches a peak velocity. A decision is then made based on pre-specified rules related to the goal of the corresponding component, and an appropriate type of subsequent component is
selected and initiated. For example, if the outcome evaluation at the time of head peak velocity indicates that gaze is nearing the direction of the target, then the hand movement is initiated. Also, if the target cannot be reached with the extended arm, torso movements may be initiated to extend the reach distance. For feedback-controlled components (landing phases), it was assumed that the outcome is evaluated at predetermined intervals, which are largely determined by sensory delays. A model of outcome evaluation and phase transition decision for the transition between the transport phase and the landing phase is illustrated in Figure 7.14. The generated movement components are monitored using feedback information and feed-forward estimation. The movement generator pre-schedules the time at which an outcome evaluation will be performed, which may correspond to the time of peak velocity of the link considered to be the primary mover. The system state is then estimated, and decision rules are applied to determine whether the goal of the transport phase can be achieved. The potential decision rules include the following: 1) both the hand and the target should be within the field of view; and 2) the hand is close enough to the target, as suggested by the experimental data (Figure 7.8). If all of the decision rules are satisfied, the landing phase is initiated and a disturbance and error compensation control mode is activated. Alternatively, if any of the decision rules are left unsatisfied, an additional transport component is initiated and the sequence is repeated until all decision rules are satisfied.
Figure 7.14. Model of movement evaluation and decision for a phase transition from the transport phase to the landing phase.
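The two decision rules above can be expressed as a simple predicate. The 30° field-of-view cone and the 15 cm proximity threshold used here are hypothetical values for illustration only (positions are expressed relative to the eye):

```python
import numpy as np

def transport_to_landing(gaze_dir, x_hand, x_target, fov_deg=30.0, near=0.15):
    """Phase transition decision: True if the landing phase may be initiated.
    Rule 1: both hand and target lie within the field-of-view cone around gaze.
    Rule 2: the hand is close enough to the target."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)

    def in_fov(x):
        d = x / np.linalg.norm(x)
        angle = np.degrees(np.arccos(np.clip(np.dot(gaze_dir, d), -1.0, 1.0)))
        return angle < fov_deg / 2.0

    close_enough = np.linalg.norm(x_hand - x_target) < near
    return bool(in_fov(x_hand) and in_fov(x_target) and close_enough)
```

If the predicate is false, the sequencer would issue an additional transport component and re-evaluate at the next scheduled time, as described above.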
7.4.3 Sequencing and Coordination
As in Chapter 6, it was assumed that multiple subsystems controlling different body segments participate in coordinated movements. In seated reach activities, at least the visual and manual subsystems are involved. The visual subsystem controls the torso, neck, head, and eyes to displace the gaze in the direction of the object of interest and guide hand movements. The manual subsystem controls the torso, clavicle, upper arm, forearm, and hand, and contributes to moving the hand to the target. As assumed in Chapter 6, the torso link is shared by both the visual and manual subsystems; hence the goals of all subsystems are simultaneously pursued by manipulation of the redundant degrees of freedom to generate internal movements. In the present chapter, however, since reach movements are composed of multiple phases and subsystems, it was assumed that the availability of the common link and the necessity of its contribution to the goal are checked before a new movement phase is initiated. If the common link is available for use, the new movement phase is allowed to take control of it; if not, the corresponding movement phase does not use the common link. The model of sequencing and coordination is based on node connections. Each node represents a movement controller with a specific mode within the subsystem. The pattern of connections (template), which specifies the sequence of movement components and phases, is determined by the task requirements, constraints, and movement evaluations. A connection can be either unconditional or conditional. An unconditional connection activates the subsequent node when the execution of the current node is completed, without movement outcome evaluation, while a conditional connection is based on the evaluation of the movement outcome. Depending on whether or not the decision criteria are met, different connection pathways are taken and different nodes are subsequently executed.
A connection can be either excitatory or inhibitory; an excitatory connection initiates the action of the subsequent node, while an inhibitory connection is used to terminate the ongoing process of the connected node. In principle, each connection can be either deterministic or probabilistic, and the subsequent node can therefore be activated as a probability function of the connection input. A framework of sequencing and coordination by interconnected nodes for unconstrained reach movements is illustrated in Figure 7.15. The upper nodes are for the
visual subsystem, while the lower nodes are for the manual subsystem. When the movement sequence is initiated at the Start node, both the visual feed-forward direction-based control (VFFD) node and the Wait node are activated. Since the Wait node is an infinite loop executed until an external termination signal is provided, activation of the MFFD (manual feed-forward, direction-based) node is delayed until the VFFD movement is completed (= gaze on target). Both the VFFP and MFFP nodes (visual/manual feed-forward control, posture-based) have conditional connections to themselves; hence the corresponding nodes are invoked iteratively until the respective control goals are satisfied. The VFB and MFB nodes (visual and manual feedback control) are conditionally activated thereafter, and the completion of the MFB component (= the fingertip on the target) terminates the VFB movement that holds the position of the head and gaze near or at the target while compensating for disturbances.
Figure 7.15. Model of sequencing for unconstrained reach movements. VFFD/MFFD: visual/manual feed-forward control, direction-based; VFFP/MFFP: visual/manual feed-forward control, posture-based; VFB/MFB: visual/manual feedback control; Wait: executes an infinite loop until stopped externally.
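The conditional self-connections of Figure 7.15 can be sketched as a small node interpreter. This toy version serializes the visual and manual subsystems and omits the Wait node, inhibitory links, and probabilistic connections; the state variables and decrement sizes are arbitrary stand-ins for controller progress:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Node:
    name: str
    action: Callable[[Dict[str, int]], None]   # one control step of this node
    done: Callable[[Dict[str, int]], bool]     # conditional self-connection: repeat until True

def run(template: List[Node], state: Dict[str, int]) -> List[str]:
    """Execute the node template in order; each node re-activates itself until its goal is met."""
    log = []
    for node in template:
        while True:
            node.action(state)
            log.append(node.name)
            if node.done(state):
                break
    return log

def aim_gaze(s):  s["gaze_err"] -= 1           # stand-in for one VFFP control iteration
def move_hand(s): s["hand_err"] -= 2           # stand-in for one MFFP control iteration

template = [Node("VFFP", aim_gaze, lambda s: s["gaze_err"] <= 0),
            Node("MFFP", move_hand, lambda s: s["hand_err"] <= 0)]
log = run(template, {"gaze_err": 3, "hand_err": 4})
```

Here VFFP fires three times before its goal is satisfied, after which MFFP fires twice, mirroring the iterative invocation of the posture-based nodes described above.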
7.4.4 Model Implementation and Simulation
The proposed models were implemented in Matlab™ and simulated reach movements following the experimental task requirements. The link segment lengths were derived from the recorded movement data. The computation time for the simulation of a reach movement was, on average, less than 0.3 seconds on a Pentium™-class computer with a 2.4 GHz clock speed.
7.5 Simulation Results

7.5.1 Unconstrained Reach Movements (Gaze Remains on Target)
The simulated results of the model proposed in Figure 7.15 are illustrated in Figure 7.16 using a sequence of Jack™ figures. The target location was set to 29.0 cm right of, 57.2 cm forward of, and 69.0 cm above the hip-point of the subject. The model generates movements in agreement with the measured movements in terms of overall appearance (Figure 7.16), end-effector trajectories (Figure 7.17), and joint angle variations (Figure 7.18). In addition, the characteristics of the lift-off and landing phases, i.e., the elbow reversal immediately after movement onset and the slow-speed, inflected hand trajectory near the target, are also present in the simulation.
Figure 7.16. Simulation of unconstrained reach movements using Jack™ digital human modeling software. Upper panels (A−E): Predicted movements. Lower panels (F−J): Measured movements.
Figure 7.17. Trajectories of predicted and measured movements in a perspective (A) and top view (B). The dark lines represent the stick figure of the multi-link system (posture). The bright lines represent the cone of the field of view surrounding the gaze direction (cross: gaze center). The broken lines indicate the head-aiming vector.
The angle time profiles also indicate that the general patterns and magnitudes are similar between the predicted and measured joint angles (Figure 7.18), except for the clavicle horizontal rotation angles.
Figure 7.18. Angle time profiles for predicted (A: Left panels) and measured (B: right panels) movements. The definition of each joint angle, which is represented by the number assigned for each angle profile, is illustrated in Figure 7.3. TRS: torso, CLV: clavicle, SHL: shoulder, ELB: elbow, WRT: wrist, NCK: neck, HED: head.
7.5.2 Reach Movements with Constrained Gaze Return
In a condition with gaze redirection constraints, gaze was constrained to return as soon as possible to the initial fixation point during the reach movement (Figure 7.19 A & C). However, in a condition without gaze redirection constraint, gaze was required to remain at the target even after completion of the reach movement (Figure 7.19 B & D). With gaze redirection constraints, some subjects started gaze redirection only when the hand reached close to the target, while other subjects started redirection early. In the former (Figure 7.19B: late gaze redirection strategy) and latter cases (Figure 7.19D: early
gaze redirection strategy), the head started redirection when the hand reached approximately 90% and 70% of the normalized hand position displacement, respectively.
Figure 7.19. Hand-head coordination by gaze redirection constraint conditions. Solid lines: angular distance between the initial and current head-aiming vector (% max). Dotted lines: linear distance between the initial and current hand position (% max). Left and right panels: conditions with or without gaze redirection, respectively. Upper panels: late gaze redirection strategy (subject 6 data). Lower panels: early gaze redirection strategy (subject 7 data). Arrows denote the time when the head started redirection.
These gaze redirection constraints were simulated by placing an additional node, VFFP2 (returns the gaze and head to the initial fixation point), in the existing framework
(Figure 7.20A). It may also be necessary to include a VFFD2 node before the VFFP2 node in order to make the returning sequence consistent with that of the preceding movement phases toward the target (VFFD to VFFP). However, previous studies have indicated that the head can return directly to the neutral posture, and the HMCR described in Chapter 3 may not apply in such a case (Fuller, 1992). Hence it was assumed that the kinematics of the head movement in the returning phase is composed of posture-based control only. When gaze redirection occurs at or after the end of the landing phase, the VFFP2 node may be set to receive a connection from the VFB node, so that the gaze and head stay at or near the target to guide the landing phase of hand movements (late gaze redirection strategy). In contrast, the MFB node can be skipped if a different strategy is selected, so that gaze is redirected early to the initial fixation point (early gaze redirection strategy). In this case, the VFB node is inhibited at the completion of MFFP, and VFFP2 is activated immediately (Figure 7.20B). Since the time-consuming visual feedback phase of the hand movement is eliminated, the gaze and head can return to the initial fixation point earlier. However, no visual guidance can be provided for the final phase of the hand movement, as gaze leaves the target site before completion of the hand movement. The simulated movements for each strategy (late versus early gaze redirection) are contrasted in Figure 7.21.
Figure 7.20. Reach movements with gaze constrained to redirect to the initial fixation point. A) Strategy for a late gaze redirection, in which an additional node VFFP2 (returning the gaze and head to the initial fixation) is activated only when the MFB activity is completed. B) Strategy for an early gaze redirection, in which the MFB node is skipped and the VFFP2 node is activated directly by the preceding MFFP node.
Figure 7.21. Simulation of gaze-constrained reach movements. Upper panels (A−E): Predicted movement of a late gaze redirection strategy. Lower panels (F−J): Predicted movement of an early gaze redirection strategy. The head returns to the neutral position before the completion of the hand reach movements when the early gaze redirection strategy is selected (panel I versus D).
From the recorded data, it was observed that five of the ten subjects employed the late gaze redirection strategy described in Figure 7.20A, while the other five subjects used the early gaze redirection strategy represented in Figure 7.20B, even though no specific instructions regarding time constraints for the returning gaze and head were provided. Obviously, one of the shortcomings of early gaze redirection is a reduction in pointing accuracy. Figure 7.22 illustrates the end-point dispersion of the fingertip for each strategy. For subject 3 (Figure 7.22A & B), who used the late gaze redirection strategy represented in Figure 7.20A, the fingertip dispersion was not significantly different between the two gaze redirection conditions (F = 0.72; p = 0.80). However, for subject 2 (Figure 7.22C & D), who used the early gaze redirection strategy depicted in Figure 7.20B, the fingertip dispersion significantly increased when gaze redirection occurred early (F = 4.46; p < 0.05).
Figure 7.22. Dispersion of fingertip pointing positions. Left panels (A & C): conditions without gaze redirection constraints. Right panels (B & D): conditions with gaze redirection constraints. Panel B: late gaze redirection strategy. Panel D: early gaze redirection strategy. The circle indicates the mean dispersion size.
7.6 Discussion

7.6.1 Validity of the Proposed Model
The proposed model is composed of two major parts: the movement controllers and the interconnected nodes representing the activities of the supervisory system (sequencer/coordinator). A different type of movement control mode can be employed depending on the movement phase and task requirements. In the present study, it was assumed that at least three different types of movement control modes can be used: feed-forward direction-based control, feed-forward posture-based control, and feedback inverse kinematics control. These control modes are specifically in charge
of the lift-off, transport and landing phases, respectively. The transition between the phases is contingent upon the evaluation of movement component outcomes. In the proposed model, as derived from the recorded data, the predicted movements do not necessarily show synchronized movement onsets across all joints; on the contrary, one joint quite frequently starts to move earlier than another. This behavior is consistent with the results showing that the head movement onset always precedes the hand movement onset, that the hand movement onset frequently precedes the torso movement onset, and so on. While the early initiation of the head and gaze with respect to the hand has been generally confirmed (Carnahan & Marteniuk, 1991; Fuller, 1992; Helsen et al., 2000; Herst, Epelboim and Steinman, 2001; Flanagan & Johansson, 2003), other studies have reported that the torso and all upper extremity joints start to move simultaneously in reach/grasp/pointing movements (Hoff & Arbib, 1993; Desmurget & Prablanc, 1997; Sailer et al., 2005), or that all joints reach peak velocity at the same time (Sailer et al., 2005). A major difference in the present study is the relaxation of task constraints. No specific constraint was imposed on movement time or speed. In addition, the tasks required multi-joint movements in a three-dimensional space, as opposed to single- or two-joint planar movements. Specifically, when constraints are imposed in terms of time, accuracy and/or workspace dimensions, it appears that the movement initiation of the participating joints tends to be synchronized, while unconstrained movements in a three-dimensional workspace do not show this pattern. Furthermore, since the targets were often presented at random locations beyond the instantaneous visual field and only two replications were executed per target position, it can be assumed that learning and practice played only a minimal role in the reach movements of the present experiments.
It was assumed that movements in the lift-off and transport phases are performed without visual feedback. The analysis of lift-off movements indicates that although the lift-off azimuth is correlated with the target azimuth, neither the elevation nor the distance is correlated with actual target position. In addition, the amplitude of a lift-off component is reduced or even suppressed when the targets are presented near the midsagittal plane. Therefore, the control of the lift-off phase does not need to encode movements in an allocentric reference frame. For the transport phase, the final posture the controller attempts to achieve is described by the set of joint angles, which are
encoded in a joint space rather than a task space. The landing phase takes advantage of visual information about the target position and the surrounding space, and the inverse kinematics controller proposed in the present model requires the desired end-effector trajectory to be specified in an allocentric reference frame. Hence, the landing movements may be controlled in an allocentric reference frame, which confirms the result of an earlier study (Conti & Beaubaton, 1980). Both visual and proprioceptive feedback are available, regardless of the specific mode of movement control. However, depending on the accuracy and reliability associated with a specific feedback mode, one type of feedback information may be more heavily used than the other (Todorov, 2004; Sober & Sabes, 2003). For example, visual information is more important for task space encoding, while proprioceptive information is more important for joint space encoding (Sober & Sabes, 2003). Nevertheless, the control system can take advantage of feed-forward-based predictions in order to improve the accuracy of state estimation (Sabes, 2000). In the present model, the feed-forward control modes (VFFD/MFFD/VFFP/MFFP) use prediction to continuously estimate the current system state and issue the motor commands, while feedback information is taken into account for movement outcome evaluation only at a predetermined time. In contrast, the feedback control modes (VFB/MFB) sample sensory information at predetermined intervals. However, feed-forward information is taken into account in the feedback control modes to adjust the motor commands and prevent overcorrection problems. Therefore, the FF nodes do not exclude the alternative source of information (feedback), since feedback information is still used on an intermittent basis to update movement control, at least before the movement onset (calibration) and at the peak velocity (evaluation). A similar argument can be made for the FB nodes.

7.6.2 Organization of Movements
The overall framework of the present model can be considered as a hierarchy of control (Marder & Abbott, 1995; Badler et al., 1996), in which the lower layer is composed of movement controller components and the upper layer concerns the organization of the components. The upper layer is also assumed to be a central controller that supervises the global aspects of movement, and the controllers in the lower layer are
assumed to be able to function independently of the upper layers, and to have the autonomy to generate movement components according to their specific functioning principles. Each controller in the lower layer attempts to satisfy the individual phase-specific goals defined by the upper layer controllers, and the outcome is conveyed to the upper layer for evaluation and decision-making for subsequent actions. Traditionally, movement prediction models have relied on optimization methods (Flash & Hogan, 1985; Kawato, 1992; Zhang et al., 1999; Torres & Zipser, 2002; Park, Chaffin & Martin, 2003). However, in the present model, and within the context of unconstrained three-dimensional movements, it is assumed that the overall organization of movements (upper level control) does not necessarily rely on optimization. The organization of movement components often exhibits random responses to the environment and unoptimized decision-making. The coordination of multiple movement phases and subsystems may therefore vary, depending not only on the task requirements but also on individual preferences and other random factors, as evidenced by the various distributions of head-hand coordination patterns (Figure 7.10). In addition, one half of the subject group showed an early gaze redirection coordination strategy (Figure 7.22), which was modeled by eliminating the MFB node, while the other half did not show such a pattern. Nevertheless, the individual movement controllers in the lower layer may still rely on optimization. For example, the posture-based feed-forward controller can generate time-optimal movements, and the differential inverse kinematics algorithm in the MFB/VFB nodes is based on a cost function of minimum joint velocity, which satisfies the pre-specified end-effector trajectory requirements (Chapter 6). Hence, "unoptimized" movement patterns are more apparent in the coordination than in the individual movement components.
This hypothesis is supported by the randomness and variability observed in the patterns of movement coordination. For example, the hand precedence index describing hand-head coordination varies not only across subjects, but also within a single subject, exhibiting unique probability distributions. In addition, as illustrated in Chapter 2, head movement kinematics tends to show a time-optimal velocity profile when a visual target is presented at a small eccentricity, where only the initial movement component is required without subsequent corrections. However, the
supervisory organization could gain optimality through task repetition, practice, and learning. The level of familiarity with the tasks and the imposed constraints would certainly influence the coordination of movements and increase the level of optimization. Such characteristics are not present in our results, since the movements were largely unconstrained and neither practiced nor learned.
7.6.3 Model Enhancements
Each movement control mode/phase in the present model can also be treated as a separate module or movement controller. Hence, it is possible to replace modules depending on the task requirements. Likewise, a new subsystem may be incorporated in addition to the visual and manual subsystems considered in the present study. For example, balance control may be implemented as a separate subsystem with a dedicated end-effector (center of pressure). For the present model, the recorded movements provide only the initial and final posture angles and anthropometry specifications. Although actual posture angles sampled from recorded movements could enhance model prediction accuracy, it is also possible to substitute static inverse kinematics models and anthropometric models for the experimental data. As described above, randomness is observed in the coordination rather than in the activities of the individual movement components, suggesting that the organization of movement components may be further manipulated to exhibit this random behavior. More specifically, by implementing a probability function in the connections between nodes, different coordination patterns can be generated for each model simulation. From the same perspective, the present model could benefit from a neural network that adjusts the threshold of node activation with learning. The effect of learning on coordination could then also be simulated, making it possible to test the properties of coordination and the robustness of the model.
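Such a probabilistic node connection can be sketched as follows. The node names, their sequence, and the 0.5 inclusion probability are illustrative assumptions; omitting the MFB node stands in for the late gaze redirection strategy shown by half of the subjects.

```python
import random

def sample_coordination_pattern(rng, p_mfb=0.5):
    """Sample one coordination pattern for a simulated trial.

    A probability function on the node connection decides whether the
    manual feedback (MFB) node is wired into the sequence; omitting it
    mimics the late gaze redirection strategy. Node names and p_mfb
    are illustrative assumptions, not fitted values.
    """
    nodes = ["gaze_shift", "lift_off", "VFB"]
    if rng.random() < p_mfb:
        nodes.insert(2, "MFB")
    return nodes

rng = random.Random(7)
trials = [sample_coordination_pattern(rng) for _ in range(1000)]
share_with_mfb = sum("MFB" in t for t in trials) / len(trials)
```

Each simulation run thus produces a different coordination pattern, reproducing the trial-to-trial variability described above without any change to the individual movement controllers.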
CHAPTER 8
SUMMARY AND CONCLUSION
8.1 Principal Contributions
8.1.1 Head Movement for Gaze Displacement Function
Investigation of the kinematic data indicated that gaze-driven head movements are composed of a truncated feed-forward fast component followed by multiple feedback corrections. The reconstructed velocity profile of the initial component shows that the intended amplitude of this truncated component increases with target eccentricity and quickly reaches an asymptotic value of approximately 20°. This feed-forward transition movement, most likely based on a cognitive mapping of the space in the absence of knowledge of the exact target location, seems to be generated to bring the head to a location that would allow the eye to reach any expected target. When foveal acquisition of the target occurs, the target location can be estimated, and corrective movements based on proprioceptive feedback are initiated to place the head in a specific location. If gaze alignment were the only goal, these corrections would be unnecessary, since gaze is already aligned with the target direction. Consequently, the results support the hypothesis that achieving the final posture is one of the goals of head movement control, in agreement with the perspectives brought by earlier studies (Crossman & Goodeve, 1983; Meyer et al., 1988). The presence of corrective head movements may also interfere with or delay the subsequent reach task. This perspective is supported by the investigation of movement coordination presented in Chapter 7, which showed that limb movements follow a timing sequence to avoid interference.
Although the number of corrective movements increases with target eccentricity, the overall kinematics of the corrective movements shows large variability. This observation implies that the final head posture, which is proportional to target eccentricity, can be achieved through a number of kinematic variations that are loosely programmed using an on-the-fly strategy.
8.1.2 Model of Head Orientation as a Postural Response to Visual Targets
Since the head is moved into a specific orientation, the resulting joint angle configuration can be considered a postural response associated with a visual target, and movements can be viewed as transitions to a goal posture, as opposed to the notion that posture is simply the end outcome of movement. This hypothesis justifies modeling head orientation as a function of target position separately from movement kinematics. Nonlinear regression models revealed that, in general, the horizontal and vertical head movement contribution ratios (HMCR) are approximately 68% and 43% of target azimuth and elevation, respectively. In addition, the HMCR is characterized by a threshold/dead zone for head movements (±3° from the mid-sagittal plane). The HMCR seems to be influenced by the interaction of three-dimensional neck and head joint mobility. Hence, the overall results indicate that head posture is a function of task requirement, context, and biomechanical constraints. The threshold zone of head movement, which is far smaller than the eye range of motion, indicates that the purpose of head movement is not to overcome a limitation of the eye range of motion. Furthermore, in previous studies the relative joint contributions to whole- or upper-body posture have been explained by an attempt to minimize a biomechanical cost, such as torque (Runge et al., 1999), effort (Dysart & Woldstad, 1996; Khatib et al., 2004), fatigue (Sparto et al., 1997; Gribble & Hertel, 2004) or energy expenditure (Bianchi, 1998). However, these biomechanical processes would not be an ideal underlying mechanism for determining head and eye posture, since moving the head, which is almost 100 times heavier than the eyes, would not be an economical solution at all.
Thus it is suggested that the rationale for moving the head even at a high physiological and kinetic “cost” is related to a neural rather than a biomechanical issue, one associated with the accuracy of movement in the selected reference frame.
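The regression summary above (≈68% horizontal slope, ±3° dead zone) can be sketched as a piecewise-linear function. Whether the slope applies to the full azimuth or only to its portion beyond the dead zone is an assumption here; the exact model form is given in Chapter 3.

```python
def hmcr_head_azimuth(target_az_deg, slope=0.68, dead_zone_deg=3.0):
    """Piecewise-linear sketch of the horizontal HMCR.

    Targets within +/-3 deg of the mid-sagittal plane evoke no head
    movement; beyond the dead zone the head rotation grows at ~68% of
    target azimuth. Applying the slope to the excess beyond the dead
    zone (rather than to the full azimuth) is an assumption.
    """
    if abs(target_az_deg) <= dead_zone_deg:
        return 0.0
    sign = 1.0 if target_az_deg > 0 else -1.0
    return slope * (target_az_deg - sign * dead_zone_deg)
```

For example, a target at 2° azimuth evokes no head rotation, while a 43° target yields a head rotation of about 27°, the eye covering the remainder of the gaze shift.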
8.1.3 Optimality in Head Posture
Chapter 2 indicated that the goal of head movement control might be to achieve a predetermined posture, and Chapter 3 suggested that head posture is determined by both neural and mechanical factors. Hence, it can be assumed that a head posture is an outcome of optimization for the given task requirements and system constraints. Chapter 4 addressed the question of how head posture is optimized, based on the assumption that the cost function is related to the sensorimotor error of the eye and head system. The proposed cost function was composed of the unweighted sum of errors originating from the head/neck and eye positions, represented by the head-in-the-torso and eye-in-the-head angles, respectively. The results support the hypothesis that the postural response of the head may reflect an optimal solution for redundant degrees of freedom in terms of the accuracy of the task space representation in an egocentric reference frame. The accuracy of an egocentric reference frame is important if subsequent visuomanual tasks are to be executed after the target is located, since the egocentrically represented body position and task space must be transformed into a corresponding joint reference frame. A minimal visuo-spatial representation error within the initial reference frame is therefore crucial, since the transformation would not only transfer but also exacerbate the error, further deteriorating movement accuracy. The accuracy of the target localization task is accordingly the most important criterion in controlling head and gaze movements. This issue is also important from both application and theoretical perspectives, since visuo-spatial calibration, which is determined by head orientation, could critically influence the performance of subsequent hand movements generated to interact with the localized target.
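The unweighted-sum cost function can be sketched as a one-dimensional minimization over the head/eye split of a given gaze angle. The quadratic error terms below are placeholders; the dissertation's actual sensorimotor error functions are not reproduced here.

```python
import numpy as np

def optimal_head_angle(target_az_deg, head_err, eye_err):
    """Grid-minimize cost = head error + eye error (unweighted sum).

    The gaze constraint head + eye = target azimuth leaves one
    redundant degree of freedom, resolved here by the minimum of the
    summed error terms.
    """
    head = np.linspace(0.0, target_az_deg, 2001)
    eye = target_az_deg - head
    cost = head_err(head) + eye_err(eye)    # unweighted sum, as in the text
    return head[np.argmin(cost)]

# Placeholder quadratic error models (assumed, not fitted):
h = optimal_head_angle(40.0,
                       head_err=lambda a: a**2,
                       eye_err=lambda a: 2.0 * a**2)
```

With the eye error term growing twice as fast as the head term, the minimum places about two-thirds of the gaze shift on the head, of the order of the measured ~68% horizontal HMCR; equal terms would split the gaze evenly, so the measured ratio implies asymmetric error growth.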
This optimization differs from previously proposed schemes for modeling hand reach movements, whose cost functions included minimum jerk (Flash & Hogan, 1985), torque (Uno et al., 1989), and effort (Hasan, 1986), even though an extreme joint posture is normally associated with both decreased end-effector position accuracy (Rossetti et al., 1994) and increased discomfort (Kee & Karwowski, 2001). In addition, this perspective differs from the minimum variance model
(Wolpert et al., 1995) and the optimal control model (Todorov et al., 2004) in that optimality is associated with posture rather than with the variability of end-effector positions. It should be noted that the desired posture is achieved through the entire process of the initial component and subsequent corrections. This strategy may not appear to be an efficient way to achieve the movement goal, since the CNS could still pursue the desired posture from movement onset by setting the desired limb segment angles directly, as suggested by the equilibrium point hypothesis (Feldman, 1966; Latash, 1993). However, the CNS does not know precisely where the target is before a gaze movement is made; accordingly, the desired posture can be selected only after the execution of the initial component of the head movement. This hypothesis is supported by the reduced hand travel distance during the lift-off phase of a reach movement when the target is presented within the initial field of view. Furthermore, it is suggested that each corrective movement may be used for successive evaluation of the state of the head/neck movement system, which may enhance the accuracy of the head position representation. Therefore, the course of the multiple corrections can be viewed as fine-tuning of the posture by the updated representation of the target position and task space.
8.1.4 Control of Head Movements in the Context of Whole-Body Movements
It was suggested that the control of head movements should also be understood in the context of whole-body movements. In addition to the accuracy of the egocentric representation, therefore, the optimality of the postural responses of the head also arises from the interaction with whole-body posture. A standing posture can be viewed as the equilibrium of an inverted pendulum, and the heavy mass of the head on top of the pendulum could affect balance significantly. Both the experiments and the static biomechanical simulation results of this study indicated that head inclination angles are manipulated to counteract the shift of the whole-body center of mass produced by a load in the hand. Consequently, it can be suggested that head posture is an outcome of movement control for visual and postural functions, and that head postures are optimized for the global context of the movement, not for a single task requirement alone. The interaction between the head and the whole body that exists for the control of static postures can also be found in the control of dynamic movements. The head and
hand movements, which are controlled by different subsystems, must negotiate the control of the commonly shared link (the torso), so that the two subsystems take advantage of redundant degrees of freedom and generate internal movements to pursue their respective goals simultaneously. Indeed, the predicted movements show clear differences in both the joint angle trajectories and the final postures when a secondary objective function for negotiated control is introduced into the model. Furthermore, the negotiated control model predicts movements that are more similar to the measured movements than models including a single subsystem’s goal exclusively (visual or manual). From this perspective, a question is raised as to how the two different goals (posture and movement) coexist and are organized together. Recent findings indicate that posture and movement may be independent domains that the CNS can control separately (Scheidt et al., 2004; Ghez et al., 2004). Furthermore, another study reported that neither the postural nor the movement control model alone can fully account for hand reach movements in three-dimensional space (Hermens & Gielen, 2004). Hence, it is suggested that unconstrained three-dimensional movements may be composed of multiple components with individual goals associated with both posture and movement control, and that the CNS pursues both goals. This hypothesis is supported by the interaction between head and whole-body movement control in both the posture and movement domains. In such a case, it is suggested that the CNS should temporally coordinate the individual attempts to achieve a specific goal.
8.1.5 Organization of Unconstrained Movements Involving Multiple Links
Chapter 6 indicated that the global task goal (a visually-guided reach movement) can only be achieved by the fulfillment of each segment-specific goal (visual guidance and hand displacement). Hence, it is of interest how the CNS temporally organizes the activities that pursue these segment-specific goals. Since the role of gaze displacement is to obtain a visual representation of the task space, the visual subsystem should be initiated prior to the manual subsystem in order to ensure the accuracy of the hand reach movement. Indeed, the results of Chapter 7 confirmed that head movements precede hand movement initiation. Accordingly, it is suggested that the CNS should
prioritize the activities of each subsystem and regulate their coordination so that the outcome of one subsystem (space representation) can effectively be used for the subsequent action (hand reach movement). Hence, a visually-guided unconstrained hand reach movement in three-dimensional space should exhibit a sequence of multiple phases, each of which corresponds to the activity of an individual subsystem and control mode. The above hypotheses were used as the foundations of a coordination model. The principle of the proposed coordination model for unconstrained three-dimensional movements is that the sequenced activities of each subsystem and movement phase are evaluated against a goal in order to decide the subsequent mode of control and/or phase transition. The mode of control is also related to the reference frame in which it takes place and to the control goals that the CNS uses during a specific phase of movement. It was assumed that movements during the initial and intermediate phases are governed by direction-based and posture-based control, respectively, and that both phases are encoded in an egocentric reference frame. The transition between reference frames has been reported in previous studies (Kim & Martin, 2002; Keshner, 2004), where the CNS attempts to control either the head-on-the-torso or the head-in-space position depending on the task requirement and context. The proposed model includes a hierarchy of control: the lower layer consists of different controllers operating under different modes, and the upper (supervisory) layer acts as the coordinator. Even though the movement controllers in the lower layer may use optimization as a functioning principle, the supervisory layer does not rely on optimization for the organization of the movement component controllers. In addition, the proposed model takes into account randomness and decision-making in coordination.
It is suggested that the absence of optimization, together with the randomness in coordination, may actually make the movement prediction more realistic, providing the capacity to generate a number of variations in movement strategies, which is typical of unconstrained three-dimensional movements.
8.1.6 Summary of Contributions
In summary, the present work addressed the issues of head and hand movement control as follows:
1) Head movements are intended to provide an accurate visual representation of the task space for the subsequent movements of the hand and whole body. At the same time, head movements are influenced by the activities of the hand and whole body as an integral part of whole-body posture.
2) One of the goals of head and hand movement control is to achieve a desired posture, which is optimized for the task requirement and context. From this perspective, movement is a transition between goal postures.
3) Unconstrained three-dimensional movements are multi-phasic, and can be modeled by the coordination of multiple subsystems with specific goals.
4) Coordination can essentially be performed by on-line decision-making without optimization. From this perspective, movements can be random and variable depending on the context and individual preference.
8.2 Future Research Directions
8.2.1 Functional Outcome of the HMCR and Head Movement Kinematics
The results showed that the HMCR and head movement kinematics influence the performance of visually guided tasks. Target eccentricity with respect to the head is related to the accuracy of visual information, the reaction time for subsequent actions, and the accuracy of subsequent hand movements. In addition, hand movement onset is delayed until the head reaches peak velocity. Hence, the functional outcomes of the HMCR and head movement kinematics should be investigated and associated with the proposed model, particularly for digital human modeling applications and manual work design. Furthermore, the HMCR model can take into account the subject’s anthropometric and demographic characteristics in order to be used as an ergonomic design tool and for driving/work safety guidelines. Another potential area for model enhancement would be the investigation of the HMCR and resultant hand movement accuracy for subjects
wearing corrective lenses, helmets or head-mounted displays, which restrict the visual field and add weight to the head. For example, it has been reported that additional head inertia changes eye and head movement kinematics and coordination patterns (Gauthier, Martin & Stark, 1986; Neary, Bate & Heller, 1993; Lehnen, Glasauer & Buttner, 2003). The HMCR model can be further applied to predict the effect of a prolonged head/neck posture on fatigue and other musculoskeletal disorders.
8.2.2 Reference Frame and Movement Primitives
Throughout this dissertation, it was suggested that the CNS switches modes of control during the course of a movement, which results in separate movement phases. This problem is also related to the specific reference frame that the CNS uses to encode space and movements. These issues are in turn related to the types of primitives (characteristic movement patterns) that compose movements. Studies on planar movements have suggested that hand trajectories can be explained by a linear superposition of primitives (Sanger, 2000; Fod et al., 2002). Hence, it is likely that similar superposition rules may be identified in three-dimensional head and hand kinematics. Furthermore, it is not clear why the CNS uses one specific reference frame over another in a given movement context, and the functional outcome of specific reference frames has not yet been determined. Hence, the effect of the reference frame on the functional outcome of movement, along with the identification of three-dimensional movement primitives, needs further investigation.
8.2.3 Validity of Multiphasic Movement Coordination
Unconstrained 3D hand reach movements can be split into a series of phases. While it has been reported that the final feedback phase contributes to movement accuracy (Crossman & Goodeve, 1983; Meyer et al., 1988), the purpose of the lift-off phase has not yet been clearly defined. A clue comes from the initial head movement, which is cognitively driven to optimize visual target localization performance over a range of potential target locations. From this perspective, it is suggested that the lift-off phase helps the hand move in the direction of the potential position of the target based on a
cognitive mapping of the task space. However, it can also be suggested that the lift-off phase contributes to movement preparation by enhancing the available degrees of freedom, adjusting the link segment configurations to overcome geometrical constraints. In addition, it may be used to calibrate proprioception to the initial values of the joint variables, particularly when the movement is initiated from a prolonged static posture. Another possibility is that the lift-off phase is used to “break up” the inertia of the arm and hand during the early phase of movement and to elevate muscle tone in preparation for movement. In any case, experimental evidence would be necessary to support these hypotheses. From the same perspective, the selection of the control modes for each phase of the reach movement is largely based on the kinematics of head/hand movements and on evidence from previous studies, without support from experiments identifying the effects. One way to validate the selection of control modes would be to simulate the responses to disturbances applied to the end-effector position, joint angles, and target position. Furthermore, the results of the experiment in Chapter 7 show discrepancies with previous studies on planar movements in which time/accuracy were constrained. For example, all joints begin to move together in two-dimensional movements (Hoff & Arbib, 1993; Vercher et al., 1994; Sailer et al., 2005), and the superposition of primitives can still be achieved by a model without separate controllers (Sternad & Schaal, 1999). Although studies on unconstrained three-dimensional movements (Desmurget & Prablanc, 1997; Pelz et al., 2001; Flanagan & Johansson, 2003) are in good agreement with the present study, it would still be necessary to determine the factors that differentiate unconstrained three-dimensional movements from constrained, two-dimensional, or single-joint movements.
8.2.4 Probabilistic Approaches to Coordination
The proposed coordination model can accommodate probability functions in the node connections to simulate kinematic variability. A potential model enhancement would be the implementation of probability thresholds of phase transition that change with learning, a feature used for neural network modeling (Anthony & Bartlett, 1999). This can be considered as an attempt to resolve the system uncertainty by making decisions about what to do next in the coordinated sequencing. Since the upper layer of
the hierarchical structure in the proposed model, which is in charge of movement evaluation and the sequencing of movement components, does not include optimization processes, it is expected that optimality in decision-making may emerge through learning, and may not be an intrinsic goal as assumed by many earlier studies that attempted to solve the redundancy problem with optimization schemes. In addition, the deterministic components of the present model, including the target position representation, musculoskeletal system dynamics, and proprioceptive/visual feedback information, could be perturbed by random noise, so that the effect of uncertainty about the environment representation and the system state can be simulated. More importantly, validation of the model should be achieved by comparing simulated data and recorded movements using a quantitative index, which also requires further development.
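A learnable phase-transition threshold of the kind proposed above can be sketched as follows. The sigmoid gate, the learning rule, and all parameter values are assumptions for illustration, not part of the dissertation's model.

```python
import math

class PhaseGate:
    """Soft phase-transition threshold that shifts with learning.

    The gate converts an evaluation signal into a transition
    probability through a sigmoid; each successful trial lowers the
    threshold, so transitions fire earlier with practice. The sigmoid
    form, gain, and learning rate are illustrative assumptions.
    """
    def __init__(self, threshold=0.8, gain=10.0, lr=0.05):
        self.threshold = threshold
        self.gain = gain
        self.lr = lr

    def transition_probability(self, signal):
        return 1.0 / (1.0 + math.exp(-self.gain * (signal - self.threshold)))

    def update(self, success):
        # Learning rule: success lowers the threshold, failure raises it.
        self.threshold += -self.lr if success else self.lr

gate = PhaseGate()
p_before = gate.transition_probability(0.8)   # at threshold: probability 0.5
for _ in range(5):
    gate.update(success=True)
p_after = gate.transition_probability(0.8)    # lower threshold, higher probability
```

Repeated successful trials thus shift the gate so that the same evaluation signal triggers the phase transition with higher probability, a simple way to let optimality in sequencing emerge through learning rather than being imposed as an intrinsic goal.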
BIBLIOGRAPHY
Adamo, D. E., Martin, B. J., & Brown, S. H. (2003). Proprioceptive acuity in young and elderly adults. Program No. 735.15. 2003 Abstract Viewer/Itinerary Planner. Washington, DC: Society for Neuroscience.
Adamovich, S. V., Berkinblit, M. B., Fookson, O., & Poizner, H. (1998). Pointing in 3D space to remembered targets. I. Kinesthetic versus visual target presentation. Journal of Neurophysiology, 79(6), 2833–2846.
Anthony, M., & Bartlett, P. L. (1999). Neural Network Learning: Theoretical Foundations. Cambridge, UK: Cambridge University Press.
Badler, N., Webber, B., Becket, W., Geib, C., Moore, M., Pelachaud, C., et al. (1996). Planning for animation. In N. M-Thalmann & D. Thalmann (Eds.), Interactive Computer Animation. Prentice-Hall.
Bahill, A. T., Adler, D., & Stark, L. (1975). Most naturally occurring human saccades have magnitudes of 15 degrees or less. Investigative Ophthalmology, 14, 468–469.
Bard, C., Fleury, M., & Paillard, J. (1992). Head orienting and aiming accuracy. In A. Berthoz, W. Graf & P. P. Vidal (Eds.), The Head-Neck Sensory-Motor System (pp. 582–586). Oxford: Oxford University Press.
Bard, C., Paillard, J., Fleury, M., Hay, L., & Larue, J. (1990). Positional versus directional control loops in visuomotor pointing. European Bulletin of Cognitive Psychology, 39, 151–161.
Becker, W., & Saglam, H. (2001). Perception of angular head position during attempted alignment with eccentric visual objects. Experimental Brain Research, 138, 371–385.
Biguer, B., Prablanc, C., & Jeannerod, M. (1984). The contribution of coordinated eye and head movements in hand pointing accuracy. Experimental Brain Research, 55(3), 462–469.
Bizzi, E., Hogan, N., Mussa-Ivaldi, F. A., & Giszter, S. (1992). Does the nervous system use equilibrium-point control to guide single and multiple joint movements? Behavioral and Brain Sciences, 15, 603–613.
Bizzi, E., Kalil, R. E., Morasso, P., & Tagliasco, V. (1972). Central programming and peripheral feedback during eye-head coordination in monkeys. Bibliotheca Ophthalmologica, 82, 220–232.
Bock, O. (1993). Localization of objects in the peripheral visual field. Behavioral Brain Research, 56, 77–84.
Bock, O., & Eckmiller, R. (1986). Goal-directed arm movements in absence of visual guidance: evidence for amplitude rather than position control. Experimental Brain Research, 62, 451–458.
Bogduk, N., & Mercer, S. (2000). Biomechanics of the cervical spine I. Normal kinematics. Clinical Biomechanics, 15, 633–648.
Bridgeman, B., Hendry, D., & Stark, L. (1975). Failure to detect displacement of the visual world during saccadic eye movements. Vision Research, 15(6), 719–722.
Brown, S. H., & Cooke, J. D. (1981). Responses to force perturbations preceding voluntary human arm movements. Brain Research, 220, 350–355.
Burgess-Limerick, R., Plooy, A., & Ankrum, D. (1998). The effect of imposed and self-selected computer monitor height on posture and gaze angle. Clinical Biomechanics, 13, 584–592.
Carnahan, H., & Marteniuk, R. G. (1991). The temporal organization of hand, eye, and head movements during reaching and pointing. Journal of Motor Behavior, 23(2), 109–119.
Carrasco, M., Evert, D. L., Chang, I., & Katz, S. M. (1995). The eccentricity effect: Target eccentricity affects performance on conjunction searches. Perception & Psychophysics, 57, 1241–1261.
Cave, K. R., & Wolfe, J. M. (1990). Modeling the role of parallel processing in visual search. Cognitive Psychology, 22, 225–271.
Chaffin, D. B., Andersson, G. B. J., & Martin, B. J. (1999). Occupational Biomechanics (3rd ed.). New York: John Wiley & Sons.
Chiacchio, P., Chiaverini, S., Sciavicco, L., & Siciliano, B.
(1991). Closed-loop inverse kinematics schemes for constrained redundant manipulators with task space augmentation and task priority strategy. International Journal of Robotics Research, 10, 410–425.
Chieffi, S., Allport, D. A., & Woodin, M. (1999). Hand-centred coding of target location in visuo-spatial working memory. Neuropsychologia, 37(4), 495–502.
Cohn, J. V., DiZio, P., & Lackner, J. R. (2000). Reaching during virtual rotation: context specific compensations for expected coriolis forces. Journal of Neurophysiology, 83(6), 3230–3240.
Conti, P., & Beaubaton, D. (1980). Role of structural visual field and visual reafference in accuracy of pointing movements. Perceptual and Motor Skills, 50, 239–241.
Darling, W. G., Butler, A. J., & Williams, T. E. (1996). Visual perceptions of head-fixed and trunk-fixed anterior/posterior axes. Experimental Brain Research, 112(1), 127–134.
De Graaf, J. B., Denier van der Gon, J. J., & Sittig, A. C. (1996). Vector coding in slow goal-directed arm movements. Perception and Psychophysics, 58, 587–601.
De Graaf, J. B., Sittig, A. C., & Denier van der Gon, J. J. (1994). Misdirections in slow goal-directed arm movements are not primarily visually based. Experimental Brain Research, 99, 464–472.
Delleman, N. J., & Hin, A. J. S. (2000). Postural behaviour in static gazing upwards and downwards (SAE Technical Papers Series, 2000-01-2173). Warrendale, PA: Society of Automotive Engineers.
Delleman, N. J., Huysmans, M. A., & Kujit-Evers, L. F. M. (2001). Postural behaviour in static gazing sidewards (SAE Technical Papers Series, 2001-01-2093). Warrendale, PA: Society of Automotive Engineers.
Desmurget, M., & Prablanc, C. (1997). Postural control of three-dimensional prehension movements. Journal of Neurophysiology, 77, 452–464.
Desmurget, M., Prablanc, C., Rossetti, Y., Azi, M., Paulignan, Y., Urquizar, C., & Mignot, J. (1995). Postural and synergic control for three-dimensional movements of reaching and grasping. Journal of Neurophysiology, 74(2), 905–910.
Dick, S., Ostendorf, F., Kraft, A., & Ploner, C. J. (2004). Saccades to spatially extended targets: the role of eccentricity. Neuroreport, 15(3), 453–456.
Dukic, T., Hanson, L., Wartenberg, C., & Holmqvist, K. (2005). Effect of button location on driver's visual behaviour and safety perception. Ergonomics, 48(4), 399–410.
Dunham, D. N. (1997). Cognitive difficulty of a peripherally presented visual task affects head movements during gaze displacement. International Journal of Psychophysiology, 27(3), 171–182.
Dysart, M. J., & Woldstad, J. C. (1996). Posture prediction for static sagittal-plane lifting. Journal of Biomechanics, 29(10), 1393–1397.
Farrer, C., Franck, N., Paillard, J., & Jeannerod, M. (2003). The role of proprioception in action recognition. Consciousness and Cognition, 12, 609–619.
Feldman, A. G. (1966). Functional tuning of the nervous system with control of movement or maintenance of a steady posture. Controllable parameters of the muscle. Biophysics, 11, 565–578.
Feldman, A. G. (1986). Once more on the equilibrium point hypothesis (lambda model) for motor control. Journal of Motor Behavior, 18, 17–54.
Flanagan, J. R., & Johansson, R. S. (2003). Action plans used in action observation. Nature, 424, 769–771.
Flanders, M., Helms-Tillery, S. I., & Soechting, J. F. (1992). Early stages in a sensorimotor transformation. Behavioral and Brain Sciences, 15, 309–362.
Flash, T., & Hogan, N. (1985). The coordination of arm movements: an experimentally confirmed mathematical model. Journal of Neuroscience, 5, 1688–1703.
Fletcher, R., & Powell, M. J. D. (1963). A rapidly convergent descent method for minimization. Computer Journal, 6, 163–168.
Fod, A., Mataric, M. J., & Jenkins, O. C. (2002). Automated derivation of primitives for movement classification. Autonomous Robots, 12(1), 39–54.
Fookson, O., Smetanin, B., Berkinblit, M., Adamovich, S., Feldman, G., & Poizner, H. (1994). Azimuth errors in pointing to remembered targets under extreme head rotations. Neuroreport, 5, 885–888.
Freedman, E. G., & Sparks, D. L. (1997). Eye-head coordination during head-unrestrained gaze shifts in rhesus monkeys. Journal of Neurophysiology, 77, 2328–2348.
Freedman, E. G., & Sparks, D. L. (2000). Coordination of the eyes and head: movement kinematics. Experimental Brain Research, 131(1), 22–32.
Fuller, J. H. (1992). Head movement propensity. Experimental Brain Research, 92(1), 152–164.
Gauthier, G. M., Martin, B. J., & Stark, L. (1986). Adapted head and eye movement responses to added-head inertia. Aviation, Space, and Environmental Medicine, 57, 336–342.
Gauthier, G. M., Vercher, J. L., & Blouin, J. (1995).
Egocentric visual target position and velocity coding: role of ocular muscle proprioception. Annals of Biomedical Engineering, 23, 423–435.
Ghez, C., Dinstein, I., Cappell, J., & Scheidt, R. A. (2004). Posture and movement are encoded in different coordinate systems. Program No. 873.14. 2004 Abstract Viewer/Itinerary Planner. Washington, DC: Society for Neuroscience.
Gillies, G. T., Broaddus, W. C., Stenger, J. M., & Taylor, A. G. (1998). A biomechanical model of the craniomandibular complex and cervical spine based on the inverted pendulum. Journal of Medical Engineering & Technology, 22(6), 263–269.
Goldring, J. E., Dorris, M. C., Corneil, B. D., Ballantyne, P. A., & Munoz, D. P. (1996). Combined eye-head gaze shifts to visual and auditory targets in humans. Experimental Brain Research, 111, 68–78.
Goossens, H. H. L. M., & Van Opstal, A. J. (1997). Human eye-head coordination in two dimensions under different sensorimotor conditions. Experimental Brain Research, 114(3), 542–560.
Guitton, D. (1988). Eye-head coordination in gaze control. In B. W. Peterson & F. J. Richmond (Eds.), Control of Head Movements. New York: Oxford University Press.
Guitton, D. (1992). Control of eye-head coordination during orienting gaze shifts. Trends in Neurosciences, 15(5), 174–179.
Guitton, D., & Volle, M. (1987). Gaze control in humans: Eye-head coordination during orienting movements to targets within and beyond the oculomotor range. Journal of Neurophysiology, 58, 427–459.
Guitton, D., Munoz, D. P., & Pelisson, D. (1993). Are gaze shifts controlled by a 'moving hill' of activity in the superior colliculus? [Reply]. Trends in Neurosciences, 16, 216–217.
Haines, R. F., & Gilliland, K. (1973). Response time in the full visual field. Journal of Applied Psychology, 58(3), 289–295.
Harris, L. R., Zikovitz, D. C., & Kopinska, A. E. (1998). Frames of reference with examples from driving and auditory localization. In L. R. Harris & M. Jenkin (Eds.), Vision and Action. Cambridge: Cambridge University Press.
Hasan, Z. (1986). Optimized movement trajectories and joint stiffness in unperturbed, inertially loaded movements. Biological Cybernetics, 53, 373–382.
Helsen, W. F., Elliott, D., Starkes, J. L., & Ricker, K. L. (2000). Coupling of eye, finger, elbow, and shoulder movements during manual aiming. Journal of Motor Behavior, 32(3), 241–248.
Herst, A. N., Epelboim, J., & Steinman, R. M. (2001). Temporal coordination of the human head and eye during a natural sequential tapping task. Vision Research, 41, 3307–3318.
Hoff, B., & Arbib, M. A. (1993). Models of trajectory formation and temporal interaction of reach and grasp. Journal of Motor Behavior, 25, 175–192.
Israel, I., & Berthoz, A. (1989). Contribution of the otoliths to the calculation of linear displacement. Journal of Neurophysiology, 62, 247–263.
Jampel, R. S., & Shi, D. X. (1992). The primary position of the eyes, the resetting saccade, and the transverse visual head plane. Investigative Ophthalmology and Visual Science, 33(8), 2501–2510.
Jaschinski, W. (2002). The proximity-fixation-disparity curve and the preferred viewing distance at a visual display as an indicator of near vision fatigue. Optometry & Vision Science, 79(3), 158–169.
Jeannerod, M. (1988). The Neural and Behavioural Organization of Goal-Directed Movements. Oxford: Clarendon Press.
Jürgens, R., Becker, W., & Kornhuber, H. H. (1981). Natural and drug-induced variations of velocity and duration of human saccadic eye movements: Evidence for a control of the neural pulse generator by local feedback. Biological Cybernetics, 39, 87–96.
Kalesnykas, R. P., & Hallett, P. E. (1994). Retinal eccentricity and the latency of eye saccades. Vision Research, 34, 517–531.
Kapoula, Z. (1985). Evidence for a range effect in the saccadic system. Vision Research, 25, 1155–1157.
Kawato, M. (1992). Optimization and learning in neural networks for formation and control of coordinated movement. In D. E. Meyer & S. Kornblum (Eds.), Attention and Performance XIV: Synergies in Experimental Psychology, Artificial Intelligence, and Cognitive Neuroscience (pp. 821–849). Cambridge, MA: MIT Press.
Kee, D., & Karwowski, W. (2001). The boundaries for joint angles of isocomfort for sitting and standing males based on perceived comfort of static joint postures. Ergonomics, 44, 614–648.
Keshner, E. A. (2004). Head-trunk coordination in elderly subjects during linear anterior-posterior translations. Experimental Brain Research, 158, 213–222.
Khan, M. A., Lawrence, G. P., Franks, I. M., & Buckolz, E. (2004). The utilization of visual feedback from peripheral and central vision in the control of direction. Experimental Brain Research, 158, 241–251.
Kim, K. H., & Martin, B. J. (2002). Visual and postural constraints in coordinated movements of the head in hand reaching tasks. Proceedings of the 46th Human Factors and Ergonomics Society Conference, Baltimore, MD.
Komura, T., Shinagawa, Y., & Kunii, T. L. (2001). An inverse kinematics method based on muscle dynamics. Computer Graphics International, 15–22.
Latash, M. L. (1993). Control of Human Movement. Urbana, IL: Human Kinetics.
Laurutis, V. P., & Robinson, D. A. (1986). The vestibulo-ocular reflex during human saccadic eye movements. Journal of Physiology, 373, 209–233.
Lee, C. (1999). Eye and head coordination in reading: Roles of head movement and cognitive control. Vision Research, 39(22), 3761–3768.
Lehnen, N., Glasauer, S., & Buttner, U. (2003). Eye-head coordination: Challenging the system by increasing head inertia. Oculomotor and Vestibular Systems: Their Function and Disorders, Annals of the New York Academy of Sciences, 1004, 524–526.
Lestienne, F. G., Le Goff, B., & Liverneaux, P. A. (1995). Head movement trajectory in three-dimensional space during orienting behavior toward visual targets in rhesus monkeys. Experimental Brain Research, 102, 393–406.
Li, G. Y., & Haslegrave, C. M. (1999). Seated work postures for manual, visual and combined tasks. Ergonomics, 42(8), 1060–1086.
Manary, M. A., Flannagan, C. A. C., Reed, M. P., & Schneider, L. W. (1998). Development of an improved driver eye position model. SAE International Congress and Exposition, Detroit, MI.
Marder, E., & Abbott, L. F. (1995). Theory in motion. Current Opinion in Neurobiology, 5, 832–840.
Martin, B. J., Roll, J. P., & Gauthier, G. M. (1986). Inhibitory effects of combined agonist and antagonist muscle vibration on H-reflex in man. Aviation, Space, and Environmental Medicine, 57, 681–687.
Massion, J. (1992). Movement, posture and equilibrium: Interaction and coordination. Progress in Neurobiology, 38, 35–56.
Matsuo, S., Bergeron, A., & Guitton, D. (2004). Evidence for gaze feedback to the cat superior colliculus: Discharges reflect gaze trajectory perturbations. Journal of Neuroscience, 24(11), 2760–2773.
Melzer, J. E., & Moffitt, K. (1997). Head Mounted Displays: Designing for the User. New York: McGraw-Hill.
Millodot, M. (1986). Dictionary of Optometry. London: Butterworths.
Morasso, P. (1981). Spatial control of arm movements. Experimental Brain Research, 42, 223–227.
Mukundan, R. (2002). Quaternions: From classical mechanics to computer graphics, and beyond. In Proceedings of the 7th Asian Technology Conference in Mathematics.
Nagasaki, H. (1989). Asymmetric velocity and acceleration profiles of human arm movements. Experimental Brain Research, 74(2), 319–326.
Nashner, L. M. (1985). Strategies for organization of human posture. In M. Igarashi & F. O. Black (Eds.), Vestibular and Visual Control of Posture and Locomotor Equilibrium (pp. 1–8). Basel: Karger.
Neary, C., Bate, I. J., Heller, L. F., & Williams, M. (1993). Helmet slippage during visual tracking: The effect of voluntary head movements. Aviation, Space, and Environmental Medicine, 64(7), 623–630.
Netelenbos, J. B., & Savelsbergh, G. J. P. (2003). Children's search for targets located within and beyond the field of view: Effects of deafness and age. Perception, 32, 485–497.
Oommen, B. S., Smith, R. M., & Stahl, J. S. (2004). The influence of future gaze orientation upon eye-head coupling during saccades. Experimental Brain Research, 155(1), 9–18.
Ouerfelli, M., Kumar, V., & Harwin, W. S. (1999). Kinematic modeling of head-neck movements. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 29(6), 604–615.
Paillard, J. (1987). Cognitive versus sensorimotor encoding of spatial information. In P. Ellen & C. Thinus-Blanc (Eds.), Cognitive Processes and Spatial Orientation in Animal and Man (pp. 43–77). Dordrecht: Martinus Nijhoff.
Paillard, J. (1996). Fast and slow feedback loops for the visual correction of spatial errors in a pointing task: A reappraisal. Canadian Journal of Physiology and Pharmacology, 74, 401–417.
Paillard, J., & Amblard, B. (1985). Static versus kinetic visual cues for the processing of spatial relationships. In D. J. Ingle, M. Jeannerod & D. N. Lee (Eds.), Brain Mechanisms in Spatial Vision (pp. 367–385). Amsterdam: Martinus Nijhoff.
Park, W., Chaffin, D. B., & Martin, B. J. (2004). Toward memory-based human motion simulation: Development and validation of a motion modification algorithm. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 34(3), 376–386.
Pelz, J. B., Hayhoe, M. M., & Loeber, R. (2001). The coordination of eye, head, and hand movements in a natural task. Experimental Brain Research, 139, 266–277.
Peterka, R. J., & Benolken, M. S. (1995). Role of somatosensory and vestibular cues in attenuating visually induced human postural sway. Experimental Brain Research, 105(1), 101–110.
Polit, A., & Bizzi, E. (1979). Characteristics of motor programs underlying arm movements in monkeys. Journal of Neurophysiology, 42, 183–194.
Prablanc, C., Pélisson, D., & Goodale, M. A. (1986). Visual control of reaching movements without vision of the limb. I: Role of retinal feedback of target position in guiding the hand. Experimental Brain Research, 62, 293–302.
Reed, M. P., Manary, M. A., & Schneider, L. W. (1999). Methods for measuring and representing automobile occupant posture. In Proceedings of SAE International Congress and Exposition, Detroit, MI.
Robinson, D. A. (1975). Oculomotor control signals. In G. Lennerstrand & P. Bach-y-Rita (Eds.), Basic Mechanisms of Ocular Motility and Their Clinical Implications (pp. 337–374). Oxford: Pergamon Press.
Robinson, D. A., Gordon, J. L., & Gordon, S. E. (1986). A model of the smooth pursuit eye movement system. Biological Cybernetics, 55, 43–57.
Roll, R., Bard, C., & Paillard, J. (1986). Head orienting contributes to the directional accuracy of aiming at distant targets. Human Movement Science, 359–371.
Roll, R., Velay, J. L., & Roll, J. P. (1991). Eye and neck proprioceptive messages contribute to the spatial coding of retinal input in visually oriented activities. Experimental Brain Research, 85, 423–431.
Rosenbaum, D. A., Loukopoulos, L., Meulenbroek, R. G. J., Vaughan, J., & Engelbrecht, S. E. (1995). Planning reaches by evaluating stored postures. Psychological Review, 102, 28–67.
Rosenbaum, D. A., Meulenbroek, R. G. J., & Vaughan, J. (2001). Planning reaching and grasping movements: Theoretical premises and practical implications. Motor Control, 2, 99–115.
Rossetti, Y., Meckler, C., & Prablanc, C. (1994). Is there an optimal arm posture? Deterioration of finger localization precision and comfort sensation in extreme arm-joint postures. Experimental Brain Research, 99(1), 131–136.
Sabes, P. N. (2000). The planning and control of reaching movements. Current Opinion in Neurobiology, 10, 740–746.
Sailer, U., Eggert, T., Ditterich, J., & Straube, A. (2000). Spatial and temporal aspects of eye-hand coordination across different tasks. Experimental Brain Research, 134, 163–173.
Sanders, M. S., & McCormick, E. J. (1993). Human Factors in Engineering and Design (7th ed.). New York: McGraw-Hill.
Sanger, T. D. (2000). Human arm movements described by a low-dimensional superposition of principal components. Journal of Neuroscience, 20(3), 1066–1072.
Scheidt, R. A., Mussa-Ivaldi, F. A., & Ghez, C. (2004). Posture and movement invoke separate adaptive mechanisms. Program No. 873.13. 2004 Abstract Viewer/Itinerary Planner. Washington, DC: Society for Neuroscience.
Sciavicco, L., & Siciliano, B. (1996). Modeling and Control of Robot Manipulators. New York: McGraw-Hill.
Sergio, L. E., & Scott, S. H. (1998). Hand and joint paths during reaching movements with and without vision. Experimental Brain Research, 122, 157–164.
Sherk, H. H. (1989). Physiology and biomechanics. In H. H. Sherk, E. J. Dunn, F. J. Eismont, J. W. Fielding, D. M. Long, K. Ono, et al. (Eds.), The Cervical Spine. Philadelphia: J. B. Lippincott.
Sober, S. J., & Sabes, P. N. (2003). Multisensory integration during motor planning. Journal of Neuroscience, 23(18), 6982–6992.
Sodhi, M., Reimer, B., Cohen, J. L., Vastenburg, E., Kaars, R., & Kirschenbaum, S. (2002). On-road driver eye movement tracking using head-mounted devices. In Proceedings of the Eye Tracking Research and Applications Conference. Association for Computing Machinery.
Stahl, J. S. (1999). Amplitude of human head movements associated with horizontal saccades. Experimental Brain Research, 126(1), 41–54.
Stark, L., Zangemeister, W. H., & Hannaford, B. (1988). Head movement models, optimal control theory and clinical application. In B. Peterson & F. Richmond (Eds.), Control of Head Movements (pp. 245–260). Oxford.
Sternad, D., & Schaal, S. (1999). Segmentation of endpoint trajectories does not imply segmented control. Experimental Brain Research, 124, 118–136.
Temprado, J. J., Vieilledent, S., & Proteau, L. (1996). Afferent information for motor control: The role of visual information in different portions of the movement. Journal of Motor Behavior, 28, 280–287.
Tipper, S. P., Howard, L. A., & Paul, M. A. (2001). Reaching affects saccade trajectories. Experimental Brain Research, 136(2), 241–249.
Todorov, E. (2004). Optimality principles in sensorimotor control. Nature Neuroscience, 7(9), 907–915.
Todorov, E., & Jordan, M. I. (2002). Optimal feedback control as a theory of motor coordination. Nature Neuroscience, 5(11), 1226–1235.
Torres, E. B., & Zipser, D. (2002). Reaching to grasp with a multi-jointed arm. I. A computational model. Journal of Neurophysiology, 88, 1–13.
Trevarthen, C. B. (1968). Two mechanisms of vision in primates. Psychologische Forschung, 31, 299–337.
Tweed, D., Glenn, B., & Vilis, T. (1995). Eye-head coordination during large gaze shifts. Journal of Neurophysiology, 73, 766–779.
Uemura, T., Arai, Y., & Shimazaki, C. (1980). Eye-head coordination during lateral gaze in normal subjects. Acta Oto-Laryngologica, 90(3-4), 191–198.
Uno, Y., Kawato, M., & Suzuki, R. (1989). Formation and control of optimal trajectory in human multijoint arm movement: Minimum torque-change model. Biological Cybernetics, 61, 89–101.
Van den Abeele, S., Delreux, V., Crommelinck, M., & Roucoux, A. (1993). Role of eye and hand initial position in the directional coding of reaching. Journal of Motor Behavior, 25(4), 280–287.
Van der Kooij, H., Jacobs, R., Koopman, B., & van der Helm, F. (2001). An adaptive model of sensory integration in a dynamic environment applied to human stance control. Biological Cybernetics, 84(2), 103–115.
Vercher, J. L., Magenes, G., Prablanc, C., & Gauthier, G. M. (1994). Eye-head-hand coordination in pointing at visual targets: Spatial and temporal analysis. Experimental Brain Research, 99, 507–523.
Viviani, P., & Swensson, R. G. (1982). Saccadic eye movements to peripherally discriminated visual targets. Journal of Experimental Psychology: Human Perception and Performance, 8(1), 113–126.
Wang, X. (1999). Behavior-based inverse kinematics algorithm to predict arm prehension postures for computer-aided ergonomic evaluation. Journal of Biomechanics, 32(5), 453–460.
Wartenberg, C., Dukic, T., Falck, A. C., & Hallbeck, S. (2004). The effect of assembly tolerance on performance of a tape application task: A pilot study. International Journal of Industrial Ergonomics, 33(4), 369–379.
Webb Associates. (1978). Anthropometric Source Book, Vol. I (NASA Reference Publication 1024). Washington, DC: National Aeronautics and Space Administration.
Wickens, C. D. (1992). Engineering Psychology and Human Performance (2nd ed.). New York: HarperCollins Publishers.
Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202–238.
Wolfe, J. M., Cave, K. R., & Franzel, S. L. (1989). Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15, 419–433.
Wolpert, D. M., Ghahramani, Z., & Jordan, M. I. (1995). An internal model for sensorimotor integration. Science, 269, 1880–1882.
Woodworth, R. S. (1899). The accuracy of voluntary movements. Psychological Review (Monograph Supplement), 3, 1–114.
Zangemeister, W. H., & Stark, L. (1982). Types of gaze movements: Variable interactions of eye and head movements. Experimental Neurology, 77, 563–577.
Zangemeister, W. H., Jones, A., & Stark, L. (1981). Dynamics of head movement trajectories: Main sequence relationship. Experimental Neurology, 71(1), 76–91.
Zhang, X., Kuo, A. D., & Chaffin, D. B. (1998). Optimization-based differential kinematic modeling exhibits a velocity-control strategy for dynamic posture determination in seated reaching movements. Journal of Biomechanics, 31, 1035–1042.