International Journal of Psychophysiology 50 (2003) 101–115

Combination of conflicting visual and non-visual information for estimating actively performed body turns in virtual reality

Simon Lambrey*, Alain Berthoz

LPPA, Collège de France-CNRS, 11 Place Marcelin Berthelot, Paris 75005, France

Received 14 October 2002; accepted 24 April 2003

Abstract

Whereas constant-weight linear models suffice for understanding many phenomena in the domain of perception and action, how the weights given to each sensory input are determined remains an open question. Notably, it has been suggested that weighting depends on the sensory context (e.g. the inconsistency between sensory signals) as well as on the subject. In the present study, the problem of non-linearity in multisensory interaction for estimating actively performed body turns was addressed at the level of both group and individual data. Standing subjects viewed a virtual corridor in which forward movements were simulated at a constant linear velocity, and rotations were actually performed. Subjects were asked to learn the trajectory and then reproduce it from memory in total darkness. In the baseline condition, the relative amplitudes of visual and non-visual information for the performed rotations were the same, but were systematically manipulated in six ‘sensory conflict’ conditions. The subjects performed the task in these seven conditions 10 times (10 sessions), with a delay of at least 2 days between sessions. Five subjects placed more weight on visual than on non-visual information. The other five subjects placed more weight on non-visual than on visual information. Interestingly, the difference between ‘visual’ and ‘non-visual’ subjects in their use of conflicting information seemed to be accentuated by the fact of becoming aware of the sensory conflict. In all subjects, conflicting sensory inputs were combined in a linear way in order to estimate the angular displacements. However, signatures of non-linearity were detected when the data corresponding to the day on which subjects became aware of the conflict were considered in isolation.
The present findings support the hypothesis that subjects used conflicting visual and non-visual information differently according to individual ‘perceptive styles’ (bottom-up processes) and that these ‘perceptive styles’ were made more observable when subjects changed their perceptive strategy, i.e. re-weighted the sensory inputs (top-down processes).

Keywords: Spatial memory; Sensory conflict; Human; Multisensory interaction; Virtual reality

*Corresponding author. Tel.: +33-1-44-27-13-86; fax: +33-1-44-27-13-82. E-mail address: [email protected] (S. Lambrey). 0167-8760/03/$ - see front matter © 2003 Elsevier Science B.V. All rights reserved. doi:10.1016/S0167-8760(03)00127-2


1. Introduction

The sensory weighting model of multisensory integration consists of three processing layers (Zupan et al., 2002). Firstly, each sensor provides the central nervous system with information regarding a specific physical variable (S). Secondly, because the information available from different sensory systems is qualitatively different, sensory estimates are converted to intermediate estimates that share the same ‘units’, a process referred to as ‘promotion’ by Landy et al. (1995). This conversion is based on internal models of the relationships between sensory systems. Thirdly, because several sensory systems may provide information about the same physical variable, the final estimate (Ŝ) is computed as a weighted average of all available intermediate sensory estimates as follows:

Ŝ = Σᵢ [wᵢ × fᵢ(S)]
where the subscript i refers to a specific sensory modality, fᵢ is the operation by which the central nervous system estimates S and wᵢ is the weight given to a specific sensory estimate. The sensory weighting model has been extensively examined. How the weights are determined, however, remains an open question. According to the hypothesis of a maximum likelihood estimator (MLE), high weights are given to reliable cues and low weights to unreliable ones (see Jacobs, 2002 for a recent review): a cue is reliable if the distribution of inferences based on that cue has a relatively small variance; otherwise the cue is regarded as unreliable. Consistent with this hypothesis, a Kalman filter (or extended Kalman filter), which is an instance of MLE, has been proposed to model multisensory integration in, for instance, postural control (Gusev and Semenov, 1992; Kuo, 1995; van der Kooij et al., 1999; Kiemel et al., 2002), self-motion perception (Borah et al., 1988; Merfeld et al., 1993) or visual-haptic object perception (Ernst and Banks, 2002; Hillis et al., 2002). Some of these models view multisensory fusion as a constant-weight, linear process (e.g. Borah et al., 1988; Gusev and Semenov, 1992; Kuo, 1995; van der Kooij et al., 1999; Ernst and Banks, 2002;

Kiemel et al., 2002). Linear weighting rules suffice for understanding many phenomena in the domain of perception and action, notably because all sensory systems normally provide congruent information about a specific physical variable (e.g. self-motion, posture or the shape of an object). Various studies, however, have reported evidence of non-linearity in multisensory interaction (e.g. Crowell et al., 1998; Mergner et al., 2000; Jeka et al., 2000; Oie et al., 2001; Lambrey et al., 2002; Oie et al., 2002; Hillis et al., 2002). For example, some authors, investigating the combination of touch and vision for postural control in humans, have suggested that multiple sensory inputs are dynamically re-weighted to maintain upright stance as sensory conditions change (Oie et al., 2002). In a study examining how human subjects combine visual and vestibular inputs for self-rotation perception, Mergner et al. (2000) reported findings that were incompatible with linear system predictions. The authors, therefore, described their results by a non-linear dynamic model in which the visual input can, in certain conditions, be suppressed by a visual-vestibular conflict mechanism. More recently, Lambrey et al. (2002) studied the effect of mismatched visual and non-visual information on the reproduction of actively performed body turns and discussed the hypothesis that one source of information may be dominant in estimating angular displacements, depending on the size of the sensory conflict and on the task to be performed (reproduction by body turns vs. drawing). These findings are consistent with non-linear cue conflict models that were proposed earlier (Zacharias and Young, 1981; Oman, 1982). 
They are also consistent with the robust estimator hypothesis (Hampel, 1974; Huber, 1981; Bülthoff and Mallot, 1988; Landy et al., 1995), which predicts that as the discrepancy between conflicting cues increases beyond the range present in normal conditions, the weight given to the discrepant (outlier) cue will become increasingly low. In the event of major cue discrepancies, the outlier cue may even be completely disregarded, entailing the ‘turning off’ of the corresponding sensory modality. This phenomenon was referred to as ‘vetoing behaviour’ by Bülthoff and Mallot (1988).
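The weighting rules discussed above can be illustrated with a minimal numerical sketch. This is not the cited authors' implementation: the function names (`mle_fuse`, `robust_fuse`), the Gaussian-noise assumption behind inverse-variance weights, and the median-based veto threshold are our own illustrative choices.

```python
import numpy as np

def mle_fuse(estimates, variances):
    """Fuse redundant sensory estimates of one physical variable with
    inverse-variance (maximum-likelihood) weights: reliable cues
    (small variance) receive high weight, unreliable cues low weight."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()                       # weights sum to 1
    return float(np.dot(w, np.asarray(estimates, dtype=float))), w

def robust_fuse(estimates, variances, veto_threshold):
    """Robust-estimator variant: a cue whose discrepancy from the
    median of the cues exceeds `veto_threshold` is 'vetoed'
    (completely disregarded) before the MLE fusion."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    keep = np.abs(est - np.median(est)) <= veto_threshold
    return mle_fuse(est[keep], var[keep])

# Two estimates of a 90-deg turn; the visual cue is 4x more reliable,
# so it gets weight 0.8 and dominates the fused estimate:
fused, w = mle_fuse([90.0, 80.0], [4.0, 16.0])
print(fused, w)   # → 88.0 [0.8 0.2]

# Three cues, one wildly discrepant (140 deg): it is vetoed entirely.
fused_robust, _ = robust_fuse([90.0, 88.0, 140.0], [4.0, 4.0, 4.0],
                              veto_threshold=20.0)
print(fused_robust)   # → 89.0
```

Under this sketch, small conflicts only shift the weights, whereas a large conflict removes the outlier cue altogether, which is the qualitative behaviour the vetoing hypothesis describes.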


In the light of the literature summarised above, it appears that the weight given to each sensory input in multisensory fusion depends on the reliability of the corresponding sensory modality as well as on the degree of coherence between the different inputs. Interestingly, experiments have shown that the sensory weights also vary from one individual to another in given situations. For example, when deducing the three-dimensional structure of the environment from vision, some subjects consistently give more weight to disparity, whereas others preferentially weight texture gradients (Johnston et al., 1993; Backus and Banks, 1999). In addition, some authors have suggested that inter-individual differences in postural stability may be due to different processes for selecting the spatial frame of reference (Marendaz, 1989; Berthoz, 1991; Ohlmann and Marendaz, 1991). Consistent with such a hypothesis, it has been reported that some subjects used visual cues to improve their balance, while others did not (Cremieux and Mesure, 1994; Collins and De Luca, 1995). Moreover, Isableu et al. (1997, 1998) addressed the question of possible links between perceptive visual field dependence and the visual contribution to postural control, demonstrating that they interact in a complex way. In the present study, we addressed the problem of non-linearity in multisensory interaction for estimating actively performed body turns. We considered data at both the group and the individual level, since inter-individual variability has been demonstrated in multisensory processing. A paradigm in virtual reality (see Lambrey et al., 2002) was used so as (a) to manipulate the relative amplitude of the visual and non-visual signals of rotation in a navigation task (N.B.
non-visual signals included proprioceptive, vestibular and somatosensory information as well as efferent copies of motor commands) and (b) to investigate the interaction between conflicting visual and non-visual information in a reproduction task. Standing subjects were shown a virtual corridor in which their forward motion was simulated at a constant linear velocity while they actively controlled their direction (i.e. orientation in the corridor) by rotating on the spot. They were first asked to learn the path travelled and then to actively reproduce it


from memory in total darkness. In the baseline condition, the relative amplitudes of visual and non-visual information for the rotations performed were the same, but were manipulated in six sensory conflict conditions. In the light of the literature summarised above, one may hypothesise that some subjects will mainly use visual cues for estimating angular displacements (‘visual’ subjects), whereas others will rather rely on non-visual information (‘non-visual’ subjects). Furthermore, consistent with the robust estimator hypothesis, the weights given to visual and non-visual cues for estimating angular displacements may change according to the size of the sensory conflict. These two points were examined in the present study.

2. Materials and methods

2.1. Subjects

Ten healthy volunteers (five females, five males; mean age = 22.2 years, S.D. = 1.6) took part in the study. All were right-handed. None of the subjects had a history of neurological disease, head injury or psychiatric disorders; none complained of memory difficulties. All subjects had normal or corrected-to-normal vision. Informed consent was obtained after the nature and possible consequences of the study were explained. The experiment was approved by the local ethics committee (CCPPRB 120-98).

2.2. Experimental set-up

The experimental set-up used in the present study has already been described elsewhere (Lambrey et al., 2002). Subjects stood at the central point of a circular safety rail and were fitted with a virtual reality helmet (Fig. 1a). They were shown virtual corridors with simulated forward motion generated by the computer at a constant translation velocity. The linear speed in the corridor was within the locomotor range of human walking gait (approx. 1 m/s). When a corner appeared in the corridor, subjects actually had to rotate their body to turn in the virtual environment. A gain g was included in the computation of virtual image


updating on the basis of body rotations, so that visual rotations in the helmet were equal to g times the body rotations. The software did not allow subjects to cross walls, and in the event of a collision, the trajectory ran along the wall. The virtual apparatus included an LCD display (Kaiser Electro Optics ProView 60, Carlsbad, CA, USA), which had a monocular field of view of 48° by 36° (resolution 640 × 480) and was refreshed at 60 Hz. The refresh rate did not induce any noticeable flicker or eyestrain. The orientation of the subject’s head in the horizontal plane was measured by a magnetic system (Flock of Birds, Burlington, VT, USA), with a report rate of 30 Hz. The image generator (O2, Silicon Graphics) recorded the angular position of the head sent from the tracker and transmitted the corresponding image to the display (the total latency between sensor motion and complete image display being estimated at 240 ms).

2.3. Virtual environment

Thirty different corridors were used. For each corridor the graphic model consisted of a checkerboard pattern on the floor, a plain coloured ceiling, and evenly spaced black stripes on the walls (Fig. 1b). Each of the corridors had four straight segments (all of the same length) connected by three angles (45°, 90° and 135°, either to the right or to the left). Different colours and different angle orders were used to distinguish the corridors.

2.4. Procedure

Fig. 1. Experimental set-up. (a) The standing subject was immersed in a virtual environment by means of a head-mounted display. Translation in the corridor was imposed by the computer. The direction of the subject’s head was recorded using a magnetic tracking system. Custom-written software processed the data on the angular position of the head sent by the tracker and updated the corresponding visual image, providing the visual rotation components in the virtual corridor. (b) Graphic model of the virtual environment (example: view of the entrance to one corridor).

Before the experiment, subjects were instructed not to move their head in relation to their trunk when turning. They were asked to control their direction when going around the corners as if they were in a real corridor and to stay as close as possible to the middle of the corridor. The experiment began with a training period during which subjects navigated along a single corridor with seven 90° angles. In the event of a collision along the path, subjects had to travel through the corridor once more. The experimental design, which is derived from


Fig. 2. Synopsis of the experiment. Each experimental trial comprised two identical navigation tasks and one reproduction task. These tasks were successively performed by the subjects under seven distinct conditions: one baseline and six ‘sensory conflict’ conditions. All subjects performed this set of seven conditions 10 times during 10 equivalent sessions.

the one used in a previous study (Lambrey et al., 2002), is summarised in Fig. 2. Each experimental trial comprised two tasks. Subjects were first asked to navigate along a virtual corridor twice, trying to memorise the trajectory travelled (navigation task). They were then asked to reproduce the path travelled without any visual cues, by mentally visualising the route (reproduction task). The reproduction task was performed immediately after the navigation task, without any delay. During the navigation task, forward motion in the corridor was simulated by the computer at a constant linear velocity. When a corner appeared, subjects actually had to rotate their body to turn in the virtual environment. It took approximately 1 min to complete the path along the corridor. During the reproduction task, subjects had to remain immobile during the imagined straight segments, but when they remembered a turn, they had to rotate their body by the amount needed to match the turn from the previous navigation. After the reproduction task, subjects removed the helmet and were asked whether they had any comments about the path they had just travelled. In all cases they were given the opportunity to rest for 2 or 3 min before performing the next navigation task.


The tasks described above were successively performed by the subjects under seven distinct experimental conditions: one baseline and six ‘sensory conflict’ conditions. In the baseline condition, the gain g (see Section 2.2) was equal to 1, i.e. visual and body rotations had the same amplitude. In the conflict conditions, visual and non-visual cues did not provide the same information on angular amplitudes during navigation. The gain values used in these conflict conditions were 2.5, 1.67, 1.25, 0.83, 0.71 and 0.62, i.e. body rotations were respectively 40%, 60%, 80%, 120%, 140% and 160% the size of visual rotations (N.B. visual rotations had the same amplitude whatever the condition). The six conflict conditions were called the Vision−60%, Vision−40%, Vision−20%, Vision+20%, Vision+40% and Vision+60% conditions, respectively. All subjects were exposed to this set of seven conditions 10 times during 10 equivalent sessions, with a delay of at least 2 days between two successive sessions. For each session and each subject, the condition order was randomised and a random sample of seven corridors among the 30 (see Section 2.3) was used, so that on average each corridor was travelled 2.33 times by each subject. One corridor was never associated more than once with a specific gain (i.e. a specific condition). One complete session lasted approximately 90 min. Subjects were asked after each reproduction task whether they had any comment about the current trial. They were informed that they could say whatever came into their mind. The experimenter wrote down all the comments without exception, so as to avoid any bias. These free comments were subsequently used to determine the day on which subjects became aware of the conflict.

2.5. Data analysis

2.5.1. Angular amplitude analysis

The computer recorded the instantaneous angular direction of the subjects’ head during both navigation and reproduction.
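The correspondence between the gain values and the relative body-rotation amplitudes quoted above can be checked with a short computation (an illustrative sketch; the variable names are ours, not from the original software):

```python
# The display shows g times the body rotation, so for a fixed visual
# rotation the body rotation required is visual/g. A body rotation of
# p% of the visual rotation therefore corresponds to a gain of 100/p.
body_percentages = [40, 60, 80, 120, 140, 160]   # body size as % of visual
gains = [round(100.0 / p, 2) for p in body_percentages]
print(gains)   # → [2.5, 1.67, 1.25, 0.83, 0.71, 0.62]
```

The six values recovered this way match the gains listed above, confirming that gain and relative body-rotation amplitude are simply reciprocal.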
Direction profiles were plotted, producing graphs showing the angular direction of the head (in degrees) as a function of time. Analysis of the profiles gave measurements


of the amplitudes of the rotations performed (body rotations of navigation and reproduced body rotations). As the image generator recorded information on the angular direction of the head from the tracker and transmitted the corresponding visual image to the head-mounted display, the amplitudes of visual rotations of navigation were computed by multiplying those of body rotations by the gain, g (see Section 2.2). A variable called amplitude error (expressed as a percentage) was used for further analyses and was calculated as follows for each trial:

Amplitude error = 100 × (reproduced body rotation − visual rotation of navigation) / (visual rotation of navigation)

With this sign convention, a positive error indicates that the visual rotation of navigation was overshot during reproduction.

It should be noted that the amplitude error does not simply reflect accuracy in reproducing the angular displacements, but rather corresponds to the percentage difference between visual rotations of navigation and reproduced body rotations. Thus, a large amplitude error (in absolute value) could be due to inaccuracy in reproducing the visual rotations of navigation, but also to the fact that subjects preferentially used non-visual information for estimating the angular displacements. Hence, the effect of the sensory conflict on the amplitude error, rather than the amplitude error itself, needs to be considered, since a modification of the amplitude error according to the condition (i.e. to the sensory conflict) would necessarily indicate the use of non-visual information.

2.5.2. Translation duration analysis

On the direction profiles, one segment with a constant angular direction corresponds to one translation along one segment of the corridor. The analysis of direction profiles, therefore, allows the duration (in seconds) of each translation portion of the paths travelled to be measured. A variable called duration error (expressed as a percentage) was used for further analyses and was calculated as follows for each trial:

Duration error = 100 × (reproduced translation duration − translation duration of navigation) / (translation duration of navigation)
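The two error measures reduce to the same percentage-difference formula. A minimal sketch (our own function names; the example values are invented for illustration, not taken from the data):

```python
def amplitude_error(reproduced_rotation, visual_rotation_nav):
    """Percentage difference between the reproduced body rotation and
    the visual rotation seen during navigation (positive = overshoot)."""
    return 100.0 * (reproduced_rotation - visual_rotation_nav) / visual_rotation_nav

def duration_error(reproduced_duration, nav_duration):
    """Percentage difference between the reproduced and the navigated
    translation durations (positive = overshoot)."""
    return 100.0 * (reproduced_duration - nav_duration) / nav_duration

# A subject who reproduces a 120-deg body turn when the visual turn
# was 80 deg overshoots the visual rotation by 50%:
print(amplitude_error(120.0, 80.0))   # → 50.0

# A translation reproduced in 9 s instead of 12 s is undershot by 25%:
print(duration_error(9.0, 12.0))      # → -25.0
```

As noted above, only the duration error can be read directly as reproduction accuracy; the amplitude error confounds accuracy with the relative weighting of visual and non-visual cues.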

Fig. 3. Distribution of subjects according to the slopes of the individual regression lines (reproduced body rotations as a function of body rotation of navigation).

Unlike the amplitude error, the duration error can be directly interpreted as reflecting accuracy in reproducing the duration of the translations.

3. Results

3.1. Individual data analysis

Under the baseline condition, subjects were able to perform the navigation task easily. In all sensory conflict conditions, all the subjects also managed to navigate accurately along the corridors. None of them reported any feeling of motion sickness in spite of the incongruent gain between visual and non-visual information on angular displacements. No subject reported that the memorisation task was more difficult in one or more conditions compared with the others. These observations suggest that the conflict between visual and non-visual information did not noticeably disturb the subjects’ performance in the navigation or memorisation tasks.

For every subject, the mean amplitude of reproduced body rotations (averaged across 10 sessions and three angles) was significantly correlated with the mean amplitude of body rotations of navigation (averaged across 10 sessions and three angles) (N = 7, P < 0.005). The seven dots of every individual regression line corresponded to the seven experimental conditions. The slopes of these individual regression lines ranged from 0.13 to 0.88. This indicates that there were neither purely ‘visual’ (theoretical slope = 0) nor purely ‘non-visual’ (theoretical slope = 1) subjects, but that all the subjects showed an influence of both sources of information on the reproduction of angular displacements. Moreover, the large standard deviation of the slopes (S.D. = 0.28) indicates that subjects differed in the extent to which they placed more or less weight on visual and non-visual information. Interestingly, the distribution of subjects according to the individual slope seemed to be bimodal (Fig. 3), suggesting that subjects could be split into two groups. Although the hypothesis of a uniform distribution cannot be completely ruled out from the present data, notably because of the small number of subjects, such a split is fully consistent with the literature cited in the Introduction. The 10 subjects were, therefore, classified into two groups (Fig. 4) depending on whether the slope was lower or higher than 0.5: the first group comprised five subjects and was referred to as the Visual group [i.e. subjects who placed more weight on visual than on non-visual information (mean slope = 0.23, S.D. = 0.08)]; the second group comprised the other five subjects and was referred to as the Non-visual group [i.e. subjects who placed more weight on non-visual than on visual information (mean slope = 0.73, S.D. = 0.12)].

Fig. 4. Individual amplitude errors under the seven experimental conditions. For each subject and each condition, the mean angular amplitude errors have been averaged across the three corridor turns (45°, 90° and 135°) and the 10 sessions. The dotted lines correspond to the predicted theoretical lines for use of either pure visual information (slope = 0) or pure non-visual information (slope = 1).

3.2. Conflict awareness

Of the 10 subjects, six noticed the difference between body and visual rotations in the conflict conditions. Some examples of subjects’ comments, which allowed us to determine whether a given subject had become aware of the conflict (as well as the day on which he or she became aware), were as follows: ‘I have a feeling that it was very slow. I had to rotate a lot on the spot to be able to turn in the corridor’ and ‘I got the impression of turning more than what I could see in the helmet, that body and visual rotations were different’. Three of the five Non-visual subjects became aware of the conflict during the first, second and third session, respectively. Two of the five Visual subjects became aware of the conflict during the third session, and one during the fifth session. The other subjects (two Visual and two Non-visual) did not mention any difference between the baseline and sensory


conflict conditions, whatever the session. When asked at the end of the experiment whether, during navigation, they had sometimes perceived a discrepancy between the amplitude of visual rotations and body turns, all four subjects answered ‘no’. They even expressed considerable surprise at the question.

3.3. Angular amplitude analysis

To assess the reproduction of the angular displacements performed during navigation, an analysis of variance examining the amplitude error (averaged across the three angles) was conducted. A 2 × 2 × 10 × 7 ANOVA design was used, GROUP (Visual vs. Non-visual) and AWARENESS (Aware vs. Unaware) being considered as between-subject factors, while SESSION (1–10) and CONDITION (Vision−60% vs. Vision−40% vs. Vision−20% vs. Baseline vs. Vision+20% vs. Vision+40% vs. Vision+60%) were manipulated within-subjects. Visual rotations of navigation were systematically overshot during reproduction. The mean amplitude error (averaged across 10 subjects, 10 sessions and seven conditions) was equal to 31% (in % of the visual rotation of navigation). The ANOVA showed a main effect of the session (F(9,54) = 2.15, P < 0.05), suggesting that visual rotations were less and less overshot as the sessions progressed. Furthermore, there was a significant interaction between the session and awareness (F(9,54) = 2.60, P < 0.05), suggesting that the effect of the session on the amplitude error existed only in Unaware subjects. In fact, the amplitude error in Unaware subjects was equal to 54% during the first session but decreased to 33% during the last session. By contrast, the amplitude error in Aware subjects ranged approximately from 24% to 29%. The ANOVA showed a main effect of the condition (F(6,36) = 113.00, P < 0.00001), indicating an influence of the sensory conflict on the reproduction of angular displacements. Furthermore, there was a significant interaction between the group and the condition (F(6,36) = 27.77, P < 0.00001), suggesting that this influence of the conflict differed between the two groups and thus providing statistical support for the existence of two groups of subjects (Fig. 5). In fact,

Fig. 5. Mean amplitude errors under the seven experimental conditions in the Visual and Non-visual groups. The mean amplitude errors have been averaged across the three corridor turns (45°, 90° and 135°), the 10 sessions and the number of subjects in each group. The error bars show the standard deviations. The dotted lines correspond to the predicted theoretical lines for use of either pure visual information (slope = 0) or pure non-visual information (slope = 1).

subjects in the Non-visual group performed smaller reproduced body rotations (reproduction task) than subjects in the Visual group in the Vision−60%, Vision−40% and Vision−20% conditions. Conversely, subjects in the Non-visual group performed greater reproduced body rotations than subjects in the Visual group in the Vision+20%, Vision+40% and Vision+60% conditions. Interestingly, there was also an interaction between the group, the condition and the awareness (F(6,36) = 2.54, P < 0.05), suggesting that being aware of the sensory conflict reinforced the difference between Visual and Non-visual subjects in the use of conflicting information (Fig. 6). In order to assess the existence of non-linearity in the effect of the sensory conflict on the amplitude error (i.e. in the use of conflicting visual and non-visual information for remembering angular displacements), three different polynomial fits were tested using F-tests at a significance level of α = 0.05. The linear fit, which is consistent with linear sensory weighting, was significant for all of the Aware Visual, Aware Non-visual, Unaware Visual and Unaware Non-visual groups. By contrast, the quadratic and cubic fits,


Fig. 6. Mean amplitude errors under the seven experimental conditions in aware and unaware subjects. The mean amplitude errors have been averaged across the three corridor turns (45°, 90° and 135°), the 10 sessions and the number of subjects in each group (aware vs. unaware). The dotted lines correspond to the predicted theoretical lines for use of either pure visual information (slope = 0) or pure non-visual information (slope = 1).

which indicate non-linearity, were not significant, whatever the group of subjects. Finally, the ANOVA showed a significant interaction between the group, the condition and the session (F(54,324) = 1.44, P < 0.05). In order to make this interaction easier to interpret and to understand the effect of becoming aware of the conflict, an additional analysis was conducted. Only subjects who became aware of the conflict were considered. Three phases were considered instead of the 10 sessions: the Realising phase (i.e. the session during which subjects became aware of the sensory conflict; see Section 3.2), the Before phase (all sessions before becoming aware of the conflict) and the After phase (all sessions after becoming aware of the conflict). One of the six Aware subjects was excluded from this analysis because she realised the existence of the conflict during the first session, and so there were no data for the Before phase. For each phase, each subject and each condition, the amplitude error was averaged across the number of sessions that formed the considered phase. A 2 × 3 × 7 ANOVA design was used, GROUP (Visual vs. Non-visual) being considered as a between-subject factor, while PHASE (Before vs. Realising vs. After) and CONDITION (Vision−60% vs. Vision−40% vs. Vision−20% vs. Baseline vs. Vision+20% vs. Vision+40% vs. Vision+60%) were manipulated within-subjects.

The ANOVA showed an interesting interaction between the group, the phase and the condition (F(12,36) = 2.04, P < 0.05) (Fig. 7), which could be related to the previously reported interaction between the group, the session and the condition. Consistent with the previously reported effect of awareness, this suggests that becoming aware of the sensory conflict reinforced the difference between Visual and Non-visual subjects in the use of conflicting information. A trend analysis was conducted using F-tests in order to detect signatures of non-linearity in the effect of the sensory conflict on the amplitude error. A linear trend was found in Visual subjects during the Before phase (P < 0.05) and during the After phase (marginally significant, P = 0.06) but not during the Realising phase (P = 0.10). Neither a quadratic nor a cubic (non-linear) trend was found in Visual subjects, whatever the phase. In Non-visual subjects, a significant linear trend was found during the Before phase (P < 0.01), the Realising phase (P < 0.05) and the After phase (P < 0.005). Furthermore, a significant quadratic (non-linear) trend was found in Non-visual subjects during the Realising phase, but not during the two other phases.

3.4. Translation duration analysis

To assess the reproduction of translation durations, an analysis of variance examining the duration error (averaged across the four segments) was conducted. A 2 × 2 × 10 × 7 ANOVA design was used, GROUP (Visual vs. Non-visual) and AWARENESS (Aware vs. Unaware) being considered as between-subject factors, while SESSION (1–10) and CONDITION (Vision−60% vs. Vision−40% vs. Vision−20% vs. Baseline vs. Vision+20% vs. Vision+40% vs. Vision+60%) were manipulated within-subjects. The ANOVA showed a main effect of the session (F(9,54) = 9.25, P < 0.000001). In fact, the mean duration error (averaged across the 10 subjects and seven conditions) was equal to −17.4% in the first session, then gradually increased as the sessions progressed, finally reaching +11.6% in the last session (Fig. 8). Furthermore, there was a main effect of the group (F(1,6) = 6.89, P < 0.05), suggesting that Visual subjects tended to undershoot the translation durations (−11.9%), while the Non-visual subjects tended to overshoot them (+13.3%) (Fig. 9a). There was also a main effect of the awareness (F(1,6) = 7.85, P < 0.05), indicating that Aware subjects tended to undershoot the translation durations (−9.2%), while Unaware subjects tended to overshoot them (+15.6%) (Fig. 9b). However, there was no interaction between the group and the awareness.
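The constant-weight linear model tested by the angular analyses can be sketched numerically. Under that model, the reproduced rotation is w × (body rotation of navigation) + (1 − w) × (visual rotation), so the amplitude error varies linearly across conditions with a slope set by the non-visual weight w. This is a sketch under our own simplifying assumptions (the weight values are illustrative, not fitted to the data, and the constant overshoot bias reported above is ignored):

```python
def predicted_amplitude_error(w_nonvisual, body_pct_of_visual):
    """Amplitude error (% of the visual rotation) predicted by a
    constant-weight linear combination of visual and non-visual cues.
    body_pct_of_visual: body rotation of navigation as % of the
    visual rotation (e.g. 40 in the Vision-60% condition)."""
    visual = 100.0
    body = float(body_pct_of_visual)
    reproduced = w_nonvisual * body + (1.0 - w_nonvisual) * visual
    return 100.0 * (reproduced - visual) / visual

# A purely 'visual' subject (w = 0) is unaffected by the conflict,
# whereas a purely 'non-visual' subject (w = 1) tracks the body rotation:
print(predicted_amplitude_error(0.0, 40.0))   # → 0.0   (slope 0 line)
print(predicted_amplitude_error(1.0, 40.0))   # → -60.0 (slope 1 line)
print(predicted_amplitude_error(1.0, 160.0))  # → 60.0
```

Intermediate weights produce intermediate slopes, which is why the slope of the error-versus-condition line (Figs. 4 and 5) can be read as the relative weight given to non-visual information; any curvature, such as the quadratic trend during the Realising phase, departs from this constant-weight prediction.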

Fig. 7. Mean amplitude errors in aware visual and aware non-visual groups before, during and after becoming aware of the sensory conflict. For each phase (Before, Realising and After) and each condition, the amplitude error was averaged across the three corridor turns (45°, 90° and 135°) and the number of subjects in each group. The error bars show the standard deviations. The dotted lines correspond to the predicted theoretical lines for use of either pure visual information (slope = 0) or pure non-visual information (slope = 1).

Fig. 8. Mean duration error as a function of session number. The mean duration errors have been averaged across the four linear segments of the corridor, the seven conditions and the 10 subjects. The error bars show the standard deviations.


Fig. 9. Mean duration error as a function of the group of subjects. The mean duration errors were averaged across the four linear segments of the corridor, the 10 sessions, the seven experimental conditions and the number of subjects in each group. The error bars show the standard deviations. (a) Non-visual vs. visual subjects. (b) Unaware vs. aware subjects.

4. Discussion

In the present study, subjects were asked to reproduce a path travelled through a virtual environment in one baseline condition and in six sensory conflict conditions in which the normal relationship between actual motion and its visual consequences was manipulated. In three conflict conditions (Vision−60%, Vision−40% and Vision−20%), actual body rotation led to oversized changes in the visual angle. In the other three conflict conditions (Vision+20%, Vision+40% and Vision+60%), the same body rotation led to undersized changes in the visual angle. The aim of the study was to examine how conflicting visual and non-visual cues are used to memorise and reproduce actively performed body turns.

4.1. Conflict awareness

Interestingly, only 6 of the 10 subjects (60%) expressed awareness of the sensory conflict. This percentage was nevertheless much higher than in a previous study using a similar paradigm (Lambrey et al., 2002), in which only one of the 21 subjects (less than 5%) reported the discrepancy between visual and body rotations. The inconsistency in conflict awareness between these two studies can be explained by two differences in the experimental procedure. In the previous study (Lambrey et al., 2002), body rotations were at least 67% and at most 150% the size of visual rotations. In the present study, body rotations ranged from 40 to 160% the size of visual rotations and, therefore, included a wider range of sensory conflicts. Furthermore, the previous study (Lambrey et al., 2002) consisted of only one session (one baseline and two conflict conditions), whilst the present study comprised ten sessions (one baseline and six conflict conditions). Thus, in the present study, subjects had more opportunities to notice the sensory conflict. In fact, only one of the 10 subjects (10%) who participated in the present experiment became aware of the conflict during the first session. Whatever the case, even if the proportion was smaller than in the previous study by Lambrey et al. (2002), it is quite surprising that almost half of the subjects in the present study did not realise that actual body rotation sometimes led to largely oversized (or undersized) changes in the visual angle. This may be because the navigation task was performed under visual guidance and the sensory conflict was, therefore, not a problem in performing the required action (i.e. rotating on the spot). Differences between body rotations and the updating of the visual scene could also be attributed by the subjects to the virtual reality apparatus, e.g. to a poor adjustment of the updating computation. Nevertheless, such hypotheses are unlikely to explain how some subjects could rotate their body by 54° in the Vision−60% condition (or by 216° in the Vision+60% condition) while seeing a 135° visual angle, without any conscious awareness of the difference between body and visual rotations.
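The relationship between actual and visual rotation in each conflict condition can be made concrete: the body rotation required to produce a given visual turn is the visual angle scaled by the signed conflict level. A small sketch (the condition naming follows the paper; the function itself is our own illustration):

```python
def body_rotation(visual_angle_deg, condition_pct):
    """Body rotation (in degrees) required to produce a given visual
    turn.  condition_pct is the signed conflict level: -60 for the
    Vision-60% condition, 0 for Baseline, +60 for Vision+60%.  Body
    rotations therefore range from 40% to 160% of the visual rotation.
    """
    return visual_angle_deg * (1 + condition_pct / 100.0)

# The example from the text: a 135-degree visual turn requires a body
# rotation of 54 degrees under Vision-60% and 216 degrees under
# Vision+60%.
needed_minus60 = body_rotation(135, -60)
needed_plus60 = body_rotation(135, 60)
```

This also shows why the widest conditions produce such a striking discrepancy: the same 135° visual turn corresponds to body rotations differing by a factor of four across the condition range.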
Further studies will be necessary to examine this type of behavioural profile, namely conflict non-awareness, in greater depth.

4.2. Reproduction of angular displacements

In the baseline condition, mean reproduced body rotations were 29.7% greater than the visual rotations of navigation. Such an overshoot has already been reported and discussed in a previous study (Lambrey et al., 2002).

4.2.1. Inter-individual variability in sensory weighting

An important question addressed in the present study was inter-individual variability in sensory weighting, i.e. in the use of conflicting visual and non-visual information for estimating angular displacements. Despite an influence of non-visual information on the reproduction of rotations in all subjects (individual data), the present findings clearly show that either visual or non-visual information is dominant for estimating angular displacements, depending on the subject. One hypothesis is that exposure to the sensory conflict did not lead to adaptive dominance of one of the two conflicting cues, but rather revealed predefined individual weights for visual and non-visual information, respectively. Consistent with this view is the classical distinction between subjects who are either dependent or independent with respect to the visual field in subjective vertical perception. Inter-individual variability in sensory weighting, which could be referred to as 'perceptive styles', has also been demonstrated in studies investigating the visual contribution to body orientation and postural balance (Cremieux and Mesure, 1994; Collins and De Luca, 1995; Isableu et al., 1997, 1998). Golomer et al. (1999) studied the degree of dependence on vision, for postural control and for perception, among male adult dancers and untrained subjects. They showed that professional physical training might shift the sensorimotor dominance from vision to proprioception. This study suggests that the more a sensory modality is used and/or is useful, the greater its weight becomes. Individual sensory experiences could, therefore, influence 'perceptive styles' in multisensory processing.
Such a view is consistent with the hypothesis of a maximum-likelihood estimator (see Section 1), since frequent use of one particular modality would entail an increase in the reliability of this modality. Do such 'perceptive styles', however, suffice as an explanation for the present findings?
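Under the maximum-likelihood view, each cue's weight is proportional to its reliability (inverse variance), so a frequently used, hence more reliable, modality earns a larger weight. A minimal sketch of this standard combination rule (the cue values and variances below are illustrative, not estimates from the study):

```python
def ml_estimate(visual, nonvisual, var_visual, var_nonvisual):
    """Reliability-weighted (maximum-likelihood) combination of two
    cues: each cue is weighted by its inverse variance, and the two
    weights sum to one."""
    r_v, r_nv = 1.0 / var_visual, 1.0 / var_nonvisual
    w_v = r_v / (r_v + r_nv)
    return w_v * visual + (1.0 - w_v) * nonvisual

# For the same conflicting pair of rotation estimates, a 'Visual'
# subject (low visual variance) lands near the visual cue, while a
# 'Non-visual' subject (low non-visual variance) lands near the
# non-visual one.
visual_style = ml_estimate(100.0, 50.0, var_visual=0.1, var_nonvisual=10.0)
nonvisual_style = ml_estimate(100.0, 50.0, var_visual=10.0, var_nonvisual=0.1)
```

With equal variances the rule reduces to the simple average, i.e. the constant-weight linear model mentioned in the abstract.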

According to the hypothesis formulated above, that exposure to the conflict merely revealed an existing differential sensory weighting, the difference between the Visual and Non-visual groups should have been present from the very beginning of the experiment, which was clearly not the case. On the contrary, the difference, which was initially not discernible, seemed to emerge progressively over the sessions. One hypothesis is that subjects adapted their interpretation of one of the two conflicting cues so as to render it more consistent with that based on the other cue, a phenomenon referred to as 're-calibration' (e.g. see Leibowitz and Shupert, 1985; Correia, 1998; Paillard, 1999). However, re-calibration has been shown to occur after prolonged exposure to the same sensory conflict (Berthoz and Melvill Jones, 1985; Gordon et al., 1995; Viaud-Delmon et al., 1998; Weber et al., 1998; Ivanenko et al., 1998; Viaud-Delmon et al., 1999), whereas a randomised exposure to seven different sensory situations was used in the present study. Another hypothesis, recently discussed by Jacobs (2002), is that subjects change their cue combination strategies by increasing the weights associated with reliable cues and decreasing the weights associated with unreliable cues. Determining whether visual or non-visual cues are more reliable would depend, as discussed above, on 'perceptive styles' (a bottom-up process), resulting in two groups of subjects. Unlike re-calibration, such re-weighting would not restore any coherence between sensory systems, but would rather entail the 'turning off' of one or more modalities. Interestingly, the difference between the Visual and Non-visual groups was much greater among subjects who became aware of the sensory conflict during the experiment than among subjects who did not.
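The re-weighting account discussed above, in which weights on reliable cues grow while those on unreliable cues shrink, can be sketched with a simple incremental update rule. The learning rate and the reliability signal here are our own illustrative assumptions, not a model fitted to the data:

```python
def reweight(w_visual, reliability_visual, reliability_nonvisual, rate=0.1):
    """One re-weighting step: move the visual weight a fraction `rate`
    of the way toward the share of total reliability carried by the
    visual cue.  Repeated exposure gradually 'turns down' the less
    reliable modality rather than restoring coherence between the two,
    as re-calibration would."""
    target = reliability_visual / (reliability_visual + reliability_nonvisual)
    return w_visual + rate * (target - w_visual)

# Starting from equal weights, a subject who treats vision as the more
# reliable cue drifts, exposure by exposure, toward visual dominance.
w = 0.5
for _ in range(20):
    w = reweight(w, reliability_visual=0.9, reliability_nonvisual=0.1)
```

The gradual drift of `w` away from 0.5 mirrors the way the Visual/Non-visual group difference, initially absent, emerged progressively over the sessions.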
Furthermore, when considering only the subjects who became aware, the distinction between the Visual and Non-visual groups became more clear-cut after the subjects noticed the sensory conflict. This suggests that a high-level cognitive process, e.g. conscious perceptive decision-making, could underlie sensory re-weighting (a top-down process). Using the same paradigm as in the present study, Lambrey et al. (2002) previously showed that subjects relied on visual information when asked to draw the trajectory travelled, yet reproduced rotations on the basis of non-visual information during active blindfolded movements. These findings, which suggest that either visual or non-visual information is selected according to the task, are consistent with the existence of top-down processes.

4.2.2. Linear vs. non-linear multisensory processing

One key question in the present study was whether signatures of non-linearity could be detected in multisensory interaction when subjects were confronted with large sensory discrepancies, as predicted by the robust-estimator hypothesis (e.g. Bulthoff and Mallot, 1988; Landy et al., 1995). Using the same paradigm as in the present study, Lambrey et al. (2002) recently discussed the hypothesis that one of the two conflicting cues may be dominant according to the sensory context (e.g. the minus/plus sign of the conflict, see Section 2), which can be considered a certain type of non-linearity. At first sight, the present findings do not support this hypothesis, since no trace of non-linearity in multisensory interaction was found when considering the data averaged across the 10 sessions. However, when distinguishing three phases in the experiment, signatures of non-linearity were detected during the day that subjects became aware of the conflict. More precisely, there was a significant linear trend in Visual subjects both before and after becoming aware of the conflict, but not during the day they realised the existence of the sensory discrepancies. Likewise, there was a significant non-linear trend in Non-visual subjects during the day they became aware of the conflict, a trend that was not found before or after that day. It is not possible to determine from the present data whether such non-linearity is an example of robustness (i.e. vetoing behaviour) or not, and further experiments are needed to understand more clearly what kind of mechanism these traces of non-linearity reflect.
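The contrast between a constant-weight linear model and a robust (vetoing) estimator can be made explicit with a toy pair of combination rules. The veto threshold and the choice of which cue survives the veto are arbitrary illustrations, not claims about the subjects' actual mechanism:

```python
def linear_combine(visual, nonvisual, w_visual=0.5):
    """Constant-weight linear combination: the response is a linear
    function of the conflict size."""
    return w_visual * visual + (1.0 - w_visual) * nonvisual

def robust_combine(visual, nonvisual, w_visual=0.5, veto_deg=45.0):
    """Robust combination: when the two cues disagree by more than
    veto_deg, the discrepant cue is vetoed and the estimate follows
    the dominant (here, non-visual) cue alone.  The response is then a
    non-linear function of the conflict, of the kind the
    Realising-phase data hint at."""
    if abs(visual - nonvisual) > veto_deg:
        return nonvisual
    return linear_combine(visual, nonvisual, w_visual)
```

For small conflicts the two rules agree; only past the veto threshold do their predictions diverge, which is why averaging over many mildly conflicting conditions can mask the non-linearity.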
The interesting point is that the non-linearity could be related to the moment of re-weighting (i.e. of changing perceptive strategy) as well as to the moment of becoming aware of the sensory discrepancy. This could explain why many experiments have found no signature of non-linearity in multisensory processing, since the instant of non-linear processing may be very transient.

4.3. Translation duration reproduction

Subjects were on average very accurate in reproducing translation durations (+0.7%), which is consistent with studies on the timing of 'mental walking' in humans (Decety et al., 1989; Ghaem et al., 1997; Mellet et al., 2000). In line with previous results (Lambrey et al., 2002), subjects tended to shorten translation durations during the first session (−17.4%) but lengthened them during the last session (+11.6%). This reversal may have been due to habituation to the virtual environment, leading to increasing boredom with the navigation task and consequently to a stretching of translation durations. Interestingly, Visual subjects tended to shorten the translation durations, while Non-visual subjects tended to lengthen them. The dominance of either visual or non-visual information for estimating angular displacements therefore seems to influence the reproduction of translation durations. The reproduction task surely involved visual imagery, since (1) forward motion was visually simulated by the computer during the navigation task and (2) subjects were explicitly instructed to mentally visualise the route during the reproduction of the path. Motor imagery could also be implicated during the imagined straight segments, given that the visual stimulation simulated walking along the corridor. The literature provides many studies supporting the hypothesis that imagined and executed actions share the same neural substrate. Accordingly, the timing of simulated movements follows the same constraints as that of actually executed movements. Recently, Papaxanthis et al. (2002) demonstrated that both inertial and gravitational constraints are accurately incorporated in the timing of the motor imagery process during imagined arm movements.
Consistent with the results of their mental chronometry experiments, one hypothesis to explain the present findings is that mental imagery was performed differently in Visual and Non-visual subjects. In other words, the timing of imagined walking along the virtual corridor would follow different constraints, depending on individual 'perceptive styles'. It is indeed possible that biomechanical constraints linked to walking movements were greater during motor imagery in Non-visual subjects than in Visual subjects. Non-visual subjects would, therefore, lengthen the translation durations compared to Visual subjects. Another finding needs to be briefly discussed here. Aware subjects shortened the translation durations during the reproduction task, while Unaware subjects lengthened them. This may be due to the fact that Unaware subjects had more difficulty than Aware subjects in remembering the turns along the path during the reproduction task and so tended to delay the instant of turning.

4.4. Conclusion

In conclusion, the present findings support the hypothesis that subjects use conflicting visual and non-visual information differently according to individual 'perceptive styles' (bottom-up processes). These 'perceptive styles' probably existed prior to the exposure to the sensory conflict, but they were also influenced by high-level cognitive processes such as perceptive decision-making (top-down processes). In particular, the subjects' 'perceptive styles' seemed to be accentuated by their becoming aware of the conflict. Overall, the present study did not provide direct evidence for non-linear multisensory processing. However, when the data for the day on which subjects became aware of the conflict were considered separately, signatures of non-linearity were detected, which could be related to a changing perceptive strategy, namely a re-weighting according to individual 'perceptive styles'.

Acknowledgments

This work was supported by SmithKline Beecham, 'GIS-Sciences de la Cognition' and HFSP (RG71/96B). The equipment was provided in part by the CNES (Centre National d'Etudes Spatiales). The authors wish to thank Michel-Ange Amorim, Nicholas Barton and Celine Chappuis for their helpful comments on the text, as well as France Maloumian for the graphs.

References

Backus, B.T., Banks, M.S., 1999. Estimator reliability and distance scaling in stereoscopic slant perception. Perception 28, 217–242.
Berthoz, A., 1991. Reference frames for the perception and control of movement. In: Paillard, J. (Ed.), Brain and Space. Oxford University Press, Oxford, pp. 82–111.
Berthoz, A., Melvill Jones, G., 1985. Adaptive Mechanisms in Gaze Control. Elsevier, Amsterdam.
Borah, J., Young, L.R., Curry, R.E., 1988. Optimal estimator model for human spatial orientation. Ann. N.Y. Acad. Sci. 545, 51–73.
Bulthoff, H.H., Mallot, H.A., 1988. Integration of depth modules: stereo and shading. J. Opt. Soc. Am. A 5, 1749–1758.
Collins, J.J., De Luca, C.J., 1995. The effects of visual input on open-loop and closed-loop postural control mechanisms. Exp. Brain Res. 103, 151–163.
Correia, M.J., 1998. Neuronal plasticity: adaptation and readaptation to the environment of space. Brain Res. Brain Res. Rev. 28, 61–65.
Cremieux, J., Mesure, S., 1994. Differential sensitivity to static visual cues in the control of postural equilibrium in man. Percept. Mot. Skills 78, 67–74.
Crowell, J.A., Banks, M.S., Shenoy, K.V., Andersen, R.A., 1998. Visual self-motion perception during head turns. Nat. Neurosci. 1, 732–737.
Decety, J., Jeannerod, M., Prablanc, C., 1989. The timing of mentally represented actions. Behav. Brain Res. 34, 35–42.
Ernst, M.O., Banks, M.S., 2002. Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415, 429–433.
Ghaem, O., Mellet, E., Crivello, F., Tzourio, N., Mazoyer, B., Berthoz, A., et al., 1997. Mental navigation along memorized routes activates the hippocampus, precuneus and insula. Neuroreport 8, 739–744.
Golomer, E., Cremieux, J., Dupui, P., Isableu, B., Ohlmann, T., 1999. Visual contribution to self-induced body sway frequencies and visual perception of male professional dancers. Neurosci. Lett. 267, 189–192.
Gordon, C.R., Fletcher, W.A., Melvill Jones, G., Block, E.W., 1995. Adaptive plasticity in the control of locomotor trajectory. Exp. Brain Res. 102, 540–545.
Gusev, V., Semenov, L., 1992. A model for optimal processing of multisensory information in the system for maintaining body orientation in the human. Biol. Cybern. 66, 407–411.
Hampel, F.R., 1974. The influence curve and its role in robust estimation. J. Am. Stat. Assoc. 69, 383–393.
Hillis, J.M., Ernst, M.O., Banks, M.S., Landy, M.S., 2002. Combining sensory information: mandatory fusion within, but not between senses. Science 298, 1627–1630.
Huber, P.J., 1981. Robust Statistics. Wiley, New York.

Isableu, B., Ohlmann, T., Cremieux, J., Amblard, B., 1997. Selection of spatial frame of reference and postural control variability. Exp. Brain Res. 114, 584–589.
Isableu, B., Ohlmann, T., Cremieux, J., Amblard, B., 1998. How dynamic visual field dependence–independence interacts with the visual contribution to postural control. Hum. Movement Sci. 17, 367–391.
Ivanenko, Y.P., Viaud-Delmon, I., Siegler, I., Israel, I., Berthoz, A., 1998. The vestibulo-ocular reflex and angular displacement perception in darkness in humans: adaptation to a virtual environment. Neurosci. Lett. 241, 167–170.
Jacobs, R.A., 2002. What determines visual cue reliability? Trends Cognit. Sci. 6, 345–350.
Jeka, J., Oie, K.S., Kiemel, T., 2000. Multisensory information for human postural control: integrating touch and vision. Exp. Brain Res. 134, 107–125.
Johnston, E.B., Cumming, B.G., Parker, A.J., 1993. Integration of depth modules: stereopsis and texture. Vision Res. 33, 813–826.
Kiemel, T., Oie, K.S., Jeka, J.J., 2002. Multisensory fusion and the stochastic structure of postural sway. Biol. Cybern. 87, 262–277.
Kuo, A.D., 1995. An optimal control model for analyzing human postural balance. IEEE Trans. Biomed. Eng. 42, 87–101.
Lambrey, S., Viaud-Delmon, I., Berthoz, A., 2002. Influence of a sensorimotor conflict on the memorization of a path traveled in virtual reality. Brain Res. Cognit. Brain Res. 14, 177–186.
Landy, M.S., Maloney, L.T., Johnston, E.B., Young, M., 1995. Measurement and modeling of depth cue combination: in defense of weak fusion. Vision Res. 35, 389–412.
Leibowitz, H.W., Shupert, C.L., 1985. Spatial orientation mechanisms and their implications for falls. Clin. Geriatr. Med. 1, 571–580.
Marendaz, C., 1989. Selection of reference frames and the 'vicariance' of perceptual systems. Perception 18, 739–751.
Mellet, E., Briscogne, S., Tzourio-Mazoyer, N., Ghaem, O., Petit, L., Zago, L., et al., 2000. Neural correlates of topographic mental exploration: the impact of route vs. survey perspective learning. Neuroimage 12, 588–600.
Merfeld, D.M., Young, L.R., Oman, C.M., Shelhamer, M.J., 1993. A multidimensional model of the effect of gravity on the spatial orientation of the monkey. J. Vestib. Res. 3, 141–161.
Mergner, T., Schweigart, G., Muller, M., Hlavacka, F., Becker, W., 2000. Visual contributions to human self-motion perception during horizontal body rotation. Arch. Ital. Biol. 138, 139–166.
Ohlmann, T., Marendaz, C., 1991. Various processes involved in selection/control of frames of reference and spatial aspects of field dependence–independence. In: Wapner, S., Demick, J. (Eds.), Bio-Psycho-Social Factors in Cognitive Style. Lawrence Erlbaum, Hillsdale, NJ, pp. 105–129.
Oie, K.S., Kiemel, T., Jeka, J.J., 2001. Human multisensory fusion of vision and touch: detecting non-linearity with small changes in the sensory environment. Neurosci. Lett. 315, 113–116.
Oie, K.S., Kiemel, T., Jeka, J.J., 2002. Multisensory fusion: simultaneous re-weighting of vision and touch for the control of human posture. Brain Res. Cognit. Brain Res. 14, 164–176.
Oman, C.M., 1982. A heuristic mathematical model for the dynamics of sensory conflict and motion sickness. Acta Otolaryngol. Suppl. 392, 1–44.
Paillard, J., 1999. Motor determinants of a unified world perception. In: Aschersleben, G., Bachmann, T., Musseler, J. (Eds.), Cognitive Contributions to the Perception of Spatial and Temporal Events. Elsevier Science B.V., pp. 95–111.
Papaxanthis, C., Schieppati, M., Gentili, R., Pozzo, T., 2002. Imagined and actual arm movements have similar durations when performed under different conditions of direction and mass. Exp. Brain Res. 143, 447–452.
van der Kooij, H., Jacobs, R., Koopman, B., Grootenboer, H., 1999. A multisensory integration model of human stance control. Biol. Cybern. 80, 299–308.
Viaud-Delmon, I., Ivanenko, Y.P., Berthoz, A., Jouvent, R., 1998. Sex, lies and virtual reality. Nat. Neurosci. 1, 15–16.
Viaud-Delmon, I., Ivanenko, Y.P., Grasso, R., Israel, I., 1999. Non-specific directional adaptation to asymmetrical visual–vestibular stimulation. Brain Res. Cognit. Brain Res. 7, 507–510.
Weber, K.D., Fletcher, W.A., Gordon, C.R., Melvill Jones, G., Block, E.W., 1998. Motor learning in the 'podokinetic' system and its role in spatial orientation during locomotion. Exp. Brain Res. 120, 377–385.
Zacharias, G.L., Young, L.R., 1981. Influence of combined visual and vestibular cues on human perception and control of horizontal rotation. Exp. Brain Res. 41, 159–171.
Zupan, L.H., Merfeld, D.M., Darlot, C., 2002. Using sensory weighting to model the influence of canal, otolith and visual cues on spatial orientation and eye movements. Biol. Cybern. 86, 209–230.
