SPECIAL ISSUE: ORIGINAL ARTICLE

TOOL-USE: CAPTURING MULTISENSORY SPATIAL ATTENTION OR EXTENDING MULTISENSORY PERIPERSONAL SPACE?

Nicholas P. Holmes, Daniel Sanabria, Gemma A. Calvert and Charles Spence
(Department of Experimental Psychology, Oxford University, Oxford, UK; Department of Psychology, Bath University, Bath, UK)

ABSTRACT

The active and skilful use of tools has been claimed to lead to the “extension” of the visual receptive fields of single neurons representing peripersonal space – the visual space immediately surrounding one’s body parts. While this hypothesis provides an attractive and potentially powerful explanation for one neural basis of tool-use behaviours in human and nonhuman primates, a number of competing hypotheses for the reported behavioural effects of tool-use have not yet been subjected to empirical test. Here, we report five behavioural experiments in healthy human participants (n = 120) examining the effects of tool-use on visual-tactile interactions in peripersonal space. Specifically, we address the possibility that the use of only a single tool, which is typical of many neuropsychological studies of tool-use, induces a spatial allocation of attention towards the side where the tool is held. Participants’ tactile discrimination responses were more strongly affected by visual stimuli presented on the right side when they held a single tool on the right, compared to visual stimuli presented on the left. When two tools were held and/or manipulated, one in each hand, this spatial effect disappeared. Our results are incompatible with the hypothesis that tool-use extends peripersonal space, and suggest instead that tool-use results in an automatic multisensory shift of spatial attention to the side of space where the tip of the tool is actively held. These results have implications for many of the cognitive neuroscientific studies of tool-use published to date.

Key words: tool-use, vision, touch, multisensory, crossmodal, peripersonal space

INTRODUCTION

When humans and animals use a tool, the tool acts as a functional extension of the body: tool-use enables one to feel, reach, and manipulate objects that may otherwise be out of reach. Since tools are not literally a part of the body, however, and do not contain somatosensory receptors, the brain of the tool-user must compute how tactile sensations felt in the hand that holds the tool might relate to objects seen or felt at the end of the tool (Farnè et al., 2005). There has been a recent resurgence of interest in this issue, with many published studies concerning the integration of visual and tactile stimuli during tool-use both in macaque monkeys, and in brain-damaged and neurologically normal human participants. These studies have shown that active tool-use increases the salience or effectiveness of visual stimuli presented at the tip(s) of the hand-held tool(s). These visual enhancements have been demonstrated either directly, on judgments concerning or actions made towards visual stimuli or objects, or indirectly, as inferred from the effects of visual stimuli on the detection or discrimination of simultaneous tactile stimuli. Results along these lines have been obtained in a variety of cognitive neuroscientific disciplines, from neuropsychological studies of spatial neglect (e.g., Ackroyd et al., 2002; Berti and Frassinetti, 2000; Forti and Humphreys, 2004; Humphreys et al., 2004; Pegna et al., 2001; Schendel and Robertson, 2004) and crossmodal extinction (Farnè and Làdavas, 2000; Farnè et al., 2005; Maravita et al., 2001, 2002a), to behavioural studies conducted in neurologically normal participants (Holmes et al., 2004; Maravita et al., 2002b).

Cortex, (2007) 43, 469-489

To date, the majority of these data have been interpreted in terms of the hypothesis that active tool-use extends the boundary of multisensory peripersonal space, the visual space immediately surrounding the body or parts of the body, as initially suggested by Iriki et al. (1996b) (though see Forti and Humphreys, 2004; Holmes and Spence, 2004; Holmes et al., 2004; Humphreys et al., 2004, for alternative interpretations). While the evidence from these very different experimental approaches seems to be converging on a single explanation for the effects of tool-use on peripersonal space, one needs to be cautious in drawing direct comparisons between the results of single cell recordings in macaque monkeys on the one hand, and behavioural findings in human participants on the other. For example, peripersonal space is not a unitary construct, and its definition varies from study to study, depending primarily on the methods used to measure it. The results of single-unit recording studies of macaque monkeys have led to a definition of peripersonal space in terms of the spatial limit of the visual receptive fields of individual neurons that code visual space in body-part centred coordinates (e.g., Graziano et al., 1997). The best examples of such


neurons have been recorded in the ventral premotor cortex, while other brain regions contain neurons with similar response properties, which together probably form a functionally interconnected circuit (e.g., involving the putamen, ventral intraparietal cortex, and premotor cortex; Graziano and Gross, 1995). In the neuropsychological literature, peripersonal space is often defined with respect to changes in the deficits manifested by patients as the distance of visual stimuli, measured either from the patients’ intact or impaired hand, increases. Some of these patients show deficits in perceiving stimuli or performing actions in near contralesional space (e.g., no more than 10 or 20 cm from their body), but not in far space beyond this limit (Berti and Frassinetti, 2000; Farnè and Làdavas, 2000; Farnè et al., 2005; Forti and Humphreys, 2004; Maravita et al., 2001; Halligan and Marshall, 1991), while others show the opposite dissociation of distance-related spatial impairments (Ackroyd et al., 2002; Cowey et al., 1994; Keller et al., 2005), and still others show no differential effects of stimulus distance (e.g., Butler et al., 2004). In neurologically intact humans, peripersonal space has been defined primarily on the basis of, or inspired by, the monkey and neuropsychological data (e.g., Previc, 1998). However, several behavioural studies have now examined how the strength of multisensory (visual-tactile) interactions varies as a function of the distance between visual stimuli and one of the participant’s hands (Holmes et al., 2004; Spence et al., 2004a, 2004b; Spence and Walton, 2005). Tasks such as the crossmodal congruency task may provide a useful behavioural index of multisensory integration in peripersonal space (e.g., Spence et al., 2004b).
Given the methodological and theoretical differences between cognitive neuroscience disciplines, interpreting the results of human psychophysical or neuropsychological studies in terms of the plasticity of multisensory spatial receptive fields of individual cortical neurons is a difficult matter. Without performing single- or multi-unit recording studies at the same time as behavioural tasks, the direct link between neurons with varying visual-spatial receptive fields and crossmodal extinction or crossmodal congruency effects will remain hypothetical at best, and at worst impossible to prove (see also Maravita et al., 2003, on this point). Despite this methodological gap, most neuropsychological and psychological researchers in this area still prefer to interpret their results in terms of the neurophysiologically-defined hand-centred peripersonal space (i.e., with direct reference to the spatial limits of hand-centred visual receptive fields as recorded in anaesthetized or awake behaving monkeys). While this explanation is certainly an attractive and potentially powerful one, we believe there are other possible explanations for at least some of the tool-use dependent modulations of

visual-tactile interactions reported to date (see also Humphreys et al., 2004).

In the tool-use studies published to date, animal subjects, human neuropsychological patients, or healthy human participants have used a variety of tool objects to perform complex behavioural tasks, before and/or after additional behavioural measurements of visual-tactile interactions (i.e., detecting or discriminating visual and/or tactile stimuli). While some of these studies required participants to actively manipulate two tools (e.g., Holmes et al., 2004; Maravita et al., 2001, 2002b; Yamamoto and Kitazawa, 2001; Yamamoto et al., 2005), the majority of studies have required the use of only a single tool, usually held in the right hand, but also, in several neuropsychological patients, in their left hand (Maravita et al., 2002a; Schendel and Robertson, 2004). Most of these studies have required participants to make some form of spatial discrimination of brief visual and/or tactile stimuli, often concerning on which side (or sides), or at which elevation (upper vs. lower), the stimulus or stimuli were presented. In addition, many of these studies have been conducted in neuropsychological patients who show a variety of spatial deficits prior to tool-use. Indeed, it is precisely because of these spatial deficits that the neuropsychological studies of the spatial effects of tool-use have been made possible. Many neuropsychological studies of tool-use have thus required patients to use only one tool. It is not yet known, however, whether using one or two tools has any bearing on the performance of neurologically normal participants in similar behavioural situations.
This issue is particularly important with regard to spatial modulations of visual-tactile interactions across the left-right midline; for example, in studies of crossmodal extinction and temporal order judgments, since these tasks often require participants to make explicit comparisons between stimuli presented on both sides of space (see also Spence et al., 2001b). A single tool held on a single side of space during behavioural testing might attract one’s attention to the side of the tool, enhancing the salience or effectiveness of visual stimuli presented there. Furthermore, if this tool-dependent spatial attention is affected, increased, or brought about by active tool-use, it then becomes impossible, using behavioural techniques, to separate spatial sensory-motor or attentional cuing effects from the effects of the representation of peripersonal space. In short, using a single tool instead of two tools may induce confounds that render the effects of the representation of multisensory peripersonal space unclear. For clarity, “peripersonal space” refers throughout to the spatial limits of hand-centred visual receptive fields of parietal and premotor neurons with additional somatosensory receptive fields on a given body part (i.e., the hand). This


definition of peripersonal space (or ‘peri-hand’ space; Farnè et al., 2005) is used by most cognitive neuroscience researchers interested in tool-use, and is outlined explicitly by Maravita and Iriki (2004, Figures 1c and 2h; and see Holmes et al., 2004, Figure 1; though see Figure 1g of Maravita and Iriki, 2004, for an alternative definition of ‘body-centred’ peripersonal space).

In the five experiments reported here, we studied the behavioural responses of 120 neurologically normal human participants in a novel visual-tactile behavioural discrimination task. The participants used a single tool or two tools to perceive the vibrotactile stimuli. In the first three experiments, participants held only one tool with their right hand throughout the experiments. Visual and vibrotactile stimuli were presented at the distal end of the tool, from either the left or right side of space, with respect to the participants’ body midline. The tip of the tool was held either uncrossed on the right side, to contact the right vibrator, or crossed over the midline to contact the left vibrator. In the fourth and fifth experiments, participants held two tools in either an uncrossed or a crossed posture on the vibrators placed on either side. In all five experiments, the primary task remained the same, and at all times participants kept their left and right hands on the left and right sides respectively.

GENERAL METHODS

Participants

One hundred and twenty healthy young participants (24 per experiment) were recruited by advertisement from the population of staff and students of, and visitors to, the University of Oxford. All participants reported having normal or corrected-to-normal vision, and no known history of neurological impairment. The experiments were approved by the local research ethics committee. The participants gave their informed consent prior to the experiment, and were fully debriefed after the experiment was completed.
Participants were rewarded for their participation with gift vouchers to the value of five pounds (UK Sterling). Each experiment took around 30 minutes to complete.

Apparatus and Stimuli

The experiments were performed in a dimly illuminated booth. The stimulus presentation and data collection were controlled by a standard personal computer, running bespoke software written in the Turbo Pascal programming language. A custom-built parallel-port hardware interface device was used to control the visual and vibrotactile stimuli. Vibrotactile stimuli were presented by means of two Oticon bone-conducting


vibrators (Somerset, New Jersey, USA; p/n: BC461-1 100), driven by a broadband white-noise signal. The vibrotactile stimulators were positioned 30 cm apart, 15 cm either side of the middle of a table, 70 cm from the front edge of the table (from the participant’s perspective), and raised 6 cm off the table surface on two 1 cm diameter rigid plastic cylinders. The visual stimulators were two 10 mm diameter ultra bright yellow-orange LEDs, and were positioned 1 cm directly above the vibrotactile stimulators on each side. A red LED (5 mm diameter) was positioned centrally between the two LEDs (approximately 7 cm above the table surface), and served as the visual fixation point. Two 10 mm diameter red LEDs (the ‘warning LEDs’ used in Experiment 3) were positioned 20 cm to the left and right side of the fixation LED, 25 cm above the table surface. Participants held either one or two wooden tools (cylindrical wooden dowels, 8 mm in diameter, and 75 cm in length).

Design

All experiments followed a within-participants repeated measures design, with the independent variables of target stimulus (single vs. double vibration), distractor stimulus (single vs. double flash), target side (left vs. right), distractor side (left vs. right), tool posture (uncrossed vs. crossed), and target side expectancy (left vs. right, Experiment 5 only). These conditions were fully factorised and repeated twelve (Experiments 1, 2, and 4), eight (Experiment 3), or six (Experiment 5) times respectively during each experiment (for details, see the individual experimental methods sections below, and see Figure 1). The participants performed two short practice blocks (32 trials each) before the start of the experiment, and four main blocks of trials (64 or 96 trials per block – see below).
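As a concrete illustration, the fully factorised trial structure described above can be sketched as follows. This is a hypothetical reconstruction for exposition only: the original software was written in Turbo Pascal, and all names and data structures here are our own, not the authors’ code.

```python
import itertools
import random

# Illustrative trial-level factors for Experiments 1, 2, and 4
# (12 repetitions per condition; tool posture was varied between
# blocks of trials rather than from trial to trial).
TARGET_TYPES = ["single", "double"]       # vibrotactile target: 1 vs. 2 pulses
DISTRACTOR_TYPES = ["single", "double"]   # visual distractor: 1 vs. 2 flashes
DISTRACTOR_SIDES = ["left", "right"]
POSTURES = ["uncrossed", "crossed"]

def build_trial_list(repetitions=12):
    """Fully cross the trial-level factors, repeat each condition the
    required number of times, and shuffle within each posture block."""
    trials = []
    for posture in POSTURES:
        block = [
            {"posture": posture, "target": t,
             "distractor": d, "distractor_side": s}
            for t, d, s in itertools.product(
                TARGET_TYPES, DISTRACTOR_TYPES, DISTRACTOR_SIDES)
        ] * repetitions
        random.shuffle(block)  # pseudorandom trial order within a block
        trials.extend(block)
    return trials

# 2 targets x 2 distractors x 2 sides x 12 repetitions = 96 trials per posture
trials = build_trial_list()
```

In Experiments 1 to 3 the target side was fixed by the tool posture within a block, so it is not listed here as a separate trial-level factor; in Experiments 4 and 5 target side would be added as a fourth randomised factor.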
Procedure

The participants sat at the table, with their chest approximately 20 cm from the near edge of the table, and placed their hands out in front of them, 15 cm either side of the midline, 10 cm in front of the near edge of the table, resting on two foam support pads which raised their hands to approximately 5 cm above the table surface. In this position, participants could hold the tool(s) in the uncrossed position, contacting the target vibrator on the right (and left) side(s) using 60 cm of the total length of the tool. In order to perceive the vibrations, it was necessary for the participants to maintain the tool in position on the vibrator actively, applying pressure downwards at the tip of the tool. When the tool(s) was (were) in the crossed posture, the participants held the tools approximately 67 cm from their distal end. We assumed that it would be more important to control


Fig. 1 – Apparatus and pertinent methodological details for Experiments 1 to 3. Open white squares – stimulators (vibrotactile targets), large filled grey circles – LEDs (visual distractors), small filled black circles – central fixation LED. Filled dark lines – active tools, filled grey lines – inactive/less active tools, broken dark lines – gaze direction. Percentage values next to target and distractor stimulators indicate the proportion of stimuli presented to each side in each block of trials.

for the relative position of the hands between uncrossed and crossed postures (i.e., to keep the hands in the same position, and to hold the tool slightly further down in order to reach the target on the opposite side) rather than to control for the relative lengths of the tools used in the present experiment (which would only be possible by moving the two hands closer together in the

left-right dimension and/or further from the body towards the targets). It is not known what effects these relatively minor differences in tool length between crossed and uncrossed conditions might have on the present results, or on previous studies in which tools were crossed over the midline (Maravita et al., 2002a, 2002b; Yamamoto and Kitazawa, 2001; Yamamoto et al., 2005).


Fig. 1 – (continued) Apparatus and pertinent methodological details for Experiments 4 to 5. Open white squares – stimulators (vibrotactile targets), large filled grey circles – LEDs (visual distractors), small filled black circles – central fixation LED. Filled dark lines – active tools, filled grey lines – inactive/less active tools, broken dark lines – gaze direction. Percentage values next to target and distractor stimulators indicate the proportion of stimuli presented to each side in each block of trials.

Throughout the experiment, both of the participants’ hands were always in view. The general purposes of the experiment and instructions were described to the participants, who were then given an opportunity to ask questions. The participants were required to discriminate between single (200 msec duration) and double (65 msec on, 70 msec off, then 65 msec on) vibrotactile stimuli presented via the tool to their right hand (Experiments 1-3) or randomly to either the right or left hand (Experiments 4-5), while trying to ignore near-simultaneous single or double visual distractor stimuli (a continuous 200 msec flash or two 65 msec flashes separated by 70 msec respectively, i.e., similar to the durations of the target stimuli) presented immediately above the left or right vibrotactile stimulators. The visual distractor location was pseudorandomised, irrelevant to the participant’s task (vibrotactile discrimination), and non-predictive of either the side of the vibrotactile stimulus (left or right – Experiments 4-5 only) or the type of vibrotactile stimulus (single or double). The participants were instructed to try to ignore the

visual stimuli, to concentrate on the vibrotactile stimuli, and to respond both as quickly and as accurately as possible. The participants responded using two foot pedals, one positioned under the toes, and one under the heel of one of the participant’s feet. The participants lifted their foot off one of the pedals in order to make their response. Half of the participants within each experiment used their left foot to respond and the other half used their right foot. Half of each of these groups responded by lifting their toes off the front pedal in response to a single continuous vibration, and by lifting their heel off the rear pedal in response to a double pulsed vibration. The other half of each subgroup responded in the opposite manner. Finally, half of each of these four subgroups performed the uncrossed-tools condition in the first and third experimental blocks, and the crossed-tools condition in the second and fourth blocks. The other half of the participants performed the experimental blocks in the opposite order. In short, all experimental conditions and stimulus-response mappings were


fully and exhaustively counterbalanced across the 24 participants per experiment. The positions of the two vibrators (left and right) were swapped halfway through each experiment (i.e., after 12 participants had performed the experiment), to control for any minor differences in stimulator output. The participants were instructed to fixate the central LED throughout the experiment and to refrain from making eye movements to the left or right during the vibrotactile discrimination task. A closed-circuit television camera and monitor were installed in front of the participant, oriented towards their face, and the experimenter observed the participants’ eyes periodically to ensure that visual fixation was maintained, giving prompts to fixate centrally when necessary. All of the participants were able to maintain correct central fixation throughout the experiment. The participants wore headphones over which white noise at approximately 80 dB was presented, in order to mask any sounds associated with the presentation of the experimental stimuli. Before the main experimental session, the participants completed two to four practice blocks of 32 trials each. In the first practice block, the vibrotactile stimuli were presented alone in order to familiarise the participants with the target stimuli and the required responses. In the second practice block, the visual stimuli were added, and participants were reminded that they should try to ignore the visual stimuli as much as possible, while keeping their eyes open and fixated on the central fixation spot. Typically, participants achieved around 85-95% correct on their first practice block; however, several of the participants could not feel the stimulus clearly at the initial intensity setting.
In these cases, the intensity of the vibrotactile stimuli was increased in subsequent practice blocks in the absence of visual distractor stimulation until their performance reached at least 75% correct, ensuring that the intensity of the background white noise was sufficient to mask any sounds associated with stimulus presentation. Once this initial criterion was reached, one additional practice block with concurrent visual distractor stimuli was run, and the experiment proper then began. The intensity of the vibrotactile stimulation then remained constant throughout the remainder of the experiment. The majority of participants required only two short practice blocks, while none required more than four to reach the 75% accuracy criterion in the absence of visual stimuli.

A typical trial proceeded as follows: when both of the pedals under the participant’s foot were fully depressed, the central fixation LED was illuminated for a random foreperiod of 750-1000 msec. On every experimental trial, one vibrotactile stimulus (i.e., a single or double pulse) and one visual stimulus (a single or double flash) were presented. The vibrotactile stimulus began 30 msec after the onset of the visual stimulus. This

visual-vibrotactile stimulus onset asynchrony (SOA) was used following previous work suggesting that this SOA leads to enhanced visual-vibrotactile congruency effects as compared to when both stimuli are presented simultaneously (see Spence et al., 2004a). The participants then responded by lifting either their toes or their heel from the foot pedal, the fixation LED was turned off, and the next trial began after a delay of 1500 msec. Following a correct response, the LED was extinguished, and was not illuminated again until the start of the next trial. Following an incorrect response, the fixation LED flashed twice briefly (2 × 250 msec, with an ISI of 250 msec) before the trial ended. Participants were encouraged to use this feedback to try to improve their performance as the experiment progressed. Trials in which one or both foot pedals were released before the stimuli were presented were aborted. Trials in which responses were made before 200 msec (‘anticipation errors’) or after 3000 msec (‘omission errors’) were removed from the data set and not analysed further. Neither aborted trials nor anticipation or omission trials were replaced during the experiment.

Experiment 1: Single Tool

Twenty-four participants (18 female, 22 right-handed, aged 18-36, mean age 22 years) were recruited. Participants held a single tool in their right hand throughout the experiment. In two blocks, the tool was held in an uncrossed posture contacting the right vibrator, and in the other two, the tool was held in a crossed posture contacting the left vibrator. The right hand holding the tool remained in the same position on the right side throughout the experiment. The tip of the tool was crossed over the midline after each block of trials, with the tool posture for the first experimental block (uncrossed or crossed) counterbalanced across participants.
While the tool side, and therefore the vibrotactile target side, was fixed throughout a block of trials, visual distractors occurred equally often and pseudorandomly on either side. Six repetitions of each condition were performed during each block of 96 trials. The total number of trials per participant was 384.

Experiment 2: Single Tool, Plus Inactive Control Tool

Twenty-four new participants were recruited (17 female, 23 right-handed, aged 18-40, mean age 26 years). Experiment 2 was identical to Experiment 1, except that a second tool was placed symmetrically across the midline with respect to the tool held in the participants’ right hand, and served to control for the visual appearance of a tool contacting the opposite, inactive vibrator. In the uncrossed posture, the control tool extended


from next to the participants’ left hand and rested on the left vibrator. In the crossed posture, the control tool was crossed over the midline and rested on the right vibrator. Targets were never presented to the control tool, and the participant never held or used it. For half of the experimental blocks for each participant, the control tool was uppermost (resting on an additional foam pad under the participants’ uppermost hand to ensure the two tools did not touch in the middle), and for the other half of blocks, the control tool was lowermost.

Experiment 3: Single Tool, Active Crossing

Twenty-four new participants were recruited (19 female, 23 right-handed, aged 18-31, mean age 21 years). Experiment 3 was identical to Experiment 1 except that the participants crossed the single tool after every fourth trial of the vibrotactile discrimination task. Four repetitions of each condition were performed during each block of trials (240 trials in total). Since the tool-crossing took extra time to complete, the number of trials per block was reduced to 64 to keep the total length of the experiment to 30 minutes per participant. The posture of the tool at the beginning of each block was alternated between blocks, and the posture in the first block was counterbalanced across participants. In order to provide enough time between groups of four trials to cross the tool, one of the two red warning LEDs was illuminated for 4.5 sec after the end of every fourth trial. The LED was lit on the side opposite the current tool position, that is, indicating the tool position for the next four trials. Pilot testing had shown that 4.5 sec allowed sufficient time for the participant to cross or uncross the tool over the midline while maintaining the same hand position, and to reacquire fixation prior to the onset of the next four trials. The participants were asked after the last practice block whether the time required to change tool posture was sufficient.
No participants reported any difficulty in this aspect of the task.

Experiment 4: Two Tools

Twenty-four new participants were recruited (17 female, 23 right-handed, aged 18-36, mean age 24 years). Experiment 4 was identical to Experiment 1 except that participants always held two tools, one in each hand, either both uncrossed or both crossed across the midline. In the crossed-tools posture, an additional foam pad was placed under the uppermost hand to elevate the tool, ensuring that the two tools did not touch where they crossed in the middle. The uppermost hand was alternated between the first and second crossed tools blocks. In this experiment, vibrotactile targets were delivered


randomly, unpredictably, and equiprobably to either the left or the right vibrator.

Experiment 5: Two Tools, Spatial Expectancy

Twenty-four new participants were recruited (16 female, 22 right-handed, aged 18-34 years, mean age 25 years). Experiment 5 was identical to Experiment 4, except that, in each block of trials, one of the two vibrotactile targets was presented twice as often as the other (i.e., 67% for the ‘expected side/vibrator’ vs. 33% for the ‘unexpected side/vibrator’, as compared to 50% to either side in Experiment 4). Half of the participants were instructed to expect twice as many targets from one vibrator compared to the other in each block (i.e., the instructions were in terms of the source of the vibration), while the other half were instructed to expect twice as many targets to one hand compared to the other (i.e., the instructions were in terms of the hand that received the vibration). All experimental factors were fully counterbalanced across participants. Since the tool posture was alternated after each block of trials, four different orders for the expected target side were used and counterbalanced across participants: LLRR, LRRL, RLLR, RRLL, where L = twice as many targets from the left vibrator; and R = twice as many targets from the right vibrator. The instructions concerning target expectancy given to the participant therefore depended on their group (side vs. hand) and whether the next block was an uncrossed or a crossed block.

RESULTS AND DISCUSSION

Trials on which the participants omitted a response or responded before 200 msec were removed from the dataset. The median reaction times for correct responses and the percentage of errors made were recorded for each of eight conditions (visual-tactile congruency × distractor side × tool posture). A congruency effect was calculated by subtracting the mean value for correct congruent trials from the mean value for correct incongruent trials.
The congruency effects (incongruent – congruent values) for RTs and errors for each condition and each participant were entered into a repeated measures analysis of variance (ANOVA) with the independent variables of distractor side and tool posture. A large congruency effect reflected a large influence of the visual stimulus on simultaneous vibrotactile discrimination performance, while a smaller congruency effect reflected a smaller interaction between visual and tactile stimuli. The ANOVA statistics for all five experiments are presented in Table I, and the congruency effects are presented in Figures 2 and 3.
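The congruency-effect computation described above can be sketched as follows. This is an illustrative reconstruction only: the trial-record layout and field names are our assumptions for exposition, not the authors’ actual analysis code.

```python
from statistics import median

def congruency_effects(trials):
    """Return the RT congruency effect (median correct incongruent RT
    minus median correct congruent RT) for each distractor side x tool
    posture cell.  Larger values indicate a stronger influence of the
    visual distractor on vibrotactile discrimination."""
    effects = {}
    for side in ("left", "right"):
        for posture in ("uncrossed", "crossed"):
            cell = [t for t in trials
                    if t["correct"]
                    and t["distractor_side"] == side
                    and t["posture"] == posture]
            # Congruent: target and distractor are both single or both double.
            congruent = [t["rt"] for t in cell
                         if t["target"] == t["distractor"]]
            incongruent = [t["rt"] for t in cell
                           if t["target"] != t["distractor"]]
            effects[(side, posture)] = median(incongruent) - median(congruent)
    return effects
```

Each participant’s four cell values computed this way would then be entered into the repeated measures ANOVA with distractor side and tool posture as factors; an analogous computation on error percentages yields the error congruency effects.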


TABLE I
Results of the analyses of variance for Experiments 1 to 5

                                                Posture          Side*            Posture × Side
Experiment                            Measure   F       p        F       p        F       p
E1: Single tool                       RT        2.32    .14      .85     .37      2.32    .14
                                      Error     .23     .63      6.42    < .05    30.90   < .001
E2: Single tool plus control tool     RT        .80     .38      8.50    < .01    1.05    .32
                                      Error     .16     .70      20.06   < .001   15.26   < .001
E3: Single tool, active crossing      RT        5.00    < .05    .96     .34      .07     .79
                                      Error     .11     .74      .02     .90      1.91    .18
E4: Two tools                         RT        < .01   .98      6.06    < .05    4.20    .05
                                      Error     .12     .73      < .01   1.00     1.27    .27
E5: Two tools, spatial expectancy     RT        < .01   .98      1.49    .23      4.61    < .05
                                      Error     .02     .89      1.94    .18      .02     .87

Note. F and p statistics are shown for all main effects of posture and side, and for the interaction between these variables. Significant effects are highlighted in bold text. *For E5, the label “Side” refers to the variable tool (same tool vs. different tools).

Experiment 1: Single Tool

Overall performance in the straight tool posture (mean of median RT = 772 ± 23 msec; errors = 22.4 ± 2.0%) was not significantly different from performance in the crossed posture [RT = 763 ± 21 msec, F (1, 23) = .93, p = .34; errors = 20.9 ± 2.0%, F (1, 23) = 1.42, p = .25]. When the tool was held uncrossed on the right side, however, visual distractors presented on the right (RT congruency effect = 121 ± 24 msec, error congruency effect = 35.8 ± 3.6%) resulted in larger crossmodal congruency effects than distractors presented on the left side (90 ± 23 msec, 24.8 ± 3.5%). Only the error data showed significant effects of the experimental manipulations (see Table I and Figure 2). When the tool was crossed over to the left side, visual distractors on the left (90 ± 22 msec, 30.6 ± 3.5%) now resulted in slightly larger congruency effects than visual distractors on the right (84 ± 18 msec, 27.3 ± 3.5%). Post-hoc paired t-tests, Bonferroni corrected for multiple comparisons, confirmed that the congruency effects in the error data were significantly stronger for distractors on the same side as the tool compared to the opposite side only when the tool was held straight [t(23) = 5.19, p < .001], and marginally stronger when it was held crossed [t(23) = 2.34, p = .027]. In brief, visual-tactile interactions were strongest for visual stimuli presented at the distal end of the tool held in the uncrossed posture. This difference between the left and right sides was reduced, and partially reversed, when the tool was held crossed over the midline. The results of Experiment 1 could be interpreted in several ways.
First, it could be argued that the multisensory representation of peripersonal space extended along the tool held in the right hand to incorporate the visual space around the vibrator on the right (when the tool was uncrossed), and perhaps on the left (when the tool was crossed), since visual-tactile interactions were stronger when both stimuli fell within the same hand-centred visual receptive fields. Alternatively,

one might conclude that participants were cued or encouraged to pay more attention to one side of space – the side of space where the tool was held, and where it was seen to be positioned, or simply the side from which the target stimuli were presented. Such an allocation of spatial attention might have enhanced the salience of visual stimuli on one side, resulting in stronger visual-tactile interactions for stimuli on that side. To assess whether such a spatial “bias”, introduced by the visible presence of the distal end of a tool on only one side of space, might have played a role in Experiment 1, the experiment was repeated with an additional, control tool in place, contacting the inactive vibrator in each block (see also Maravita et al., 2002a, for a very similar control manipulation). Note that the participants did not hold this tool, vibrotactile targets were not presented to the control tool, and the task and stimuli remained identical to those of Experiment 1. The prediction for Experiment 2 was that if the visual appearance of the tool in Experiment 1 was causing the allocation of attention towards the tool side, then the visual presence of a second, control tool should neutralise that directional allocation (i.e., there would be no significant interaction between posture and side, since the active and control tools would induce equal and opposite directional biases). Alternatively, if the visual appearance of the control tool is not enough to reduce the spatial effects, then there should still be a significant interaction between posture and side, suggesting that actively holding a tool, rather than simply seeing a tool, is responsible for the increased multisensory interactions associated with the tool.
Experiment 2: Single Tool Plus Control Tool

Overall performance in the straight tool posture (RT = 721 ± 29 msec; errors = 19.8 ± 2.1%) was not significantly different from performance in the crossed posture [RT = 726 ± 29 msec, F (1, 23) = .40, p = .53; errors = 21.0 ± 2.5%, F (1, 23) = .84,


p = .37]. When the active tool was held uncrossed on the right side, visual distractors presented on the right (118 ± 28 msec, 32.4 ± 2.8%) resulted in larger congruency effects than distractors presented on the left (80 ± 22 msec, 21.4 ± 3.9%). This effect was significant only for the error data, replicating the finding of Experiment 1 [post-hoc Bonferroni corrected paired t-test, t(23) = 6.56, p < .001]. When the active tool was crossed over to contact the left vibrator, the right visual distractors produced slightly (but not significantly) larger congruency effects than the left visual distractors for the RT data [right 119 ± 23 msec, left 100 ± 26 msec, t(23) = 1.81, p = .08], while the percentage of errors was almost identical for the two sides [right 28.0 ± 4.0%, left 27.6 ± 4.5%; t(23) = .8, p = .43] (see Figure 2). In brief, visual distractors presented at the end of the active tool when held in an uncrossed posture on the right side resulted in larger visual-tactile congruency effects than distractors presented at the end of the control tool, but when the tool was crossed over, these differences between the effects of distractors on the left and right sides were reduced. The results of Experiment 2 show that the visual appearance of a second, control tool was not sufficient to reduce the spatial effects found in Experiment 1. There was a decrease in the magnitude of the same-side versus different-sides difference in Experiment 2 as compared to Experiment 1, both in the RT and in the error data, but the significant interaction between tool posture and distractor side remained for the error data. This leaves open several possible explanations for the results of Experiment 1. First, it could be that the tool has to be held physically in order to induce the spatial effects. To maintain the position of the tool on the vibrator required some effort and concentration.
Occasionally, the active tool would slip off from the vibrator during the experiment, in which case the experiment was stopped, the tool replaced, and the participant continued as soon as he or she was ready. Such an active maintenance of the position of the tool is often taken to constitute “tool-use” (Farnè et al., 2005; Forti and Humphreys, 2004; Maravita and Iriki, 2004; Maravita et al., 2001; Schendel and Robertson, 2004). Tool-use is a difficult motor activity, and should therefore require some additional concentration and/or cognitive resources devoted to the hand holding the tool, the side of space where the tool is held, and/or the location of the target of the tool-use action. One possible explanation for the results of Experiments 1 and 2 is therefore that the spatial differences in visual-tactile interactions arose from the additional sensory and motor requirements of actively maintaining the tool in the correct position on one side of space. An alternative possible explanation for these results is that the active maintenance of the tool position resulted in an extension of visual-tactile


peripersonal space for the hand holding the tool, which enhanced the interaction of visual and tactile stimuli presented to that tool alone. In order to test the hypothesis that increasing the sensory and motor activity associated with the use of the tool leads to an increase in visual-tactile interactions associated with the tool, we performed Experiment 3. In this experiment, rather than simply holding the tool in the same position throughout a block of trials (which may have required sustained attention to one side to maintain tool position), participants actively crossed the far end of the tool over the midline after every four trials of the visual-tactile discrimination task. A similar experimental manipulation has been shown to decrease, and even to reverse, the same-side advantage for visual-tactile stimuli compared to opposite-sides stimuli (Maravita et al., 2002b; though there are important differences between the two studies – see the General Discussion). In Maravita et al.’s (2002b) Experiment 1, in which two tools were actively crossed over the midline after every four trials, the strongest visual-tactile interactions followed the tips of the tools as they crossed the midline, rather than remaining strongest in anatomical coordinates (i.e., both visual and tactile stimuli on the same side of space, with the hands held uncrossed). This result was interpreted as support for the hypothesis that tool-use modifies peripersonal space and the “body schema” (e.g., Maravita and Iriki, 2004). Additionally, in several studies of tool-use, active tool-use has resulted in greater changes in multisensory spatial interactions than those following a passive tool-holding or no-tool condition (Farnè and Làdavas, 2000; Farnè et al., 2005; Maravita et al., 2002a).
If the proposed activity-dependent effects of tool-use are due only to the position of the tip of the tool in space, then there should be a significant interaction between posture and side in Experiment 3, just as in Experiments 1 and 2. If, on the other hand, the spatial enhancements induced by the actively-held tool in Experiments 1 and 2 were due not to tool-use activity per se, but to the maintained concentration on, attention to, or other devotion of sensory and motor resources to one side of space, there should be no significant interaction between posture and side in Experiment 3, where tool position is maintained only temporarily (for approximately 10 sec) on each side of space.

Experiment 3: Single Tool, Active Crossing

Overall performance in the straight tool posture (RT = 787 ± 26 msec; errors = 28.2 ± 2.7%) was not significantly different from performance in the crossed posture [RT = 793 ± 27 msec, F (1, 23) = 1.23, p = .28; errors = 26.3 ± 2.5%, F (1, 23) = 3.0, p = .09]. Visual distractors presented on the right side when the tool was held on the right (96



Fig. 2 – Results of Experiments 1, 2, and 3 – Single tools. Data show the mean (and standard error of the mean) of the RTs and the percentage of errors for the visual-tactile congruency effect across 24 participants per experiment. Filled black bars and filled squares – visual distractors above the left vibrator. Open white bars and open circles – visual distractors above the right vibrator. The two pairs of bars on the left and right of each panel represent congruency scores when the active tool was positioned on the left (i.e., crossed) and on the right (i.e., uncrossed), respectively.

± 17 msec, 36.8 ± 4.3%) resulted in crossmodal congruency effects not significantly different from those following presentation of visual distractors on

the left side (84 ± 25 msec, 34.2 ± 4.8%), which is in contrast to the results of the first two experiments. When the tool was held on the left


vibrator, there were still no significant differences between congruency effects for visual distractors presented on the right (122 ± 21 msec, 33.8 ± 4.7%) and for those on the left (103 ± 22 msec, 35.7 ± 4.1%) (see Figure 2). The results of Experiment 3 are striking: when the frequency and difficulty of the tool-use task increased, with repeated crossings of the tool between left and right sides, the spatial bias associated with the tool decreased. Several possible alternative conclusions might be drawn at this stage: a) Active tool-use does not lead to an extension of visuotactile peripersonal space; b) The effects of the extension of visual-tactile peripersonal space are masked by other effects, perhaps related to the type of task used here, or to underlying directional spatial attentional effects; or c) The primary driving factors in this, and potentially many other studies of tool-use, are the visible presence of a single tool, the requirement to maintain its position actively on one side, the presence of a goal for the tool-use action, and the additional experimental requirement that the tool be used periodically in achieving or acting upon that goal. This rather abstract description of tool-use can be applied to the present example. In Experiment 3, after every fourth trial, participants were required to move the tool from one side to the other. The goal of their upcoming action was always in view, on the opposite side of space to where they currently held the tool. There may thus have been two opposing spatial effects driving the spatial modulation of visual-tactile interactions, which effectively neutralised each other: one spatial effect which was induced by the requirement to maintain the tool position on one side of space, and another spatial effect acting toward the region of space opposite the tool that served as the target of the upcoming tool-use action. 
The introduction of movement within a block of trials may therefore lead to an additional spatial effect that is not associated with the tool per se, but is closely related to the spatial and sensory-motor demands of the task. In the absence of any movement goals, there is another way to manipulate the presence of spatial effects related to the requirement to maintain tool position on one side of space. If participants hold two tools, one in each hand, and receive vibrations randomly to either one in the presence of visual distractors, then two clear opposing predictions can be made. First, if visual-tactile peripersonal space is responsible for enhanced visual-tactile interactions associated with hand-held and actively-maintained tools, then two tools, held actively both uncrossed and crossed over the midline, should still result in a significant interaction between posture and visual distractor side (in external coordinates), since each tool is “connected” to only a single vibrator and a single visual stimulus on one side of space. If the whole portion of space, along the tool and including

479

visual-tactile stimuli presented at its tip, is incorporated within the expanded “peri-hand” area (e.g., Farnè et al., 2005; see also Figures 1c and 2h in Maravita and Iriki, 2004), then visual and vibrotactile stimuli presented on the same tool, regardless of the tool posture, should interact more strongly than stimuli presented on different tools. Second, if the requirement to maintain the position of two tools, one on each side of the midline, induced equal spatial effects in both directions, then those two effects should cancel each other out, or result in larger crossmodal interactions for stimuli presented on the same (anatomical) side of space, as is common in this and other visual-tactile congruency experiments (Holmes et al., 2004, 2006; Spence et al., 2004a, 2004b). If the latter hypothesis is true, then no significant interaction between posture and side will be found when two tools are used on opposite sides of space. In Experiment 4, we therefore gave participants a second active tool. The task and stimuli remained identical to those of Experiments 1, 2, and 3, except that the targets were equally likely to be presented at the left or the right vibrator. Participants held the two tools in place throughout a block of trials, and crossed the tools over after each block.

Experiment 4: Two Tools

Overall performance in the straight tool posture (RT = 761 ± 32 msec; errors = 18.4 ± 2.4%) was not significantly different from performance in the crossed posture [RT = 774 ± 33 msec, F (1, 23) = 1.66, p = .21; errors = 20.1 ± 2.7%, F (1, 23) = 2.0, p = .17]. When participants held two tools in an uncrossed posture and visual distractors were presented on the same tool as the target vibrotactile stimulus (i.e., also on the same side of space), the congruency effects were slightly but not significantly larger (88 ± 14 msec, 24.0 ± 4.1%) than when visual distractors were presented on the opposite tool on the opposite side of space (82 ± 15 msec, 22.3 ± 4.1%).
By contrast, when the two tools were held in the crossed posture, visual distractors on the same tool (64 ± 16 msec, 24.7 ± 4.1%) now resulted in significantly lower congruency effects than distractors on the opposite tool for the RT data [107 ± 17 msec, post-hoc Bonferroni corrected paired t-test, t(23) = 3.29, p < .005], but had no significant effect on the error data (23.0 ± 3.6%) (see Figure 3). When participants held two tools, there was thus no evidence for a strong tool-related spatial effect with uncrossed tools, and there was an effect in the opposite direction for the RT data when the tools were crossed (i.e., stronger congruency effects for distractors presented on the same side of space – on opposite tools). The results of Experiments 1 to 4 demonstrate significant increases in visual-tactile interactions



Fig. 3 – Results of Experiments 4 and 5 – Two tools. Data show the mean (and standard error of the mean) RTs and percentage of errors for the visual-tactile congruency effect across 24 participants per experiment. Filled black bars and filled squares – visual distractors above the target vibrator (i.e., same side). Open white bars and open circles – visual distractors above the non-target vibrator (i.e., opposite sides). For Experiment 5, the data are separated according to whether the target was on the expected side or hand (the first and third pairs of bars) or the unexpected side or hand (second and fourth pairs of bars).

associated with tool-use only when a single tool is used and maintained on one side of space. With two tools, visual-tactile interactions were strongest only on the anatomical side of space when the tools were crossed (i.e., vibrations felt by the right hand were more affected by visual distractors on the right side). This spatial effect induced by a single tool might be purely “endogenous”, that is, generated by the experimental participants themselves in the form of a strategy or spatial expectation. Such an effect might therefore be modifiable by varying the expectancy of target stimuli presented to either side. Alternatively, if the spatial effect is a purely “exogenous” one, it should be unaffected by spatial expectancy (though note that it may not be possible to examine ‘purely’ endogenous effects in isolation from exogenous effects; Spence et al., 2001a). In our final experiment, to determine whether the strength of visual-tactile interactions can be modulated by a spatial expectancy, we manipulated the proportion of

targets presented to either side (or either hand), and instructed participants to expect more stimuli on the “expected” side (or hand) than on the “unexpected” side (or hand). Apart from the relative frequency of target stimuli presented to either side within each block, and the instructions to expect more targets on one side or hand, Experiment 5 was identical to Experiment 4.

Experiment 5: Two Tools, Spatial Expectancy

Overall performance in the straight tools posture (RT = 708 ± 22 msec; errors = 14.0 ± 1.9%) was slightly better (shorter RTs) than, and marginally significantly different from, performance in the crossed tools posture [RT = 726 ± 22 msec, F (1, 23) = 4.2, p = .05; errors = 13.3 ± 1.9%, F (1, 23) = 1.84, p = .19]. Note, however, that the marginal RT effect was accompanied by error rates in the opposite direction, which suggests a


speed-accuracy trade-off. The statistics for the analysis of the effects of posture and side are presented in Table I. In addition, there was no main effect of expectancy on the RT data [F (1, 23) = .71, p = .41], but a marginal effect on the error data [F (1, 23) = 3.50, p = .07]. There was a significant interaction between tool posture and expectancy on the RT data [F (1, 23) = 5.24, p < .05], but not for the error data [F (1, 23) = .12, p = .73]. The three-way interaction between posture, side, and expectancy was not significant either for RTs [F (1, 23) = .02, p = .89] or for errors [F (1, 23) = .05, p = .83] (see Figure 3). Overall, the results of Experiment 5 were similar to those of Experiment 4. The introduction of a spatial expectancy concerning the probable target location or hand did not result in any spatial modulation of the visual-tactile congruency effects with respect to tool posture. The instruction to attend to one side or hand throughout a block of trials led to a slight decrease in congruency effects for the errors on the expected side or hand (15.5 ± 3.1%) compared to the unexpected side or hand (17.8 ± 3.6%); however, the reverse effect was shown in the RT data (expected, 74 ± 14 msec; unexpected, 61 ± 15 msec), suggesting a speed-accuracy trade-off. The absence of significant effects of attention or expectancy on multisensory behavioural tasks is not uncommon (Vroomen et al., 2001; Spence et al., 2004a). The significant interaction between posture and expectancy in Experiment 5 seems to reflect the fact that, in the crossed position, the “unexpected” side (which was analysed in terms of the position of the target vibrator in external coordinates, rather than with respect to the receiving hand) was the same as the anatomical side, on which visual-tactile interactions were largest.
This further suggests that, with two tools, one held in either hand, it is the anatomical reference frame that dominates visual-tactile interactions, rather than a tool-centred or external reference frame.

GENERAL DISCUSSION

The interpretation of the results of the five experiments reported here needs to account for the following findings: 1) Holding a single tool in the right hand with its tip on the right side of space resulted in significantly larger visual-tactile interactions for distal visual distractors on the right side than when the tool was held by the right hand with its tip on the left, or when visual distractors were presented on the left (see Figure 2); 2) This tool-induced spatial effect remained when an additional control tool was seen, but not held; 3) The effect disappeared both when the tip of a single active tool was crossed from left to right repeatedly within a block of trials, and when two tools were held, one on each side, throughout a


block of trials; 4) The instruction to expect more target stimuli on one side (or hand) than the other, and the presentation of more stimuli on that side, did not affect the spatial distribution of visual-tactile interactions. The most plausible interpretation of these results is that the requirement to maintain a single tool in position on one side of space throughout a block of experimental trials leads to increased multisensory interactions for stimuli presented on that side of the midline. The precise cause of this spatial effect is as yet unclear, but two potential sources are an automatic allocation of spatial attention to the side of space where a tool is held and/or the selection of spatial locations as targets for upcoming tool-use actions, since covert attentional orienting may precede overt orienting and eye movements. As a whole, the results of the five experiments reported here are incompatible with the idea that peripersonal space is extended along the longitudinal axis of a tool unless one (or more) of the following possible qualifications is (are) also accepted: a) The spatial, temporal, sensory-motor, and/or behavioural aspects of the present task are not well-suited to examining the effect of “tool-use” on visual-tactile interactions in peripersonal space; b) Increased active usage of the tool leads to a decreased extension of peripersonal space; c) Peripersonal space is only extended with tools larger than a certain size and/or including a functional part (which might itself capture attention), rather than simply being a long tool; d) Peripersonal space can only be extended to the end of one tool held in one hand at any one time; e) The extension of peripersonal space in the present series of experiments was masked by other effects, such as spatial attention and/or the cognitive, sensory, or motor resources required to maintain tool position on one side.
While each or all of these possible caveats may be partially or wholly acceptable, the crucial point to note is that their acceptance, either alone or together, has important consequences for the interpretation of the majority of psychological and neuropsychological studies of tool-use published to date. In the remainder of this section, we address a number of these issues.

What is a Tool?

Providing a universally acceptable definition of tools and tool-use will not be easy; however, it may be useful for the purposes of the present discussion, and for the cognitive neuroscience community studying the effects of tool-use on multisensory and sensorimotor interactions. Beck (1980) defined tool-use (and thereby those objects that one could consider as a ‘tool’) as follows: “… Tool use is the external employment of an unattached environmental object to alter more efficiently the form, position, or condition of another object, another organism, or the user itself



when the user holds or carries the tool during or just prior to use and is responsible for the proper and effective orientation of the tool” (p. 10). Beck (1980) based his definition on a near-exhaustive examination of the literature on tool-use in non-human animals, which he had catalogued over more than fifteen years. Further to the definition quoted above, Beck (1980, pp. 1-2) also excluded from his catalogue all instances of “tool-use” in which the tool-user had been trained by differential reinforcement to perform the tool-use task. If the cognitive neuroscience community adopts Beck’s (1980) definition of tool-use strictly, then many recent and interesting studies should no longer be classified as relevant to “tool-use”. The studies of Iriki and colleagues (Hihara et al., 2003; Iriki et al., 1996b, 2001; Ishibashi et al., 2000, 2002a, 2002b; Obayashi et al., 2000, 2001, 2003), in which macaque monkeys were trained to use tools, would not conform to this definition. Similarly, when sticks are used for line-bisection tasks in studies of patients with neglect (Berti and Frassinetti, 2000; Pegna et al., 2001), the task cannot be considered tool-use, since “the form, position, or condition” of the line to be bisected is not altered by pointing at it with a stick. Those studies in which the tools were held or moved actively, but not used to “alter the form, position or condition” of another object (e.g., Maravita et al., 2001, 2002b), would also be excluded. The same fate would befall those studies in which a stick is held and used to perceive vibrations but not to “alter” another object (the present five experiments; see also Hanley and Goff, 1974; Hoover, 1950; Klatzky et al., 2003; LaMotte, 2000; Sunanto and Nakata, 1998; Vaught et al., 1968).
Finally, in our previous experiments (Holmes et al., 2004), and in Maravita et al.’s (2001, 2002b) previous research, the electrical wires that supplied the experimental stimuli embedded in the tools were attached to a computer, meaning that, strictly speaking, these objects were not “unattached environmental objects”! In summary, then, strict adherence to Beck’s (1980) definition of tool-use would leave only the studies of Farnè and Làdavas (2000), Farnè et al. (2005), and of Yamamoto and colleagues (Yamamoto and Kitazawa, 2001; Yamamoto et al., 2005). The participants in Yamamoto and colleagues’ experiments used the sticks both to perceive tactile stimuli (which would not count as tool-use) and to respond by pushing the buttons on which the sticks were resting (which would count as tool-use; see also Riggio et al., 1986). Beck’s (1980) definition seems to us to be an unnecessarily strict one, which rules out much of the interesting work conducted over the last few years. Since Beck did not attempt to define or classify the varieties of human tool-use behaviour

(but only non-human tool-use), an extension and revision of his definition for present purposes seems justified. In order to encompass the majority of recent tool-use studies, we propose the following revised definition of tool-use: Tool-use is the use of a functionally unattached environmental object to alter more efficiently the form, position, or condition of, to perceive, or to act directly and physically upon another object, another organism, or the user itself, when the user holds or carries the tool during, or just prior to, use and is responsible for the proper and effective orientation of the tool. This revised definition of tool-use would include those instances of tool-use in which the tool is used internally (e.g., a toothbrush), to contact or to perceive another object, or in which the tool is skilfully aimed at or positioned in relation to another object. The term “functionally unattached” is meant to convey the requirement that any attachment (such as wires, cords, etc.) does not interfere with the “hand-held” usage of the tool, nor prevent the “proper and effective orientation of the tool”. We have also included the clause “to act directly and physically upon” to exclude the use of electronic or electro-magnetic tools (laser pointers, computer mice, etc.). While this exclusion may still be in some sense arbitrary, the inclusion of such tools as computer mice and keyboards, torches, lights, and pointers would entail that virtually every behavioural experiment carried out with such devices might be classified as an empirical study of tool-use! We believe such a definition would be too liberal to be useful (see Holmes and Spence, 2006, for further discussion). Our revised definition of tool-use is consistent with the interpretations and conclusions of several key researchers interested in the cognitive neuroscientific approach to tool-use (Farnè et al., 2005, p. 239; Maravita et al., 2004, pp. 82-83).
We are therefore in agreement with these recent arguments that objects as simple as a stick, held and used to perform tasks such as pointing at, or perceiving, stimuli can be classified as tools.

What is “Active Tool-Use”?

A question closely related to the definition of a tool is the definition of “active tool-use”. As described above, researchers often distinguish between the effects of a number of different tools and tool-use actions when interpreting how a given tool in a given tool-use situation has an effect on a given behavioural task. While the effects of holding a stick have been interpreted as evidence of an almost immediate change in the “body schema” or in “peripersonal space” (e.g., see Maravita et al., 2001, p. 584), the effects of holding a small rake on visual-tactile extinction have typically been assessed after five


minutes of active tool-use, typically involving retrieving distant objects with the rake. The rationale for such a tool-use task stems from the work of Iriki et al. (1996b, pp. 2326-2327), who compared the responses of neurons in the anterior bank of the intraparietal sulcus after five minutes of a food-retrieval task using a tool and one free hand with the activity in the same cells following three minutes of food-retrieval using only the hands. The “standard model” of tool-use in the cognitive neuroscience, and particularly in the neuropsychological, literature is therefore a classically simple one: the tool-user undergoes a pre-test, an exposure period, and a post-test. Any differences between the pre-test and the post-test are then interpreted in terms of the effects of tool-use on the test measure. The test measure in neuropsychological studies of tool-use is typically the level of crossmodal extinction, as assessed at different regions of space around the tool-user. Since the standard model of tool-use is a pre-test, exposure, post-test design, in order to rule out the possible confounding effects of order, fatigue, and practice, one must also study the performance of a different group of participants or patients under a pre-test, exposure to a comparable non-tool-use task (matched in terms of difficulty, the spatial location of stimuli, targets, and responses, etc.), post-test design. While such control conditions have been studied in a within-participant manner (e.g., Maravita et al., 2001), they have not been performed using a between-participants comparison or in a group study (though see Keller et al., 2005, for a recent related study). Clearly, controlling for these order effects will be an important future step to rule out such potential confounds. We did not use this standard model of “active” tool-use in our experiments.
Paradoxically, perhaps, many other tool-use studies do not conform to this standard model (Ackroyd et al., 2002; Berti and Frassinetti, 2000; Forti and Humphreys, 2004; Holmes et al., 2004; Maravita et al., 2001, 2002b; Pegna et al., 2001; Schendel and Robertson, 2004; Yamamoto and Kitazawa, 2001; Yamamoto et al., 2005). We were interested primarily in examining just the most basic effects of tools on visual-tactile interactions in peripersonal space. We chose to use simple sticks with no functional parts or extensions (such protuberances might also draw one’s attention toward a specific part of the tool), and to use a very simple form of tool-use (holding a stick) that has been used elsewhere and interpreted as tool-use by several authors. Perhaps this stripping-down of the task has meant that the wooden dowels we used were not really “tools”, or that the task was not really “tool-use”, or that the effects we discovered were not really of relevance to “peripersonal space”? If any (or all) of these is the case then, as discussed at length above, several other important studies and tasks can also not be interpreted as “tool-use”. More importantly,
such a re-classification of our dowels as non-tool objects and our task as non-tool-use would not change the basic implications of our results – a single object held on one side of space attracts multisensory spatial attention to that side. One caveat that we should add is that, in our studies, the tool-use and the visual-tactile testing phases were performed simultaneously and alternately in both the left and right hemispaces. In several studies, the tool-use phase was performed in both hemispaces (Farnè and Làdavas, 2000; Farnè et al., 2005) or in the contralateral hemispace (Maravita et al., 2002a), while the visual-tactile testing phase was performed in the ipsilateral (Farnè and Làdavas, 2000; Farnè et al., 2005) or the contralateral hemispace (Maravita et al., 2002a). Whether any difference between the space tested during tool-use exposure and the space tested during the visual-tactile testing phase has any measurable effect on the extension of peripersonal space, or on the allocation of spatial attention, is as yet unknown.

What does Our Task Measure?

The nature of the multisensory discrimination task used in the present studies is potentially an important issue in interpreting the effects we report. We asked participants to make speeded judgments about the type of vibrotactile stimuli presented to their hands through the tools. Specifically, participants discriminated whether a single or a double stimulus was presented on each trial. At first glance, this task is similar to those used in the crossmodal extinction studies of tool-use, where patients are asked to report whether a single “left”, a single “right”, “both” left and right, or no (“none”) stimuli were presented on any given trial. Such a task requires an explicit spatial comparison between the numbers of visual and/or tactile stimuli presented on either side of the patient’s body. Indeed, such a task may be both spatial (“left” vs. “right”) and non-spatial (“both” vs. “none”) at the same time.
It is possible that the requirement to make such spatial discrimination responses engages spatial mechanisms in the brain, which results in (or allows) the patient’s deficits being manifested – in short, spatial deficits are highlighted by difficult, explicitly spatial tasks (though see also Rapp and Hendel, 2003; Spence et al., 2000b, for the possibility that task difficulty by itself is sufficient to induce such spatial effects). There is some support for the idea that spatial effects emerge only with explicit or implicit spatial tasks, or where the information in the distractor modality is spatially informative (McDonald and Ward, 1999; Spence and Driver, 1994; Spence and McDonald, 2004). However, we have recently performed a version of the present vibrotactile discrimination task (Holmes et al., 2006) in young, healthy participants with their hands held either
uncrossed or crossed over the midline, without using any tools. We found that visual-tactile interactions were significantly stronger when the stimulated hand was next to the active visual distractor than when it was next to the opposite distractor, regardless of the posture of the hands (uncrossed or crossed). While the effects were relatively small (approximately 5% more errors for visual distractors in the same external location than in a different location), the result confirmed that the underlying spatial interactions between visual and vibrotactile stimuli were modulated by the posture of the hands, and by their relative position in space. This finding suggests either the involvement of a hand-centred mechanism, or of an allocentric spatial mechanism, dominating the multisensory interactions reported in the present experiments. The fact that performance on this task requiring non-spatial discrimination responses was significantly modulated by spatial factors goes some way towards arguing against the possibility that only spatial tasks tap into spatial mechanisms in spatially unimpaired normal participants (or, indeed, in spatially impaired neuropsychological patients). The present task is similar to tasks used by our colleagues in recent years (e.g., Driver and Spence, 1998; Spence et al., 1998, 2004b), but there are several important differences. First, as detailed above, our task involved non-spatial judgments of vibrotactile stimuli, a task we chose explicitly to minimise the chances of finding spatial attentional effects. Second, unlike studies of multisensory spatial attentional pre-cuing (e.g., Eimer, 2001; Spence et al., 2000a), in which a non-predictive cuing stimulus is presented between 100 and 300 msec before the target stimulus (stimulus onset asynchrony – SOA), we used a near-simultaneous presentation of the visual distractor and vibrotactile target stimulus (see also the numerous studies which have used the spatial version of this crossmodal congruency task).
SOAs of less than 100 msec are rarely used in spatial attentional cuing studies. Third, unlike such spatial attentional cuing studies, in which the non-informative visual stimulus typically improves tactile target discrimination performance at the cued location, the visual distractor stimuli in crossmodal congruency tasks (both the traditional spatial discrimination version and the present non-spatial discrimination version) worsen performance more for targets presented on the same side as the distractor than for targets presented on the opposite side – i.e., effects opposite in direction to those of multisensory spatial attentional cuing studies. Finally, the non-spatial discrimination task used here (single or double vibrotactile judgments) was orthogonal to the side of presentation of the visual distractor stimuli. The above discussion still leaves unanswered the question of what our novel task actually measures. The multisensory discrimination task
used in the present study is a difficult one to perform well at speed. In the absence of the visual distractors (data not shown), or in the presence of congruent visual distractors, participants performed at about 95% correct throughout the experimental trials and manipulations. In the presence of incongruent visual distractors, however, performance dropped substantially – in some conditions and in some of the participants studied here, to chance levels, but on the whole, to around 60-70% correct. Indeed, the stimuli were designed in pilot testing to produce an overall error rate of about 20% across participants. We attempted to design a behavioural task that was similar in several respects to the tasks used both in crossmodal extinction (where high error rates are observed for double and bilateral simultaneous stimulus presentation), and in some recently reported multisensory illusions (Shams et al., 2000). In summary, we therefore believe that our task is strongly influenced by (and therefore is an index of) multisensory integration processes, but also, to an unknown extent, by factors such as response priming/response competition, selective tactile attention, and multisensory spatial attention. Shore et al. (2006) have recently studied a variety of potential explanations for spatial crossmodal congruency effects at a number of different SOAs between the visual and vibrotactile targets (see also Spence and Walton, 2005), which suggests that, at the visual-tactile SOA that we used, multisensory integration and response priming factors may account for the majority of the congruency effects we report.

Relations with Previous Studies of Tool-Use

Our experimental design and the interpretation of our results have been very strongly influenced by the numerous tool-use studies discussed above. Yet our results are apparently in conflict with many of those very same studies. What might be the reasons for these differences?
Our results lead us to suggest that, regardless of one’s definition of tools and of active tool-use, the interaction of visual and tactile stimuli is stronger when the end of a single object is held on the side ipsilateral to the holding hand throughout a block of trials, compared to when it is held on the contralateral side or actively crossed over the midline within a block of trials. This effect on its own is compatible with several studies which required patients and participants to hold a tool actively on one side (e.g., Maravita et al., 2001). By contrast, another study by the same authors, involving the same crossmodal extinction patient and a single tool, showed that actively holding a tool in the left hand, but with the tip crossed over to the right side of space, had no effect on the rate of crossmodal extinction for visual stimuli presented on the right, and tactile
stimuli on the left (Maravita et al., 2002a). When, however, that patient actively used the tool in his impaired left hand for a period of 10 or 20 minutes, the desired effects were obtained: holding the recently-used tool actively in position, crossed over to the visual stimulus on the right side, decreased the extinction of left-sided tactile stimuli under double bilateral visual-tactile stimulation (Maravita et al., 2002a). This improvement lasted for at least 60 minutes after 20 minutes of tool-use on the first testing day, and for at least 30 minutes after 10 minutes of tool-use on the second testing day. It is intriguing that the effects of actively holding a tool with the right hand in this patient (Maravita et al., 2001) were sufficient to affect crossmodal extinction, but that a very similar task, actively holding the tool in the left hand, was not sufficient to affect crossmodal extinction in the same patient in a later study (Maravita et al., 2002a). The reasons for this are at present unclear. One difference was that the first study on this patient predicted a worsening of crossmodal extinction under the extending peripersonal space hypothesis, while the second study predicted an improvement of crossmodal extinction. One might therefore conclude that simply holding a tool can worsen, but not improve, crossmodal extinction through an extension of peripersonal space. Another difference is the hand in which the patient held the tool – the patient’s sensory-motor deficits were left-sided, which may have affected the use or the effects of the tool. The reasons for the apparent conflict in these results need to be assessed in a greater number of patients holding and/or using tools with their left and/or right hands in a fully counterbalanced manner across patients and groups.
A recently published study of left visual extinction revealed that extinction was reduced when the left-sided stimulus was “connected” to the right side by means of perceptual grouping or other low-level perceptual features (such as a thin, 2D horizontal line) that crossed the midline (Brooks et al., 2005; see also Ward et al., 1994). An alternative explanation for the findings of Maravita et al. (2002a) could therefore be that active tool-use led to greater crossmodal perceptual grouping of the tactile and visual stimuli presented in association with the tool crossing the midline. This is potentially a very interesting line of research that should be explored in future studies. Several other major differences between our results and those of other studies of tool-use concern the type and number of participants tested. We examined the performance of 120 neurologically normal young participants in five experiments, while the majority of tool-use studies have been performed on a limited number of older, brain-damaged patients suffering from neglect and crossmodal extinction. Many of these are single case studies (Ackroyd et al., 2002; Berti and Frassinetti, 2000; Forti and Humphreys, 2004;
Maravita et al., 2001, 2002a; Pegna et al., 2001), but there are also several published group studies of tool-use, in which seven (Farnè and Làdavas, 2000) and eight (Farnè et al., 2005) right-brain-damaged patients with crossmodal extinction were studied. Unfortunately, these latter two reports did not include a control group of patients without crossmodal extinction, or a group of crossmodal extinction patients who did not undergo the tool-use training. In addition, visual and tactile stimuli were always delivered manually by the experimenter, which may introduce extra sources of variance and experimenter-induced effects into the experimental measurements (see Bueti et al., 2004, on this point). Control experiments will be important in the future to establish the specificity of tool-use effects to particular sub-populations of neuropsychological patients, or indeed to particular individual neuropsychological patients, to rule out confounding effects of presentation order, and to control the relative timing and intensity of stimuli delivered to the left and right sides of space (Bueti et al., 2004). Beyond the issue of the number of participants studied, it is not clear what influence other between-participants factors may have on the effects of tool-use reported to date. Clearly, one needs to study a number of different individuals under identical and counterbalanced testing conditions in order to see what is consistent between participants, and what is unique to certain individuals. Other purely behavioural studies of tool-use in neurologically normal humans have been able to test larger numbers of participants (72 in Holmes et al., 2004; 40 in Maravita et al., 2002b; 8 in Yamamoto and Kitazawa, 2001; 11 in Yamamoto et al., 2005). It may therefore be informative to compare results between these larger behavioural studies. Maravita et al.
(2002b) used an active tool-crossing task similar to that used in our Experiment 3. However, there are several factors that may explain why the results of our two studies differ. First, participants in Maravita et al.’s (2002b) study held two tools, one in either hand, and received target vibrations to both hands equiprobably (i.e., similar to our Experiment 4). Second, the task was a spatial discrimination task, in which participants discriminated between upper and lower tactile targets. Third, the number of incorrect responses made in the spatial crossmodal congruency task was smaller than the number in the novel, non-spatial task that we have used here. Fourth, the tools used by Maravita et al. (2002b) were toy “golf-clubs” (very similar to those used later by Holmes et al., 2004), while we used simple wooden dowels. Finally, in Maravita et al.’s (2002b) study, participants alternated the uppermost tool (i.e., in the crossed posture) after every eight trials – this required that one of the tools (the uppermost one) had to be moved out of the crossed
posture first – perhaps this differential (left-right) movement-planning component contributed to the spatial modulations of visual-tactile congruency effects they reported? We are currently testing the effect of tool-use order and predictability using the crossmodal congruency task. These data will be reported in a later manuscript. It is very difficult to pinpoint, in the absence of additional data and experimental manipulations, what the crucial factor(s) accounting for the differences in results may be. Clearly, discovering this crucial factor (or factors) will be very important if we are to understand the diverse effects of tool-use on visual-tactile and sensorimotor interactions that have been reported recently using numerous different approaches. We suspect that the difference may lie in the choice of behavioural task used to assess visual-tactile interactions: while our task resulted in only small but consistent spatial modulations, the spatial crossmodal congruency task typically results in stronger spatial modulations (e.g., Spence et al., 2004b). The extent to which either of these tasks relates directly or indirectly to the neural representation of peripersonal space is as yet unclear. One additional possibility that needs to be considered in future experiments is that, since many different kinds of tool-use tasks have been used to study the same underlying neural representations, it may be difficult to parse out what are the effects of the task in each case, and what are the effects of the neural representation of peripersonal space itself.
The tool-use tasks utilized in the literature to date have included: using a rake to retrieve or sort items of food or plastic objects (Farnè and Làdavas, 2000; Farnè et al., 2005; Iriki et al., 1996b, 2001; Maravita et al., 2002a); using a stick, pencil, or laser-pointer to bisect horizontal lines (Berti and Frassinetti, 2000; Halligan and Marshall, 1991), or to detect objects or other stimuli (Ackroyd et al., 2002; Forti and Humphreys, 2004); actively holding a stick or tennis racquet on one side of space (Maravita et al., 2001; Schendel and Robertson, 2004), or in different regions of space (Forti and Humphreys, 2004); actively crossing and uncrossing two “golf-clubs” (Maravita et al., 2002b); using two golf-clubs to press buttons (Holmes et al., 2004); and using tools to perceive distal vibrotactile stimuli (the experiments reported here, and Klatzky et al., 2003; Yamamoto and Kitazawa, 2001; Yamamoto et al., 2005). The effects of this vast array of different tool-use actions on subsequent sensorimotor or multisensory behavioural responses surely cannot be relevant only to the understanding of the neural representation of peripersonal space? Perhaps the tools are irrelevant? Perhaps the nature of the task and the spatial distribution of stimuli, responses, and other cues determine which regions of space will be behaviourally relevant, and in which regions of space increased multisensory
interactions will be observed? The tools, then, may simply be a means of performing those tasks, and may not be essential for the reported effects to emerge.

The Role of the Parietal Cortex in Tool-Use and Peripersonal Space

The parietal cortex contains multiple heterogeneous cortical regions, sub-regions, and cell populations, which represent sensory stimuli, cognitive processes, behavioural intentions, and motor responses in numerous different reference frames, at different times throughout the evolution of a given sensory-motor task, and in different proportions within a given neuronal population (e.g., Andersen, 1997). It is therefore vital to rule out a number of potential confounding factors when any given neuronal responses are being assessed. For example, in Iriki and colleagues’ (Iriki et al., 1996b, 2001; Obayashi et al., 2000) studies of tool-use, the nature of the visual stimuli, and the spatial distribution of those visual stimuli, need to be considered in relation to the nature and distribution of the neuronal responses. In order to assess the visual response properties (i.e., receptive fields and reference frames) of premotor “peripersonal space” neurons, Graziano et al. (1997) used neutral objects (e.g., ping-pong balls or a square of cardboard), presented robotically along parallel tracks controlled by a computer (i.e., stimulating each portion of the putative receptive field of the neurons equally often). Interestingly, standard visual stimuli such as bars, gratings, and edges, presented on a two-dimensional flat tangent screen, were not effective in exciting these neurons in the premotor cortex (see also Järveläinen et al., 2001; Rizzolatti et al., 1996). This suggests that these premotor neurons representing visual peripersonal space are of a very different type than those cells studied by Iriki et al. (2001) several years later, in which the presentation of a food object (Iriki et al., 2001, p.
165) on a television monitor was sufficient to excite neurons in the anterior/medial bank of the intraparietal sulcus (Brodmann’s areas 2 and 5). The reasons for these differences, and the nature of the information that these two classes of cells represent, are important issues for future research. In contrast to the visual stimuli used by Graziano et al. (1997), in Iriki et al.’s (1996b, p. 2326; 2001, p. 165) studies, the visual stimulus was a small piece of food, presented in the experimenter’s hand, moving towards and away from the putative receptive field in a centripetal-centrifugal fashion. The latter method of stimulus presentation meant that the centre of the putative visual receptive field (i.e., the space around the hand and the tool) was stimulated more frequently than any other part of the field (see also Obayashi et al., 2000, p. 3501). It is therefore difficult to determine whether the responses of the neurons
studied were indeed spatially selective for the space around the hand, since the distribution of stimuli was weighted or biased towards the hand region. Furthermore, it is not clear whether the use of a piece of food as a visual stimulus will help to rule out other motor, motivational, or movement-preparation-related activity, for example (e.g., Graziano et al., 1997; Iriki et al., 1996a; for further discussion, see Holmes and Spence, 2004). In order to be certain that tool-use does indeed have selective effects on the neural representation of visual space surrounding individual body parts (or indeed surrounding the whole body, in a body-centred reference frame), further control studies are required. Such studies should examine the effects of neutral, irrelevant, three-dimensional visual stimuli presented robotically equally often to all regions of space near and far from the hand and tool, and should manipulate the position of the monkey’s hand independently from the position of the visual stimulus. Furthermore, the influence of eye movements, gaze direction, head position, and arm position should be studied and independently controlled to assess in which reference frame the effects of tool-use are strongest. Finally, to assess the effects of tool-use on the proposed extension of peripersonal space in comparison with the effects of similarly difficult goal-directed movement tasks, some spatial measurement of the putative visual receptive fields with respect to the position of the hand needs to be taken and compared statistically between experimental conditions. Without such spatial measurements and statistical comparisons, it is difficult to determine exactly how much “extension” of these putative visual receptive fields has actually occurred.
Finally, as the results of the present study suggest, it may be very difficult to study the hypothesised effects of tool-use on the representation of peripersonal space in the absence of any confounding effects of multisensory spatial attention. For example, Inoue et al. (2001) reported activation of the ipsilateral intraparietal sulcus in their positron emission tomography study of tool-use (i.e., activation in the right hemisphere – opposite to what one might expect for a right-handed tool-use task). Participants in their study manipulated a pair of tongs to move a target from the midline to the left side of the workspace every few seconds (“tool-use”) during a functional brain scanning session. This task was compared with a task in which the target object was moved by hand from the midline to the right side of the workspace every few seconds (“hand-use”). While the required motor activity was carefully matched across these two conditions, thus ruling out any confounding effects of task difficulty, the position of the target and the task-relevant part of the workspace were not matched. The subtraction of tool-use (in the left hemispace) minus hand-use (in the right hemispace) revealed ipsilateral (right hemisphere)
intraparietal activation. This activation, though interpreted by the authors in terms of the “body schema” and the representation of peripersonal space, is also fully consistent with the participants paying more visuospatial attention (either overt and/or covert) to the left hemispace during the tool-use task than during the hand-use task. This latter interpretation is perhaps more parsimonious than the interpretation based on the “body schema” or on “peripersonal space” (see, e.g., Macaluso and Driver, 2001, for very similar results of multisensory spatial attention in the intraparietal sulcus; and Johnson-Frey, 2004, on the general dominance of the left hemisphere in sequential tool-use-related motor activity; see also Rumiati, 2005, for an alternative view). We are currently performing functional magnetic resonance imaging experiments in healthy human participants in order to attempt to distinguish the effects of peripersonal space and spatial attention on visual-tactile interactions in tool-use.

CONCLUSION

In the five experiments presented here, 120 neurologically normal human participants performed one of five different versions of a simple behavioural study of visual-tactile interactions involving long wooden tools. The results showed that holding a single tool on one side of space throughout a block of trials significantly increased the magnitude of visual-tactile interactions on the side of the tool compared to the side opposite the tool. Holding and using two tools, or actively crossing the tool after every four trials, by contrast, reduced this tool-dependent spatial effect.
From the available behavioural and neuropsychological evidence published to date, including the results of the present experiments, we conclude that, in order to provide strong evidence in support of the “extending peripersonal space” hypothesis, future research needs to rule out the possible confounding effects of spatial attention induced by the use of a single tool on one side of space during behavioural testing.

Acknowledgements. Nicholas P. Holmes was supported by a Wellcome Prize Studentship (number 065696/Z/01/A) from The Wellcome Trust. Gemma A. Calvert was supported by a Career Development Award from The Wellcome Trust.

REFERENCES

ACKROYD K, RIDDOCH MJ, HUMPHREYS GW, NIGHTINGALE S and TOWNSEND S. Widening the sphere of influence: Using a tool to extend extrapersonal visual space in a patient with severe neglect. Neurocase, 8: 1-12, 2002.
ANDERSEN RA. Multimodal integration for the representation of space in the posterior parietal cortex. Philosophical Transactions of the Royal Society of London B, 352: 1421-1428, 1997.
BECK BB. Animal Tool Behavior: The Use and Manufacture of Tools by Primates. New York: Garland STPM Press, 1980.
BERTI A and FRASSINETTI F. When far becomes near: Remapping of space by tool use. Journal of Cognitive Neuroscience, 12: 415-420, 2000.
BROOKS JL, WONG Y and ROBERTSON LC. Crossing the midline: Reducing attentional deficits via interhemispheric interactions. Neuropsychologia, 43: 572-582, 2005.
BUETI D, COSTANTINI M, FORSTER B and AGLIOTI SA. Uni- and cross-modal temporal modulation of tactile extinction in right brain damaged patients. Neuropsychologia, 42: 1689-1696, 2004.
BUTLER BC, ESKES GC and VANDORPE RA. Gradients of detection in neglect: Comparison of peripersonal and extrapersonal space. Neuropsychologia, 42: 346-358, 2004.
COWEY A, SMALL M and ELLIS S. Left visuo-spatial neglect can be worse in far than in near space. Neuropsychologia, 32: 1059-1066, 1994.
DRIVER J and SPENCE C. Cross-modal links in spatial attention. Philosophical Transactions of the Royal Society of London B, 353: 1319-1331, 1998.
EIMER M. Crossmodal links in spatial attention between vision, audition, and touch: Evidence from event-related brain potentials. Neuropsychologia, 39: 1292-1303, 2001.
FARNÈ A, IRIKI A and LÀDAVAS E. Shaping multisensory action-space with tools: Evidence from patients with cross-modal extinction. Neuropsychologia, 43: 238-248, 2005.
FARNÈ A and LÀDAVAS E. Dynamic size-change of hand peripersonal space following tool use. Neuroreport, 11: 1645-1649, 2000.
FORTI S and HUMPHREYS GW. Visuomotor cuing through tool use in unilateral visual neglect. Journal of General Psychology, 131: 379-410, 2004.
GRAZIANO MS and GROSS CG. The representation of extrapersonal space: A possible role for bimodal, visual-tactile neurons. In Gazzaniga MS (Ed), The Cognitive Neurosciences. Cambridge, MA: MIT Press, 1995.
GRAZIANO MS, HU XT and GROSS CG. Visuospatial properties of ventral premotor cortex. Journal of Neurophysiology, 77: 2268-2292, 1997.
HALLIGAN PW and MARSHALL JC. Left neglect for near but not far space in man. Nature, 350: 498-500, 1991.
HANLEY C and GOFF DP. Size constancy in extended haptic space. Perception and Psychophysics, 15: 97-100, 1974.
HIHARA S, OBAYASHI S, TANAKA M and IRIKI A. Rapid learning of sequential tool use by macaque monkeys. Physiology and Behavior, 78: 427-434, 2003.
HOLMES NP, CALVERT GA and SPENCE C. Extending or projecting peripersonal space with tools? Multisensory interactions highlight only the distal and proximal ends of tools. Neuroscience Letters, 372: 62-67, 2004.
HOLMES NP, SANABRIA D, CALVERT GA and SPENCE C. Multisensory interactions follow the hands across the midline: Evidence from a non-spatial tactile discrimination task. Brain Research, 1077: 108-115, 2006.
HOLMES NP and SPENCE C. The body schema and the representation(s) of peripersonal space. Cognitive Processing, 5: 193-200, 2004.
HOLMES NP and SPENCE C. Beyond the body schema: Visual, prosthetic, and technological contributions to bodily perception and awareness. In Knoblich G, Thornton IM, Grosjean M and Shiffrar M (Eds), Human Body Perception from the Inside Out. New York: Oxford University Press, 2006.
HOOVER RE. The cane as a travel aid. In Zahl PA (Ed), Blindness. Princeton: Princeton University Press, 1950.
HUMPHREYS GW, RIDDOCH MJ, FORTI S and ACKROYD K. Action influences spatial perception: Neuropsychological evidence. Visual Cognition, 11: 411-427, 2004.
INOUE K, KAWASHIMA R, SUGIURA M, OGAWA A, SCHORMANN T, ZILLES K and FUKUDA H. Activation in the ipsilateral posterior parietal cortex during tool use: A PET study. NeuroImage, 14: 1469-1475, 2001.
IRIKI A, TANAKA M and IWAMURA Y. Attention-induced neuronal activity in the monkey somatosensory cortex revealed by pupillometrics. Neuroscience Research, 25: 173-181, 1996a.
IRIKI A, TANAKA M and IWAMURA Y. Coding of modified body schema during tool use by macaque postcentral neurones. Neuroreport, 7: 2325-2330, 1996b.
IRIKI A, TANAKA M, OBAYASHI S and IWAMURA Y. Self-images in the video monitor coded by monkey intraparietal neurons. Neuroscience Research, 40: 163-173, 2001.
ISHIBASHI H, HIHARA S and IRIKI A. Acquisition and development of monkey tool-use: Behavioral and kinematic analyses. Canadian Journal of Physiology and Pharmacology, 78: 958-966, 2000.
ISHIBASHI H, HIHARA S, TAKAHASHI M, HEIKE T, YOKOTA T and IRIKI A. Tool-use learning induces BDNF expression in a selective portion of monkey anterior parietal cortex. Molecular Brain Research, 102: 110-112, 2002a.
ISHIBASHI H, HIHARA S, TAKAHASHI M, HEIKE T, YOKOTA T and IRIKI A. Tool-use learning selectively induces expression of brain-derived neurotrophic factor, its receptor trkB, and neurotrophin 3 in the intraparietal multisensory cortex of monkeys. Cognitive Brain Research, 14: 3-9, 2002b.
JÄRVELÄINEN J, SCHÜRMANN M, AVIKAINEN S and HARI R. Stronger reactivity of the human primary motor cortex during observation of live rather than video motor acts. Neuroreport, 12: 3493-3495, 2001.
JOHNSON-FREY SH. The neural bases of complex tool use in humans. Trends in Cognitive Sciences, 8: 71-78, 2004.
KELLER I, SCHINDLER I, KERKHOFF G, VON ROSEN F and GOLZ D. Visuospatial neglect in near and far space: Dissociation between line bisection and letter cancellation. Neuropsychologia, 43: 72-73, 2005.
KLATZKY RL, LEDERMAN SJ, HAMILTON C, GRINDLEY M and SWENDSEN RH. Feeling textures through a probe: Effects of probe and surface geometry and exploratory factors. Perception and Psychophysics, 65: 613-631, 2003.
LAMOTTE RH. Softness discrimination with a tool. Journal of Neurophysiology, 83: 1777-1786, 2000.
MACALUSO E and DRIVER J. Spatial attention and crossmodal interactions between vision and touch. Neuropsychologia, 39: 1304-1316, 2001.
MARAVITA A, CLARKE K, HUSAIN M and DRIVER J. Active tool-use with contralesional hand can reduce crossmodal extinction of touch on that hand. Neurocase, 8: 411-416, 2002a.
MARAVITA A, HUSAIN M, CLARKE K and DRIVER J. Reaching with a tool extends visual-tactile interactions into far space: Evidence from cross-modal extinction. Neuropsychologia, 39: 580-585, 2001.
MARAVITA A and IRIKI A. Tools for the body (schema). Trends in Cognitive Sciences, 8: 79-86, 2004.
MARAVITA A, SPENCE C and DRIVER J. Multisensory integration and the body schema: Close to hand and within reach. Current Biology, 13: 531-539, 2003.
MARAVITA A, SPENCE C, KENNETT S and DRIVER J. Tool-use changes multimodal spatial interactions between vision and touch in normal humans. Cognition, 83: 25-34, 2002b.
MCDONALD JJ and WARD LM. Spatial relevance determines facilitatory and inhibitory effects of auditory covert spatial orienting. Journal of Experimental Psychology: Human Perception and Performance, 25: 1234-1252, 1999.
OBAYASHI S, SUHARA T, KAWABE K, OKAUCHI T, MAEDA J, AKINE Y, ONOE H and IRIKI A. Functional brain mapping of monkey tool use. NeuroImage, 14: 853-861, 2001.
OBAYASHI S, SUHARA T, KAWABE K, OKAUCHI T, MAEDA J, NAGAI Y and IRIKI A. Fronto-parieto-cerebellar interaction associated with intermanual transfer of monkey tool-use learning. Neuroscience Letters, 339: 123-126, 2003.
OBAYASHI S, TANAKA M and IRIKI A. Subjective image of invisible hand coded by monkey intraparietal neurons. Neuroreport, 11: 3499-3505, 2000.
PEGNA AJ, PETIT L, CALDARA-SCHNETZER A-S, KHATEB A, ANNONI J-M, SZTAJZEL R and LANDIS T. So near yet so far: Neglect in far or near space depends on tool use. Annals of Neurology, 50: 820-822, 2001.
PREVIC FH. The neuropsychology of 3-D space. Psychological Bulletin, 124: 123-164, 1998.
RAPP B and HENDEL SK. Principles of cross-modal competition: Evidence from deficits of attention. Psychonomic Bulletin and Review, 10: 210-219, 2003.
RIGGIO L, GAWRYSZEWSKI LG and UMILTÀ C. What is crossed in crossed-hand effects? Acta Psychologica, 62: 89-100, 1986.
RIZZOLATTI G, FADIGA L, GALLESE V and FOGASSI L. Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3: 131-141, 1996.
RUMIATI RI. Right, left or both? Brain hemispheres and apraxia of naturalistic actions. Trends in Cognitive Sciences, 9: 167-169, 2005.
SCHENDEL K and ROBERTSON LC. Reaching out to see: Arm position can attenuate human visual loss. Journal of Cognitive Neuroscience, 16: 935-943, 2004.
SHAMS L, KAMITANI Y and SHIMOJO S. What you see is what you hear. Nature, 408: 788, 2000.
SHORE DI, BARNES ME and SPENCE C. Temporal aspects of the

Tools, attention, and peripersonal space visuotactile congruency effect. Neuroscience Letters, 392: 96100, 2006. SPENCE C and DRIVER J. Covert spatial orienting in audition: Exogenous and endogenous mechanisms facilitate sound localization. Journal of Experimental Psychology: Human Perception and Performance, 20: 555-574, 1994. SPENCE C and MCDONALD JJ. The crossmodal consequences of the exogenous spatial orienting of attention. In Calvert GA, Spence C and Stein BE (Eds), The Handbook of Multisensory Processes. Cambridge, MA: MIT Press, 2004. SPENCE C, NICHOLLS ME and DRIVER J. The cost of expecting events in the wrong sensory modality. Perception and Psychophysics, 63: 330-336, 2001a. SPENCE C, PAVANI F and DRIVER J. Crossmodal links between vision and touch in covert endogenous spatial attention. Journal of Experimental Psychology: Human Perception and Performance, 26: 1298-1319, 2000a. SPENCE C, PAVANI F and DRIVER J. Spatial constraints on visual-tactile cross-modal distractor congruency effects. Cognitive, Affective, and Behavioral Neuroscience, 4: 148169, 2004a. SPENCE C, PAVANI F, MARAVITA A and HOLMES NP. Multisensory contributions to the 3-D representation of visuotactile peripersonal space in humans: Evidence from the crossmodal congruency task. Journal of Physiology, 98: 171-189, 2004b. SPENCE C, RANSON J and DRIVER J. Cross-modal selective attention: On the difficulty of ignoring sounds at the locus of

(Received 31 January 2005; accepted 17 June 2005)

489

visual attention. Perception and Psychophysics, 62: 410-424, 2000b. SPENCE C, SHORE DI and KLEIN RM. Multisensory prior entry. Journal of Experimental Psychology: General, 130: 799-832, 2001b. SPENCE C and WALTON ME. On the inability to ignore touch when responding to vision in the crossmodal congruency task. Acta Psychologica, 118: 47-70, 2005. SUNANTO J and NAKATA H. Indirect tactual discrimination of heights by blind and blindfolded sighted subjects. Perceptual and Motor Skills, 86: 383-386, 1998. VAUGHT GM, SIMPSON WE and RYDER R. “Feeling” with a stick. Perceptual and Motor Skills, 26: 848, 1968. VROOMEN J, BERTELSON P and DE GELDER B. The ventriloquist effect does not depend on the direction of automatic visual attention. Perception and Psychophysics, 63: 651-659, 2001. WARD R, GOODRICH SJ and DRIVER J. Grouping reduces visual extinction: Neuropsychological evidence for weight linkage in visual selection. Visual Cognition, 1: 101-129, 1994. YAMAMOTO S and KITAZAWA S. Sensation at the tips of invisible tools. Nature Neuroscience, 4: 979-980, 2001. YAMAMOTO S, MOIZUMI S and KITAZAWA S. Referral of tactile sensation to the tips of L-shaped sticks. Journal of Neurophysiology, 93: 2856-2863, 2005. Nicholas P. Holmes, Espace et Action, 16 avenue du Doyen Lepine, Bron, France. e-mail: [email protected]; Web: http://www.neurobiography.info
