Neuron

Article

An Automatic Valuation System in the Human Brain: Evidence from Functional Neuroimaging

Maël Lebreton,1,3,5 Soledad Jorge,1,3,5 Vincent Michel,4 Bertrand Thirion,4 and Mathias Pessiglione1,2,3,*
1Institut du Cerveau et de la Moelle épinière (CR-ICM), INSERM UMR 975, F-75013 Paris, France
2Centre de Neuroimagerie de Recherche (CENIR), Hôpital Pitié-Salpêtrière, F-75013 Paris, France
3Université Pierre et Marie Curie (UPMC - Paris 6), F-75005 Paris, France
4NeuroSpin, CEA, INRIA, F-91191 Gif/Yvette, France
5These authors contributed equally to this work
*Correspondence: [email protected]
DOI 10.1016/j.neuron.2009.09.040

SUMMARY

According to economic theories, preference for one item over others reveals its rank value on a common scale. Previous studies identified brain regions encoding such values. Here we verify that these regions can valuate various categories of objects, and further test whether they still express preferences when attention is diverted to another task. During functional neuroimaging, participants rated either the pleasantness (explicit task) or the age (distractive task) of pictures from different categories (face, house, and painting). After scanning, the same pictures were presented in pairs, and subjects had to choose the one they preferred. We isolated brain regions that reflect both values (pleasantness ratings) and preferences (binary choices). Preferences were encoded whatever the stimulus (face, house, or painting) and task (explicit or distractive). These regions may therefore constitute a brain system that automatically engages in valuating the various components of our environment so as to influence our future choices.

INTRODUCTION

In classical economic theory (Samuelson, 1938; Von Neumann and Morgenstern, 1944), preferences revealed in binary choices are used to rank the value of different options on a common scale: choosing option A over option B means that Value(A) is greater than Value(B). This theory suggests the existence of a brain system that encodes the values underlying revealed preferences (Rangel et al., 2008). A further question is whether the brain engages in value judgment under all circumstances or only when a choice is to be made (Seymour and McClure, 2008). Neural correlates of both value judgments and choices/preferences have been largely investigated, but mostly in separate studies. Interestingly, regions comprising the limbic frontostriatal circuits have been implicated in both valuation and choice tasks (Hare et al., 2009; Kim et al., 2007; Knutson et al., 2007; McClure et al., 2004b; Paulus and Frank, 2003; Plassmann et al., 2008). These regions include the ventral striatum (VS) and its main inputs, namely the ventral prefrontal cortex, amygdala, and hippocampus (Alexander et al., 1986; Brown and Pluck, 2000; Haber, 2003). More precisely, activity in these regions has been found to reflect values in a parametric manner, both when subjects were asked to rate how much they liked visual items and when they were instructed to choose a preferred item from a pair. However, it is unclear whether these value-related activations emerge naturally or are artificially elicited by the instructions given to subjects. In other words, the question is whether the same regions also encode values when value is not relevant for the ongoing task, i.e., when subjects are engaged in a distractive task. The aim of the present functional magnetic resonance imaging (fMRI) study was to identify such an automatic valuation system in the human brain. We investigated value encoding in a noneconomic context, with the hypothesis that the same regions would be involved as in economic situations, for instance when choosing whether or not to purchase a good. A requisite for a brain valuation system (BVS) is to reflect both values, as explicitly reported by subjects, and preferences, as revealed in binary choices. We imposed the constraint that value and preference encoding should be generic, in accordance with the common neural currency hypothesis, which assumes that the same brain system can appraise options associated with different categories of pleasurable experiences (Montague and Berns, 2002). Various categories of visual stimuli, such as food, faces, paintings, sculptures, scenic views, or vacation projects, have been explored to date (Di Dio et al., 2007; Kawabata and Zeki, 2004; O'Doherty et al., 2003; Plassmann et al., 2007; Sharot et al., 2009; Yue et al., 2007), but these categories have not been compared in a single experimental design.
In the present study, we varied the category of the stimuli presented by including pictures of faces, houses, and paintings. The pictures were chosen so as to allow considerable intersubject variability in expressed preferences, such that the underlying valuation system would reflect personal taste and not commonalities. The pictures were displayed one by one in the MRI scanner and subjects had to rate either their pleasantness (explicit task) or their age (distractive task) on an analog scale (Figure 1). After the scanning sessions, the same pictures were presented in pairs, and subjects had to choose the one they preferred. We then searched for those brain regions that encoded both values and preferences, across

Neuron 64, 431–439, November 12, 2009 ©2009 Elsevier Inc. 431

Neuron Automatic Valuation in the Human Brain


Figure 1. Tasks Overview
Age rating (A) and pleasantness rating (B) tasks were performed during scanning, whereas the choice task (C) was conducted after scanning. Successive screenshots displayed during a given trial are illustrated from left to right, with durations in milliseconds. In the rating tasks, subjects had to move a cursor on an analog scale to indicate the age or the pleasantness of a picture, which could be a face, house, or painting. In the choice task, subjects had to state which of two pictures from the same category (face, house, or painting) they preferred.

subjects and regardless of task and category. These brain regions would hence constitute a valuation system that is both generic and automatic.

RESULTS

Behavior
We conducted several analyses to establish the validity of the experimental design. First, to verify that age and pleasantness are orthogonal dimensions of our stimuli, we corroborated that there was no correlation between age and pleasantness ratings (r = 0.049, p = 0.35), ruling out the possibility that age ratings were linked to affective values. Pleasantness ratings were neutral on average (0.56 ± 2.51) but extended over a wide range of the scale (Figure 2A). The interstimuli standard deviation (2.51) was similar to the intersubject one (2.78), indicating that, while each subject used a large range of values to rate the different stimuli, the same stimulus was variably rated by different subjects. The variability was smaller for the age ratings (interstimuli SD: 1.32, intersubject SD: 1.35), probably because age is a more objective dimension. Next, we verified that preferences were not shared between subjects but revealed personal tastes. To this aim, the comparisons were fixed in the choice task: all subjects were presented with the same pairs of pictures. Each picture was presented once in an easy comparison and once in a hard comparison, making a total of 180 easy and 180 hard comparisons for all 360 pictures (120 pictures per category). Easy and hard comparisons were generated from the average picture rankings obtained in a pilot study. In the easy comparisons, the pictures to be compared had adjacent positions in the ranking, whereas in the hard comparisons they were 60 positions apart (half the ranking length for a given category). The median agreement rate (proportion of subjects choosing the same picture)


was 60%–65% for hard comparisons and 70%–75% for easy comparisons (Figure 2B). In other words, among the 20 subjects, 12 or 13 chose one picture and 7 or 8 chose the other in a typical hard comparison (14 or 15 and 5 or 6 for an easy comparison, respectively). Thus, preferences were not trivially driven by the stimuli, as different subjects made different choices. Last, in keeping with the hypothesis that pleasantness ratings and preferences are based on the same underlying values, we checked the relationship between the two measures. Prediction scores (proportion of cases where subjects chose A after rating A as greater than B) were 74.7% ± 1.6% for easy comparisons and 68.9% ± 1.4% for hard comparisons. For the 17 subjects who returned 1 month later to repeat the choice task, prediction scores were unchanged: 74.2% ± 1.5% (easy comparisons) and 67.2% ± 1.9% (hard comparisons). Moreover, preferences were found to be relatively stable, as 76.1% ± 1.5% of easy and 71.8% ± 1.7% of hard comparisons elicited the same choice on the immediate (just after scanning) and the delayed (1 month later) choice tasks. All prediction scores were significantly (p < 0.01) above chance level (50%) whatever the delay and comparison, establishing a link between pleasantness ratings and preferences (Figure 2C). This link was further confirmed by calculating the average difference in rating between the preferred and nonpreferred pictures (Figure 2D), which was also significant for the two delays and difficulty levels (all p < 0.001). As expected, this difference was higher for easy relative to hard comparisons (immediate session: +0.94 ± 0.17, t19 = 5.66; delayed session: +1.00 ± 0.15, t16 = 6.47, both p < 0.001). Thus, through introspection, subjects accessed explicit values that were tightly linked to their preferences, even when assessed after a 1 month delay. The link between pleasantness ratings and preferences could also be inferred from decision times. 
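The two behavioral indices used above, the per-pair agreement rate and the rating-based prediction score, can be computed straightforwardly. Below is a minimal sketch on synthetic data; the array names and shapes (`ratings`, `pairs`, `choices`) are hypothetical stand-ins, not the authors' actual code or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (hypothetical shapes, not the authors' data):
# ratings[s, i] -> pleasantness rating of picture i by subject s
# pairs[p]      -> (i, j) picture indices compared in pair p
# choices[s, p] -> 0 if subject s chose picture i, 1 if picture j
n_subjects, n_pictures, n_pairs = 20, 360, 360
ratings = rng.normal(0.0, 2.5, size=(n_subjects, n_pictures))
pairs = rng.choice(n_pictures, size=(n_pairs, 2), replace=True)
choices = rng.integers(0, 2, size=(n_subjects, n_pairs))

def agreement_rate(choices):
    """Proportion of subjects making the majority choice, per pair."""
    frac_j = choices.mean(axis=0)          # share of subjects choosing picture j
    return np.maximum(frac_j, 1 - frac_j)  # majority share, in [0.5, 1]

def prediction_score(ratings, pairs, choices):
    """Proportion of choices where the higher-rated picture was chosen."""
    r_i = ratings[:, pairs[:, 0]]          # (subjects, pairs)
    r_j = ratings[:, pairs[:, 1]]
    predicted = (r_j > r_i).astype(int)    # predict the choice from ratings
    valid = r_i != r_j                     # tied ratings are unpredictable
    return (predicted == choices)[valid].mean()

print(agreement_rate(choices).shape)  # one agreement rate per pair
print(prediction_score(ratings, pairs, choices))
```

With random choices as here, the prediction score hovers around chance; in the study it was significantly above 50% for all conditions.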
An intuitive rule is that decisions take longer if alternatives have closer values

Figure 2. Behavioral Results
(A) Correlation between age and pleasantness ratings. The dots correspond to the 360 pictures. Ratings were averaged across the 20 subjects. The black line indicates the linear regression fit.
(B) Distribution of agreement rates over the 180 hard and 180 easy comparisons. Plots show the number of stimuli pairs for which a specific percentage of subjects expressed the same preference.
(C) Prediction scores (percentage of pairs for which the preferred picture got the higher pleasantness rating), averaged separately for easy and hard comparisons, and for both the immediate and delayed choice task sessions.
(D) Difference in pleasantness ratings (dV) between preferred and nonpreferred pictures (P − NP), averaged separately for easy and hard comparisons, in both the immediate and delayed choice task sessions.
(E) Correlation between response times and z-scored dV in immediate (left) and delayed (right) choice tasks. The dots represent 18 comparisons, ranked according to dV and averaged across subjects.
Error bars represent intersubject SEM. (*), significant difference between conditions in black (p < 0.01, two-tailed paired t test) or from chance in white (p < 0.001, one-tailed paired t test).
(FitzGerald et al., 2009), as seen with conflict effects in perceptual decision-making (Gold and Shadlen, 2007). We reasoned that the difference in pleasantness ratings (dV) between preferred and nonpreferred pictures (P − NP) should predict response times (RTs) in the choice task. To test this hypothesis, z-scored pleasantness ratings were ranked and divided into 10 subsets of ascending dV for each individual. Correlations with RT were calculated over these 10 data points and tested for significance across subjects. The correlation between dV and RT (Figure 2E) was significant for both the immediate and delayed sessions (immediate: R = 0.49 ± 0.09, t19 = 5.73; delayed: R = 0.41 ± 0.09, t16 = 4.57; both p < 0.001). This suggests that the uncertainty of preferences was expressed as both smaller dV and longer RT. Note that this relation did not hold when preferences contradicted pleasantness ratings (for negative dV).

Neuroimaging
A prerequisite for further examination of the BVS was to verify that the different stimuli categories (face, house, and painting) and the different behavioral tasks (pleasantness and age ratings) activate different brain areas. We first contrasted each stimulus category with the two others, at the time of picture viewing. Consistent with previous studies (Grill-Spector and Malach, 2004; Reddy and Kanwisher, 2006), we found significantly different activations in a variety of brain regions (Figure 3): in the occipital cortex (primary and associative visual areas), in the parietal cortex (precuneus and supramarginal gyrus), in the temporal cortex (fusiform, lingual, and parahippocampal gyri), in the amygdala, and in the anterior cingulate cortex (ACC). Thus, the different categories of stimuli matched the functional specialization of different brain regions, mostly in the ventral and dorsal visual streams.

We next contrasted activation patterns for pleasantness versus age ratings, after z-scoring within each subject, task, and category. This contrast was meant to isolate regions that specifically encode subjective values, independent of processes that would be common to age estimation, such as assigning a number to a picture or moving a cursor along a scale. Significant activations (Figure 4A) were seen in a large cluster with main maxima located in the ventromedial prefrontal cortex (VMPFC), orbitofrontal cortex (OFC), ACC, VS, amygdala, hippocampus, posterior cingulate cortex (PCC), and primary visual cortex (V1). Apart from these last two regions, the neural network encoding subjective values appeared to match the limbic frontostriatal circuits as described in anatomical studies (Haber, 2003).

The two dimensions to be rated in the different tasks (age and pleasantness) thus yielded significantly different activations. There was no activation in the reverse contrast (age versus pleasantness ratings), suggesting that the brain has no general system dedicated to age estimation, or at least not for the wide range of ages (from years to centuries) tested here.
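The RT analysis described above — ranking each subject's trials by dV, averaging within 10 subsets, correlating the subset means with RT, and then testing the per-subject correlation coefficients at the group level (a summary-statistics random-effects approach) — can be sketched as follows. All numbers and the linear RT model are illustrative assumptions on synthetic data, not the authors' actual pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-ins: dv[s, p] = rating difference for subject s, pair p;
# rt[s, p] = response time in ms. Larger dV -> faster decisions (assumed model).
n_subjects, n_comparisons, n_bins = 20, 180, 10
dv = rng.normal(1.0, 1.0, size=(n_subjects, n_comparisons))
rt = 2000 - 150 * dv + rng.normal(0, 300, size=dv.shape)

r_per_subject = []
for s in range(n_subjects):
    order = np.argsort(dv[s])             # rank trials by ascending dV
    bins = np.array_split(order, n_bins)  # 10 subsets per subject
    mean_dv = [dv[s, b].mean() for b in bins]
    mean_rt = [rt[s, b].mean() for b in bins]
    r_per_subject.append(stats.pearsonr(mean_dv, mean_rt)[0])

# Random-effects test: is the correlation nonzero on average across subjects?
t, p = stats.ttest_1samp(r_per_subject, 0.0)
print(f"mean r = {np.mean(r_per_subject):.2f}, t = {t:.2f}")
```

Binning before correlating stabilizes the per-subject estimate; testing the coefficients across subjects treats subject as the random effect, as in the paper.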



Figure 3. Statistical Parametric Maps of Stimulus Category
Each category (face, house, painting) was contrasted with the two others at the individual level. Slices were taken at maxima of interest indicated by red pointers on glass brains. Areas shown in gray/black on glass brains and in red/yellow on coronal slices showed significant group level random effect (one-sample t test, p < 0.001, uncorrected). [x y z] coordinates of the maxima refer to the Montreal Neurological Institute space. Color scales on the right indicate t values.
Panel maxima: faces [62, -38, 44]; houses [22, -44, -12]; paintings [10, -92, 14].

Our definition of the BVS included both parametric encoding of subjective values and prediction of preferences in choice tasks. We thus performed a formal conjunction between the above contrast (pleasantness versus age ratings) and a P − NP contrast. Whether a picture was preferred or not was derived from the individual preferences expressed during the postscan choice task that assessed easy comparisons, and was thus unique for each subject. This conjunction restricted the list of activated regions to the VMPFC, VS, hippocampus, PCC, and V1 (Figure 4B). Although V1 activation is an interesting example of


top-down modulation of early sensory areas, we excluded V1 as probably too specific to the visual nature of our stimuli, and hence less likely to be part of a generic valuation system. The BVS that will be further characterized below therefore includes the VMPFC, VS, hippocampus, and PCC. Note that these regions correspond to the clusters activated in the simple contrast between preferred and nonpreferred pictures, which played a limiting role in the conjunction.

The blood oxygen level-dependent (BOLD) signal was deconvolved by fitting a canonical hemodynamic response to every single trial, at the time of picture viewing. The response magnitudes (betas) to all pictures were extracted from spheres centered on the different BVS regions. We analyzed these response magnitudes in relation to pleasantness ratings and preferences. As seen above with RT, correlations between response magnitudes and pleasantness ratings were calculated over 10 data points and tested for significance across subjects (Figure 5). The correlation was highly significant in the VMPFC (R = 0.40 ± 0.06, t19 = 6.23, p < 0.001), VS (R = 0.24 ± 0.05, t19 = 4.89, p < 0.001) and

Figure 4. Statistical Parametric Maps of Values and Preferences
(Top) Correlation with pleasantness ratings. (Bottom) Conjunction between the pleasantness rating contrast (above) and the preference contrast (preferred minus nonpreferred pictures). The bottom map was used to identify regions comprising the brain valuation system (BVS). Slices were taken at maxima of interest indicated by red pointers on glass brains. Areas shown in gray/black on glass brains and in red/yellow on coronal slices showed significant group level random effect (one-sample t test, p < 0.001, uncorrected). [x y z] coordinates of the maxima refer to the Montreal Neurological Institute space. Color scales on the right indicate t values.
Panel maxima: value [4, 10, -8]; value × preference [-8, 8, -4].
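The single-trial deconvolution described in the Results — fitting a canonical hemodynamic response to every picture onset and reading out one beta per trial — can be sketched on synthetic data. The HRF shape, timings, and noise levels below are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

# Illustrative double-gamma-like HRF (peak minus a small late undershoot);
# parameters are conventional choices, not the authors' exact kernel.
TR, n_scans = 2.0, 200
t = np.arange(0, 30, TR)
hrf = (t ** 5) * np.exp(-t) / 120 - 0.1 * (t ** 10) * np.exp(-t) / 3628800
hrf /= hrf.max()

# One regressor per trial: a delta at the onset, convolved with the HRF.
onsets = np.arange(5, 195, 10)  # hypothetical onsets, in scans
X = np.zeros((n_scans, len(onsets)))
for j, onset in enumerate(onsets):
    stick = np.zeros(n_scans)
    stick[onset] = 1.0
    X[:, j] = np.convolve(stick, hrf)[:n_scans]

# Synthetic BOLD signal from known per-trial amplitudes plus noise.
rng = np.random.default_rng(2)
true_betas = rng.normal(1.0, 0.5, size=len(onsets))
y = X @ true_betas + rng.normal(0, 0.1, size=n_scans)

# Least-squares fit recovers one response magnitude (beta) per trial.
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.corrcoef(true_betas, betas)[0, 1])  # recovery check
```

In the study, such per-trial betas, averaged within spheres around each BVS maximum, were the quantities correlated with pleasantness ratings and contrasted between preferred and nonpreferred pictures.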


Figure 5. Value Encoding in the Brain Valuation System
Regression coefficients (betas) were extracted in the different brain valuation regions, located at the intersection of blue lines on the sagittal slices: VMPFC [-10, 44, -8], VS [-8, 8, -4], hippocampus [-18, -36, -10], and PCC [-10, -54, 16]. These coefficients were plotted against z-scored values expressed as pleasantness ratings. The dots represent 18 pictures, ranked in order of ascending values and averaged across subjects. Error bars represent between-subject SEM.

hippocampus (R = 0.28 ± 0.04, t19 = 6.55, p < 0.001), and moderately significant in the PCC (R = 0.14 ± 0.07, t19 = 1.98, p = 0.03). The P − NP contrasts in the easy comparisons were also estimated separately for the different BVS regions (Figure 6A). The contrast was highly significant in all regions (VMPFC: +0.27 ± 0.09, t19 = 2.89, p = 0.005; VS: +0.39 ± 0.10, t19 = 3.86, p < 0.001; hippocampus: +0.30 ± 0.07, t19 = 4.11, p < 0.001; and PCC: +0.19 ± 0.03, t19 = 5.58, p < 0.001). Of course these last results are not surprising, since the BVS regions were selected both to encode subjective values reported as pleasantness ratings and to predict choices in easy comparisons. To get an independent estimation of how well these regions encode preferences, we used the P − NP contrast in the hard comparisons (Figure 6B). Because there was no significant difference in preference encoding between regions, we pooled them to form a single BVS. This BVS was significantly more activated by pictures that were later preferred in the hard comparisons (+0.17 ± 0.04, t19 = 4.75, p < 0.001). We also verified that the same contrast was still significant for preferences expressed 1 month later, for both easy comparisons (+0.20 ± 0.07, t16 = 3.01, p < 0.001) and hard comparisons (+0.10 ± 0.04, t16 = 2.48, p = 0.01).

Figure 6. Preference Encoding in the Brain Valuation System
Regression coefficients (betas) were contrasted between preferred and nonpreferred pictures (P − NP) in the different brain valuation regions, separately for the different experimental conditions.
(A) Comparison between brain regions. VMPFC, ventromedial prefrontal cortex; VS, ventral striatum; H, hippocampus; PCC, posterior cingulate cortex.
(B) Comparison between choice tasks (easy and hard comparisons in both immediate and delayed sessions).
(C) Comparison between shared and personal preferences, separated by median agreement rate.
(D) Comparison between stimulus categories (face, house, and painting).
(E) Comparison between rating tasks (pleasantness/explicit and age/distractive).
Bars represent mean ± intersubject SEM. (*), significant difference between conditions in black (p < 0.05, two-tailed paired t test) or from chance in white (p < 0.05, one-tailed paired t test).

We now turn to testing the hypothesized features of the BVS: personal, generic, and automatic. To assess whether the BVS encodes personal and not shared preferences, the comparisons were split into two subsets below and above the median agreement rate (Figure 6C). Then we tested the P − NP contrast separately for the two subsets. This contrast was significant for both the highly personal (+0.24 ± 0.05, t19 = 5.17, p < 0.001; agreement rate: 58.35% ± 4.98%) and the highly shared (+0.21 ± 0.06, t19 = 3.73, p < 0.001; agreement rate: 81.14% ± 8.24%) preferences, with no difference between personal and shared preferences (p = 0.70). This shows that when subjects disagree, the BVS follows the individual and not the average choices. To assess whether the BVS can generically express preferences across categories, we tested the P − NP contrast separately for faces, houses, and paintings (Figure 6D). The contrast was significant for all categories (faces: +0.31 ± 0.10, t19 = 3.08, p = 0.003; houses: +0.22 ± 0.09, t19 = 2.45, p = 0.01; paintings: +0.32 ± 0.08, t19 = 4.07, p < 0.001). There was no difference between categories (faces versus houses: t19 = 0.70, p = 0.48; faces versus paintings: t19 = 0.08, p = 0.93; houses versus paintings: t19 = 0.79, p = 0.44), indicating that the BVS can encode preferences for various types of objects. To assess the automaticity of the BVS (Figure 6E), we tested the P − NP contrast separately for the explicit task (pleasantness rating) and the distractive task (age rating). The contrast was highly significant for both the explicit task (+0.29 ± 0.08, t19 = 3.74, p < 0.001) and the distractive task (+0.28 ± 0.06, t19 = 4.83, p < 0.001), with no difference between the two (t19 = 0.15, p = 0.88). Thus, BVS activation encodes preferences even when it is recorded during a task that does not require assigning subjective values. We checked whether this effect was driven by subjects who performed the pleasantness rating first, which might have primed the BVS.
There was a nonsignificant trend (t18 = 1.84, p = 0.08, unpaired t test) for a higher effect in these subjects, but crucially, the P − NP contrast was significant in both subgroups (age rating first: +0.18 ± 0.07, t9 = 2.56, p = 0.01; pleasantness rating first: +0.38 ± 0.08, t9 = 4.54, p < 0.001). The BVS was therefore automatically engaged during the distractive task, even in subjects who were not aware that they would later have to rate the stimuli's pleasantness and state their preferences.

DISCUSSION

Our results confirm previous studies that characterized limbic frontostriatal circuits as a valuation system. These limbic circuits were identified first by axon tracing in monkeys (Alexander et al., 1986; Haber, 2003) and then by fiber tracking in humans (Draganski et al., 2008; Lehericy et al., 2004). Different functional partitions of these circuits have been proposed (Brown and Pluck, 2000), with the so-called limbic or affective circuit generally including the amygdala, hippocampus, and ventral prefrontal cortex as the main inputs to the ventral basal ganglia circuit (VS, ventral pallidum, ventral tegmental area, and mediodorsal thalamus). Considerable evidence from both human and nonhuman species supports the idea that the limbic circuit is involved in reward processing (Daw and Doya, 2006; Knutson and Cooper, 2005; McClure et al., 2004c; O'Doherty et al., 2007; Padoa-Schioppa, 2007; Rushworth and Behrens, 2008). Here we found that at least three components of this limbic circuit, the hippocampus, VS, and VMPFC, reflected subjective values assigned to visual items that are not universal rewards such as food or money. This finding is consistent with


a variety of studies showing that seeing something one likes produces activations similar to those elicited by universal rewards (Hare et al., 2009; Kawabata and Zeki, 2004; Kim et al., 2007; Knutson et al., 2007; McClure et al., 2004b; O'Doherty et al., 2003; Paulus and Frank, 2003; Yue et al., 2007). A parsimonious view would attribute to these regions the capacity to valuate all sorts of objects, with universal rewards forming a particular subset that scores high for everybody. Our BVS also included the PCC, which may be less expected, as the PCC has been implicated, together with the hippocampus, in episodic and autobiographic memory (Beckmann et al., 2009; Sugiura et al., 2005). It could be speculated that preferred pictures are associated with the retrieval of personal memories, although PCC activation is in fact frequently reported in preference studies, including those comparing monetary options (Di Dio et al., 2007; FitzGerald et al., 2009; Kable and Glimcher, 2007; McClure et al., 2004a; Paulus and Frank, 2003; Schiller et al., 2009).

In accordance with classical economic views, our BVS reflected both the pleasantness ratings of individual objects and the preferences observed in binary choices. This may not be surprising, as subjects reliably preferred the picture that they had rated more highly. The interesting idea here is that decision-making might be based on values linearly encoded in the BVS. Accordingly, RTs measured during choice performance were inversely proportional to the distance between the two option values, even when measured 1 month later. Our experiment, however, cannot explain how these values are used to make a binary choice, as we did not acquire brain images during the choice task. Additional processes are required to reach a decision, notably the representation of the two option values (not just one), which could be separated either in time or space. The two option values then have to be compared, and the higher one selected.
Several previous studies have tried to dissociate regions involved in decision-making from those involved in encoding values. One attractive possibility is that choice selection involves more dorsal fronto-striatal circuits, including the caudate nucleus (FitzGerald et al., 2009; O’Doherty et al., 2004), but this has not been consistently supported (Chaudhry et al., 2009; Sharot et al., 2009). Further studies are therefore needed to elucidate how values are integrated into decision-making processes. As the same brain regions are activated by basic universal rewards and preferred neutral objects, the question arises of how subjective the BVS is. This question has rarely been addressed explicitly, although most previous studies implicitly assume that the BVS represents personal preferences. Here we formally addressed this question by comparing pictures that had adjacent values on the group level ranking of pleasantness ratings averaged across subjects. Preference in these hard comparisons was encoded in BVS activation, as reliably as in easy comparisons between pictures with distant values. Moreover, encoding was similar for comparisons that elicited personal or shared preferences (with low and high agreement rate, respectively). Preference encoding was therefore not driven by stimuli that were obviously preferable to a majority of individuals. This makes the BVS a system in which activity depends on the subject’s and not the stimulus’ individual characteristics, in keeping with the classical distinction between value-based and perception-based decisions (Sugrue et al., 2005).


To push this idea further, we also tested whether BVS activation would depend on the category of presented stimuli. Faces and houses were chosen because they are known to activate different regions within the ventral visual stream (Grill-Spector and Malach, 2004; Reddy and Kanwisher, 2006). Paintings were added to test whether stimuli that are culturally recent could be valuated by the same system as faces, the processing of which may appear much more crucial with respect to biological survival. We found that preference encoding in the BVS was similar for faces, houses, and paintings, despite these categories activating different regions of the visual system. This might be another example of cultural recycling of brain systems that evolved to fulfill more ancestral functions (Dehaene and Cohen, 2007). Accordingly, sophisticated rewards, such as acquiring a good reputation (Izuma et al., 2008) or giving money to a charity (Harbaugh et al., 2007), have been found to activate these regions just as more basic rewards like receiving money (O'Doherty et al., 2001) or food (Plassmann et al., 2007) do. That the same regions are involved in valuating different categories of objects is consistent with the common neuronal currency hypothesis, which proposes an explanation of how the brain arbitrates between incommensurable options (Montague and Berns, 2002). This idea is supported by existing literature, as separate studies using different stimuli (like faces, scenes, paintings, sculptures, or food items) report similar activations in relation to subjective values (Di Dio et al., 2007; Kawabata and Zeki, 2004; O'Doherty et al., 2003; Plassmann et al., 2007; Yue et al., 2007).

The key finding is that a BVS with the above characteristics still generates values even when they are unnecessary for the task at hand.
Indeed, even when subjects were only asked to estimate the age of faces, houses, or paintings, with no reference to pleasantness or preference, activity in the BVS was found to be higher for pictures that were later preferred. Importantly, we controlled the conditions such that age and pleasantness ratings were not correlated and that the two tasks activated different brain regions. This finding suggests that the brain automatically engages in assigning affective values to visual objects in the individual's surroundings. A similar result was observed in these same regions (the VS and medial OFC) during binary choice between faces that had been presented repeatedly (Kim et al., 2007). Here we extend this observation by showing that automatic valuation occurs right from the first presentation of a visual stimulus in a choice-free context, that it reflects not only binary choice but also appraisal of individual pictures in a linear fashion, and that it applies not only to faces but also to other categories such as houses and paintings. These results were obtained in a noneconomic context, in keeping with the idea that sophisticated economic choices rely on more basic valuation systems that evolved before money was invented and that are shared with nonhuman species. The automaticity of BVS activation suggests that values might come first, providing a basis for subsequent potential choices. Value-based decisions would thus involve probing the BVS, just as sensory systems are probed during perception-based decisions. Where these values arise from is beyond the scope of this study. It is likely that both evolution and development played a role, though perhaps their contributions differ for values that are well shared

and those that are more personal. Another implication of BVS automaticity is that values assigned to objects or persons at first sight could affect unrelated decisions. Such a mechanism could account for some deviations from rationality that are well documented in behavioral economics and psychology (Colman, 2003; Kahneman, 2003; Mellers et al., 1998). In summary, we have characterized a brain system that encodes subjective values in a parametric manner during explicit pleasantness ratings, and that encodes binary choices expressed 1 month later. This BVS, which includes the VMPFC, VS, hippocampus, and PCC, appears to be both generic (dealing with various types of visual objects) and automatic (functioning when attention is distracted by another task). We do not imply here that the BVS encodes absolute rather than relative values: it seems plausible that subjects took the early pictures as reference points, classically termed anchors (Seymour and McClure, 2008). It also remains uncertain whether the pleasantness ratings represent cardinal or ordinal values: subjects may have first ranked each picture relative to the previous ones, and then inferred its position on the scale. However, the fact that the BVS also showed preference encoding during the distractive task argues for a more straightforward valuation process. Finally, choices were not entirely determined by option values, as some of the preferred pictures had been accorded a lower rating. Further experiments are needed to examine whether these preference reversals come from stochastic noise or from some systematic bias inherent in decision-making processes. EXPERIMENTAL PROCEDURES Subjects The study was approved by the Ethics Committee for Biomedical Research of the Pitie´-Salpeˆtrie`re Hospital, where the study was conducted. 
Subjects were recruited via the Relais d'Information en Sciences Cognitives (RISC) website and screened for exclusion criteria: age below 18 or above 39, regular use of drugs or medications, history of psychiatric or neurological disorders, and contraindications to MRI scanning (pregnancy, claustrophobia, metallic implants). All subjects gave informed consent prior to taking part in the study. We scanned 20 subjects: 10 males (aged 22.0 ± 2.7 years) and 10 females (aged 21.5 ± 3.0 years). Among this group, 17 subjects came back 1 month after scanning to perform an additional choice task (see below).

Stimuli
We used 120 faces, 120 houses, and 120 paintings, for a total of 360 pictures. These pictures were selected to cover a large range of ages: 20–50 years for faces, 0–300 years for houses, and 0–600 years for paintings. Faces were drawn from the Productive Aging Lab Face Database (Minear and Park, 2004), specifically the 20- to 50-year-old white subset, and represented different eye and hair colors as well as both genders. The faces had neutral expressions, to avoid trivial stimulus-driven valuations, as would be obtained for instance with smiling faces. Houses and paintings were gathered from the Internet. They were chosen to cover a large variety of styles, such that different esthetic tastes could be expressed, for example in terms of modern versus older periods as far back as the Middle Ages. We checked that size and luminance were approximately matched across pictures. In a pilot behavioral study, 10 subjects rated the pleasantness of the 360 pictures. This was primarily done to ensure that both the interstimulus and intersubject variances were sufficiently high for linear regressions. These pilot ratings were also used to select the pictures to be compared during the choice task. The picture pairs were fixed across subjects to allow estimation of the intersubject agreement rate.
Neuron 64, 431–439, November 12, 2009 ©2009 Elsevier Inc.

For all comparisons the two pictures belonged to the same category (face, house, or painting), and to the same gender for faces. Average pleasantness ratings were ranked and two types of comparisons were set up for each category (120 pictures). In easy comparisons, pictures ranked n were compared to pictures ranked n+60, whereas in hard comparisons pictures ranked n were compared to pictures ranked n+1. The 360 stimuli were randomly assigned to the six scanning sessions, such that each session employed 60 new stimuli, equally divided between picture categories (20 faces, 20 houses, and 20 paintings).

Tasks
All tasks were programmed on a PC using the Cogent 2000 library of Matlab functions (Wellcome Department of Imaging Neuroscience, London, UK) for stimulus presentation. Choice tasks were performed after the scanning sessions, at the end of the experiment. During scanning, pictures were displayed one by one and subjects had to rate either their age or their pleasantness. The age and pleasantness rating tasks, also referred to as the distractive and explicit tasks, were blocked in separate consecutive sessions. Half of the subjects (n = 10) performed the three age-rating sessions first, followed by the three pleasantness-rating sessions, whereas the other half (n = 10) followed the reverse order. This procedure has the advantage of cancelling out order effects while preserving the possibility of scanning the (distractive) age-rating task before subjects heard about the (explicit) pleasantness-rating and choice tasks. Importantly, subjects were trained on a small set of pictures that never appeared in the scanner, and only on the task they were to perform first (age or pleasantness rating). After the third session they were instructed through earphones that the task would change, and the new rules were explained. At the trial level, a similar rating procedure was implemented for both the distractive and explicit tasks. The picture was displayed on the screen for 3 s, following a fixation cross (Figure 1).
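The rank-based pairing scheme can be sketched in code. This is an illustrative Python re-implementation, not the original Matlab code; all names are ours, and we assume non-overlapping adjacent pairs for the hard comparisons, consistent with the 60 hard pairs needed per category (180 in total across three categories):

```python
def build_comparisons(ranked_ids):
    """Build easy and hard picture pairs for one category.

    ranked_ids: the category's 120 picture IDs, sorted by average
    pilot pleasantness rating. Easy pairs oppose ranks n and n+60;
    hard pairs oppose adjacent ranks n and n+1.
    """
    n = len(ranked_ids)   # 120 pictures per category
    half = n // 2         # 60
    easy = [(ranked_ids[i], ranked_ids[i + half]) for i in range(half)]
    hard = [(ranked_ids[i], ranked_ids[i + 1]) for i in range(0, n, 2)]
    return easy, hard     # 60 easy and 60 hard pairs
```

Applied to all three categories, this yields the 180 easy and 180 hard comparisons used in the choice task.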
Then the rating scale appeared, graduated from −10 to +10 for pleasantness rating, and for age rating from 20 to 50 years (faces), 1700 to 2000 (houses), or 1400 to 2000 (paintings). We used the date of construction for houses and the date of creation for paintings to make the response more intuitive, but we inverted these ratings during analysis to make them proportional to age, and hence comparable with faces. In all cases the scale had 21 steps (10 values on each side of the center), but only three or four reference graduations were shown. Subjects could move the cursor by pressing a button with the right index finger to go left or with the right middle finger to go right. Rating was self-paced, and subjects pressed a button with the left index finger to validate their response and proceed to the next trial. The initial position of the cursor on the scale was randomized to avoid confounding the ratings with the movements they involved. The average duration of a rating session (n = 60 pictures) was 6.84 ± 0.61 min. Note that because each picture was displayed only once, subjects rated the age of half the pictures (n = 180) and the pleasantness of the other half (n = 180). The choice task was administered twice: the first time just after the scanning sessions (inside the scanner), and the second time 1 month later (outside the scanner). Subjects performed the 180 hard comparisons first and then the 180 easy comparisons, each type divided into three blocks of 60 trials. The pictures selected to be compared were fixed (see above), but the order of presentation was randomized across subjects. The two pictures were displayed side by side, following a fixation cross. The relative position of the two pictures on the screen was also randomized. Subjects were asked to indicate a preference by pressing one of two buttons corresponding to the left and right stimuli. We recorded not only choices but also RTs (delays between picture presentation and button press).
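The scale arithmetic can be made concrete with a small sketch (illustrative Python; the 21-step mapping follows the description above, while `reference_year` is our own assumption for illustration, since the paper only states that date ratings were inverted):

```python
def cursor_to_value(step, lo, hi, n_steps=21):
    """Map a cursor position (step 0..n_steps-1) on the 21-step scale
    to a value between lo and hi (e.g., -10..+10 for pleasantness)."""
    return lo + step * (hi - lo) / (n_steps - 1)

def date_to_age(rated_date, reference_year=2000):
    """Invert a house/painting date rating into an age, so that larger
    numbers mean older items, comparable with face ages.
    reference_year is a hypothetical choice, not taken from the paper."""
    return reference_year - rated_date
```

For example, a house dated 1700 on the 1700–2000 scale maps to an age of 300 years, matching the 0–300 year range quoted for houses.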
Prediction score was 1 if the chosen picture had been rated as more pleasant, 0.5 if the two pictures had been rated equally, and 0 if the choice went against the pleasantness ratings (a phenomenon called preference reversal). Stability score was 1 if the same picture was chosen in both the immediate (just after scanning) and delayed (1 month later) choice sessions, and 0 otherwise. Average ratings, RTs, prediction and stability scores, and correlations (Pearson's coefficients) were z-scored at the subject level and tested for significance at the group level. Two-tailed paired t tests were used for comparisons between experimental conditions, and one-tailed paired t tests for comparisons against chance level. We considered three significance levels: 0.05, 0.01, and 0.001. All statistical analyses were performed with the Matlab Statistics Toolbox (Matlab R2006b, The MathWorks, Inc., USA).
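The two behavioral scores can be written out explicitly. This is a minimal Python sketch of the scoring rules just described; the function names are ours:

```python
def prediction_score(rating_chosen, rating_other):
    """1 if the chosen picture had the higher pleasantness rating,
    0.5 if both were rated equally, 0 on a preference reversal."""
    if rating_chosen > rating_other:
        return 1.0
    if rating_chosen == rating_other:
        return 0.5
    return 0.0

def stability_score(choice_immediate, choice_delayed):
    """1 if the same picture was chosen just after scanning and
    1 month later, 0 otherwise."""
    return 1.0 if choice_immediate == choice_delayed else 0.0
```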

438 Neuron 64, 431–439, November 12, 2009 ª2009 Elsevier Inc.

Neuroimaging
T2*-weighted echo planar images (EPIs) were acquired with BOLD contrast on a 3.0 Tesla magnetic resonance scanner. We employed a tilted-plane acquisition sequence designed to optimize functional sensitivity in the OFC and medial temporal lobes (Deichmann et al., 2003). To cover the whole brain with good spatial resolution, we used the following parameters: TR = 2.29 s, 35 slices, 2 mm slice thickness, 1 mm interslice gap. T1-weighted structural images were also acquired, coregistered with the mean EPI, normalized to a standard T1 template, and averaged across subjects to allow group-level anatomical localization. EPI data were analyzed in an event-related manner, within a general linear model (GLM), using the statistical parametric mapping software SPM5 (Wellcome Trust Centre for Neuroimaging, London, UK) implemented in Matlab. The first five volumes of each session were discarded to allow for T1 equilibration effects. Preprocessing consisted of spatial realignment, normalization using the same transformation as the structural images, and spatial smoothing with a Gaussian kernel of 8 mm full-width at half-maximum (FWHM). We used three GLMs to explain individual subject-level time series with fixed effects. All models incorporated only one event per trial, the onset of picture display. We also tested models with boxcars over stimulus and response epochs, which we do not detail here as they yielded similar results. In the first model, trials were sorted according to stimulus category (face, house, or painting), with no parametric modulation. In the second model, trials were sorted according to task (age or pleasantness rating), with the recorded rating as a parametric modulator. In the third model, trials were sorted according to whether the picture was preferred or not in the postscanner easy comparison, with no parametric modulation. All regressors of interest were convolved with a canonical hemodynamic response function (HRF).
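The construction of a parametrically modulated regressor, as in the second model, can be sketched as follows. This is illustrative Python with a double-gamma HRF approximating SPM's canonical function; it is not the SPM5 code, and all names and defaults here are our own:

```python
import math
import numpy as np

def hrf(t):
    """Double-gamma HRF: a gamma peaking near 6 s minus a scaled
    undershoot gamma peaking near 16 s (an SPM-like approximation)."""
    t = np.asarray(t, float)
    out = np.zeros_like(t)
    pos = t > 0
    g = lambda x, k: x ** (k - 1) * np.exp(-x) / math.gamma(k)
    out[pos] = g(t[pos], 6.0) - g(t[pos], 16.0) / 6.0
    return out

def parametric_regressor(onsets, modulator, n_scans, tr=2.29, dt=0.1):
    """Stick functions at picture onsets, weighted by the mean-centered
    modulator (e.g., pleasantness ratings), convolved with the HRF and
    sampled at scan times."""
    n_fine = int(round(n_scans * tr / dt))
    stick = np.zeros(n_fine)
    weights = np.asarray(modulator, float)
    weights = weights - weights.mean()   # mean-center the modulator
    for onset, w in zip(onsets, weights):
        stick[int(round(onset / dt))] += w
    fine = np.convolve(stick, hrf(np.arange(0.0, 32.0, dt)))[:n_fine]
    scan_idx = np.rint(np.arange(n_scans) * tr / dt).astype(int)
    return fine[scan_idx]
```

The resulting vector can serve as one column of a design matrix; SPM additionally orthogonalizes modulated regressors against the unmodulated event regressor, a step omitted here for brevity.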
To correct for motion artifacts, subject-specific realignment parameters were modeled as covariates of no interest. Linear contrasts of regression coefficients were computed at the subject level and then taken to a group-level random effects analysis using a one-sample t test. A conjunction analysis (Friston et al., 2005) was performed to find brain regions activated in both the pleasantness and preference contrasts. Regions of interest (ROIs) were identified in statistical parametric maps (SPMs) at a threshold of p < 0.001 (uncorrected) and selected as part of the BVS if they contained more than 20 voxels. To further characterize the BVS, we extracted regression coefficients (betas) within spheres of 8 mm diameter (corresponding to the FWHM of the Gaussian kernel used for spatial smoothing) centered on the maxima of the selected clusters. Regression coefficients were averaged within and between ROIs for each subject. Factors of interest (stimulus category, rating task, delay, and difficulty level) and correlations with ratings were tested for significance at the group level using paired t tests and Pearson's coefficients, as was done for the behavioral data.
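The sphere-based extraction can be sketched with plain array arithmetic (illustrative Python, not the original Matlab analysis; an 8 mm diameter corresponds to a 4 mm radius, and the 2 mm voxel size is taken from the acquisition parameters above):

```python
import numpy as np

def sphere_mask(shape, center_vox, radius_mm=4.0, voxel_size=2.0):
    """Boolean mask of voxels within radius_mm of a cluster maximum
    (center_vox given in voxel coordinates)."""
    grid = np.indices(shape).astype(float)
    sq = sum((grid[i] - center_vox[i]) ** 2 for i in range(3))
    return np.sqrt(sq) * voxel_size <= radius_mm

def roi_mean_beta(beta_map, center_vox, radius_mm=4.0, voxel_size=2.0):
    """Average the regression coefficients (betas) inside the sphere."""
    mask = sphere_mask(beta_map.shape, center_vox, radius_mm, voxel_size)
    return float(beta_map[mask].mean())
```

The per-subject, per-ROI averages produced this way can then be entered into the group-level paired t tests described above.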

ACKNOWLEDGMENTS

We wish to thank Nathalie George for her useful advice on the experimental design and Hilke Plassmann for her helpful suggestions on an earlier version of the manuscript. We are also grateful to the CENIR staff (Eric Bardinet, Eric Bertasi, Kevin Nigaud, and Romain Valabregue) for their skilful assistance in MRI data acquisition and analysis. The study was funded by the Fyssen Foundation.

Accepted: September 17, 2009
Published: November 11, 2009

REFERENCES

Alexander, G.E., DeLong, M.R., and Strick, P.L. (1986). Parallel organization of functionally segregated circuits linking basal ganglia and cortex. Annu. Rev. Neurosci. 9, 357–381.
Beckmann, M., Johansen-Berg, H., and Rushworth, M.F. (2009). Connectivity-based parcellation of human cingulate cortex and its relation to functional specialization. J. Neurosci. 29, 1175–1190.
Brown, R.G., and Pluck, G. (2000). Negative symptoms: the 'pathology' of motivation and goal-directed behaviour. Trends Neurosci. 23, 412–417.
Chaudhry, A.M., Parkinson, J.A., Hinton, E.C., Owen, A.M., and Roberts, A.C. (2009). Preference judgements involve a network of structures within frontal, cingulate and insula cortices. Eur. J. Neurosci. 29, 1047–1055.
Colman, A.M. (2003). Cooperation, psychological game theory, and limitations of rationality in social interaction. Behav. Brain Sci. 26, 139–153.
Daw, N.D., and Doya, K. (2006). The computational neurobiology of learning and reward. Curr. Opin. Neurobiol. 16, 199–204.
Dehaene, S., and Cohen, L. (2007). Cultural recycling of cortical maps. Neuron 56, 384–398.
Deichmann, R., Gottfried, J.A., Hutton, C., and Turner, R. (2003). Optimized EPI for fMRI studies of the orbitofrontal cortex. Neuroimage 19, 430–441.
Di Dio, C., Macaluso, E., and Rizzolatti, G. (2007). The golden beauty: brain response to classical and renaissance sculptures. PLoS ONE 2, e1201.
Draganski, B., Kherif, F., Kloppel, S., Cook, P.A., Alexander, D.C., Parker, G.J., Deichmann, R., Ashburner, J., and Frackowiak, R.S. (2008). Evidence for segregated and integrative connectivity patterns in the human basal ganglia. J. Neurosci. 28, 7143–7152.
FitzGerald, T.H., Seymour, B., and Dolan, R.J. (2009). The role of human orbitofrontal cortex in value comparison for incommensurable objects. J. Neurosci. 29, 8388–8395.
Friston, K.J., Penny, W.D., and Glaser, D.E. (2005). Conjunction revisited. Neuroimage 25, 661–667.
Gold, J.I., and Shadlen, M.N. (2007). The neural basis of decision making. Annu. Rev. Neurosci. 30, 535–574.
Grill-Spector, K., and Malach, R. (2004). The human visual cortex. Annu. Rev. Neurosci. 27, 649–677.
Haber, S.N. (2003). The primate basal ganglia: parallel and integrative networks. J. Chem. Neuroanat. 26, 317–330.
Harbaugh, W.T., Mayr, U., and Burghart, D.R. (2007). Neural responses to taxation and voluntary giving reveal motives for charitable donations. Science 316, 1622–1625.
Hare, T.A., Camerer, C.F., and Rangel, A. (2009). Self-control in decision-making involves modulation of the vmPFC valuation system. Science 324, 646–648.
Izuma, K., Saito, D.N., and Sadato, N. (2008). Processing of social and monetary rewards in the human striatum. Neuron 58, 284–294.
Kable, J.W., and Glimcher, P.W. (2007). The neural correlates of subjective value during intertemporal choice. Nat. Neurosci. 10, 1625–1633.
Kahneman, D. (2003). A perspective on judgment and choice: mapping bounded rationality. Am. Psychol. 58, 697–720.
Kawabata, H., and Zeki, S. (2004). Neural correlates of beauty. J. Neurophysiol. 91, 1699–1705.
Kim, H., Adolphs, R., O'Doherty, J.P., and Shimojo, S. (2007). Temporal isolation of neural processes underlying face preference decisions. Proc. Natl. Acad. Sci. USA 104, 18253–18258.
Knutson, B., and Cooper, J.C. (2005). Functional magnetic resonance imaging of reward prediction. Curr. Opin. Neurol. 18, 411–417.
Knutson, B., Rick, S., Wimmer, G.E., Prelec, D., and Loewenstein, G. (2007). Neural predictors of purchases. Neuron 53, 147–156.
Lehericy, S., Ducros, M., Van de Moortele, P.F., Francois, C., Thivard, L., Poupon, C., Swindale, N., Ugurbil, K., and Kim, D.S. (2004). Diffusion tensor fiber tracking shows distinct corticostriatal circuits in humans. Ann. Neurol. 55, 522–529.
McClure, S.M., Laibson, D.I., Loewenstein, G., and Cohen, J.D. (2004a). Separate neural systems value immediate and delayed monetary rewards. Science 306, 503–507.
McClure, S.M., Li, J., Tomlin, D., Cypert, K.S., Montague, L.M., and Montague, P.R. (2004b). Neural correlates of behavioral preference for culturally familiar drinks. Neuron 44, 379–387.
McClure, S.M., York, M.K., and Montague, P.R. (2004c). The neural substrates of reward processing in humans: the modern role of fMRI. Neuroscientist 10, 260–268.
Mellers, B.A., Schwartz, A., and Cooke, A.D. (1998). Judgment and decision making. Annu. Rev. Psychol. 49, 447–477.
Minear, M., and Park, D. (2004). A lifespan database of adult facial stimuli. Behav. Res. Methods Instrum. Comput. 36, 630–633.
Montague, P.R., and Berns, G.S. (2002). Neural economics and the biological substrates of valuation. Neuron 36, 265–284.
O'Doherty, J., Kringelbach, M.L., Rolls, E.T., Hornak, J., and Andrews, C. (2001). Abstract reward and punishment representations in the human orbitofrontal cortex. Nat. Neurosci. 4, 95–102.
O'Doherty, J., Winston, J., Critchley, H., Perrett, D., Burt, D.M., and Dolan, R.J. (2003). Beauty in a smile: the role of medial orbitofrontal cortex in facial attractiveness. Neuropsychologia 41, 147–155.
O'Doherty, J., Dayan, P., Schultz, J., Deichmann, R., Friston, K., and Dolan, R.J. (2004). Dissociable roles of ventral and dorsal striatum in instrumental conditioning. Science 304, 452–454.
O'Doherty, J.P., Hampton, A., and Kim, H. (2007). Model-based fMRI and its application to reward learning and decision making. Ann. N Y Acad. Sci. 1104, 35–53.
Padoa-Schioppa, C. (2007). Orbitofrontal cortex and the computation of economic value. Ann. N Y Acad. Sci. 1121, 232–253.
Paulus, M.P., and Frank, L.R. (2003). Ventromedial prefrontal cortex activation is critical for preference judgments. Neuroreport 14, 1311–1315.
Plassmann, H., O'Doherty, J., and Rangel, A. (2007). Orbitofrontal cortex encodes willingness to pay in everyday economic transactions. J. Neurosci. 27, 9984–9988.
Plassmann, H., O'Doherty, J., Shiv, B., and Rangel, A. (2008). Marketing actions can modulate neural representations of experienced pleasantness. Proc. Natl. Acad. Sci. USA 105, 1050–1054.
Rangel, A., Camerer, C., and Montague, P.R. (2008). A framework for studying the neurobiology of value-based decision making. Nat. Rev. Neurosci. 9, 545–556.
Reddy, L., and Kanwisher, N. (2006). Coding of visual objects in the ventral stream. Curr. Opin. Neurobiol. 16, 408–414.
Rushworth, M.F., and Behrens, T.E. (2008). Choice, uncertainty and value in prefrontal and cingulate cortex. Nat. Neurosci. 11, 389–397.
Samuelson, P.A. (1938). The numerical representation of ordered classifications and the concept of utility. Rev. Econ. Stud. 6, 65–70.
Schiller, D., Freeman, J.B., Mitchell, J.P., Uleman, J.S., and Phelps, E.A. (2009). A neural mechanism of first impressions. Nat. Neurosci. 12, 508–514.
Seymour, B., and McClure, S.M. (2008). Anchors, scales and the relative coding of value in the brain. Curr. Opin. Neurobiol. 18, 173–178.
Sharot, T., De Martino, B., and Dolan, R.J. (2009). How choice reveals and shapes expected hedonic outcome. J. Neurosci. 29, 3760–3765.
Sugiura, M., Shah, N.J., Zilles, K., and Fink, G.R. (2005). Cortical representations of personally familiar objects and places: functional organization of the human posterior cingulate cortex. J. Cogn. Neurosci. 17, 183–198.
Sugrue, L.P., Corrado, G.S., and Newsome, W.T. (2005). Choosing the greater of two goods: neural currencies for valuation and decision making. Nat. Rev. Neurosci. 6, 363–375.
Von Neumann, J., and Morgenstern, O. (1944). Theory of Games and Economic Behavior (Princeton, NJ: Princeton University Press).
Yue, X., Vessel, E.A., and Biederman, I. (2007). The neural basis of scene preferences. Neuroreport 18, 525–529.
