Scientific Jury Selection: Does It Work?

RICHARD SELTZER1
Department of Political Science, Howard University

This article describes different methodologies used to predict the likely disposition of jurors in order to guide the exercise of peremptory challenges. The actual use of these methodologies in 27 telephone surveys, 9 focus groups, and 2 studies of jurors after the case was decided is examined. It is concluded that the efficacy of scientific jury selection depends, in part, on the type of case.

1 Correspondence concerning this article should be addressed to Richard Seltzer, Department of Political Science, Howard University, Washington, DC 20059. E-mail: rseltzer@howard.edu

Journal of Applied Social Psychology, 2006, 36, 10, pp. 2417-2435. © 2006 the Authors. Journal compilation © 2006 Blackwell Publishing, Inc.

Beginning with a series of political trials in the early 1970s (Harrisburg 7, Camden 28, and Gainesville 8), social science has been used to help attorneys select juries (McConahay, Mullin, & Frederick, 1977; Schulman, Shaver, Coleman, Emrich, & Christie, 1973). The use of the social sciences in evaluating juries has changed since its initial conception. Early scientific jury selection concentrated on using demographic characteristics to predict the likely vote of potential jurors. Today, most jury consultants shy away from the term scientific jury selection (SJS) and are less likely to stress the prediction of juror voting, especially in complex cases. The focus is on the development of a theory of the case and how this relates to an entire strategy for jury selection. Techniques also have changed over two decades. The mass survey is now supplemented, if not overshadowed, by focus groups or trial simulations. Although early SJS typically was used in political trials (Harrisburg 7, Attica, and Wounded Knee), jury consulting today has become a multimillion-dollar-per-year industry used by corporations, well-heeled individuals, and even public defenders. Recent cases in which parties have used SJS include O. J. Simpson, the Menendez brothers, Martha Stewart, Bernie Ebbers, the first Rodney King trial, the William Kennedy Smith rape trial, and the $1 million McDonald's verdict (Strier, 1999; Strier & Shestowsky, 1999). Jury consultants are no longer mom-and-pop businesses. The largest jury consulting firm (DecisionQuest) was sold in December 2002 to Browne Business for $31 million.

The present article examines the efficacy of using survey data to predict attitudes, and discusses changes in techniques and concentration that have occurred in jury consulting over the past several years. From the outset, it is important to clarify that few present-day jury consultants claim the ability to predict a verdict. Most jury consultants try to predict attitudes toward a case. In fact, the term scientific jury selection is no longer used by jury consultants. However, because of the visibility of the term in the academic literature, I have decided to use it in this article. But, as Krauss and Bonora (2003) pointed out:

Surveys provide data on group tendencies, which are useful in estimating the likelihood that those same attitudes will be held by others who are members of the same groups. Group tendencies cannot be expected to reliably predict the votes and the reasoning processes of individual jurors, much less of whole juries, made up of people from a variety of backgrounds. (p. 10.04)

Typically, SJS involves a telephone survey of the jury-eligible population or a focus group/mock trial.2

2 Strier and Shestowsky (1999) received responses from 107 members of the American Society of Trial Consultants and found that focus groups/mock trials were more likely to occur than were community surveys.

The purpose of jury research is to develop case strategy and to develop a profile of favorable and unfavorable jurors. This profile will be used to guide attorneys in the use of peremptory challenges (for an examination of how peremptory challenges affect jury verdicts, see Zeisel & Diamond, 1978). The research might also be used in motions for changes of venue, different voir dire conditions, and jury composition challenges. The following four types of questions usually are asked in the survey instrument.

1. Questions are asked about standard demographic information that one believes will become available from voir dire (age, race, sex, crime victimization experience in criminal cases, type of car driven in an automobile liability case, etc.).

2. Often, demographic-type questions will be asked, even if it is not believed that the information will become available during voir dire. For example, in a capital case, a person's level of religiosity may be an important determinant of his or her attitudes toward the death penalty. However, many judges will not allow questions


about religion to be asked during voir dire. Nevertheless, an understanding of the relationship between religiosity and attitudes toward the death penalty can help develop a theory of the case that can be used both during jury selection and in the development of case strategy.

3. A series of questions often is asked that uncovers background attitudes that are assumed to be important in the particular case. For example, in a criminal case, one might ask whether respondents agree or disagree that "People accused of crimes are usually guilty." In a corporate liability case, one might ask whether corporations are too greedy, whether government regulations hurt the economy, or whether there are too many frivolous lawsuits.

4. Finally, a series of questions about the actual case often is asked. Respondents might be asked how much they know about the case and where they obtained this information. Typically, the respondent is given some background information about the case and is asked to react to different dimensions of the case.

In developing a jury profile, the first type of analysis usually involves simple cross-tabulation. Every question from the first and second series of questions is cross-tabulated with every question from the third and fourth series. In addition, the third series is also cross-tabulated with the fourth series. In social science methodology courses, we are taught to avoid fishing for significant relationships. Theory should guide our data analysis. However, the results from fishing are often surprising. For example, in one pornography case, there was a strong relationship between race and attitudes toward the case. In another similar case, but in a different jurisdiction, there was also a strong relationship between these two variables, but in the reverse direction. Our theory about how juror decision making is affected by the background, experiences, and attitudes of jurors simply is not sufficiently developed.
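As a concrete sketch of this cross-tabulation step, the fragment below builds a two-way table from hypothetical respondents and screens it with a Pearson chi-square statistic. The respondents and category labels are invented for illustration; this is not the author's actual procedure or code.

```python
# Sketch of the cross-tabulation step: a background (independent) question
# is crossed with a case-attitude (dependent) question, and the resulting
# table is screened with a Pearson chi-square statistic.
from collections import Counter

def chi_square(pairs):
    """Pearson chi-square for a two-way table built from (row, col) pairs."""
    cell = Counter(pairs)
    row = Counter(r for r, _ in pairs)
    col = Counter(c for _, c in pairs)
    n = len(pairs)
    stat = 0.0
    for r in row:
        for c in col:
            expected = row[r] * col[c] / n  # expected count under independence
            stat += (cell[(r, c)] - expected) ** 2 / expected
    return stat

# Hypothetical mini-survey: (race, view of the case) for 12 respondents.
respondents = [
    ("Black", "Favorable"), ("Black", "Favorable"), ("Black", "Favorable"),
    ("Black", "Unfavorable"), ("White", "Favorable"), ("White", "Unfavorable"),
    ("White", "Unfavorable"), ("White", "Unfavorable"), ("Black", "Favorable"),
    ("White", "Unfavorable"), ("Black", "Unfavorable"), ("White", "Favorable"),
]
stat = chi_square(respondents)
print(f"chi-square = {stat:.2f}")  # compare against the critical value for df = 1
```

In practice, this test is repeated for every independent-dependent pairing, which is exactly why the multiple-testing caution discussed next matters.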
In SJS, data mining becomes the norm. Therefore, I usually begin by cross-tabulating each independent variable with each dependent variable. Substantial caution is called for in this type of fishing. For example, in one case in which 35 mock jurors saw a simulated tax-fraud case, I tested 228 (19 × 12) different cross-tabulations. Since I used a .10 criterion level of significance in the reporting (in small focus groups, I often relax the significance criterion), I would expect 10% of these cross-tabulations to be statistically significant even if the numbers were generated at random. Therefore, with pure random numbers, one would have expected

23 significant relationships. There were actually 27 such relationships. This is consistent with random numbers. The client was not happy when told that it was unlikely that a juror profile would help during jury selection.

What one is looking for first is regularity. An independent variable should be strongly related to a number of the dependent measures. Second, one attempts to rule out the effect of third variables. I usually run a series of loglinear models in an attempt to determine whether the relationship between the two variables is simply an artifact of a third variable. For example, if I found significant relationships of both race and education with the dependent variable, I would test whether both variables independently (and interactively) relate to the dependent variable.

After the cross-tabulations are concluded, the second stage of statistical analysis begins. In this stage, an attempt is made first to develop a single index to represent the case. For the sake of simplicity in interpretation, the index is usually made to range from 0 to 100. Factor analysis is used in combination with simple additive indexes. In most cases, a single index is sufficient. Other cases are more complicated. The dependent variables might form more than one coherent index. When the analysis of two separate indexes seems to conflict, this is an important finding and points to the necessity of paying attention to different or conflicting case strategies. An obvious example is that of a capital case in which questions on guilt/innocence might not correlate with questions on the sentence, or when a temporary-insanity defense might conflict with a claim of self-defense. After the index (or indexes) is developed, multiple regression is used to develop a jury profile. A series of stepwise regressions is computed, which allows for testing for interactions, nonlinear relationships, and problems of multicollinearity.
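The chance-expectation arithmetic in the tax-fraud example above can be checked directly. Treating the 228 tests as independent is a simplifying assumption, but under it an exact binomial tail probability shows that observing 27 "significant" results is unremarkable.

```python
# Multiple-testing arithmetic for the tax-fraud example: 228 cross-tabulations
# screened at the .10 level. Expected chance hits = 228 * 0.10 = 22.8; the
# observed count was 27. The binomial tail treats the tests as independent,
# which is a simplifying assumption.
from math import comb

tests, alpha, observed = 228, 0.10, 27
expected = tests * alpha
print(f"expected by chance: {expected:.1f}")  # 22.8

# P(X >= 27) for X ~ Binomial(228, 0.10)
p_tail = sum(comb(tests, k) * alpha**k * (1 - alpha) ** (tests - k)
             for k in range(observed, tests + 1))
print(f"P(27 or more by chance) = {p_tail:.2f}")
```

A tail probability this large is exactly what "consistent with random numbers" means: the 27 hits carry no evidence that a juror profile would help.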
A combination of dummy and interval-level variables is used, often resulting in over 50 independent variables in the initial regression runs. Considerable judgment (which is difficult to codify) is used in deciding whether to include or exclude variables in the final equation. Particular attention is paid to making sure that the results of the regressions and the results of the cross-tabulations are consistent. If inconsistencies are found, this often helps one to understand the complexities of the case. In almost all cases, one has some information about potential jurors prior to voir dire: sex, age, occupation, address, and so on. In some cases (three of the cases used in the present article), credit checks, drive-bys, and party registration data can supplement information from the venire list. Therefore, it is reasonable to include these additional variables (i.e., income, type of housing unit, and party identification) as predictor variables. If the regression equations suggest that predictability is possible (a reasonably high R2), I use the results of the regression equations to develop an initial predictive score (0 to 100) for each juror. As additional information becomes


available (through jury questionnaires3 or voir dire), a more refined predictive score is computed. Extreme caution is warranted throughout this approach. Information that is obtained through voir dire is usually of far greater importance than information obtained from the jury-qualification questionnaire. Obviously, in a capital case, a juror’s attitude toward the death penalty (which would be learned from voir dire) likely would overshadow any demographic information that one might have. Some information may not be amenable to preestablished statistical coding. For example, in a libel suit against a newspaper, it would not be possible (because of a small subsample size) to have as an independent variable whether or not the respondent was a reporter for a newspaper. However, if the juror was a reporter for a newspaper, this fact would probably overshadow any other information about the potential juror. You must be willing to throw out any preconceived scores that you have developed. Ellsworth (1993) also correctly cautioned that there is not a ‘‘global conviction proneness’’ (p. 45). One must examine the peculiarities of each case (Patterson, 1986). SJS is most useful when little other information is available about the jurors, such as when voir dire is limited or when some jurors are not forthcoming.
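A minimal sketch of the 0-100 predictive scoring described above. The coefficients and juror records here are hypothetical; in practice the coefficients would come from the survey regressions, with the dependent index itself scaled from 0 to 100, and the score would be refined as voir dire information arrives.

```python
# Sketch of turning a fitted regression equation into a 0-100 predictive
# score for each member of the venire. The coefficients are hypothetical
# stand-ins for values estimated from the telephone-survey regression.
coefs = {"intercept": 45.0, "female": 6.0, "college": -8.0, "age_per_year": 0.3}

def predictive_score(juror):
    score = (coefs["intercept"]
             + coefs["female"] * juror["female"]
             + coefs["college"] * juror["college"]
             + coefs["age_per_year"] * juror["age"])
    return max(0.0, min(100.0, score))  # keep the score on the 0-100 index scale

# Hypothetical venire members, ranked from most to least favorable.
venire = [
    {"name": "Juror A", "female": 1, "college": 0, "age": 60},
    {"name": "Juror B", "female": 0, "college": 1, "age": 30},
]
for juror in sorted(venire, key=predictive_score, reverse=True):
    print(juror["name"], round(predictive_score(juror), 1))
```

The clipping to 0-100 is purely presentational; what matters for exercising strikes is the ranking of the venire, which is why the caveats above about overriding the score apply with full force.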

3 It is becoming more common in complex and highly publicized cases for the judge to employ a juror questionnaire. These questionnaires usually have a wider range of questions and may include some case-specific and attitudinal questions. As Strier (1999) noted in his review of SJS, the addition of case-specific questions allows for greater predictability.

Controversy About SJS

The use of SJS has been controversial for at least two reasons: It has been argued that it subverts the criminal justice system, and that it is ineffective. Some commentators believe that SJS undermines the criminal justice system in four different ways. First, Etzioni (1974) complained that SJS favors those who are wealthy or celebrated, since they are the only ones who can take advantage of the techniques. Strier (1999) noted the irony that techniques originally developed to help the poor are now used mostly by the wealthy. Second, Rachlinski (1993) indicated that using the information obtained by SJS undermines Batson v. Kentucky (1986) because jurors will be struck simply because of their demographic profile. However, Strier and Shestowsky (1999) noted that one cannot blame SJS for this. Even without SJS, attorneys exercise peremptory strikes given a preconceived set of

stereotypes. Nevertheless, some have argued for either abolishing or limiting the use of peremptory challenges (Anderson, 1998; Brown, 2003; Hoffman, 1997, 1999; Jonakait, 2003; Montz & Montz, 2000). Third, Lilly (2001) attacked SJS because, by typically removing the brighter jurors, it produces a "dumbing down" of the jury. The fourth criticism comes from Lane (1999), Barber (1994), Hoffman (1999), and Brown (2003), who noted that the appearance that SJS can control the outcome of a case creates a public perception that the process is rigged. However, Rose (2003), in questioning 207 jurors from North Carolina, found that they were not unduly upset by being excused.

Saks (1976) criticized SJS from a different angle. He indicated that SJS is ineffective. In his research on 480 mock jurors who saw a videotape of a burglary trial, he found that by using the four most powerful attitudinal variables, R2 was only .13. He concluded that evidence is far more important than jurors' attitudes. Only in close cases could SJS possibly make a difference. Saks (1997) sharpened his criticism of SJS in his review of jury experiments. Saks' views also are held by Diamond (1990), who concluded in a review essay on the subject that SJS "can have a modest effect at best and that it can decrease as well as increase the probability of a favorable verdict" (p. 179; emphasis in the original). Kressel and Kressel (2002) were only somewhat more sympathetic to the utility of SJS when they concluded the following:

Although this line of research (SJS) has not settled the matter once and for all, it appears that evidence determines verdicts far more often than jurors' backgrounds, personalities, attitudes, predispositions, beliefs, or other biases. Only rarely does an irresponsible jury bring in one verdict while the evidence points decidedly in another direction. (p.
105) Similarly, Jonakait (2003) summarized his review essay when he stated the following:

Social science studies have consistently found that the overwhelming determinant of verdicts is the evidence presented to the jury. Much research has been done trying to find correlations between the race, gender, age, economic status, political views, and other characteristics of jurors and their verdicts. Although every finding is not precisely the same, one conclusion is consistently reached: verdicts cannot be predicted accurately simply by knowing the makeup of the jury. (p. 159; emphasis in the original)


Other researchers have found a range of effects.

- Penrod (1980): R2s ranging from .20 to .23 in four separate cases using 367 mock jurors.
- Hastie, Penrod, and Pennington (1983): Using basic background information as independent variables, they found an R2 of .03 in a mock murder trial.
- Feild and Barnett (1978): An R2 of .26 was achieved in combining background variables and attitudes toward rape.
- Moran and Comfort (1982): Obtained an R2 of .11 in a mail survey of 319 in Florida.
- Mills and Bohannon (1980): Conducted a mail questionnaire among real jurors. Their R2s ranged from .10 in murder cases to .16 in robbery cases.
- Visher (1987): Found that demographic variables did little to predict juror decision making in her study of 331 jurors who served in 38 forcible sexual assault cases in a large midwestern city.
- Hepburn (1980): Found in St. Louis that demographic variables had little predictive value in a hypothetical murder trial involving a Black male.

Authors differ in their interpretation of these R2s. For example, Penrod believed that his R2 of .20 was low. However, when Fulero and Penrod (1990) examined this issue 10 years later, they noted that "although the percentage of variance explained may be small, the potential improvement in selection performance is not insignificant" (p. 250). Strier and Shestowsky (1999) also concluded that explained variances of 5% to 15% are not trivial because "substantially improving the odds, if not the certainty, of victory should not be dismissed as inconsequential" (p. 464). My interpretation is that R2s in the .15 range are not particularly low when compared to the R2s that appear in many academic social science articles in which the author claims substantively significant results. Similarly, critics (Sachs, 1997) of standardized educational testing point to R2s below .20 in meta-analyses of the relationship between Scholastic Aptitude Test (SAT) scores and first-year college grade point average.
Perhaps the most consistent juror demographic relationship concerns race. African Americans are far more likely than are Whites to believe that the criminal justice system discriminates against Blacks and that conspiracies against Blacks occur in the present and not simply in the past (Smith & Seltzer, 2000). Race has been shown to have a substantial effect in capital

cases (also see Eisenberg, Garvey, & Wells, 2001), and in jury decision making in general (King, 1993). Bowers, Steiner, and Sandys (2001) analyzed data from the Capital Jury Project's national study of 340 capital trials from 14 states. They found that juries with five or more White males were far more likely than were other juries to sentence Black defendants to death when the victim was White (63.2% vs. 23.1%). Thus, it is no surprise that Baldus, Woodworth, Zuckerman, Weiner, and Broffit (2001) found, in a study of 317 capital venires in Philadelphia County, Pennsylvania, between 1981 and 1997, that prosecutors struck 51% of Black venire members, compared to 26% of non-Black venire members. The role of juror race also was documented in several studies of students and of people waiting in airports conducted by Sommers and Ellsworth (2000, 2001). Clearly, in some situations, a demographic profile can be used to predict juror attitudes.

Telephone Surveys

Since 1979, I have conducted 27 separate telephone surveys that were used for SJS. As can be seen in Table 1, these cases range across many different areas. In retrospect, it became apparent that reanalysis of these cases might prove useful in determining whether or not background variables, across a variety of cases, significantly relate to the attitudes that I was attempting to predict. These cases occurred in 12 different states. Sample sizes ranged from 153 to 1,000. The cases were about one third civil.

In all cases, I included as independent variables only those background characteristics that I believed could potentially be obtained via voir dire, or by credit checks, drive-bys, or voter registration in the three cases in which these techniques were employed. As a special circumstance, in some cases I also used frequency of religious service attendance. Obviously, not all potential questions are asked during voir dire. In almost all surveys, respondents were asked their sex, race, age, marital status, level of education, employment status, location, frequency of religious service attendance, and media habits. Then, depending on the type of case, other background questions were asked. For example, in the obscenity cases, respondents were asked whether or not they had ever seen an X-rated movie. In criminal cases, respondents were asked whether or not they had ever been the victim of a serious crime. No direct attitudinal variables are used as independent variables in the analysis discussed in this article. This will result in lower R2s compared to some articles discussed previously that included attitudinal measures.


Table 1
Scientific Jury Selection Telephone Surveys

 #  Survey topic                      R2    Type of case      N
 1. Pornography                      .50        CRM         502
 2. Pornography                      .45        CRM         511
 3. Civil liberties violations       .44        CIV         579
 4. Murder of police officer         .38        CRM         329
 5. Pornography                      .37        CRM         484
 6. Death penalty                    .35        CRM         277
 7. Repression of people with AIDS   .35        CIV         323
 8. Pornography                      .32        CRM         396
 9. Black land rights case           .29        CIV         300
10. KKK-racial violence              .29        CIV         429
11. KKK-racial violence              .29        CIV         100
12. Abortion rights                  .26        CRM         153
13. Pornography                      .22        CRM         503
14. Race-related robbery             .18        CRM         329
15. Sale of drugs                    .18        CRM         323
16. Bank fraud                       .18        CIV         636
17. Airplane crash                   .17        CRM         501
18. Japanese firm anti-trust         .17        CIV       1,000
19. Accounting firm negligence       .15        CIV         496
20. Defense fraud                    .14        CRM         502
21. Fraud by religious leader        .14        CRM         502
22. KKK-racial violence              .14        CIV         599
23. Death penalty                    .13        CRM         610
24. Death penalty                    .11        CRM         210
25. Slander of corporation           .09        CIV         398
26. Sentencing                       .06        CRM         306
27. Terrorism                        .04        CRM         313

Note. CRM = criminal; CIV = civil.

I do not include data on whether a case was won or lost. This determination is too subjective. The following examples are instructive as to why it is problematic to code cases with regard to a win or loss.

- A capital case in which the defendant claims that he is innocent, although it is apparent that he has little to offer in his defense. The jury comes back guilty, but with a verdict of life instead of death.
- A criminal case in which a member of the defense team decides to "wing it" at the last moment and ignore all advice from the jury consultant.
- A civil case in which the mock trial shows a client losing $10 million. The client decides to go to trial and loses $10 million (or $2 million).
- A civil case in which the mock trial shows a client losing $10 million. The client decides to settle.
- The defendant is accused of first-degree murder. The jury acquits on first-degree murder, but convicts on voluntary manslaughter.

One needs to be very careful in assessing a jury consultant's supposed win-loss ratio. It is not clear how to interpret pleas, settlements, criminal cases in which the defendant is convicted of a lesser crime, civil cases in which the jury comes back with the substantial amount that was predicted or with a smaller (but still substantial) amount than was predicted, and cases in which the attorney ignores the jury consultant's advice. For these reasons, I decided not to report supposed win-loss rates.

In 21 of these cases, questions were asked directly about the case. In all cases, general attitude questions were asked that were thought to be germane. The dependent variable is an index (either additive or using factor analysis) of these two types of questions. All potential background questions were included as independent variables. Many of these background variables were included as dummy variables. In some situations, transformed variables were included because of curvilinear relations.
In perusing Table 1, a couple of observations can be made. Cases truly differed from one another. The highest R2s (.50 and .45) were for two of the five pornography cases. However, in another pornography case, the R2 was half that of the two highest (.22). The three cases were almost identical in nature, and the independent variables employed were fairly similar. In looking at the cases sorted by R2, it becomes apparent that the case with the lower R2 came from an area of the country where religion plays a more central role than normal in shaping attitudes toward the issue.


None of the corporate cases had high R2s (bank fraud, anti-trust, slander). One might think that these cases were "boring" and that respondents had not developed firm attitudes. Although this might be partially true, somewhat low R2s also were seen in two of the three surveys on capital punishment (the capital case with a relatively high R2 [.35] had a defendant who was very well-known), in a well-publicized case against a religious leader, and in a much-publicized case involving terrorism. In the capital cases, it is quite possible that people are very conflicted on this issue. For example, religious people are usually more conservative on issues related to crime. However, religious respondents are more likely to oppose capital punishment than are nonreligious people, for reasons probably relating to a religious ethic of forgiveness. Perhaps these tendencies canceled each other out. In short, it is difficult to predict in what types of cases SJS will be most useful.

There is no scientific basis for deciding what is high or low. Nevertheless, my rule of thumb is that R2s greater than .30 signify substantial predictability (30% of the cases), R2s between .20 and .29 signify moderate predictability (17%), R2s between .10 and .19 signify some predictability (38%), and cases less than .10 signify poor predictability (10%).

Focus Groups/Mock Trial

Only in recent years have I begun to statistically analyze mock jury trials. Usually, the sample sizes are too small to allow for full-scale regression analysis. Instead, only simple cross-tabulations are utilized. Special caution is warranted in making inferences from focus groups. In particular, it is very difficult to obtain a representative sample.
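For these small focus-group cross-tabulations, association is typically summarized with Cramer's V, the measure reported for each mock trial in Table 2. A minimal implementation follows, applied to a hypothetical 2 × 2 verdict-by-background table; squaring V gives the "very crude comparison" to the telephone-survey R2s mentioned in the table's accompanying footnote.

```python
# Cramer's V for a two-way table of counts: sqrt(chi-square / (N * k)),
# where k = min(rows, cols) - 1.
from math import sqrt

def cramers_v(table):
    """table: list of rows of counts, e.g. a verdict-by-background cross-tab."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = sum((table[i][j] - row_tot[i] * col_tot[j] / n) ** 2
               / (row_tot[i] * col_tot[j] / n)
               for i in range(len(table)) for j in range(len(col_tot)))
    k = min(len(table), len(col_tot)) - 1
    return sqrt(chi2 / (n * k))

# Hypothetical 2 x 2 cross-tab: rows = juror sex, cols = not guilty / guilty.
table = [[10, 4], [5, 11]]
v = cramers_v(table)
print(f"Cramer's V = {v:.2f}, squared = {v ** 2:.2f}")
```

With samples of 24 to 40 mock jurors, even sizable V values can arise by chance, which is the point of the cautions that follow.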
In the typical mock trial, "jurors" are recruited by some variation of random telephone dialing, using past jury rolls, or by using a recruiting firm.4 After screening respondents on the telephone to make sure that they are jury-eligible, unlikely to be immediately removed for cause, and so forth, respondents are offered a fee (up to $250) for coming to either a half-day or full-day session. Jurors could view a videotape, or the trial could be enacted live. In both scenarios, opening and closing statements are given, witnesses are examined, and judges give instructions. The jurors complete individual questionnaires, and their group deliberations are videotaped.

4 In his meta-analysis, Bornstein (1999) found that in most jury simulations, it made little difference whether one used students versus real jurors, or whether one used a "live trial" or a brief written summary. I disagree, and believe that it is important to put substantial effort into at least approximating a random sample in mock trials. Left to their own devices, most focus-group firms will recruit people who have attended other focus groups in the past: in essence, professional focus-group attendees!

Mock jury trials have several advantages over telephone interviews. The mock trial is superior at simulating the reality of a trial. A 20-min telephone interview does not compare with a 4-hr focus group/mock trial. In the focus group/mock trial, it is easier to discuss complex issues and to get extended feedback. The information that one receives can be quite helpful in deciding issues around case strategy. Problems with mock trials (see Diamond, 1997) include the fact that it is difficult to get a representative sample; for the same amount of money, the sample sizes are far smaller; and it takes tremendous skill to develop an adequate script.

Table 2
Cramer's V of Scientific Jury Selection Focus Groups

 #  Focus group topic      Cramer's V   Type of case    N
 1. Internet drug sales       .55           CRM        27
 2. Pornography               .52           CRM        24
 3. Pornography               .48           CRM        40
 4. Oil spill                 .45           CRM        28
 5. Defense fraud             .41           CRM        36
 6. Breach of contract        .38           CIV        31
 7. Tax fraud                 .37           CIV        34
 8. Consumer fraud            .37           CRM        35
 9. Pornography               .34           CRM        34

Note. CRM = criminal; CIV = civil.

Table 2 lists the nine mock trials for which there are data. Sample sizes in these cases ranged from 24 to 40, and they occurred in five different states. In this table, I list the highest Cramer's V that was found for the variable that related most consistently to the dependent variable.5

5 Cramer's Vs will be higher than the R2s presented in Table 1 with the telephone surveys. For a very crude comparison, square the Cramer's Vs.

With the exception of one focus group, the results (direction and strength of relationships) were consistent with the three parallel telephone surveys that occurred. The one exception, upon further analysis, was suspect because of an unrepresentative sample for that focus group. Extreme caution is warranted in simply looking for the highest level of association. For


example, in the tax-fraud case that was discussed earlier, the highest Cramer's V was credible (.37). However, the results probably were simply what could occur with random numbers.6 Interpretation is often more an art form than an exact science, and should be applied in conjunction with a healthy sense of skepticism.

In three cases, I had data from focus groups as well as telephone surveys. This allowed me to assess the effectiveness of our predictions (for a similar methodology, see Frederick, 1984). In these three cases, I developed a regression equation from the demographic information available from the parallel telephone survey. I used this regression equation to make predictions for the matching focus-group jurors. In one pornography case, 58.3% of the 24 focus-group jurors voted not guilty. However, if I could choose the 12 best jurors, based on their predicted scores, 66.6% of the jurors would have voted not guilty: a gain of 1 juror. In a second set of focus groups for the same case (N = 32), the percentage of not-guilty verdicts increased from 34.4% to 50.0%: a gain of 2 jurors. In a second pornography case, 42.5% of the 40 focus-group jurors voted not guilty. Among the 12 jurors with the highest predicted scores, the percentage was 75.0%: a gain of 4 jurors.

A third case that I examined pointed to how different theories of the case might have different effects on jury selection. In this case (a defense contractor's fraud case), two factors emerged: (a) overall attitudes toward defense fraud; and (b) attitudes toward witnesses. Although the first factor appeared stronger (i.e., had a higher eigenvalue, a larger R2, and higher correlations with verdict questions in the telephone survey), using the results of its regression did not improve the makeup of the focus-group jurors. Before using the regression results, 69.4% of the 36 focus-group jurors voted not guilty.
After using the regression results, the percentage who voted not guilty among the top 12 jurors was almost identical (66.6%). However, when I used the results of the second regression, the percentage of jurors who voted not guilty among the top 12 jurors rose to 83.3%: a gain of 2 jurors.

6 Extreme caution is warranted in looking at the highest Cramer's V. In one experiment, I correlated 200 series of random numbers. I found a mean absolute correlation of .09, and the highest correlation with these random numbers was .39.

Of course, the rosy scenario that was described previously does not occur in real trials. One does not get to pick the 12 jurors that he or she wants, attorneys strike jurors instead of choosing them, the other side also uses strikes, not all demographic information will become available, and jurors do not necessarily tell the truth during voir dire. On the other hand, additional information becomes available during voir dire about juror attitudes that might overshadow any demographic information that will be

2430 RICHARD SELTZER obtained. Nevertheless, the results discussed previously put the regression results in a more intuitive context.
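The caution in Footnote 6 is easy to replicate. The sketch below, in Python, correlates 200 random series as in the footnote's experiment; the series length (80) is my assumption, since the footnote does not report one.

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 independent random series; the length (80) is an assumed value,
# since the footnote does not report how long each series was.
series = rng.standard_normal((200, 80))

# Pairwise correlations among all 200 series (19,900 unique pairs).
r = np.corrcoef(series)
off_diag = np.abs(r[np.triu_indices(200, k=1)])

print(f"mean |r| = {off_diag.mean():.2f}")  # comes out near the footnote's .09
print(f"max  |r| = {off_diag.max():.2f}")   # the largest pair looks "meaningful" despite being pure noise
```

With 19,900 pairwise correlations, a handful will clear almost any screening threshold by chance alone, which is why the text treats the single highest Cramer's V with skepticism.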
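The predict-and-rank exercise described above can be sketched as follows. The data here are synthetic (the real survey data are proprietary), and a linear probability model fit by ordinary least squares is my assumption about the form of the regression equation; the point is only the mechanics: fit on the telephone survey, score the matching focus-group jurors, and compare the not-guilty rate among the 12 highest-scoring jurors with the base rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic stand-in data (the real survey data are proprietary) ---
# Telephone survey: dummy-coded demographics and a 0/1 "leans not guilty".
n_survey = 300
X_survey = rng.integers(0, 2, size=(n_survey, 4)).astype(float)
true_w = np.array([0.25, -0.15, 0.10, 0.05])      # assumed effect sizes
p = np.clip(0.4 + 0.3 * (X_survey @ true_w), 0, 1)
y_survey = rng.binomial(1, p)

# Fit the regression equation (OLS linear probability model).
X1 = np.column_stack([np.ones(n_survey), X_survey])
w, *_ = np.linalg.lstsq(X1, y_survey, rcond=None)

# Focus group: same demographics plus actual mock verdicts.
n_focus = 36
X_focus = rng.integers(0, 2, size=(n_focus, 4)).astype(float)
y_focus = rng.binomial(1, np.clip(0.4 + 0.3 * (X_focus @ true_w), 0, 1))

# Score each focus-group juror; compare the base rate with the
# not-guilty rate among the 12 jurors with the highest predicted scores.
scores = np.column_stack([np.ones(n_focus), X_focus]) @ w
top12 = np.argsort(scores)[-12:]
base_rate = y_focus.mean()
top_rate = y_focus[top12].mean()
print(f"all {n_focus} jurors:          {base_rate:.1%} not guilty")
print(f"best 12 by predicted score: {top_rate:.1%} not guilty")
```

The "gain of jurors" figures in the text are simply the difference between these two rates, scaled to a 12-person jury.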

Which Variables Help to Predict

I can also shed some light on which variables are the most important. For each case, I coded up to three independent variables that had relatively high predictability. In this analysis, I have aggregated the telephone surveys and the focus groups. In general, I excluded variables whose betas (or Cramer’s Vs) were below .20. This was not a hard-and-fast rule, as jury profiling relies on more than one regression equation. There are situations in which a variable is perceived as important because of other analyses. Because of the proprietary nature of the data, it is inappropriate to list these variables with each case. However, I have aggregated them, as displayed in Table 3.

The “usual suspects” had the highest level of influence. That is, education, race, church attendance (in part, because of the pornography cases), and age were influential in over 25% of the 36 studies. Gender, media habits, employment status, and occupation were influential in between 10% and 25% of the studies. Some common variables had little predictability: income, number of children, marital status, and voter registration status.

Great caution must be used in interpreting these results. Not all questions were asked in each case. For example, income was asked in only about half of the cases (as was church attendance), and type of car was asked in only two cases. In addition, these 36 studies are a highly idiosyncratic sample of cases.
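The screening statistic mentioned above, Cramer's V, can be computed directly from a cross-tabulation of a demographic variable against verdict leaning. The table below is hypothetical, for illustration only:

```python
import numpy as np

def cramers_v(table):
    """Cramer's V for an r x c contingency table (no continuity correction)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    k = min(table.shape) - 1
    return np.sqrt(chi2 / (n * k))

# Hypothetical cross-tab of education (rows) by survey leaning
# (columns: guilty, not guilty); the counts are invented for illustration.
education = np.array([[40, 20],    # no college
                      [25, 35]])   # college

v = cramers_v(education)
print(f"Cramer's V = {v:.2f}")     # prints "Cramer's V = 0.25"

# The screening rule described in the text: keep variables at or above .20.
print("passes the .20 screen:", v >= 0.20)
```

For a 2 x 2 table, V reduces to the phi coefficient; the .20 cutoff is the rule of thumb described in the text, not a formal significance test.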

Table 3

Important Variables in Jury Selection

    Variable                               Number of          Percentage
                                           important cases    of studies
     1. Education                               11                31
     2. Race                                    10                28
     3. Church attendance                       10                28
     4. Age                                      9                25
     5. Gender                                   6                17
     6. Media (frequency of reading paper,
        watching TV, favorite magazine)          5                14
     7. Employment status                        5                14
     8. Occupation                               4                11
     9. Political party                          4                11
    10. Religion                                 3                 8
    11. Location (ZIP code, county)              3                 8
    12. Importance of church                     1                 3
    13. Income                                   1                 3
    14. Type of friend                           1                 3
    15. Number of children                       1                 3
    16. Where born                               1                 3
    17. Own home                                 1                 3
    18. Registered to vote                       1                 3
    19. Type of car                              1                 3
    20. Marital status                           1                 3

Post-Trial Interviews

Post-trial interviews are an integral part of SJS. Only by talking with real jurors can one understand what really happens in the jury room: why jurors made the decisions they did; whether or not the jury consultant’s theory of the case was correct; and whether or not the jury consultant’s predictions about the actions of the individual jurors were accurate. This certainly helps one to choose jurors and to decide case strategies in future cases.

Washington, D.C. Post-Trial Interviews

I conducted two systematic studies of actual jurors. In the first study, 190 jurors were interviewed in 31 separate criminal trials in the Superior Court of the District of Columbia between 1984 and 1987 (Seltzer, Venuti, & Lopes, 1991). Researchers observed the entire trial and conducted in-depth, face-to-face interviews with jurors after the verdicts were returned. The cases ranged across the gamut of criminal cases: from simple assault to distribution of PCP to murder. During these interviews, jurors were asked how they voted initially in the jury room and were asked standard demographic questions. A 3-point scale was used as the dependent variable (guilty, don’t know, or not guilty). The R² was .13. The most important variables (out of more than 20 that were included) were whether or not the juror had ever been a crime

victim, whether or not the juror had ever taken a street law course (a course taught in the Washington, D.C. public schools), whether or not the juror had ever been a police officer, and occupation. Although there are statistical problems with using 3-point scales as dependent variables in regression equations, the results point to the finding that using demographic variables to predict juror verdicts has some success.

Maryland Post-Trial Interviews in Capital Cases

In 1986 and 1987, interviews of 38 jurors from 8 capital cases were conducted throughout the state of Maryland. These interviews were face to face and averaged 1 hr. Jurors were asked a standard set of demographic questions, as well as how they voted initially during the sentencing phase. Again, a 3-point dependent variable was used (life, don’t know, or death). The R² was .21 for a regression equation using three independent variables (religiosity, age, and whether or not the juror had ever been a victim of a serious crime). The sample size was low, but the R² is similar to those seen in the D.C. post-trial interviews (X̄ = .23), as well as the mean R² found in the 4 death-penalty studies seen in Table 1 (X̄ = .19).

Does Scientific Jury Selection Work?

The answer to this seemingly simple question is not straightforward. Clearly, there has been a great deal of hype on the issue of SJS. Jurors cannot be predicted with the type of accuracy associated with experiments in physics. However, the R²s seen in the previous examples suggest that some predictability is possible and that the level of predictability is affected by the unique characteristics of each case. As Bonazzoli (1998) noted, SJS requires researchers to focus on “case-specific facts, locality, current manifestations of community bias, and other factors particular to the case currently being litigated” (p. 303). Critics of SJS have noted that attorneys are not helpless without consultants.
One really needs to compare the results of SJS not just with a random venire, but also with jury selection conducted by attorneys without the benefit of consultants. I have no data to help answer this question except the anecdotal: In about half (I have no way of backing up this number) of the cases on which I worked, the attorneys expressed surprise at the results. In over half of the cases, I have seen the attorney for the opposition (who did not use a jury consultant) make what I would consider a serious blunder.

It would be too simplistic to suggest that we can estimate our ability to predict, given the R² of a regression equation. There are simply too many problems with our instruments and our ability to replicate the jury experience in a mock trial, particularly a telephone interview. However, it is clear that when R²s are “respectable” (i.e., ≥ .15), our use of SJS is likely to be of some help in choosing the jury, particularly when there is limited voir dire. In close cases, it might even have an impact on the jury decision.

I wish to thank Marjorie Fargo and Jeff Frederick for their helpful comments on this article.

References

Anderson, J. F. (1998). Catch me if you can! Resolving the ethical tragedies in the brave new world of jury selection. New England Law Review, 32, 344-400.
Baldus, D. C., Woodworth, G., Zuckerman, D., Weiner, N. A., & Broffit, B. (2001). The use of peremptory challenges in capital murder trials: A legal and empirical analysis. University of Pennsylvania Journal of Constitutional Law, 3, 3-170.
Barber, J. W. (1994). The jury is still out: The role of jury science in the modern American courtroom. American Criminal Law Review, 31, 1225-1252.
Batson v. Kentucky, 476 U.S. 79 (1986).
Bonazzoli, M. J. (1998). Jury selection and bias: Debunking invidious stereotypes through science. Quinnipiac Law Review, 18, 247-305.
Bornstein, B. H. (1999). The ecological validity of jury simulations: Is the jury still out? Law and Human Behavior, 23, 75-91.
Bowers, W. J., Steiner, B. D., & Sandys, M. (2001). Death sentencing in Black and White: An empirical analysis of the role of jurors’ race and jury racial composition. University of Pennsylvania Journal of Constitutional Law, 3, 171-274.
Brown, L. T., Jr. (2003). Racial discrimination in jury selection: Professional misconduct, not legitimate advocacy. Review of Litigation, 22, 209-317.
Diamond, S. S. (1990). Scientific jury selection: What social scientists know and do not know. Judicature, 73, 178-183.
Diamond, S. S. (1997). Illuminations and shadows from jury simulations. Law and Human Behavior, 21, 561-571.
Eisenberg, T., Garvey, S. P., & Wells, M. T. (2001). The deadly paradox of capital jurors. Southern California Law Review, 74, 371-397.
Ellsworth, P. C. (1993). Some steps between attitudes and verdicts. In R. Hastie (Ed.), Inside the juror: The psychology of juror decision making (pp. 42-64). Cambridge, UK: Cambridge University Press.
Etzioni, A. (1974). Creating an imbalance. Trial, pp. 28, 30.

Feild, H. F., & Barnett, N. J. (1978). Simulated jury trials: Student vs. “real” people as jurors. Journal of Social Psychology, 104, 287-293.
Frederick, J. T. (1984). Social science involvement in voir dire: Preliminary data on the effectiveness of scientific jury selection. Behavioral Sciences and the Law, 2, 375-394.
Fulero, S. M., & Penrod, S. D. (1990). The myths and reality of attorney jury selection folklore and scientific jury selection: What works? Ohio Northern University Law Review, 17, 229-253.
Hastie, R., Penrod, S. D., & Pennington, N. (1983). Inside the jury. Cambridge, MA: Harvard University Press.
Hepburn, J. R. (1980). The objective reality of evidence and the utility of systematic jury selection. Law and Human Behavior, 4, 89-101.
Hoffman, M. B. (1997). Peremptory challenges should be abolished: A trial judge’s perspective. University of Chicago Law Review, 64, 809-871.
Hoffman, M. B. (1999). Abolishing peremptory challenges. Judicature, 82, 203.
Jonakait, R. N. (2003). The American jury system. New Haven, CT: Yale University Press.
King, N. J. (1993). Post-conviction review of jury discrimination: Measuring the effects of juror race on jury decisions. Michigan Law Review, 92, 63-130.
Krauss, E., & Bonora, B. (2003). Jurywork: Systematic techniques. New York: Clark Boardman.
Kressel, N. J., & Kressel, D. F. (2002). Stack and sway: The new science of jury consulting. Boulder, CO: Westview.
Lane, M. E. (1999). Twelve carefully selected not so angry men: Are jury consultants destroying the American legal system? Suffolk University Law Review, 32, 463-480.
Lilly, G. C. (2001). The decline of the American jury. University of Colorado Law Review, 72, 53-91.
McConahay, J. B., Mullin, C. J., & Frederick, J. (1977). The uses of social science in trials with political and racial overtones: The trial of Joan Little. Law and Contemporary Problems, 41, 205-229.
Mills, C. J., & Bohannon, W. E. (1980). Juror characteristics: To what extent are they related to jury verdicts? Judicature, 64, 23-31.
Montz, V. T., & Montz, C. L. (2000). The peremptory challenge: Should it still exist? An examination of federal and Florida law. University of Miami Law Review, 54, 451-495.
Moran, G., & Comfort, J. C. (1982). Scientific jury selection: Sex as moderator of demographic and personality predictors of impaneled felony juror behavior. Journal of Personality and Social Psychology, 43, 1052-1063.


Patterson, A. H. (1986). Scientific jury selection: The need for a case-specific approach. Social Action and the Law, 11, 105-109.
Penrod, S. (1980). Study of attorney and scientific jury selection models. Doctoral dissertation, Harvard University.
Rachlinski, J. J. (1993). Scientific jury selection and the equal protection rights of venire persons. Pacific Law Review, 24, 1497-1566.
Rose, M. R. (2003). The jury in practice: A voir dire of voir dire: Listening to jurors’ views regarding the peremptory challenge. Chicago–Kent Law Review, 78, 1061-1098.
Sachs, P. (1997, March/April). Standardized testing: Meritocracy’s crooked yardstick. Change, 24-31.
Saks, M. J. (1976). The limits of scientific jury selection: Ethical and empirical. Jurimetrics, 3, 3-22.
Saks, M. J. (1997). What do jury experiments tell us about how juries (should) make decisions? Southern California Interdisciplinary Law Journal, 6, 1-53.
Schulman, J., Shaver, P., Coleman, R., Emrich, B., & Christie, R. (1973). Recipe for a jury. Psychology Today, 37(6), 37-44.
Seltzer, R., Venuti, M. A., & Lopes, G. M. (1991). Juror honesty during the voir dire. Journal of Criminal Justice, 19, 451-462.
Smith, R. C., & Seltzer, R. (2000). Contemporary controversies and the American racial divide. Lanham, MD: Rowman & Littlefield.
Sommers, S. R., & Ellsworth, P. C. (2000). Race in the courtroom: Perceptions of guilt and dispositional attributions. Personality and Social Psychology Bulletin, 26, 1367-1379.
Sommers, S. R., & Ellsworth, P. C. (2001). White juror bias: An investigation of prejudice against Black defendants in the American courtroom. Psychology, Public Policy, and Law, 7, 201-229.
Strier, F. (1999). Whither trial consulting? Issues and projections. Law and Human Behavior, 23, 93-115.
Strier, F., & Shestowsky, D. (1999). Profiling the profilers: A study of the trial consulting profession, its impact on trial justice, and what, if anything, to do about it. Wisconsin Law Review, 3, 441-499.
Visher, C. A. (1987). Juror decision making: The importance of evidence. Law and Human Behavior, 11, 1-17.
Zeisel, H., & Diamond, S. S. (1978). The effect of peremptory challenges on jury and verdict: An experiment in a federal district court. Stanford Law Review, 30, 491-531.
