Journal of Statistical Planning and Inference 104 (2002) 439–458

www.elsevier.com/locate/jspi

Comparisons of search designs using search probabilities

Subir Ghosh*, Lance Teschmacher

Department of Statistics, University of California, Riverside, CA 92521-0138, USA

Received 14 March 2000; received in revised form 20 March 2001; accepted 26 June 2001

Abstract

Search designs are considered for searching and estimating one nonzero interaction from the two and three factor interactions under the search linear model. We compare three 12-run search designs D1, D2, and D3, and three 11-run search designs D4, D5, and D6, for a 2^4 factorial experiment. Designs D2 and D3 are orthogonal arrays of strength 2, D1 and D4 are balanced arrays of full strength, D5 is a balanced array of strength 2, and D6 is obtained from D3 by deleting the duplicate run. Designs D4 and D5 are also obtained by deleting a run from D1 and D2, respectively. Balanced arrays and orthogonal arrays are commonly used factorial designs in scientific experiments. "Search probabilities" are calculated for the comparison of search designs. Three criteria based on search probabilities are presented to determine the design which is most likely to identify the nonzero interaction. The calculation of these search probabilities depends on an unknown parameter ρ which has a signal-to-noise ratio form. For a given value of ρ, Criteria I and II are newly proposed in this paper and Criterion III is given in Shirakura et al. (Ann. Statist. 24 (6) (1996) 2560). We generalize Criteria I-III for all values of ρ so that the comparison of search designs can be made without requiring a specific value of ρ. We have developed simplified methods for comparing designs under these three criteria for all values of ρ. We demonstrate, under all three criteria, that the balanced array D1 is more likely to identify the nonzero interaction than the orthogonal arrays D2 and D3, and the design D4 is more likely to identify the nonzero interaction than the designs D5 and D6. The methods of comparing designs developed in this paper are applicable to other factorial experiments for searching one nonzero interaction of any order. © 2002 Elsevier Science B.V. All rights reserved.
MSC: primary 62K15

Keywords: Factorial designs; Interactions; Resolution III plans; Search designs; Search linear models; Search probabilities

1. Introduction

Consider a 2^m factorial experiment with n runs, where n < 2^m. A resolution III plan permits us to estimate the general mean and the main effects under the standard

* Corresponding author. E-mail address: [email protected] (S. Ghosh).

0378-3758/02/$ - see front matter © 2002 Elsevier Science B.V. All rights reserved. PII: S0378-3758(01)00258-0

440 S. Ghosh, L. Teschmacher / Journal of Statistical Planning and Inference 104 (2002) 439–458

linear model assuming no interactions are present. Such an assumption may not be true in reality because there can be a few nonzero interactions present. Consequently, the estimates of the parameters are biased. This motivates the use of "search designs" for estimating the general mean and main effects as well as searching and identifying the nonzero interactions under the "search linear model" introduced in Srivastava (1975). In this paper, we consider the problem of searching and identifying one nonzero interaction from the two and three factor interactions. A design for resolving this problem is called a search design (see Srivastava, 1975). Once the nonzero interaction is identified, we can test its significance using the standard statistical procedures (see Srivastava, 1975; Ghosh, 1987). We consider a class of linear models containing the general mean, main effects, and one interaction from either the two or three factor interactions. For any two models in this class, the general mean and main effects are common parameters, but the interaction terms are different. There are ν₂ = C(m,2) + C(m,3) possible models in this class. A search procedure identifies the model which best fits the data generated from the search design. This model identifies the possible nonzero interaction. To search for the nonzero interaction, we use the sum of squares of error (SSE) of each model (Srivastava, 1975). For example, if the SSE for the first model (M1) is smaller than the SSE for the second model (M2), then M1 provides a better fit and is selected over M2. Consequently, the nonzero interaction term is more likely to be in M1 than in M2. Under the search procedure, we fit all ν₂ models to the data, and the model with the smallest SSE is used to identify and estimate the possible nonzero interaction. As an example, suppose the number of factors, m, is 4. The number of two and three factor interactions is ν₂ = C(4,2) + C(4,3) = 10.
Hence, there are 10 models to fit and the model with the smallest SSE is selected. The probability of selecting one model over another depends on σ², the noise. To see this, let M0 be the true model in the class of models described above. Furthermore, let M1 be a competing model, M1 ≠ M0. In the noiseless case, σ² = 0, the SSE for M0, SSE(M0), is zero, which is always smaller than SSE(M1). Hence, M0 will definitely be selected over M1, and the correct nonzero interaction will always be identified with probability one. Thus, P[SSE(M0) < SSE(M1) | M0, M1, σ² = 0] = 1. In reality σ² > 0 and SSE(M0) may not be less than SSE(M1). Therefore, M0 may not necessarily be selected over M1. Hence, the probability of correctly identifying the nonzero interaction is less than one, and we write P[SSE(M0) < SSE(M1) | M0, M1, σ² > 0] < 1. In the case of infinite noise, M0 and M1 are indistinguishable, and so the probability of selecting M0 over M1 is 1/2; we write P[SSE(M0) < SSE(M1) | M0, M1, σ² = ∞] = 1/2. For 0 < σ² < ∞, P[SSE(M0) < SSE(M1) | M0, M1, σ²] is called the search probability for a given M0, M1, and σ². Note that the search probability is between 1/2 and 1. The calculation of the search probability is based on the normality assumption for the models, and will be presented later. There are many of these search probabilities to consider. We note that for a given true model M0, there are (ν₂ − 1) competing models M1. Since the true model M0
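The SSE-based selection rule just described can be sketched in code. The following is our own illustrative implementation, not part of the original paper; the function names are hypothetical, and the run set used in the usage example is the design D1 as we read it from Table 1.

```python
# Illustrative sketch of the SSE-based search procedure (assumed notation):
# fit every model containing the general mean, all main effects, and one
# 2- or 3-factor interaction; select the model with the smallest SSE.
import itertools
import numpy as np

def sse(X, y):
    """Residual sum of squares after least-squares fitting of X to y."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def search_interaction(D, y):
    """D: n x m design matrix with +/-1 levels; y: n observations.
    Returns the factor-index tuple of the interaction whose model fits best."""
    n, m = D.shape
    A1 = np.column_stack([np.ones(n), D])        # general mean and main effects
    candidates = (list(itertools.combinations(range(m), 2))
                  + list(itertools.combinations(range(m), 3)))
    return min(candidates,
               key=lambda c: sse(np.column_stack([A1, D[:, c].prod(axis=1)]), y))
```

In the noiseless case (σ² = 0) the true model attains SSE(M0) = 0 exactly, so for a search design the procedure recovers the nonzero interaction with certainty, matching the discussion above.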


is unknown, we consider all ν₂(ν₂ − 1) possible pairs (M0, M1) and calculate all the search probabilities for a given σ². From these search probabilities, we form a ν₂ × ν₂ search probability matrix, SPM, where the columns correspond to the possible true models and the rows correspond to the possible competing models. The off-diagonal elements of the SPM represent the search probabilities corresponding to all possible pairs of M0 and M1 for a given σ². Since the true model M0 is different from the competing model M1, the diagonal elements of the SPM are not meaningful and are therefore left blank. For m = 4, there are 90 off-diagonal search probability elements of the SPM. As m increases, this number goes up very fast. When comparing two designs, we would like to determine which design has a greater chance of identifying the true nonzero interaction term. One way to achieve this is to find which of these two designs has, in general, the higher search probabilities for a given σ². A method for doing this is by comparing the SPMs of the two designs for a given σ². The following search linear model is now considered for a 2^m factorial experiment:

E(y) = A₁ξ₁ + A₂ξ₂,   V(y) = σ²I,   (1)

where y (n × 1) is the vector of observations, ξ₁ (ν₁ × 1), ν₁ = 1 + m, is the vector of the general mean and main effects, and ξ₂ (ν₂ × 1), ν₂ = C(m,2) + C(m,3), is the vector of two and three factor interactions. The matrices A₁ and A₂ are design dependent. The elements of ξ₁ are unknown parameters. We know that at most one element of ξ₂ is nonzero but we do not know which element is nonzero. The goal is to search for and identify the nonzero element of ξ₂ and then estimate it along with the elements of ξ₁. Such a model is called a search linear model. When ξ₂ = 0, the search linear model becomes the standard linear model. Let A₂₂ be any (n × 2) submatrix of A₂. A design is a search design (Srivastava, 1975) if, for every submatrix A₂₂,

Rank[A₁, A₂₂] = ν₁ + 2.   (2)
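Condition (2) can be checked mechanically. The sketch below is our own illustration (hypothetical function names, not from the paper): it verifies the rank condition for every pair of two and three factor interaction columns.

```python
# Check the search-design condition (2): for every (n x 2) submatrix A22 of
# A2, the matrix [A1, A22] must have full column rank nu1 + 2.
import itertools
import numpy as np

def is_search_design(D):
    """D: n x m design matrix with +/-1 levels. True if condition (2) holds
    for the class of all 2- and 3-factor interactions."""
    n, m = D.shape
    A1 = np.column_stack([np.ones(n), D])                 # nu1 = 1 + m columns
    cols = [D[:, c].prod(axis=1)
            for k in (2, 3) for c in itertools.combinations(range(m), k)]
    nu1 = 1 + m
    return all(np.linalg.matrix_rank(np.column_stack([A1, ci, cj])) == nu1 + 2
               for ci, cj in itertools.combinations(cols, 2))
```

A regular 2^(4-1) fraction fails this check, since the defining relation aliases pairs of interactions and makes two interaction columns identical.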

In Table 1, we present three search designs, D1, D2, and D3, which satisfy (2). For all three designs, we have m = 4 and n = 12. Design D1 is a balanced array of full strength (Srivastava, 1972), D2 is an orthogonal array of strength 2 obtained from the 12-run Plackett-Burman design (Plackett and Burman, 1946), and D3 is an orthogonal array of strength two w.r.t. columns A, B, and C. Balanced and orthogonal arrays are commonly used factorial designs. In Table 2, we present three 11-run search designs, D4, D5, and D6, obtained from D1, D2, and D3 by deleting one run. Design D4 is a balanced array of full strength, D5 is a balanced array of strength 2, and D6 is given in Ohnishi and Shirakura (1985). In this paper, we compare the three designs in each table to determine which design is more likely to identify the nonzero interaction from ξ₂. To determine this, we

Table 1
D1, D2, and D3 with 12 runs and 4 factors

Run      D1              D2              D3
      A  B  C  D      A  B  C  D      A  B  C  D
 1    +  +  +  +      +  −  +  −      −  +  −  −  (a)
 2    −  −  −  −      +  +  −  +      +  +  +  +
 3    −  −  −  +      −  +  +  −      −  −  −  −
 4    −  −  +  −      +  −  +  +      −  −  +  −
 5    −  +  −  −      +  +  −  +      −  +  −  −  (a)
 6    +  −  −  −      +  +  +  −      +  −  −  −
 7    −  −  +  +      −  +  +  +      −  −  +  +
 8    −  +  −  +      −  −  +  +      −  +  +  +
 9    +  −  −  +      −  −  −  +      +  −  −  +
10    −  +  +  −      +  −  −  −      +  +  +  −
11    +  −  +  −      −  +  −  −      +  −  +  +
12    +  +  −  −      −  −  −  −      +  +  −  +

(a) Represents identical runs.

Table 2
D4, D5, and D6 with 11 runs and 4 factors

Design    Obtained from the design in Table 1    By deleting the run
D4        D1                                     (− − − −)
D5        D2                                     (− − − −)
D6        D3                                     (− + − −)
develop three criteria for the pairwise comparison of designs. These criteria use "search probabilities" derived from the search linear model given in (1). In a pairwise comparison between two designs, Di and Dj, we say that Di is "better" than Dj if it is determined that Di is more likely to identify the nonzero interaction than Dj under a particular criterion. In general, we find that D1 is "better" than D2 and D3, and that D4 is "better" than D5 and D6. In Section 2, we present the expression for the search probability. Section 3 presents the three criteria for pairwise comparisons between designs based on search probabilities. Sections 4 and 5 present the pairwise comparisons of D1, D2, and D3 as well as D4, D5, and D6. The appendix contains theoretical results for these pairwise comparisons of designs using search probabilities.

2. Search probabilities

Let β₀ be the true unknown nonzero element of ξ₂ and β be any other element of ξ₂; let a(β₀) be the column of A₂ for β₀ and a(β) the column of A₂ for β. Consider the following two models from (1):

M0: E(y) = A₁ξ₁ + a(β₀)β₀,   V(y) = σ²I,
M1: E(y) = A₁ξ₁ + a(β)β,     V(y) = σ²I.   (3)

Note that M0 and M1 are distinguishable by β₀ and β, respectively.


The search procedure described in Srivastava (1975) selects the true model M0 over a competing model M1 if SSE(M0) < SSE(M1), where SSE is the sum of squares due to error. The search probability for a given M0, M1, and σ² is defined as

P[SSE(M0) < SSE(M1) | M0, M1, σ²].   (4)

Assuming y is N(A₁ξ₁ + a(β₀)β₀, σ²I), the expression of the search probability given in Shirakura et al. (1996) can be rewritten as

G(β₀, β; ρ) = 0.5 + 2[Φ(c₁(β₀, β)ρ) − 0.5][Φ(ρ√(r(β₀, β₀) − c₁²(β₀, β))) − 0.5],   (5)

where Φ(·) is the standard normal cdf,

Q = A₁(A₁′A₁)⁻¹A₁′,
r(β₀, β) = a′(β₀)[I − Q]a(β),
r(β₀, β₀) = a′(β₀)[I − Q]a(β₀),
r(β, β) = a′(β)[I − Q]a(β),
x(β₀, β) = r(β₀, β)/√(r(β₀, β₀)r(β, β)),
c₁(β₀, β) = √{(r(β₀, β₀)/2)(1 − |x(β₀, β)|)},
ρ = |β₀|/σ.   (6)
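For concreteness, (5) and (6) can be evaluated numerically. The sketch below is our own code under the stated assumptions (hypothetical function names; the design used in the test is D1 as we read it from Table 1).

```python
# Numerical evaluation of (5)-(6) for a +/-1 design: Q is the projector onto
# the column space of A1, and rho = |beta0| / sigma.
from math import erf, sqrt
import numpy as np

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def search_probability(D, pair0, pair1, rho):
    """pair0, pair1: factor-index tuples for the interactions beta0 and beta.
    Returns (c1, r00, G) as defined in (5)-(6)."""
    n, m = D.shape
    A1 = np.column_stack([np.ones(n), D])
    P = np.eye(n) - A1 @ np.linalg.pinv(A1)      # I - Q
    a0 = D[:, pair0].prod(axis=1)
    a1 = D[:, pair1].prod(axis=1)
    r00, r11, r01 = a0 @ P @ a0, a1 @ P @ a1, a0 @ P @ a1
    x = r01 / sqrt(r00 * r11)
    c1 = sqrt((r00 / 2.0) * (1.0 - abs(x)))
    G = 0.5 + 2.0 * (Phi(c1 * rho) - 0.5) * (Phi(rho * sqrt(r00 - c1 ** 2)) - 0.5)
    return c1, r00, G
```

For D1 with β₀ the AB interaction and β the CD interaction, this gives c₁ = 1.6330 and r(β₀, β₀) = 10.6667 (element b of Table 5), and G increases from 1/2 toward 1 as ρ grows, as noted in Section 1.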

Notice that the search probability expression in (5) depends on M0 through β₀, on M1 through β, and on σ² through ρ. In the appendix, we present Theorems 1 and 2 and Corollary 1. Theorem 1 gives some relevant properties of G(β₀, β; ρ). Theorem 2 presents the comparison between Gi(β₀, β; ρ) of Di and Gj(β₀, β; ρ) of Dj, for all ρ, by comparing c₁^Di(β₀, β) with c₁^Dj(β₀, β) and ri(β₀, β₀) with rj(β₀, β₀). Corollary 1 compares the search probabilities from the same design using the results of Theorem 2. For a given M0, there are (ν₂ − 1) choices of M1 in (4). Since we do not know what M0 is, there are ν₂(ν₂ − 1) search probabilities G(β₀, β; ρ) to consider.

3. Three criteria for pairwise comparison between designs

For a design Di, we form a ν₂ × ν₂ search probability matrix, SPMi(ρ), whose elements are Gi(β₀, β; ρ) defined in (5). The columns of SPMi(ρ) correspond to all β₀ representing possible true models M0 and the rows correspond to all β representing possible competing models M1. Since β₀ ≠ β, the diagonal elements of the SPM are not meaningful and are therefore left blank.

3.1. Criterion I

We now compare two search designs, Di and Dj, by using the matrix (SPMi(ρ) − SPMj(ρ)). Let n₁⁺ and n₁⁻ be the numbers of strictly positive and strictly negative off-diagonal elements of (SPMi(ρ) − SPMj(ρ)). Note that n₁⁺ and n₁⁻ depend on i, j, and ρ. Clearly, n₁⁺ + n₁⁻ ≤ ν₂(ν₂ − 1), where the equality holds when there are no zero off-diagonal elements. Under Criterion I, if n₁⁺ > ν₂(ν₂ − 1)/2 for all ρ, then the majority of search probabilities for Di are greater than the search probabilities for Dj. Consequently, Di is considered "better" than Dj for all ρ under Criterion I.


3.2. Criterion II

To reduce the number of comparisons under Criterion I, we note that only one column of the SPM represents the true unknown interaction term β₀. Hence, we examine the columns of SPMi(ρ). In each column, by sufficiently increasing the minimum search probability, we increase the remaining search probabilities in that column. We now construct the minimum search probability vector MSPVi(ρ) from the minimum search probabilities in each column of SPMi(ρ). For comparing Di and Dj, we use the vector (MSPVi(ρ) − MSPVj(ρ)). Let n₂⁺ and n₂⁻ be the numbers of strictly positive and strictly negative elements of this vector. Note that n₂⁺ and n₂⁻ depend on i, j, and ρ. Clearly n₂⁺ + n₂⁻ ≤ ν₂, where the equality holds when there are no zero elements of the above vector. Under Criterion II, if n₂⁺ > ν₂/2 for all ρ, then the majority of minimum search probabilities for Di are greater than the minimum search probabilities for Dj. Consequently, Di is considered "better" than Dj for all ρ under Criterion II.

3.3. Criterion III

To further reduce the number of comparisons under Criterion II, we examine the global minimum search probability GMSPi(ρ) in the SPMi(ρ), which is also the minimum search probability in the MSPVi(ρ). Sufficiently increasing the GMSPi(ρ) will increase the search probabilities in both the SPMi(ρ) and the MSPVi(ρ). For comparing Di and Dj, if GMSPi(ρ) − GMSPj(ρ) > 0 for all ρ, then Di is considered "better" than Dj for all ρ under Criterion III.

3.4. Pairwise comparisons

Note that nᵤ⁺ and nᵤ⁻, u = 1, 2, depend on the true value of ρ, which is unknown. Thus, for a meaningful comparison between two designs under Criteria I and II, we would need to calculate nᵤ⁺ and nᵤ⁻ for all values of ρ > 0. This enormous task can be greatly simplified by using Theorems 2 and 3 and Proposition 1 given in the appendix. In developing these simplified methods, we generalize nᵤ⁺ and nᵤ⁻ to ñᵤ⁺ and ñᵤ⁻, respectively, which do not depend on ρ.
For this generalization, let c₁^Di(β₀, β) and c₂^Di(β₀, β) be the values of c₁(β₀, β) and c₂(β₀, β) for the ith design. Furthermore, let ri(β₀, β₀) be the value of r(β₀, β₀) and Gi(β₀, β; ρ) the value of G(β₀, β; ρ) for the ith design. In Table 3, we define ñᵤ⁺, ñᵤ⁻, and ñᵤ⁰ as the numbers of comparisons under the uth criterion. Note that ñ₁⁺ + ñ₁⁻ + ñ₁⁰ = ν₂(ν₂ − 1) and ñ₂⁺ + ñ₂⁻ + ñ₂⁰ = ν₂. Note that for any ρ, n₁⁺ ≥ ñ₁⁺ and n₁⁻ ≥ ñ₁⁻. Thus, if ñ₁⁺ > ν₂(ν₂ − 1)/2, then Di is "better" than Dj under Criterion I, for all ρ, by Theorem 3. Similarly, for any ρ, n₂⁺ ≥ ñ₂⁺ and n₂⁻ ≥ ñ₂⁻. Thus, if ñ₂⁺ > ν₂/2, then Di is "better" than Dj under Criterion II, for all ρ, by Theorem 3.
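The ρ-free counting that underlies these generalized criteria can be sketched as follows. This is our own illustration under the stated assumptions; the dominance rule encodes Theorem 2 as described above: when c₁ and r(β₀, β₀) are both at least as large for Di as for Dj, with at least one inequality strict, the search probability is larger for Di for every ρ > 0.

```python
# Sketch of the rho-free comparison: compute (c1, r00) for every ordered pair
# of distinct interactions, then count positions where design Di dominates Dj.
import itertools
import numpy as np

def c1_r_table(D):
    """Map (pair0, pair1) -> (c1, r00) over all ordered pairs of distinct
    2- and 3-factor interactions of the +/-1 design D."""
    n, m = D.shape
    A1 = np.column_stack([np.ones(n), D])
    P = np.eye(n) - A1 @ np.linalg.pinv(A1)
    ints = (list(itertools.combinations(range(m), 2))
            + list(itertools.combinations(range(m), 3)))
    col = {p: D[:, p].prod(axis=1) for p in ints}
    table = {}
    for p0, p1 in itertools.permutations(ints, 2):
        r00 = col[p0] @ P @ col[p0]
        x = (col[p0] @ P @ col[p1]) / np.sqrt(r00 * (col[p1] @ P @ col[p1]))
        table[(p0, p1)] = (np.sqrt((r00 / 2.0) * (1.0 - abs(x))), r00)
    return table

def criterion_I_counts(Di, Dj, tol=1e-8):
    """Return (n1_plus, n1_minus): SPM positions where Di (resp. Dj) dominates
    for every rho; positions with identical (c1, r00) are skipped."""
    ti, tj = c1_r_table(Di), c1_r_table(Dj)
    plus = minus = 0
    for key in ti:
        (ci, ri), (cj, rj) = ti[key], tj[key]
        if abs(ci - cj) < tol and abs(ri - rj) < tol:
            continue                      # identical elements: no comparison
        if ci >= cj - tol and ri >= rj - tol:
            plus += 1
        elif ci <= cj + tol and ri <= rj + tol:
            minus += 1
    return plus, minus
```

For the designs D1 and D2 of Table 1 (as we read them), this reproduces ñ₁⁺ = 66 and ñ₁⁻ = 0 of Table 11, so D1 is "better" than D2 under Criterion I for all ρ.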

Table 3
The ñ₁⁺, ñ₁⁻, ñ₁⁰, ñ₂⁺, ñ₂⁻, and ñ₂⁰

Criterion   Number of      c₁^D1(β₀, β) vs.   r₁(β₀, β₀) vs.   Theorem with
            comparisons    c₁^D2(β₀, β)       r₂(β₀, β₀)       β₀ = β₀*, β = β*
I           ñ₁⁺            ≥                  ≥                2(b)
I           ñ₁⁻            ≤                  ≤                2(a)
I           ñ₁⁰            ≥ (resp. ≤)        ≤ (resp. ≥)      2(c)

Criterion   Number of      c₁^D1(β₀, ·) vs.   r₁(β₀, β₀) vs.   Theorem with
            comparisons    c₁^D2(β₀, β*)      r₂(β₀, β₀)       β₀ = β₀* only
II          ñ₂⁺            ≥                  ≥                2(b)
II          ñ₂⁻            ≤                  ≤                2(a)
II          ñ₂⁰            ≥ (resp. ≤)        ≤ (resp. ≥)      2(c)
To calculate GMSPi(ρ), we need to find the element of MSPVi(ρ) that has both the smallest c₁^Di(β₀, β) and the smallest r(β₀, β₀). Such an element may not exist. In such cases, we use Corollary 1 and form a subset, Si(ρ), of possible candidates for the GMSPi(ρ), having either the smallest c₁^Di(β₀, β) or the smallest r(β₀, β₀). Under Criterion III, we make all pairwise comparisons between c₁^Di(β₀, β) and c₁^Dj(β₀, β) as well as between ri(β₀, β₀) and rj(β₀, β₀). If Theorem 2(b) holds for each pairwise comparison, then Di is "better" than Dj for all ρ. Similarly, if Theorem 2(a) holds, then Dj is "better" than Di for all ρ.

4. Comparison of D1, D2, and D3

The SPMi(ρ) for Di, i = 1, 2, 3, are presented in Table 4. Note that SPM1(ρ) has 5 distinct elements denoted by a-e, SPM2(ρ) has 2 distinct elements denoted by f and h, and SPM3(ρ) has 28 distinct elements denoted by ℓ(i), i = 01, ..., 28. Tables 5 and 6 present the values of c₁^Di(β₀, β) and ri(β₀, β₀) for Di, i = 1, 2, 3. For pairwise comparisons between Di and Dj, i < j, i, j ∈ {1, 2, 3}, we use Theorem 2 and form Tables 7-9. Table 10 presents the MSPVi(ρ), i = 1, 2, 3. The MSPVi(ρ) is obtained from the SPMi(ρ) by using Proposition 1. From Tables 7-9, we determine the values of ñᵤ⁺, ñᵤ⁻, and ñᵤ⁰ for u = 1, 2, which are given in Table 11. From Table 11, ñ₁⁺ > ν₂(ν₂ − 1)/2 = 45 for D1 vs. D2 and for D1 vs. D3. Therefore, by Theorem 3(b.1), D1 is "better" than both D2 and D3 under Criterion I for all ρ. Since neither ñ₁⁺ nor ñ₁⁻ is greater than ν₂(ν₂ − 1)/2, the comparison between D2 and D3 is inconclusive for all values of ρ under Criterion I.

Table 4
The SPMi(ρ) for Di, i = 1, 2, and 3 (columns correspond to the possible true models, rows to the possible competing models)

SPM1(ρ):
—  a  a  a  a  b  d  d  d  d
a  —  a  a  b  a  d  d  d  d
a  a  —  b  a  a  d  d  d  d
a  a  b  —  a  a  d  d  d  d
a  b  a  a  —  a  d  d  d  d
b  a  a  a  a  —  d  d  d  d
c  c  c  c  c  c  —  e  e  e
c  c  c  c  c  c  e  —  e  e
c  c  c  c  c  c  e  e  —  e
c  c  c  c  c  c  e  e  e  —

SPM2(ρ):
—  f  f  f  f  h  f  f  h  h
f  —  f  f  h  f  f  h  f  h
f  f  —  h  f  f  h  f  f  h
f  f  h  —  f  f  f  h  h  f
f  h  f  f  —  f  h  f  h  f
h  f  f  f  f  —  h  h  f  f
f  f  h  f  h  h  —  f  f  f
f  h  f  h  f  h  f  —  f  f
h  f  f  h  h  f  f  f  —  f
h  h  h  f  f  f  f  f  f  —

SPM3(ρ):
—      ℓ(11)  ℓ(13)  ℓ(02)  ℓ(05)  ℓ(19)  ℓ(11)  ℓ(13)  ℓ(26)  ℓ(19)
ℓ(03)  —      ℓ(23)  ℓ(03)  ℓ(07)  ℓ(23)  ℓ(16)  ℓ(09)  ℓ(25)  ℓ(09)
ℓ(04)  ℓ(24)  —      ℓ(18)  ℓ(01)  ℓ(14)  ℓ(12)  ℓ(14)  ℓ(27)  ℓ(22)
ℓ(02)  ℓ(11)  ℓ(19)  —      ℓ(05)  ℓ(13)  ℓ(11)  ℓ(19)  ℓ(26)  ℓ(13)
ℓ(15)  ℓ(20)  ℓ(08)  ℓ(15)  —      ℓ(08)  ℓ(20)  ℓ(08)  ℓ(28)  ℓ(08)
ℓ(18)  ℓ(24)  ℓ(14)  ℓ(04)  ℓ(01)  —      ℓ(12)  ℓ(22)  ℓ(27)  ℓ(14)
ℓ(03)  ℓ(16)  ℓ(09)  ℓ(03)  ℓ(07)  ℓ(09)  —      ℓ(23)  ℓ(25)  ℓ(23)
ℓ(04)  ℓ(12)  ℓ(14)  ℓ(18)  ℓ(01)  ℓ(22)  ℓ(24)  —      ℓ(27)  ℓ(14)
ℓ(06)  ℓ(10)  ℓ(21)  ℓ(06)  ℓ(17)  ℓ(21)  ℓ(10)  ℓ(21)  —      ℓ(21)
ℓ(18)  ℓ(12)  ℓ(22)  ℓ(04)  ℓ(01)  ℓ(14)  ℓ(24)  ℓ(14)  ℓ(27)  —
Table 5
Values of c₁(β₀, β) and r(β₀, β₀) for D1 and D2

                       D1                                      D2
Element      a        b        c        d        e        f        h
c₁        2.3094   1.6330   1.9890   1.9259   2.1602   2.0000   1.6330
r        10.6667  10.6667  10.6668  10.0000  10.0000   9.3333   9.3333
Also from Table 11, n˜+ 2 ¿ (2 =2) = 5 for all three comparisons. Therefore, by Theorem 3 (b.2), D1 is “better” than both D2 and D3, and D2 is “better” than D3, under Criterion II for all . Table 12 is obtained from Table 10 by using Corollary 1 and Theorem 3(c) for pairwise comparisons of designs using Criterion III. We conclude from Table 12 that D1 is “better” than both D2 and D3, and D2 is “better” than D3, under Criterion III for all . From all pairwise comparisons of D1, D2, and D3 in Table 13, it follows that D1 is “better” than both D2 and D3, and D2 is, in general, “better” than D3. Consequently, D1 is the “better” design over D2 and D3 under all three criteria.

Table 6
Values of c₁(β₀, β) and r(β₀, β₀) for D3

SPM element   c₁(β₀, β)   r(β₀, β₀)     SPM element   c₁(β₀, β)   r(β₀, β₀)
ℓ(01)         2.4495      12.0000       ℓ(15)         1.9646       9.1429
ℓ(02)         2.2678      10.4762       ℓ(16)         1.9024       8.9524
ℓ(03)         2.2201      10.4762       ℓ(17)         1.7745      12.0000
ℓ(04)         2.1030      10.4762       ℓ(18)         1.7886      10.4762
ℓ(05)         1.9646      12.0000       ℓ(19)         1.6709       9.1429
ℓ(06)         2.0240      10.4762       ℓ(20)         1.6579       8.9524
ℓ(07)         1.9195      12.0000       ℓ(21)         1.6356       9.1429
ℓ(08)         2.1381       9.1429       ℓ(22)         1.6330       9.1429
ℓ(09)         2.0461       9.1429       ℓ(23)         1.5034       9.1429
ℓ(10)         2.0878       8.9524       ℓ(24)         1.4881       8.9524
ℓ(11)         2.0523       8.9524       ℓ(25)         1.6956       5.9048
ℓ(12)         2.0247       8.9524       ℓ(26)         1.5195       5.9048
ℓ(13)         1.9646       9.1429       ℓ(27)         1.3145       5.9048
ℓ(14)         1.9518       9.1429       ℓ(28)         1.2448       5.9048
Table 7
Comparison of D1 and D2

Comparison   c₁^D1 vs. c₁^D2   r₁ vs. r₂   Number of comparisons   Theorem with β₀ = β₀*, β = β*
a, f         >                 >           24                      2(b)
b, h         =                 >            6                      2(b)
c, f         <                 >           12                      2(c)
c, h         >                 >           12                      2(b)
e, f         >                 >           12                      2(b)
d, f         <                 >           12                      2(c)
d, h         >                 >           12                      2(b)
5. Comparison of D4, D5, and D6

The general structure of SPM4(ρ) for D4 is identical to that of SPM2(ρ) for D2 given in Table 4. This can be seen from the fact that by adding the run (+ + + +) to D4, the resulting design becomes an orthogonal array of strength two, which is the same type as D2. The two distinct elements of SPM4(ρ) are denoted by f and h. The SPMi(ρ), i = 5, 6, are presented in Tables 14 and 15. Note that SPM5(ρ) has 11 distinct elements denoted by t(i), i = 01, ..., 11, and SPM6(ρ) has 27 distinct elements denoted by k(i), i = 01, ..., 27. Tables 16 and 17 present the values of c₁^Di(β₀, β) and ri(β₀, β₀) for Di, i = 4, 5, 6. For pairwise comparisons between Di and Dj, i < j, i, j ∈ {4, 5, 6}, we use Theorem 2 and form Tables 18-20. Table 21 presents the MSPVi(ρ), i = 4, 5, 6. The MSPVi(ρ) is obtained from the SPMi(ρ) by using Proposition 1. From Tables 18-20, we determine the values of ñᵤ⁺, ñᵤ⁻, and ñᵤ⁰ for u = 1, 2, which are given in Table 22.
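The structural claim about D4 can be verified directly. The sketch below is our own check, assuming the run sets of Tables 1 and 2: appending the run (+ + + +) to D4 should produce an orthogonal array of strength 2, in which every pair of columns shows each of the four sign combinations exactly n/4 times.

```python
# Verify that D4 plus the run (+ + + +) is an orthogonal array of strength 2.
import itertools
import numpy as np

def strength2(D):
    """True if every pair of columns of the +/-1 array D contains each of
    the four sign combinations exactly n/4 times."""
    n, m = D.shape
    return all(int(np.sum((D[:, i] == s) & (D[:, j] == t))) == n // 4
               for i, j in itertools.combinations(range(m), 2)
               for s in (-1, 1) for t in (-1, 1))
```

D1 itself does not pass this check, which is consistent with its being a balanced array of full strength rather than an orthogonal array.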

Table 8
Comparison of D1 and D3

Comparison   c₁^D1 vs. c₁^D3   r₁ vs. r₃   Number of comparisons   Theorem with β₀ = β₀*, β = β*
a, ℓ(01)     <   <   2   2(a)
c, ℓ(01)     <   <   2   2(a)
a, ℓ(02)     >   >   2   2(b)
a, ℓ(03)     >   >   2   2(b)
c, ℓ(03)     <   >   2   2(c)
a, ℓ(04)     >   >   2   2(b)
c, ℓ(04)     <   >   2   2(c)
a, ℓ(05)     >   <   2   2(c)
c, ℓ(06)     <   >   2   2(c)
b, ℓ(07)     <   <   1   2(a)
c, ℓ(07)     >   <   1   2(c)
a, ℓ(08)     >   >   2   2(b)
d, ℓ(08)     <   >   2   2(c)
c, ℓ(09)     <   >   2   2(c)
d, ℓ(09)     <   >   2   2(c)
c, ℓ(10)     <   >   1   2(c)
e, ℓ(10)     >   >   1   2(b)
a, ℓ(11)     >   >   2   2(b)
d, ℓ(11)     <   >   2   2(c)
c, ℓ(12)     <   >   2   2(c)
d, ℓ(12)     <   >   2   2(c)
a, ℓ(13)     >   >   2   2(b)
d, ℓ(13)     <   >   2   2(c)
a, ℓ(14)     >   >   2   2(b)
c, ℓ(14)     >   >   2   2(b)
d, ℓ(14)     <   >   2   2(c)
e, ℓ(14)     >   >   2   2(b)
a, ℓ(15)     >   >   2   2(b)
c, ℓ(16)     >   >   1   2(b)
d, ℓ(16)     >   >   1   2(b)
c, ℓ(17)     >   <   1   2(c)
b, ℓ(18)     <   >   2   2(c)
c, ℓ(18)     >   >   2   2(b)
b, ℓ(19)     <   >   2   2(c)
d, ℓ(19)     >   >   2   2(b)
b, ℓ(20)     <   >   1   2(c)
d, ℓ(20)     >   >   1   2(b)
c, ℓ(21)     >   >   2   2(b)
e, ℓ(21)     >   >   2   2(b)
c, ℓ(22)     >   >   2   2(b)
d, ℓ(22)     >   >   2   2(b)
a, ℓ(23)     >   >   2   2(b)
e, ℓ(23)     >   >   2   2(b)
a, ℓ(24)     >   >   2   2(b)
e, ℓ(24)     >   >   2   2(b)
d, ℓ(25)     >   >   1   2(b)
e, ℓ(25)     >   >   1   2(b)
d, ℓ(26)     >   >   2   2(b)
d, ℓ(27)     >   >   2   2(b)
e, ℓ(27)     >   >   2   2(b)
d, ℓ(28)     >   >   1   2(b)
Table 9
Comparison of D2 and D3

Comparison   c₁^D2 vs. c₁^D3   r₂ vs. r₃   Number of comparisons   Theorem with β₀ = β₀*, β = β*
f, ℓ(01)     <   <   4   2(a)
f, ℓ(02)     <   <   2   2(a)
f, ℓ(03)     <   <   4   2(a)
f, ℓ(04)     <   <   4   2(a)
f, ℓ(05)     >   <   2   2(c)
h, ℓ(06)     <   <   2   2(a)
h, ℓ(07)     <   <   2   2(a)
f, ℓ(08)     <   >   4   2(c)
h, ℓ(09)     <   >   4   2(c)
f, ℓ(10)     <   >   2   2(c)
f, ℓ(11)     <   >   4   2(c)
h, ℓ(12)     <   >   4   2(c)
f, ℓ(13)     >   >   4   2(b)
f, ℓ(14)     >   >   8   2(b)
f, ℓ(15)     >   >   2   2(b)
f, ℓ(16)     >   >   2   2(b)
h, ℓ(17)     <   <   1   2(a)
h, ℓ(18)     <   <   4   2(a)
h, ℓ(19)     <   >   4   2(c)
h, ℓ(20)     <   >   2   2(c)
f, ℓ(21)     >   >   4   2(b)
h, ℓ(22)     =   >   4   2(b)
f, ℓ(23)     >   >   4   2(b)
f, ℓ(24)     >   >   4   2(b)
f, ℓ(25)     >   >   2   2(b)
h, ℓ(26)     >   >   2   2(b)
f, ℓ(27)     >   >   4   2(b)
h, ℓ(28)     >   >   1   2(b)
Table 10
The MSPVi(ρ) for Di, i = 1, 2, 3

i    MSPVi(ρ)
1    (b, b, b, b, b, b, d, d, d, d)
2    (h, h, h, h, h, h, h, h, h, h)
3    (ℓ(18), ℓ(24), ℓ(23), ℓ(18), ℓ(17), ℓ(23), ℓ(24), ℓ(23), ℓ(28), ℓ(23))
Table 11
The values of ñᵤ⁺, ñᵤ⁻, and ñᵤ⁰ for u = 1, 2

                  Criterion I            Criterion II
Comparison    ñ₁⁺    ñ₁⁻    ñ₁⁰      ñ₂⁺    ñ₂⁻    ñ₂⁰
D1 vs. D2     66      0     24       10      0      0
D1 vs. D3     53      5     32        7      1      2
D2 vs. D3     41     23     26        7      3      0
Table 12
The GMSPi(ρ) for Di, i = 1, 2, 3

Comparison    Si(ρ)    Sj(ρ)    Comparison between GMSPi(ρ) and GMSPj(ρ)    Theorem
D1 vs. D2     (b, d)   h        Min(b, d) > h                               3(c.2)
D1 vs. D3     (b, d)   ℓ(28)    Min(b, d) > ℓ(28)                           3(c.2)
D2 vs. D3     h        ℓ(28)    h > ℓ(28)                                   3(c.2)
Table 13
Summary comparisons of D1, D2, and D3 for ρ > 0

              "Better" design under
Comparison    Criterion I    Criterion II    Criterion III
D1 vs. D2     D1             D1              D1
D1 vs. D3     D1             D1              D1
D2 vs. D3     Inconclusive   D2              D2
Table 14
The SPM5(ρ) for D5 (columns correspond to the possible true models, rows to the possible competing models)

—      t(01)  t(07)  t(01)  t(07)  t(08)  t(07)  t(10)  t(05)  t(05)
t(03)  —      t(03)  t(02)  t(09)  t(02)  t(03)  t(11)  t(03)  t(09)
t(07)  t(01)  —      t(08)  t(07)  t(01)  t(05)  t(10)  t(07)  t(05)
t(03)  t(02)  t(09)  —      t(03)  t(02)  t(03)  t(11)  t(09)  t(03)
t(07)  t(08)  t(07)  t(01)  —      t(01)  t(05)  t(10)  t(05)  t(07)
t(09)  t(02)  t(03)  t(02)  t(03)  —      t(09)  t(11)  t(03)  t(03)
t(07)  t(01)  t(05)  t(01)  t(05)  t(08)  —      t(10)  t(07)  t(07)
t(04)  t(06)  t(04)  t(06)  t(04)  t(06)  t(04)  —      t(04)  t(04)
t(05)  t(01)  t(07)  t(08)  t(05)  t(01)  t(07)  t(10)  —      t(07)
t(05)  t(08)  t(05)  t(01)  t(07)  t(01)  t(07)  t(10)  t(07)  —
Table 15
The SPM6(ρ) for D6 (columns correspond to the possible true models, rows to the possible competing models)

—      k(11)  k(13)  k(03)  k(04)  k(17)  k(11)  k(13)  k(25)  k(17)
k(02)  —      k(22)  k(02)  k(08)  k(22)  k(16)  k(09)  k(24)  k(09)
k(06)  k(23)  —      k(15)  k(01)  k(12)  k(14)  k(12)  k(26)  k(19)
k(03)  k(11)  k(17)  —      k(04)  k(13)  k(11)  k(17)  k(25)  k(13)
k(10)  k(18)  k(07)  k(10)  —      k(07)  k(18)  k(07)  k(27)  k(07)
k(15)  k(23)  k(12)  k(06)  k(01)  —      k(14)  k(19)  k(26)  k(12)
k(02)  k(16)  k(09)  k(02)  k(08)  k(09)  —      k(22)  k(24)  k(22)
k(06)  k(14)  k(12)  k(15)  k(01)  k(19)  k(23)  —      k(26)  k(12)
k(05)  k(11)  k(20)  k(05)  k(21)  k(20)  k(11)  k(20)  —      k(20)
k(15)  k(14)  k(19)  k(06)  k(01)  k(12)  k(23)  k(12)  k(26)  —
Table 16
The values of c₁(β₀, β) and r(β₀, β₀) for D4 and D5

Design   SPM element   c₁(β₀, β)   r(β₀, β₀)
D4       f             1.9518      9.1429
         h             1.6330      9.1429
D5       t(01)         2.0382      9.1429
         t(02)         1.9518      9.1429
         t(03)         1.8606      7.6190
         t(04)         1.6810      7.6190
         t(05)         1.6330      7.6190
         t(06)         1.5545      9.1429
         t(07)         1.5119      7.6190
         t(08)         1.4379      9.1429
         t(09)         1.3126      7.6190
         t(10)         1.3021      4.5714
         t(11)         1.0992      4.5714
Table 17
The values of c₁(β₀, β) and r(β₀, β₀) for D6

SPM element   c₁(β₀, β)   r(β₀, β₀)     SPM element   c₁(β₀, β)   r(β₀, β₀)
k(01)         2.2184      10.4727       k(16)         1.7889       8.5333
k(02)         2.2019       9.6970       k(17)         1.6831       8.9212
k(03)         2.0889       9.6970       k(18)         1.6800       8.5333
k(04)         1.9300      10.4727       k(19)         1.6330       8.9212
k(05)         1.9773       9.6970       k(20)         1.6308       8.9212
k(06)         1.9590       9.6970       k(21)         1.5197      10.4727
k(07)         2.0474       8.9212       k(22)         1.5097       8.9212
k(08)         1.8611      10.4727       k(23)         1.4766       8.5333
k(09)         1.9787       8.9212       k(24)         1.7056       5.8182
k(10)         1.8571       9.6970       k(25)         1.5316       5.8182
k(11)         2.0656       8.5333       k(26)         1.3170       5.8182
k(12)         1.8942       8.9212       k(27)         1.1326       5.8182
k(13)         1.8790       8.9212
k(14)         1.9352       8.5333
k(15)         1.7548       9.6970
In comparing D4 vs. D5 under Criterion I, we note that ñ₁⁺ + ñ₁⁻ + ñ₁⁰ = 84 < ν₂(ν₂ − 1) = 90. This is due to the fact that there are 6 comparisons for which c₁^D4(β₀, β) = c₁^D5(β₀, β) and r₄(β₀, β₀) = r₅(β₀, β₀). Consequently, the 6 elements of SPM4(ρ) and SPM5(ρ) corresponding to these comparisons are identical for all ρ. Thus, ñ₁⁺ + ñ₁⁻ + ñ₁⁰ = ν₂(ν₂ − 1) − 6 = 84, and so we compare ñ₁⁺ and ñ₁⁻ with (ν₂(ν₂ − 1) − 6)/2 = 42. From Table 22, ñ₁⁺ > 42 for D4 vs. D5 and ñ₁⁺ > 45 for D6 vs. D5. Therefore, by Theorem 3(b.1), both D4 and D6 are "better" than D5 under Criterion I for all ρ. Since neither ñ₁⁺ nor ñ₁⁻ is greater than ν₂(ν₂ − 1)/2, the comparison between D4 and D6 is inconclusive for all ρ under Criterion I.

Table 18
Comparison between D4 and D5

Comparison   c₁^D4 vs. c₁^D5   r₄ vs. r₅   Number of comparisons   Theorem with β₀ = β₀*, β = β*
f, t(01)     <   =   12   2(a)
f, t(02)     =   =    6   —
f, t(03)     >   >   12   2(b)
f, t(04)     >   >    6   2(b)
h, t(05)     =   >   12   2(b)
h, t(06)     >   =    3   2(b)
f, t(07)     >   >   18   2(b)
h, t(08)     >   =    6   2(b)
h, t(09)     >   >    6   2(b)
f, t(10)     >   >    6   2(b)
h, t(11)     >   >    3   2(b)
Table 19
Comparison between D4 and D6

Comparison   c₁^D4 vs. c₁^D6   r₄ vs. r₆   Number of comparisons   Theorem with β₀ = β₀*, β = β*
f, k(01)     <   <   4   2(a)
f, k(02)     <   <   4   2(a)
f, k(03)     <   <   2   2(a)
f, k(04)     >   <   2   2(c)
h, k(05)     <   <   2   2(a)
f, k(06)     <   <   4   2(a)
f, k(07)     <   >   4   2(c)
h, k(08)     <   <   2   2(a)
h, k(09)     <   >   4   2(c)
f, k(10)     >   <   2   2(c)
f, k(11)     <   >   6   2(c)
f, k(12)     >   >   8   2(b)
f, k(13)     >   >   4   2(b)
h, k(14)     <   >   4   2(c)
h, k(15)     <   <   4   2(a)
f, k(16)     >   >   2   2(b)
h, k(17)     <   >   4   2(c)
h, k(18)     <   >   2   2(c)
h, k(19)     =   >   4   2(b)
f, k(20)     >   >   4   2(b)
h, k(21)     >   <   1   2(c)
f, k(22)     >   >   4   2(b)
f, k(23)     >   >   4   2(b)
f, k(24)     >   >   2   2(b)
h, k(25)     >   >   2   2(b)
f, k(26)     >   >   4   2(b)
h, k(27)     >   >   1   2(b)
Table 20
Comparison between D6 and D5

Comparison                        c₁^D6 vs. c₁^D5   r₆ vs. r₅   Number   Theorem with β₀ = β₀*, β = β*
k(01), t(i), i = 03, 04, 07       >   >   4   2(b)
k(02), t(i), i = 01, 02, 03, 07   >   >   4   2(b)
k(03), t(i), i = 01, 03           >   >   2   2(b)
k(04), t(i), i = 03, 07           >   >   2   2(b)
k(05), t(i), i = 05, 08           >   >   2   2(b)
k(06), t(i), i = 01               <   >   1   2(c)
k(06), t(i), i = 02, 04, 07       >   >   3   2(b)
k(07), t(i), i = 01               >   <   1   2(c)
k(07), t(i), i = 07, 10           >   >   3   2(b)
k(08), t(i), i = 05, 09           >   >   2   2(b)
k(09), t(i), i = 08               >   <   1   2(c)
k(09), t(i), i = 05, 09, 11       >   >   3   2(b)
k(10), t(i), i = 01               <   >   1   2(c)
k(10), t(i), i = 07               >   >   1   2(b)
k(11), t(i), i = 01, 02           >   <   3   2(c)
k(11), t(i), i = 03, 07           >   >   3   2(b)
k(12), t(i), i = 01               <   <   2   2(a)
k(12), t(i), i = 03, 04, 10       >   >   6   2(b)
k(13), t(i), i = 02               <   <   1   2(a)
k(13), t(i), i = 03, 07, 10       >   >   3   2(b)
k(14), t(i), i = 05, 09           >   >   3   2(b)
k(14), t(i), i = 06, 08           >   <   2   2(c)
k(15), t(i), i = 05, 06, 08, 09   >   >   4   2(b)
k(16), t(i), i = 01               <   <   1   2(a)
k(16), t(i), i = 03               <   >   1   2(c)
k(17), t(i), i = 08               >   <   1   2(c)
k(17), t(i), i = 05, 09, 11       >   >   3   2(b)
k(18), t(i), i = 05               >   >   1   2(b)
k(18), t(i), i = 08               >   <   1   2(c)
k(19), t(i), i = 06               >   <   1   2(c)
k(19), t(i), i = 05, 11           >   >   3   2(b)
k(20), t(i), i = 01               <   <   1   2(a)
k(20), t(i), i = 07, 10           >   >   3   2(b)
k(21), t(i), i = 05               <   >   1   2(c)
k(22), t(i), i = 02               <   <   1   2(a)
k(22), t(i), i = 03, 07           <   >   2   2(c)
k(22), t(i), i = 10               >   >   1   2(b)
k(23), t(i), i = 01, 02           <   <   2   2(a)
k(23), t(i), i = 04, 07           <   >   2   2(c)
k(24), t(i), i = 03               <   <   1   2(a)
k(24), t(i), i = 07               >   <   1   2(c)
k(25), t(i), i = 05               <   <   1   2(a)
k(25), t(i), i = 09               >   <   1   2(c)
k(26), t(i), i = 03, 04, 07       <   <   4   2(a)
k(27), t(i), i = 05               <   <   1   2(a)
Table 21
The MSPVi(ρ) for Di, i = 4, 5, 6

i    MSPVi(ρ)
4    (h, h, h, h, h, h, h, h, h, h)
5    (t(09), t(08), t(09), t(08), t(09), t(08), t(09), t(11), t(09), t(09))
6    (k(15), k(23), k(22), k(15), k(21), k(22), k(23), k(22), k(27), k(22))
Table 22
The values of ñᵤ⁺, ñᵤ⁻, and ñᵤ⁰ for u = 1, 2

                  Criterion I            Criterion II
Comparison    ñ₁⁺    ñ₁⁻    ñ₁⁰      ñ₂⁺    ñ₂⁻    ñ₂⁰
D4 vs. D5     72     12      0       10      0      0
D4 vs. D6     39     22     29        7      2      1
D6 vs. D5     55     15     20        7      1      2
Table 23
The GMSPi(ρ) for Di, i = 4, 5, 6

Comparison    GMSPi(ρ)   GMSPj(ρ)   Comparison between Si(ρ) and Sj(ρ)   Theorem
D4 vs. D5     h          t(11)      h > t(11)                            3(c.2)
D4 vs. D6     h          k(27)      h > k(27)                            3(c.2)
D6 vs. D5     k(27)      t(11)      k(27) > t(11)                        3(c.2)

Table 24
Summary comparisons of D4, D5, and D6 for ρ > 0

              "Better" design under
Comparison    Criterion I     Criterion II   Criterion III
D4 vs. D5     D4              D4             D4
D4 vs. D6     Inconclusive    D4             D4
D6 vs. D5     D6              D6             D6

Also from Table 22, ñ₂⁺ > (ν₂/2) = 5 for all three comparisons. Therefore, by Theorem 3(b.2), D4 is "better" than D5 and D6, and D6 is "better" than D5, under Criterion II for all ρ. Table 23 is obtained from Table 21 by using Corollary 1 and Theorem 3(c) for pairwise comparisons of designs using Criterion III. We conclude from Table 23 that D4 is "better" than both D5 and D6, and D6 is "better" than D5, under Criterion III for all ρ. From all pairwise comparisons of D4, D5, and D6 in Table 24, it follows that both D4 and D6 are "better" than D5, and D4 is, in general, "better" than D6. Consequently, D4 is preferred over D5 and D6.
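The counting behind Criteria I and II reduces to a simple majority rule. The following is a minimal sketch with hypothetical helper names (the paper presents no software); the thresholds ν₂(ν₂ − 1)/2 and ν₂/2 are those of Theorem 3 in the Appendix:

```python
def tally(p1, p2):
    """Entrywise comparison of two equal-length vectors of search
    probabilities: returns (n_plus, n_minus, n_zero), the counts of
    entries where design 1 wins, loses, or ties (as in Table 22)."""
    n_plus = sum(a > b for a, b in zip(p1, p2))
    n_minus = sum(a < b for a, b in zip(p1, p2))
    return n_plus, n_minus, len(p1) - n_plus - n_minus

def majority_decision(n_plus, n_minus, n_total):
    """Theorem 3-style rule: a design is 'better' for all rho when its
    count exceeds half the total number of comparisons."""
    if n_plus > n_total / 2:
        return "design 1 better"
    if n_minus > n_total / 2:
        return "design 2 better"
    return "inconclusive"

# Criterion II, D4 vs. D5 (Table 22): n2+ = 10 > nu2/2 = 5.
assert majority_decision(10, 0, 10) == "design 1 better"
# Criterion I, D4 vs. D6: n1+ = 39 is not > nu2(nu2 - 1)/2 = 45.
assert majority_decision(39, 22, 90) == "inconclusive"
```

With ν₂ = 10 candidate interactions in the 2⁴ case, Criterion II compares 10 entries and Criterion I compares 90 ordered pairs, which reproduces the "Inconclusive" entry for D4 vs. D6 in Table 24.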


6. Conclusions

In factorial experiments with each factor at two levels, search designs are considered for searching and estimating one nonzero interaction from the two and three factor interactions under the search linear model. Three criteria based on search probabilities are presented to determine the design which is most likely to identify the nonzero interaction. The calculation of these search probabilities depends on an unknown parameter ρ. We have developed simplified methods for comparing designs under these three criteria for all values of ρ. This requires the comparison of the values of c1(ξ0; ξ) and r(ξ0; ξ0) from the search probabilities specified under a given criterion. Note that such comparisons using c1(ξ0; ξ) and r(ξ0; ξ0) do not depend on ρ. This permits us to compare designs for all values of ρ.

In comparing D1, D2, and D3, we find that D1 is "better" than both D2 and D3 in identifying the nonzero interaction under all three criteria, and D2 is "better" than D3 under Criteria II and III. Consequently, D1 is preferred over D2 and D3. In comparing D4, D5, and D6, we find that both D4 and D6 are "better" than D5 in identifying the nonzero interaction under all three criteria, and D4 is "better" than D6 under Criteria II and III. Consequently, D4 is preferred over D5 and D6.

In this paper, we have focused on 2^4 factorial experiments where we search one nonzero interaction from the two and three factor interactions. However, the techniques developed here are applicable to all factorial experiments for searching one nonzero interaction of any order.
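The monotonicity properties invoked above (Theorem 1 in the Appendix), and the ρ-dependence in the discordant case of Theorem 2(c), can be checked numerically. The sketch below assumes a closed form for the search probability, G = 0.5 + 2[Φ(ρ c1) − 0.5][Φ(ρ √(r − c1²)) − 0.5] with Φ the standard normal cdf, reconstructed from the chain of inequalities in the proof of Theorem 2; the function names are ours, not the paper's.

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal cdf, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def G(c1, r, rho):
    """Search probability as a function of c1(xi0; xi), r(xi0; xi0),
    and rho = |xi0|/sigma, under the assumed closed form
    G = 0.5 + 2 [Phi(rho c1) - 0.5][Phi(rho sqrt(r - c1^2)) - 0.5]."""
    return 0.5 + 2.0 * (Phi(rho * c1) - 0.5) * (Phi(rho * sqrt(r - c1 ** 2)) - 0.5)

# Theorem 1: G lies in [0.5, 1] for rho >= 0, is even in rho, and is
# increasing in rho, in r (fixed c1), and in c1 on [0, sqrt(r/2)] (fixed r).
assert all(0.5 <= G(1.0, 4.0, rho) <= 1.0 for rho in (0.0, 0.5, 2.0, 10.0))
assert abs(G(1.0, 4.0, 1.5) - G(1.0, 4.0, -1.5)) < 1e-12
assert G(1.0, 4.0, 1.0) < G(1.0, 4.0, 2.0)   # increasing in rho
assert G(1.0, 4.0, 1.0) < G(1.0, 5.0, 1.0)   # increasing in r
assert 0.5 == G(0.0, 4.0, 1.0) < G(1.0, 4.0, 1.0) < G(sqrt(2.0), 4.0, 1.0)

# Theorem 2(c): with discordant c1 and r, the ranking flips with rho,
# so no comparison valid for all rho is possible.
g1 = lambda rho: G(0.5, 9.0, rho)   # smaller c1, larger r
g2 = lambda rho: G(1.0, 2.1, rho)   # larger c1, smaller r
assert g1(0.2) > g2(0.2) and g1(3.0) < g2(3.0)
```

The last two assertions make concrete why the concordant cases of Theorem 2(a)-(b) are the ones that permit design comparisons "for all ρ": in the discordant case the better design changes as the signal-to-noise ratio grows.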

7. Uncited References

Ghosh, 1980; Ghosh and Rao, 1996; Shirakura, 1991; Srivastava and Mallenby, 1985.

Acknowledgements

The authors are thankful to an Associate Editor and two referees for their detailed reading of the earlier version of this paper.

Appendix

Theorem 1. The following properties of G(ξ0; ξ; ρ) are true:
(a) 0.5 ≤ G(ξ0; ξ; ρ) ≤ 1 for ρ ≥ 0.
(b) G(ξ0; ξ; ρ) = G(ξ0; ξ; −ρ). Without loss of generality, we define ρ = |ξ0|/σ ≥ 0.
(c) As ρ increases, G(ξ0; ξ; ρ) increases monotonically.
(d) As r(ξ0; ξ0) increases for a fixed c1(ξ0; ξ) and ρ, G(ξ0; ξ; ρ) increases monotonically.


(e) As c1(ξ0; ξ) increases for a fixed r(ξ0; ξ0) and ρ, 0 ≤ c1(ξ0; ξ) ≤ √(r(ξ0; ξ0)/2), then G(ξ0; ξ; ρ) increases monotonically, with the minimum at c1(ξ0; ξ) = 0 and the maximum at c1(ξ0; ξ) = √(r(ξ0; ξ0)/2).

Proof. See Shirakura et al. (1996).

Theorem 2. (a) If c1D1(ξ0; ξ) ≤ c1D2(ξ0*; ξ*) and r1(ξ0; ξ0) ≤ r2(ξ0*; ξ0*), then G1(ξ0; ξ; ρ) ≤ G2(ξ0*; ξ*; ρ) for all ρ ≥ 0.
(b) If c1D1(ξ0; ξ) ≥ c1D2(ξ0*; ξ*) and r1(ξ0; ξ0) ≥ r2(ξ0*; ξ0*), then G1(ξ0; ξ; ρ) ≥ G2(ξ0*; ξ*; ρ) for all ρ ≥ 0.
(c) If c1D1(ξ0; ξ) < c1D2(ξ0*; ξ*) and r1(ξ0; ξ0) > r2(ξ0*; ξ0*), or if c1D1(ξ0; ξ) > c1D2(ξ0*; ξ*) and r1(ξ0; ξ0) < r2(ξ0*; ξ0*), then the relationship between G1(ξ0; ξ; ρ) and G2(ξ0*; ξ*; ρ) depends on ρ.

Proof. To prove part (a), we note that, for 0 ≤ c1D1 ≤ √(r1/2), 0 ≤ c1D2 ≤ √(r2/2), c1D1 ≤ c1D2, and r1 ≤ r2,

[Φ(ρ c1D1) − 0.5][Φ(ρ √(r1 − (c1D1)²)) − 0.5]
  ≤ [Φ(ρ c1D1) − 0.5][Φ(ρ √(r2 − (c1D1)²)) − 0.5], by Theorem 1(d),
  ≤ [Φ(ρ c1D2) − 0.5][Φ(ρ √(r2 − (c1D2)²)) − 0.5], by Theorem 1(e).

Hence, G1(ξ0; ξ; ρ) ≤ G2(ξ0*; ξ*; ρ) for all ρ ≥ 0 when c1D1 ≤ c1D2 and r1 ≤ r2. The proof of (b) is similar. For (c), when c1D1 > c1D2 and r1 < r2, the first inequality in the above proof holds but not the second, and when c1D1 < c1D2 and r1 > r2, the first inequality does not hold. This completes the proof.

Corollary 1. The results of Theorem 2 also hold for comparing search probabilities from the same design.

Proof. Let G and G* be two search probabilities from the same design, where G depends on c1, r, ρ and G* depends on c1*, r*, ρ. The proof of Corollary 1 follows from Theorem 2 by letting c1 = c1D1, c1* = c1D2, r = r1, r* = r2, G = G1, and G* = G2. This completes the proof.

Theorem 3. (a) If it follows from the conditions of Theorem 2(a) that
(a.1) ñ₁⁻ > ν₂(ν₂ − 1)/2 when ξ0 = ξ0* and ξ = ξ*, then D2 is better than D1 for all ρ under Criterion I;
(a.2) ñ₂⁻ > ν₂/2 when ξ0 = ξ0*, then D2 is better than D1 for all ρ under Criterion II.


(b) If it follows from the conditions of Theorem 2(b) that
(b.1) ñ₁⁺ > ν₂(ν₂ − 1)/2 when ξ0 = ξ0* and ξ = ξ*, then D1 is better than D2 for all ρ under Criterion I;
(b.2) ñ₂⁺ > ν₂/2 when ξ0 = ξ0*, then D1 is better than D2 for all ρ under Criterion II.
(c) Applying Corollary 1 to MSPVi(ρ), i = 1, 2, we form a subset, Si(ρ), of possible candidates for the GMSPi(ρ). When making all pairwise comparisons between the elements of S1(ρ) and S2(ρ),
(c.1) if Theorem 2(a) holds for all these comparisons, then D2 is better than D1 for all ρ under Criterion III;
(c.2) if Theorem 2(b) holds for all these comparisons, then D1 is better than D2 for all ρ under Criterion III.

Proof. If ñ₁⁻ > ν₂(ν₂ − 1)/2 under Theorem 2(a), then the majority of the search probabilities of SPM2(ρ) are greater than the corresponding search probabilities of SPM1(ρ) for all values of ρ. Therefore, under the definition of Criterion I, D2 is better than D1 for all ρ. When ñ₁⁺ > ν₂(ν₂ − 1)/2, a similar argument holds. If ñ₂⁻ > ν₂/2 under Theorem 2(a), then, under the definition of Criterion II, the majority of the search probabilities in MSPV2(ρ) are greater than the corresponding search probabilities in MSPV1(ρ) for all values of ρ. When ñ₂⁺ > ν₂/2, a similar argument holds. Finally, part (c) follows from Corollary 1, Theorem 2(a) and (b), and the definition of Criterion III.

Proposition 1. For a given column of the SPM, the minimum search probability occurs for all values of ρ when the value of c1(ξ0; ξ) is the smallest for the elements in the given column.

Proof. For a given column of the SPM, r(ξ0; ξ0) is fixed. From the monotonicity property of G(ξ0; ξ; ρ) given in Theorem 1(e), G(ξ0; ξ; ρ) decreases monotonically as c1(ξ0; ξ) decreases. It follows that the minimum search probability in a column occurs at the smallest value of c1(ξ0; ξ) for a given ρ. Since c1(ξ0; ξ) is independent of ρ, this result is true for all values of ρ. This completes the proof.

References

Ghosh, S., 1980. On main effect plus one plans for 2^m factorials. Ann. Statist. 8, 922–930.
Ghosh, S., 1987. Influential nonnegligible parameters under the search linear model. Commun. Statist. Theory Methods 16, 1013–1025.
Ghosh, S., Rao, C.R., 1996. Design and Analysis of Experiments. North-Holland, Elsevier, Amsterdam.
Ohnishi, T., Shirakura, T., 1985. Search designs for 2^m factorial experiments. J. Statist. Plann. Inference 11, 241–245.
Plackett, R.L., Burman, J.P., 1946. The design of optimum multifactorial experiments. Biometrika 33, 305–325.
Shirakura, T., 1991. Main effect plus one or two plans for 2^m factorials. J. Statist. Plann. Inference 27, 65–74.

Shirakura, T., Takahashi, T., Srivastava, J.N., 1996. Searching probabilities for nonzero effects in search designs for the noisy case. Ann. Statist. 24 (6), 2560–2568.
Srivastava, J.N., 1972. Some general existence conditions for balanced arrays of strength t and 2 symbols. J. Combin. Theory 13, 198–206.
Srivastava, J.N., 1975. Designs for searching non-negligible effects. In: Srivastava, J.N. (Ed.), A Survey of Statistical Design and Linear Models. North-Holland, Elsevier, Amsterdam, pp. 507–519.
Srivastava, J.N., Mallenby, D.M., 1985. On a decision rule using dichotomies for identifying nonnegligible parameters in certain linear models. J. Multivariate Anal. 16, 318–334.
