Environmental and Ecological Statistics 4, 247–264 (1997)

Using hidden Markov chains and empirical Bayes change-point estimation for transect data

JAY M. VER HOEF^1 and NOEL CRESSIE^2

^1 Alaska Department of Fish and Game, 1300 College Road, Fairbanks, Alaska 99701, USA
^2 Department of Statistics, Iowa State University, Ames, Iowa 50011, USA

Received May 1996. Revised 15 October 1996.

Consider a lattice of locations in one dimension at which data are observed. We model the data as a random hierarchical process. The hidden process is assumed to have a (prior) distribution that is derived from a two-state Markov chain. The states correspond to the mean values (high and low) of the observed data. Conditional on the states, the observations are modelled, for example, as independent Gaussian random variables with identical variances. In this model, there are four free parameters: the Gaussian variance, the high and low mean values, and the transition probability in the Markov chain. A parametric empirical Bayes approach requires estimation of these four parameters from the marginal (unconditional) distribution of the data, and we use the EM-algorithm to do this. From the posterior of the hidden process, we use simulated annealing to find the maximum a posteriori (MAP) estimate. Using a Gibbs sampler, we also obtain the maximum marginal posterior probability (MMPP) estimate of the hidden process. We use these methods to determine where change-points occur in spatial transects through grassland vegetation, a problem of considerable interest to plant ecologists.

Keywords: spatial statistics, image analysis, EM-algorithm, simulated annealing, Gibbs sampler

1. Introduction

Identifying relatively homogeneous patches in the environment is an important applied problem in the ecological literature. There are many different models that ecologists use, but one of the most prevalent spatial models is that data come from a process composed of relatively homogeneous patches marked by discontinuity, as opposed to a process that changes gradually and continuously. For example, Kareiva (1994) talks about `spatial mosaics,' Thomson et al. (1996) talk about `spatial patchiness,' and homogeneous but distinct spatial patches form part of the models of Tilman (1994), Molofsky (1994), and Fortin (1994). These are but a few examples. A transect, or a line with samples located on it, is widely used in ecological studies to determine `edges,' which have more frequently been called `change-points' in the statistical literature (see, for example, Lombard, 1989). Transect sampling is usually included in texts on methods for vegetation sampling (for example, Kershaw and Looney, 1985, pp. 29–33). These edges identify the different patches in the environment. Numerous statistical analyses of such transects can be found in the ecological literature, with examples of `edge-detection'


analyses being found inter alia in Webster (1973), Wierenga et al. (1987), Ludwig and Cornelius (1987), and Orlóci and Orlóci (1990). These methods generally use a moving window to locate areas of high variability, which indicates the presence of a change-point. However, we often wish to do more than detect the presence of a change-point. We would also like to know the mean value of the variable that defines a patch, the average size of patches, and so forth, along with some quantification of our uncertainty about these values. To do this, we would like to observe and model the complete patch structure. Unfortunately, we often cannot observe the underlying patch structure because it is hidden by random variation. However, Bayesian image analysis techniques have been developed for these situations, and in this paper we adapt some of them for use on data from a transect through grassland vegetation.

Suppose that the random variables $\{Z_i : i = 1, 2, \ldots, n\}$ are used to model spatial data on a line transect. The data are collected at equally spaced intervals along the line or, as is often the case, from equally sized contiguous sample plots. We observe $\{z_i\}$ but we wish to make inference on a hidden process $\{M_i\}$, which is the mean of each of the $\{Z_i\}$. In the engineering literature this is seen as a signal-detection problem, and in image analysis it is called image restoration. In this paper, we consider the case where realizations $\{m_i\}$ of the hidden process consist of a two-phase patchwork marked by transitions from one phase to the other. That is, $m_i \in \{\alpha, \beta\}$, where $\alpha < \beta$, and the two-phase pattern contains change-points. Here we define a change-point to be any point along a transect where one mean changes to another, that is, where $m_i - m_{i+1} \neq 0$, $i < n$. Let

    \delta(m_i, m_{i+1}) \equiv (m_i - m_{i+1})^2 / (\beta - \alpha)^2      (1)

be an indicator function of a change-point at $m_i$. Then

    c(m) \equiv \sum_{i=1}^{n-1} \delta(m_i, m_{i+1})      (2)

is the number of change-points along the transect.
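In code, the indicator (1) is just a test for inequality of neighbouring means, and (2) is its sum over adjacent pairs. A minimal Python sketch (the function name is ours, not from the paper):

```python
import numpy as np

def count_change_points(m):
    """Number of change-points c(m) along a transect, as in (2):
    positions i where m_i != m_{i+1}."""
    m = np.asarray(m)
    return int(np.sum(m[:-1] != m[1:]))

# e.g. a two-phase pattern with alpha = 0, beta = 1:
m = np.array([0, 0, 0, 1, 1, 0, 0, 1, 1, 1])
print(count_change_points(m))  # 3
```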

In this article, we give parametric empirical Bayesian methods for determining both the locations and the number of change-points based on noisy data $z$. Assume the following model. All random variables $Z_i$ are conditionally independently distributed as $g(z_i \mid m_i, \phi)$, where $\phi$ contains `nuisance' parameters additional to $m_i$, $i = 1, \ldots, n$. Thus, the model density of $z$ given $m$ is

    f(z \mid m, \phi) = \prod_{i=1}^{n} g(z_i \mid m_i, \phi)      (3)

Now suppose that $m$ has a prior distribution,

    \pi(m \mid \lambda)      (4)

which in general is not written as a product of individual probabilities. Let $\theta = (\psi', \lambda)'$, where $\psi$ includes the two states $\alpha$ and $\beta$, and define $A \equiv \{\alpha, \beta\}$. The marginal density of the data $z$ is

    h(z \mid \theta) = \sum_{m_1 \in A} \sum_{m_2 \in A} \cdots \sum_{m_n \in A} f(z \mid m, \phi)\, \pi(m \mid \lambda)      (5)


and hence, marginally, $Z_1, \ldots, Z_n$ are no longer independent. The posterior density of the image $m$, given the data, is

    p(m \mid z, \theta) = f(z \mid m, \phi)\, \pi(m \mid \lambda) / h(z \mid \theta)      (6)

The main problem we wish to address is determining where change-points have occurred in the unobserved realization of $m$ from the observed realization of $z$. This is not a new problem; it arises in image analysis (see, for example, Ripley, 1988, pp. 74–120 or Cressie, 1993, pp. 499–534), where the Bayesian framework has garnered recent attention (Geman and Geman, 1984; Besag, 1986; Greig et al., 1989, among others). However, to implement this approach the data analyst must specify the parameters $\theta$. Qian and Titterington (1990, 1991, 1992) developed methods for estimating $\theta$ from the data $z$. In this article, we unite these ideas for estimating the location of change-points in $m$. Philosophically, we adopt a parametric empirical Bayes approach (Morris, 1983). We are primarily interested in inference on $m$ but, rather than specify $\theta$, we obtain an estimate $\hat{\theta}$ from $h(z \mid \theta)$ given by (5). Notice that our models are very similar to those of Högmander and Møller (1995), but they use maximum pseudolikelihood (Besag, 1974) to estimate $\theta$ because the marginal distribution $h(z \mid \theta)$ is intractable in their case. Then, we make inference on $m$ from the posterior distribution $p(m \mid z, \hat{\theta})$ given by (6), using the estimated values $\hat{\theta}$. In order to estimate change-points, we shall reconstruct the whole `image', $\{m_i\}$, and from that determine where change-points have occurred.

There are three main problems to be solved when using the parametric empirical Bayes approach, as outlined above. The first is obtaining estimates of $\theta$ from $h(z \mid \theta)$. After describing the models in Section 2, Section 3 deals with estimation of $\theta$, where we use the EM-algorithm to obtain maximum likelihood estimates from the marginal distribution $h(z \mid \theta)$. In Section 4, we concentrate on the two remaining problems. Once $\hat{\theta}$ has been obtained, the second problem concerns the need to reconstruct the whole image $\{m_i\}$ to determine where change-points have occurred. We use simulated annealing (Geman and Geman, 1984) to obtain the $m$ that maximizes the posterior distribution $p(m \mid z, \hat{\theta})$ (the MAP estimate). The third problem is to quantify the uncertainty in our estimates. For the MAP estimate, we use the Gibbs sampler (Geman and Geman, 1984) to sample from the posterior distribution, which also yields the maximum marginal posterior probability (MMPP) estimate, namely the $m$ that minimizes a misclassification loss function. In Section 5, we use these methods to analyse transect data from grassland vegetation, and we draw some conclusions in Section 6.

2. Models

We consider several models for $g(z_i \mid m_i, \phi)$ in (3), namely Gaussian, Poisson, and binomial distributions. For the prior (4), we consider the spatial model

    \pi(m \mid \lambda) = \exp[-\lambda c(m)] / \omega(\lambda)      (7)

where recall that $c(m)$ is the number of change-points (2) along the transect and $\omega(\lambda)$ is the normalizing constant. The prior distribution $\pi(m \mid \lambda)$ is both a first-order Markov chain and a first-order Gibbs chain (Qian and Titterington, 1990). It is a slight generalization of priors studied by Greig et al. (1989) and Qian and Titterington (1990, 1991), where they use $\alpha = 0$


and $\beta = 1$. The parameter $\lambda$ is a `smoothing' parameter: as $\lambda$ gets larger, relatively more weight is put on configurations with fewer change-points. There are several properties of (7) that we bring together here in a theorem.

Theorem 1. For the prior distribution $\pi(m \mid \lambda)$ given by (7):

(i) The normalizing constant is $\omega(\lambda) = 2(\exp(-\lambda) + 1)^{n-1}$.

(ii) The number of change-points $c(m)$ is binomially distributed:

    f[c(m)] = \binom{n-1}{c(m)} \kappa^{c(m)} (1 - \kappa)^{n-1-c(m)}

where $\kappa \equiv \exp(-\lambda)/[\exp(-\lambda) + 1]$.

(iii) The expected number of change-points is $\kappa(n-1)$.

Proof: See Appendix.

The importance of (i), as will be seen below, is that it allows us to obtain the marginal distribution $h(z \mid \theta)$ in (5), which is intractable in the two-dimensional case (e.g. Högmander and Møller, 1995), and so allows us to estimate $\theta$ by maximum likelihood through the EM-algorithm.
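Theorem 1 also shows how to simulate from the prior directly: under (7), $m$ is a two-state Markov chain that switches state with probability $\kappa$ at each step. The following sketch (our own construction, not code from the paper) simulates from the prior and checks (iii) empirically, using $n = 300$ and the estimate $\hat{\lambda} = 2.678$ reported later in Section 5.1:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_prior(n, lam, states=(0.0, 1.0)):
    """Draw m from the prior (7), viewed as a two-state Markov chain:
    m_1 is uniform on the two states (their values do not matter) and
    each step switches with probability kappa = exp(-lam)/(exp(-lam)+1)."""
    kappa = np.exp(-lam) / (np.exp(-lam) + 1.0)
    m = np.empty(n)
    m[0] = states[rng.integers(2)]
    for i in range(1, n):
        if rng.random() < kappa:  # a change-point occurs here
            m[i] = states[0] if m[i - 1] == states[1] else states[1]
        else:
            m[i] = m[i - 1]
    return m

n, lam = 300, 2.678
kappa = np.exp(-lam) / (np.exp(-lam) + 1.0)
sims = [np.sum(np.diff(simulate_prior(n, lam)) != 0) for _ in range(2000)]
print(np.mean(sims), kappa * (n - 1))  # both close to 19.2
```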

3. Estimating parameters: the EM-algorithm

Qian and Titterington (1990) give an EM-algorithm based on recursion for solving estimation problems with a Gibbs distribution. Here, we use the more general Monte Carlo methods described in Qian and Titterington (1991). Estimates of $\theta$ are obtained from $h(z \mid \theta)$ given by (5), where $g(z_i \mid m_i, \phi)$ in (3) could be any of a variety of commonly used distributions; several examples are given below. The EM-algorithm (Dempster et al., 1977) consists of two steps, the E-step (expectation) and the M-step (maximization). Let $L(\theta) = \ln[f(z \mid m, \phi)] + \ln[\pi(m \mid \lambda)]$, $\theta = (\psi', \lambda)'$. For the E-step, compute $E_{m \mid z, \theta^t}[L(\theta)] \equiv \ln Q(\theta \mid \theta^t)$, where the expectation is taken with respect to the posterior distribution $p(m \mid z, \theta^t)$ from (6) that uses the $t$-th update $\theta^t$ for $\theta$. The current estimate $\theta^t$ is then updated in the M-step: take $\theta^{t+1}$ to be the maximizer of $Q(\theta \mid \theta^t)$ with respect to $\theta$. Then return to the E-step.

3.1 Example 1: Gaussian model

Suppose $g(z_i \mid m_i, \phi)$ is Gaussian; that is, $Z_i \sim N(m_i, \sigma^2)$, $m_i \in A = \{\alpha, \beta\}$, and $\alpha < \beta$. The log-likelihood is

    L(\theta) = -\frac{n}{2} \ln(2\pi\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (z_i - m_i)^2 - \lambda \sum_{i=1}^{n-1} \delta(m_i, m_{i+1}) - \ln\{2[\exp(-\lambda) + 1]^{n-1}\}      (8)

The maximization step is given first. Differentiating $Q(\theta \mid \theta^t)$ with respect to $\alpha$ yields

    \frac{\partial Q}{\partial \alpha} = E_{m \mid z, \theta^t} \left[ \frac{1}{\sigma^2} \sum_{i=1}^{n} (z_i - \alpha) I(m_i = \alpha^t) \right]


where $I(\cdot)$ is the indicator function. Setting this equal to zero, we obtain the estimate

    \hat{\alpha}^{t+1} = \frac{ E_{m \mid z, \theta^t} \left[ \sum_{i=1}^{n} z_i I(m_i = \alpha^t) \right] }{ E_{m \mid z, \theta^t} \left[ \sum_{i=1}^{n} I(m_i = \alpha^t) \right] }      (9)

with an analogous expression for $\hat{\beta}^{t+1}$. In a like manner,

    \hat{\sigma}^2_{t+1} = \frac{1}{n} E_{m \mid z, \theta^t} \left[ \sum_{i=1}^{n} (z_i - m_i)^2 \right]      (10)

where $m_i \in \{\alpha^t, \beta^t\} \equiv A^t$. Finally,

    \hat{\lambda}^{t+1} = \ln \left[ \frac{n-1}{ E_{m \mid z, \theta^t} \left[ \sum_{i=1}^{n-1} \delta(m_i, m_{i+1}) \right] } - 1 \right]      (11)

Taking the expectations in (9)-(11) looks to be very difficult because they require summing over all $2^n$ possible configurations of $m$ for each summand. However, the Gibbs sampler, as described in Appendix A.2, allows us to approximate these expectations. Let $m_t^{(k)} = (m_{1,t}^{(k)}, m_{2,t}^{(k)}, \ldots, m_{n,t}^{(k)})'$ be the $k$-th realization from the posterior distribution $p(m \mid z, \theta^t)$ using the Gibbs sampler, $k = 1, 2, \ldots, K$, based on the current parameter values $\theta^t$. Let $a(m)$ be some function of $m$. Then, for large $K$,

    E_{m \mid z, \theta^t}[a(m)] \approx \frac{1}{K} \sum_{k=1}^{K} a(m_t^{(k)})      (12)

(Qian and Titterington, 1991). For example, let $a(m)$ be

    \sum_{i=1}^{n} z_i I(m_i = \alpha^t)

as in (9). Then, from (12),

    E_{m \mid z, \theta^t} \left[ \sum_{i=1}^{n} z_i I(m_i = \alpha^t) \right] \approx \frac{1}{K} \sum_{k=1}^{K} \sum_{i=1}^{n} z_i I(m_{i,t}^{(k)} = \alpha^t)

so, from (9) and (12),

    \hat{\alpha}^{t+1} = \frac{ \sum_{k=1}^{K} \sum_{i=1}^{n} z_i I(m_{i,t}^{(k)} = \alpha^t) }{ \sum_{k=1}^{K} \sum_{i=1}^{n} I(m_{i,t}^{(k)} = \alpha^t) }      (13)

with an analogous expression for $\hat{\beta}^{t+1}$. Likewise, from (10) and (12) we obtain


    \hat{\sigma}^2_{t+1} = \frac{1}{nK} \sum_{k=1}^{K} \sum_{i=1}^{n} (z_i - m_{i,t}^{(k)})^2      (14)

and, from (11) and (12),

    \hat{\lambda}^{t+1} = \ln \left[ \frac{(n-1)K}{ \sum_{k=1}^{K} \sum_{i=1}^{n-1} \delta(m_{i,t}^{(k)}, m_{i+1,t}^{(k)}) } - 1 \right]      (15)

where recall that $\delta(m_{i,t}^{(k)}, m_{i+1,t}^{(k)})$ is defined by (1).
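Collected together, (13)-(15) form the complete M-step of the Monte Carlo EM-algorithm for the Gaussian model. A minimal sketch (function and variable names are ours), assuming the $K$ Gibbs-sampler realizations described in Appendix A.2 are already in hand as a $K \times n$ array:

```python
import numpy as np

def m_step_gaussian(z, M, alpha_t, beta_t):
    """One M-step of the Monte Carlo EM-algorithm for the Gaussian model.

    z : (n,) observed transect data
    M : (K, n) realizations m_t^(1..K) drawn from p(m | z, theta^t)
        by the Gibbs sampler of Appendix A.2; entries are alpha_t or beta_t
    Returns updated (alpha, beta, sigma2, lam) as in (13)-(15)."""
    K, n = M.shape
    in_alpha = (M == alpha_t)
    in_beta = (M == beta_t)
    alpha_new = np.sum(z * in_alpha) / np.sum(in_alpha)   # (13)
    beta_new = np.sum(z * in_beta) / np.sum(in_beta)      # analogous to (13)
    sigma2_new = np.sum((z - M) ** 2) / (n * K)           # (14)
    n_cp = np.sum(M[:, :-1] != M[:, 1:])                  # total change-points
    lam_new = np.log((n - 1) * K / n_cp - 1.0)            # (15)
    return alpha_new, beta_new, sigma2_new, lam_new
```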

3.2 Example 2: Poisson and binomial models

Suppose $g(z_i \mid m_i, \phi)$ is Poisson; that is, $Z_i \sim \mathrm{Poi}(m_i)$, where $m_i \in A \equiv \{\alpha, \beta\}$ and $0 \le \alpha < \beta$. The log-likelihood is

    L(\theta) = -\sum_{i=1}^{n} m_i + \sum_{i=1}^{n} z_i \ln(m_i) - \sum_{i=1}^{n} \ln(z_i!) - \lambda \sum_{i=1}^{n-1} \delta(m_i, m_{i+1}) - \ln\{2[\exp(-\lambda) + 1]^{n-1}\}

The maximization step is given first. Differentiating $Q(\theta \mid \theta^t)$ with respect to $\alpha$ yields

    \frac{\partial Q}{\partial \alpha} = E_{m \mid z, \theta^t} \left[ -\sum_{i=1}^{n} I(m_i = \alpha^t) + \frac{1}{\alpha} \sum_{i=1}^{n} z_i I(m_i = \alpha^t) \right]

Setting this equal to zero, we obtain the estimate (9) for $\hat{\alpha}^{t+1}$, with an analogous expression for $\hat{\beta}^{t+1}$; for $\hat{\lambda}^{t+1}$ we also obtain the same estimate (11). The iterative algorithm is similar to Example 1, where $\hat{\alpha}^{t+1}$ is the same as in (13) and $\hat{\lambda}^{t+1}$ is the same as in (15), except now we simulate $m$ using the Gibbs sampler with a model (3) that is a product of Poisson distributions rather than Gaussian distributions.

The same methods can be used if $g(z_i \mid m_i, \phi)$ has a binomial distribution; that is, $Z_i \sim \mathrm{Bin}(\eta_i, m_i)$, where $m_i \in A \equiv \{\alpha, \beta\}$, $0 \le \alpha < \beta \le 1$, and $\eta_i$ is the known sample size. The iterative algorithm is similar to the Gaussian and Poisson examples, where $\hat{\alpha}^{t+1}$ is the same as in (13) with an analogous expression for $\hat{\beta}^{t+1}$; for $\hat{\lambda}^{t+1}$ we also obtain (15). The only difference is that now we simulate $m$ using the Gibbs sampler with a model (3) that is a product of binomial distributions.
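In implementation terms, the three examples share everything except the data model $g$: the Gibbs sweeps and the updates for $\hat{\alpha}$, $\hat{\beta}$, and $\hat{\lambda}$ are reused unchanged. A sketch of that single switch point (names ours; assumes scipy is available):

```python
import numpy as np
from scipy import stats

def log_g(z, m, family, sigma2=None, size=1):
    """Log of g(z_i | m_i, phi) for the three data models of Section 3;
    this is the only piece that changes between the examples."""
    if family == "gaussian":
        return stats.norm.logpdf(z, loc=m, scale=np.sqrt(sigma2))
    if family == "poisson":
        return stats.poisson.logpmf(z, mu=m)
    if family == "binomial":  # m is the success probability, size = eta_i
        return stats.binom.logpmf(z, n=size, p=m)
    raise ValueError(family)
```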

3.3 Change-points

Once $\theta$ has been estimated for any of the examples given above, we have an estimate of the expected number of change-points from Theorem 1(ii) and (iii):

    \widehat{c(m)} = (n-1) \frac{\exp(-\hat{\lambda})}{\exp(-\hat{\lambda}) + 1}      (16)
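As a quick numerical check of (16): with the estimates $\hat{\lambda} = 2.678$ and $n = 300$ reported later in Section 5.1, the formula gives about 19.2 expected change-points. A one-function sketch (name ours):

```python
import numpy as np

def expected_change_points(lam_hat, n):
    """Estimated expected number of change-points, equation (16)."""
    return (n - 1) * np.exp(-lam_hat) / (np.exp(-lam_hat) + 1.0)

print(expected_change_points(2.678, 300))  # ~19.22, as in Section 5.1
```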


To estimate where the change-points actually occur requires that we estimate each $m_i$ through the image restoration techniques outlined in the next section.

4. Image restoration and change-points

After obtaining the estimate $\hat{\theta}$, we wish to estimate the realized value $m$. The first estimate we consider is the MAP estimate; we then consider the MMPP estimate.

4.1 Maximum a posteriori (MAP) estimates using simulated annealing

The maximum a posteriori (MAP) estimate is the Bayes estimate for a 0-1 loss function, where the loss is 0 only if the estimated image is exactly the true image, and 1 otherwise. The MAP estimate is the maximizer of $p(m \mid z, \theta)$ over all $m$ (Geman and Geman, 1984), and in the context of unknown $\theta$ we use $p(m \mid z, \hat{\theta})$. Thus, a straightforward procedure would be to maximize $p(m \mid z, \hat{\theta})$ over all $2^n$ possibilities for $m$, but this is computationally prohibitive if $n$ is even moderately large, so we propose using simulated annealing (Appendix A.2). Simulated annealing is an iterative process in which $m$ is simulated from the rescaled posterior conditional probabilities, and an annealing temperature slowly lets each iteration settle toward the value of $m$ with maximum posterior probability.

4.2 Maximum marginal posterior probability (MMPP) estimates

Suppose we have the following loss function:

    l(m, \hat{m}) \equiv \sum_{i=1}^{n} I(m_i \neq \hat{m}_i)

where $I(\cdot)$ is the indicator function. This loss function counts the number of misclassifications of the $m_i$. The Bayes decision rule is obtained by finding the $\hat{m}_i$ that maximizes $\Pr(m_i \mid z, \hat{\theta})$ (Abend, 1968). Using the Gibbs sampler, we obtained 2000 realizations from the posterior distribution $p(m \mid z, \hat{\theta})$. After obtaining these samples, we estimated the marginal posterior probability $\Pr(m_i = \hat{\beta} \mid z, \hat{\theta})$ for all locations by taking the percentage of times that $m_i = \hat{\beta}$ among the 2000 realizations. The MMPP estimate is then simply: $\hat{m}_i = \hat{\beta}$ if $\Pr(m_i = \hat{\beta} \mid z, \hat{\theta}) > 0.5$, otherwise $\hat{m}_i = \hat{\alpha}$. The estimates of $\Pr(m_i = \hat{\beta} \mid z, \hat{\theta})$ allow us to assess our confidence in the occurrence of a change-point for both MAP and MMPP estimation; see the examples in the following section.
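Computationally, the MMPP estimate reduces to column means of the Gibbs-sampler output followed by thresholding at 0.5. A minimal sketch (names ours), assuming the $K$ realizations are stored as a $K \times n$ array:

```python
import numpy as np

def mmpp_estimate(M, alpha_hat, beta_hat):
    """MMPP estimate from K Gibbs-sampler realizations M (K x n):
    Pr(m_i = beta | z, theta-hat) is estimated by the fraction of
    realizations with m_i = beta, and each site is then classified
    by thresholding at 0.5."""
    p_beta = np.mean(M == beta_hat, axis=0)  # (n,) marginal probabilities
    m_hat = np.where(p_beta > 0.5, beta_hat, alpha_hat)
    return m_hat, p_beta
```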

5. Estimating change-points in grassland transects

We took a set of data from a 30 m transect of 300 contiguous 10 x 10 cm plots that ran through a chalk grassland in the Gerendal Nature Reserve, The Netherlands. A more complete description of these data may be found in Ver Hoef et al. (1989). Along the 30 m transect, each 50 cm segment was photographed from ground level through the 10 cm wide strip of vegetation. Each segment was backed by a white screen with a vertical scale on it. The 60 photographs were each digitized, and the computer-based images were used to compute the per cent of non-white pixels in a vertical column 1.5 m tall by 10 cm wide, making a total of n = 300 observations of vegetation density along the transect. The vegetation density data were log-transformed because they were skewed towards large values. The presence/absence of all species was also recorded in the 10 x 10 cm plots, creating, for each species, a data set of 300 values of 0-1 binary variables along the transect.

5.1 Gaussian model

For the vegetation density data, a mixture of two Gaussian models is justified after inspecting a kernel estimate of the marginal distribution of the data (Fig. 1). An S-Plus kernel density estimate with a Gaussian kernel and a window width of 0.2 was used to obtain Fig. 1. The data appear to be bimodal, suggesting a mixture of two bell-shaped distributions. From Fig. 1, it appears that $\alpha \approx 2.4$ (the first peak in Fig. 1) and $\beta \approx 2.7$ (the second peak in Fig. 1).

After applying the EM-algorithm for Gaussian data, as described in Example 1 of Section 3.1, we obtained the following estimates: $\hat{\alpha} = 2.285$, $\hat{\beta} = 2.655$, $\hat{\sigma}^2 = 0.03332$, and $\hat{\lambda} = 2.678$. From (16), the expected number of change-points is 19.22. Figure 2 shows the convergence of all four parameter estimates with each EM-algorithm step, using the Gibbs sampler to compute expectations as described in Section 3. For the Gibbs sampler, one realization of the posterior distribution was kept after each successive 300 full sweeps through the data, and K = 200 such realizations were stored. From these 200 realizations, the expectation in (9) was computed for each step of the EM-algorithm. Figure 2a shows that the initial estimate for $\alpha$ was 2.4, as guessed from Fig. 1; it then shows (13) for t = 0, 1, ..., 49 iterations of the EM-algorithm. Figures 2(b-d) show that the estimates of $\beta$, $\sigma^2$, and $\lambda$ had also stabilized by t = 49.

The MAP estimate of m, obtained using simulated annealing as described in Appendix A.2, is given in Fig. 3 along with the raw data. The posterior probability that $\hat{m}_i = \hat{\beta}$ for each i, obtained using the Gibbs sampler, is also given. In Fig. 3, the $\{m_i\}$ and $\{z_i\}$ are positioned with respect to the left ordinate, while the posterior probabilities are positioned with respect to the right ordinate, and

Figure 1. Kernel density estimation for the vegetation density data using an S-Plus density function with a Gaussian kernel and window width = 0.2.


Figure 2. Iteration values from the EM-algorithm for the vegetation density data, Example 1. (A) $\hat{\alpha}^t$, (B) $\hat{\beta}^t$, (C) $\hat{\sigma}^2_t$, and (D) $\hat{\lambda}^t$; t = 0, 1, 2, ..., 49.

the right ordinate is scaled so that its 0 is equal to $\hat{\alpha}$ of the left ordinate and its 1 is equal to $\hat{\beta}$ of the left ordinate. The posterior probabilities serve to show how `confident' we are that we have estimated $m_i$ correctly. Figure 3 shows that the areas where change-points have been estimated are subject to the most uncertainty. That is, the edges of patches are hard to distinguish exactly. For example, in the change from the first $\hat{\alpha}$-patch to the $\hat{\beta}$-patch in Fig. 3, location 40 was estimated to come from the $\hat{\beta}$-patch, but the marginal posterior probability indicates an only somewhat smaller probability that it belongs to the $\hat{\alpha}$-patch. Thus, we are not very certain whether the


Figure 3. MAP estimates (+) and marginal posterior probabilities $\Pr(m_i = \beta \mid z)$ for the vegetation density data, Example 1. Observed data (diamonds). The MAP estimate is shown (+), and the MMPP estimate is $\hat{m}_i = \hat{\beta}$ if the marginal posterior probability at transect location i is greater than 0.5, otherwise $\hat{m}_i = \hat{\alpha}$.


Figure 4. Iteration values from the EM-algorithm for the Festuca rubra presence/absence data, Example 2. (A) $\hat{\alpha}^t$, (B) $\hat{\beta}^t$, and (C) $\hat{\lambda}^t$; t = 0, 1, 2, ..., 69.

change-point should occur at location 40 or location 41. Note that $\hat{\lambda} = 2.678$ is quite large, putting a strong emphasis on large patches with few change-points. Therefore, location 53, with a value of 2.27, was estimated to come from a patch with mean $\hat{\beta} = 2.655$, rather than one with a mean of $\hat{\alpha} = 2.285$, because it was surrounded by neighbours with large values. Other similar examples are seen in Fig. 3. Also note that location 181 was estimated by MAP to have mean $\hat{\beta}$, but from the Gibbs sampler the posterior probability is less than 0.5 that $\hat{m}_i = \hat{\beta}$.

The MMPP estimate of m can be determined from Fig. 3 as well. The marginal posterior probability that $\hat{m}_i = \hat{\beta}$ is given for each i, so whenever it is greater than 0.5, the MMPP estimate is $\hat{m}_i = \hat{\beta}$. It is interesting to compare the MMPP estimate to the MAP estimate. The MMPP estimate is, for this example, the same as the MAP estimate except at location 181. This interesting difference is due to the different loss functions. The MAP estimate is the single best estimate, for which location 181 was estimated to be $\hat{\beta}$. However, there may be many other estimates with relatively high probability that show up in the sample from the Gibbs sampler, the ensemble of which makes location 181 more likely to be $\hat{\alpha}$.


Figure 5. MAP estimates (+) and marginal posterior probabilities $\Pr(m_i = \beta \mid z)$ for the Festuca rubra presence/absence data, Example 2. Observed data (diamonds). The MAP estimate is shown (+), and the MMPP estimate is $\hat{m}_i = \hat{\beta}$ if the marginal posterior probability at transect location i is greater than 0.5, otherwise $\hat{m}_i = \hat{\alpha}$.

5.2 Binary model

As another example, we examine the presence/absence of Festuca rubra; that is, the data are binary 0-1. Here, we have a binomial model for the data, with all sample sizes $\{\eta_i\}$ equal to 1. Figure 4 shows the stages in the EM-algorithm, as described in Section 3.2, and the stabilization of the parameter estimates. The final estimates were $\hat{\alpha} = 0.182$, $\hat{\beta} = 0.635$, and $\hat{\lambda} = 3.375$.

The MAP estimate of m, obtained using simulated annealing as described in Appendix A.2, is given in Fig. 5 along with the raw data. The posterior probability that $\hat{m}_i = \hat{\beta}$ for each i is also given. Figure 5 is similar to Fig. 3; the $\{m_i\}$ and $\{z_i\}$ are positioned with respect to the left ordinate, while the marginal posterior probabilities are positioned with respect to the right ordinate, and the right ordinate is scaled so that its 0 is equal to $\hat{\alpha}$ of the left ordinate and its 1 is equal to $\hat{\beta}$ of the left ordinate. Figure 5 shows where change-points have been estimated and their uncertainty. Notice, for example, that beginning around plot 130 there is a cluster of 0's. However,


Figure 6. Convergence of the Gibbs sampler for location 87 in Fig. 3. 5000 full sweeps of the data were grouped into batches of 50, and the proportion of samples that were estimated to be $\hat{\beta}$ was plotted.

because of the rather large value of $\hat{\lambda}$, the $m_i$'s estimated for these locations are $\hat{\beta}$, but there is considerable uncertainty about these estimates. Other instances can be seen along the transect.

The MMPP estimate of m can be determined from Fig. 5 as well. The marginal posterior probability that $\hat{m}_i = \hat{\beta}$ is given for each i, so whenever it is greater than 0.5, the MMPP estimate is $\hat{m}_i = \hat{\beta}$. In contrast to the previous example, the MMPP estimate for this example creates four new change-points in the area between location 190 and location 200. Here, some marginal posterior probabilities that $m_i = \hat{\alpha}$ are greater than 0.5, but the MAP estimate classifies these locations as $\hat{m}_i = \hat{\beta}$.

6. Extensions to multiple patch values

It is possible to extend the methods outlined in this paper to the case where the patches take on more than two values, e.g., $A = \{\alpha_1, \alpha_2, \ldots, \alpha_p\}$. We note that (1) is simply $\delta(m_i, m_{i+1}) \equiv I(m_i \neq m_{i+1})$, an indicator function. As long as the prior probability remains a function of only the number of change-points, and not the actual values within the patches, the normalizing constant becomes

    \omega(\lambda) = p[(p-1)\exp(-\lambda) + 1]^{n-1}

(see the proof of Theorem 1 in the Appendix). The last term of the likelihood (8) now becomes $\ln\{p[(p-1)\exp(-\lambda) + 1]^{n-1}\}$, so the estimates (9) and (10) do not change in the EM-algorithm, but the estimator for $\lambda$ becomes


    \hat{\lambda}^{t+1} = \ln(p-1) + \ln \left[ \frac{n-1}{ E_{m \mid z, \theta^t} \left[ \sum_{i=1}^{n-1} \delta(m_i, m_{i+1}) \right] } - 1 \right]

in the EM-algorithm.
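A sketch of this p-state update (names ours), again approximating the expectation by the Gibbs-sampler average as in (15); with p = 2 it reduces to (15):

```python
import numpy as np

def lam_update_multistate(M, p):
    """Update for lam with p patch values, estimated from the (K, n)
    array M of Gibbs-sampler realizations as in (15)."""
    K, n = M.shape
    mean_cp = np.sum(M[:, :-1] != M[:, 1:]) / K  # estimate of E[sum of deltas]
    return np.log(p - 1.0) + np.log((n - 1) / mean_cp - 1.0)
```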

Also, simulated annealing and the Gibbs sampler can easily accommodate more than two states (see, for example, Cressie, 1993, p. 512); thus we still have all the techniques needed to carry out our analysis.

7. Conclusions

The methods outlined in this paper were useful in estimating where change-points occur in grassland transects. The EM-algorithm was successfully used for Gaussian and binomial models. The parametric empirical Bayes approach has an advantage over the moving-window methods mentioned in the Introduction because here we do more than estimate where change-points occur; through the posterior distribution we also attach some uncertainty to our assessment of where a change-point occurs. We used two methods, MAP and MMPP estimation, for reconstructing the hidden process. Both methods are computationally intensive; the MAP estimate uses simulated annealing and the MMPP estimate uses the Gibbs sampler. Both methods gave similar results. For ecologists, the choice of method depends on which loss function is most relevant. In general, the MAP estimate looks for a global solution and tends to be smoother (fewer change-points, as in the binary example), while the MMPP estimate looks locally. It seems ecologists will generally be looking for the single best `image', rather than minimizing misclassification. For these reasons, we think that MAP estimation will generally be preferred over MMPP estimation; however, Högmander and Møller (1995) prefer MMPP. Ultimately, the choice belongs to the investigator and, using our approach, it is possible to obtain both estimates.

Acknowledgements Financial support for this work was provided to Ver Hoef by Federal Aid in Wildlife Restoration to the Alaska Department of Fish and Game and through funds from the North Atlantic Treaty Organization (NATO). Cressie's research was supported by the Office of Naval Research (N00014-93-1-0001) and the US Environmental Protection Agency (CR822919-01-0).

References

Abend, K. (1968) Compound decision procedures for unknown distributions and dependent states of nature. In Pattern Recognition, L. Kanal (ed.), Thompson, Washington DC, pp. 207–49.

Besag, J.E. (1974) Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society B, 36, 192–236.

Besag, J.E. (1986) On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society B, 48, 259–302.

Cressie, N.A.C. (1993) Statistics for Spatial Data, revised edition, Wiley, New York.

Dempster, A.P., Laird, N.M. and Rubin, D.B. (1977) Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39, 1–38.

Fortin, M. (1994) Edge detection algorithms for two-dimensional ecological data. Ecology, 75, 956–65.

Gelman, A. and Rubin, D.B. (1992) Inference from iterative simulation using multiple sequences. Statistical Science, 7, 457–511.

Geman, S. and Geman, D. (1984) Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721–41.

Geyer, C.J. (1992) Practical Markov chain Monte Carlo. Statistical Science, 7, 473–511.

Greig, D.M., Porteous, B.T. and Seheult, A.H. (1989) Exact maximum a posteriori estimation for binary images. Journal of the Royal Statistical Society B, 51, 271–79.

Högmander, H. and Møller, J. (1995) Estimating distribution maps from atlas data using methods of statistical image analysis. Biometrics, 51, 393–404.

Kareiva, P. (1994) Space: The final frontier for ecological theory. Ecology, 75, 1.

Kershaw, K.A. and Looney, J.H.H. (1985) Quantitative and Dynamic Plant Ecology, Edward Arnold, London.

Lombard, F. (1989) Some recent developments in the analysis of changepoint data. South African Statistical Journal, 23, 1–21.

Ludwig, J.A. and Cornelius, J.M. (1987) Locating discontinuities along ecological gradients. Ecology, 68, 448–50.

Molofsky, J. (1994) Population dynamics and pattern formation in theoretical populations. Ecology, 75, 30–9.

Morris, C.N. (1983) Parametric empirical Bayes inference: Theory and applications. Journal of the American Statistical Association, 78, 47–55.

Orlóci, L. and Orlóci, M. (1990) Edge detection in vegetation: Jornada revisited. Journal of Vegetation Science, 1, 311–24.

Qian, W. and Titterington, D.M. (1990) Parameter estimation for hidden Gibbs chains. Statistics and Probability Letters, 10, 49–58.

Qian, W. and Titterington, D.M. (1991) Estimation of parameters in hidden Markov models. Philosophical Transactions of the Royal Society of London, Series A, 337, 407–28.

Qian, W. and Titterington, D.M. (1992) Stochastic relaxations and EM algorithms for Markov random fields. Journal of Statistical Computation and Simulation, 40, 55–69.

Ripley, B.D. (1988) Statistical Inference for Spatial Processes, Cambridge University Press, Cambridge.

Tanner, M.A. (1993) Tools for Statistical Inference, Springer-Verlag, New York.

Thomson, J.D., Weiblen, G., Thomson, B.A., Alfaro, S. and Legendre, P. (1996) Untangling multiple factors in spatial distributions: lilies, gophers, and rocks. Ecology, 77, 1698–715.

Tilman, D. (1994) Competition and biodiversity in spatially structured habitats. Ecology, 75, 2–16.

Ver Hoef, J.M., Glenn-Lewin, D.C. and Werger, M.J.A. (1989) Relationship between horizontal pattern and vertical structure in a chalk grassland. Vegetatio, 83, 147–55.

Webster, R. (1973) Automatic soil-boundary location from transect data. Mathematical Geology, 5, 27–37.

Wierenga, P.J., Hendricks, J.M.H., Nash, M.H., Ludwig, J.A. and Daugherty, L.A. (1987) Variation of soil and vegetation with distance along a transect in the Chihuahuan Desert. Journal of Arid Environments, 13, 53–63.


APPENDIX

A.1 Proof of Theorem 1

To prove (i), notice that the probability $\pi(m \mid \lambda)$ does not depend on the values of $\alpha$ and $\beta$, only on their pattern, which is evident from (7). For any given number of change-points $c(m)$, there are $\binom{n-1}{c(m)}$ different ways to obtain $c(m)$ change-points, each with equal probability. For fixed locations of the change-points, all $\alpha$ can be swapped with $\beta$ without changing the probability, so that there are

    2 \sum_{c(m)=0}^{n-1} \binom{n-1}{c(m)} = 2^n

different patterns for $m$, and

    \omega(\lambda) = 2 \sum_{c(m)=0}^{n-1} \binom{n-1}{c(m)} \exp[-\lambda c(m)] = 2[\exp(-\lambda) + 1]^{n-1}

To prove (ii), note that the probability for the number of change-points is

    f[c(m)] = \frac{ 2 \binom{n-1}{c(m)} \exp[-\lambda c(m)] }{ 2[\exp(-\lambda) + 1]^{n-1} } = \binom{n-1}{c(m)} \kappa^{c(m)} (1 - \kappa)^{n-1-c(m)}

where $\kappa = e^{-\lambda}/(e^{-\lambda} + 1)$. Result (iii) follows immediately from (ii).

For the case where we have multiple patch values $A = \{\alpha_1, \alpha_2, \ldots, \alpha_p\}$, we have a slight modification to the proof of Theorem 1(i). Again, fix the number of change-points at $c(m)$. Then there are $p$ choices for all $m_i$ before the first change-point, and $p - 1$ choices for all $m_i$ between each successive change-point. Thus, there are

    p \sum_{c(m)=0}^{n-1} \binom{n-1}{c(m)} (p-1)^{c(m)} = p^n

possibilities. Then,

    \omega(\lambda) = p \sum_{c(m)=0}^{n-1} \binom{n-1}{c(m)} [(p-1)e^{-\lambda}]^{c(m)} = p[(p-1)\exp(-\lambda) + 1]^{n-1}
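Both closed forms for $\omega(\lambda)$ can be checked by brute-force enumeration for small $n$, which is a useful sanity test of the combinatorial argument. A sketch (names ours):

```python
import itertools
import numpy as np

def omega_brute(n, lam, p=2):
    """Brute-force normalizing constant: sum of exp(-lam * c(m)) over
    all p^n patterns, to check Theorem 1(i) and its p-state extension."""
    total = 0.0
    for m in itertools.product(range(p), repeat=n):
        c = sum(m[i] != m[i + 1] for i in range(n - 1))
        total += np.exp(-lam * c)
    return total

n, lam, p = 8, 1.3, 3
print(omega_brute(n, lam, p))                       # enumeration
print(p * ((p - 1) * np.exp(-lam) + 1) ** (n - 1))  # closed form; agrees
```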


A.2 Simulated annealing and Gibbs sampler

Here we give the details of simulated annealing and the Gibbs sampler for the models presented in this paper. Define the joint conditional probability distribution

    p_T(m \mid z, \theta^t) \equiv [\omega(T)]^{-1} [p(m \mid z, \theta^t)]^{1/T}

where it is possible that $T$ is a function (i.e. for simulated annealing, $T \to 0$), and $\omega(T)$ is a normalizing constant that ensures that $p_T(m \mid z, \theta^t)$ is a probability. We use the Gibbs sampler in the EM-algorithm, where $\theta^t$ changes with each EM-iteration, but for simulated annealing we use $\theta^t = \hat{\theta}$ as estimated by the EM-algorithm; in that case $\theta^t$ does not change, but we retain the superscript $t$ for notational consistency with earlier sections. Now, because $p(m \mid z, \theta^t)$ is a Markov chain, $p_T(m \mid z, \theta^t)$ is also a Markov chain (Geman and Geman, 1984). Begin with initial estimates $\hat{m}_{i,t,0}$, $i = 1, 2, \ldots, n$ (such as taking $\hat{m}_{i,t,0}$ to be either $\hat{\alpha}$ or $\hat{\beta}$ randomly with equal probability). Let $r$ denote the $r$-th iteration, consisting of simulating $\{\hat{m}_{i,t,r}\}$ using parameter $\theta^t$; notice again that we retain $t$ in the subscript to be consistent with earlier notation, although here $t$ does not change. To simulate a realization from $p_T(m \mid z, \theta^t)$, all $n$ sites are visited repeatedly in a fixed order, here $i = 1, 2, \ldots, n$. Then $\hat{m}_{i,t,r}$, $1 < i < n$, is chosen to be $\hat{\beta}$ with probability

    p_T(\hat{m}_{i,t,r} = \hat{\beta} \mid \hat{m}_{i-1,t,r}, \hat{m}_{i+1,t,r-1}, z_i) = \frac{ (p_\beta)^{1/T} }{ (p_\alpha)^{1/T} + (p_\beta)^{1/T} }

where $p_\beta = g(z_i \mid \hat{m}_{i,t,r} = \hat{\beta})\, \pi(\hat{m}_{i,t,r} = \hat{\beta} \mid \hat{m}_{i-1,t,r}, \hat{m}_{i+1,t,r-1})$ and $p_\alpha = g(z_i \mid \hat{m}_{i,t,r} = \hat{\alpha})\, \pi(\hat{m}_{i,t,r} = \hat{\alpha} \mid \hat{m}_{i-1,t,r}, \hat{m}_{i+1,t,r-1})$. Obviously, $p_T(\hat{m}_{i,t,r} = \hat{\alpha} \mid \hat{m}_{i-1,t,r}, \hat{m}_{i+1,t,r-1}, z_i) = 1 - p_T(\hat{m}_{i,t,r} = \hat{\beta} \mid \hat{m}_{i-1,t,r}, \hat{m}_{i+1,t,r-1}, z_i)$. For $i = 1$ or $i = n$, this formula is modified in the obvious way. Iterating produces a series of estimates $\hat{m}_{t,0}, \hat{m}_{t,1}, \ldots, \hat{m}_{t,R}$, where the iteration is stopped at step $R$ and $\hat{m}_{t,r} \equiv (\hat{m}_{1,t,r}, \hat{m}_{2,t,r}, \ldots, \hat{m}_{n,t,r})'$.

Simulated annealing takes $T$ to be a function $T(r)$, called the annealing schedule, which determines the rate at which $T(r)$ approaches zero. As $T \to 0$, $p_T(m \mid z, \theta^t)$ becomes concentrated on the MAP estimate $\hat{m}$ (Geman and Geman, 1984). Note that we hold $T(r)$ constant for a whole sweep of the data; that is, $T(r)$ changes only when we start a new iteration at $\hat{m}_{1,t,r}$. With an appropriate annealing schedule, $\hat{m}_{t,r}$ converges to the MAP estimate as $r \to \infty$. Geman and Geman (1984) show that theoretically a log annealing schedule $T(r) = \gamma / \ln r$ achieves convergence, where $\gamma$ is a tuning constant, suggested to be either 3 or 4. However, this converges very slowly, so we used a linear annealing schedule, $T(r) = \gamma / r$, with $\gamma = 2000$ for the Gaussian data and $\gamma = 10\,000$ for the Bernoulli data (obtained by trial and error). There is some risk that the algorithm may get caught in a local maximum; the faster the annealing schedule, the more the algorithm resembles the iterated conditional modes algorithm of Besag (1986). In our version, we still have the opportunity of avoiding some local entrapment, a feature that we like. The algorithm was terminated when 10 subsequent full sweeps of the data produced no further changes, which usually occurred by $r \approx 20\,000$ full sweeps through the data.

The Gibbs sampler is identical to simulated annealing, except that $T(r) = 1$ for all $r$. We started $\hat{m}_{t,0}$ with components randomly assigned $\hat{\alpha}$ or $\hat{\beta}$, let the algorithm run until $r = 300$, and took this, $\hat{m}_{t,300}$, to be one realization of the Gibbs sampler, call it $\hat{m}_t^{(1)}$. We obtained multiple realizations by repeating this process to obtain $\hat{m}_t^{(1)}, \hat{m}_t^{(2)}, \ldots, \hat{m}_t^{(K)}$, up to $K = 2000$.
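The single-site update above is straightforward to code: for each site, combine the data term $g(z_i \mid \cdot)$ with a prior penalty of $-\lambda$ for each neighbour that disagrees with the proposed state, temper by $1/T$, and sample. A sketch of one full sweep (names ours; log_g is any per-site log-density, such as the one sketched after Section 3.2):

```python
import numpy as np

rng = np.random.default_rng(1)

def sweep(m, z, log_g, lam, states, T=1.0):
    """One full sweep of the single-site sampler in Appendix A.2 for the
    two-state model. With T = 1 this is a Gibbs sweep; decreasing T
    toward 0 over sweeps gives simulated annealing."""
    a, b = states
    n = len(m)
    for i in range(n):
        logp = np.empty(2)
        for s, state in enumerate(states):
            lp = log_g(z[i], state)                 # data term
            if i > 0:
                lp -= lam * (state != m[i - 1])     # prior: left neighbour
            if i < n - 1:
                lp -= lam * (state != m[i + 1])     # prior: right neighbour
            logp[s] = lp / T                        # temper by 1/T
        logp -= logp.max()                          # guard against underflow
        p_b = np.exp(logp[1]) / (np.exp(logp[0]) + np.exp(logp[1]))
        m[i] = b if rng.random() < p_b else a
    return m
```

With T = 1, repeated sweeps produce the Gibbs-sampler realizations used in (12)-(15) and in Section 4.2; replacing T with a decreasing schedule T(r) turns the same loop into simulated annealing for the MAP estimate.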

264

Ver Hoef and Cressie

A.3 Convergence of simulated annealing and Gibbs sampler

To assess convergence of simulated annealing, we restarted the procedure about 30 times. In a large majority of cases, we obtained the results that are given in the text. Occasionally, we obtained other estimates, but we simply observed that the posterior distribution evaluated at those estimates was less than at the one given, so we could discard them as not being the maximum. However, it is theoretically possible that we have only attained a broad local maximum and missed a very narrow global maximum.

For the Gibbs sampler, we note that there are two main problems. One is the `burn-in' period; for example, see the `witch's hat' distribution (Tanner, 1993, p. 115). Secondly, there may be multiple modes in the distribution, and it may be difficult for the Gibbs sampler to move easily among modes and give a representative sample from the posterior distribution if the sampler is too short. Because we had so many parameters (300), we did not use the more elaborate techniques (see Gelman and Rubin, 1992, and Geyer, 1992, and the following discussion), but simply observed individual estimates through time. For example, consider location 87 from Fig. 3, which may exhibit multimodality of the posterior because some samples (about 70%) had location 87 as $\hat{\beta}$ while other samples (about 30%) had it as $\hat{\alpha}$. Figure 6 shows one long Gibbs sampler run for location 87 with 5000 full sweeps through the data. Then, in batches of 50, we computed the number of times that the estimate was $\hat{\beta}$. Notice that the sampler seems to be `burned in' after 100 full sweeps through the data. It is also interesting to note that at around simulation 2700 (batch 54) the parameter is largely estimated to be $\hat{\alpha}$, but only 100 sweeps later (batch 56) it is mostly estimated to be $\hat{\beta}$. This demonstrates that, for this model, the sampler can swing rather quickly between different modes. We made plots similar to Fig. 6 for many of the parameters and determined that it was safe to accept a sample after a `burn-in' period of 300 sweeps. To ensure independence of samples, we then restarted the Gibbs sampler from random starting points.
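The batch diagnostic behind Fig. 6 is only a few lines. A sketch (name ours):

```python
import numpy as np

def batch_proportions(samples, batch=50):
    """Fig. 6-style diagnostic: fraction of draws classifying one site as
    beta-hat, in consecutive batches, to judge burn-in and mode-switching.
    samples is a 0/1 vector: 1 if the draw classified the site as beta."""
    samples = np.asarray(samples)
    k = len(samples) // batch
    return samples[: k * batch].reshape(k, batch).mean(axis=1)
```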

Biographical sketches Jay M. Ver Hoef obtained a BS in Botany from Colorado State University in 1979, an MS in Botany from the University of Alaska, Fairbanks in 1985, and a co-major PhD in both Statistics and EEB (Ecology and Evolutionary Biology) at Iowa State University in 1991. Since then he has been a biometrician with the Alaska Department of Fish and Game in Fairbanks. He acts as a consulting statistician on a variety of wildlife research and management projects, and he continues his research in applying spatial statistical methods and empirical Bayesian methods to wildlife and environmental data. He is also an adjunct faculty member with the Department of Mathematical Sciences at the University of Alaska, Fairbanks. Noel A.C. Cressie received the BSc degree with First Class Honors in Mathematics from the University of Western Australia and the MA and PhD degrees in Statistics from Princeton University in 1973 and 1975, respectively. In 1975 and 1976 he held postdoctoral and lectureship positions at the Ecole des Mines de Paris, France, and Imperial College, London, England, respectively. Between 1976 and 1983 he was Lecturer and Senior Lecturer at The Flinders University of South Australia. From 1983 he has been Professor of Statistics and, since 1993, Distinguished Professor in Liberal Arts and Sciences at Iowa State University. He is also a faculty member of the Ecology and Evolutionary Biology Program at Iowa State University. His research interests are in statistical modelling and analysis of spatiotemporal data, including statistical image analysis and remote sensing.
