Computational Statistics & Data Analysis 40 (2002) 801–824
www.elsevier.com/locate/csda

Improved estimation of clutter properties in speckled imagery

Francisco Cribari-Neto^a,∗, Alejandro C. Frery^b, Michel F. Silva^c

^a Departamento de Estatística, Universidade Federal de Pernambuco, Cidade Universitária, Recife, PE 50740-540, Brazil
^b Centro de Informática, Universidade Federal de Pernambuco, Cidade Universitária, Recife, PE 50732-970, Brazil
^c Departamento de Estatística, Universidade de São Paulo, Caixa Postal 66281, São Paulo, SP 05315-970, Brazil

Received 1 March 2001; received in revised form 1 February 2002

Abstract

This paper's aim is to evaluate the effectiveness of bootstrap methods in improving estimation of clutter properties in speckled imagery. Estimation is performed by standard maximum likelihood methods. We show that estimators obtained this way can be quite biased in finite samples, and develop bias correction schemes using bootstrap resampling. In particular, we propose a bootstrapping scheme which is an adaptation of that proposed by Efron (J. Amer. Statist. Assoc. 85 (1990) 79). The proposed bootstrap does not require the quantity of interest to have closed form, as does Efron's original proposal. The adaptation we suggest is particularly important since the maximum likelihood estimator of interest does not have a closed form. We show that this particular bootstrapping scheme outperforms alternative bias reduction mechanisms, thus delivering more accurate inference. We also consider interval estimation using bootstrap methods, and show that a particular parametric bootstrap-based confidence interval is typically more reliable than both the asymptotic confidence interval and other bootstrap-based confidence intervals. An application to real data is presented and discussed. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Bias; Bootstrap; Maximum likelihood estimation; Speckle; Synthetic aperture radar

1. Introduction

Speckle noise appears in images obtained with coherent illumination, e.g., B-scan ultrasound, sonar and synthetic aperture radar (SAR) imagery. This noise deviates

Corresponding author. E-mail addresses: [email protected] (F. Cribari-Neto), [email protected] (A.C. Frery).

0167-9473/02/$ - see front matter © 2002 Elsevier Science B.V. All rights reserved.
PII: S0167-9473(02)00102-0


from the classical model, which assumes that the corruption is a Gaussian noise, independent of the signal, that adds to the true value. The speckle noise enters the data in a multiplicative fashion, and in the amplitude and intensity formats it does not obey the Gaussian law. Speckle noise is known to make image analysis difficult, since its 'salt-and-pepper effect' tends to corrupt the information or ground truth. There are a number of different approaches for extracting the information contained in speckled imagery, the statistical framework being the one that has provided users with the best models and tools for image processing and analysis. We focus on a particular distribution which is useful for modeling speckled imagery, namely the G^0_A(α, γ, n) law, and consider the estimation of the parameters that index such a distribution. In particular, we evaluate the performance of maximum likelihood point estimation and of asymptotic confidence intervals based on maximum likelihood estimates. We also consider different schemes of finite-sample bias correction of point estimates based on the bootstrap, and bootstrap-based confidence intervals. The bootstrap is a computer-intensive method which avoids the need for analytical expansions when designing estimation approaches with superior finite-sample performance. This is a clear advantage of the bootstrap approach, since these expansions can be quite cumbersome. (See Ferrari and Cribari-Neto, 1998, for a comparison of computer-intensive and analytical bias reduction methods.) Our results suggest that a particular approach we propose for numerically bias-correcting the maximum likelihood point estimates can deliver substantial finite-sample improvement over the original maximum likelihood parameter estimates and over other bootstrap-based bias-corrected estimation procedures.
Our proposal is based on the bootstrapping scheme designed by Efron (1990), but unlike the original proposal our method does not require the estimator of interest to have a closed-form expression. The results also reveal that BCa bootstrap confidence intervals typically display better coverage of the true parameter than asymptotic intervals when the sample size is small. Therefore, image processing and analysis based on maximum likelihood methods can be substantially improved by using bootstrap methods when speckle noise degrades the data.

The paper unfolds as follows. In Section 2 the model of interest is presented, and it is applied to real data in Section 3. Section 4 discusses likelihood inference for the considered model, whereas Section 5 presents bootstrap methods for point and interval inference, with the proposal of a new bootstrapping scheme developed as an adaptation of the scheme proposed by Efron (1990). Section 6 presents and discusses the main numerical results. These results are applied in Section 7 to real SAR image analysis. Finally, Section 8 concludes the paper.

2. The model

Speckle noise is always associated with coherently illuminated scenes, such as those obtained by microwaves, laser, B-scan ultrasound, etc. This type of noise appears due to interference phenomena between the reflected signals.


The multiplicative model is a common framework used to explain the stochastic behavior of data obtained with coherent illumination. It assumes that the observations in this kind of image are the outcome of the product of two independent random variables, X and Y, representing the terrain backscatter and the speckle noise, respectively (Goodman, 1985). The former is frequently assumed real and positive, while the latter can be complex (if the considered image is in complex format) or positive real (amplitude or intensity formats). Only the amplitude format will be considered here, since the other two are seldom used in practice.

Complex speckle, Y_C, is usually assumed to follow a bivariate normal distribution, with independent and identically distributed components having zero mean and variance 1/2. Multilook amplitude speckle is obtained by taking the square root of the average of n independent samples of ‖Y_C‖². That is, the multilook amplitude speckle is given by

  Y = √(n^(−1)(‖Y_C1‖² + ··· + ‖Y_Cn‖²)).

It follows the square root of the gamma distribution, denoted here as Y ~ Γ^(1/2)(n, n), thus having density function

  f_Y(y) = (2 n^n / Γ(n)) y^(2n−1) exp{−n y²},   n ≥ 1, y > 0,

where Γ(·) is the gamma function. It is possible to show that the variance, the skewness and the excess kurtosis of this distribution converge to zero when n → ∞. The image processing unit has, to some extent, control over this parameter, and techniques that increase n, known as multilook processing, can be used to reduce the influence of speckle noise in the image at the expense of poorer spatial resolution.

The backscatter (or clutter) that describes the ground truth (X) may exhibit different degrees of homogeneity, and different models can be used to encompass this characteristic. Three main models have proved useful in modeling amplitude backscatter: a constant (whenever the area is homogeneous to the sensor), the square root of a gamma distributed random variable (for heterogeneous areas) and, more recently, the square root of the reciprocal of a gamma distributed random variable (for extremely heterogeneous areas). These three situations, whose adequacy to real data will be shown in Section 3, are unified by the square root of the generalized inverse Gaussian distribution, whose density function is given by

  f_X(x) = ((λ/γ)^(α/2) / K_α(2√(λγ))) x^(2α−1) exp{−γ/x² − λx²},   x > 0,   (1)

where K_α denotes the modified Bessel function of the third kind and order α, with the parameter space given by: (i) γ > 0, λ ≥ 0 if α < 0; (ii) γ > 0, λ > 0 if α = 0; (iii) γ ≥ 0, λ > 0 if α > 0. The distribution induced by the density given in Eq. (1) is denoted here as X ~ N^(−1/2)(α, γ, λ). For detailed properties and applications of the square of the N^(−1/2)(α, γ, λ) distribution (known as the generalized inverse Gaussian distribution), the reader is referred to Barndorff-Nielsen and Blaesild (1981) and Jørgensen (1982).

The square root of the generalized inverse Gaussian distribution can be reduced to several important particular cases, but the following three are of special interest for


modeling the backscatter in speckled images: (i) the square root of the gamma distribution, obtained when γ = 0, denoted here as Γ^(1/2)(α, λ); (ii) the distribution of the reciprocal of the square root of a gamma distributed random variable, obtained when λ = 0, denoted here as Γ^(−1/2)(α, γ); (iii) a constant value β.

Assuming that the speckle noise obeys the square root of gamma law, these three types of backscatter lead to the following three distributions for the return Z (Frery et al., 1997):

(i) The amplitude K distribution, whose density function is

  f_Z(z) = (4 λ n z / (Γ(α)Γ(n))) (λ n z²)^((α+n)/2 − 1) K_(α−n)(2z√(λn)),   n ≥ 1, α, λ, z > 0.

This distribution is often seen in heterogeneous areas, such as forests, and is denoted K_A.

(ii) The model that will be discussed here, whose density is given in Eq. (2) below.

(iii) A scaled square root of gamma distribution, with density given by

  f_Z(z) = (2 n^n / (Γ(n) β^(2n))) z^(2n−1) exp{−n(z/β)²}.

This distribution is characteristic of homogeneous areas, such as crops, deforested spots, etc.

If X ~ N^(−1/2)(α, γ, λ) and Y ~ Γ^(1/2)(n, n) are independent random variables, then the product Z = XY has a distribution which is called amplitude G. This distribution will be denoted here by G_A(α, γ, λ, n). Its density function is

  f_Z(z) = (2 n^n (λ/γ)^(α/2) / (Γ(n) K_α(2√(λγ)))) z^(2n−1) ((γ + nz²)/λ)^((α−n)/2) K_(α−n)(2√(λ(γ + nz²))),   z > 0,

with n ≥ 1. The parameter space for (α, γ, λ) is the same as that for the generalized inverse Gaussian distribution.

This distribution for the amplitude response is quite general. However, estimators for its parameters are difficult to obtain by maximum likelihood methods. Frery et al. (1997) have shown that a particular case, namely when X ~ Γ^(−1/2)(α, γ), leads to a special distribution for Z, denoted here as G^0_A(α, γ, n). This distribution has the following attractive properties:

(i) its density only involves simple functions, since it is given by

  f_Z(z) = (2 n^n Γ(n − α) / (γ^α Γ(n) Γ(−α))) z^(2n−1) / (γ + nz²)^(n−α),   with n ≥ 1 and −α, γ, z > 0;   (2)


(ii) it allows the modeling of homogeneous, heterogeneous and very heterogeneous clutter; specifically, data from deforested areas, from primary forest and from urban areas are very well fitted by this distribution;

(iii) its cumulative distribution function is easily obtained, since the G^0_A(α, γ, n) distribution is readily seen to be proportional to the square root of the well-known Snedecor F_(2n,−2α) distribution (see Vasconcellos and Frery, 1996); it can be evaluated using the relation

  F_Z(t) = Υ_(2n,−2α)(−αt²/γ),   (3)

where Υ_(v1,v2) is the cumulative distribution function of an F_(v1,v2) distributed random variable. Since the F distribution arises in many important statistical problems, its cumulative distribution function Υ is obtainable in a wide variety of statistical tables and systems.

The parameter α in Eq. (2) describes the degree of roughness, with small values (say, α < −15) usually associated with homogeneous targets, like pasture, values in the [−15, −5] interval usually observed in heterogeneous clutter, like forests, and large values (−5 < α < 0, for instance) commonly seen when extremely heterogeneous areas are imaged. The γ parameter is related to the scale of the distribution, in the sense that if Z′ is G^0_A(α, 1, n) distributed then Z = √γ Z′ follows a G^0_A(α, γ, n) distribution.

The rth moment of the G^0_A(α, γ, n) distribution is given by

  E(Z^r) = (γ/n)^(r/2) Γ(−α − r/2) Γ(n + r/2) / (Γ(−α) Γ(n)),   α < −r/2, n ≥ 1;   (4)

when −r/2 ≤ α < 0 the rth order moment is infinite.

Fig. 1 shows three densities of the G^0_A(α, γ, n) distribution for the five looks (n = 5) case. These densities are normalized so that the expected value is 1 for every value of the roughness parameter. This is obtained using Eq. (4) to set the scale parameter

  γ* = γ(α, n) = n (Γ(−α) Γ(n) / (Γ(−α − 1/2) Γ(n + 1/2)))².

These densities illustrate the three typical situations described above: homogeneous areas (α = −15), heterogeneous clutter (α = −5), and an extremely heterogeneous target (α = −1.1).

Following Barndorff-Nielsen and Blaesild (1981), it is interesting to look at these densities on a log probability scale, particularly because the G^0_A distribution is closely related to the class of hyperbolic distributions (Frery et al., 1995). Fig. 2 shows the densities of the G^0_A(−3, 1, 1) and Gaussian N(3π/16, 1/2 − 9π²/256) distributions in semilogarithmic scale, along with their mean value μ = 3π/16. The parameters were chosen so that these distributions share the same mean and variance. The different decays of their tails in the semilogarithmic plot are evident: the former behaves logarithmically while the latter decays quadratically. This behavior ensures the ability of the G^0_A distribution to model data with extreme variability.


Fig. 1. Densities of the G^0_A(α, γ(α, 5), 5) distribution with α ∈ {−1.1, −5, −15} (solid line, dashes, dash–dot, respectively).

Fig. 2. Densities of the G^0_A(−3, 1, 1) (solid line) and N(3π/16, 1/2 − 9π²/256) (dashes) distributions in semilogarithmic scale, along with the mean value μ = 3π/16 (dash–dot).

3. The multiplicative model and real data

A comparison of different estimation techniques (maximum likelihood, various estimators based on the substitution principle, quantile-based estimators and an estimator


Fig. 3. Single look amplitude E-SAR airborne image over Oberpfaffenhofen, Germany.

based on a transformation procedure) is presented in Mejail et al. (2000). The scale parameter and the number of looks were assumed known. They showed that maximum likelihood estimation typically outperformed the other estimation methods in finite samples. All estimators tended to underestimate the true parameter value, thus displaying negative bias. Our simulation results (Section 6) also show that the maximum likelihood estimator has a tendency to underestimate the roughness parameter. We also note that the bias in the estimator of the γ parameter goes in the opposite direction. The systematic underestimation of α may lead to wrong conclusions, since the smaller the value of α, the more homogeneous the area; therefore, using the maximum likelihood estimator users may, e.g., take an urban area for a forest or a woodland for a crop field.

Fig. 3 shows a single look (n = 1) amplitude SAR image obtained by the E-SAR airborne sensor over the surroundings of München, Germany, originally of 1024 × 600 pixels. Several types of land use are visible in this image, markedly crops (where little or no texture is visible), forest (where there is some texture) and urban areas (where the texture is intense). Samples from these three classes will be used to show the adequacy of the distributions presented in Section 2: 26,681 (15,886 and 65,650, respectively) pixels from crops (forest and urban area, respectively) are marked with 'C' ('F' and 'U', respectively) in the original area. Samples of this size are not uncommon in image analysis applications, where users are allowed to pick as many convenient samples as possible and where high resolution sensors make huge amounts of data available. When dealing with filtering and other image processing techniques, however, algorithms are applied over as few observations as possible, in order to reduce the blurring effect. This paper is concerned with problems that arise with the use of small samples.


Table 1
Sample sizes, estimated parameters and p-value of the χ² test (in brackets) for the three considered distributions

  Land use     Sample size   Γ^(1/2)(1, 1/β̂²)     K_A(α̂, λ̂, 1)                 G^0_A(α̂, γ̂, 1)
  Crops        26,681        91,156.5 [0.47]      (31.60, 3.46×10⁻⁴) [0.47]    (−24.10, 2.12×10⁶) [0.50]
  Forest       15,886        85,550.6 [0.00]      (3.83, 4.47×10⁻⁵) [0.84]     (−4.02, 2.63×10⁵) [0.66]
  Urban area   65,650        271,205.0 [0.00]     (0.36, 1.34×10⁻⁶) [0.00]     (−1.20, 7.65×10⁴) [0.43]

Three laws were fitted to these datasets, namely the Γ^(1/2), K_A and G^0_A distributions, with the following procedure:

(a) Estimate the parameters of each distribution using the complete dataset, i.e., using z_A = {z_ij : (i, j) ∈ A}, where A is the set of coordinates of interest for each type of area (pasture, forest and urban spots).

(b) Subsample each dataset regularly on a one-every-nine basis, i.e., consider {z_ij : (i, j) ∈ A, i mod 3 = j mod 3 = 0}, in order to reduce the effect of spatial correlation.

(c) Perform the χ² test over the subsampled data.

The parameters, estimated using the methods presented in Frery et al. (1997), are presented in Table 1, along with the p-value of the χ² test in square brackets. The result of fitting those distributions to the crops (forest, urban, respectively) data is shown in Fig. 4 (5, 6, respectively). From these figures and from the results in Table 1 we see that:

(i) For homogeneous areas all three distributions perform well (note that the differences between the fitted densities are barely visible in Fig. 4). The roughness parameters α of both the K_A and G^0_A distributions are large in absolute value, thus suggesting a good approximation by the Γ^(1/2) distribution.

(ii) For heterogeneous areas (Fig. 5) the Γ^(1/2) distribution yields a poorer fit than those provided by the other two distributions.

(iii) When the area becomes extremely heterogeneous, as is the case of the urban area data presented in Fig. 6, only the G^0_A distribution is capable of providing a good description of the observed values.

These conclusions, along with the fact that the G^0_A distribution does not require the evaluation of Bessel functions, illustrate why this distribution was proposed as a more general and useful model than the K_A distribution for the fitting of SAR data. It is therefore of paramount interest to develop precise inference techniques for it, affordable from both the analytical and computational viewpoints.

Inference on the value of the parameter α may be used as support for costly, time-consuming and strategic decisions. This value can be used as an indicator of a flooded area, of (possibly illegal) deforestation, or to distinguish between residential and industrial targets.


Fig. 4. Histogram of crops data, along with three fitted densities: Γ^(1/2), K_A and G^0_A (dash, long dash, solid lines, respectively).

4. Likelihood inference

The maximum likelihood estimator (MLE) enjoys a number of optimality properties, such as consistency, asymptotic normality, asymptotic efficiency, and invariance; see, e.g., Bickel and Doksum (2001) for details. However, the MLE can be quite biased when the sample size is small or moderate. Bias is a systematic error, and hence it is useful to develop methods that can reduce the bias of maximum likelihood estimates. This will be done in the next section.

Let z_1, …, z_N be samples from independent and identically distributed random variables, each following a G^0_A(α, γ, n) distribution. The resulting likelihood function is

  L(α, γ; z) = ((2 n^n)^N Γ(n − α)^N / (γ^(αN) Γ(n)^N Γ(−α)^N)) ∏_{i=1}^N z_i^(2n−1) / (γ + n z_i²)^(n−α),

the log-likelihood function thus being

  ℓ(α, γ; z) = N log(2 n^n) + N log Γ(n − α) − Nα log γ + (2n − 1) Σ_{i=1}^N log z_i
               − N log Γ(n) − N log Γ(−α) − (n − α) Σ_{i=1}^N log(γ + n z_i²).


Fig. 5. Histogram of forest data, along with three fitted densities: Γ^(1/2), K_A and G^0_A (dash, long dash, solid lines, respectively).

Thus, the MLEs of α and γ are obtained by solving

  N[ψ(−α̂) − ψ(n − α̂)] + Σ_{i=1}^N log((γ̂ + n z_i²)/γ̂) = 0,

  −N α̂/γ̂ − (n − α̂) Σ_{i=1}^N 1/(γ̂ + n z_i²) = 0,

where ψ(·) is the digamma function. The MLEs do not have closed-form expressions, and it is necessary to use a nonlinear iterative algorithm to obtain the maximum likelihood estimates, α̂ and γ̂. In what follows, we shall use the quasi-Newton method known as BFGS (with analytical first derivatives), which is 'generally regarded as the best performing method' (Mittelhammer et al., 2000, p. 199).

Interval estimation based on the MLE is performed using the asymptotic normality of the MLE. Let θ̂ denote the MLE of θ obtained from independent and identically distributed variates X_1, …, X_N. Under mild regularity conditions, N^(1/2)(θ̂ − θ) →_D N_p(0, k(θ)^(−1)) as N → ∞, where k(θ) denotes Fisher's information for one observation and →_D denotes convergence in distribution (or law); see, e.g., Serfling (1980, Chapter 4). That is, N^(1/2)(θ̂_r − θ_r) →_D N(0, k(θ)^(rr)) as N → ∞, where θ_r and θ̂_r are the rth elements of the vectors θ and θ̂, respectively, and k(θ)^(rr) is the


Fig. 6. Histogram of urban data, along with three fitted densities: Γ^(1/2), K_A and G^0_A (dash, long dash, solid lines, respectively).

rth diagonal element of k(θ)^(−1). The quantity T = N^(1/2){k(θ)^(rr)}^(−1/2)(θ̂_r − θ_r) is thus asymptotically pivotal: T →_D N(0, 1) as N → ∞.

It is possible to show, after some algebra, that for the G^0_A distribution Fisher's information and its inverse are given, respectively, by

  k(α, γ) = [ ψ′(−α) − ψ′(n − α)    n/(γ(n − α))
              n/(γ(n − α))          −αn/(γ²(n − α + 1)) ]

and

  k(α, γ)^(−1) = (1/|k(α, γ)|) [ −αn/(γ²(n − α + 1))    −n/(γ(n − α))
                                 −n/(γ(n − α))          ψ′(−α) − ψ′(n − α) ],

where ψ′(·) is the trigamma function and |k(α, γ)| denotes the determinant of k(α, γ).
Let 0 < η < 1/2. It then follows that the 100(1 − 2η)% asymptotic confidence interval for α is

  [ α̂ − z^(1−η) N^(−1/2) (−α̂ n / (γ̂²(n − α̂ + 1)|k(α̂, γ̂)|))^(1/2),
    α̂ + z^(1−η) N^(−1/2) (−α̂ n / (γ̂²(n − α̂ + 1)|k(α̂, γ̂)|))^(1/2) ].


Similarly, the 100(1 − 2η)% asymptotic confidence interval for γ is

  [ γ̂ − z^(1−η) N^(−1/2) ((ψ′(−α̂) − ψ′(n − α̂)) / |k(α̂, γ̂)|)^(1/2),
    γ̂ + z^(1−η) N^(−1/2) ((ψ′(−α̂) − ψ′(n − α̂)) / |k(α̂, γ̂)|)^(1/2) ],

where z^(1−η) is the 1 − η upper point from the standard normal distribution.
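A compact numerical sketch of these intervals (Python; the function names are ours, and the trigamma function ψ′ is obtained as polygamma(1, ·)):

```python
import numpy as np
from scipy.special import polygamma
from scipy.stats import norm

def fisher_info(alpha, gamma, n):
    """Per-observation Fisher information k(alpha, gamma) given above."""
    k11 = polygamma(1, -alpha) - polygamma(1, n - alpha)
    k12 = n / (gamma * (n - alpha))
    k22 = -alpha * n / (gamma ** 2 * (n - alpha + 1.0))
    return np.array([[k11, k12], [k12, k22]])

def asymptotic_ci(alpha_hat, gamma_hat, n, N, eta=0.05):
    """100(1 - 2*eta)% asymptotic intervals for alpha and gamma."""
    kinv = np.linalg.inv(fisher_info(alpha_hat, gamma_hat, n))
    zq = norm.ppf(1.0 - eta)                 # 1 - eta standard normal quantile
    half = zq * np.sqrt(np.diag(kinv) / N)   # half-widths for (alpha, gamma)
    return ((alpha_hat - half[0], alpha_hat + half[0]),
            (gamma_hat - half[1], gamma_hat + half[1]))

# illustrative values matching the single-look, N = 49 setting of Section 6
ci_alpha, ci_gamma = asymptotic_ci(-1.0, 0.405, n=1, N=49)
print(ci_alpha, ci_gamma)
```

Inverting the full 2 × 2 information matrix (rather than reciprocating its diagonal) is what produces the |k(α, γ)| determinant appearing in the closed-form intervals above.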

5. Bootstrap methods

This section presents different bootstrapping schemes that can be used to reduce the bias of the MLE. Let X be a random sample of size N where each entry represents a random draw from the random variable X that has distribution function F = F(θ). Here, θ is the parameter that indexes the distribution and is viewed as a functional of F, i.e., θ = t(F). Let θ̂ be an estimator of θ obtained using the random sample above, θ̂ = s(X).

The idea behind the bootstrap method proposed by Efron (1979) is to use resampling to obtain additional pseudo-samples, and then to extract information from these samples to improve inference. We can broadly classify bootstrap methods into two classes, depending on how the sampling is performed: parametric and nonparametric. In the parametric version, the bootstrap samples are obtained from F(θ̂), whereas in the nonparametric version they are obtained from the empirical distribution function (F̂) through sampling with replacement. Note that the nonparametric bootstrap does not entail parametric assumptions.

Suppose we have R bootstrap samples x* = (x*_1, …, x*_N), and hence R estimates s(x*). We then have R bootstrap estimates of θ, denoted θ̂*i, i = 1, …, R. The R bootstrap replications of θ̂ are then used to estimate the distribution function of θ̂ and its associated features and moments. Our interest lies in obtaining an estimate for the bias of the MLE, so that we can use it to define a bias-corrected estimator. Let B_F(θ̂, θ) denote the bias of an estimator θ̂ = s(X), that is,

  B_F(θ̂, θ) = E_F[θ̂ − θ] = E_F[s(X)] − θ = E_F[s(X)] − t(F),

where the statistic s(X) is a function of independent and identically distributed random variables, each having distribution function F. The parametric and nonparametric bootstrap bias estimates are obtained by replacing F in the expression above by F_θ̂ and F̂, respectively, thus yielding

  B_{F_θ̂}(θ̂, θ) = E_{F_θ̂}[s(X*)] − t(F_θ̂)

and

  B_F̂(θ̂, θ) = E_F̂[s(X*)] − t(F̂).

The idea is then to approximate E_{F_θ̂}[s(X*)] and E_F̂[s(X*)] using θ̂*(·) = R^(−1) Σ_{i=1}^R θ̂*i. Therefore, the parametric and nonparametric bootstrap estimates for the bias of θ̂ are,


respectively,

  B̂_{F_θ̂}(θ̂, θ) = θ̂*(·) − s(x)   and   B̂_F̂(θ̂, θ) = θ̂*(·) − s(x),

where x denotes the original sample and s(x) estimates the functionals t(F_θ̂) and t(F̂).

A better bootstrap bias estimate was proposed by Efron (1990). This nonparametric method uses an auxiliary vector known as the resampling vector, which records the proportions of the original observations x = (x_1, x_2, …, x_N) present in the bootstrap sample. We denote the resampling vector by P* = (P*_1, P*_2, …, P*_N). Its components P*_j, j = 1, …, N, are defined with respect to a given bootstrap sample x* = (x*_1, x*_2, …, x*_N) as P*_j = N^(−1) #{x*_k = x_j}, where k = 1, …, N. The vector P⁰ = (1/N, …, 1/N) corresponds to the original sample.

Note that a bootstrap replication θ̂* can be defined as a function of the resampling vector. For instance, if θ̂ = s(X) = X̄, then

  θ̂* = (x*_1 + x*_2 + ··· + x*_N)/N
      = (#{x*_k = x_1}x_1 + #{x*_k = x_2}x_2 + ··· + #{x*_k = x_N}x_N)/N
      = ((P*_1 N)x_1 + (P*_2 N)x_2 + ··· + (P*_N N)x_N)/N = P* x^T.

Also, since this bootstrap method is nonparametric by nature, the original sample x is fixed. Suppose we can write the estimate of interest, obtained from x, as T(P⁰). We can then obtain bootstrap estimates θ̂*i using the resampling vectors P*i, i = 1, …, R, as T(P*i). The new bootstrap bias estimate, say B̄_F̂, is defined as (Efron, 1990)

  B̄_F̂(θ̂, θ) = θ̂*(·) − T(P*(·)),   where   P*(·) = R^(−1) Σ_{i=1}^R P*i,

which differs from B̂_F̂(θ̂, θ), since B̂_F̂(θ̂, θ) = θ̂*(·) − T(P⁰).

By using the three bootstrap bias estimates discussed above, we arrive at three bias-corrected estimators, namely:

  θ̄_1 = s(x) − B̂_F̂(θ̂, θ) = 2s(x) − θ̂*(·),
  θ̄_2 = s(x) − B̄_F̂(θ̂, θ) = s(x) − θ̂*(·) + T(P*(·)),
  θ̄_3 = s(x) − B̂_{F_θ̂}(θ̂, θ) = 2s(x) − θ̂*(·),

where the corrected estimates θ̄_1 and θ̄_3 are called constant bias correcting (CBC) by MacKinnon and Smith (1998).

Note, however, that the estimator θ̄_2 cannot be computed directly for the G^0_A distribution. As proposed by Efron (1990), the bias-corrected estimator θ̄_2 requires the original estimator θ̂ to have closed form and, as noted before, the MLE of θ = (α, γ) for the G^0_A distribution does not have closed form. This difficulty is common to other distributions, such as the gamma and beta distributions, and is unrelated to the numerical difficulties of finding an estimate. To circumvent this problem, we shall now propose an adaptation of Efron's method so that it can be applied to estimators that do not have closed form. We propose to use the resampling vector to adjust the log-likelihood function,


and then maximize the adjusted likelihood. The basic idea is to write the log-likelihood function in terms of P⁰, replace this quantity by P*(·), and then maximize the resulting modified log-likelihood function. The advantage of the proposed approach is that it does not require the original estimator to have closed form.

Let z_1, …, z_N denote a random sample of size N from the G^0_A distribution. The log-likelihood function can then be written in terms of P⁰ as

  ℓ(α, γ; z) = N log(2 n^n) + N log Γ(n − α) − Nα log γ + (2n − 1) N P⁰ (log z)^T
               − N log Γ(n) − N log Γ(−α) − (n − α) N P⁰ (log(γ + nz²))^T,

where log z = (log z_1, log z_2, …, log z_N) and log(γ + nz²) = (log(γ + nz_1²), log(γ + nz_2²), …, log(γ + nz_N²)). Then, one replaces P⁰ by P*(·), after obtaining P*(·) from a nonparametric bootstrapping scheme that uses R replications, and maximizes the transformed log-likelihood function. That is, we maximize

  ℓ*(α, γ; z) = N log(2 n^n) + N log Γ(n − α) − Nα log γ + (2n − 1) N P*(·) (log z)^T
                − N log Γ(n) − N log Γ(−α) − (n − α) N P*(·) (log(γ + nz²))^T,

instead of maximizing the original log-likelihood function. The approach we propose here is expected to deliver accurate point estimates. Indeed, we shall see in Section 6 that this is the case.

MacKinnon and Smith (1998) argue that the estimators θ̄_1 and θ̄_3, which they call CBC, are designed to work well when the bias function B(θ) is flat, i.e., does not depend on θ. They also consider the situation where the bias function is linear, i.e., B(θ) = a + cθ. The estimation of the bias function now involves the estimation of two parameters, namely a and c. We then need to obtain estimates of two points on the bias line, and use these points to obtain our estimates of a and c.

The procedure can be summarized as follows. Using the initial sample x, we compute our estimate θ̂ = s(x). We then use parametric bootstrap resampling to obtain a bootstrap bias estimate, say B̂, as θ̂*(·) − θ̂. As for the second estimate, we can use the bias-adjusted version of θ̂, say θ̃. Using a second parametric bootstrapping scheme, we obtain an estimate of the bias of this estimator, say B̃, as θ̃*(·) − θ̃, where θ̃*(·) is the average of the bootstrap replications of θ̃. Finally, using the point estimates θ̂ and θ̃, and their respective estimated biases, B̂ and B̃, we solve the following system of two simultaneous equations:

  B̂ = ȧ + ċ θ̂   and   B̃ = ȧ + ċ θ̃,

whose solution is

  ȧ = B̂ − ((B̂ − B̃)/(θ̂ − θ̃)) θ̂   and   ċ = (B̂ − B̃)/(θ̂ − θ̃).

We then arrive at the following bias-corrected estimator, known as linear bias correcting (LBC) and denoted θ̄_4:

  θ̄_4 = (θ̂ − ȧ)/(1 + ċ).
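The resampling-vector scheme proposed above can be sketched as follows (Python; an illustration under our own naming, with a log-parameterization of the optimizer that is not part of the paper's description). The helper `fit` maximizes the log-likelihood written as N P · (log terms), so that passing P⁰ gives the usual MLE and passing P*(·) gives T(P*(·)); the corrected estimate θ̄_2 then follows:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln
from scipy.stats import f as fdist

def weighted_negloglik(params, z, n, p):
    """Negative G^0_A log-likelihood with sums written as N * p . (log terms);
    p = (1/N, ..., 1/N) recovers the ordinary log-likelihood."""
    q = np.clip(params, -30.0, 30.0)   # guard against overflow in the search
    alpha, gamma = -np.exp(q[0]), np.exp(q[1])
    N = len(z)
    return -(N * (np.log(2.0) + n * np.log(n)) + N * gammaln(n - alpha)
             - N * alpha * np.log(gamma)
             + (2 * n - 1) * N * np.dot(p, np.log(z))
             - N * gammaln(n) - N * gammaln(-alpha)
             - (n - alpha) * N * np.dot(p, np.log(gamma + n * z ** 2)))

def fit(z, n, p=None):
    """Maximize the (possibly P-adjusted) log-likelihood; returns (alpha, gamma)."""
    N = len(z)
    p = np.full(N, 1.0 / N) if p is None else p
    res = minimize(weighted_negloglik, [0.0, 0.0], args=(z, n, p), method="BFGS")
    q = np.clip(res.x, -30.0, 30.0)
    return np.array([-np.exp(q[0]), np.exp(q[1])])

def theta_bar2(z, n, R=100, rng=None):
    """s(x) - theta*(.) + T(P*(.)): the proposed adaptation of Efron (1990)."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(z)
    theta_hat = fit(z, n)
    boot = np.zeros((R, 2))
    p_bar = np.zeros(N)
    for i in range(R):
        idx = rng.integers(0, N, size=N)                  # nonparametric resample
        boot[i] = fit(z[idx], n)
        p_bar += np.bincount(idx, minlength=N) / (N * R)  # running mean of P*i
    return theta_hat - boot.mean(axis=0) + fit(z, n, p_bar)

# demonstration on a simulated G^0_A(-2, 1, 1) sample of size 60, via Eq. (3)
rng = np.random.default_rng(5)
z = np.sqrt(0.5 * fdist.ppf(rng.uniform(size=60), 2, 4))
theta2 = theta_bar2(z, 1, R=100, rng=rng)
print(theta2)
```

Note that no closed-form estimator is needed anywhere: T(P*(·)) is obtained by one extra numerical maximization of the P*(·)-weighted log-likelihood.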


The variance of the LBC estimator θ̄_4 is a function of the variance of θ̂: var(θ̄_4) = (1 + ċ)^(−2) var(θ̂). If the value of ċ belongs to the set A = (−2, 0)\{−1}, then the variance of θ̄_4 will exceed that of θ̂.

The bootstrap can also be used to construct interval estimates. At the outset, consider using the R bootstrap point replications θ̂*i, i = 1, …, R, to estimate the distribution function of θ̂. This is done using the empirical distribution function of the bootstrap replications θ̂*i, denoted here as Ĝ. The percentile interval with coverage 1 − 2η, 0 < η < 1/2, is obtained from the corresponding percentiles of Ĝ, i.e., [Ĝ^(−1)(η), Ĝ^(−1)(1 − η)]. Note that this interval will not include values outside the parameter space, which can happen with the standard asymptotic confidence interval based on the MLE.

Efron (1981, 1982) introduced a different bootstrap interval estimator, known as 'bias corrected' (BC). A related method for obtaining bootstrap confidence intervals is known as 'bias corrected and accelerated' (BCa). The BCa confidence interval with coverage 1 − 2η is given by

  [Ĝ^(−1)(Φ(z[η])), Ĝ^(−1)(Φ(z[1 − η]))],

where

  z[η] = ẑ_0 + (ẑ_0 + z^(η))/(1 − a(ẑ_0 + z^(η))),   z[1 − η] = ẑ_0 + (ẑ_0 + z^(1−η))/(1 − a(ẑ_0 + z^(1−η)))

and Φ(·) denotes the distribution function of a standard normal variate. The quantities z_0 and a are estimated by

  ẑ_0 = Φ^(−1)(#{θ̂*i < θ̂}/R)   and   â = Σ_{i=1}^N (θ̂_(·) − θ̂_(i))³ / (6 {Σ_{i=1}^N (θ̂_(·) − θ̂_(i))²}^(3/2)),

respectively, where θ̂_(i) denotes the jackknife estimate obtained when the ith observation is deleted from the sample and θ̂_(·) is the average of the θ̂_(i).

Similar to the percentile method, the lower and upper limits of the 100(1 − 2η)% BCa confidence interval are obtained as percentiles of Ĝ, but now they correspond to the η_1 and η_2 percentiles, where

  η_1 = Φ(ẑ_0 + (ẑ_0 + z^(η))/(1 − â(ẑ_0 + z^(η))))   and   η_2 = Φ(ẑ_0 + (ẑ_0 + z^(1−η))/(1 − â(ẑ_0 + z^(1−η)))).

For details on bootstrap methods and their application, the reader is referred to Davison and Hinkley (1997) and Efron and Tibshirani (1993).

6. Numerical results

The simulation of independent GA0(α, γ, n) deviates can be performed using Eq. (3) and applying the inversion method, i.e., generating a sequence u₁, …, u_m of observations of independent identically distributed U(0, 1) random variables and using

z_i = (−γ F⁻¹_{2n,−2α}(u_i) / α)^{1/2},   i = 1, …, m,

where F⁻¹_{2n,−2α} denotes the inverse cumulative distribution function of the F distribution with 2n and −2α degrees of freedom.
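The inversion step just described can be sketched as follows. This is a minimal sketch under our reading of the (partly garbled) formula, namely z_i = (−γ F⁻¹_{2n,−2α}(u_i)/α)^{1/2}; the function name rga0 is ours, and SciPy supplies the F inverse CDF.

```python
import numpy as np
from scipy.stats import f

def rga0(m, alpha, gamma, n, seed=None):
    """Draw m GA0(alpha, gamma, n) amplitude deviates by inversion.

    alpha < 0 (roughness), gamma > 0 (scale), n = number of looks.
    Assumes z = sqrt(-gamma * F^{-1}_{2n, -2*alpha}(u) / alpha), u ~ U(0, 1)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=m)
    w = f.ppf(u, 2 * n, -2 * alpha)  # inverse CDF of F with (2n, -2*alpha) d.f.
    return np.sqrt(-gamma * w / alpha)

# a 7 x 7 window (N = 49) with alpha = -1, gamma = 0.405, one look
z = rga0(49, alpha=-1.0, gamma=0.405, n=1, seed=42)
```

Note that the quality of the deviates hinges on the accuracy of `f.ppf`, which is precisely the "reliable implementation of the inverse cumulative distribution function of F distributions" that the text calls for.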

This method requires having a reliable


Table 2
Results for n = 1 and N = 49

Parameter    Estimator   Mean      Bias      Variance   MSE      Bootstrap
α = −1       α̂          −1.183    −0.183    0.334      0.368    —
             α̌₁         −0.967     0.033    0.376      0.377    NP
             α̌₂         −0.936     0.064    0.182      0.186    NP
             α̌₃         −0.933     0.067    0.333      0.337    P
             α̌₄         −1.043    −0.043    3.866      3.868    P
γ = 0.405    γ̂           0.543     0.138    0.172      0.191    —
             γ̌₁          0.370    −0.035    0.180      0.182    NP
             γ̌₂          0.346    −0.059    0.082      0.085    NP
             γ̌₃          0.349    −0.056    0.157      0.161    P
             γ̌₄          0.442     0.037    2.347      2.348    P

implementation of the inverse cumulative distribution function of F distributions. The uniform random number generator we used is a multiply-with-carry generator proposed by George Marsaglia in 1997, which has period approximately equal to 2⁶⁰ (Marsaglia, 1997) and passes stringent randomness tests.

The Monte Carlo simulation experiment involved the standard maximum likelihood estimators, α̂ and γ̂, and also the following bias-adjusted estimators: α̌₁ and γ̌₁ (nonparametric CBC), α̌₂ and γ̌₂ (which use the resampling vector P*(·)), α̌₃ and γ̌₃ (parametric CBC), and α̌₄ and γ̌₄ (LBC). The number of Monte Carlo replications was set at 5000 (five thousand) and the number of bootstrap replications at 2000 (two thousand). Likelihood maximization was performed using the quasi-Newton method of Broyden, Fletcher, Goldfarb and Shanno (generally referred to as 'the BFGS method'); see Press et al. (1992, pp. 425–430). Note that the simulations are very computer-intensive since they involve millions of nonlinear maximizations. All simulations were performed using the object-oriented matrix programming language Ox (Doornik, 2001). In the tables below, 'NP' denotes nonparametric bootstrap and 'P' denotes parametric bootstrap. The sample sizes considered were N = 49 and 121 corresponding, respectively, to 7 × 7 and 11 × 11 windows. The tables contain the true parameter values, mean estimates, bias, variance, and mean squared error (which is the sum of the squared bias and the variance).

Table 2 relates to the case where N = 49 and n = 1, the parameter values being α = −1 and γ = 0.405. The value of γ was set, as before, so that the resulting distribution has mean equal to one. The figures in this table show that the bias-adjusted estimators displayed smaller bias (in absolute value) than the standard MLEs. For instance, the bias of α̌₁ was less than one-fifth (in absolute value) of that of α̂, the standard MLE of α. Also, the absolute biases of γ̌₁ and γ̌₄ represent approximately one-fourth of that of γ̂, the standard MLE of γ.
However, it is important to note that the only estimators to substantially reduce both the bias and the variance were α̌₂ and γ̌₂, i.e., the estimators that use the information contained in the resampling vector P*(·). These estimators thus deliver sizeable reductions in the mean squared error, which is a measure that balances both


Table 3
Results for n = 3 and N = 49

Parameter    Estimator   Mean      Bias      Variance   MSE      Bootstrap
α = −1       α̂          −1.082    −0.082    0.077      0.084    —
             α̌₁         −0.985     0.015    0.054      0.054    NP
             α̌₂         −0.983     0.017    0.054      0.054    NP
             α̌₃         −0.982     0.018    0.051      0.051    P
             α̌₄         −0.997     0.003    0.057      0.057    P
γ = 0.346    γ̂           0.392     0.046    0.022      0.024    —
             γ̌₁          0.337    −0.009    0.014      0.015    NP
             γ̌₂          0.336    −0.010    0.014      0.015    NP
             γ̌₃          0.336    −0.010    0.014      0.014    P
             γ̌₄          0.344    −0.002    0.016      0.016    P

Table 4
Results for n = 1 and N = 49

Parameter    Estimator   Mean      Bias      Variance   MSE        Bootstrap
α = −3       α̂          −3.522    −0.522    3.419      3.692      —
             α̌₁         −3.546    −0.546    7.358      7.656      NP
             α̌₂         −2.742     0.258    2.611      2.677      NP
             α̌₃         −3.523    −0.523    8.138      8.412      P
             α̌₄         −7.125    −4.125    377.469    394.481    P
γ = 2.882    γ̂           3.528     0.646    5.193      5.610      —
             γ̌₁          3.557     0.675    11.180     11.635     NP
             γ̌₂          2.555    −0.327    3.908      4.015      NP
             γ̌₃          3.535     0.653    12.229     12.656     P
             γ̌₄          8.145     5.263    699.150    726.844    P

bias and variance. We see in Table 2 that the MSE of γ̌₂ is nearly half of that of γ̂, the MLE of γ. It is also noteworthy that the LBC estimator leads to large variance inflation, thus resulting in large MSEs when N = 49. Although we do not present results for N = 121, we note that in this case the performances of the different bias-corrected estimators become more similar, and they all outperform the standard MLEs both in terms of smaller absolute biases and in terms of smaller variances. Here, the LBC estimators are the ones to achieve the most significant bias reductions.

Table 3 reports results for the case where α = −1, γ = 0.346, n = 3 (that is, the number of looks is increased to 3), and N = 49. We see that the biases of the different estimators, including the MLEs, are reduced relative to the case where n = 1, an expected feature of multilook processing. All corrected estimators outperformed the standard MLEs both in terms of absolute bias and in terms of variance, thus achieving smaller MSEs.

Tables 4 and 5 relate to the situation where α = −3, γ = 2.882 and n = 1, corresponding to N = 49 and N = 121, respectively. Consider the 7 × 7 window case (Table 4). We see


Table 5
Results for n = 1 and N = 121

Parameter    Estimator   Mean      Bias      Variance   MSE       Bootstrap
α = −3       α̂          −3.488    −0.488    2.257      2.495     —
             α̌₁         −3.329    −0.329    4.035      4.143     NP
             α̌₂         −2.949     0.051    1.809      1.811     NP
             α̌₃         −3.283    −0.283    4.091      4.171     P
             α̌₄         −4.071    −1.071    36.938     38.086    P
γ = 2.882    γ̂           3.489     0.607    3.527      3.896     —
             γ̌₁          3.296     0.414    6.370      6.541     NP
             γ̌₂          2.818    −0.064    2.805      2.809     NP
             γ̌₃          3.243     0.361    6.416      6.546     P
             γ̌₄          4.371     1.489    76.794     79.010    P

Table 6
Results for n = 3 and N = 49

Parameter    Estimator   Mean      Bias      Variance   MSE       Bootstrap
α = −3       α̂          −3.609    −0.609    2.971      3.341     —
             α̌₁         −3.073    −0.073    3.716      3.721     NP
             α̌₂         −2.860     0.140    1.850      1.869     NP
             α̌₃         −3.031    −0.031    4.003      4.004     P
             α̌₄         −3.416    −0.416    25.899     26.071    P
γ = 2.459    γ̂           3.079     0.620    3.109      3.493     —
             γ̌₁          2.539     0.080    3.904      3.910     NP
             γ̌₂          2.319    −0.140    1.893      1.913     NP
             γ̌₃          2.498     0.039    4.200      4.202     P
             γ̌₄          2.919     0.460    32.434     32.645    P

that the MLEs of α and γ are seriously biased. For example, the absolute relative bias of α̂ is 0.522/3 = 17.4%, similar to the situation where α = −1 (Table 2), in which case we had 0.183/1 = 18.3%. However, here we note that the only bias-adjusted estimators of α and γ to achieve bias reduction were the ones that use the resampling vectors, namely α̌₂ and γ̌₂. (Recall that these estimators are obtained by maximizing a modified log-likelihood function, as proposed in Section 5.) These estimators are also the only ones to achieve variance reduction. For example, the variance of α̌₂ was more than 20% smaller than that of α̂, the MLE of α. The LBC estimators, in particular, have a very poor showing, both in terms of bias and variance.

Consider now the case of 11 × 11 windows (Table 5). The decrease in the bias of the MLEs was small, despite the increase in the number of observations. The corrected estimators α̌₂ and γ̌₂ again outperformed all others in all criteria (bias, variance, MSE). The biases of these estimators decreased substantially as the number of observations increased.

The results for the situation where α = −3, γ = 2.459 and n = 3 (three looks) for N = 49 and N = 121 are presented in Tables 6 and 7, respectively. Again, the increase in the


Table 7
Results for n = 3 and N = 121

Parameter    Estimator   Mean      Bias      Variance   MSE      Bootstrap
α = −3       α̂          −3.239    −0.239    0.749      0.806    —
             α̌₁         −2.977     0.023    0.598      0.599    NP
             α̌₂         −2.960     0.040    0.518      0.519    NP
             α̌₃         −2.959     0.041    0.573      0.575    P
             α̌₄         −2.996     0.004    0.610      0.610    P
γ = 2.459    γ̂           2.703     0.244    0.760      0.819    —
             γ̌₁          2.437    −0.022    0.603      0.604    NP
             γ̌₂          2.421    −0.038    0.520      0.521    NP
             γ̌₃          2.419    −0.040    0.578      0.580    P
             γ̌₄          2.458    −0.001    0.617      0.617    P

number of looks leads to better finite-sample estimation performance. The estimators α̌₂ and γ̌₂ outperformed all others in terms of MSE. However, the parametric and nonparametric CBC estimators did achieve slightly larger bias reductions.

The main conclusion that can be drawn from the results displayed in Tables 2–7 is that the most reliable estimators of the parameters that index the GA0 distribution in speckled data processing are the ones we proposed in Section 5, i.e., the ones obtained by modifying the log-likelihood function using the resampling vector from a nonparametric bootstrapping scheme prior to maximizing it.

We now move to interval estimation, and our interest lies in the evaluation of the finite-sample performance of the standard asymptotic interval obtained from the standard MLE, the bootstrap percentile interval, and the BCa bootstrap interval. We consider both the parametric and nonparametric versions of the two bootstrap confidence intervals. (All confidence intervals are two-sided.)

Fig. 7 displays the results for N = 49 (7 × 7 window), α = −1, γ = 0.405, and n = 1 (one look). All confidence intervals are two-sided and for α. Fig. 7 also presents a histogram of the Monte Carlo realizations of the MLE of α such that α̂ > −6. This truncation is performed in order to enhance the visualization of the data, and discards only nine out of 5000 observations. It is noteworthy, though, that the smallest estimate was α̂ = −12.64. For each confidence level (0.90, 0.95 and 0.99) we have a set of five vertical bars corresponding to the following interval estimators: asymptotic (solid line), nonparametric percentile (dots), nonparametric BCa (dashes), parametric percentile (dot–dash), and parametric BCa (dot–dot–dot–dash). The lines are drawn from the lower limit to the upper limit of the interval, and the numbers above (below) each bar indicate the percentage of the time that the true parameter was below (above) the lower (upper) limit of the confidence intervals.
We see that the length of the asymptotic confidence interval is smaller than those of the bootstrap intervals. As for symmetry, the asymptotic interval is symmetric, whereas the bootstrap intervals can capture the asymmetry observed in the distribution of the MLE. (The 'x' in the asymptotic intervals denotes the mean of all Monte Carlo realizations of the α̂'s.) We note that the bootstrap


Fig. 7. Interval estimation results for n = 1 and N = 49.

intervals turned out to be asymmetric, with their upper limits being closer to the ML estimate than their lower limits. When the evaluating criterion is coverage probability, the best performing method is the parametric BCa. For example, when the nominal coverage was 90%, the BCa interval included the true parameter value 89.32% of the time. The coverage probability of the asymptotic interval is also close to the nominal coverage probability. However, we see that the asymptotic interval concentrates all of the noncoverage probability in one tail of the distribution, namely the lower tail. This is because the distribution of the MLE is highly skewed when N is small, and the asymptotic interval does not capture that. The bootstrap intervals deliver more balanced two-sided intervals in the sense that the noncoverage probability is better divided between the two tails of the distribution.

For brevity, we do not present the interval estimation results for γ. It is noteworthy, however, that when the parameter of interest is γ the asymmetry of the distribution occurs in the opposite direction, and is captured by the bootstrap confidence intervals, which are all asymmetric. Again, the best performing confidence interval as far as coverage probability goes was the one obtained from the parametric BCa method. And again the asymptotic interval concentrated all of the noncoverage probability in one tail of the distribution, this time the upper tail. Since the results for the other cases are qualitatively similar to the one described above, they are omitted.

Overall, the results suggest that BCa confidence intervals obtained from parametric bootstrapping schemes can be more accurate than standard asymptotic confidence intervals based on the asymptotic normality of the MLE.
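The coverage bookkeeping behind these comparisons (empirical coverage plus the split of the noncoverage probability between the two tails) can be sketched as follows. This is a minimal sketch: the function name coverage_summary is ours, and the Monte Carlo intervals are assumed to be supplied by the caller rather than produced by the paper's simulation code.

```python
import numpy as np

def coverage_summary(intervals, true_value):
    """Empirical coverage and the tail split of noncoverage.

    intervals: array of shape (M, 2) with (lower, upper) limits from M
    Monte Carlo replications. Returns (coverage, below_lower, above_upper)
    as percentages, so that below_lower + above_upper = 100 - coverage."""
    lo, hi = np.asarray(intervals, dtype=float).T
    below = 100.0 * (true_value < lo).mean()  # true value under the lower limit
    above = 100.0 * (true_value > hi).mean()  # true value over the upper limit
    return 100.0 - below - above, below, above
```

A well-balanced two-sided interval splits the noncoverage roughly evenly between the two tails, whereas the behaviour described above for the asymptotic interval would show essentially all of it on one side.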


Fig. 8. Estimated roughness over an urban area with ML (dashes) and corrected ML (solid line).

7. Application to in-flight analysis

As an application, two slices of the original image shown in Fig. 3 were analyzed using both the ML and the corrected ML estimators. (For the analysis, we used the


simplest corrected estimator, namely the nonparametric CBC estimator.) These slices were taken from in-flight data, indicated by the white arrow in Fig. 3, and they correspond to an urban area and to a forest region. The results of the estimated roughness for these slices, using a sliding window of 11 × 11 pixels, are presented in Figs. 8 and 9, respectively. These figures show both the analysis of the data (top) and the data themselves (bottom), as a dark strip over a light background. Each pixel corresponds to, approximately, 1 m² on the ground.

If only ML estimation were used (dashes in Fig. 8), the analysis of the urban data would lead to the conclusion that amongst the extremely heterogeneous targets (with values ranging in the [−5, −1] interval, approximately) there is a heterogeneous spot, the one corresponding to the value −9. The corrected estimator, on the other hand, suggests the existence of several heterogeneous spots and of a very homogeneous area. The ground truth suggests that the latter analysis is the correct one, since the data correspond to small houses separated by gardens with trees (heterogeneous targets) and grass (homogeneous targets).

The forest slice was taken over a closed forest, with no man-made objects, so values in the [−15, −5] interval are expected. Most of the values produced by the ML estimator (dashes in Fig. 9) over this region would lead to the erroneous conclusion that extremely heterogeneous spots are under analysis. The corrected estimator leads to more accurate conclusions, namely that heterogeneous rather than extremely heterogeneous spots are under analysis, as can be seen from the solid line. A few very homogeneous targets are also detected by the corrected estimator, which may correspond to small clearings in the forest.

8. Concluding remarks

This paper used bootstrap methods to improve the processing of images obtained with coherent illumination that are contaminated by speckle noise. We used a model that has proved to be quite useful for such images, and obtained bias-adjusted maximum likelihood estimates for the parameters that index the model using different bootstrapping schemes. We proposed a particular bootstrapping scheme, and have shown that it typically outperforms other bootstrap methods when the number of observations is small. Bootstrap-based BCa confidence intervals based on parametric resampling were also shown to outperform standard asymptotic confidence intervals based on maximum likelihood estimates. By contrasting real data, ML inference, bootstrap inference, and ground truth, we conclude that the improved estimator yields more accurate conclusions in a synthetic aperture radar image.

Acknowledgements

The partial financial support from CAPES, CNPq, FACEPE and Vitae is gratefully acknowledged. We also thank two anonymous referees for comments and suggestions.


Fig. 9. Estimated roughness over a forested region with ML (dashes) and corrected ML (solid line).

References

Barndorff-Nielsen, O.E., Blaesild, P., 1981. Hyperbolic distributions and ramifications: contributions to theory and applications. In: Taillie, C., Baldessari, B.A. (Eds.), Statistical Distributions in Scientific Work. Reidel, Dordrecht, pp. 19–44.


Bickel, P.J., Doksum, K.A., 2001. Mathematical Statistics: Basic Ideas and Selected Topics, Vol. 1, 2nd Edition. Prentice-Hall, Upper Saddle River.
Davison, A.C., Hinkley, D.V., 1997. Bootstrap Methods and their Application. Cambridge University Press, New York.
Doornik, J.A., 2001. Ox: an Object-oriented Matrix Programming Language, 4th Edition. Timberlake Consultants, London, and http://www.nuff.ox.ac.uk/Users/Doornik, Oxford.
Efron, B., 1979. Bootstrap methods: another look at the jackknife. Ann. Statist. 7, 1–26.
Efron, B., 1981. Nonparametric standard errors and confidence intervals. Canad. J. Statist. 9, 139–172.
Efron, B., 1982. The Jackknife, the Bootstrap and Other Resampling Plans. Society for Industrial and Applied Mathematics, Philadelphia.
Efron, B., 1990. More efficient bootstrap computations. J. Amer. Statist. Assoc. 85, 79–89.
Efron, B., Tibshirani, R.J., 1993. An Introduction to the Bootstrap. Chapman and Hall, New York.
Ferrari, S.L.P., Cribari-Neto, F., 1998. On bootstrap and analytical bias corrections. Econom. Lett. 58, 761–782.
Frery, A.C., Yanasse, C.C.F., Sant'Anna, S.J.S., 1995. Alternative distributions for the multiplicative model in SAR images. In: Quantitative Remote Sensing for Science and Applications, IGARSS'95 Proc., IEEE, Florence, pp. 169–171.
Frery, A.C., Müller, H.-J., Yanasse, C.C.F., Sant'Anna, S.J.S., 1997. A model for extremely heterogeneous clutter. IEEE Trans. Geosci. Remote Sensing 35, 648–659.
Goodman, J.W., 1985. Statistical Optics. Wiley, New York.
Jørgensen, B., 1982. Statistical Properties of the Generalized Inverse Gaussian Distribution. Springer, Berlin, New York.
MacKinnon, J.G., Smith Jr., A.A., 1998. Approximate bias correction in econometrics. J. Econometrics 85, 205–230.
Marsaglia, G., 1997. A random number generator for C. Discussion Paper, posting on usenet newsgroup sci.stat.math.
Mejail, M., Jacobo-Berlles, J., Frery, A.C., Bustos, O.H., 2000. Parametric roughness estimation in amplitude SAR images under the multiplicative model. Rev. de Teledet. 13, 37–49.
Mittelhammer, R.C., Judge, G.G., Miller, D.J., 2000. Econometric Foundations. Cambridge University Press, New York.
Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P., 1992. Numerical Recipes in C: The Art of Scientific Computing, 2nd Edition. Cambridge University Press, New York.
Serfling, R.J., 1980. Approximation Theorems of Mathematical Statistics. Wiley, New York.
Vasconcellos, K.L.P., Frery, A.C., 1996. Maximum likelihood fitting of extremely heterogeneous radar clutter. In: First Latin-American Seminar on Radar Remote Sensing: Image Processing Techniques (SP 407, Fev. 1997), ESA, Noordwijk, The Netherlands, pp. 97–101.
