Constrained Decentralized Estimation Over Noisy Channels for Sensor Networks

Tuncer Can Aysal, Student Member, IEEE, and Kenneth E. Barner, Senior Member, IEEE

Abstract—Decentralized estimation of a noise-corrupted source parameter by a bandwidth-constrained sensor network feeding a fusion center through noisy channels is considered. Due to bandwidth constraints, the sensors provide only binary representations of their noise-corrupted observations of the source parameter. Recently proposed decentralized estimation, distributed estimation, and power scheduling methods do not consider errors occurring during the transmission of these binary observations from the sensors to the fusion center. In this paper, we extend the decentralized estimation model to the case of imperfect transmission channels. The proposed estimator, which operates on additive-channel-noise-corrupted versions of quantized noisy sensor observations, is approached from a maximum likelihood (ML) perspective. The resulting ML estimate is a root, in the region of interest (ROI), of the derivative of a polynomial function. We analyze the natural logarithm of the polynomial within the ROI, showing that the function is log-concave and thereby indicating that numerical methods, such as Newton's algorithm, can be utilized to obtain the optimal solution. Due to complexity and implementation issues associated with numerical methods, we derive and analyze simpler suboptimal solutions, i.e., the two-stage and mean estimators. The two-stage estimator first estimates the binary observations from the noisy fusion center observations utilizing a threshold operation, followed by an estimate of the source parameter. The optimal threshold is the maximum a posteriori (MAP) detector for binary detection and minimizes the probability of binary observation estimation error. Optimal threshold expressions for commonly utilized light-tailed (Gaussian) and heavy-tailed (Cauchy) channel noise models are derived. The mean estimator simply averages the noisy fusion center observations. The output variances and asymptotic means of the proposed suboptimal estimators are derived. In addition, a computational complexity analysis is presented comparing the proposed ML optimal and suboptimal two-stage and mean estimators. Numerical examples evaluating and comparing the performance of the proposed ML, two-stage, and mean estimators are also presented.

Index Terms—Distributed estimation, maximum likelihood estimation, parameter estimation, sensor networks.

I. INTRODUCTION

The problem of decentralized estimation is studied in the context of distributed control [1]–[4], tracking [5], multirobot localization [6], data fusion [7]–[11], and, recently, wireless sensor networks (WSNs) [12]–[18].

Manuscript received January 10, 2007; revised July 27, 2007. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Steven M. Kay. This work was supported in part by the NSF under Grant 0728904. The authors are with the Electrical and Computer Engineering Department, University of Delaware, Newark, DE 19716 USA. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TSP.2007.909006

WSNs comprise a large number of geographically distributed nodes characterized by power constraints and limited computation capability. While a number of works address sensor collaboration for distributed detection [7]–[11], the challenging problem of distributed estimation has only recently received attention. In distributed estimation for WSNs, each sensor has available a subset of the observations, which must be transmitted to a central node, or fusion center. Various WSN implementations and quantizer design issues are considered in [12]–[14] and [18]–[21]. A constraint in many WSNs is that bandwidth is limited, necessitating the use and transmission of quantized binary versions of the original noisy observations. Many recent efforts address the estimation of a deterministic source signal from quantized noisy observations [12]–[17]. When the probability density function (pdf) of the sensor noise is known, transmitting a single bit per sensor leads to minimal loss in estimator variance compared with a clairvoyant estimator (an estimator based on unquantized measurements) [15], [16], [22]. Alternatively, when the sensor noise pdf is unknown, pdf-unaware estimators based on quantized sensor data have also been introduced recently [14], [16], [17].
The distributed estimation techniques considered in the previously proposed methods are based on quantized noisy sensor observations. These methods thus implicitly assume that the transmission of binary observations from the sensors to the fusion center is perfect. In this paper, we extend the distributed estimation model to admit transmission imperfections, i.e., we consider the case where the quantized noisy sensor observations are corrupted by additive noise during transmission from sensor to fusion center. Our estimator is hence based on noisy quantized versions of noisy sensor observations. Utilizing this extended WSN model, we derive the maximum likelihood (ML) estimate of a deterministic source signal. The ML optimal estimate, however, relies on polynomial root finding and selection. The presented analysis shows that the polynomial is log-concave in the region of interest, enabling the use of Newton's method to obtain the optimal solution. To further address the complexity and implementation issues of the optimal ML estimator, we propose two fast, simple, and practical suboptimal solutions. The first is the two-stage estimation technique, in which the binary sensor outputs are first estimated utilizing their corrupted versions, followed by estimation of the source signal. The binary sensor output estimation, in this case, is based on a thresholding operation where the threshold is obtained through the maximum a posteriori (MAP) detector and minimizes the probability of binary sensor output estimation error. The optimal thresholds are derived for the Gaussian and Cauchy pdfs, which are widely utilized in the literature to model light- and heavy-tailed environments, respectively.

The mean estimator is the second proposed suboptimal estimator; it simply averages the noisy observations received at the fusion center. The proposed methods are analyzed through the determination of estimator variances and computation costs, which are critical criteria in WSN applications. Finally, numerical experiments evaluating and comparing the performance and processing times of the ML, two-stage, and mean estimators are presented.
The remainder of this paper is organized as follows. The problem formulation and the extended WSN model admitting transmission noise are introduced in Section II. The estimator of a deterministic source signal utilizing the corrupted quantized noisy sensor observations is derived in Section III. Computationally attractive solutions relying on a two-stage algorithm and on averaging are presented in Section IV, along with the probability of error and optimal threshold derivations. The output variances of these estimators and a computational complexity analysis are also derived in this section. Section V details the experiments evaluating and comparing the performance of the ML, two-stage, and mean estimators. Finally, conclusions are drawn in Section VI.

II. PROBLEM FORMULATION

Consider a set of K distributed sensors, each making an observation of a deterministic source signal θ. The observations are corrupted by additive noise and are described by [12]–[18]

x(k) = θ + n(k),  k = 1, 2, ..., K.  (1)

The noise samples n(k) are assumed zero-mean, spatially uncorrelated, and independent and identically distributed (i.i.d.). Furthermore, the density function of the sensor noise is denoted by f(·; σ), where σ denotes the scale parameter of n(k). Note that this model is especially valid when the sensors are very close to each other, and their relative distances are smaller than their distances to the source, so that they observe the identical source signal θ.
Suppose a fusion center is to estimate θ based on the noisy sensor observations {x(k)}, k = 1, 2, ..., K. If the fusion center has knowledge of the sensor noise density function, and the sensors are capable of sending the observations to the fusion center without distortion, then the fusion center can simply form the ML estimate of θ, i.e., the clairvoyant estimator

θ̂_CE = arg max_θ Σ_{k=1}^{K} ln f(x(k) − θ; σ).  (2)

This scheme is only applicable in a centralized estimation situation where observations are either centrally located or can be transmitted to a central location without distortion. Neither of these requirements is realistic in a WSN, where the sensor nodes are bandwidth constrained and the communication links between the fusion center and the sensors are noisy. Constraints on sensor cost, bandwidth, and energy budget dictate that low-quality sensor observations have to be aggressively quantized [23].

Fig. 1. Decentralized, noisy-channel WSN scheme with a fusion center, where g(m(k)) = √E · m(k).

To this end, we consider the quantization operation as the construction of a set of indicator variables, i.e., binary observations [12]–[18]

b(k) = I{x(k) ∈ (τ, ∞)},  k = 1, 2, ..., K  (3)

where τ ∈ R is the threshold defining b(k), R denotes the set of real numbers, and I{·} is the indicator function. In addition, due to imperfections of the communication links between the sensor nodes and the fusion center, we further extend the model to include channel noise, utilizing binary phase-shift keying (BPSK) modulation:

y(k) = √E · m(k) + v(k),  k = 1, 2, ..., K  (4)

where m(k) = 2b(k) − 1 is the signal transmitted from the sensor output, the v(k) are assumed to be zero-mean, unit-variance, i.i.d. channel noise samples, E is the bit energy, and the y(k) are the noisy observations received at the fusion center. Moreover, the normalized density function of the link noise is denoted by h(·). The signal-to-noise ratio (SNR) is thus defined as

γ ≜ SNR = E.  (5)
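To make the signal model concrete, the following Python sketch simulates the observation chain of (1)–(5) for Gaussian sensor and channel noise. The function name simulate_wsn, the symbols (theta, tau, E), and the Gaussian choice for the sensor noise are illustrative assumptions of this sketch, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_wsn(theta, tau, E, K, sensor_scale=1.0, rng=rng):
    """Simulate the extended WSN model: (1) sensor observations,
    (3) one-bit quantization, and (4) BPSK transmission over an
    additive unit-variance channel (Gaussian noise assumed here)."""
    # (1) noisy sensor observations x(k) = theta + n(k)
    x = theta + sensor_scale * rng.standard_normal(K)
    # (3) binary observations b(k) = I{x(k) > tau}
    b = (x > tau).astype(int)
    # (4) BPSK: m(k) = 2 b(k) - 1, received y(k) = sqrt(E) m(k) + v(k)
    m = 2 * b - 1
    y = np.sqrt(E) * m + rng.standard_normal(K)
    return x, b, y

# Example: K = 200 sensors, theta = 1, sensor threshold tau = 1.5, SNR = E = 10**(5/10)
x, b, y = simulate_wsn(theta=1.0, tau=1.5, E=10 ** 0.5, K=200)
```

The fusion center observes only the samples y; all estimators discussed below operate on these corrupted observations.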

As a result, we consider the extended decentralized scheme shown in Fig. 1, where S_1, S_2, ..., S_K denote the sensors.

III. ESTIMATION BASED ON NOISY BINARY OBSERVATIONS

Consider the most demanding bandwidth-constraint case, in which the sensors are restricted to transmit one bit per observation. Instrumental to the WSN scheme presented in Section II is the fact that b(k) is a Bernoulli random variable with parameter

Pr{b(k) = 1} = 1 − F(τ − θ)  (6)

where F(·) is the cumulative distribution function of n(k). Also of note is that Pr{b(k) = 0} = F(τ − θ). The probability density function of the noisy observations received at the fusion center, i.e., the y(k), is then given by

f(y(k)) = F(τ − θ) · h(y(k) + √E) + [1 − F(τ − θ)] · h(y(k) − √E)  (7)

where the two mixture components in (7), corresponding to the transmitted symbols −√E and +√E, are denoted by (8a) and (8b), respectively. The likelihood function Λ(p) of p based on the noisy observations y(1), ..., y(K) is given by

(9)

Let us define

p ≜ Pr{b(k) = 0} = F(τ − θ).  (10)

Note that p is the probability that the binary sensor observation b(k) is zero and is restricted to the open interval (0,1). To simplify the problem, we first derive the estimate for p and then utilize the invariance of the ML estimate to estimate θ through (10). The ML estimate of p thus reduces to

p̂_ML = arg max_{p ∈ (0,1)} Λ(p)  (11)

where the objective, written explicitly as a function of p and the y(k), is given by

(12)

The following proposition, the proof of which follows directly from (12) and is omitted for brevity, brings together some important properties of the likelihood.
Proposition 1: Denote

(13)

The following statements hold:
i) The function in (13) is a polynomial in p whose degree and roots are determined by the fusion center observations.
ii) The function in (13) has a finite number of local extrema.
iii) The estimate p̂_ML is one of the local maxima of (13) lying in the (0,1) interval.
Hence, the estimate for p can be calculated as a root of the derivative polynomial of (13) lying in (0,1). It is easy to generate the polynomial coefficients from the y(k) values, and it is also simple to obtain the derivative polynomial. The roots of the derivative can be calculated utilizing traditional polynomial root-finding techniques, for example, the LAPACK and EISPACK routines [24], and p̂_ML is selected as the root in (0,1) yielding the maximum value of the likelihood. Although conceptually simple, polynomial root finding is not desirable due to its high complexity and precision problems for high-order polynomials [24]. To overcome the drawbacks of root-finding algorithms, we utilize Newton's iteration technique. Convergence of Newton's algorithm to the optimal solution is guaranteed [25] in this case since the following proposition proves that the likelihood is log-concave in (0,1).
Proposition 2: Let us denote

(14)

for p ∈ (0,1), and

(15a)

(15b)

The function in (14) is log-concave in (0,1), i.e., its natural logarithm is concave in (0,1).
Proof: Consider

(16)

The first and second derivatives of the above are given by

(17)

and

(18)

indicating that the second derivative is nonpositive for p ∈ (0,1). This subsequently indicates that the logarithm of (14) is concave in (0,1).
Since p̂_ML cannot be found in closed form, and the likelihood is log-concave, Newton's algorithm is utilized to obtain the optimal p̂_ML. Newton's algorithm is based on the following iteration:

p_{i+1} = p_i − [∂ ln Λ(p_i)/∂p] / [∂² ln Λ(p_i)/∂p²]  (19)

where i denotes the iteration number. Newton's algorithm is guaranteed to converge to the optimal solution regardless of the initialization since the log-likelihood is concave.
Furthermore, the ML estimate for θ is now given by

θ̂_ML = τ − F⁻¹(p̂_ML)  (20)

where we utilized the invariance of the ML estimate. To ensure the bijectivity of the cumulative distribution function, we assume that F(·) is strictly increasing at all points in the support of n(k).
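As an illustration of the Newton-based ML estimator of (9)–(20), the following sketch maximizes the log-likelihood of p for Gaussian channel noise and then maps p̂ to θ̂ through the invariance relation (20). It assumes the Gaussian sensor/channel model and the data y from the earlier sketch; the clipping of the iterate to (0,1) and the fixed iteration count are implementation conveniences, not part of the paper.

```python
import numpy as np
from scipy.stats import norm

def ml_estimate(y, tau, E, sensor_scale=1.0, iters=20):
    """ML estimate of theta from noisy fusion-center observations y,
    assuming Gaussian sensor noise (scale sensor_scale) and Gaussian
    unit-variance channel noise. Illustrative sketch of (9)-(20)."""
    sqE = np.sqrt(E)
    # per-sample mixture pieces: density of y(k) given b(k) = 0 / b(k) = 1
    h0 = norm.pdf(y + sqE)              # transmitted -sqrt(E), i.e. b(k) = 0
    h1 = norm.pdf(y - sqE)              # transmitted +sqrt(E), i.e. b(k) = 1
    a, c = h0 - h1, h1                  # per-sample likelihood term: a * p + c

    p = 0.5                             # any initialization works (log-concavity)
    for _ in range(iters):              # Newton iteration, cf. (19)
        mix = a * p + c
        grad = np.sum(a / mix)          # d ln(Lambda) / dp
        hess = -np.sum((a / mix) ** 2)  # d^2 ln(Lambda) / dp^2  (<= 0)
        p = np.clip(p - grad / hess, 1e-9, 1 - 1e-9)

    # invariance of the ML estimate, cf. (20): theta = tau - F^{-1}(p)
    return tau - sensor_scale * norm.ppf(p)

theta_hat = ml_estimate(y, tau=1.5, E=10 ** 0.5)
```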

Remark 1: The ML estimator for the decentralized scheme with noisy communication channels reduces to the estimator for the decentralized scheme with perfect channels as the SNR tends to infinity, i.e.,

(21)

which implies

(22)

A careful inspection shows that this limiting estimator is equivalent to the estimator proposed in [12] and [15].
Proof: See Appendix A.
Although well established, Newton's algorithm requires the evaluation of the first and second derivatives of the log-likelihood at each iteration. In addition, since Newton's algorithm is recursive, these function evaluations are performed repeatedly over the total number of iterations. Due to these issues, we develop practical, easy-to-implement, and fast suboptimal solutions in the following.

IV. FAST AND EFFECTIVE SUBOPTIMAL SOLUTIONS

Two computationally effective, easy-to-implement, and accurate suboptimal solutions are proposed in this section. The two-stage estimator first estimates the binary observations from the noisy fusion center observations utilizing a threshold, and then provides an estimate for the source parameter. The optimal threshold is designed to minimize the probability of error during the binary observation estimation. Optimal threshold values for commonly utilized light- and heavy-tailed channel noise models are derived, including the Gaussian and Cauchy density functions that are justified by the Central and Generalized Central Limit Theorems. The mean estimator, which requires the minimum amount of information and computational load, simply averages the noisy fusion center observations to provide an estimate of the source parameter.

A. Two-Stage Estimator

The estimation of θ is decomposed here into two stages. The b(k) values are first estimated from the noisy fusion center observations y(k). The resulting estimates b̂(k) are Bernoulli random variables whose parameter is given below. Subsequently, p is estimated utilizing the b̂(k) values. Finally, we provide an estimate for θ using the invariance of the ML estimate. In the following analysis, we approach the sensor output detection from a maximum a posteriori (MAP) perspective. Direct solutions are derived for channel noise density functions that result in threshold realizations for the MAP detector, e.g., Gaussian channel noise. Note that the presented approach is also applicable to general noise density functions, with the resulting solution requiring direct evaluation of the MAP criteria.
Recall that the b(k) terms are binary random values. The estimation of b(k) from the noisy fusion center observations reduces to

(23)

The estimate b̂(k) is thus a Bernoulli random variable with

(24)

In the cases where the MAP rule simplifies to a thresholding test, e.g., the Gaussian channel noise case, the above reduces to

(25)

where we define

(26)

The estimate of p, given the estimated Bernoulli random variables, simply reduces to

p̂ = (1/K) Σ_{k=1}^{K} [1 − b̂(k)].  (27)

Given the estimate for p, the estimate for θ is found as

θ̂ = τ − F⁻¹(p̂).  (28)

The two-stage estimator thus consists of three steps:
1) Estimate the binary observations b̂(k) from the y(k).
2) Estimate the Bernoulli parameter p utilizing the b̂(k) estimates.
3) Compute θ̂ utilizing the (invariant) relation between p and θ.
Remark 2: Consider the case where the SNR tends to infinity, i.e., E → ∞. This is the case in which the noisy-transmission-channel WSN model reduces to the noise-free-transmission-channel WSN model previously considered in the literature [12], [15]. Considering the cases where the optimal detection reduces to thresholding, the optimal threshold hence reduces to zero. The two-stage estimator in this case is

(29)

The proof follows similarly to Appendix A and is thus omitted for brevity. The estimator in (29), as expected, is equivalently derived as the estimator of θ from quantized noisy sensor observations [12], [15].
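A compact sketch of the three-step two-stage estimator follows, again for the Gaussian sensor model of the earlier sketches. The threshold argument stands in for the MAP threshold derived in the next subsection; its default of zero corresponds to the high-SNR, equally-likely case, and all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def two_stage_estimate(y, tau, sensor_scale=1.0, threshold=0.0):
    """Two-stage estimator: (i) detect b(k) by thresholding y(k),
    (ii) estimate p = Pr{b(k) = 0}, (iii) map to theta via (28)."""
    b_hat = (y > threshold).astype(int)          # step 1: detect binary observations
    p_hat = np.mean(1 - b_hat)                   # step 2: estimate p, cf. (27)
    p_hat = np.clip(p_hat, 1e-6, 1 - 1e-6)       # keep the inverse cdf well defined
    return tau - sensor_scale * norm.ppf(p_hat)  # step 3: theta = tau - F^{-1}(p), cf. (28)

theta_ts = two_stage_estimate(y, tau=1.5)
```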

The two-stage algorithm provides a computationally attractive, simple, and easy-to-implement solution for the source parameter estimation problem. However, due to the direct estimation of the b(k) values and the subsequent use of these estimates, the two-stage approach introduces estimation error and requires the design of detection procedures, both of which are investigated in the following.
1) Design of b̂(k) and Probability of Error: Since the binary b(k) values are estimated from the noisy fusion center observations y(k) in the two-stage estimation algorithm, this estimation is cast as a binary detection problem in the following. The optimal threshold is approached from a MAP detector perspective [26], [27]. Let the two hypotheses, corresponding to the two possible sensor outputs, be

(30)

The MAP decision rule is thus given by

(31)

where we define

(32)

It is easy to show that the MAP detector, obtained by solving (31), minimizes the probability of binary detection error [27], which, in this case, is given by

(33)

where the complementary decision regions are denoted accordingly.
2) Optimal Detections for the Gaussian and Cauchy Density Cases: The optimal thresholds for the Gaussian and Cauchy density functions are given here. The Gaussian density is a member of the generalized Gaussian density (GGD) family with tail parameter two. The Cauchy density function is a member of the alpha-stable density family (with characteristic exponent one) and is the only closed-form expression in the family other than the Gaussian density [28]. The optimal thresholds for these commonly utilized models are established in the following. Discussion regarding the optimal thresholds is presented after the derivations.
Gaussian Distributed Channel Noise: Let the channel noise samples v(k) obey the Gaussian density function

(34)

The MAP optimal threshold is obtained by solving (31) and is given by

(35)

Cauchy Distributed Channel Noise: Consider next the case in which the channel noise samples v(k) obey the Cauchy density

(36)

which is parameterized by its scale parameter. Substituting the Cauchy density in (31), after some manipulations, gives

(37)

where the notation is abbreviated to avoid cumbersome expressions. Note that (37) is a second-order polynomial in the decision variable, the roots of which are given by

(38)

Now, ordering the two roots accordingly, the MAP optimal detection reduces to

(39)

and, for the remaining parameter range, the corresponding optimal detection threshold follows from (38).
Remark 3: Note that the p = 1/2 case, which occurs for τ = θ, yields a zero detection threshold in both the Gaussian and Cauchy cases. This is expected since

(40)

Since this case implies that the two binary values are equally likely, it is clear that the optimal threshold must reduce to zero. Note that the optimal thresholds are functions of the unknown parameter θ. Iterative algorithms, such as the ones discussed in [12] and [18], can be utilized to iteratively improve the fusion center detection threshold in the Gaussian and non-Gaussian cases. Moreover, a stronger result holds for the Gaussian case: invoking the law of large numbers, p can be estimated directly from the fusion center observations, since the sample mean of the y(k) converges to √E(1 − 2p), yielding an estimator for p

p̂ = (1/2)[1 − ȳ / √E]  (41)

where ȳ denotes the sample mean of the fusion center observations.
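For the Gaussian channel noise case, solving the MAP rule (31) for the antipodal BPSK signals reduces to comparing y(k) against a single threshold. The sketch below computes that threshold together with the law-of-large-numbers estimate of p in (41). The closed form used here, ln(p/(1 − p))/(2√E), is the standard MAP threshold for equal-variance Gaussian hypotheses under the stated model and is offered as an illustration, not as a transcription of (35).

```python
import numpy as np
from scipy.stats import norm

def estimate_p_gaussian(y, E):
    """Estimate p = Pr{b(k) = 0} from the fusion-center samples via the
    law of large numbers, cf. (41): E[y(k)] = sqrt(E) * (1 - 2p)."""
    p = 0.5 * (1.0 - np.mean(y) / np.sqrt(E))
    return float(np.clip(p, 1e-6, 1 - 1e-6))

def map_threshold_gaussian(p, E):
    """MAP detection threshold for unit-variance Gaussian channel noise and
    antipodal signals -sqrt(E) (prior p) versus +sqrt(E) (prior 1 - p)."""
    return np.log(p / (1.0 - p)) / (2.0 * np.sqrt(E))

E = 10 ** 0.5
p_true = norm.cdf(1.5 - 1.0)                  # p = F(tau - theta) in the running example
thr_opt = map_threshold_gaussian(p_true, E)   # equals 0 when p = 1/2 (Remark 3)
p_hat = estimate_p_gaussian(y, E)             # data-driven estimate of p, cf. (41)
```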

B. Mean Estimator

An estimate for θ that requires minimum information and complexity is defined as

(42)

where ȳ = (1/K) Σ_{k=1}^{K} y(k). This estimator assumes that the channel noise has a finite mean, a constraint that holds for the Gaussian density but not for the Cauchy density. To see the effectiveness of the mean estimator in finite-mean channel noise cases, consider

(43)

which follows from the law of large numbers. Now note that

(44)

yielding

(45)

Now, inverting the equation and solving for θ gives the mean estimator.
Remark 4: As in the case of the two-stage estimator, the mean estimator also reduces, in the E → ∞ case, to the estimator in (29), i.e., the estimator of θ based strictly on quantized noisy sensor observations [12]–[16]. This is seen by noting that, as E → ∞, the channel noise contribution to ȳ becomes negligible.
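A sketch of the mean estimator for Gaussian sensor noise follows; the inversion step uses ȳ → √E(1 − 2p) with p = F(τ − θ), matching the law-of-large-numbers argument above. As before, the Gaussian assumption and the function name are illustrative.

```python
import numpy as np
from scipy.stats import norm

def mean_estimate(y, tau, E, sensor_scale=1.0):
    """Mean estimator: average the fusion-center observations and invert
    the relation E[y(k)] = sqrt(E) * (1 - 2 F(tau - theta)) for theta."""
    p = 0.5 * (1.0 - np.mean(y) / np.sqrt(E))   # cf. (43)-(45)
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return tau - sensor_scale * norm.ppf(p)

theta_me = mean_estimate(y, tau=1.5, E=10 ** 0.5)
```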

C. Estimator Variances

Variance is an important evaluation criterion for estimators. The CRLB of any unbiased estimator operating on a WSN with noisy channels, and the variance of the ML estimator, are intractable due to the expectation of an inverted squared random variable and the utilization of Newton's algorithm. However, note that ML estimators asymptotically achieve the CRLB and that the variance of the optimal ML estimator is addressed through extensive simulations in Section V. The variances of the estimates obtained by the two-stage and mean estimators are, however, tractable and are derived in the following for the cases where the optimal detection reduces to a thresholding test.
Consider first the variance of the two-stage estimator. It is easy to see that the variance of the two-stage estimator is given by

(46)

Note that the direct derivation of this variance is problematic. To avoid this difficulty, we make use of the delta method (or Taylor series method), which provides the asymptotic variance of an estimator [29]

(47)

where the mapping is a twice-differentiable function. The utilization of the delta method requires differentiation of an inverse cdf, which is discussed in the following lemma (the proof of which is elementary).
Lemma 1: The derivative of an inverse cdf F⁻¹(·) is given by

d F⁻¹(u)/du = 1 / f(F⁻¹(u))  (48)

where f(·) is the corresponding pdf.
The asymptotic variance of the two-stage estimator is given in the following proposition.
Proposition 3: The asymptotic variance of the two-stage estimator is given by (49), shown at the bottom of the page, where the required probabilities are obtained by substituting the detection threshold into their expressions.
Proof: See Appendix B.
Consider next the variance of the mean estimator, which is given by

(50)

The asymptotic variance of the mean estimator is given in the following proposition.
Proposition 4: The asymptotic variance of the mean estimator, for channel noise with finite variance, is given by

(51)

where γ again denotes the SNR.
Proof: See Appendix C.
The following proposition considers the asymptotic statistical expectations of the two-stage and mean estimators.
Proposition 5: The two-stage and mean estimators are asymptotically unbiased estimators of θ.
The proof follows from the weak law of large numbers and is omitted here for brevity.
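The delta-method step in (46)–(49) can be illustrated numerically: for an estimator of the form θ̂ = τ − F⁻¹(p̂), combining (47) with Lemma 1 gives an asymptotic variance of Var(p̂)/f(F⁻¹(E[p̂]))². The following sketch checks that approximation against Monte Carlo simulation for the two-stage estimator with a zero detection threshold; it demonstrates the delta method itself rather than reproducing the closed-form expression (49), and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def mc_delta_check(theta=1.0, tau=1.5, E=10 ** 0.5, K=200, trials=5000, seed=1):
    """Monte Carlo check of the delta method (47) with Lemma 1 (48) for the
    two-stage estimator theta_hat = tau - F^{-1}(p_hat), zero detection threshold."""
    rng = np.random.default_rng(seed)
    p_hats, theta_hats = np.empty(trials), np.empty(trials)
    for t in range(trials):
        x = theta + rng.standard_normal(K)                              # (1)
        y = np.sqrt(E) * (2 * (x > tau) - 1) + rng.standard_normal(K)   # (3)-(4)
        p_hat = np.clip(np.mean(y < 0.0), 1e-6, 1 - 1e-6)               # detect and average
        p_hats[t], theta_hats[t] = p_hat, tau - norm.ppf(p_hat)
    # delta method: Var(g(p_hat)) ~ [g'(E[p_hat])]^2 Var(p_hat), |g'| = 1 / f(F^{-1}(.))
    g_prime = 1.0 / norm.pdf(norm.ppf(np.mean(p_hats)))
    return g_prime ** 2 * np.var(p_hats), np.var(theta_hats)

predicted, empirical = mc_delta_check()
print(predicted, empirical)   # the two values should be close for large K
```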

The two-stage and mean estimator variances given in (49) and (51) can be contrasted with the CRLB of the estimator operating directly on the b(k), i.e., the estimator operating on the quantized noisy sensor observations and given in (29). The CRLB of this estimator is [12], [15]

(52)

It is straightforward to show that the variances of the two-stage and mean estimators, i.e., (49) and (51), reduce to the variance of the estimator operating on the quantized noisy sensor observations in the absence of channel noise.

Fig. 2. Two-stage and mean estimator variances as functions of the number of sensors, K, and the SNR, γ, in dB. (a) Fixed sensor threshold case. (b) Optimal τ = θ case.

Fig. 3. Minimum variance ratio as a function of SNR in the optimal τ = θ case. The critical crossover point is represented with dotted vertical and horizontal lines.

The two-stage and mean estimator variances are plotted in Fig. 2(a) as functions of the number of sensors, K, and the SNR, γ, in dB, for the Gaussian channel noise case. As expected, the estimator variances decrease as K and the SNR increase. Also of note is that, for high SNR, the two-stage estimator exhibits smaller variance than the mean estimator. This is expected since, as E → ∞, the binary observation estimation error goes to zero. Conversely, the mean estimator provides better performance than the two-stage estimator for low SNR. This is attributable to the resulting increase in binary observation estimation error, which thereby decreases the two-stage estimator performance. The two-stage and mean estimator variances are also plotted for the optimal τ = θ case in Fig. 2(b). Observations similar to the previous case are drawn.
Suboptimal Estimator Variance Comparison in the Optimal τ = θ Case: The minimum variance bound, (52), is achieved when τ = θ, which in the Gaussian sensor noise case yields [12], [15]

(53)

It is easy to show that the two-stage and mean estimators also achieve their minimum variance at τ = θ. Moreover, the minimum variances of the two-stage and mean estimators, for the Gaussian sensor and channel noise case, are given in the following corollaries, the proofs of which are omitted since they simply follow by substituting τ = θ in (49) and (51).
Corollary 1: The minimum variance of the two-stage estimator is given by

(54)

where a shorthand is defined to simplify the notation in the two-stage estimator variance expression.
Corollary 2: The minimum variance of the mean estimator is given by

(55)

where γ is the SNR.

TABLE I THE NUMBER OF OPERATIONS REQUIRED FOR OPTIMAL AND SUBOPTIMAL ALGORITHMS

Now let ρ denote the minimum asymptotic variance ratio of the two-stage and mean estimators for the Gaussian channel noise case

(56)

The minimum variance ratio is plotted in Fig. 3 as a function of SNR for the Gaussian case. Note that the TSE provides a smaller variance than the ME in high-SNR regimes. This is due to the fact that the estimation errors of the binary observations from their corrupted versions are very small, resulting in a performance close to that of the BE. In low-SNR regimes, the ME has a smaller variance than the TSE since, in this case, the binary observation estimation errors are large due to the low SNR. Also, as expected, the ratio converges to unity as the SNR tends to infinity, since both estimators converge to the estimator operating on binary values [12], [15].
It is clear that when the ratio is smaller (greater) than unity, the two-stage estimator (mean estimator) outperforms the mean estimator (two-stage estimator). Substituting the minimum variances into the ratio and setting the ratio to unity yields

(57)

Utilizing the series expansion, to the tenth power for high precision [30],

(58)

and solving for the roots utilizing the LAPACK and EISPACK routines [24] indicates that

(59)

This is also equivalent to a critical crossover SNR expressed in dB. This indicates that the two-stage estimator (mean estimator) outperforms the mean estimator (two-stage estimator) when the system is characterized by an SNR above (below) this crossover value. Also of note is that the relative performance is independent of the sensor noise parameter, σ, since it is canceled out in the ratio operation. In the heavy-tailed Cauchy channel noise case, the output variance of the mean estimator is unbounded, since the Cauchy density has no finite mean, indicating that the two-stage estimator outperforms the mean estimator for all SNR values.

D. Computational Complexity

Computational complexity is an important consideration in online processing systems such as WSNs [31]–[33]. The computational complexity of the proposed optimal and suboptimal solutions is investigated in this section. For the two-stage estimator, we consider the cases where the optimal detector is a thresholding operation.
Consider first the optimal maximum likelihood estimator. Recall that the optimal maximum likelihood estimator utilizes Newton's iteration technique [25], which requires the computation of the log-likelihood derivative values. The number of operations required to compute these values can be summarized in terms of additions, multiplications, and function evaluations, and these computations are repeated at each iteration. The overall complexity of the optimal maximum likelihood estimator is hence proportional to the number of iterations, plus the computational load for (20), i.e., the number of operations required to calculate θ̂ for a given p̂.
To evaluate the complexity of the two-stage estimator, recall that the first stage requires K thresholding operations. Thresholding is followed by averaging of, in the worst case, K samples, which requires K − 1 additions and one multiplication. The two-stage estimator also requires the computation of the detection threshold and of (28), each with a corresponding computational load dominated by cdf and inverse-cdf evaluations. Noting that thresholding is equivalent to multiplication with a step function, the overall complexity of the two-stage estimator reduces to K step-function evaluations, the averaging operations, and a small number of function evaluations. The computational complexity of the mean estimator is simply that of averaging the K observations followed by a single inverse-cdf evaluation. The overall complexities of the proposed algorithms are summarized in Table I, where OMLE denotes the optimal ML estimator. It is clear from the table of results that the OMLE has the highest complexity, followed by the TSE and the ME, respectively. Although the TSE requires the derivation of the optimal threshold as overhead, this calculation is rather simple since it only requires a few function evaluations, multiplications, and additions.
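As a rough empirical counterpart to Table I, the following sketch times the three estimator sketches defined earlier on the same data; absolute numbers depend on the platform, but the ordering OMLE > TSE ≳ ME is the point of interest. The timing harness and the reuse of the illustrative helpers ml_estimate, two_stage_estimate, mean_estimate, estimate_p_gaussian, and map_threshold_gaussian are assumptions of this sketch, not measurements reported in the paper.

```python
import timeit
import numpy as np

E, tau = 10 ** 0.5, 1.5
x, b, y = simulate_wsn(theta=1.0, tau=tau, E=E, K=1000)

timers = {
    "OMLE": lambda: ml_estimate(y, tau, E),
    "TSE": lambda: two_stage_estimate(
        y, tau, threshold=map_threshold_gaussian(estimate_p_gaussian(y, E), E)),
    "ME": lambda: mean_estimate(y, tau, E),
}
for name, fn in timers.items():
    t = timeit.timeit(fn, number=200) / 200      # average per-call time
    print(f"{name}: {1e3 * t:.3f} ms")
```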

V. NUMERICAL EXPERIMENTS

The proposed optimal ML estimator (OMLE) and the suboptimal two-stage (TSE) and mean estimator (ME) fusion centers are evaluated through illustrative numerical experiments. Considered are the output variances and processing times of the proposed estimators. The variances of the clairvoyant estimator (CE) and of the estimator operating on quantized (binary) noisy sensor observations with no channel noise (BE) are utilized as benchmarks.

A. Estimator Variances

To evaluate the fusion center performance for the various estimation techniques, consider an example in which the sensor and channel noise are taken as Gaussian-distributed random variables. The variance of the estimate (ensemble average of 10 000 experiments) is plotted as a function of the number of sensors, K, in Fig. 4(a). Fig. 4(b) plots the variance of the proposed estimators for varying SNR in dB. Also plotted, in Fig. 4(c), is the effect of the sensor threshold on the proposed estimators.

Fig. 4. Illustration of estimator variances for (a) varying K at γ = 5 dB; (b) varying SNR with K = 200; (c) varying sensor threshold τ with K = 200 and γ = 2.5 dB. Circles, squares, diamonds, stars, and pluses represent the CE, BE, OMLE, TSE, and ME estimators, respectively.

As expected, the CE provides the smallest variances in all cases, followed by the BE. Amongst the proposed estimation techniques, as expected, the OMLE provides the best performance. All estimator variances decrease with the number of sensors and the SNR. For high-SNR cases, the two-stage estimator provides better performance than the mean estimator, whereas the mean estimator outperforms the two-stage estimator in low-SNR conditions. Furthermore, the simulation results are in agreement with the theoretical results given in Section IV-C, including the critical crossover SNR point. Of note is that the performance gain is marginal beyond this crossover point. The proposed estimators' variances increase with the discrepancy between the sensor threshold and the source parameter, and the optimal threshold is τ = θ, corroborating the theoretical results.
Similar experiments are also performed for the case where the channel noise is modeled with a heavy-tailed Cauchy distribution. Identical parameters are utilized. The results for a varying number of sensors, SNR, and sensor threshold are given in Fig. 5. The ME, as expected, provides results significantly worse than the OMLE in this heavy-tailed case due to its reliance on the averaging operation, whereas the TSE still provides results close to the OMLE. Also, the performance discrepancy of the OMLE and TSE with respect to the BE and CE is increased due to the algebraic-tailed channel noise. Experiments evaluating the performance of the proposed estimators with a varying sensor threshold were also run, the plots of which are excluded for brevity, and results similar to those of the Gaussian case are obtained.

B. Processing Time

The processing times of the proposed OMLE, TSE, and ME methods are evaluated through examples. The simulations are run in MATLAB on a dual-processor, 3.20-GHz, 2.00-GB RAM PC. Note that, for each K, the processing time given is the ensemble average of 10 000 trials. The results for both the Gaussian and Cauchy channel noise cases are tabulated in Table II, where the times are normalized to the simplest case (the ME with Cauchy channel noise). The processing time of the estimators, as expected, increases with the number of sensors.

The results indicate that the mean estimator is the most computationally efficient, with the least processing time; it is approximately 2.3 times faster than the OMLE. The TSE, in turn, is approximately 2.1 times faster than the OMLE, so the ME is only slightly faster than the TSE. These results are in agreement with the computational analysis presented in Section IV-D.

Fig. 5. Illustration of estimator variances for the Cauchy channel noise case: (a) varying K at γ = 5 dB; (b) varying SNR with K = 200. Circles, squares, diamonds, stars, and pluses represent the CE, BE, OMLE, TSE, and ME estimators, respectively.

TABLE II: CPU TIMES IN MILLISECONDS FOR THE OPTIMAL AND SUBOPTIMAL ALGORITHMS IMPLEMENTED IN MATLAB

C. Optimal Threshold Versus Suboptimal Threshold for the TSE

In the following, we compare the performance of the TSE with the optimal detection threshold to that of the TSE operating with a suboptimal threshold obtained as follows for the Gaussian channel. The estimate of the optimal threshold, in the Gaussian case, is given by

(60)

Fig. 6 shows the performance of the TSE with the optimal and suboptimal thresholds for varying K and SNR. Note that, for a large number of sensors and at high-SNR regimes, the performance loss induced by utilizing a suboptimal threshold is very small. Furthermore, for a small number of sensors and at low-SNR regimes, the performance loss is only marginal. These results indicate that the TSE is a very appealing suboptimal estimator for communication environments characterized by a Gaussian distribution, since it provides significant performance improvements over the ME while its complexity is similar to that of the ME. In addition, the TSE yields close-to-optimal performance in all studied cases. Also of note is that the location of the sensor threshold has no effect on the TSE operating with the suboptimal detection threshold; see Fig. 6(b).
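The suboptimal threshold of (60) replaces the unknown p in the detection threshold with its estimate from the fusion center observations; under the Gaussian-channel assumptions of the earlier sketches, this amounts to composing the two helper functions already defined, as illustrated below. All names come from those sketches, not from the paper.

```python
# Hypothetical composition of the earlier helpers: estimate p from the data,
# cf. (41), then plug it into the Gaussian MAP threshold expression, cf. (60).
def suboptimal_threshold_gaussian(y, E):
    return map_threshold_gaussian(estimate_p_gaussian(y, E), E)

thr_hat = suboptimal_threshold_gaussian(y, E=10 ** 0.5)
theta_ts = two_stage_estimate(y, tau=1.5, threshold=thr_hat)
```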

VI. CONCLUDING REMARKS

The decentralized WSN estimation scheme is extended to admit imperfections occurring during the data transmission from the sensors to the fusion center. Based on the extended decentralized estimation scheme, a maximum likelihood estimator operating on corrupted quantized noisy sensor observations is proposed. The optimal ML estimate is a root in (0,1) of the derivative of a polynomial function. The natural logarithm of this polynomial in (0,1) is analyzed, showing that the polynomial function is log-concave in (0,1) and that numerical methods such as Newton's algorithm can be utilized to obtain the optimal solution. Due to complexity and implementation issues associated with numerical solutions, we derive and analyze the two-stage and mean estimator suboptimal solutions. The two-stage estimator first estimates the binary observations from the noisy fusion center observations utilizing a threshold, subsequently providing an estimate for the source parameter. The optimal threshold is the MAP detector and minimizes the probability of error during the binary observation estimation. Optimal threshold values for the commonly utilized light-tailed Gaussian and heavy-tailed Cauchy channel noise models are derived. The mean estimator, which requires the minimum amount of information and computational load, relies on simple averaging. A computational complexity analysis is provided, indicating that the suboptimal solutions are computationally effective. Numerical examples evaluating and comparing the proposed techniques indicate the effectiveness of the optimal

ML estimator and the suboptimal two-stage and mean estimators in varying SNR regimes.

Fig. 6. Illustration of the TSE operating with the optimal (squares) and suboptimal (circles) detection thresholds. (a) γ (dB) ∈ {−5, 0, 5} for a varying number of sensors. (b) γ (dB) ∈ {−5, 0, 5} for varying threshold location with K = 200.

APPENDIX A
PROOF OF REMARK 1

Note that the SNR tends to infinity when the bit energy tends to infinity and/or the channel noise vanishes. Consider first the case when E tends to infinity. To simplify the notation, we take the channel noise density to be the unnormalized version in the following. By basic properties of the density function, the following holds:

(61)

which gives

(62)

(63)

(64)

where we used the fact established in (61). Differentiating the above and setting it to zero gives the desired result. Consider next the case in which the inverse of the channel noise scale tends to infinity, i.e., the channel noise vanishes:

(65)

since, for finite E, the above again reduces to (62)

(66)

The rest follows similarly to the previous case.

APPENDIX B
TWO-STAGE ESTIMATOR VARIANCE

With the quantities of Section IV-C, the two-stage estimator variance is determined as

(67)

Consider first

(68)

which, recalling the delta method in (47), gives

(69)

Consider next

(70)

(72)

Recalling that b̂(k) is a Bernoulli random variable with variance

(71)

gives (72), shown at the top of the page, where the second line follows from the independence of the involved random variables. Placing (69) and (72) into (67) yields (49).

APPENDIX C
MEAN ESTIMATOR VARIANCE

Again utilizing the delta method, the mean estimator variance reduces to

(73)

Consider first the expected value of ȳ. It is easy to see that

(74a)

(74b)

(75)

where the first and second lines follow from the zero-mean noise assumption and from the fact that b(k) is a Bernoulli random variable. Consider next

(76a)

(76b)

(77)

Furthermore, noting that

(78)

yields

(79)

Now, placing (75) and (79) into (73) gives (51).

REFERENCES

[1] D. A. Castanon and D. Teneketzis, "Distributed estimation algorithms for nonlinear systems," IEEE Trans. Autom. Control, vol. AC-30, no. 5, pp. 418–425, May 1985.
[2] J. Speyer, "Computation and transmission requirements for a decentralized linear-quadratic-Gaussian control problem," IEEE Trans. Autom. Control, vol. AC-24, no. 2, pp. 266–269, Apr. 1979.
[3] R. Vadigepalli and F. J. Doyle, III, "A distributed state estimation and control algorithm for plantwide processes," IEEE Trans. Contr. Syst. Technol., vol. 11, no. 1, pp. 119–127, Jan. 2003.
[4] E. Camponogara, D. Jia, B. Krogh, and S. Talukdar, "Distributed model predictive control," IEEE Contr. Syst. Mag., vol. 22, no. 1, pp. 44–52, Feb. 2002.
[5] A. S. Willsky, M. Bello, D. Castanon, B. Levy, and G. Verghese, "Combining and updating of local estimates and regional maps along sets of one-dimensional tracks," IEEE Trans. Autom. Control, vol. AC-27, no. 4, pp. 799–813, Aug. 1982.
[6] S. I. Roumeliotis and G. A. Bekey, "Distributed multi-robot localization," IEEE Trans. Robot. Autom., vol. 18, no. 5, pp. 781–795, Oct. 2002.
[7] Y. Sung, L. Tong, and A. Swami, "Asymptotically locally optimal detector for large-scale sensor networks under the Poisson regime," in Proc. 2004 IEEE Int. Conf. Acoust., Speech, Signal Process., Montreal, QC, Canada, 2004, pp. 1077–1080.
[8] P. K. Varshney, Distributed Detection and Data Fusion. New York: Springer-Verlag, 1997.
[9] R. Niu, B. Chen, and P. K. Varshney, "Fusion of decisions transmitted over Rayleigh fading channels in wireless sensor networks," IEEE Trans. Signal Process., vol. 54, no. 3, pp. 1018–1027, Mar. 2006.
[10] V. V. Veeravalli, T. Basar, and H. V. Poor, "Minimax robust decentralized detection," IEEE Trans. Inf. Theory, vol. 40, no. 1, pp. 35–40, Jan. 1994.
[11] J.-J. Xiao and Z.-Q. Luo, "Universal decentralized detection in a bandwidth-constrained sensor network," IEEE Trans. Signal Process., vol. 53, no. 8, pp. 2617–2624, Aug. 2005.
[12] A. Ribeiro and G. B. Giannakis, "Bandwidth-constrained distributed estimation for wireless sensor networks—Part I: Gaussian case," IEEE Trans. Signal Process., vol. 54, no. 3, pp. 1131–1143, Mar. 2006.
[13] J.-J. Xiao, S. Cui, Z.-Q. Luo, and A. J. Goldsmith, "Power scheduling of universal decentralized estimation in sensor networks," IEEE Trans. Signal Process., vol. 54, no. 2, pp. 413–422, Feb. 2006.
[14] J.-J. Xiao and Z.-Q. Luo, "Decentralized estimation in an inhomogeneous sensing environment," IEEE Trans. Inf. Theory, vol. 51, no. 10, pp. 3564–3575, Oct. 2005.
[15] H. Papadopoulos, G. Wornell, and A. Oppenheim, "Sequential signal encoding from noisy measurements using quantizers with dynamic bias control," IEEE Trans. Inf. Theory, vol. 47, no. 3, pp. 978–1002, Mar. 2001.

[16] Z.-Q. Luo, "Universal decentralized estimation in a bandwidth constrained sensor network," IEEE Trans. Inf. Theory, vol. 51, no. 6, pp. 2210–2219, Jun. 2005.
[17] Z.-Q. Luo, "An isotropic universal decentralized estimation scheme for a bandwidth constrained ad hoc sensor network," IEEE J. Sel. Areas Commun., vol. 23, no. 4, pp. 735–744, Apr. 2005.
[18] A. Ribeiro and G. B. Giannakis, "Bandwidth-constrained distributed estimation for wireless sensor networks—Part II: Unknown probability density function," IEEE Trans. Signal Process., vol. 54, no. 7, pp. 2784–2796, Jul. 2006.
[19] G. Mergen and L. Tong, "Type-based estimation over multiaccess channels," IEEE Trans. Signal Process., vol. 54, no. 2, pp. 613–626, Feb. 2006.
[20] J. Gubner, "Distributed estimation and quantizer design," IEEE Trans. Inf. Theory, vol. 39, no. 4, pp. 1456–1459, Jul. 1993.
[21] W. Lam and A. Reibman, "Quantizer design for decentralized systems with communication constraints," IEEE Trans. Commun., vol. 41, pp. 1602–1605, Aug. 1993.
[22] M. Abdallah and H. Papadopoulos, "Sequential signal encoding and estimation for distributed sensor networks," in Proc. 2001 IEEE Int. Conf. Acoust., Speech, Signal Process., Salt Lake City, UT, 2001, pp. 2577–2580.
[23] J. Xiao, A. Ribeiro, Z. Luo, and G. Giannakis, "Distributed compression-estimation using wireless sensor networks," IEEE Signal Process. Mag., vol. 23, no. 4, pp. 27–41, Jul. 2006.
[24] A. Edelman and H. Murakami, "Polynomial roots from companion matrix eigenvalues," Math. Comput., vol. 64, no. 210, pp. 763–776, Apr. 1995.
[25] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, 2004.
[26] C. Sgraja, J. Egle, and J. Lindner, "On pilot-assisted MAP detection in frequency-flat Rayleigh fading channels," in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), Mar. 2003, pp. 930–935.
[27] H. V. Poor, An Introduction to Signal Detection and Estimation, 2nd ed. New York: Springer-Verlag, 1994.
[28] C. L. Nikias and M. Shao, Signal Processing With Alpha-Stable Distributions and Applications. New York: Wiley, 1995.
[29] W. H. Greene, Econometric Analysis, 5th ed. Upper Saddle River, NJ: Prentice-Hall, 2003.
[30] A. Erdelyi, W. Magnus, F. Oberhettinger, and F. Tricomi, Higher Transcendental Functions, Vols. I–III (The Bateman Manuscript Project). New York: McGraw-Hill, 1953.
[31] X. Li, "Blind channel estimation and equalization in wireless sensor networks based on correlations among sensors," IEEE Trans. Signal Process., vol. 53, no. 4, pp. 1511–1519, Apr. 2005.
[32] R. Puri, A. Majumdar, P. Ishwar, and K. Ramchandran, "Distributed video coding in wireless sensor networks," IEEE Signal Process. Mag., vol. 23, no. 4, pp. 94–106, Jul. 2006.
[33] M. Cetin, L. Chen, J. W. Fisher, III, A. Ihler, R. Moses, M. Wainwright, and A. Willsky, "Distributed fusion in sensor networks," IEEE Signal Process. Mag., vol. 23, no. 4, pp. 42–55, Jul. 2006.

Tuncer Can Aysal (S’05) received the B.E. degree (high honors) from Istanbul Technical University, Istanbul, Turkey, in 2003, and the Ph.D. degree from the University of Delaware, Newark, in 2007, both in electrical and computer engineering. His Ph.D. dissertation was nominated by the Electrical and Computer Engineering Department for Allan P. Colburn Dissertation Prize in Mathematical Sciences and Engineering for the most outstanding doctoral dissertation in the mathematical and engineering disciplines. He is currently a Postdoctoral Research Fellow with the Electrical and Computer Engineering Department, McGill University, Montreal, QC, Canada. His research interests include distributed/decentralized signal processing, sensor networks, consensus algorithms, and robust, nonlinear, statistical signal and image processing. Dr. Aysal was a recipient of the University of Delaware Competitive Graduate Student Fellowship, a Signal Processing and Communications Graduate Faculty Award (award is presented to an outstanding graduate student in this research area), and a University Dissertation Fellowship. He was also a Best Student Paper finalist at the International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2007.

Kenneth E. Barner (S’84–M’92–SM’00) was born in Montclair, NJ, on December 14, 1963. He received the B.S.E.E. degree (magna cum laude) from Lehigh University, Bethlehem, PA, in 1987. He received the M.S.E.E. and Ph.D. degrees from the University of Delaware, Newark, in 1989 and 1992, respectively. For his dissertation, “Permutation Filters: A Group Theoretic Class of Non—Linear Filters,” he received the Allan P. Colburn Prize in Mathematical Sciences and Engineering for the most outstanding doctoral dissertation in the engineering and mathematical disciplines. He was the duPont Teaching Fellow and a Visiting Lecturer at the University of Delaware in 1991 and 1992, respectively. From 1993 to 1997, he was an Assistant Research Professor with the Department of Electrical and Computer Engineering, University of Delaware, and a Research Engineer with the duPont Hospital for Children. He is currently a Professor with the Department of Electrical and Computer Engineering, University of Delaware. His research interests include signal and image processing, robust signal processing, nonlinear systems, communications, haptic and tactile methods, and universal access. He is the coeditor of the book Nonlinear Signal and Image Processing: Theory, Methods, and Applications (CRC Press, 2004). Dr. Barner is the recipient of a 1999 NSF CAREER award. He was the Co-Chair of the 2001 IEEE—EURASIP Nonlinear Signal and Image Processing (NSIP) Workshop and a Guest Editor for a special issue of the EURASIP Journal of Applied Signal Processing on Nonlinear Signal and Image Processing. He is a member of the Nonlinear Signal and Image Processing Board. He is the Technical Program Co-Chair for the International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2005. He is also serving as an Associate Editor of the IEEE TRANSACTIONS ON SIGNAL PROCESSING, the IEEE SIGNAL PROCESSING MAGAZINE, and the IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING. He is also a member of the Editorial Board of the EURASIP Journal of Applied Signal Processing. He is a member of Tau Beta Pi, Eta Kappa Nu, and Phi Sigma Kappa.
