Statistical Signal Processing and Modeling (Project 2 Report)
Jusub Kim
May 12, 2003
ENEE624 Advanced Digital Signal Processing
Instructor: Dr. Babis Papadopoulos

1 Introduction

In this project, we start with the implementation of the Levinson-Durbin algorithm, which solves a set of linear Hermitian Toeplitz equations. After implementing the algorithm, we verify it with a specific wide-sense stationary (WSS) process. The Levinson-Durbin algorithm makes it easy to find an autoregressive (or all-pole) model for a WSS random process. Usually, when we build a model for a random process, we start with ergodic assumptions (on the mean and autocorrelation), which imply that we can draw second-order stochastic properties of the process from a specific sample path of it. The validity of the assumptions is justified by the performance of the algorithm that uses these estimates.

We also explore stochastic modeling and lossy compression. We develop an encoder and decoder pair for lossy compression of a specific random process under certain quality criteria. One criterion is stated from an ensemble-average point of view, and the other is not. First, we find an appropriate model for the specific process, and based on that model, we develop algorithms for lossy compression.

The last problem we deal with is a channel deconvolution problem. Here, we recover an information-bearing signal from an observation that is the output of a causal and stable LTI system with unknown impulse response. For this blind deconvolution problem, we use the Godard [1] algorithm, which is the most widely used blind deconvolution algorithm in practice.

In each part of the report, where necessary, we give MATLAB functions so that anyone can also reproduce the results on screen.

2 Implementation of the Levinson-Durbin Algorithm

My matlab function

function [ap,b0,e,g]=LevinsonD(rx,p)
%Levinson-Durbin Algorithm
%rx: Toeplitz autocorrelation column vector
%p: number of poles desired
%ap: denominator coefficients obtained
%b0: numerator coefficient obtained
%e: a vector of squared errors
%g: a vector of reflection coefficients
if p>length(rx)
    error('p must be less than the length of the rx vector.');
end
ap=1;
e=rx(1);
for j=2:p+1
    g(j-1)=-rx(2:j)'*flipud(ap)/e(j-1);
    ap=[ap;0]+g(j-1)*[0;conj(flipud(ap))];
    e(j)=e(j-1)*(1-abs(g(j-1))^2);
end
b0=sqrt(e(p+1));

3 Verifying the Levinson-Durbin Algorithm

In this section, we verify the Levinson-Durbin algorithm with a specific wide-sense stationary process which has the following power spectrum.

P_x(z) = b(0)^2 / [ (1 - 0.5 z^{-1})(1 + 0.25 z^{-1})(1 - 0.5 z)(1 + 0.25 z) ]

If we perform the inverse z-transform, we get the autocorrelation function r_x(k) of the WSS process x, a sum of two geometric sequences in (0.5)^{|k|} and (-0.25)^{|k|}.

If the algorithm identifies the all-pole model correctly, it has to give us the two poles at 0.5 and -0.25 when this autocorrelation sequence r_x(k) is given as input. It turns out that the algorithm works correctly, finding the two poles at 0.5 and -0.25.
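As a cross-check, the same recursion can be sketched in Python with NumPy. The autocorrelation sequence below is generated numerically from the stated poles (assuming unit-variance white-noise input, since the report's b(0) value did not survive extraction), and the recursion should return the two poles at 0.5 and -0.25:

```python
import numpy as np

def levinson_durbin(rx, p):
    """Levinson-Durbin recursion for the Hermitian Toeplitz normal equations."""
    a = np.array([1.0])
    e = rx[0]
    for j in range(1, p + 1):
        g = -np.dot(a, rx[1:j + 1][::-1]) / e          # reflection coefficient
        a = np.concatenate([a, [0.0]]) + g * np.concatenate([[0.0], a[::-1].conj()])
        e = e * (1.0 - abs(g) ** 2)                    # updated modeling error
    return a, np.sqrt(e)

# Autocorrelation of the AR(2) process with poles 0.5 and -0.25:
# A(z) = (1 - 0.5 z^-1)(1 + 0.25 z^-1) = 1 - 0.25 z^-1 - 0.125 z^-2
h = np.zeros(200)
h[0] = 1.0
h[1] = 0.25 * h[0]
for n in range(2, 200):
    h[n] = 0.25 * h[n - 1] + 0.125 * h[n - 2]          # impulse response of 1/A(z)
rx = np.array([h[:200 - k] @ h[k:] for k in range(3)])

a, b0 = levinson_durbin(rx, 2)
poles = np.roots(a)                                    # expect 0.5 and -0.25
```

Running this recovers a ≈ [1, -0.25, -0.125], whose roots are the two poles, matching the behavior reported above.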

4 Signal Modeling

In this section, we model a process via an all-pole filter. We start with the ergodic assumption so that we can estimate the autocorrelation function from one sample path; we justify this assumption later. Since we want to model the process by an all-pole filter, we can use the Levinson-Durbin algorithm as follows.

My matlab function

% Find autocorrelation of BlackBoxA up to 100 time lags
s=autocorr(A(1,:),100);
% Finding all-pole model with p=5
[ap,b0,e,g]=LevinsonD(s',5);
% Creating random process
F=filter(b0,ap,randn(1,1000));

First, we determined the number of poles to be 5 by looking at Figure 1. The obtained poles are shown in Figure 2. Figure 3 shows how well the designed all-pole model (p=5) creates a process which has the autocorrelation function of the original process.
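The same pipeline (estimate the autocorrelation from a single sample path, then fit an all-pole model) can be sketched in Python. Since BlackBoxA itself is not available, a hypothetical AR(2) generator stands in for it, and a direct solve of the Toeplitz normal equations stands in for the Levinson recursion (both yield the same coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical stand-in for one BlackBoxA sample path: a stable AR(2) process
x = rng.standard_normal(1100)
for t in range(2, len(x)):
    x[t] += 0.5 * x[t - 1] - 0.3 * x[t - 2]
x = x[100:]                                   # discard the start-up transient

# biased autocorrelation estimate r_hat(k) = (1/N) sum_n x[n] x[n+k]
p, n = 5, len(x)
r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])

# all-pole model of order p: solve the Toeplitz normal equations R a = -r
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
a = np.concatenate([[1.0], np.linalg.solve(R, -r[1:p + 1])])
b0 = np.sqrt(r[0] + a[1:] @ r[1:p + 1])       # minimum modeling error
```

For this synthetic path the fitted coefficients approach [1, -0.5, 0.3, 0, 0, 0], and b0 approaches the driving-noise standard deviation of 1.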

Figure 1: Error v.s. number of poles [Problem 2.3]

Figure 2: Obtained poles by Levinson-Durbin algorithm [Problem 2.3]


Figure 3: Original v.s. Created autocorrelation [Problem 2.3]

Since we can draw as many samples as we want from BlackBoxA, we can check whether the process is WSS; it turns out to be almost WSS. The mean of the estimated E[x[n]] along n = 1, 2, ..., 1000 is 0.001 and its standard deviation is 0.13; also, the mean of the standard deviation of the estimated autocorrelation at each time lag k is 0.01. Figure 4 shows that it has an almost-WSS property. Since it turns out to be almost WSS, we can check ergodicity by comparing the statistical autocorrelation function obtained from many sample paths with the one obtained from a single sample path. Figure 5 shows how close they are. Though they look slightly different, the important first part (up to the first zero crossing) is almost the same, which justifies ergodicity. In practice we often have only one sample path; in that case the validity of the ergodic assumption is justified by the performance of the system.
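The WSS/ergodicity check described above can be sketched numerically. BlackBoxA is not available, so a hypothetical AR(2) process stands in for it; the point is only that the ensemble-average autocorrelation and the single-path time average nearly coincide:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_path(n, rng):
    # hypothetical stand-in for BlackBoxA: a stable AR(2) process
    x = rng.standard_normal(n + 100)
    for t in range(2, len(x)):
        x[t] += 0.25 * x[t - 1] + 0.125 * x[t - 2]
    return x[100:]                            # discard the start-up transient

kmax, n0 = 5, 50

# ensemble estimate of r(k) = E[x[n0] x[n0+k]] over many sample paths
paths = (sample_path(n0 + kmax + 1, rng) for _ in range(4000))
ens = np.mean([p[n0] * p[n0:n0 + kmax + 1] for p in paths], axis=0)

# time-average estimate from one long sample path (the ergodic assumption)
x = sample_path(20000, rng)
one = np.array([x[:len(x) - k] @ x[k:] / len(x) for k in range(kmax + 1)])
```

The two estimates agree to within sampling error at every lag, which is the ergodicity statement used above.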

5 Stochastic Modeling and Lossy Compression

In this section, we explore stochastic modeling and a lossy compression strategy. Given BlackBoxB, we first need to find an appropriate model for the process. After observing many sample paths, we modeled the process as follows.

x[n] = A x_1[n] + B x_2[n] + C a sin(n ω_0 + φ)

Figure 4: WSS checking(autocorrelation) [Problem 2.3]

Figure 5: Ergodicity checking [Problem 2.3]


where A, B, and C are random variables taking the value 0 or 1 with some probabilities (C plays the role of preventing an all-zero signal), and x_1[n] and x_2[n] are two different regular processes. The sine term is a harmonic process with random amplitude a and phase φ. Figures 6, 7, and 8 show the autocorrelation and power spectrum of each of the three components. Since the random variables A, B, and C turn on and off with some probability, sometimes we observe a mixed signal as in Figure 9, and sometimes we observe each component by itself.

Figure 6: Regular process 1 [Problem 2.4]

Figure 7: Regular process 2 [Problem 2.4]

We first explore the encoding and decoding algorithm satisfying an error criterion as follows.

Figure 8: Harmonic Process [Problem 2.4]

Figure 9: Mixed signal [Problem 2.4]


The criterion bounds the reconstruction error computed over the given sample path itself (a per-path distortion constraint), rather than an ensemble-average error.

With this quality criterion in mind, we can think about applying the Padé or Prony method to model each small piece of a given sample path and sending only the coefficients of the models. That is, we encode the given sample path by a set of model coefficients, one set per piece of the sample path. If the number of coefficients is smaller than the number of samples modeled, we clearly get compression gain, provided the error criterion is satisfied.

However, after applying the Padé and Prony methods to sample paths from BlackBoxB, varying each piece's length and the pole and zero combinations, we realized that it is not appropriate to model the given signal this way: we could not get compression while satisfying the above error criterion. The only case satisfying the error criterion was the Padé method with as many zeros as samples, which clearly is not what we want. However, when the sample path comes from just the harmonic process, we could encode it very accurately with a two-pole, one-zero filter by the Padé method. In the other cases, where the sample path comes from a regular or mixed process, it does not seem possible to get compression with the Padé or Prony methods under the above quality criterion.

So, we decided to adopt a multirate coding scheme, which is to quantize the sample path with a two-channel perfect reconstruction (PR) filter bank. We adopted the Lloyd-Max quantizer for optimizing the quantizer, and we used a greedy approach to allocate the number of bits to each channel. For example, for one sample path, we obtained the following compression gain while satisfying the error criterion.
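The one positive finding above — that a purely harmonic piece is captured exactly by a two-pole, one-zero Padé model — can be checked directly; the frequency w0 below is an arbitrary illustration, not a parameter of BlackBoxB:

```python
import numpy as np

w0 = 0.3 * np.pi
x = np.cos(w0 * np.arange(100))              # one piece of a harmonic sample path

# Pade approximation with p = 2 poles and q = 1 zero: the model matches
# x[0..3] exactly, so only 5 numbers need to be transmitted for 100 samples.
M = np.array([[x[1], x[0]],
              [x[2], x[1]]])
a1, a2 = np.linalg.solve(M, -x[2:4])         # denominator from x[2], x[3]
a = np.array([1.0, a1, a2])
b = np.array([x[0], x[1] + a1 * x[0]])       # numerator from x[0], x[1]

# decoder: regenerate the whole piece from the coefficients alone
y = np.zeros(100)
for n in range(100):
    y[n] = ((b[n] if n < 2 else 0.0)
            - a[1] * (y[n - 1] if n >= 1 else 0.0)
            - a[2] * (y[n - 2] if n >= 2 else 0.0))
```

The recovered denominator is 1 - 2cos(w0) z^-1 + z^-2, whose poles lie on the unit circle at e^{±j w0}, and the regenerated piece matches the original to machine precision — consistent with the near-lossless harmonic case described above.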


Compression gain = (number of bits in the original representation of the sample path) / ((b1 + b2) · N/2)
where b1 and b2 are the numbers of bits allocated to the two channels and N is the length of the given sample path. Overall, when we are given a sample path and the error criterion is not stated from a statistical point of view, the encoding/decoding scheme is shown in Figure 10, and the corresponding MATLAB functions for the given process are as follows.


Is the piece modeled by Padé?
YES (it is the harmonic process): encode/decode by the two-pole, one-zero filter coefficients.
NO (it is a regular or mixed process): encode/decode by the two-channel PR filter bank with the Lloyd-Max quantizer.

Figure 10: Encoding/Decoding Scheme for BlackBoxB [Problem 2.4]

My matlab function

code=EncodePath1(original)
recovered=DecodePath1(code)

In particular, when we send a sample path from the harmonic process, we get a tremendous compression gain while satisfying the error criterion.

The measured compression gain is almost a factor of 100.
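The Lloyd-Max quantizer used in the filter-bank branch of the scheme can be sketched as follows; the Gaussian training data and the 2-bit codebook are illustrative assumptions, not the actual BlackBoxB channel signals:

```python
import numpy as np

def lloyd_max(x, levels, iters=50):
    """Train an MSE-optimal scalar quantizer (Lloyd-Max) on samples x."""
    c = np.quantile(x, (np.arange(levels) + 0.5) / levels)  # initial codebook
    for _ in range(iters):
        t = (c[:-1] + c[1:]) / 2            # nearest-neighbor decision thresholds
        idx = np.searchsorted(t, x)         # assign each sample to a cell
        for i in range(levels):
            cell = x[idx == i]
            if cell.size:
                c[i] = cell.mean()          # centroid (conditional-mean) condition
    return c

rng = np.random.default_rng(0)
x = rng.standard_normal(50000)              # illustrative channel signal
c = lloyd_max(x, 4)                         # 4 levels = 2 bits per sample
xq = c[np.searchsorted((c[:-1] + c[1:]) / 2, x)]
mse = np.mean((x - xq) ** 2)
```

For unit-variance Gaussian data the converged 4-level distortion is close to the classical optimum of about 0.1175; the greedy bit allocation mentioned above would compare such MSE figures across the two channels when deciding where to spend the next bit.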

6 Channel Deconvolution

In this section, we investigate a blind deconvolution algorithm. In the blind deconvolution problem, our task is to recover an i.i.d. information-bearing signal from an observation that is the output of a causal and stable LTI system with unknown but fixed impulse response. Since all we know about the source signal is a probabilistic


source model, the information-bearing sequence can be recovered only in a probabilistic sense. We use the Godard algorithm, the most widely used blind deconvolution algorithm in practice [1]. Specifically, we use its second special case, called the constant modulus algorithm (CMA). The CMA adjusts the equalizer weights by iteratively minimizing the cost function, looking at one more sample at each step:

J(k) = E[ (|y(k)|^2 - R_2)^2 ]

where y(k) is the output of the equalizer with filter coefficients w(k) at the k-th sample and R_2 is a positive constant defined by



R_2 = E[|s(k)|^4] / E[|s(k)|^2]

where s(k) is the source signal. The CMA adapts the equalizer filter coefficients w in the following way:

w(k+1) = w(k) + μ x(k) y(k) (R_2 - |y(k)|^2)

Figure 11: Error v.s. samples [Problem 2.5]

where μ is a small adaptation gain and x(k) is the observed real signal. There is practical evidence for the conjecture that this algorithm will converge to the desired

global minimum if the equalizer is long enough and initialized with a nonzero center tap [1]. So we start by assigning 1 to the equalizer coefficient in the middle and 0 to the other coefficients. In our problem, R_2 is 1, since the second-order and fourth-order moments are both 1. After several trials, we chose the equalizer order as 20 and μ as 0.01; a large μ made the algorithm diverge. Figure 11 shows that after about 400 samples the estimated mean-square error converges.
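The CMA loop described in this section can be sketched end-to-end; the three channel taps are hypothetical stand-ins for the unknown system, and a binary ±1 source gives R_2 = 1 exactly as in the report:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000
s = rng.choice([-1.0, 1.0], size=N)        # i.i.d. source: E|s|^4 / E|s|^2 = 1
h = np.array([1.0, 0.4, 0.2])              # hypothetical unknown channel
x = np.convolve(s, h)[:N]                  # observed signal

L, mu, R2 = 21, 0.01, 1.0
w = np.zeros(L)
w[L // 2] = 1.0                            # nonzero center-tap initialization
buf = np.zeros(L)                          # regressor x(k) = [x[k], ..., x[k-L+1]]
cost = np.empty(N)
for k in range(N):
    buf = np.roll(buf, 1)
    buf[0] = x[k]
    y = w @ buf                            # equalizer output y(k)
    w = w + mu * buf * y * (R2 - y ** 2)   # CMA stochastic-gradient update
    cost[k] = (y ** 2 - R2) ** 2           # instantaneous constant-modulus error
```

With these settings the instantaneous error settles after a few hundred samples, mirroring the convergence behavior in Figure 11; taking sign(y) after convergence should recover the ±1 sequence up to a delay and a possible sign flip.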

Figure 12: Equalizer filter’s impulse response [Problem 2.5]

Figure 13: Recovered IID sequence [Problem 2.5]

The obtained equalizer coefficients are shown in Figure 12, and Figure 13 shows the i.i.d. information-bearing signal recovered by this equalizer in a probabilistic sense.

References

[1] Simon Haykin, Adaptive Filter Theory, 3rd ed., Prentice Hall, 1996.
[2] Monson H. Hayes, Statistical Digital Signal Processing and Modeling, Wiley, 1996.

