A Burst Error Correction Scheme Based on Block-Sparse Signal Reconstruction

B.S. Adiga, M. Girish Chandra and Swanand Kadhe
Innovation Labs, Tata Consultancy Services, Bangalore, INDIA
{bs.adiga, m.gchandra and swanand.kadhe}@tcs.com

Abstract— In the light of the strong link between error control coding and Compressed Sensing (CS), it seems natural to recast the recent results on block-sparse recovery for burst error correction. In this paper an attempt is made towards arriving at a few complex (real) codes, in terms of explicit construction of generator and parity-check matrices, decoding using computationally efficient algorithms, code rate, the guaranteed number of bursts that can be corrected, and how the performance degrades as the number of bursts goes beyond the theoretical limits. These initial results can trigger further research in the area of burst-error correction when viewed from the CS perspective.

Keywords- Compressed Sensing; block sparsity; block coherence; burst error correction; block pursuit algorithms

I. INTRODUCTION

Compressed Sensing or Compressive Sensing (CS) has recently emerged as a very powerful field in signal processing. The methodologies of CS enable the acquisition of signals at a rate much smaller than what is commonly prescribed by Shannon-Nyquist [1], [2], [3], provided the signals exhibit the property of sparsity or compressibility. A signal or data represented as a vector (say of length N) is referred to as s-sparse if the representation of that vector in a suitable basis or dictionary has at most s nonzero coefficient values. Compressibility is a weaker notion than sparsity, where the coefficients of the representation, sorted in decreasing order of magnitude, decay according to a power law, thus allowing the signal to be well approximated by, say, s coefficients. In either case, we typically have s ≪ N. An interesting aspect of CS is its close connection to areas like coding, high-dimensional geometry, sparse approximation theory, data streaming algorithms and random sampling (see references in [4]). Newer connections are getting established at a quick pace, and in the process CS often provides a fresh perspective on these areas. Also, there are vigorous efforts in exploring the application of CS in many different fields such as data mining, DNA microarrays, astronomy, tomography, digital photography, sensor networks, and A/D converters [5]. CS involves linear measurements obtained through (linear) projections and non-linear reconstruction algorithms. The M measurements represented by the vector y (of dimension M × 1) can be viewed in terms of the transformation:

y = Ax        (1)

Since this paper is about error correction coding, we restrict ourselves to data vectors x which are sparse in the canonical (Dirac) basis. The reason is that we can relate the sparse vector x to the error vector when coding is viewed from the perspective of CS (elaborated subsequently). With this assumption on x, the M × N matrix A in (1) can be referred to as the measurement or sensing matrix. The CS theory answers fundamental questions like [6]: How many measurements are required for accurate recovery, and which sensing matrices facilitate recovery? How can the data vector be recovered from few measurements? Towards the latter, ℓ1-norm minimization (also called basis pursuit) and greedy algorithms such as the various matching pursuits are quite popular. In order to facilitate the recovery of an s-sparse vector by these algorithms, the measurement matrix has to satisfy certain conditions based on one of these desirable properties: spark, Null-Space Property (NSP), Restricted Isometry Property (RIP) or coherence (see [3] and some classical references therein). These properties are related to each other, as summarized neatly in [3]. The applicability of a particular property depends upon the situation, for example, whether the measurements are contaminated by noise or error (say, due to quantization) and whether the signal is exactly or approximately sparse (see [3] for more details). While the spark, NSP and RIP all provide guarantees for the recovery of s-sparse signals, verifying that a general matrix A satisfies any of these properties has combinatorial computational complexity [3]. In many cases it is preferable to use properties of A that are easily computable to provide more concrete recovery guarantees. The coherence of a matrix is one such property (see [3] and the references within). In the last couple of years, the theory of CS has been extended to handle signals which exhibit block sparsity [5], [7].
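As a concrete illustration of the greedy recovery route just mentioned, the sketch below (our own toy example, not drawn from the cited references) runs orthogonal matching pursuit on a Dirac-Fourier dictionary; for this dictionary the coherence is 1/√n, low enough that a 4-sparse vector is recovered exactly.

```python
import numpy as np

# Toy example (ours): recover a 4-sparse x from y = A x using OMP.
# A = [I | F] is the Dirac-Fourier dictionary with coherence 1/sqrt(n);
# for n = 64 the coherence-based guarantee covers s = 4 nonzeros.
n = 64
A = np.hstack([np.eye(n), np.fft.fft(np.eye(n)) / np.sqrt(n)])
x = np.zeros(2 * n, dtype=complex)
x[[3, 70, 90, 120]] = [2.0, -1.5, 1.0, 3.0]      # s = 4 nonzero entries
y = A @ x

def omp(A, y, s):
    """Greedy OMP: pick the most correlated column, re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(s):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat = np.zeros(A.shape[1], dtype=complex)
    x_hat[support] = coeffs
    return x_hat

print(np.allclose(omp(A, y, 4), x))               # exact recovery here
```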
Block-sparse signals exhibit an additional structure in the form of the nonzero coefficients occurring in clusters. Such signals arise in various applications, e.g., DNA microarrays, equalization of sparse communication channels, magnetoencephalography, etc. [5]. It has been shown that explicitly taking this structure into account can yield better reconstruction properties than treating the signal as being sparse in the conventional sense (thereby ignoring the additional structure in the problem) [7]. The properties of measurement matrices like RIP and coherence for conventional sparsity have been extended to the block-sparse case as well. Further, algorithms for block-sparse signal

reconstruction, such as the mixed ℓ2/ℓ1-norm optimization program, the block version of orthogonal matching pursuit termed block-OMP (BOMP), and the block version of matching pursuit, BMP, have also been proposed [7]. Note that conventional sparsity is referred to simply as sparsity to differentiate it from block sparsity, following [7]. In our earlier work [4], we considered an explicit error correction scheme based on CS concepts. When viewed from the CS framework, coding and error correction have the following key aspects: (1) the inputs, outputs and coding schemes (the generator and parity-check matrices) are over the real (in general, complex) field rather than over a Galois field; (2) the parity-check matrix H is related to the sensing matrix; (3) the syndrome vector ỹ is related to the sparse error pattern e through a formulation similar to (1) (that is, ỹ = He). Once the sparse error vector is estimated using CS sparse recovery algorithms, the corrupted vector can be corrected. A more detailed formulation for conventional sparse error correction is available in [4]. It is to be noted that complex-number codes can be advantageous in many situations (see [8] and the references therein). Continuing our efforts in error correction, in this paper we propose a burst error correction scheme based on the block-sparse signal recovery framework of [7]. The rationale is rather simple: when the errors are bursty in nature, they occur in clusters, and hence a direct mapping to block-sparse reconstruction is possible.
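To make the cluster picture concrete before the formal treatment of Section II, the following minimal sketch (our illustration; the block length b = 4 and the vector below are arbitrary choices) counts how many length-b blocks of an error vector carry nonzero energy:

```python
import numpy as np

def block_sparsity(e, b):
    """Number of length-b blocks of e with nonzero Euclidean norm."""
    blocks = e.reshape(-1, b)                    # L x b, assuming len(e) = L*b
    return int(np.sum(np.linalg.norm(blocks, axis=1) > 0))

# A length-24 error vector with two bursts, each confined to one block
e = np.zeros(24)
e[4:8] = [1.0, -2.0, 0.5, 3.0]
e[16:20] = [0.1, 0.0, 0.0, -0.1]
print(block_sparsity(e, 4))                      # -> 2 (block 2-sparse)
```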
Apart from identifying the link, we address the following important issues for any error correction scheme: (1) the encoding and decoding operations, explicitly bringing out the generator and parity-check matrices; (2) the error correction performance in terms of guaranteed error correction (that is, how many sparse blocks we can reconstruct for a given generator, and hence parity-check, matrix); (3) the associated rate of the codes; and (4) how the performance degrades when the number of error bursts exceeds the guaranteed correction limits. To the best of our knowledge, the results provided in this paper form the first examination in the direction of complex (or real) burst error correction codes based on the CS framework. Towards this goal, the paper is organized as follows. In Section II, the formulation of burst-error correction, including the relevant results for recovery performance guarantees and the associated remarks, is presented. The encoding and decoding issues are covered in Section III, including a discussion on a novel burst error correction code of rate 1/3. Some simulation results and remarks follow in Section IV. Conclusions of the work, including future directions, are provided in Section V.

II. BURST ERROR CORRECTION

Since the proposed burst error correction is extensively based on the block-sparsity framework and results of [7], the relevant formulation and results from [7] are captured next. Keeping the spirit of burst error correction in place, we use the symbols e (error vector), ỹ (syndrome) and H (parity-check matrix) straight away in place of the data vector, measurement vector and sensing matrix, respectively (of [7], [5]), to arrive at the requisite burst-error correction formulation.

A. Burst-Error Recovery Formulation

In any coding scheme, we start with an uncoded vector u (dimension K × 1) belonging to C^K (C being the complex field) and obtain the encoded vector c ∈ C^N using the generator matrix G:

c = Gu        (2)

where G is of dimension N × K with N > K (thus introducing redundancy). Further, G has to be of full rank to recover u from c. Now suppose that c is corrupted by an arbitrary vector e of dimension N × 1:

y = c + e = Gu + e        (3)

In this paper, as mentioned earlier, we are interested in burst errors, i.e., the non-zero values of e occur in clusters. Before proceeding further, it is worth recollecting that burst error correction is typically handled in existing systems using Reed-Solomon (RS) codes, Fire codes, interleaving coupled with random error correcting codes (like convolutional codes), product codes, concatenated codes, etc., depending upon the nature of the bursts. In the popular RS code, an appropriate number of bits is grouped to form a symbol, and the code is designed for the required number of symbol-error corrections. This correction in turn leads to the capability to correct burst errors (of bits). The codes we are considering follow a similar grouping of complex values (instead of bits) to form a block, and the idea is to restore the corrupted blocks, thus correcting burst errors. In this direction, it is useful to consider the codeword c as well as the error e as the concatenation of blocks of length b. Considering e to start with, we have

e = [e_1 ... e_b | e_{b+1} ... e_{2b} | ... | e_{N-b+1} ... e_N]^T = [e[1] e[2] ... e[L]]^T        (4)

where, in (4), e[j] denotes the j-th block and N = Lb. Block sparsity can now be formally defined following [7]. A vector e ∈ C^N is called block s-sparse if the number of blocks having non-zero Euclidean norm is at most s:

∑_{j=1}^{L} I(‖e[j]‖_2 > 0) ≤ s        (5)

where I(‖e[j]‖_2 > 0) = 1 if the Euclidean norm of the block e[j] is greater than 0 (i.e., ‖e[j]‖_2 > 0) and 0 otherwise.

Continuing from (3), our task is to reconstruct u from y. To accomplish this it is sufficient to reconstruct e, since y − e = Gu, from which u can be recovered owing to the full-rank property of G. By using a matrix H whose kernel contains the range of G, it is possible to recast (3) into the form of (1):

ỹ = H* y = H* (Gu + e) = H* e        (6)

since H* G = 0, where H* is the complex conjugate of H, of dimension M × N. Equation (6) is exactly the sparse reconstruction formulation we started with. Of course, here we reconstruct e*, the complex conjugate of the error vector, from the syndrome ỹ. Also, as remarked earlier, the parity-check matrix is in fact related to the sensing matrix. Thus, to facilitate reconstruction, the parity-check matrix has to satisfy the properties suggested by CS theory (say, RIP). Since the focus of this paper is on burst errors, or in other words on error vectors exhibiting block sparsity, we are interested in conditions which ensure block-sparse recovery through computationally efficient algorithms. In this direction we follow [7], and list the relevant results based on block coherence, an extension of the conventional coherence measure. For the decoding we adopt both BOMP and BMP, again suggested in [7]. A couple of remarks about these algorithms are provided in Section III.

B. Conditions for Burst-Error Recovery

We start with the definition of the block coherence μ_B of H, as the conditions on recovery are based on it. In this direction, it is useful to represent H as a concatenation of column blocks H[j] of size M × b [7]:

H = [h_1 ... h_b | h_{b+1} ... h_{2b} | ... | h_{N-b+1} ... h_N] = [H[1] H[2] ... H[L]]        (7)

where h_1, h_2, ..., h_N are the columns of H, each of unit Euclidean norm. It is to be noted that the dimension of H is M × N with M = N − K. Further, M = Rb, where R is an integer, as the unit of interest is a block. The block coherence of H is defined as [7]:

μ_B = max_{i, j ≠ i} (1/b) ρ(M[i, j])        (8)

where M[i, j] = H[i]^H H[j] is the ij-th b × b block of the N × N matrix M = H^H H. Further, in (8), ρ(A), the spectral norm of A, is the square root of the maximum eigenvalue of A^H A; that is, if λ_max is the maximum eigenvalue of the positive semidefinite matrix A^H A, then ρ(A) = √λ_max. Important properties of μ_B, and how some results involving it reduce to those of conventional coherence (for which b = 1), are available in [7]. Now, the key result for burst error correction follows. If the block coherence of H satisfies the condition

sb < (1/2)(μ_B^{-1} + b)        (9)

together with the block orthonormality

H[i]^H H[i] = I_b  ∀ i        (10)

then guaranteed recovery of the block s-sparse error vector is possible using the BOMP and BMP algorithms. With the knowledge of e, we can obtain the transmitted data as mentioned earlier. It is worth noting at this juncture that the left-hand side of (9), sb, is the conventional sparsity of the error vector. If we had considered error recovery based on conventional sparsity, without exploiting the block or burst structure, then recovery using OMP (not the block OMP) or MP would be governed by

s where







1 1  1 2

(11)

is the conventional coherence given by

  max hiH h j i , j i

(12)
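Both coherence measures are straightforward to compute numerically. The following sketch (our illustration, using an arbitrary random matrix with unit-norm columns and block length b = 2) evaluates (12) and (8) and checks that the block coherence does not exceed the conventional coherence:

```python
import numpy as np

rng = np.random.default_rng(0)
M, L, b = 8, 6, 2                                # arbitrary small dimensions
H = rng.standard_normal((M, L * b)) + 1j * rng.standard_normal((M, L * b))
H /= np.linalg.norm(H, axis=0)                   # unit Euclidean norm columns

Gram = H.conj().T @ H                            # the N x N matrix M = H^H H
mu = max(abs(Gram[i, j]) for i in range(L * b)
         for j in range(L * b) if i != j)        # conventional coherence (12)
mu_B = max(np.linalg.norm(Gram[i*b:(i+1)*b, j*b:(j+1)*b], 2) / b
           for i in range(L) for j in range(L) if i != j)   # block coherence (8)
print(mu_B <= mu)                                # mu_B never exceeds mu [7]
```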

Since μ_B ≤ μ (see [7] for a proof), exploiting the block structure by using BOMP or BMP can result in recovery at a potentially higher sparsity level [7]. So far, we have focused on the properties of the parity-check matrix towards guaranteed block-sparse recovery. But what is important in any coding scheme is the explicit construction of the parity-check and generator matrices. Equally important is the decoding. These aspects are covered in the following section.

III. ENCODING AND DECODING

A. Construction of Parity-Check and Generator Matrices

When viewed from the CS perspective, since H has to satisfy an appropriate property (say, coherence) for reconstruction by a suitable algorithm, preferably of low complexity, it is logical to construct H first and then construct its null space to arrive at the generator matrix G. For the purpose of block error recovery, the lower the block coherence of H, the better the performance. Based on a remark in [7], designing matrices that lead to significant improvements in the recovery thresholds, when exploiting block sparsity, may be a difficult problem. Keeping this in mind, we start with a matrix suggested in [7], of the form:

H = [Φ  Ψ]        (13)

where

Ψ = F ⊗ U_b        (14)

with F being the Discrete Fourier Transform (DFT) matrix of dimension R × R (recollect that R = M/b) and U_b any b × b unitary matrix. In (14), ⊗ is the well-known Kronecker product of matrices. Further, Φ = I_M (the identity matrix of dimension M). This choice of Φ and Ψ results in an H matrix with L = 2R and μ_B = 1/(b√R). The latter is the optimal block coherence based on an uncertainty relationship; see [7] for more details. Further, the number of blocks which can be corrected with guarantee is given by (note that H satisfies (10))

s* < (1/2)(√R + 1)        (15)

Thus, for a given s*, one can choose the value of R based on (15). Then, depending upon the burst length b, the value of M follows as M = Rb. The problem with this H matrix is that the resultant rate r is always 1/2, since (using N = Lb)

r = (N − M)/N = (2Rb − Rb)/(2Rb) = 1/2        (16)
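The construction (13)-(15) is easy to verify numerically. In the sketch below (our illustration with the arbitrary choices R = 16 and b = 4; U_b is taken as an orthonormal DCT matrix, as in our explicit construction), the block coherence of H = [I_M  F ⊗ U_b] comes out as 1/(b√R), a generator matrix is obtained from the null space of H*, and the guaranteed correction capability of (15) is read off:

```python
import numpy as np

b, R = 4, 16                                      # illustrative parameters
M = R * b
F = np.fft.fft(np.eye(R)) / np.sqrt(R)            # unitary R x R DFT matrix
n = np.arange(b)
U = np.sqrt(2.0 / b) * np.cos(np.pi * (2*n[None, :] + 1) * n[:, None] / (2*b))
U[0, :] /= np.sqrt(2)                             # orthonormal DCT-II as U_b
H = np.hstack([np.eye(M), np.kron(F, U)])         # (13)-(14), unit-norm columns

L = H.shape[1] // b                               # 2R column blocks
mu_B = max(np.linalg.norm(H[:, i*b:(i+1)*b].conj().T @ H[:, j*b:(j+1)*b], 2) / b
           for i in range(L) for j in range(L) if i != j)
print(abs(mu_B - 1 / (b * np.sqrt(R))) < 1e-10)   # optimal block coherence
print(int(np.ceil((np.sqrt(R) + 1) / 2)) - 1)     # largest s* satisfying (15)

# Generator matrix: columns spanning the null space of conj(H), so H* G = 0
_, _, Vh = np.linalg.svd(np.conj(H))
G = Vh[M:].conj().T                               # N x K with K = N - M
print(np.allclose(np.conj(H) @ G, 0))             # rate (N - M)/N = 1/2
```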

In our explicit construction of H we have used the Discrete Cosine Transform (DCT) matrix for U_b. Equally important is the construction of the generator matrix G to encode the data. The necessary generator matrix G can be arrived at using the null space of H. In the direction of arriving at a different code, we propose the following extension to (13). Starting from the H in (13), which is designed for a suitable block error correction capability s*, the idea is to obtain the new parity-check matrix H′ as below:

H′ = [ I  Ψ  O ]
     [ O  I  Ψ ]        (17)

where, in (17), the zero matrices are of suitable dimensions. Since each column of the parity-check matrix is to be normalized to unit Euclidean norm, we normalize the columns of H′ to arrive at the actual parity-check matrix H_1. This code has rate 1/3 and can correct more errors than the half-rate code H (for a given block length b). More specifically, the bound s* of H given by (9) gets modified to s* + Δ for H_1, where Δ > 0. In fact, when s* ≥ 1, Δ is always greater than zero, hence providing better burst error correction capability (see Section IV also). This can be proved as follows. Because of the normalization of columns in (17), keeping in mind the structure of Ψ, the columns of the middle portion of H′ get scaled by 1/√2. Hence, if μ_B is the block coherence of H, the block coherence μ_B1 of H_1 is given by

μ_B1 = μ_B/√2        (18)

In order to express the relationship between s_1 and s*, we replace μ_B1 and μ_B by 1/(2s_1 b − b) and 1/(2s* b − b) respectively, again noting that H_1 satisfies (10). These expressions are obtained by considering equality in (9). Then, from (18), it follows that

s_1 = (√2 (2s* − 1) + 1)/2        (19)

Since Δ = s_1 − s*, we can write

Δ_{s*} = s_1 − s* = (√2 (2s* − 1) + 1)/2 − s* = (√2 − 1)(2s* − 1)/2        (20)
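A quick numeric check of (19) and (20) (ours, for the first few values of s*) confirms that the two expressions for the gain agree and that the gain is positive and grows with s*:

```python
import numpy as np

# For each guaranteed capability s* of H: the bound s1 of H1 from (19)
# and the gain Delta from (20); the two expressions for Delta must agree.
for s_star in range(1, 6):
    s1 = (np.sqrt(2) * (2 * s_star - 1) + 1) / 2          # (19)
    delta = (np.sqrt(2) - 1) * (2 * s_star - 1) / 2       # (20)
    assert abs((s1 - s_star) - delta) < 1e-12 and delta > 0
    print(s_star, round(s1, 3), round(delta, 3))
```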

In the left-hand side of (20), we have written Δ as Δ_{s*} to say explicitly that Δ is a function of s*. From (20), it follows that Δ > 0 for all s* ≥ 1, which proves the better error correction capability mentioned earlier. It is useful to note at this juncture that the elegant structures of both H and H_1 can be exploited to facilitate computationally efficient syndrome computation. With the associated generator matrices also exhibiting good structure, efficient encoding can be achieved as well. The good structure of G also enables easy recovery of u from c in the decoding process.

B. Reconstruction using BOMP and BMP

Keeping computationally efficient decoding in mind, we considered both the BOMP and BMP algorithms in our

studies. The BOMP algorithm can correct s* block errors, in s* steps, if (9) and (10) are satisfied. Please see [7] for the main steps of BOMP. The least-squares minimization over the blocks selected so far in a given step (Equation (34) in [7]) can be implemented using the pseudo-inverse. This minimization step is altogether avoided in BMP, making it simpler than BOMP (see [7]); however, BMP takes a larger number of iterations to achieve the required decoding accuracy.

IV. SIMULATION RESULTS AND SOME REMARKS

The requisite parity-check and the associated generator matrices are constructed as discussed in the previous section. The matrices corresponding to a null space are obtained using the MATLAB function "null". It is to be noted that both the matrices H and H_1 (of (13) and (17), respectively) satisfy the orthonormality condition (10). Through elaborate simulations it is found that the codes corresponding to the said parity-check matrices correct s* and s* + Δ block errors, respectively, with guarantee. Both codes exhibit graceful degradation of performance when the number of bursts in error exceeds the guarantee limit of (9). One set of curves demonstrating this aspect is shown in Fig. 1. Each curve is the average over 1000 iterations of randomly generated block positions and values of the error vector, for the code parameters given in the caption. The decoding is done through BOMP. As expected, the rate-1/3 code performs much better than the half-rate code. This improved error correction performance of H_1 comes at the cost of more "measurements," or more redundancy, which in turn manifests as a lower rate (1/3) compared to that of H (rate 1/2).
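One run of the half-rate simulation loop can be sketched end-to-end in a few lines. The following is our own minimal illustration (R = 16, b = 4, two corrupted blocks, BOMP decoding; here the guarantee (9) gives s* = 2), not the exact MATLAB code used for Fig. 1:

```python
import numpy as np

rng = np.random.default_rng(1)
b, R = 4, 16
M = R * b                                        # 64 syndrome symbols
F = np.fft.fft(np.eye(R)) / np.sqrt(R)           # unitary DFT matrix
n = np.arange(b)
U = np.sqrt(2.0 / b) * np.cos(np.pi * (2*n[None, :] + 1) * n[:, None] / (2*b))
U[0, :] /= np.sqrt(2)                            # orthonormal DCT-II as U_b
H = np.hstack([np.eye(M), np.kron(F, U)])        # rate-1/2 parity check (13)
N = H.shape[1]                                   # 128, so K = N - M = 64
L = N // b

_, _, Vh = np.linalg.svd(np.conj(H))             # generator from the null
G = Vh[M:].conj().T                              # space of conj(H): H* G = 0

u = rng.standard_normal(N - M) + 1j * rng.standard_normal(N - M)
c = G @ u                                        # encoding (2)
e = np.zeros(N, dtype=complex)
for j in rng.choice(L, size=2, replace=False):   # s* = 2 corrupted blocks
    e[j*b:(j+1)*b] = rng.standard_normal(b) + 1j * rng.standard_normal(b)
y = c + e                                        # received word (3)
y_tilde = np.conj(H) @ y                         # syndrome (6) = conj(H) e

def bomp(H, z, b, n_iter):
    """Block OMP: pick the block most correlated with the residual,
    then least-squares over all selected blocks (Eq. (34) of [7])."""
    N, support, r = H.shape[1], [], z.copy()
    for _ in range(n_iter):
        scores = [np.linalg.norm(H[:, j*b:(j+1)*b].conj().T @ r)
                  for j in range(N // b)]
        support.append(int(np.argmax(scores)))
        cols = np.concatenate([np.arange(j*b, (j+1)*b) for j in support])
        xs, *_ = np.linalg.lstsq(H[:, cols], z, rcond=None)
        x = np.zeros(N, dtype=complex)
        x[cols] = xs
        r = z - H @ x
    return x

# conj(y_tilde) = H conj(e), so BOMP returns conj(e)
e_hat = np.conj(bomp(H, np.conj(y_tilde), b, 2))
u_hat, *_ = np.linalg.lstsq(G, y - e_hat, rcond=None)
print(np.allclose(e_hat, e), np.allclose(u_hat, u))
```

Since sb = 8 is below the threshold (1/2)(μ_B^{-1} + b) = 10 of (9), both the error vector and the message are recovered exactly in this run.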

Even though we have considered two codes, corresponding to (13) and (17), other matrices are also worth examining to arrive at different codes supporting different rates and code lengths. One possibility in this direction is to select an appropriate instance of randomly generated matrices (with elements independent and identically Gaussian distributed) for different L, M and b. Looking at the performance curves available in [7] for such randomly generated matrices, it can be inferred that most instances would give "useful" matrices, some of which could be frozen as candidate codes. But the quest for high-rate codes which can correct a larger number of bursts still remains.
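The random-construction idea can be explored directly. The sketch below (ours; the dimensions are arbitrary, and for simplicity it ignores the sub-coherence correction needed in (9) when the blocks of a random matrix are not exactly orthonormal) draws one Gaussian instance, normalizes its columns, and reads off the number of guaranteed correctable blocks implied by the block coherence:

```python
import numpy as np

rng = np.random.default_rng(7)
M, L, b = 48, 24, 4                              # arbitrary trial dimensions
H = rng.standard_normal((M, L * b))
H /= np.linalg.norm(H, axis=0)                   # unit-norm columns

mu_B = max(np.linalg.norm(H[:, i*b:(i+1)*b].T @ H[:, j*b:(j+1)*b], 2) / b
           for i in range(L) for j in range(L) if i != j)
# Largest s with s*b < (1/2)(1/mu_B + b), as in (9)
s_max = int(np.ceil((1 / mu_B + b) / (2 * b))) - 1
print(round(mu_B, 4), s_max)
```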

Figure 1. Performance of rate-1/2 and rate-1/3 codes with K = 80 and b = 8. Note that N = 160 for the rate-1/2 code and N = 240 for the rate-1/3 code.

V. CONCLUSIONS AND FUTURE DIRECTIONS

Based on the existing framework of block-sparse recovery suggested in [7], this paper presented a few preliminary results on a complex burst-error correction scheme. The requisite parity-check and generator matrices, together with their rates and guaranteed error correction capability when decoded with the computationally tractable block versions of the orthogonal matching pursuit and matching pursuit algorithms, were explicitly brought out. Based on the studies carried out, it appears that correcting one or two blocks of errors (with guarantee) can be achieved with high-rate codes. Efforts are under way to use the framework of expander graphs (our previous work [8]) with these single- or double-burst error correcting codes to arrive at more powerful burst error correcting codes. Another direction is to appropriately use the cyclic difference set based scheme of [4] to arrive at a burst-error correction scheme.

REFERENCES

[1] L. Jacques and P. Vandergheynst, "Compressed Sensing: When Sparsity Meets Sampling," in Optical and Digital Image Processing - Fundamentals and Applications, G. Cristóbal, P. Schelkens and H. Thienpont, Eds., Wiley-VCH, April 2011.
[2] P. Boufounos, G. Kutyniok and H. Rauhut, "Compressed Sensing for Fusion Frames," in Proc. SPIE Wavelets XIII, San Diego, Aug. 2009.
[3] M. A. Davenport, M. F. Duarte, Y. C. Eldar and G. Kutyniok, "Introduction to Compressed Sensing," in Compressed Sensing: Theory and Applications, Y. C. Eldar and G. Kutyniok, Eds., Cambridge University Press, 2011.
[4] B. S. Adiga, M. Girish Chandra and Shreenivas Sapre, "Guaranteed Error Correction Based on Fourier Compressive Sensing and Projective Geometry," accepted, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, May 2011.
[5] M. Stojnic, F. Parvaresh and B. Hassibi, "On the Reconstruction of Block-Sparse Signals with an Optimal Number of Measurements," IEEE Trans. Signal Processing, vol. 57, no. 8, Aug. 2009, pp. 3075-3085.
[6] Z. Charbiwala, S. Chakraborty, et al., "Compressive Oversampling for Robust Data Transmission in Sensor Networks," in Proc. IEEE INFOCOM, San Diego, March 2010.
[7] Y. C. Eldar, P. Kuppinger and H. Bölcskei, "Compressed Sensing of Block-Sparse Signals: Uncertainty Relations and Efficient Recovery," IEEE Trans. Signal Processing, vol. 58, no. 6, June 2010, pp. 3042-3054.
[8] B. S. Adiga, M. Girish Chandra and Swanand Kadhe, "A Class of Real Expander Codes Based on Projective-Geometrically Constructed Ramanujan Graphs," IJCSNS International Journal of Computer Science and Network Security, vol. 11, no. 1, January 2011, pp. 48-57.

Department of Information Engineering. The Chinese University of Hong Kong. Shatin, N,T. Hong Kong, China http://www.ie.cuhk.edu.hk/people/raymond.php.