GBCS: A Two-Step Compressive Sensing Reconstruction Based on Group Testing and Basis Pursuit

Ali Talari and Nazanin Rahnavard
Oklahoma State University, Stillwater, OK 74078
Emails: {ali.talari, nazanin.rahnavard}@okstate.edu

Abstract—Compressive sensing (CS) reconstruction algorithms can recover a signal from its undersampled random projections given that the signal is sparse or has a sparse representation in some appropriate transform domain. These algorithms fall into three main categories: group-testing based, greedy, and linear-programming based. The first two have lower complexity but also lower reconstruction performance than the third. In this paper, we propose group testing basis pursuit CS (GBCS), which exploits the low complexity of the former categories and the accuracy of the latter. First, we design an efficient group-testing based CS reconstruction algorithm, and then we integrate it with a regular basis pursuit (BP) CS reconstruction. We design and analyze GBCS and show that it surpasses existing algorithms for noiseless measurements and absolutely sparse signals. Further, we show that if the number of random projections is large enough, our group-testing reconstruction fully recovers the signal and the need for BP is eliminated.

This material is based upon work supported by the National Science Foundation under Grant No. ECCS-1056065.

I. INTRODUCTION

Assume x ∈ R^N is a compressible signal in some appropriate transform basis Ψ ∈ R^(N×N). Let s be the sparse representation of x in the Ψ domain, i.e., x = Ψs, with only K non-zero coefficients. Such a signal is called a K-sparse signal, and its sparsity rate is defined as S = K/N. Since the basis Ψ is not of primary concern here, without loss of generality we assume that it is the canonical basis, i.e., Ψ = I_(N×N), so that x itself is sparse with only K non-zero coefficients [1]. The key idea behind compressive sensing (CS) is that there is no need to sample all N values of x. Instead, we can recover x from only M ≪ N random projections, where M ≥ O(K log N) [2]. CS is composed of the following two key components.

Signal sampling (encoding): The random projections are generated by y = Φx, where Φ ∈ R^(M×N) is the measurement matrix with entries randomly selected from {+1, 0, −1} or N(0, 1) [2].

Signal recovery (decoding): Signal reconstruction can be done by finding the estimate x̂ from the system of linear equations y = Φx. This is an underdetermined system with infinitely many solutions. It has been shown that x̂, the estimate of x, is the solution to the following problem [2]:

x̂ = argmin ‖x‖₁  subject to  y = Φx,   (1)

where ‖x‖₁ = Σ_{i=1}^{N} |x_i|. Various solutions have been proposed for problem (1). Algorithms that solve (1) via linear programming, such as basis pursuit (BP) [3], have the highest complexity (O(N³)) and the highest reconstruction performance. On the other hand, iterative greedy algorithms such as orthogonal matching pursuit (OMP) [4] and group-testing based algorithms such as Sudocodes [5] exhibit lower complexity than BP, at the cost of lower reconstruction performance.
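As a concrete illustration (ours, not part of the original paper), problem (1) can be cast as a linear program by splitting x into its positive and negative parts. The following is a minimal sketch using NumPy and SciPy; the function name bp_decode and its parameters are our own choices.

    import numpy as np
    from scipy.optimize import linprog

    def bp_decode(Phi, y):
        """Basis pursuit: solve min ||x||_1 s.t. Phi @ x = y as a linear program.

        Write x = u - v with u, v >= 0, so ||x||_1 = sum(u) + sum(v) at the optimum.
        """
        M, N = Phi.shape
        c = np.ones(2 * N)                    # objective: sum(u) + sum(v)
        A_eq = np.hstack([Phi, -Phi])         # Phi @ (u - v) = y
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
        u, v = res.x[:N], res.x[N:]
        return u - v

For the sparse Φ used by GBCS, an off-the-shelf LP solver like this can also serve as the Phase-II decoder on the reduced problem described later.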

In this paper, we propose to concatenate a group-testing based CS reconstruction with a regular BP, which yields a two-phase decoding algorithm. For the first phase, we design a group-testing reconstruction algorithm similar to Sudocodes [5] that decodes a large fraction of the signal coefficients and considerably reduces the problem dimensions. In the second phase, the remaining undecoded coefficients are decoded employing a BP reconstruction.

This paper is organized as follows. In Section II, we review related work. In Section III, we propose and analyze GBCS. In Section IV, we evaluate the performance of GBCS and compare it to existing algorithms. Finally, Section V concludes the paper.

II. RELATED WORK

In the pioneering work [5], Sudocodes were proposed based on group-testing ideas. In Sudocodes, the encoder generates sparse random measurements employing a sparse Φ until it receives feedback from the decoder indicating the recovery of a certain ratio of the signal coefficients. Upon receiving the feedback, the encoder generates a predetermined number of measurements based on a dense Φ, which results in the exact recovery of the signal. Although Sudocodes have low coding/decoding complexity, a feedback channel may not always be available. Therefore, in contrast to Sudocodes, we design GBCS for a general setup with no available feedback and a fixed number of available measurements.

The authors of [6] proposed to solve (1) directly utilizing ℓ0 optimization. Since exact ℓ0 minimization is known to be NP-hard, they propose a simplified ℓ0 optimization with reduced complexity. Later we will see that GBCS outperforms this scheme.

In [1], the authors employ belief propagation in CS reconstruction and propose CSBP, which achieves outstanding performance. However, as we show later, GBCS slightly outperforms CSBP while providing a much lower decoding complexity.

III. OUR PROPOSED ALGORITHM: GBCS

It has been shown that if Φ is a sparse matrix, the complexity of CS encoding/decoding can be decreased to a great extent, although some reconstruction performance degradation may occur [1, 7]. Nevertheless, employing a sparse Φ allows us to design an efficient group-testing based reconstruction algorithm. Therefore, we employ a sparse Φ similar to [1, 5, 8, 9] and design the GBCS encoding and decoding algorithms.

A. GBCS Encoding

As mentioned earlier, in CS encoding measurements are obtained by y = Φx. Let Φ be a sparse measurement matrix with only L ≪ N non-zero entries in each row, selected from {+1, −1} with equal probabilities. These L non-zero entries are placed independently in each row, uniformly distributed over the columns. Clearly, the encoding procedure has very low complexity, since it consists of only L additions per measurement.

In [1], it has been shown that the signal coefficients and the measurements may be represented by the vertices of a bipartite graph, where signal coefficients are the variable nodes and measurements are the check nodes, as shown in Figure 1. The measurement yi has edges connected to the L signal coefficients that are added together to build up yi, referred to as its neighbors and denoted by N(yi). We refer to the number of edges connected to a coefficient or a measurement as its degree. Clearly, in GBCS encoding the degree of every measurement yi, i ∈ {1, 2, . . . , M}, is |N(yi)| = L, where |·| denotes the cardinality of a set. Later, we discuss our decoding and find the appropriate value of L. It is worth noting that [10] discusses the restricted isometry property (RIP) of such measurement matrices in detail.


Fig. 1. The bipartite graph representing measurements by check nodes (squares) and signal coefficients by variable nodes (circles).
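To make the encoder concrete, here is a minimal sketch (our own illustration; the function name gbcs_encode and all parameter values are ours) that draws a sparse Φ with row weight L and entries from {+1, −1}, then forms y = Φx:

    import numpy as np

    def gbcs_encode(x, M, L, rng):
        """Form M sparse measurements of x; each row of Phi has exactly L
        non-zeros drawn from {+1, -1}, placed uniformly over the columns."""
        N = x.size
        Phi = np.zeros((M, N))
        for i in range(M):
            cols = rng.choice(N, size=L, replace=False)    # L distinct positions
            Phi[i, cols] = rng.choice([1.0, -1.0], size=L)  # random +/-1 weights
        return Phi, Phi @ x

    rng = np.random.default_rng(0)
    N, K, M, L = 1000, 50, 300, 20          # illustrative values
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.normal(0, 10, K)  # a K-sparse signal
    Phi, y = gbcs_encode(x, M, L, rng)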

B. GBCS Decoding

As discussed earlier, GBCS decoding is comprised of two phases.

GBCS Decoding Phase-I: Assume M measurements are available at the decoder. Due to the randomness in the placement of non-zeros in the rows and columns of Φ, some coefficients may not be included in any measurement; hence, they have degree zero. At the beginning, we decode the value of such

coefficients to zero. This is the best we can do because their values have not been captured by any measurement, and due to the sparsity of the signal they are zero with high probability. Next, our iterative decoding begins. In our iterative algorithm, whenever a coefficient is estimated, its estimated value is subtracted from all the measurements that include it, and the degree of the corresponding measurements is decremented by one. The formal definition of our iterative decoding steps is as follows (a sketch of the Phase-I decoder follows the discussion below).

1) For every yj = 0, set xl = 0 for all xl ∈ N(yj).
2) For every yj with |N(yj)| = 1, i.e., N(yj) = {xk}, set xk = yj.
3) Find the set of measurements Y that have exactly the same value y.
   If there exist yi, yj ∈ Y with N(yi) ∩ N(yj) = {xk}, then set xk = y and xl = 0 for all xl ∈ {∪_{yj∈Y} N(yj)} \ {xk}.
   Otherwise, find yi, yj = argmin_{yi,yj∈Y} |N(yi) ∩ N(yj)| and set xl = 0 for all xl ∈ {∪_{yk∈Y} N(yk)} \ {N(yi) ∩ N(yj)}.
4) Iterate through steps 1 to 3. If full decoding is realized, there is no need to move to Phase-II.

In step 1 of our Phase-I decoding, we set all the coefficients connected to a zero-valued measurement to zero, as depicted in Figure 2(a). This is because the non-zero coefficients have real values, so it is almost impossible for the sum of several non-zero coefficients to be exactly zero. This step decodes many coefficients and decreases the degree of several other measurements using a single measurement. Therefore, it is important to have a sufficiently large number of zero-valued measurements so as to decode as many zero-valued coefficients as possible.

In step 2, we find measurements with a single neighbor, as illustrated in Figure 2(b). We can see that y3 has a single neighbor, hence the value of its unique neighbor x7 is simply equal to the value of y3. Such measurements appear as the decoding iterates and the degrees of the measurements gradually decrease.

Finally, in step 3 we find measurements with equal values. For instance, in Figure 2(c) we can see that y1 = y2 = y3 = y, hence Y = {y1, y2, y3}. Since the coefficients are real-valued, we may conclude that the measurements yj ∈ Y are generated by the same non-zero coefficients with high probability. The decoder then finds two measurements in Y that have the minimum number of common neighbors. Clearly, y1 and y3 have only one common neighbor, so the first case of step 3 applies: the common neighbor x3 is set to y and the remaining neighbors of Y are set to 0. Now assume y1 did not exist in Figure 2(c). In this case, the non-zero coefficient(s) that the measurements in Y have in common lie in the smallest set of common coefficients of any two measurements in Y. Therefore, the second case of step 3 would be performed, since y2 and y3 have two common neighbors. Consequently, only the non-common coefficients could be determined to be zero, and the values of x3 and x4 would be left undetermined in this setup.

Earlier, we mentioned that Phase-I of GBCS is very similar to the Phase-I decoding of Sudocodes [5], except for step 2. Despite the similarities, the main advantage of GBCS is that it does not need to send feedback after Phase-I decoding.
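The following is our own minimal sketch of the Phase-I decoder; for brevity it implements steps 1 and 2 with the residual-subtraction rule (step 3 can be added analogously). The function name gbcs_phase1 and the tolerance parameter are ours.

    import numpy as np

    def gbcs_phase1(Phi, y, tol=1e-9):
        """Sketch of GBCS Phase-I (steps 1 and 2 only; step 3 is analogous).

        Returns the partial estimate x_hat and a boolean mask of decoded entries.
        """
        Phi = np.asarray(Phi, dtype=float)
        y = np.asarray(y, dtype=float).copy()
        N = Phi.shape[1]
        x_hat = np.zeros(N)
        decoded = np.zeros(N, dtype=bool)
        decoded[~Phi.any(axis=0)] = True          # degree-zero coefficients -> 0
        progress = True
        while progress:
            progress = False
            for j in range(Phi.shape[0]):
                nbrs = np.flatnonzero((Phi[j] != 0) & ~decoded)
                if nbrs.size == 0:
                    continue
                if abs(y[j]) < tol:               # step 1: zero-valued measurement
                    decoded[nbrs] = True          # all its neighbors are zero
                    progress = True
                elif nbrs.size == 1:              # step 2: degree-1 measurement
                    k = nbrs[0]
                    x_hat[k] = y[j] / Phi[j, k]   # undo the +/-1 weight
                    decoded[k] = True
                    y -= Phi[:, k] * x_hat[k]     # subtract from all measurements
                    progress = True
        return x_hat, decoded

If decoded.all() holds after Phase-I, the signal is fully recovered and Phase-II is skipped; otherwise, the undecoded columns and the remaining measurements form the reduced problem handed to BP in Phase-II.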

Fig. 2. Illustration of Phase-I decoding steps: (a) decoding a zero-valued measurement; (b) decoding a degree-1 measurement; (c) decoding measurements with equal value.

GBCS Decoding Phase-II: Perform a conventional BP to estimate the remaining coefficients from the remaining measurements, which completes the decoding. Since the remaining measurements have very low degrees after Phase-I, the remaining Φ is extremely sparse. Therefore, the complexity of BP decoding in GBCS Phase-II is well below that of regular BP.

C. GBCS Analysis

Now that we have described GBCS decoding, we can determine the appropriate value of L used to generate Φ. For a constant row weight L, there are ML total edges in the bipartite graph, connected uniformly at random to the coefficients. Therefore, π_d, the probability that a coefficient has degree d, is

π_d = C(ML, d) (1/N)^d (1 − 1/N)^(ML−d),   (2)

where C(·,·) denotes the binomial coefficient. Asymptotically (for large enough N), (2) approaches a Poisson distribution with mean λ = ML/N, i.e., π_d = e^(−λ) λ^d / d!. We can see that if λ is too small, the probability π_0 = e^(−λ) increases considerably. Clearly, π_0 is the fraction of signal coefficients that are not connected to any measurement and are all decoded to zero at the beginning of GBCS decoding. Therefore, non-zero coefficients with degree zero are erroneously decoded to zero. The expected number of non-zero coefficients with degree zero is Sπ_0 N, with S = K/N being the sparsity of the signal. We aim to confine the expected number of non-zero coefficients left out due to having degree zero to less than rN. This gives a lower bound on the value of L, stated in the following lemma.

Lemma 1: Consider a GBCS encoding with a Φ of row weight L. To limit the expected number of undecodable non-zero coefficients to less than rN, we need to have

L ≥ (N/M) ln(S/r).   (3)

Proof: To limit the expected number of such non-zero coefficients to rN, we require Sπ_0 N ≤ rN. This gives S e^(−ML/N) ≤ r, which after simple algebraic manipulation yields (3).

Besides providing a lower bound on L, Lemma 1 shows that if M/N is kept constant, the lower bound on L is independent of N. Employing (3), we plot the lower bound on L versus r for various S with M/N ∈ {0.2, 0.3} in Figure 3.

Fig. 3. Lower bound on L versus r for various S and M/N ∈ {0.2, 0.3}.
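As a quick sanity check of (3) (our own worked example), take M/N = 0.3, S = 0.05, and r = 10^−3: L ≥ (1/0.3) ln(0.05/10^−3) = 3.33 × ln 50 ≈ 13.0, i.e., a row weight of at least 13, consistent with the range of values plotted in Figure 3.

    import numpy as np

    def lemma1_lower_bound(M_over_N, S, r):
        # L >= (N/M) * ln(S/r), from (3)
        return np.log(S / r) / M_over_N

    print(lemma1_lower_bound(0.3, 0.05, 1e-3))   # ~13.04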

From Figure 3, we may select an appropriate L based on the desired r. Note that although a larger L results in a smaller r, we may not assign very large values to L, because a larger L reduces the probability of generating zero-valued measurements. Clearly, the number of non-zero coefficients in a measurement follows a hypergeometric distribution. Let τ_{i,L} be the probability that i non-zero coefficients are included in a measurement of degree L. Then τ_{i,L} is given by

τ_{i,L} = C(K, i) C(N−K, L−i) / C(N, L).

Therefore, the probability that all neighbors of a measurement of degree L are zero coefficients is

τ_{0,L} = C(N−K, L) / C(N, L) = ∏_{i=0}^{L−1} (N−K−i)/(N−i).

This gives, in expectation, the fraction of the M measurements that are zero-valued; these are decoded in the first step of Phase-I. Therefore, on average, τ_{0,L} M zero-valued measurements recover Lτ_{0,L} M zero-valued, non-unique coefficients; i.e., a particular zero-valued coefficient may be recovered several times because it is connected to several zero-valued measurements. On the other hand, we have N ≫ L, hence we may assume the L non-zero entries are placed in each row of Φ with replacement, because with probability ∏_{i=1}^{L−1} (N−i)/N ≈ 1 the row weight of Φ would still be L. Consequently, the Lτ_{0,L} M recoveries are equivalent to Lτ_{0,L} M uniform random drawings, with replacement, from a set of N distinct objects.
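To make these quantities concrete, the following sketch (ours; the parameter values are illustrative) evaluates τ_{0,L} and the expected count of zero-valued measurements with SciPy:

    from scipy.stats import hypergeom

    N, K, M, L = 1000, 50, 300, 20
    # tau_{0,L}: probability a degree-L measurement touches no non-zero coefficient
    tau_0 = hypergeom.pmf(0, N, K, L)   # population N, K successes, L draws
    print(tau_0)                        # close to (1 - K/N)^L for L << N
    print(tau_0 * M)                    # expected number of zero-valued measurements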

However, we are interested in N_rec, the number of distinct zero-valued coefficients decoded in the first step of Phase-I. This problem has a close connection to a similar problem in rateless codes [11], which are modern forward error correction codes. In rateless coding, output symbols (encoded packets) are formed by performing binary addition on d randomly selected input symbols (source packets), where d is carefully selected for each output symbol [11]. Therefore, generating output symbols in rateless codes is similar to forming measurements in GBCS by the addition of L randomly selected signal coefficients. Consequently, the recovery of Lτ_{0,L} M zero-valued coefficients can be mapped to the recovery of N input symbols from Lτ_{0,L} M output symbols with d = 1 in rateless codes. This problem has been comprehensively investigated in [12–14], and the results therein may be employed to find N_rec in the first decoding step of GBCS Phase-I. From [13, Lemma 3], it can be shown that N_rec is given by

N_rec = (N − K) (1 − e^(−Lτ_{0,L} M / (N−K))),   (4)

or, equivalently,

N_rec / N = (1 − S) (1 − e^(−Lτ_{0,L} (M/N) / (1−S))).   (5)

On the other hand, we may approximate the hypergeometric distribution by a normal distribution for large values of N and a fixed S, since K ≫ L. Therefore, for large N, we have

τ_{0,L} ≈ Θ(−LS / √(LS(1 − S))),   (6)

where

Θ(x) = (1/√(2π)) ∫_{−∞}^{x} e^(−t²/2) dt = (1/2)(1 + erf(x/√2))

is the standard normal distribution function. Therefore, τ_{0,L} is independent of N when N is large enough. Interestingly, from (5) and (6) we conclude that for large enough N, say N ≥ 10^4, N_rec/N becomes independent of N and its value depends only on S, L, and M/N, as investigated in Figure 4.

From Figure 4, we can see that the performance of the first step of our iterative decoding in Phase-I depends on S. Further, we can see that for S = 0.05, approximately 85% of the zero-valued coefficients are recovered using only the τ_{0,L} fraction of the M measurements, and the rest are recovered in the subsequent iterations and steps. This is the main reason for our focus on this decoding step. On the other hand, Figure 4 shows that N_rec has a single maximum at a particular L, which results in the recovery of the highest ratio of zero-valued coefficients. The maximum of N_rec occurs because of the behavior of the term Lτ_{0,L}. Clearly, for L ≪ N we may write

τ_{0,L} = ∏_{i=0}^{L−1} (N−K−i)/(N−i) ≈ ((N−K)/N)^L.   (7)

Fig. 4. N_rec/N versus L for various M/N and S.
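The single maximum can be checked numerically; a small sketch (ours) evaluates (5) with the approximation (7) over a grid of L:

    import numpy as np

    def nrec_ratio(L, S, M_over_N):
        # N_rec/N from (5), with tau_{0,L} ~ (1 - S)^L from (7)
        tau0 = (1.0 - S) ** L
        return (1.0 - S) * (1.0 - np.exp(-L * tau0 * M_over_N / (1.0 - S)))

    L_grid = np.arange(1, 41)
    ratios = nrec_ratio(L_grid, S=0.05, M_over_N=0.3)
    print(L_grid[np.argmax(ratios)])   # peaks near 1/ln(1/(1-S)) ~ 19.5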

From (7), it can be inferred that although increasing L increases the number of coefficients recovered by each zero-valued measurement, it decreases the ratio of zero-valued measurements. In the following lemma, we give the value of L that results in the highest ratio of zero-valued coefficient recovery (considering only the first step of Phase-I decoding).

Lemma 2: The optimal value of L that results in the highest recovery of zero-valued coefficients N_rec in the first step of Phase-I of GBCS decoding is

L* = ⌊(ln(1/(1−S)))^(−1)⌉,

where ⌊·⌉ returns the closest integer to its argument.

Proof: To find L*, we set the first derivative of the N_rec expression (4) to zero. Substituting (7) and leaving out the constant factor (N − K), we have

d/dL [1 − e^(−(M/(N−K)) L ((N−K)/N)^L)] = (M/(N−K)) e^(−(M/(N−K)) L ((N−K)/N)^L) ((N−K)/N)^L [L ln((N−K)/N) + 1] = 0.

Since the exponential term and ((N−K)/N)^L are strictly positive, this requires L ln((N−K)/N) + 1 = 0. Therefore, L* (the optimal Φ row weight) is given by

L* = (−ln((N−K)/N))^(−1) = ⌊(ln(1/(1−S)))^(−1)⌉.

Lemma 2 shows that the optimal L depends solely on the value of S. Further, we should note that, depending on the value of r, Lemma 1 may yield a lower bound on L that is larger than L*. Therefore, the appropriate L should be chosen to satisfy the lower bound as well.

IV. PERFORMANCE EVALUATION OF GBCS

In this section, we evaluate the performance of GBCS and compare it to existing algorithms.

A. Comparison with Existing Algorithms

We empirically find that when the full GBCS decoder is considered, L = 2L* gives the best results. We compare the performance of GBCS with CSBP [1], basis pursuit (BP) [3], and ℓ0 optimization reconstruction [6] for N ∈ {500, 1000} and S ∈ {0.05, 0.1}. We evaluate the reconstruction error defined by e_R = ‖x − x̂‖₂² = Σ_{i=1}^{N} (x_i − x̂_i)², for an x whose non-zero coefficients are drawn from N(0, 100). Figures 5 and 6 compare the e_R of GBCS to that of existing work, and Table I compares the runtimes of the various algorithms.

Fig. 5. Comparison of GBCS's performance (reconstruction error e_R versus M) with CSBP, BP, and ℓ0 reconstructions for N = 1000: (a) S = 0.05; (b) S = 0.1.

Fig. 6. Comparison of GBCS's performance (reconstruction error e_R versus M) with CSBP, BP, and ℓ0 reconstructions for N = 500: (a) S = 0.05; (b) S = 0.1.
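A skeleton of this experiment (our own sketch; gbcs_decode stands in for the two-phase decoder and is hypothetical here):

    import numpy as np

    def reconstruction_error(x, x_hat):
        # e_R = ||x - x_hat||_2^2, as defined above
        return float(np.sum((x - x_hat) ** 2))

    rng = np.random.default_rng(1)
    N, S = 1000, 0.05
    K = int(S * N)
    x = np.zeros(N)
    # interpreting N(0, 100) as variance 100, i.e., standard deviation 10
    x[rng.choice(N, K, replace=False)] = rng.normal(0, 10, K)
    # x_hat = gbcs_decode(Phi, y)   # hypothetical two-phase decoder
    # print(reconstruction_error(x, x_hat))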

We emphasize that GBCS is designed for absolutely K-sparse signals and noiseless measurements. GBCS is therefore not defined for approximately K-sparse signals (where the small coefficients are close to zero rather than exactly zero) or for noisy measurements, whereas some other decoding schemes such as [1, 3, 6] can decode such signals.

TABLE I. RUNTIME COMPARISON OF DIFFERENT ALGORITHMS ON THE SAME PLATFORM, IN SECONDS.

Algorithm | N = 1000, S = 0.05 | N = 1000, S = 0.1 | N = 500, S = 0.05 | N = 500, S = 0.1
GBCS      | 3.86    | 16.35  | 0.33   | 9.94
ℓ0 [6]    | 6.46    | 27.13  | 0.66   | 2.50
BP [3]    | 14.69   | 31.47  | 8.71   | 7.07
CSBP [1]  | 1080.91 | 1283.0 | 393.73 | 386.36

Figures 5 and 6 show that GBCS outperforms all existing reconstruction methods. In addition, Table I shows that GBCS offers low encoding/decoding complexity besides surpassing the existing signal reconstruction algorithms in terms of achieving a lower e_R. For N = 500 and S = 0.1, the BP and ℓ0 reconstructions run faster than GBCS; however, for N ≥ 1000, GBCS runs faster than all other algorithms.

Next, we check which values of M satisfy Lemma 1 for L = 2L* and r ≤ 1/N, so that in expectation at most one non-zero coefficient has degree zero.

TABLE II. MINIMUM M TO SATISFY LEMMA 1 FOR L = 2L*.

N    | S    | L = 2L* | Mmin
500  | 0.05 | 39      | 41
500  | 0.1  | 18      | 109
1000 | 0.05 | 39      | 100
1000 | 0.1  | 18      | 256
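The Mmin column can be reproduced from Lemma 1 (our own check, rounding to the nearest integer):

    import numpy as np

    def m_min(N, S, L):
        # Lemma 1 solved for M with r = 1/N: M >= (N/L) * ln(S * N)
        return round(N / L * np.log(S * N))

    for N, S, L in [(500, 0.05, 39), (500, 0.1, 18), (1000, 0.05, 39), (1000, 0.1, 18)]:
        print(N, S, L, m_min(N, S, L))   # 41, 109, 100, 256, matching Table II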

Table II gives the minimum value of M such that, in expectation, all non-zero coefficients are captured by the measurements. Further, Figures 5 and 6 confirm that for these values of M, e_R decreases considerably.

B. Evaluation of Phase-I of GBCS Decoding

In this section, we study the performance of our proposed group-testing algorithm in Phase-I. We set N ∈ {500, 1000} and S ∈ {0.05, 0.1} and plot the fraction of coefficients that remain undecoded after Phase-I in Figure 7, employing the Monte Carlo method with averaging over 10^5 runs.

Fig. 7. (a) Fraction of signal coefficients remaining undecoded after Phase-I of GBCS, and (b) ratio of full Phase-I decodings, versus M.

From Figure 7, we can observe an interesting phenomenon: the fraction of coefficients remaining undecoded after Phase-I initially increases with M and then decreases considerably (note the log scale). This is because when M is small, many zero-valued coefficients have degree 0 and are correctly decoded to 0 at the beginning of GBCS decoding. As M slightly increases, the degree of most of these coefficients becomes larger than 0, yet they may not be decoded due to the lack of enough measurements. However, as M increases further, correct recovery is obtained for many coefficients. In addition, we can see that for a large enough M, Phase-I may realize the full decoding.

C. Discussion and Comparison with Sudocodes

Despite the similarities between the Phase-I decodings of Sudocodes and GBCS discussed in Section III-B, we should note that in Sudocodes the encoding is also comprised of two phases, with sparse and dense measurement matrices, separated by a feedback. Nevertheless, a feedback channel may not always be available, nor may the encoder be present at the time of decoding. In addition, employing a dense Φ in the second encoding phase does not allow utilizing Sudocodes in applications that require a sparse Φ, such as data collection in wireless sensor networks, where generating dense measurements maps to a huge number of transmissions.

Another important advantage of GBCS over Sudocodes is that in Sudocodes, after Phase-I of decoding, the remaining undecoded measurements are discarded and dense measurements are requested, even though the discarded measurements still contain information about the coefficients. In GBCS, by contrast, the same undecoded measurements are decoded in the second phase. This reduces the total number of required measurements for signal reconstruction.

V. CONCLUSION

In this paper, we proposed GBCS, a novel CS reconstruction algorithm that integrates two CS reconstructions. We designed a group-testing based CS reconstruction and concatenated it with a regular basis pursuit (BP) CS reconstruction to obtain both low complexity and outstanding performance. Our proposed decoding algorithm therefore has two phases. In the first phase, our iterative decoding algorithm recovers as many signal coefficients as possible. If the first phase of GBCS cannot fully recover the sparse signal, a second decoding phase is performed employing a regular BP. We analyzed our decoding and discussed its advantages. We observed that GBCS surpasses existing CS reconstruction algorithms in terms of reconstruction accuracy and complexity for noiseless measurements and absolutely sparse (not approximately sparse) signals.

REFERENCES

[1] D. Baron, S. Sarvotham, and R. Baraniuk, "Bayesian compressive sensing via belief propagation," IEEE Transactions on Signal Processing, vol. 58, no. 1, pp. 269–280, 2010.
[2] D. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, pp. 1289–1306, April 2006.
[3] S. Chen, D. Donoho, and M. Saunders, "Atomic decomposition by basis pursuit," SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1998.
[4] J. Tropp and A. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, pp. 4655–4666, Dec. 2007.
[5] S. Sarvotham, D. Baron, and R. Baraniuk, "Sudocodes: fast measurement and reconstruction of sparse signals," in IEEE International Symposium on Information Theory, pp. 2804–2808, July 2006.
[6] H. Mohimani, M. Babaie-Zadeh, and C. Jutten, "A fast approach for overcomplete sparse decomposition based on smoothed ℓ0 norm," IEEE Transactions on Signal Processing, vol. 57, pp. 289–301, January 2009.
[7] R. Berinde and P. Indyk, "Sparse recovery using sparse random matrices," Preprint, 2007.
[8] W. Xu and B. Hassibi, "Efficient compressive sensing with deterministic guarantees using expander graphs," in IEEE Information Theory Workshop (ITW'07), pp. 414–419, 2007.
[9] S. Jafarpour, W. Xu, B. Hassibi, and R. Calderbank, "Efficient and robust compressed sensing using optimized expander graphs," IEEE Transactions on Information Theory, vol. 55, pp. 4299–4308, Sept. 2009.
[10] W. Wang, M. Garofalakis, and K. Ramchandran, "Distributed sparse random projections for refinable approximation," in Proceedings of the 6th International Conference on Information Processing in Sensor Networks, pp. 331–339, ACM, 2007.
[11] M. Luby, "LT codes," in The 43rd Annual IEEE Symposium on Foundations of Computer Science, pp. 271–280, 2002.
[12] M. G. Luby, M. Mitzenmacher, and M. A. Shokrollahi, "Analysis of random processes via And-Or tree evaluation," in Proceedings of the Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 364–373, 1998.
[13] N. Rahnavard, B. Vellambi, and F. Fekri, "Rateless codes with unequal error protection property," IEEE Transactions on Information Theory, vol. 53, pp. 1521–1532, April 2007.
[14] N. Rahnavard and F. Fekri, "Generalization of rateless codes for unequal error protection and recovery time: Asymptotic analysis," in IEEE International Symposium on Information Theory, pp. 523–527, July 2006.

Therefore, the object-oriented image analysis for extraction of information from remote sensing ... Data Science Journal, Volume 6, Supplement, 4 August 2007.