Thresholding Orthogonal Multi Matching Pursuit

Entao Liu
Department of Mathematics, University of South Carolina

May 1, 2010

1 Introduction

Compressed sensing (CS) is a signal recovery method established in recent years. The fundamental work on CS was done by Donoho [8] and by Candès, Romberg, and Tao ([1], [2], and [3]). The CS approach recovers a sparse signal in a high-dimensional space from only a few measurements. We assume the unknown signal is x ∈ R^N. In the CS setting we are allowed to take m < N inner products between x and a collection of vectors {φ_i}_{i=1}^N. The measurements y_j = ⟨x, φ_j⟩, j = 1, …, m, can be rewritten with an m × N matrix Φ; let φ_j, j = 1, …, m, be the rows of Φ. Clearly,

y = Φx,

(1.1)

where y is the measurement vector. The goal of CS is to recover x from y and Φ. In general this is impossible: the signal has to be sparse. We denote the support of x by supp(x). If the cardinality of the support satisfies |supp(x)| = K, we say x is K-sparse. By now, most recovery algorithms are based on two ideas: one is ℓ1 minimization, as in [3]; the other involves matching pursuit. The simplest of the latter is orthogonal matching pursuit (OMP). The counterpart of OMP in the approximation setting is the orthogonal greedy algorithm (OGA), which is well studied. Many other algorithms using the matching pursuit


ideas have been proven to perform well, such as CoSaMP, subspace pursuit, multi matching pursuit, etc. In this paper, we provide a new CS recovery algorithm called Thresholding Orthogonal Multi Matching Pursuit (TOMMP). Unlike other CS algorithms based on the matching pursuit idea, TOMMP does not require knowledge of the sparsity of the unknown signal. Instead, we assume the sparse signal x satisfies a condition on the magnitudes of its components, namely |x_i| ≥ a for all i ∈ T, where T := supp(x). Φ is an m × N matrix, and we call y = Φx the observation vector, so y ∈ R^m.
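As a concrete illustration of this setup, the sketch below builds a K-sparse signal whose nonzero components have magnitude at least a, together with a Gaussian sensing matrix. The dimensions, the Gaussian ensemble, and the random signal are assumptions made for illustration only; the paper does not specify a particular ensemble.

```python
import numpy as np

# Illustrative setup (assumed values): a K-sparse x with |x_i| >= a on its
# support T, a Gaussian sensing matrix Phi, and the observation y = Phi x.
rng = np.random.default_rng(0)
N, m, K, a = 256, 64, 5, 1.0

x = np.zeros(N)
T = rng.choice(N, size=K, replace=False)              # support T = supp(x)
x[T] = rng.choice([-1.0, 1.0], size=K) * rng.uniform(a, 2 * a, size=K)

Phi = rng.standard_normal((m, N)) / np.sqrt(m)        # near-unit-norm columns
y = Phi @ x                                           # observation vector in R^m
```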

1.1 Notations

To avoid confusion, let us clarify the notation. For x ∈ R^N, x_Γ ∈ R^{|Γ|} is the vector whose entries are the entries of x with indices in Γ. For an m × N matrix Φ, Φ_Γ is the m × |Γ| submatrix of Φ with the columns indexed by Γ. Denote ‖x‖ := ⟨x, x⟩^{1/2}. Given y ∈ R^m, define the projection of y onto span(Φ_I) := span{φ_i : i ∈ I} as

P_I(y) := argmin_{y′ ∈ span(Φ_I)} ‖y − y′‖.

It is known that if Φ*_I Φ_I is invertible, then P_I(y) = Φ_I Φ†_I y, where Φ†_I = (Φ*_I Φ_I)^{−1} Φ*_I is the Moore–Penrose pseudoinverse of Φ_I and Φ* denotes the transpose of Φ.
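This identity is easy to check numerically. The sketch below (dimensions and the index set I are arbitrary illustrative choices) verifies that Φ_I Φ†_I y agrees with the least-squares minimizer over span(Φ_I), and that the residual is orthogonal to that span.

```python
import numpy as np

# Numerical check of P_I(y) = Phi_I Phi_I^dagger y (illustrative dimensions)
rng = np.random.default_rng(2)
m, N = 20, 50
Phi = rng.standard_normal((m, N))
I = [3, 17, 41]
Phi_I = Phi[:, I]
y = rng.standard_normal(m)

# Moore-Penrose pseudoinverse (Phi_I^* Phi_I)^{-1} Phi_I^*
pinv = np.linalg.inv(Phi_I.T @ Phi_I) @ Phi_I.T
proj = Phi_I @ (pinv @ y)

# the same projection from a generic least-squares solve
coef, *_ = np.linalg.lstsq(Phi_I, y, rcond=None)
proj_ls = Phi_I @ coef

resid = y - proj          # should be orthogonal to every column of Phi_I
```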

2 Algorithm

In this section we introduce a new CS recovery algorithm called Thresholding Orthogonal Multi Matching Pursuit with parameters s and a (TOMMP(s, a)). It is designed to recover signals satisfying the above assumptions.


Algorithm: TOMMP(s, a)
Input: a, Φ, y, and s
Initialization Stage: Let Λ^0 := Λ^0_a := ∅, r^0 := y, and j := 1.
Iteration Stage:
Step 1: Let m := 1.
Step 2: Set Λ^j := Λ^{j−1}_a ∪ {i_1, …, i_k}, where k = ms and the indices i_1, …, i_k satisfy

|⟨r^{j−1}, φ_{i_1}⟩| ≥ … ≥ |⟨r^{j−1}, φ_{i_k}⟩| ≥ sup_{φ ∈ Φ, φ ≠ φ_{i_ℓ}, ℓ = 1, …, ms} |⟨r^{j−1}, φ⟩|.

Then find x^j such that x^j_{Λ^j} := argmin_z ‖y − Φ_{Λ^j} z‖ and x^j_{[1,N]\Λ^j} = 0.
Step 3: Set Λ^j_a := {i ∈ [1, N] : i ∈ Λ^j and |x^j_i| ≥ a/2}.
Step 4: If Λ^j_a ⊆ Λ^{j−1}_a, update Λ^j_a := Λ^{j−1}_a, r^j := r^{j−1}, j := j + 1, and m := m + 1, then go to Step 2. Otherwise, update r^j := y − P_{Λ^j_a}(y). If r^j = 0, stop. Otherwise, update j := j + 1 and go to Step 1.
Output: If the algorithm stops at the ℓ-th iteration, the output x̂ satisfies x̂_{[1,N]\Λ^ℓ_a} = 0 and x̂_{Λ^ℓ_a} = Φ†_{Λ^ℓ_a} y.
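The steps above can be transcribed into code as follows. This is a sketch under assumptions: the floating-point tolerance standing in for the exact test r^j = 0, the iteration cap, and the demo dimensions are choices made here for illustration, not part of the algorithm.

```python
import numpy as np

def tommp(Phi, y, s, a, max_iter=200, tol=1e-9):
    """Sketch of TOMMP(s, a) following the steps above (tolerance and
    iteration cap are assumptions for this illustration)."""
    N = Phi.shape[1]
    Lambda_a = set()            # thresholded support Lambda^{j-1}_a
    r = y.astype(float).copy()
    mult = 1                    # the multiplier m from Step 1
    for _ in range(max_iter):
        # Step 2: merge Lambda_a with the mult*s columns most correlated
        # with the current residual, then least-squares fit on the merge.
        corr = np.abs(Phi.T @ r)
        top = np.argsort(corr)[::-1][: mult * s]
        Lam = sorted(Lambda_a | {int(i) for i in top})
        coef, *_ = np.linalg.lstsq(Phi[:, Lam], y, rcond=None)
        xj = np.zeros(N)
        xj[Lam] = coef
        # Step 3: keep only indices whose coefficient magnitude is >= a/2.
        Lam_a = {i for i in Lam if abs(xj[i]) >= a / 2}
        # Step 4: if no new reliable index appeared, enlarge the candidate set.
        if Lam_a <= Lambda_a:
            mult += 1
            continue
        Lambda_a = Lam_a
        idx = sorted(Lambda_a)
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        r = y - Phi[:, idx] @ coef       # r^j = y - P_{Lambda^j_a}(y)
        mult = 1                         # back to Step 1
        if np.linalg.norm(r) < tol:
            break
    # Output: least-squares coefficients on the final support.
    x_hat = np.zeros(N)
    idx = sorted(Lambda_a)
    if idx:
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        x_hat[idx] = coef
    return x_hat

# demo on a signal meeting the |x_i| >= a assumption (dimensions assumed)
rng = np.random.default_rng(3)
N, m, K, s, a = 128, 80, 4, 2, 1.0
x = np.zeros(N)
sup = rng.choice(N, size=K, replace=False)
x[sup] = rng.choice([-1.0, 1.0], size=K) * rng.uniform(a, 2 * a, size=K)
Phi = rng.standard_normal((m, N)) / np.sqrt(m)
x_hat = tommp(Phi, Phi @ x, s, a)
```

Note that, unlike OMP, the sparsity K is never passed to `tommp`; only the magnitude threshold a is needed, which is exactly the trade-off the paper describes.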

3 Analysis of the Algorithm

3.1 Analysis and Proofs

In order to analyze the performance of compressed sensing recovery algorithms, the restricted isometry property (RIP), introduced by Candès and Tao in [3], is used very often.

Definition 3.1 (Restricted Isometry Property). An m × N matrix Φ satisfies the restricted isometry property of order K with constant δ ∈ (0, 1), written RIP(K, δ) for simplicity, if

(1 − δ)‖x‖² ≤ ‖Φx‖² ≤ (1 + δ)‖x‖²

(3.1)

holds for every K-sparse x. Define δ_K := inf{δ : (3.1) holds for every K-sparse x}. A simple observation follows directly from the definition: if Φ satisfies both RIP(K, δ_K) and RIP(K′, δ_{K′}) with K ≤ K′, then δ_K ≤ δ_{K′}.
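Computing δ_K exactly is intractable, since the definition ranges over all C(N, K) supports, but a Monte Carlo lower bound is easy to sketch. The ensemble and sizes below are illustrative assumptions; the extreme singular values of each sampled submatrix bound δ on that support.

```python
import numpy as np

# Monte Carlo sketch (ensemble and sizes assumed): a LOWER bound on the
# RIP constant delta_K of a Gaussian Phi, obtained by checking randomly
# sampled K-column submatrices. The true delta_K takes a supremum over
# all C(N, K) supports, which is intractable to evaluate exactly.
rng = np.random.default_rng(4)
m, N, K, trials = 100, 256, 3, 200
Phi = rng.standard_normal((m, N)) / np.sqrt(m)   # columns have norm ~ 1

delta_est = 0.0
for _ in range(trials):
    S = rng.choice(N, size=K, replace=False)
    sv = np.linalg.svd(Phi[:, S], compute_uv=False)
    # on support S: (1 - delta) <= sigma_min^2 <= sigma_max^2 <= (1 + delta)
    delta_est = max(delta_est, abs(sv[0] ** 2 - 1), abs(sv[-1] ** 2 - 1))
```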

The following two lemmas present several inequalities that are used frequently in this paper.

Lemma 3.1. Suppose Φ satisfies RIP of order s. Then for any set of indices Γ with |Γ| ≤ s, any x ∈ R^N, and any y ∈ R^m, we have

(1 − δ_{|Γ|})‖x_Γ‖ ≤ ‖Φ*_Γ Φ_Γ x_Γ‖ ≤ (1 + δ_{|Γ|})‖x_Γ‖

(3.2)

and

‖Φ*_Γ y‖ ≤ (1 + δ_{|Γ|})^{1/2} ‖y‖.

(3.3)

Lemma 3.2. Assume Γ and Λ are two disjoint sets of indices. If Φ satisfies RIP of order |Γ ∪ Λ| with constant δ_{|Γ∪Λ|}, then for any vector x ∈ R^{|Λ|},

‖Φ*_Γ Φ_Λ x‖ ≤ δ_{|Γ∪Λ|} ‖x‖.

(3.4)

The proofs of the above two lemmas can be found in [5] and [13]. For convenience, in the rest of this paper we denote Ω(r, j) := {i_1, …, i_j} ⊂ [1, N] such that

|⟨r, φ_{i_1}⟩| ≥ … ≥ |⟨r, φ_{i_j}⟩| ≥ sup_{φ ∈ Φ, φ ≠ φ_{i_k}, k = 1, …, j} |⟨r, φ⟩|.

Lemma 3.3. Assume Φ satisfies RIP of order js + K with constant δ := δ_{js+K} < 1/2, where js ≥ K. Then Ω(y, js) ∩ T ≠ ∅.

Proof. We prove by contradiction. Set Ω := Ω(y, js) and assume Ω ∩ T = ∅. Since Ω maximizes the inner products and js ≥ K, by (3.2) we have

‖Φ*_Ω y‖ ≥ ‖Φ*_T y‖ = ‖Φ*_T Φ_T x_T‖ ≥ (1 − δ_K)‖x‖ ≥ (1 − δ)‖x‖.

In addition, since Ω ∩ T = ∅, Lemma 3.2 gives ‖Φ*_Ω y‖ = ‖Φ*_Ω Φ_T x_T‖ ≤ δ_{js+K}‖x‖. Thus

δ‖x‖ ≥ (1 − δ)‖x‖.

Apparently, if δ < 1/2 the above inequality yields a contradiction. This implies Ω ∩ T ≠ ∅ for δ < 1/2. □

The lemma tells us that, under this RIP condition, once j is large enough, Ω(y, js) is guaranteed to include some correct indices. Let us then consider the case where the true support of x is partially known. Assume a subset of the true support, say Γ ⊂ T, is given, and let r = y − P_Γ(y). Denote Ω := Ω(r, L) and Ω′ := Ω′(r, L, Γ) := Ω ∪ Γ. Then define Ω′_a := Ω′_a(r, L, Γ) as follows. First, find w ∈ R^N such that w_{Ω′} = argmin_z ‖y − Φ_{Ω′} z‖ and w_{[1,N]\Ω′} = 0. Next, define Ω′_a := {i ∈ [1, N] : i ∈ Ω′ and |w_i| ≥ a/2}.
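In code, Ω(r, j) is simply the set of indices of the j columns with the largest inner products against r in magnitude. A minimal sketch, with a deterministic check using Φ = I so the inner products are just the entries of r:

```python
import numpy as np

def omega(Phi, r, j):
    """Omega(r, j): the indices of the j columns of Phi whose inner
    products with r are largest in magnitude."""
    corr = np.abs(Phi.T @ r)
    return {int(i) for i in np.argsort(corr)[::-1][:j]}

# deterministic check with Phi = I: the columns are the standard basis,
# so Omega(r, 2) is just the positions of the two largest |r_i|
Phi = np.eye(4)
r = np.array([0.5, 3.0, -2.0, 0.1])
Om = omega(Phi, r, 2)
```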

Lemma 3.4. Assume Φ satisfies RIP of order L + K with constant δ := δ_{L+K} < b, and that ‖x‖ < ((1 − b)/(2b)) a. If Γ ⊂ T and r = y − P_Γ(y), then Ω′_a(r, L, Γ) = Ω′(r, L, Γ) ∩ T.

Proof. It is sufficient to consider two cases. First, denote Ω := Ω(r, L) and assume Ω ∩ T = ∅. From the definition of Ω′ := Ω′(r, L, Γ), we obtain

Φ_{Ω′} w_{Ω′} = P_{Ω′}(y) = P_{Ω′}(Φ_T x_T) = P_{Ω′}(Φ_Γ x_Γ + Φ_{T\Γ} x_{T\Γ}) = Φ_Γ x_Γ + Φ_{Ω′} Φ†_{Ω′} Φ_{T\Γ} x_{T\Γ} = Φ_Γ x_Γ + Φ_{Ω′} x_p,

where x_p = Φ†_{Ω′} Φ_{T\Γ} x_{T\Γ}. Therefore, by Lemma 3.1 and Lemma 3.2,

‖x_p‖ = ‖(Φ*_{Ω′} Φ_{Ω′})^{−1} Φ*_{Ω′} Φ_{T\Γ} x_{T\Γ}‖ ≤ (δ_{L+K}/(1 − δ_{L+K})) ‖x_{T\Γ}‖ ≤ (δ/(1 − δ)) ‖x_{T\Γ}‖.

Since δ < b, a simple calculation shows that if ‖x_{T\Γ}‖ ≤ ‖x‖ < a(1 − b)/(2b), then ‖x_p‖ < a/2, and clearly ‖x_p‖_∞ ≤ ‖x_p‖ < a/2. By assumption, the magnitude of every component of x_Γ is at least a. Therefore, the components of w supported on Γ have magnitudes at least a/2, while the components supported on Ω have magnitudes less than a/2. This implies Ω′_a = Γ = Ω′ ∩ T.

In the second case, denote T′ := Ω ∩ T and assume T′ ≠ ∅. Furthermore, denote T″ := T \ (Γ ∪ T′). It is clear that

Φ_{Ω′} w_{Ω′} = P_{Ω′}(y) = P_{Ω′}(Φ_T x_T) = P_{Ω′}(Φ_Γ x_Γ + Φ_{T′} x_{T′} + Φ_{T″} x_{T″}) = Φ_Γ x_Γ + Φ_{T′} x_{T′} + Φ_{Ω′} Φ†_{Ω′} Φ_{T″} x_{T″} = Φ_Γ x_Γ + Φ_{T′} x_{T′} + Φ_{Ω′} x̄_p,

where x̄_p = Φ†_{Ω′} Φ_{T″} x_{T″}. Therefore, by Lemma 3.1 and Lemma 3.2,

‖x̄_p‖ = ‖(Φ*_{Ω′} Φ_{Ω′})^{−1} Φ*_{Ω′} Φ_{T″} x_{T″}‖ ≤ (δ_{L+K}/(1 − δ_{L+K})) ‖x_{T″}‖ ≤ (δ/(1 − δ)) ‖x_{T″}‖.

Since δ < b, a simple calculation shows that if ‖x_{T″}‖ ≤ ‖x‖ < a(1 − b)/(2b), then ‖x̄_p‖ < a/2, and clearly ‖x̄_p‖_∞ ≤ ‖x̄_p‖ < a/2. By assumption, the magnitude of every component of x_Γ and x_{T′} is at least a. Therefore, the components of w supported on Γ and T′ have magnitudes at least a/2, while the components supported on Ω \ T′ have magnitudes less than a/2. This implies Ω′_a = T′ ∪ Γ = Ω′ ∩ T, which completes the proof. □

Theorem 3.1. Assume the K-sparse signal x ∈ R^N satisfies |x_i| ≥ a for all i ∈ supp(x). Let L be the smallest integer such that sL ≥ K. If Φ satisfies RIP of order sL + K with δ := δ_{sL+K} < b < 1/2, then TOMMP(s, a) recovers exactly every such x with ‖x‖ < ((1 − b)/(2b)) a.

Proof. We prove by induction. At the very beginning, initialize the estimated support Γ := ∅ and the residual r := y. Without loss of generality, assume Ω(r, js) ∩ T = ∅ for j = 1, …, n − 1 and Ω(r, ns) ∩ T ≠ ∅. Using Lemma 3.3, we claim that n ≤ L. Applying Lemma 3.4, we derive Ω′_a(r, js, Γ) = Ω′(r, js, Γ) ∩ T = ∅ for j = 1, …, n − 1 and Ω′_a(r, ns, Γ) = Ω(r, ns) ∩ T. This implies that Λ^j_a = ∅ for j = 1, …, n − 1 and Λ^n_a = Λ^n ∩ T ≠ ∅. Then let r^n := y − P_{Λ^n_a}(y). If r^n = 0, the proof is finished. Otherwise, since Λ^n_a is a subset of T, the remaining signal is at most K-sparse, and we can repeat the argument above; the only modification needed is to update Γ := Λ^n_a and r := r^n. Each repetition adds a nonempty subset of the true support T to the estimate. Eventually we obtain all of T; equivalently, we recover the signal x. □

The new algorithm provided in this paper is a greedy algorithm. Besides the observation vector y and the sensing matrix Φ, it requires knowledge of the threshold a on the magnitudes of x. By contrast, many other known greedy algorithms need the sparsity K to run. The new algorithm requires different information about x and still guarantees exact recovery under suitable conditions. It widens the applicability of greedy algorithms to cases where the exact sparsity is not available.

References

[1] Candès, E., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52(2), 489–509 (2006)
[2] Candès, E., Romberg, J., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)
[3] Candès, E., Tao, T.: Decoding by linear programming. IEEE Trans. Inform. Theory 51(12), 4203–4215 (2005)
[4] Cohen, A., Dahmen, W., DeVore, R.A.: Compressed sensing and k-term approximation. J. Amer. Math. Soc. 22, 211–231 (2009)
[5] Dai, W., Milenkovic, O.: Subspace pursuit for compressive sensing: closing the gap between performance and complexity. IEEE Trans. Inform. Theory 55(5), 2230–2249 (2009)
[6] Davenport, M., Wakin, M.: Analysis of orthogonal matching pursuit using the restricted isometry property. Preprint (2009)
[7] DeVore, R.A., Temlyakov, V.N.: Some remarks on greedy algorithms. Adv. Comput. Math. 5, 173–187 (1996)
[8] Donoho, D.L.: Compressed sensing. IEEE Trans. Inform. Theory 52(4), 1289–1306 (2006)
[9] Donoho, D.L., Elad, M., Temlyakov, V.N.: Stable recovery of sparse overcomplete representations in the presence of noise. IMI Preprint (2004)
[10] Donoho, D.L., Elad, M., Temlyakov, V.N.: Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory 52, 6–18 (2006)
[11] Donoho, D.L., Elad, M., Temlyakov, V.N.: On Lebesgue-type inequalities for greedy approximation. J. Approx. Theory 147(2), 185–195 (2007)
[12] Gilbert, A.C., Muthukrishnan, S., Strauss, M.J.: Approximation of functions over redundant dictionaries using coherence. In: Proc. 14th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 243–252 (2002)
[13] Needell, D., Tropp, J.: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009)
[14] Temlyakov, V.N.: Weak greedy algorithms. Adv. Comput. Math. 12, 213–227 (2000)
[15] Temlyakov, V.N.: Greedy approximation. Acta Numerica, pp. 235–409 (2008)
[16] Tropp, J.: Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inform. Theory 50(10), 2231–2242 (2004)
[17] Tropp, J., Gilbert, A.: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inform. Theory 53(12), 4655–4666 (2007)

