A New Estimate of Restricted Isometry Constants for Sparse Solutions

Ming-Jun Lai∗ and Louis Y. Liu†

January 12, 2011

Abstract

We show that as long as the restricted isometry constant $\delta_{2k} < 1/2$, there exists a value $q_0 \in (0,1]$ such that for any $q < q_0$, each minimizer of the nonconvex $\ell_q$ minimization for the sparse solution of any underdetermined linear system is the sparse solution.

1 Introduction

∗[email protected]. This author is partly supported by the National Science Foundation under grant DMS-0713807. Department of Mathematics, The University of Georgia, Athens, GA 30602.
†[email protected], Department of Mathematics, Marlboro College, Marlboro, Vermont 05344.

Let us start with one of the basic problems in compressed sensing: seek the minimizer $x^* \in \mathbb{R}^n$ solving
\[ \min\{\|\tilde x\|_0 : \Phi\tilde x = b\}, \tag{1} \]
where $\|\tilde x\|_0$ stands for the number of nonzero entries of the vector $\tilde x$ and $\Phi$ is a matrix of size $m \times n$ with $m \ll n$. That is, the purpose of the research is to find a solution of the under-determined linear system $\Phi x = b$ with $\|x\|_0$ as small as possible. A key concept for describing the solution of (1) is the restricted isometry constant of a matrix $\Phi$, introduced in [5].

Definition 1 For each integer $k = 1, 2, \dots$, let $\delta_k$ be the smallest number such that
\[ (1-\delta_k)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1+\delta_k)\|x\|_2^2 \tag{2} \]


holds for all $k$-sparse vectors $x \in \mathbb{R}^n$ with $\|x\|_0 \le k$, where $\|x\|_2$ is the standard $\ell_2$ norm of the vector $x$. The number $\delta_k$ is called the restricted isometry constant.

One of the standard approaches to find the minimizer $x^*$ is to seek the minimizer $x_1 \in \mathbb{R}^n$ solving
\[ \min\{\|\tilde x\|_1 : \Phi\tilde x = b,\ \tilde x \in \mathbb{R}^n\}, \tag{3} \]
where $\|\tilde x\|_1$ is the standard $\ell_1$ norm of the vector $\tilde x$.

Suppose that $\|x^*\|_0 = k$. Let $T_0 \subset \{1, 2, \dots, n\}$ be the subset of indices of the $k$ largest entries of $x^*$. For any vector $x$, let $x_{T_0}$ denote the vector whose entries agree with those of $x$ at the indices in $T_0$ and are zero elsewhere. The following result has been established by many researchers in the literature:

Theorem 1 (Noiseless Recovery) For appropriate $\delta_{2k} > 0$, the solution $x_1$ of the minimization problem (3) satisfies
\[ \|x - x_1\|_2 \le C_0\,k^{-1/2}\,\|x - x_{T_0}\|_1 \tag{4} \]

for any $x$ with $\Phi x = b$, where $C_0$ is a positive constant dependent on $\delta_{2k}$. In particular, if $x$ is $k$-sparse, the recovery is exact.

It is known from Candès, 2008 [4] that the above result holds when $\delta_{2k} < \sqrt 2 - 1 \approx 0.4142$. This condition was improved in Foucart and Lai, 2009 [11] to $\delta_{2k} < 2/(3+\sqrt 2) \approx 0.4531$. Subsequently, the condition was further improved in Cai, Wang, and Xu, 2010 [2], for special $k$ (a multiple of 4), to $\delta_{2k} < 2/(2+\sqrt 5) \approx 0.4721$, as well as in Foucart, 2010 [10] to
\[ \delta_{2k} < \frac{3}{4+\sqrt 6} \approx 0.4652 \quad\text{and, for large } k,\quad \delta_{2k} < \frac{4}{6+\sqrt 6} \approx 0.4734. \]

Recently, Li and Mo proposed another approach in [15] and showed that inequality (4) holds as long as $\delta_{2k} < 0.4931$.

The problem (3) was extended in [12] by seeking, for a number $q \in (0,1)$, a minimizer $x_q \in \mathbb{R}^n$ which solves
\[ \min\{\|\tilde x\|_q^q : \Phi\tilde x = b,\ \tilde x \in \mathbb{R}^n\}, \tag{5} \]
where $\|\tilde x\|_q$ is the standard $\ell_q$ quasi-norm of the vector $\tilde x$. See also [7], [11], [6], [14] for studies of the nonconvex $\ell_q$ minimization problem (5). In Foucart and Lai, 2009 [11], the following result was established.

Theorem 2 Suppose that $\delta_{2k} < 2(3-\sqrt 2)/7 \approx 0.4531$. Then for any $q \in (0,1]$,
\[ \|x_q - x\|_q \le C_0\,\|x - x_{T_0}\|_q \tag{6} \]
for any $x$ with $\Phi x = b$, where $C_0$ is a positive constant dependent on $\delta_{2k}$. In particular, if $x = x^*$ is $k$-sparse, the recovery is exact.

To improve the result in Theorem 2, we prove the following main result in this paper.

Theorem 3 Suppose that $\delta_{2k} < 1/2$. There exists a number $q_0 \in (0,1]$ such that for any $q < q_0$, each minimizer $x_q$ of the $\ell_q$ minimization (5) is the sparse solution of (1). Furthermore, there exists a positive constant $C_q$, dependent on $q$ and $\delta_{2k}$, such that for any $x \in \mathbb{R}^n$ with $\Phi x = b$,
\[ \|x - x_q\|_q \le C_q\,\|x - x_{T_0}\|_q, \]
where $T_0$ is the index set of the $k$ nonzero entries of the sparse solution $x^*$.

Under the assumption that the $\ell_q$ minimization (5) can be computed, the sensing matrix $\Phi$ thus needs only a more relaxed condition on its restricted isometry constant to recover the sparse solution than the conditions listed above for Theorem 1. For simplicity, we only discuss the sparse solution for noiseless recovery in this paper; we leave the discussion of noisy recovery to the interested reader. After we establish an elementary inequality in the preliminary section §2, we prove our main result in §3. Finally, we give a few remarks in §4.

2 Preliminary Results

Let $x = (x_1, \dots, x_n)^T$ be a vector in $\mathbb{R}^n$. We use $\|x\|_p$ for the standard norm of $x$ for any $p \ge 1$ and $\|x\|_q$ for the standard quasi-norm when $q < 1$. Recall that we have the standard inequality
\[ \|x\|_1 \le \sqrt n\,\|x\|_2 \quad\text{or}\quad \|x\|_2 \ge \frac{\|x\|_1}{\sqrt n} \tag{7} \]
by the well-known Cauchy–Schwarz inequality. A converse of the above inequality is $\|x\|_2 \le \|x\|_1$, which can be seen directly after dividing each entry by $\|x\|_\infty = \max\{|x_i| : i = 1, \dots, n\}$, so that every entry has modulus at most one and $x_i^2 \le |x_i|$. Recently, Cai, Wang and Xu proved the following interesting inequality in [3].

Lemma 1 (Cai, Wang and Xu '10) For any $x \in \mathbb{R}^n$,
\[ \|x\|_2 \le \frac{\|x\|_1}{\sqrt n} + \frac{\sqrt n}{4}\Big(\max_{1\le i\le n}|x_i| - \min_{1\le i\le n}|x_i|\Big). \tag{8} \]
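As an illustration (not part of the original argument), inequality (8) is easy to spot-check numerically; the following Python sketch uses only the statement above:

```python
import math
import random

# Illustrative numerical check of the Cai-Wang-Xu inequality (8):
#   ||x||_2 <= ||x||_1 / sqrt(n) + (sqrt(n)/4) * (max|x_i| - min|x_i|).
def cwx_slack(x):
    """Right-hand side of (8) minus the left-hand side (>= 0 iff (8) holds)."""
    a = [abs(v) for v in x]
    n = len(a)
    rhs = sum(a) / math.sqrt(n) + math.sqrt(n) / 4.0 * (max(a) - min(a))
    return rhs - math.sqrt(sum(v * v for v in a))

random.seed(0)
for _ in range(1000):
    x = [random.gauss(0.0, 1.0) for _ in range(random.randint(1, 50))]
    assert cwx_slack(x) >= -1e-12
```

Note that (8) is attained with equality for constant vectors, where both the $\ell_1/\sqrt n$ term and the oscillation term are tight.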

We now extend this inequality to the setting of the quasi-norm $\|x\|_q$ with $q \in (0,1)$. It is easy to see that for $0 < q < 1$,
\[ \|x\|_2 \ge \frac{\|x\|_q}{n^{1/q-1/2}} \tag{9} \]
by using Hölder's inequality. The converse inequality
\[ \|x\|_2 \le \|x\|_q, \qquad \forall x \in \mathbb{R}^n, \]
is often used in the literature. Motivated by the new inequality (8), we would like to find a converse of the inequality (9).

Lemma 2 Fix $0 < q < 1$. For any $x \in \mathbb{R}^n$,
\[ \|x\|_2 \le \frac{\|x\|_q}{n^{1/q-1/2}} + \sqrt n\Big(\max_{1\le i\le n}|x_i| - \min_{1\le i\le n}|x_i|\Big). \tag{10} \]
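As with (8), the two-sided estimate formed by (9) and (10) can be spot-checked numerically; the Python sketch below (illustrative, not from the paper) samples random vectors for several values of $q$:

```python
import math
import random

# Illustrative check of inequality (9) and its converse (10): for 0 < q < 1,
#   ||x||_q / n^(1/q-1/2) <= ||x||_2 <= ||x||_q / n^(1/q-1/2) + sqrt(n)(max|x_i| - min|x_i|).
def lemma2_bounds(x, q):
    a = [abs(v) for v in x]
    n = len(a)
    low = sum(v ** q for v in a) ** (1.0 / q) / n ** (1.0 / q - 0.5)
    high = low + math.sqrt(n) * (max(a) - min(a))
    l2 = math.sqrt(sum(v * v for v in a))
    return low, l2, high

random.seed(2)
for q in (0.2, 0.5, 0.9):
    for _ in range(300):
        x = [random.gauss(0.0, 1.0) for _ in range(random.randint(1, 40))]
        low, mid, high = lemma2_bounds(x, q)
        assert low <= mid + 1e-9 and mid <= high + 1e-9
```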

Proof. Without loss of generality, we may assume that $x_1 \ge x_2 \ge \dots \ge x_n \ge 0$ and that not all $x_i$ are equal. Let
\[ f(x) = \|x\|_2 - \frac{\|x\|_q}{n^{1/q-1/2}}. \]
Let us fix $x_1$ and find an upper bound for $f(x)$. Note that
\[ \frac{\partial f}{\partial x_i} = \frac{x_i}{\|x\|_2} - \frac{\|x\|_q^{1-q}\,x_i^{q-1}}{n^{1/q-1/2}} \]
is an increasing function of $x_i$. Indeed, it is easy to see that both functions
\[ \frac{\|x\|_2}{x_i} = \Big(\sum_{j=1}^n\big(\tfrac{x_j}{x_i}\big)^2\Big)^{1/2} \quad\text{and}\quad \|x\|_q^{1-q}\,x_i^{q-1} = \Big(\sum_{j=1}^n\big(\tfrac{x_j}{x_i}\big)^q\Big)^{(1-q)/q} \]
are decreasing in $x_i$. Thus, $\partial f/\partial x_i$ is an increasing function of $x_i$, and it follows that $f(x)$ is convex as a function of each $x_i$, $i = 2, \dots, n-1$. The maximum is therefore achieved at an endpoint, $x_i = x_{i-1}$ or $x_i = x_{i+1}$. It follows that when $f$ achieves its maximum, $x$ must be of the form $x_1 = x_2 = \dots = x_k$ and $x_{k+1} = \dots = x_n$ for some $1 \le k < n$. Thus,
\[ f(x) = \sqrt{k(x_1^2 - x_n^2) + n x_n^2} - \frac{\big(k(x_1^q - x_n^q) + n x_n^q\big)^{1/q}}{n^{1/q-1/2}}. \]
It is easy to see that
\[ f(x) \le \sqrt{n(x_1^2 - x_n^2) + n x_n^2} - \frac{(n x_n^q)^{1/q}}{n^{1/q-1/2}} = \sqrt n\,x_1 - \sqrt n\,x_n = \sqrt n\,(x_1 - x_n), \]
which gives (10).

To find a better upper bound for some $q < 1$, see Remark 4.4. One can see from Remark 4.4 that it is not an easy task to find the value of $k$ that maximizes the function
\[ g(k) = \sqrt{k(x_1^2 - x_n^2) + n x_n^2} - \frac{\big(k(x_1^q - x_n^q) + n x_n^q\big)^{1/q}}{n^{1/q-1/2}} \]
(cf. Remark 4.4). In any case, the result in Lemma 2 is good enough for our application in the next section.
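The behaviour of $g(k)$ over the two-level vectors in the proof can be scanned numerically; the Python sketch below (illustrative, with arbitrarily chosen sample values) also confirms the crude bound $g(k) \le \sqrt n\,(x_1 - x_n)$ used above:

```python
import math

# Illustrative scan of g(k) from the proof of Lemma 2: for two-level vectors
# with k entries equal to x1 and n - k entries equal to xn,
#   g(k) = sqrt(k(x1^2 - xn^2) + n*xn^2) - (k(x1^q - xn^q) + n*xn^q)^(1/q) / n^(1/q - 1/2),
# and the crude bound used in the proof is g(k) <= sqrt(n) * (x1 - xn).
def g(k, n, x1, xn, q):
    l2 = math.sqrt(k * (x1 ** 2 - xn ** 2) + n * xn ** 2)
    lq = (k * (x1 ** q - xn ** q) + n * xn ** q) ** (1.0 / q)
    return l2 - lq / n ** (1.0 / q - 0.5)

n, x1, xn, q = 100, 1.0, 0.25, 0.5          # sample values, chosen arbitrarily
vals = [g(k, n, x1, xn, q) for k in range(n + 1)]
assert max(vals) <= math.sqrt(n) * (x1 - xn)    # the crude bound of the proof
assert abs(vals[0]) < 1e-9 and abs(vals[-1]) < 1e-9  # g(0) = g(n) = 0
```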

3 Main Results and Proofs

To describe our results, we need more notation. We use $\mathrm{Null}(\Phi)$ to denote the null space of $\Phi$ and $S(x)$ to denote the support of $x \in \mathbb{R}^n$, i.e., $S(x) = \{i : x_i \ne 0\}$ for $x = (x_1, \dots, x_n)^T$. Recall that $x^*$ is a sparse solution, i.e., $\Phi x^* = b$ with $S(x^*) \subset T_0$ and the cardinality of $T_0$ less than or equal to $k$. Let $x_q$ be the solution of the minimization problem (5). Recall from [12] that $x_q$ is the unique sparse solution $x^*$ if and only if
\[ \|h_{T_0}\|_q < \|h_{T_0^c}\|_q \tag{11} \]
for all nonzero vectors $h$ in the null space of $\Phi$. This is called the null space property. Indeed, we have
\[ \|x^*\|_q^q = \|x^*_{T_0}\|_q^q \le \|x^*_{T_0} + h_{T_0}\|_q^q + \|h_{T_0}\|_q^q < \|x^*_{T_0} + h_{T_0}\|_q^q + \|h_{T_0^c}\|_q^q = \|x^* + h\|_q^q \]

by (11), for any nonzero vector $h$ in the null space of $\Phi$. Thus, $x^*$ is the solution of (5). Another way to show the sufficiency is to let $x_q$ be the solution of (5) and let $h = x^* - x_q$, which is in the null space of $\Phi$. If $h \ne 0$, we have
\[ \|x^*_{T_0}\|_q^q = \|x^*\|_q^q \ge \|x_q\|_q^q = \|(x_q)_{T_0}\|_q^q + \|(x_q)_{T_0^c}\|_q^q \ge \|x^*_{T_0}\|_q^q - \|h_{T_0}\|_q^q + \|h_{T_0^c}\|_q^q. \]
It follows that $\|h_{T_0^c}\|_q^q \le \|h_{T_0}\|_q^q < \|h_{T_0^c}\|_q^q$, where we have used (11); this is a contradiction. Thus, $x_q$ is the sparse solution. The necessity of the null space property (11) can be seen as follows. Suppose that there is a nonzero vector $h \in \mathrm{Null}(\Phi)$ such that $\|h_{T_0^c}\|_q \le \|h_{T_0}\|_q$. Let $x^* = h_{T_0}$ and $b = \Phi x^*$. If $\|h_{T_0^c}\|_q < \|h_{T_0}\|_q$, then $-h_{T_0^c}$ satisfies $\Phi(-h_{T_0^c}) = b$, and the minimization (5) would find a solution $x_q$ which is not $h_{T_0}$, the sparse solution for this $b$, contradicting the assumption that $x_q$ is the unique sparse solution $x^*$. Similarly, if $\|h_{T_0^c}\|_q = \|h_{T_0}\|_q$, the minimization (5) may find the two solutions $h_{T_0}$ and $-h_{T_0^c}$, again a contradiction. In fact, one can find the smallest constant $\rho < 1$ such that

\[ \|h_{T_0}\|_q \le \rho\,\|h_{T_0^c}\|_q, \qquad \forall h \in \mathrm{Null}(\Phi). \]
Indeed, it is easy to see that the following equality holds,
\[ \sup_{\substack{h\in\mathrm{Null}(\Phi)\\ h\ne 0}} \frac{\sum_{i\in T_0}|h_i|^q}{\sum_{i\notin T_0}|h_i|^q} = \max_{\substack{h\in\mathrm{Null}(\Phi)\\ \|h\|_2=1}} \frac{\sum_{i\in T_0}|h_i|^q}{\sum_{i\notin T_0}|h_i|^q}, \]
since the ratio is invariant under scaling of $h$; this common value is denoted by $\rho^q$. In general, for $h = x - x_q$, let
\[ \|h_{T_0}\|_q = \tau(h,q)\,\|h_{T_0^c}\|_q. \tag{12} \]
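To make the quantities concrete, here is a hypothetical toy example (not from the paper): a $2 \times 3$ matrix with a one-dimensional null space, for which the supremum above reduces to a finite maximum over the choice of $T_0$ and can be evaluated by brute force:

```python
import itertools

# Hypothetical toy example: brute-force evaluation of the null space
# property (11) for a 2 x 3 matrix whose null space is spanned by v.
Phi = [[1.0, 0.0, 1.0],
       [0.0, 1.0, 1.0]]
v = [1.0, 1.0, -1.0]                      # spans Null(Phi)
assert all(abs(sum(row[i] * v[i] for i in range(3))) < 1e-12 for row in Phi)

def power_ratio(v, k, q):
    """max over |T0| = k of sum_{T0}|h_i|^q / sum_{T0^c}|h_i|^q for h = v."""
    powers = [abs(c) ** q for c in v]
    total = sum(powers)
    best = 0.0
    for T0 in itertools.combinations(range(len(v)), k):
        part = sum(powers[i] for i in T0)
        best = max(best, part / (total - part))
    return best

q = 0.5
assert power_ratio(v, 1, q) < 1.0   # (11) holds: 1-sparse vectors are recovered by (5)
assert power_ratio(v, 2, q) >= 1.0  # (11) fails for k = 2 with only two measurements
```

The failure for $k = 2$ is expected: with $m = 2$ measurements one cannot hope to recover all 2-sparse vectors.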

The purpose of this study is to show how to make $\tau(h,q) < 1$ for all nonzero vectors $h$ in the null space of $\Phi$.

For any nonzero vector $h$ in the null space of $\Phi$, we rewrite $h$ as a sum of vectors $h_{T_0}, h_{T_1}, h_{T_2}, \dots$, each of sparsity at most $k$. Here, $T_0$ corresponds to the locations of the $k$ largest entries of $x^*$; $T_1$ to the locations of the $k$ largest entries of $h_{T_0^c}$; $T_2$ to the locations of the next $k$ largest entries of $h_{T_0^c}$, and so on, where $T_0^c$ stands for the complement of the index set $T_0$ in $\{1, 2, \dots, n\}$. Without loss of generality, we may assume that $h = (h_{T_0}, h_{T_1}, h_{T_2}, \dots)^T$ with the cardinality of each $T_i$ equal to $k$ for all $i = 0, 1, 2, \dots$. Let us introduce another ratio $t := t(h,q) \in [0,1]$, the number such that
\[ \|h_{T_1}\|_q^q = t\sum_{i\ge 1}\|h_{T_i}\|_q^q. \]
First of all, we have


Lemma 3 For $q \in (0,1)$, we have
\[ \sum_{i\ge 2}\|h_{T_i}\|_2^2 \le \frac{(1-t)\,t^{(2-q)/q}}{k^{(2-q)/q}}\Big(\sum_{i\ge 1}\|h_{T_i}\|_q^q\Big)^{2/q}. \tag{13} \]

Proof. It is easy to see that
\begin{align*}
\sum_{i\ge 2}\|h_{T_i}\|_2^2 &\le |h_{2k+1}|^{2-q}\sum_{i\ge 2}\|h_{T_i}\|_q^q \le \left(\frac{\|h_{T_1}\|_q^q}{k}\right)^{(2-q)/q}\sum_{i\ge 2}\|h_{T_i}\|_q^q \\
&\le \left(\frac{\|h_{T_1}\|_q^q}{k}\right)^{(2-q)/q}\frac{1-t}{t}\,\|h_{T_1}\|_q^q
= \frac{1}{k^{(2-q)/q}}\,(1-t)\,t^{(2-q)/q}\Big(\sum_{i\ge 1}\|h_{T_i}\|_q^q\Big)^{2/q}.
\end{align*}

The result in (13) follows.

Next we have

Lemma 4 For $q \in (0,1)$, we have
\[ \sum_{i\ge 2}\|h_{T_i}\|_2 \le \frac{1}{k^{1/q-1/2}}\Big(\sum_{i\ge 1}\|h_{T_i}\|_q^q\Big)^{1/q}. \tag{14} \]
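Both block estimates can be spot-checked numerically; the Python sketch below (illustrative, not from the paper) builds a vector sorted in decreasing magnitude, splits it into blocks of size $k$, and tests (13) and (14):

```python
import random

# Illustrative spot-check of the bounds (13) and (14).  h is split into
# consecutive blocks T1, T2, ... of size k with entries in decreasing
# magnitude, and t = ||h_{T1}||_q^q / sum_{i>=1} ||h_{T_i}||_q^q.
random.seed(1)
k, q, nblocks = 4, 0.7, 6
h = sorted((abs(random.gauss(0.0, 1.0)) for _ in range(k * nblocks)), reverse=True)
blocks = [h[i * k:(i + 1) * k] for i in range(nblocks)]   # T1, T2, ...
S = sum(v ** q for v in h)                                 # sum_i ||h_{T_i}||_q^q
t = sum(v ** q for v in blocks[0]) / S

# (13): sum_{i>=2} ||h_{T_i}||_2^2 <= (1-t) t^((2-q)/q) S^(2/q) / k^((2-q)/q)
lhs13 = sum(v * v for b in blocks[1:] for v in b)
rhs13 = (1 - t) * t ** ((2 - q) / q) * S ** (2 / q) / k ** ((2 - q) / q)
assert lhs13 <= rhs13 + 1e-12

# (14): sum_{i>=2} ||h_{T_i}||_2 <= S^(1/q) / k^(1/q - 1/2)
lhs14 = sum(sum(v * v for v in b) ** 0.5 for b in blocks[1:])
rhs14 = S ** (1 / q) / k ** (1 / q - 0.5)
assert lhs14 <= rhs14 + 1e-12
```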

Proof. By Lemma 2 (applied with $n = k$ to each block), we have
\[ k^{1/q-1/2}\,\|h_{T_i}\|_2 \le \|h_{T_i}\|_q + k^{1/q}\big(|h_{ik+1}| - |h_{ik+k}|\big) \]
for $i \ge 2$. It follows that
\[ \sum_{i\ge 2} k^{1/q-1/2}\,\|h_{T_i}\|_2 \le \sum_{i\ge 2}\|h_{T_i}\|_q + k^{1/q}\,\frac{\|h_{T_1}\|_q}{k^{1/q}} \le \sum_{i\ge 1}\|h_{T_i}\|_q \le \Big(\sum_{i\ge 1}\|h_{T_i}\|_q^q\Big)^{1/q}, \]
since $q \le 1$; here the sum of the oscillation terms telescopes and $|h_{2k+1}| \le \|h_{T_1}\|_q/k^{1/q}$.

Furthermore, we have

Lemma 5 For $q \in (0,1)$, we have
\[ \|\Phi(h_{T_0} + h_{T_1})\|_2^2 \ge \frac{1-\delta_{2k}}{k^{2/q-1}}\,\big(\tau(h,q)^2 + t^{2/q}\big)\Big(\sum_{i\ge 1}\|h_{T_i}\|_q^q\Big)^{2/q}. \tag{15} \]

Proof. By the definition of $\delta_{2k}$ and using (9), we have
\begin{align*}
\|\Phi(h_{T_0}+h_{T_1})\|_2^2 &\ge (1-\delta_{2k})\,\|h_{T_0}+h_{T_1}\|_2^2 = (1-\delta_{2k})\big(\|h_{T_0}\|_2^2 + \|h_{T_1}\|_2^2\big) \\
&\ge (1-\delta_{2k})\big(\|h_{T_0}\|_q^2 + \|h_{T_1}\|_q^2\big)/k^{2/q-1} \\
&= \frac{1-\delta_{2k}}{k^{2/q-1}}\,\big(\tau(h,q)^2 + t^{2/q}\big)\Big(\sum_{i\ge 1}\|h_{T_i}\|_q^q\Big)^{2/q}.
\end{align*}

It is easy to see that $\Phi(h_{T_0}+h_{T_1}) = \Phi h - \Phi\big(\sum_{j\ge 2}h_{T_j}\big) = -\Phi\big(\sum_{j\ge 2}h_{T_j}\big)$, since $h$ lies in the null space of $\Phi$. We have the following estimate.

Lemma 6 For $q \in (0,1)$, we have
\[ \|\Phi(h_{T_0}+h_{T_1})\|_2^2 = \Big\|\Phi\Big(\sum_{j\ge 2}h_{T_j}\Big)\Big\|_2^2 \le \left(\frac{(1-t)\,t^{(2-q)/q}}{k^{(2-q)/q}} + \frac{\delta_{2k}}{k^{2/q-1}}\right)\Big(\sum_{i\ge 1}\|h_{T_i}\|_q^q\Big)^{2/q}. \tag{16} \]

Proof. A straightforward calculation shows
\begin{align*}
\Big\|\Phi\Big(\sum_{j\ge 2}h_{T_j}\Big)\Big\|_2^2 &= \sum_{i,j\ge 2}\big\langle \Phi h_{T_i}, \Phi h_{T_j}\big\rangle = \sum_{j\ge 2}\big\langle \Phi h_{T_j}, \Phi h_{T_j}\big\rangle + 2\sum_{2\le j<i}\big\langle \Phi h_{T_i}, \Phi h_{T_j}\big\rangle \\
&\le (1+\delta_k)\sum_{i\ge 2}\|h_{T_i}\|_2^2 + 2\delta_{2k}\sum_{i>j\ge 2}\|h_{T_i}\|_2\,\|h_{T_j}\|_2 \\
&\le \sum_{i\ge 2}\|h_{T_i}\|_2^2 + \delta_{2k}\Big(\sum_{i\ge 2}\|h_{T_i}\|_2\Big)^2 \qquad (\text{since } \delta_k \le \delta_{2k}) \\
&\le \left(\frac{(1-t)\,t^{(2-q)/q}}{k^{(2-q)/q}} + \frac{\delta_{2k}}{k^{2/q-1}}\right)\Big(\sum_{i\ge 1}\|h_{T_i}\|_q^q\Big)^{2/q},
\end{align*}
by Lemmas 3 and 4.

By using (15) and (16), we have
\[ (1-\delta_{2k})\big(\tau(h,q)^2 + t^{2/q}\big) \le (1-t)\,t^{(2-q)/q} + \delta_{2k}, \]
or
\[ \tau(h,q)^2 \le \big(\delta_{2k} + t^{(2-q)/q} - (2-\delta_{2k})\,t^{2/q}\big)\big/(1-\delta_{2k}). \tag{17} \]

Let us study the maximum of the right-hand side as a function of $t \in [0,1]$. Letting $s = (2-q)/(2-\delta_{2k})$, so that $s \le 2$ and $s/2 \in (0,1)$, it is easy to see that the maximum occurs at $t = s/2$, and
\[ \tau(h,q)^2 \le \Big(\delta_{2k} + \big(\tfrac{s}{2}\big)^{(2-q)/q} - (2-\delta_{2k})\big(\tfrac{s}{2}\big)^{2/q}\Big)\Big/(1-\delta_{2k}) = \frac{\delta_{2k}\,s + q\,(s/2)^{2/q}}{s\,(1-\delta_{2k})}. \]
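This maximization is elementary calculus, and it can be confirmed numerically; the Python sketch below (illustrative, with arbitrarily chosen sample values of $\delta_{2k}$ and $q$) compares a grid search against the closed form:

```python
# Numerical confirmation that the right-hand side of (17),
#   phi(t) = delta + t^((2-q)/q) - (2 - delta) * t^(2/q),
# is maximized over [0, 1] at t = s/2 with s = (2-q)/(2-delta),
# the maximum being (delta*s + q*(s/2)^(2/q)) / s before division by (1-delta).
delta, q = 0.49, 0.3        # sample values, chosen arbitrarily

s = (2 - q) / (2 - delta)

def phi(t):
    return delta + t ** ((2 - q) / q) - (2 - delta) * t ** (2 / q)

grid = [i / 200000 for i in range(1, 200001)]
t_star = max(grid, key=phi)
assert abs(t_star - s / 2) < 1e-4
assert abs(phi(s / 2) - (delta * s + q * (s / 2) ** (2 / q)) / s) < 1e-9
```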

If the term on the right-hand side of this inequality is less than 1, then we will have $\tau(h,q) < 1$ and hence $x_q$ is the sparse solution of (1). To see the admissible range of values of $\delta_{2k}$, we continue with the following simple analysis. The requirement is
\[ \delta_{2k}\,s + q\big(\tfrac{s}{2}\big)^{2/q} < s - \delta_{2k}\,s, \]
or
\[ 2\delta_{2k} + \big(\tfrac{s}{2}\big)^{2/q}\,\frac{q}{s} < 1. \]
Further simplification yields
\[ \delta_{2k} + q\left(\frac{2-q}{2(2-\delta_{2k})}\right)^{2/q}\frac{2-\delta_{2k}}{2(2-q)} < \frac12. \tag{18} \]
Since $\delta_{2k} < 1$ and $q \le 1$, we have
\[ \left(\frac{2-q}{2(2-\delta_{2k})}\right)^{2/q}\frac{2-\delta_{2k}}{2(2-q)} \le \left(\frac{2-q}{2}\right)^{2/q} \approx \frac1e, \]
so the second term on the left-hand side of (18) goes to zero as $q \to 0^+$. Hence, whenever $\delta_{2k} < 1/2$,
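Condition (18) is explicit, so for a given $\delta_{2k}$ one can scan for admissible values of $q$; the Python sketch below (illustrative, with the arbitrarily chosen value $\delta_{2k} = 0.49$) does exactly that:

```python
# Illustrative computation: for a given delta_2k below 1/2, scan condition (18)
# to see which q in (0, 1] it admits.
def lhs18(q, delta):
    base = (2 - q) / (2 * (2 - delta))
    return delta + q * base ** (2 / q) * (2 - delta) / (2 * (2 - q))

delta = 0.49                               # sample value, chosen arbitrarily
passing = [q / 1000 for q in range(1, 1001) if lhs18(q / 1000, delta) < 0.5]
assert passing and passing[0] == 0.001     # small q always works when delta < 1/2
print("largest scanned q satisfying (18):", max(passing))
```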

we can establish the results in Theorem 3.

Proof of Theorem 3. Based on the proofs of Lemmas 5 and 6, we have
\[ \|h_{T_0}\|_q^2 \le \rho_q^2\Big(\sum_{i\ge 1}\|h_{T_i}\|_q^q\Big)^{2/q}, \]
where
\[ \rho_q^2 := \frac{\delta_{2k}\,s + q\,(s/2)^{2/q}}{s\,(1-\delta_{2k})}. \]
That is,
\[ \|h_{T_0}\|_q \le \rho_q\Big(\sum_{i\ge 1}\|h_{T_i}\|_q^q\Big)^{1/q}. \tag{19} \]

Since $\delta_{2k} < 1/2$, there exists a $q_0$ such that (18) holds, and hence $\rho_q < 1$, for any $q < q_0$. As $x_q$ is a minimizer of (5), for any $x$ which is a solution of the under-determined linear system $\Phi x = b$ we let $h = x_q - x$ and compute
\begin{align*}
\|x_{T_0}\|_q^q + \|x_{T_0^c}\|_q^q = \|x\|_q^q \ge \|x_q\|_q^q = \|x + h\|_q^q &= \sum_{i\in T_0}|x_i + h_i|^q + \sum_{i\in T_0^c}|x_i + h_i|^q \\
&\ge \|x_{T_0}\|_q^q - \|h_{T_0}\|_q^q + \|h_{T_0^c}\|_q^q - \|x_{T_0^c}\|_q^q.
\end{align*}
Thus, we have
\[ \|h_{T_0^c}\|_q^q \le \|h_{T_0}\|_q^q + 2\|x_{T_0^c}\|_q^q. \tag{20} \]

Together with (19), we conclude
\[ \sum_{i\ge 1}\|h_{T_i}\|_q^q \le \rho_q^q\sum_{i\ge 1}\|h_{T_i}\|_q^q + 2\|x_{T_0^c}\|_q^q. \]
That is,
\[ \sum_{i\ge 1}\|h_{T_i}\|_q^q \le \frac{2}{1-\rho_q^q}\,\|x_{T_0^c}\|_q^q. \]
By (19), we have
\[ \|h\|_q^q = \|h_{T_0}\|_q^q + \sum_{i\ge 1}\|h_{T_i}\|_q^q \le (\rho_q^q + 1)\sum_{i\ge 1}\|h_{T_i}\|_q^q \le \frac{2(1+\rho_q^q)}{1-\rho_q^q}\,\|x_{T_0^c}\|_q^q. \]
This completes the proof.
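The constant in the final display above is fully explicit; the Python sketch below (illustrative, with arbitrarily chosen sample values) evaluates $2(1+\rho_q^q)/(1-\rho_q^q)$ from the closed form for $\rho_q^2$:

```python
# Illustrative evaluation of the constant in the final display of the proof,
#   2 * (1 + rho_q^q) / (1 - rho_q^q),
# using rho_q^2 = (delta*s + q*(s/2)^(2/q)) / (s*(1 - delta)), s = (2-q)/(2-delta).
def error_constant(q, delta):
    s = (2 - q) / (2 - delta)
    rho_sq = (delta * s + q * (s / 2) ** (2 / q)) / (s * (1 - delta))
    rho_q_pow_q = rho_sq ** (q / 2)        # rho_q^q = (rho_q^2)^(q/2)
    assert rho_q_pow_q < 1.0, "need q below q0 for this delta"
    return 2 * (1 + rho_q_pow_q) / (1 - rho_q_pow_q)

print(error_constant(0.2, 0.49))   # finite, since q = 0.2 is below q0 here
```

As $q$ decreases toward 0 the constant stays finite, while as $q$ approaches $q_0$ it blows up, reflecting $\rho_q \to 1$.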

4 Remarks

We have a few remarks in order.

Remark 4.1 Clearly, the results in Theorem 3 can be extended to the noisy recovery setting as in [4] and [11]. We leave the discussion to the interested reader.

Remark 4.2 The results in Theorem 3 can also be extended to deal with sparse solutions for multiple measurement vectors, as discussed in [13]. We omit the details.


Remark 4.3 Recently, the block sparse solution of compressed sensing problems was introduced and studied in [8], [1]; it has many practical applications, such as DNA microarrays [17], multiband signals [16], and magnetoencephalography (MEG) [9]. In recovering the sparse solution $x$ from $\Phi x = b$, the entries of $x$ are grouped into blocks, that is, $x = (x_{t_1}, x_{t_2}, \dots, x_{t_\ell})$ with $x_{t_i}$ being a block of entries for each $i$. One looks for the fewest number of nonzero blocks $x_{t_i}$ such that $\Phi x = b$. Letting
\[ \|x\|_{2,q} = \Big(\sum_{i=1}^{\ell}\|x_{t_i}\|_2^q\Big)^{1/q} \]
be a mixed norm, where $\|x_{t_i}\|_2$ is the standard $\ell_2$ norm of the vector $x_{t_i}$, one finds the block sparse solution by solving
\[ \min\{\|x\|_{2,q} : \Phi x = b\}. \]
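The mixed norm above can be written down in a couple of lines; the Python sketch below (function name hypothetical, not from the paper) computes the $\ell_q$ combination of the blockwise $\ell_2$ norms:

```python
import math

# A small sketch of the mixed (2, q) quasi-norm of Remark 4.3:
# the l_q combination of the blockwise l_2 norms.
def mixed_norm_2q(blocks, q):
    return sum(math.sqrt(sum(v * v for v in b)) ** q for b in blocks) ** (1.0 / q)

x_blocks = [[3.0, 4.0], [0.0, 0.0], [1.0, 0.0]]
print(mixed_norm_2q(x_blocks, 1.0))   # block norms 5, 0, 1 combine to 6.0
```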

(Cf. [8] for $q = 1$.) The concept of the restricted isometry constant has been extended to this mixed norm minimization for $q = 1$ in [8]. Our study in §3 can be generalized to this setting; we leave the details to the interested reader.

Remark 4.4 In order to find a better upper bound in Lemma 2, we need to find out which $k$ maximizes $f(x)$. Let us treat the right-hand side of the equation at the end of the proof of Lemma 2 as a function
\[ g(k) = \sqrt{k(x_1^2 - x_n^2) + n x_n^2} - \frac{\big(k(x_1^q - x_n^q) + n x_n^q\big)^{1/q}}{n^{1/q-1/2}}. \]
Note that $g(0) = 0$ and $g(n) = 0$, so the maximum of $g$ must occur at some $k$ between $1$ and $n-1$. The derivative of $g$ is
\[ g'(k) = \frac{x_1^2 - x_n^2}{2\sqrt{k(x_1^2 - x_n^2) + n x_n^2}} - \frac{x_1^q - x_n^q}{q\,n^{1/q-1/2}}\big(k(x_1^q - x_n^q) + n x_n^q\big)^{1/q-1}. \]
The critical point satisfies
\[ \frac{q\,n^{1/q-1/2}\,(x_1^2 - x_n^2)}{2(x_1^q - x_n^q)} = \sqrt{k(x_1^2 - x_n^2) + n x_n^2}\;\big(k(x_1^q - x_n^q) + n x_n^q\big)^{1/q-1}, \]
that is,
\[ \frac{q\,n^{1/q-1/2}\,(x_1^2 - x_n^2)}{2(x_1^q - x_n^q)}\,\big(k(x_1^q - x_n^q) + n x_n^q\big)^{1-1/q} = \sqrt{k(x_1^2 - x_n^2) + n x_n^2}. \tag{21} \]
The critical point $k$ is not easy to find in closed form except for $q = 1$. Let us try the particular value $q = 2/3$. In this case, we have

Lemma 7 For any $x \in \mathbb{R}^n$, one has
\[ \|x\|_2 - \frac{\|x\|_q}{n^{1/q-1/2}} \le \frac23\sqrt{\frac n3}\Big(\max_{1\le i\le n}|x_i|^q - \min_{1\le i\le n}|x_i|^q\Big)^{1/q} \tag{22} \]
for $q = 2/3$. In particular, one has
\[ \|x\|_2 - \frac{\|x\|_q}{n^{1/q-1/2}} \le \frac23\sqrt{\frac n3}\Big(\max_{1\le i\le n}|x_i| - \min_{1\le i\le n}|x_i|\Big). \tag{23} \]

Proof. For $q = 2/3$ (so that $n^{1/q-1/2} = n$), it is easy to see that
\[ g(k) = \sqrt{n x_n^2 + k(x_1^2 - x_n^2)} - \frac{\big(k(x_1^{2/3} - x_n^{2/3}) + n x_n^{2/3}\big)^{3/2}}{n} \]
achieves its maximum, by a standard calculation, at
\[ k_0 = \frac{n\big(\sqrt{p(s,t)} - 3(s^2 t + s t^2 + 2t^3)\big)}{6(s^3 - t^3)}, \]
where $s := x_1^{2/3}$, $t := x_n^{2/3}$, and
\[ p(s,t) := 4s^6 + 12s^5 t + 33s^4 t^2 + 46s^3 t^3 + 33s^2 t^4 + 12s t^5 + 4t^6. \]
The maximum of $g(k)$ is then
\[ g(k_0) = \frac{\sqrt n}{6\sqrt 6}\left(6\sqrt{\sqrt{p(s,t)} - 3st(s+t)} - \left(\frac{\sqrt{p(s,t)} + 3st(s+t)}{s^2 + st + t^2}\right)^{3/2}\right). \]
Let
\[ F(s,t) := 6\sqrt{\sqrt{p(s,t)} - 3st(s+t)} - \left(\frac{\sqrt{p(s,t)} + 3st(s+t)}{s^2 + st + t^2}\right)^{3/2}, \]
so that $g(k_0) = \frac{\sqrt n}{6\sqrt 6}\,F(s,t)$. To find an upper bound for $F(s,t)$, we may consider $F(1,y)$ with $y = t/s$ for a fixed $s$, since $F$ is homogeneous of degree $3/2$. It is easy to plot $F(1,y)$ and $4\sqrt 2\,(1-y)^{3/2}$ together with their difference (Fig. 1) and to see that $F(1,y) \le 4\sqrt 2\,(1-y)^{3/2}$. Hence the inequality (22) follows, since $\frac{\sqrt n}{6\sqrt 6}\cdot 4\sqrt 2\,(s-t)^{3/2} = \frac23\sqrt{\frac n3}\,(s-t)^{3/2}$. Furthermore, by the quasi-triangle inequality for $q = 2/3$,
\[ \Big(\max_{1\le i\le n}|x_i|^q - \min_{1\le i\le n}|x_i|^q\Big)^{1/q} \le \max_{1\le i\le n}|x_i| - \min_{1\le i\le n}|x_i|, \]
one obtains the inequality (23). The analysis above shows that a better estimate than Lemma 2 is hard to obtain in closed form.
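The plotted comparison can also be checked pointwise; the Python sketch below uses the reconstruction of $F$ given in the proof above (an assumption of this sketch) and verifies $F(1,t) \le 4\sqrt 2\,(1-t)^{3/2}$ on a grid:

```python
import math

# Numerical spot-check, using the formula for F reconstructed in the proof
# above, that F(1, t) <= 4*sqrt(2)*(1 - t)^(3/2) on [0, 1] (the inequality
# read off from Fig. 1), which yields (22).
def p(s, t):
    return (4 * s**6 + 12 * s**5 * t + 33 * s**4 * t**2 + 46 * s**3 * t**3
            + 33 * s**2 * t**4 + 12 * s * t**5 + 4 * t**6)

def F(s, t):
    r = math.sqrt(p(s, t))
    return (6 * math.sqrt(r - 3 * s * t * (s + t))
            - ((r + 3 * s * t * (s + t)) / (s * s + s * t + t * t)) ** 1.5)

assert abs(F(1.0, 0.0) - 4 * math.sqrt(2)) < 1e-12   # equality at t = 0
for i in range(1001):
    t = i / 1000
    assert F(1.0, t) <= 4 * math.sqrt(2) * (1 - t) ** 1.5 + 1e-9
```

Equality at $t = 0$ (and the vanishing of both sides at $t = 1$) matches the behaviour visible in Fig. 1, where the difference is small and nonnegative.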

[Figure 1: The graphs of $F(1,t)$ and $4\sqrt 2\,(1-t)^{3/2}$ (left) and the graph of their difference (right).]

References

[1] R. G. Baraniuk, V. Cevher, M. F. Duarte, C. Hegde, Model-based compressive sensing, IEEE Trans. Inform. Theory 56 (2010) 1982–2001.
[2] T. Cai, L. Wang, G. Xu, Shifting inequality and recovery of sparse signals, IEEE Trans. Signal Process. 58 (2010) 1300–1308.
[3] T. Cai, L. Wang, G. Xu, New bounds for restricted isometry constants, IEEE Trans. Inform. Theory 56 (2010) 4388–4394.
[4] E. Candès, The restricted isometry property and its implications for compressed sensing, C. R. Acad. Sci. Paris, Ser. I 346 (2008) 589–592.
[5] E. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory 51 (2005) 4203–4215.
[6] M. Davies and R. Gribonval, Restricted isometry constants where ℓp sparse recovery can fail for 0 < p ≤ 1, IEEE Trans. Inform. Theory, in press.
[7] R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization, IEEE Signal Process. Lett. 14 (2007) 707–710.
[8] Y. C. Eldar, M. Mishali, Robust recovery of signals from a structured union of subspaces, IEEE Trans. Inform. Theory 55 (2009) 5302–5316.
[9] Y. C. Eldar, P. Kuppinger, H. Bölcskei, Block-sparse signals: uncertainty relations and efficient recovery, IEEE Trans. Signal Process. 58 (2010) 3042–3054.
[10] S. Foucart, A note on guaranteed sparse recovery via ℓ1-minimization, Appl. Comput. Harmon. Anal. 29 (2010) 97–103.
[11] S. Foucart, M. J. Lai, Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1, Appl. Comput. Harmon. Anal. 26 (2009) 395–407.
[12] R. Gribonval and M. Nielsen, Sparse decompositions in unions of bases, IEEE Trans. Inform. Theory 49 (2003) 3320–3325.
[13] M. J. Lai and Louis Y. Liu, The null space property for sparse recovery from multiple measurement vectors, to appear in Appl. Comput. Harmon. Anal., 2010.
[14] M. J. Lai and J. Wang, An unconstrained ℓq minimization for sparse solution of underdetermined linear systems, to appear in SIAM J. Optim., 2010.
[15] S. Li and Q. Mo, New bounds on the restricted isometry constant δ2k, submitted, 2010.
[16] M. Mishali, Y. C. Eldar, Blind multi-band signal reconstruction: compressed sensing for analog signals, IEEE Trans. Signal Process. 57 (2009) 993–1009.
[17] F. Parvaresh, H. Vikalo, S. Misra, B. Hassibi, Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays, IEEE J. Sel. Top. Signal Process. 2 (2008) 275–285.
