IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2009


Nonlinear Dimensionality Reduction with Local Spline Embedding

Shiming Xiang, Feiping Nie, Changshui Zhang, Member, IEEE, and Chunxia Zhang

Abstract—This paper presents a new algorithm for Non-Linear Dimensionality Reduction (NLDR). Our algorithm is developed under the conceptual framework of compatible mapping. Each such mapping is a compound of a tangent space projection and a group of splines. The tangent space projection is estimated at each data point on the manifold, through which the data point itself and its neighbors are represented in the tangent space with local coordinates. Splines are then constructed to guarantee that each of the local coordinates can be mapped to its own single global coordinate with respect to the underlying manifold. Thus the compatibility between local alignments is ensured. In this setting, we develop an optimization framework based on reconstruction error analysis, which yields a global optimum. The proposed algorithm is also extended to embed out-of-samples via spline interpolation. Experiments on toy data sets and real-world data sets illustrate the validity of our method.

Index Terms—Non-linear dimensionality reduction, compatible mapping, local spline embedding, out-of-samples.

Shiming Xiang, Feiping Nie and Changshui Zhang are with the State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Department of Automation, Tsinghua University, Beijing 100084, China. Email: [email protected]. Chunxia Zhang is with the Software School, School of Computer Science, Beijing Institute of Technology, Beijing 100081, China. Manuscript received January 16, 2008; revised July 19, 2008; accepted September 16, 2008.


I. INTRODUCTION

In many areas of natural and social sciences, one is often confronted with intrinsically low-dimensional data points which lie in a very high-dimensional observation space. An example is a set of images of an individual's face observed under different poses and lighting conditions. If each image contains n × n greyscale pixels, then each face image yields a data point in an n^2-dimensional space, yet the intrinsic dimensionality of the space of all these images equals only the number of pose and lighting parameters. Reducing the dimensionality of such data is needed to facilitate the visualization, clustering, classification, communication and storage of high-dimensional data in artificial intelligence, pattern recognition, computer vision, data mining, information retrieval, and data engineering.

Many dimensionality reduction algorithms have been proposed, most of which are linear, such as the commonly used Principal Component Analysis (PCA) [27] and classical multidimensional scaling [9]. Linear methods can discover the true structure of data lying on or near a linear subspace, but may be inadequate for data with nonlinear structures. To explore nonlinear structures, algorithms which generate nonlinear mappings [40] have also been developed [31], such as the self-organizing map [29], auto-encoder neural networks [1], the generative topographic map [4], principal curves and manifolds [24], [49], [28], [46], [21], [15], [20], and kernel principal component analysis [42], [35]. However, most of these algorithms can only give a local minimum, and thus may fail to discover the true nonlinear structure hidden in the data.

Recently, a new research direction of nonlinear mapping methods has been developed under the assumption that the data points are sampled from an underlying manifold embedded in a high-dimensional Euclidean space. The two seminal algorithms are Isomap [48] and Locally Linear Embedding (LLE) [39]. Based on the multidimensional scaling algorithm [9], Isomap attempts to globally preserve the geodesic distances between all pairs of data points. LLE tries to discover the nonlinear structure by exploiting the local geometry of the data. Later, various manifold learning algorithms have been proposed, such as manifold charting [5], Laplacian Eigenmap (LE) [2], Hessian LLE (HLLE) [13], Local Tangent Space Alignment (LTSA) [61], semi-definite embedding [54], conformal eigenmaps [45], and other extensive work [10], [22], [47], [38], [59], [60], [12], [16], [17], [37].


Analyses and interpretations of these algorithms have been given from the viewpoints of graph embedding [56], kernelization [22], the global versus local framework [10], and so on. The utility of manifold learning has been demonstrated in many applications, such as visualization [6], [50], [20], image analysis [11], image super-resolution [8], denoising [25], motion analysis [26], data ranking [62], semi-supervised learning [52], and so on. All these manifold learning algorithms share a common characteristic in that a globally optimal dimensionality reduction is achieved by assembling the local geometrical information learned from the data. However, the local geometrical information is evaluated in different ways in these algorithms.

This paper presents a new algorithm for geometrically motivated Non-Linear Dimensionality Reduction (NLDR). Our algorithm is developed from the conceptual framework of compatible mapping. In this framework, each data point can be represented in different local coordinate systems, but its global coordinate should remain unique. Compatible mappings are used to achieve this goal. Specifically, each mapping is composed of a tangent space projection and a group of splines. The local geometry of the manifold is exploited by using the tangent space at each data point, and the corresponding tangent space projection is estimated to locally coordinatize the data point itself and its neighbors. Different from using linear mappings [61], here smooth splines are constructed to map each of the local coordinates to its own single low-dimensional global coordinate. In this way, we can achieve compatible alignments on the manifold for any group of neighboring data points. In this setting, an optimization framework is developed in which a global optimum is achieved. Our algorithm has many parallels to manifold learning by Isomap, LLE, LE, HLLE and LTSA. Most notably, the objective function is based on the reconstruction error, and a global low-dimensional reduction is achieved by solving an eigenvalue problem. Experiments on many data sets illustrate the validity of our method.

Embedding new data points (out-of-samples) into the previously learned structure is needed in dynamic settings where the data points are collected sequentially. Bengio et al. developed a framework for out-of-sample extensions of Isomap, LLE and LE by using the "kernel trick" to reformulate these algorithms [3]. We also develop an out-of-sample extension for our algorithm. The goal is achieved by spline interpolation, and the computation steps are the same as those used to develop our manifold learning algorithm.


Such a straightforward extension for out-of-samples can also be obtained in LLE [41] and LTSA [61]. Experimental results show that our method achieves better performance. Early results of this research were reported at the ECML conference [55].

The rest of this paper is organized as follows. Section II introduces our motivation and idea. We construct the compatible mappings and describe the details of the algorithm in Section III. The NLDR algorithm and its extension for out-of-samples are developed in Section IV. Section V demonstrates the experimental results. Some issues are discussed in Section VI. We give an overview of the related work in Section VII and draw conclusions in Section VIII.

II. MOTIVATION

A. Problem Formulation

Formally, in Non-Linear Dimensionality Reduction (NLDR) problems, we are given a set of n data points X = {x_1, x_2, ..., x_n} ⊂ R^m which are assumed to be sampled from a smooth manifold embedded in an m-dimensional observation space. We further assume that the data points do not contain noise and that X does not contain any outliers. The task of NLDR is to find a corresponding set Y = {y_1, y_2, ..., y_n} ⊂ R^d, with d < m, in which y_i is a low-dimensional representation of x_i (i = 1, 2, ..., n). What we seek is a global optimum Y derived from X in which the local relations between neighboring data points are well preserved. For clarity, we first introduce some definitions.

Definition 1. A k-nearest neighborhood of a data point x in X is a subset of X which contains the k nearest data points in X, including x itself.

For each x_i ∈ X, we denote its k-nearest neighborhood by N_i = {x_{i_j}}_{j=1}^k ⊂ X. Here x_{i_j} is a data point in X and the subscript i_j denotes its index in {1, 2, ..., n}. These k data points in N_i will be locally coordinatized and then mapped to Y. In terms of mappings, this process can be denoted as follows:

x_{i_j} --h_i--> t_i^j --g_i--> y_{i_j},  j = 1, 2, ..., k

where y_{i_j} is the final low-dimensional vector of x_{i_j}. Further, for h_i, g_i and t_i^j, we have the following definitions:

Definition 2. h_i : R^m → R^d is called a local coordinatization mapping, which reduces the dimensionality of each x_{i_j} in N_i from m to d, namely, t_i^j = h_i(x_{i_j}) ∈ R^d, j = 1, 2, ..., k.

Definition 3. g_i : R^d → R^d is called a global alignment mapping, which aligns each t_i^j to y_{i_j}, namely, y_{i_j} = g_i(t_i^j) ∈ R^d, j = 1, 2, ..., k.

Definition 4. We call t_i^j a local coordinate of x_{i_j} ∈ X in R^d with respect to N_i and h_i, and call y_{i_j} ∈ Y its corresponding global coordinate in R^d.

During algorithm development, each t_i^j ∈ R^d will be represented in a local coordinate system, while each y_{i_j} ∈ R^d will be addressed in a global coordinate system. The local coordinate system changes from neighborhood to neighborhood, while the global coordinate system is kept unique; it is in this global system that we hope to obtain a global optimum.

B. Compatible Mappings

Generally, given a parameter space Ω ⊂ R^d and a mapping f : Ω → R^m with m > d, M = f(Ω) is called a parameterized manifold embedded in R^m. This manifold is locally homeomorphic to Euclidean space; that is, it can be locally coordinatized. This indicates that we can use local coordinates to exploit the local geometrical structures of the data. We then face the task of aligning the data points with local coordinates into a single global coordinate system in which the low-dimensional parameter space is represented.

Let N_i = {x_{i_j}}_{j=1}^k be a k-nearest neighborhood of x_i ∈ X. According to the above analysis, for each N_i (i = 1, ..., n) we need to introduce a local coordinatization mapping h_i : N_i → T_i and a global alignment mapping g_i : T_i → Y_i (⊂ Y ⊂ Ω), in which T_i = {t_i^j | t_i^j = h_i(x_{i_j}), x_{i_j} ∈ N_i} and Y_i = {y_{i_j} | y_{i_j} = g_i(t_i^j), t_i^j ∈ T_i}. Combining h_i and g_i, we obtain a compound mapping ϕ_i:

ϕ_i = g_i ∘ h_i : N_i → Y_i.    (1)

Note that we can construct a k-nearest neighborhood for each data point in X. Thus, in total we will need n such compound mappings {ϕ_i}_{i=1}^n, each of which will be employed to deal with k neighboring data points in X. To avoid conflicts during global alignment, some constraints should be introduced. Without loss of generality, suppose x is contained in the first L neighborhoods N_1, N_2, ..., N_L. Then we have

y = ϕ_i(x) = g_i ∘ h_i(x) = g_i(t_i),  i = 1, 2, ..., L    (2)

here t_i = h_i(x) is a local coordinate of x with respect to N_i and h_i. Eq. (2) indicates that the L local coordinates {t_i}_{i=1}^L of x should be aligned to a single global coordinate y ∈ Y (Fig. 1(a)). Furthermore, for the data points in the intersection of two neighborhoods, the following constraint should also be satisfied (Fig. 1(b)):

ϕ_1(N_1 ∩ N_2) = ϕ_2(N_1 ∩ N_2)    (3)

Now our task is to construct n mappings {ϕ_i}_{i=1}^n such that the constraints in (2) and (3) are satisfied.

Fig. 1. (a) Mapping a single data point to a global embedding. (b) Mapping the intersection of N_1 and N_2. The intersection is illustrated with the shadowed area.

Definition 5. Let ϕ_1 and ϕ_2 be two mappings, and N_1 ∩ N_2 ≠ ∅. We say ϕ_1 and ϕ_2 are compatible with each other if ϕ_1(x) = ϕ_2(x) holds for any x ∈ N_1 ∩ N_2.

To construct n compatible mappings {ϕ_i}_{i=1}^n, we have the following theorem:

Theorem 1. Let t_i^j = h_i(x_{i_j}). If for each ϕ_i = g_i ∘ h_i we have

y_{i_j} = g_i(t_i^j) = ϕ_i(x_{i_j}),  j = 1, 2, ..., k,    (4)

then ϕ_i, i = 1, ..., n, are compatible with each other.

Proof: For a data point x ∈ X contained in different neighborhoods, it may have different local coordinates after local coordinatization. However, Eq. (4) indicates that all such local coordinates will be aligned to a single y ∈ Y. Thus the constraints in (2) and (3) are satisfied. Based on the definition of compatibility, the theorem holds.


This theorem indicates that if the constraints in Eq. (4) are satisfied, we can construct h_i and g_i by considering only the data points in each k-nearest neighborhood N_i. In this paper, the local coordinates of the data points in N_i are calculated in the tangent space of M at x_i, and the local coordinatization mapping h_i is treated as a subspace projection (Subsection III-A). In addition, splines are used to construct the global alignment g_i (Subsections III-B, III-C, III-D, III-E).

III. CONSTRUCTING COMPATIBLE MAPPINGS

A. Tangent Space Projection

Suppose that M is a smooth manifold; then the tangent space T_x(M) ⊂ R^m is well defined at each point x ∈ M. Given the underlying generative model x = f(y) for the manifold, if f is sufficiently smooth, we can use a first-order Taylor expansion at y_i [61]:

x = f(y) = f(y_i) + ∂f(y_i) · (y − y_i) + o(||y − y_i||^2)    (5)

where ∂f(y_i) ∈ R^{m×d} is the Jacobian matrix whose column vectors span the tangent space, and y − y_i can be viewed as the centralized local coordinate calculated in the tangent space. Eq. (5) can be approximated as follows:

(y − y_i) ≈ [∂f(y_i)]^T (x − f(y_i))    (6)

Min et al. pointed out that, with dense sampling on the manifold, the Jacobian matrix can be evaluated by performing a singular value decomposition of the covariance matrix of the neighboring data points [34]. This is the foundation of local coordinatization in tangent spaces. Here we give the computation for the n data points in X.

Let X_i = [x_{i_1}, x_{i_2}, ..., x_{i_k}] ∈ R^{m×k} contain the k neighbors of x_i in N_i ⊂ X. Performing a singular value decomposition of the centralized matrix of X_i, we have

X_i · (I − (1/k) e e^T) = U_i Σ_i V_i^T,  i = 1, ..., n    (7)

where I is the k × k identity matrix, e = [1, 1, ..., 1]^T ∈ R^k, Σ_i = diag(σ_1, ..., σ_k) contains the singular values in descending order, U_i is an m × m matrix whose column vectors are the left singular vectors, and V_i is a k × k matrix whose column vectors are the right singular vectors.

Finally, we get the tangent coordinate t_i^j, which connects to Eq. (6):

t_i^j = (Ũ_i)^T · (x_{i_j} − x̄_i) = Σ̃_i · v_i^j,  j = 1, ..., k    (8)

here Ũ_i ∈ R^{m×d} contains the first d column vectors of U_i, x̄_i = (1/k) Σ_{j=1}^k x_{i_j}, Σ̃_i = diag(σ_1, ..., σ_d), and v_i^j is a d-dimensional vector which contains the first d components of the j-th row of the matrix V_i.

We use the tangent coordinate as the local coordinate. Thus, h_i can be defined as a tangent space projection operator from R^m to R^d, namely, h_i = (Ũ_i)^T.
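As an illustration, the tangent space projection of Eqs. (7)-(8) for a single neighborhood can be sketched in Matlab as follows. The function and variable names are our own (not part of the original implementation), and an economy-size SVD is used, which yields the same first d left singular vectors as the full decomposition in Eq. (7):

% Local coordinatization of one neighborhood (Eqs. (7)-(8)).
% Xi is an m-by-k matrix whose columns are the k neighbors of x_i (including x_i).
function Ti = tangent_coordinates(Xi, d)
    k  = size(Xi, 2);
    Xc = Xi - repmat(mean(Xi, 2), 1, k);   % centralize: Xi * (I - e*e'/k)
    [U, S, V] = svd(Xc, 'econ');           % left singular vectors span the tangent space
    Ti = U(:, 1:d)' * Xc;                  % d-by-k tangent coordinates; equals S(1:d,1:d)*V(:,1:d)'
end

For each x_i, the call Ti = tangent_coordinates(Xi, d) returns the d × k matrix [t_i^1, ..., t_i^k] of local coordinates used below.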

B. Mapping in Low-dimensional Space

The local coordinatization via singular value decomposition for the data points in each k-nearest neighborhood achieves two goals: (1) the neighboring data points are locally represented; (2) their dimensionality has already been reduced from m to d. As pointed out in Section II, our task now is to find a global alignment mapping g_i : R^d → R^d such that g_i(t_i^j) = y_{i_j} (j = 1, 2, ..., k).

In LTSA [61], g_i is taken to be a linear function, namely, g_i(t_i^j) = U t_i^j + b, where U ∈ R^{d×d} is an affine matrix and b ∈ R^d is a translation vector. Given {t_i^j, y_{i_j}}_{j=1}^k, the optimal U* and b* can be estimated within a standard least squares framework. With the optimal affine transformation, however, a data point may not be mapped to its target point (Fig. 2(b)). Based on this observation, here we use splines to map the local coordinates.

Let Y_i = [y_{i_1}, y_{i_2}, ..., y_{i_k}] ∈ R^{d×k}, and denote the r-th row of Y_i by [y_{i_1}^{(r)}, ..., y_{i_k}^{(r)}]. Our task is to develop d splines g_i^{(r)} : R^d → R, r = 1, 2, ..., d, such that the coordinate components can be faithfully mapped:

y_{i_j}^{(r)} = g_i^{(r)}(t_i^j),  j = 1, 2, ..., k    (9)

Mathematically, there are many forms of splines which could be constructed to satisfy the conditions in Eq. (9). However, Y_i is currently unknown and needs to be solved for globally. In other words, we should consider splines for which not only the conditions in Eq. (9) can be satisfied, but also the reconstruction error can be formulated explicitly in terms of Y_i. The spline developed in Sobolev space [36] meets our needs.


C. Splines

Without loss of generality, we consider how to deal with one coordinate component by fixing the superscript r in Eq. (9). For clarity, we use simpler notation in place of g_i^{(r)}, t_i^j and y_{i_j}^{(r)} in Eq. (9), namely, g ← g_i^{(r)}, t_j ← t_i^j and y_j ← y_{i_j}^{(r)}. Formally, we have

y_j = g(t_j),  j = 1, 2, ..., k    (10)

To construct g, we use the following minimization of the penalized sum of squares:

J(g) = (1/k) Σ_{j=1}^k (y_j − g(t_j))^2 + λ J_s^d(g)    (11)

where λ > 0 is a regularization parameter and J_s^d(g) is a smoothness penalty in d dimensions on g. We choose to optimize the functional J(g) in Sobolev space [36]. Specifically, J_s^d(g) is defined as follows [14], [32], [51]:

J_s^d(g) = Σ_{α_1 + ··· + α_d = s} [s! / (α_1! ··· α_d!)] ∫_{R^d} (∂^s g / (∂t_1^{α_1} ··· ∂t_d^{α_d}))^2 dt

where α_i, i = 1, ..., d, are positive integers. Duchon [14] and Meinguet [32] demonstrated that under some constraints there is a unique spline function that minimizes J(g):

g(t) = Σ_{i=1}^l β_i p_i(t) + Σ_{j=1}^k α_j φ_j(t)    (12)

where l = (d+s−1)!/(d!(s−1)!), {p_j(t)}_{j=1}^l is a set of polynomials spanning the l-dimensional space of polynomials in R^d of total degree less than s, and φ_j(t) is a Green's function [14]:

φ_j(t) = ||t − t_j||^{2s−d} · log(||t − t_j||) if d is even,  and  φ_j(t) = ||t − t_j||^{2s−d} if d is odd.    (13)

The constraints used to construct the unique function are the boundary conditions which ensure that the reconstruction error of J_s^d(g) is zero at infinity [14], [32], [51]:

Σ_{j=1}^k α_j · p_i(t_j) = 0,  i = 1, ..., l    (14)

The equations in Eq. (14) are well studied by analysts; under them, the function φ_j(·) in Eq. (13) is conditionally positive definite [58].


Now, substituting the constraints in Eq. (10) into Eq. (12) and combining them with Eq. (14), the coefficients α = [α_1, α_2, ..., α_k]^T ∈ R^k and β = [β_1, β_2, ..., β_l]^T ∈ R^l can be solved via the following linear equations:

A · [α; β] = [y; 0],  with  A = [K, P; P^T, 0] ∈ R^{(k+l)×(k+l)}    (15)

where y = [y_1, ..., y_k]^T, K is a k × k symmetric matrix with elements K_{ij} = φ(||t_i − t_j||), and P is a k × l matrix with elements P_{ij} = p_j(t_i). In the regularized formulation, K is replaced by K + λI. Since φ_j(·) is a conditionally positive definite function, the existence and uniqueness of the solution to (15) is ensured [33].

Finally, we show an example to demonstrate the interpolation ability of the spline. Here, the task is to map five points {a(0, 0), b(2, 2), c(5, 5), d(6, 3), e(6, 0)} to their corresponding target points {A(2, 0), B(3, 3), C(9, 5), D(5, 1), E(9, 0)}, as shown in Fig. 2. According to Eq. (9), we can write two groups of conditions: {g_1(a) = 2, g_1(b) = 3, g_1(c) = 9, g_1(d) = 5, g_1(e) = 9} and {g_2(a) = 0, g_2(b) = 3, g_2(c) = 5, g_2(d) = 1, g_2(e) = 0}. To construct the two splines, we take s = 2 and use the notation of the Cartesian coordinate system o-t_1t_2. Thus, the l = 3 primitive polynomials can be taken as {1, t_1, t_2}. Then

g_1(t) = β_0^1 + β_1^1·t_1 + β_2^1·t_2 + Σ_{j=1}^5 α_j^1 · (r_j)^2 · log(r_j)
g_2(t) = β_0^2 + β_1^2·t_1 + β_2^2·t_2 + Σ_{j=1}^5 α_j^2 · (r_j)^2 · log(r_j)    (16)

where r_1, r_2, r_3, r_4 and r_5 are the distances from t = [t_1, t_2]^T to a, b, c, d and e, respectively. The coefficients in Eq. (16) can be solved via Eq. (15).

Now we use g_1(t) and g_2(t) to map the points a, b, c, d and e; the results are shown in Fig. 2(a). With splines, all the data points are faithfully mapped to their own target points. For comparison, we also perform an optimal affine transformation for this task. The results are illustrated in Fig. 2(b): the transformed data points fail to meet their target points. Numerically, the sum of squared errors between the mapped data points and the true target points is 1.1324 × 10^{-28} under splines with λ = 0.0001, while it is 3.6379 under the optimal affine transformation.
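For concreteness, the coefficients in Eq. (16) can be obtained by building and solving the linear system of Eq. (15) directly. The following Matlab sketch is our own (the variable names are not from the original implementation) and assumes s = 2, d = 2, so that the Green's function is r^2 log r and the polynomial block is {1, t_1, t_2}:

% Thin-plate spline fit for the five-point example (Eqs. (15)-(16)).
T = [0 0; 2 2; 5 5; 6 3; 6 0];          % source points t_j, one per row
Y = [2 0; 3 3; 9 5; 5 1; 9 0];          % target points, one column per spline g_1, g_2
k = size(T, 1);  lambda = 1e-4;

R = sqrt(max(sum(T.^2,2)*ones(1,k) + ones(k,1)*sum(T.^2,2)' - 2*(T*T'), 0));  % pairwise distances
K = R.^2 .* log(R + (R == 0));          % Green's function r^2*log(r), with 0*log(0) := 0
P = [ones(k,1), T];                      % polynomial block {1, t1, t2}
A = [K + lambda*eye(k), P; P', zeros(3)];

coef  = A \ [Y; zeros(3, 2)];            % columns hold (alpha; beta) of g_1 and g_2
alpha = coef(1:k, :);  beta = coef(k+1:end, :);
Ymapped = P*beta + K*alpha;              % evaluate the splines at the source points

With the small regularization above, Ymapped reproduces the targets Y up to round-off, matching the numerical comparison reported in the text.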

Fig. 2. Source points (shown as "+"), target points (shown as circles), and the mapped results (shown as squares). (a) The results obtained by splines; (b) the results obtained by the optimal affine transformation.

Notes. The spline in Eq. (12) has several good properties. First, g(t_j) ≈ y_j holds for each data point t_j (see Fig. 2(a)); this property is necessary for constructing compatible mappings. Second, it is smooth, and we can use it to interpolate new points near {t_j}_{j=1}^k, which yields a mechanism for treating out-of-samples. Third, this spline has no parameters that must be tuned to the data, which may facilitate real-world applications. In contrast, the commonly used kernel ridge regression [43] with Gaussian kernel functions needs to be supplied a parameter σ, which should be well tuned to the data. Finally, as pointed out in Section III-B, another reason for using this spline is that the reconstruction error can be formulated in terms of y (see Section III-D). This gives us a way to develop an objective function from which a global optimum can be obtained (see Section IV).

D. Reconstruction Error

The value of J(g) in Eq. (11) can be approximately evaluated as follows [51]:

J(g) ≈ (1/k) Σ_{j=1}^k (y_j − g(t_j))^2 + λ α^T K α    (17)

Actually, the regularization parameter λ controls the amount of smoothness of the spline near the k neighboring data points. In the limiting case λ = 0, these data points are exactly mapped. With a small enough λ, the constraints can also be well satisfied (Fig. 2(a)); that is, the first term in Eq. (17) can be evaluated as zero in this case. Thus we only need to consider the second term. Further omitting the scalar λ, we have

J(g) ∝ α^T K α    (18)

Theorem 2. For α and y, it holds that

α^T K α = y^T B y    (19)

where B is the upper left k × k subblock of A^{-1}.


Proof: Let the inverse matrix of A be [B, C; C^T, D], in which B ∈ R^{k×k}, C ∈ R^{k×l} and D ∈ R^{l×l}. According to Eq. (15), we can easily obtain α = By. In addition, also according to Eq. (15), we have Kα + Pβ = y and P^T α = 0. Then

Kα + Pβ = y ⇒ α^T Kα + α^T Pβ = α^T y ⇒ α^T Kα = α^T y ⇒ α^T Kα = y^T B^T y ⇒ α^T Kα = y^T By.

The last equality holds since A is symmetric and its inverse is therefore also symmetric; thus B is symmetric as well.

Now the reconstruction error can be written in terms of the vector y. This gives us a way to construct the objective function for solving the NLDR problem (see Section IV). Finally, we give a property of the matrix B:

Theorem 3. The sum of each row vector of matrix B equals zero.

Proof: Let e = [1, 1, ..., 1]^T ∈ R^k. We only need to prove Be = 0. Based on Theorem 2, there exists an α such that α^T Kα = e^T Be = α^T e. Note that Eq. (14) contains the constraint α^T e = e^T α = 0, which corresponds to the constant polynomial term "1". Thus, e^T Be = 0. Furthermore, we can prove that B is positive semi-definite. Actually, K is a kernel matrix calculated from {t_j}_{j=1}^k with the Green's function (a radial basis function), and it is naturally positive semi-definite [43]. Then, based on Theorem 2, for any vector z there exists a vector α such that z^T Bz = α^T Kα ≥ 0. Thus B is positive semi-definite. According to matrix theory [18], for a positive semi-definite matrix B there exists a matrix Q such that Q^T Q = B. Thus e^T Be = 0 ⇒ e^T Q^T Qe = 0 ⇒ Qe = 0 ⇒ Q^T Qe = 0 ⇒ Be = 0.
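These two properties can also be checked numerically. The following Matlab sketch is our own sanity check (for the thin-plate case s = 2, d = 2 only): it builds A for a random neighborhood, extracts the subblock B, and verifies that its row sums vanish and that it is positive semi-definite up to round-off:

% Sanity check of Theorems 2-3 on a random neighborhood (a sketch; s = 2, d = 2).
k = 10;  d = 2;  T = randn(k, d);
R = sqrt(max(sum(T.^2,2)*ones(1,k) + ones(k,1)*sum(T.^2,2)' - 2*(T*T'), 0));
K = R.^2 .* log(R + (R == 0));
P = [ones(k,1), T];
A = [K, P; P', zeros(d+1)];
Ainv = inv(A);
B = Ainv(1:k, 1:k);                 % upper left k-by-k subblock of A^{-1}
rowSums = B * ones(k,1);            % numerically zero (Theorem 3)
minEig  = min(eig((B + B')/2));     % nonnegative up to round-off (positive semi-definiteness)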

E. Mapping Local Coordinates to Global Coordinates

Now we give the steps of constructing the d splines in g_i = (g_i^{(1)}, ..., g_i^{(d)}) in Eq. (9), under which the local coordinates are mapped to the global coordinates.

First, replace t_j in Eq. (10) by t_i^j in Eq. (8), and calculate the coefficient matrix A in Eq. (15). Adding a subscript i to A, we get A_i.


Second, introduce the k low-dimensional coordinates y_{i_1}, y_{i_2}, ..., y_{i_k}, and use Eq. (15) d times, once for each coordinate component. Then we have

A_i · [α_1, ..., α_d; β_1, ..., β_d] = [Y_i^T; 0]    (20)

here Y_i = [y_{i_1}, y_{i_2}, ..., y_{i_k}] ∈ R^{d×k}.

Third, solve Eq. (20) and substitute the coefficients (α_1, β_1), ..., (α_d, β_d) into Eq. (12) to construct the d splines g_i^{(1)}, ..., g_i^{(d)}.

Finally, the errors of constructing these d splines for mapping the d coordinate components can be summed together. Let y^{(r)} denote the r-th row vector of Y_i; then, according to Eq. (18), we have

e_i = Σ_{r=1}^d y^{(r)} · B_i · (y^{(r)})^T = tr(Y_i · B_i · Y_i^T)    (21)

where tr is the trace operator and B_i is the upper left k × k subblock of A_i^{-1}.
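Putting these steps together, the local error matrix B_i of Eq. (21) can be sketched in Matlab as follows. This is our own sketch and assumes s = 2 and an even d (so that the Green's function is r^{2s-d} log r, as in the examples above); Ti is the d × k output of the tangent-coordinate sketch in Subsection III-A:

% Per-neighborhood error matrix B_i of Eq. (21) (sketch; s = 2, even d assumed).
function Bi = local_error_matrix(Ti, lambda)
    [d, k] = size(Ti);
    T = Ti';                                        % k-by-d, one local coordinate per row
    R = sqrt(max(sum(T.^2,2)*ones(1,k) + ones(k,1)*sum(T.^2,2)' - 2*(T*T'), 0));
    K = R.^2 .* log(R + (R == 0));                  % Green's function for even d
    P = [ones(k,1), T];                             % l = d+1 polynomial terms for s = 2
    A = [K + lambda*eye(k), P; P', zeros(d+1)];
    Ainv = inv(A);
    Bi = Ainv(1:k, 1:k);                            % upper left k-by-k subblock of A_i^{-1}
end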

IV. LOCAL SPLINE EMBEDDING

A. Global Embedding

Based on Eq. (21), from the n neighborhoods we can get n reconstruction errors. Summing them together, an objective function related to Y = {y_i}_{i=1}^n can be constructed. The steps can be summarized as follows.

First, for the k data points {x_{i_j}}_{j=1}^k in each N_i, i = 1, ..., n, we calculate their local coordinates {t_i^j}_{j=1}^k according to the method introduced in Subsection III-A.

Second, based on these k local coordinates, we calculate the reconstruction error e_i in Eq. (21). Summing all the reconstruction errors together, we have

E(Y) = Σ_{i=1}^n e_i = Σ_{i=1}^n tr(Y_i B_i Y_i^T)    (22)

Note that Y_i = [y_{i_1}, y_{i_2}, ..., y_{i_k}] ∈ R^{d×k} collects k column vectors of Y = [y_1, y_2, ..., y_n], with indices i_1, i_2, ..., i_k ∈ {1, 2, ..., n}. According to these indices, we can construct a column selection matrix S_i ∈ R^{n×k} such that Y_i = Y S_i. Then Eq. (22) takes the following form:

E(Y) = Σ_{i=1}^n tr(Y S_i B_i S_i^T Y^T) = tr(Y S B S^T Y^T)    (23)

where S = [S_1, ..., S_n] ∈ R^{n×nk} and B = diag(B_1, ..., B_n) ∈ R^{nk×nk}.


We use E(Y) in Eq. (23) as the objective function. To avoid degenerate solutions, we add the constraint YY^T = I. Then the minimum of E(Y) for the d-dimensional global embedding can be obtained from the eigenvectors of the symmetric sparse matrix SBS^T associated with its d + 1 smallest eigenvalues: we discard the eigenvector corresponding to the zero eigenvalue and use the next d eigenvectors to construct Y = [y_1, y_2, ..., y_n] ∈ R^{d×n}. In this way, we achieve a global optimum.

Now we give some explanations of the final results collected in Y = {y_i}_{i=1}^n. We have the following theorem:

Theorem 4. Let M = SBS^T. Then M has a zero eigenvalue with corresponding eigenvector e = [1, 1, ..., 1]^T ∈ R^n.

Proof: M is accumulated from the n local error matrices {B_i}_{i=1}^n, which are developed according to Eq. (19). For each B_i, the operation S_i B_i S_i^T in Eq. (23) simply assigns the elements of B_i to a k × k sub-matrix of M whose rows and columns are selected according to the k indices i_1, i_2, ..., i_k. Based on Theorem 3, the sum of each row vector of each B_i is zero, so the sum of each row of M is also zero. This indicates that Me = 0, that is, Me = 0 · e, which finishes the proof.

Theorem 4 explains why, when constructing Y, we discard the eigenvector corresponding to the zero eigenvalue and use the eigenvectors corresponding to the second through (d + 1)-th smallest eigenvalues. Theorem 4 also tells us that the n coordinates in Y = {y_i}_{i=1}^n ⊂ R^d are located around the origin of the R^d coordinate system. Note that the d rows of Y = [y_1, y_2, ..., y_n] ∈ R^{d×n} are d eigenvectors of M, and M has the eigenvector e; thus each row of Y is orthogonal to e, namely, Ye = 0. This indicates that the mean of {y_i}_{i=1}^n equals zero, and as a result {y_i}_{i=1}^n are located around the coordinate origin. Finally, {y_i}_{i=1}^n are further confined to a d-dimensional unit sphere. This fact can be easily justified from the constraint YY^T = I used when minimizing the objective function in Eq. (23).
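The complete global embedding step can be sketched in Matlab as follows. This is our own sketch (reusing the tangent_coordinates and local_error_matrix helpers sketched above, under the same s = 2, even-d assumption); for clarity M = SBS^T is accumulated as a dense matrix and diagonalized with eig, although it is sparse and a sparse eigensolver would normally be preferred:

% Global embedding of Eqs. (22)-(23): accumulate local error matrices and
% keep the eigenvectors of the 2nd through (d+1)-th smallest eigenvalues.
function Y = lse_embed(X, d, k, lambda)
    n  = size(X, 2);
    D2 = sum(X.^2,1)'*ones(1,n) + ones(n,1)*sum(X.^2,1) - 2*(X'*X);  % squared distances
    M  = zeros(n, n);
    for i = 1:n
        [ignore, idx] = sort(D2(:, i));            % k nearest neighbors of x_i (including x_i)
        nbr = idx(1:k);
        Ti  = tangent_coordinates(X(:, nbr), d);
        Bi  = local_error_matrix(Ti, lambda);
        M(nbr, nbr) = M(nbr, nbr) + Bi;            % plays the role of S_i B_i S_i^T in Eq. (23)
    end
    [V, E] = eig((M + M')/2);
    [ignore, order] = sort(diag(E));               % eigenvalues in ascending order
    Y = V(:, order(2:d+1))';                       % skip the zero eigenvalue; d-by-n embedding
end

Because the retained eigenvectors are orthonormal, the rows of Y satisfy YY^T = I, as required by the constraint above.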


B. Extension for Out-of-Samples

From Subsection IV-A we see that our algorithm runs in a batch mode, once for all the given n data points. The same behavior can be observed in other algorithms, such as Isomap, LLE, Laplacian Eigenmap, LTSA, and so on. That is, the algorithm cannot generate an explicit function for dealing with new data points. This is known as the out-of-sample problem in manifold learning. Following our framework, here we extend our algorithm to out-of-samples.

Suppose we are given Y = {y_i}_{i=1}^n for X = {x_i}_{i=1}^n, and the task is to map a new data point x sampled from the manifold M. Let X' = X ∪ {x}. We first construct a (k + 1)-nearest neighborhood N_x of x from X'. Without loss of generality, we denote N_x = {x, x_1, ..., x_k}, with x_j ∈ X, j = 1, ..., k.

First, for the k + 1 data points in N_x we use the method presented in Subsection III-A to get k + 1 local coordinates, denoted by t, t_1, ..., t_k. Then, for the k previously treated data points, we have

y_j = g(t_j),  j = 1, ..., k    (24)

Note that y_j ∈ Y, j = 1, ..., k, are known. Using the above k interpolation conditions, we can construct d splines for mapping the d coordinate components and organize them as a vector function g (Subsections III-B, III-D and III-E). Based on g, we can map t into the d-dimensional global coordinate system and thus get the low-dimensional coordinate of x.
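A Matlab sketch of this out-of-sample step, under the same assumptions as before (s = 2, even d, our own helper names), is given below; x is the new point, and X and Y hold the previously learned data and embedding:

% Out-of-sample embedding of a new point x (Subsection IV-B), a sketch.
function y = lse_out_of_sample(x, X, Y, d, k, lambda)
    n = size(X, 2);
    dist2 = sum((X - x*ones(1, n)).^2, 1);
    [ignore, idx] = sort(dist2);
    nbr = idx(1:k);                                   % k previously embedded neighbors of x
    Tall = tangent_coordinates([x, X(:, nbr)], d);    % local coordinates of x and its neighbors
    t = Tall(:, 1);  T = Tall(:, 2:end)';             % t: new point; T: k-by-d neighbor coordinates
    % Fit d splines through (T, Y(:, nbr)) as in Eq. (15), then evaluate them at t.
    R = sqrt(max(sum(T.^2,2)*ones(1,k) + ones(k,1)*sum(T.^2,2)' - 2*(T*T'), 0));
    K = R.^2 .* log(R + (R == 0));
    P = [ones(k,1), T];
    A = [K + lambda*eye(k), P; P', zeros(d+1)];
    coef  = A \ [Y(:, nbr)'; zeros(d+1, d)];
    alpha = coef(1:k, :);  beta = coef(k+1:end, :);
    r   = sqrt(sum((T - ones(k,1)*t').^2, 2));        % distances from t to the neighbor coordinates
    phi = r.^2 .* log(r + (r == 0));
    y   = ([1, t'] * beta + phi' * alpha)';           % d-by-1 global coordinate of x
end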


Fig. 3. The 1000 source data points sampled from a Swiss roll surface and the learned results: (a) data, (b) Isomap, (c) LLE, (d) LE, (e) HLLE, (f) LTSA, (g) LSE.

Fig. 4. The 1000 source data points sampled from a Swiss roll surface with a hole and the learned results: (a) data, (b) Isomap, (c) LLE, (d) LE, (e) HLLE, (f) LTSA, (g) LSE.

Fig. 5. The 1500 source data points sampled from an S-surface with four holes and the learned results: (a) data, (b) Isomap, (c) LLE, (d) LE, (e) HLLE, (f) LTSA, (g) LSE.

Fig. 6. The 539 original data points sampled along a 3D trefoil knot and the learned results: (a) data, (b) Isomap, (c) LLE, (d) LE, (e) HLLE, (f) LTSA, (g) LSE.

V. EXPERIMENTAL RESULTS

In this section, we validate our Local Spline Embedding (LSE) and its extension for out-of-samples, using toy data points sampled from a Swiss roll surface, an S-surface and a trefoil curve, and real-world data points of face images and teapot images. Isomap, LLE, LE, HLLE, LTSA and LSE are compared on these data sets.

A. Global Embedding Results

For Isomap, LLE, LE, HLLE, LTSA and LSE, there are two common parameters, k and d. LE has another parameter, σ, in the Gaussian kernel used to construct the Laplacian matrix [2]. These parameters are set manually in the experiments. In addition, LSE has the parameter λ in Eq. (15); here we fix it to 0.0001.

Fig. 3(a) shows n = 1000 data points sampled from a Swiss roll surface. Geometrically, the parameter domain of this surface is a two-dimensional (2D) rectangle. The results with k = 12 and σ = 4.0 are illustrated in Fig. 3(b) to Fig. 3(g). As can be seen, Isomap yields a rectangle since it attempts to preserve the geodesic distances between all pairs of data points.

LLE and LE generate irregular shapes in this experiment. We also tested LE with different σ ranging from 0.0001 to 10.0 and obtained similar results in these trials. HLLE, LTSA and LSE generate a square; the reason is that they employ the orthogonality constraint YY^T = I to avoid degenerate solutions. From the viewpoint of manifold learning or shape preservation, these results are acceptable, since with an affine transformation the square can be transformed into a rectangle.

Fig. 4 demonstrates another example. The n = 1000 data points are also sampled from the same Swiss roll surface, but with a hole. Thus, the intrinsic structure is a rectangle with a hole at the center. We again set k = 12 and take σ = 4.0 for LE. Clearly, the results learned by Isomap, LLE and LE are not well shaped; the shapes are distorted. This indicates that the topological structure of the data may affect the performance of these three algorithms.


Fig. 7. The learned results of 698 statue face images: (a) Isomap, (b) LLE, (c) LE, (d) HLLE, (e) LTSA, (f) LSE.

In contrast, HLLE, LTSA and LSE discover the true structure.

To test our algorithm on a data set sampled from a more complex manifold, we cut an S-surface with four holes, so the intrinsic structure is a rectangle with four holes. The n = 1500 data points in Fig. 5 are sampled from this surface, and the results are obtained with k = 12 and σ = 4.0. LLE, HLLE, LTSA and LSE find the well-shaped structure.

Fig. 6 shows an example where n = 539 data points are sampled from a closed trefoil knot [53]. We take d = 2 to discover its global geometrical structure; thus the true shape is a circle. The results are learned with k = 7, and σ = 0.05 in LE. In our implementation, HLLE fails to unfold the knot.

The results in Fig. 7 are obtained from a real data set of n = 698 statue face images taken with two pose parameters (left-right and up-down) and one lighting-direction parameter [48]. Each image includes 64 × 64 grayscale pixels, so the dimensionality of the original data is 4096. In this experiment, we take k = 12 and d = 3, and set σ = 4000.0 in LE. The intrinsic structure of the manifold hidden in this data set is unknown. To explain the learned results, representative images are illustrated near their positions, marked with squares.


Fig. 8. The learned results of 1965 Frey face expression images: (a) Isomap, (b) LLE, (c) LE, (d) HLLE, (e) LTSA, (f) LSE.

As can be seen, Isomap and LLE show the variation of poses, but fail to render a clear cue about the variation of lighting directions. In contrast, the other four algorithms convey clearly the variation of both face poses and lighting directions.

Fig. 8 demonstrates the results on a set of n = 1965 images with different face expressions [39]. The size of each image is 28 × 20, so the dimensionality of the original data is 560. In this experiment, we take k = 12 and d = 3, and set σ = 4000.0 in LE. The intrinsic structure of the manifold hidden in this data set is also unknown. To show the variations, representative images are illustrated near their positions with squares. In this experiment, HLLE cannot separate the face expressions well; for example, some of the smile and surprise expressions are mapped together. Isomap, LLE and LE separate the expressions correctly, but some transition expressions are aligned together. LTSA and LSE generate similar results; compared with LTSA, LSE renders the transitions between different expressions more clearly.

To test real-world data containing a closed curve, we run the six algorithms on n = 400 color images taken of a teapot from different viewpoints along a circle [53]. The size of the images is 76 × 101, and the manifold is embedded in R^{23028}.


Fig. 9. The learned results of 400 color images taken of a teapot: (a) Isomap, (b) LLE, (c) LE, (d) HLLE, (e) LTSA, (f) LSE.

Fig. 10. The learned results of 200 color images of the teapot: (a) Isomap, (b) LLE, (c) LE, (d) HLLE, (e) LTSA, (f) LSE. The 2D coordinates of the points in (e) and (f) are manually scaled for printing.


The intrinsic dimensionality d equals one, but we take d = 2 to explore the intrinsic structure, namely, a circle. The results are illustrated in Fig. 9. In this experiment, we take k = 7 and set σ = 400.0 in LE. As can be seen, LLE fails to find the true structure.

In the above six experiments, we tested the six algorithms on toy and real-world data sets with different manifold shapes. In summary, Isomap, LLE, LE and HLLE may perform well on one data set but produce unsatisfactory results on another. In contrast, LTSA and LSE successfully discover the intrinsic structures in all the tested data sets.

To further compare these algorithms, Fig. 10 shows another example, in which the first 200 teapot images used in Fig. 9 are tested. For these data points, the intrinsic dimensionality is one, and the intrinsic structure is a line segment of camera angles. Here we also take d = 2 to clearly demonstrate the relations between data points in the 2D plane. In this experiment, we set k = 30. From Fig. 10 we see that some segments of the curves learned by Isomap, LLE, LE, HLLE and LTSA are distorted. In this example, one rule of evaluation can be the order relationship between data points: if we project them onto a line, they should be correctly ordered along a fixed direction. Specifically, suppose the second data point (corresponding to the image taken at 0.9°) is located to the right of the first data point (corresponding to the image taken at 0°); then the third data point (corresponding to the image taken at 1.8°) should be located to the right of the second data point. In Fig. 10, it is easy to check that the order of the data points obtained by Isomap and HLLE is destroyed if we project them onto the horizontal axis. For LLE, LE and LTSA, the order of the data points in the circled regions is not faithfully maintained. In contrast, LSE keeps the order of the data points well. The neighborhood of each data point contains 30 images taken within a camera angle of about 27°. Linearly modeling the data points in such a large neighborhood is inadequate for representing the curve (an arc of 27°). In contrast, a spline is a nonlinear mapping, which shows more adaptability to locally nonlinear neighborhoods.

B. Embedding Out-of-samples

As mentioned in the Introduction, LLE and LTSA can be directly extended to out-of-samples. Let x be a new data point to be embedded, let its k neighbors be {x_i}_{i=1}^k ⊂ X, and let the corresponding low-dimensional coordinates be {y_i}_{i=1}^k ⊂ Y. According to LLE, we have x = Σ_{i=1}^k w_i x_i, where w_i, i = 1, ..., k, are weights with Σ_{i=1}^k w_i = 1.


Thus, the low-dimensional coordinate y of x can be calculated as follows [39], [41]:

y = Σ_{i=1}^k w_i y_i    (25)

For LTSA, the steps for embedding new data points are similar to our method. The difference is that in Eq. (24) the function g is replaced by an affine transformation matrix A. In terms of homogeneous coordinates, we have [1, y_i^T] = [1, t_i^T] · A, i = 1, ..., k. After A is estimated by minimizing the least squares error [61], we have

[1, y^T] = [1, t^T] · A    (26)

here t is the local coordinate of x (see Subsection IV-B).

Now we demonstrate an example of embedding new data points, using LLE (Eq. (25)), LTSA (Eq. (26)) and LSE (Subsection IV-B). The 400 teapot images in Fig. 9 are used in this experiment. They are equally divided into two subsets. One subset of 200 data points is first learned by LSE.¹ Our task is to map the remaining 200 data points according to the results previously learned from the first subset. Fig. 11 illustrates the comparative results with LLE, LTSA and LSE. For each algorithm, we report three experiments, with k = 5, 15 and 25. To print the results clearly, only 50 data points are shown in Fig. 11, including 25 previously learned data points (symbols "·") and 25 new data points (symbols "+").

We see that for k = 5 all three algorithms map the new data points correctly. For data points in a small neighborhood on a circle, the neighborhood can be approximately represented by the tangent line, and such a locally linear structure can be well captured by LLE and LTSA. When k increases, however, the data points lie on a larger arc. In this case, linearly modeling the data points does not help to unfold the local geometry (here, an arc); that is, even with the optimal affine transformation, data points with nonlinear structure may not be mapped to their true positions. In contrast, LSE shows more adaptability to the data, due to the nonlinear nature of splines.

VI. DISCUSSIONS

A. Parameters of the Algorithm

LSE has three parameters: the low dimensionality d, the number of nearest neighbors k, and the regularization parameter λ in Eq. (15).

¹ The data points can also be learned by other algorithms.


Fig. 11. Mapping new data points with (a) LLE, (b) LTSA and (c) LSE. The new data points are shown with symbols "+". In columns, from the first to the third are the results learned with k = 5, 15 and 25, respectively.

Several algorithms have been developed for determining a proper d from the data [45], [16]. Here, we only discuss the parameters λ and k.

The regularization parameter λ controls the amount of smoothness of the spline near the k neighboring data points. In the limiting case λ = 0, these k data points are exactly mapped. For comparison, Fig. 12 shows an example on the 400 teapot images with different λ; from the first to the last are the results obtained by LSE with λ = 10^{-6}, 10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1 and 10, respectively. As can be seen from Fig. 12, there are no significant differences between the results. Actually, λ is largely independent of the data, which indicates that we need not tune it carefully; experiments with different λ on the data sets used in Subsection V-A also show that it can be chosen from a large interval.

Generally, how to select a proper k directly from the data is an open problem in manifold learning. Fig. 13 shows the results learned from the 400 teapot images with different k by Isomap, LLE, LE, HLLE, LTSA and LSE, listed row by row. In this experiment, k is set to 4, 5, 10, 15, 20 and 25, and the learned results are shown column by column in Fig. 13.


Fig. 12. The results learned by LSE with different λ. From the first to the last, the panels correspond to λ = 10^{-6}, 10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1 and 10. In this experiment, k = 7.

Fig. 13. The learned results with different k. From the first to the sixth row are the results with Isomap, LLE, LE, HLLE, LTSA and LSE. The columns correspond to k = 4, 5, 10, 15, 20 and 25, respectively.

As can be seen from Fig. 13, with a k that is too small or too large the algorithms may yield incorrect results. Under this observation, a possible principle for selecting a proper k is that the corresponding k nearest data points should be well linearly reconstructed in a subspace, for example, a tangent space of the smooth manifold.

B. Learning from Noisy Data

In the problem formulation in Subsection II-A, we assume the data points do not contain noise. To investigate the performance of our algorithm on noisy data, here we conduct experiments on two data sets. First, 1000 data points corrupted with Gaussian noise of standard deviation 0.7 are sampled from the Swiss roll by the following Matlab statements:

t = (3 * pi / 2) * (1 + 2 * rand(1, 1000));
height = 21 * rand(1, 1000);
X = [t .* cos(t); height; t .* sin(t)];
stdpara = 0.7;
X = X + stdpara .* randn(size(X));


Fig. 14. The 1000 noisy data points and the learned results, using k = 12: (a) data, (b) Isomap, (c) LLE, (d) LE, (e) HLLE, (f) LTSA, (g) LSE.

In this experiment, we take k = 12 and σ = 4.0 for LE. The learned results with Isomap, LLE, LE, HLLE, LTSA and LSE are illustrated in Fig. 14. Compared with the results illustrated in Fig. 3, the learned results are significantly degraded. LE and LSE generate better results in that the learned shape does not fold onto itself; however, LE seriously compresses the shape in local regions, and LTSA drags the head and tail close together. By comparison, the results learned by LSE are almost locally unfolded. We further tested these algorithms with different standard deviations "stdpara" on this data set. In summary, when stdpara > 0.6, the shapes learned by our method and by the other algorithms may be blended or even overlapped together.

Second, the 400 teapot images corrupted with significant salt and pepper noise are considered. Here two experiments are reported. In the first experiment, for each image "I" with 101 × 76 pixels, we use the following Matlab statements to add salt and pepper noise:

dRatio = 0.2;
Inoised = imnoise(I, 'salt & pepper', dRatio);


Fig. 15. The images with salt and pepper noise. The percentage of pixels with noise is 20%.

Fig. 16. The results learned by Isomap, LLE, LE, HLLE, LTSA and LSE from the noisy images shown in Fig. 15, using k = 10.

Here "dRatio = 0.2" means that in total 20 percent of the pixels have noise added. Some examples are illustrated in Fig. 15. The learned results from these noisy images are shown in Fig. 16. They are all similar to a circle, indicating that the results are acceptable. In another experiment, we increase the noise to 35% by taking "dRatio = 0.35". Some noisy images are illustrated in Fig. 17, and the learned results are shown in Fig. 18. In this case, the learned results are distorted. With "dRatio = 0.2", for most data points about 90% of the nearest neighbors are their true neighbors. With "dRatio = 0.35", however, the neighborhoods of most data points contain several incorrect neighbors, and some distant data points are dragged together.

Fig. 17. The images with salt and pepper noise. The percentage of pixels with noise is 35%.


Fig. 18. The results learned by Isomap, LLE, LE, HLLE, LTSA and LSE from the noisy images shown in Fig. 17, using k = 10.

Fig. 19. The results learned by Isomap, LLE, LE, HLLE, LTSA and LSE from the denoised image data of Fig. 17, using k = 10. The results learned directly by PCA are also illustrated for comparison.

As a result, the learned shape is distorted. We also tested different proportions of noisy pixels. In summary, when "dRatio" is greater than 0.3, the results learned by these six algorithms may be distorted. This proportion indicates that a significant amount of noise can be tolerated, but learning from data with a very large amount of noise remains a challenging task.

To further investigate the performance of the algorithm on noisy data, we conduct another experiment, reported in Fig. 19. First, PCA [27] is employed to approximately represent the noisy images shown in Fig. 17; the subspace dimensionality is manually set to 50. Then, LSE and the other five algorithms are run on the PCA-preprocessed data. The learned results are shown in Fig. 19. Compared with those in Fig. 18, the results are significantly improved. For comparison, the results obtained by PCA with a 2D subspace are also illustrated in Fig. 19; as can be seen, the shape is far from a circle. Additionally, we also performed PCA with different subspace dimensionalities. In summary, when the subspace dimensionality is in [10, 90], the results are all similar to those reported in Fig. 19; when it is in [120, 399], however, the results become unsatisfactory and in most cases similar to those shown in Fig. 18.
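Sketched in Matlab (with Xnoisy standing for the 23028 × 400 matrix of noisy teapot images, a variable name of our own), the PCA preprocessing amounts to:

% Project the noisy images onto their first 50 principal components before manifold learning.
Xc = Xnoisy - repmat(mean(Xnoisy, 2), 1, size(Xnoisy, 2));   % center the m-by-n data
[U, S, V] = svd(Xc, 'econ');
Xpca = U(:, 1:50)' * Xc;                                     % 50-by-n data fed to LSE and the others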


TABLE I
A brief description of the data sets, parameters, and the computation time (in seconds) on a PC with a 1.73 GHz CPU and 512 MB RAM, using Matlab 6.5.

          Swiss   S-surf.  Trefoil  Statue  Frey    Teapot
 n        1000    1500     539      698     1965    400
 m        3       3        3        4096    560     23028
 d        2       2        2        3       2       2
 k        12      12       7        12      12      7
 Isomap   117.9   390.5    20.0     42.1    866.4   8.4
 LLE      2.4     3.5      0.3      7.5     12.6    13.1
 LE       3.3     7.1      1.4      5.3     278.3   7.3
 HLLE     28.9    98.7     4.6      22.5    50.7    18.7
 LTSA     6.7     15.1     0.7      14.5    43.4    23.2
 LSE      6.9     15.3     0.9      15.2    49.2    26.6

C. Computation Time

The main computation in LSE includes three steps: finding the k nearest neighbors of each data point, calculating the local coordinates of the k neighbors and the coefficient matrix A_i in Eq. (20), and minimizing the function in Eq. (23). On average, about 80% of the time is spent on the first step. To further compare with the other algorithms, Table I reports the computation time of the experiments in Subsection V-A. We see that LSE is much faster than HLLE and Isomap on the large data sets, but slower than LLE and LE. On average, it takes a similar computation time to LTSA.

D. Quantitative Evaluation

To give a quantitative evaluation of manifold learning algorithms, here we introduce a measure for data sets whose true manifold is known. Suppose we are given the true low-dimensional coordinates Z = {z_i}_{i=1}^n ⊂ R^d of the data points in X = {x_i}_{i=1}^n ⊂ R^m and their recovered low-dimensional coordinates Y = {y_i}_{i=1}^n ⊂ R^d. We calculate the Procrustes Measure (PM) [44] to evaluate the similarity between Y and Z.
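With Z and Y stored as d × n matrices, this measure can be computed in Matlab with the procrustes function of the Statistics Toolbox; this is a sketch, and we assume its conventions match those of [44]:

% Procrustes measure between the recovered embedding Y and the true coordinates Z.
% procrustes returns the normalized sum of squared errors after the best
% translation, reflection, orthogonal rotation and scaling.
pm = procrustes(Z', Y');      % rows are observations, columns are coordinates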


TABLE II
The Procrustes measure on the data sets. The data sets "Swiss-1" and "Swiss-2" are used in Fig. 3 and 4, respectively; "S-surface", "Trefoil" and "Half-tea" are used in Fig. 5, 6 and 10, respectively; "Teapot-1" and "Teapot-2" are used in Fig. 9 and 11, respectively.

 Data set    Isomap   LLE     LE      HLLE    LTSA    LSE
 Swiss-1     0.63     0.27    0.91    0.13    0.13    0.13
 Swiss-2     0.60     0.31    0.81    0.14    0.14    0.14
 S-surface   0.02     0.06    0.13    0.06    0.06    0.05
 Trefoil     0.002    0.007   0.04    0.06    0.01    0.01
 Teapot-1    0.001    0.64    0.001   0.004   0.005   0.006
 Half-tea    0.16     0.20    0.04    0.99    0.08    0.06
 Teapot-2    -        0.31    -       -       0.03    0.003

PM determines a linear transformation (translation, reflection, orthogonal rotation, and scaling) of the points in Y to best conform them to those in Z. The "goodness-of-fit" criterion is the sum of squared errors; thus a smaller PM value indicates a more accurate result.

Table II lists the PM values of the algorithms on the data sets used in Fig. 3, 4, 5, 6, 9, 10, and 11. We cannot measure the algorithms on the real-world data sets used in Fig. 7 and Fig. 8 since we do not know their true manifolds. As can be seen from Table II, for these data sets with known manifolds LSE outperforms Isomap, LLE and LE on four, six and five data sets, respectively. In contrast, HLLE and LTSA each outperform LSE on only one data set. Note that PM gives a global goodness of fit between two data sets and does not consider the topological structure. For example, on the half-teapot data set used in Fig. 10, the results learned by LE do not preserve the order relationship well in some local regions, yet the sum of squared errors is smaller than that of LSE. Actually, as pointed out by Cayton [7], developing rigorous evaluation methodologies still remains elusive.

VII. RELATED WORK

LLE, LE, LTSA, HLLE and LSE can be placed in a common framework of thinking globally and fitting locally [41], in which the local information is collected together to obtain a global optimum, but they treat the local information in different ways.


LLE and LE capitalize on local geometry-preserving criteria, while LTSA, HLLE and our algorithm map local coordinates into a single global coordinate system. Specifically, the connection between LLE and LSE is that local information is first calculated on each neighborhood and then assembled to develop the objective function. The difference is that in LLE each data point is linearly reconstructed from its neighbors and the reconstruction coefficients (the local information) are kept unchanged to construct its low-dimensional coordinate [39], whereas LSE first locally coordinatizes the data points in each neighborhood and then uses splines to map the local coordinates to their global coordinates. In LTSA, HLLE and LSE, the local coordinates of the points in a neighborhood are all evaluated in the local tangent space. Given the local coordinates, LTSA maps them to their global coordinates via affine reconstruction [61], while HLLE uses the local coordinates to define a Hessian estimator and obtains a global optimum by finding an approximate null space associated with this estimator [13]. In terms of Sobolev space, HLLE capitalizes on the constant function and the d coordinate functions to treat the local coordinates, whereas in LSE splines composed of polynomials and Green's functions are used to map the local coordinates. Additionally, splines (of different forms from those used here) have been introduced in Gaussian processes [30] to deal with regression problems in a Bayesian framework. Compared with Gaussian processes, we use splines to construct compatible mappings and hence treat them from the viewpoint of geometrical interpolation.

A parallel line of work related to the out-of-sample extension of manifold learning is semi-supervised manifold learning [57], [23], [19]. In a semi-supervised framework, the coordinates of some landmark points are provided to constrain the shape to be learned. The landmark points are supplied according to prior knowledge about the manifold shape or are simply given by hand; in general, this is formulated as a transductive learning problem. In contrast, the out-of-sample extension starts with a known manifold shape learned from the original data points and focuses on how to deal with new data points, for example collected sequentially in a dynamic setting. In such a setting, the coordinates of previously learned data points can be kept unchanged.

VIII. CONCLUSION

In this paper, we propose a general method for nonlinear dimensionality reduction, developed from the concept of compatible mappings.


VIII. CONCLUSION

In this paper, we have proposed a general method for nonlinear dimensionality reduction, developed from the conception of compatible mappings. Tangent space projection and splines are used to construct such mappings, and a global optimum is achieved by minimizing the low-dimensional global reconstruction error. Owing to the spline interpolation, the proposed method can be directly extended to deal with out-of-samples. Comparative experiments on toy data sets, real-world data sets and noisy data sets illustrate the validity of our method. In the future, we would like to study semi-supervised local spline embedding, and to apply our method to tasks in pattern analysis, data mining and high-dimensional data processing.

ACKNOWLEDGMENT

This work is supported by the projects (Grant Nos. 60721003 and 60705022) of the National Natural Science Foundation of China. The anonymous reviewers have helped to improve the quality and presentation of this paper.

REFERENCES

[1] P. Baldi and K. Hornik, "Neural networks and principal component analysis: Learning from examples without local minima," Neural Networks, vol. 2, no. 1, pp. 53–58, 1989.
[2] M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Computation, vol. 15, no. 6, pp. 1373–1396, 2003.
[3] Y. Bengio, J. F. Paiement, and P. Vincent, "Out-of-sample extensions for LLE, Isomap, MDS, eigenmaps and spectral clustering," in Advances in Neural Information Processing Systems 16, 2004.
[4] C. M. Bishop, M. Svensén, and C. K. I. Williams, "GTM: The generative topographic mapping," Neural Computation, vol. 10, no. 1, pp. 215–234, 1998.
[5] M. Brand, "Charting a manifold," in Advances in Neural Information Processing Systems 15. Cambridge, MA, USA: MIT Press, 2003, pp. 961–968.
[6] A. Brun, H. J. Park, H. Knutsson, and C. F. Westin, "Coloring of DT-MRI fiber traces using Laplacian eigenmaps," in Proceedings of the International Conference on Computer Aided Systems Theory, Las Palmas, Spain, 2003, pp. 518–529.
[7] L. Cayton, "Algorithms for manifold learning," University of California, San Diego, Tech. Rep. CS2008-0923, 2005.
[8] H. Chang, D. Yeung, and Y. Xiong, "Super-resolution through neighbor embedding," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 2004, pp. 275–282.
[9] T. F. Cox and M. Cox, Multidimensional Scaling. London, England: Chapman & Hall, 1994.
[10] V. de Silva and J. B. Tenenbaum, "Global versus local methods in nonlinear dimensionality reduction," in Advances in Neural Information Processing Systems 15. Cambridge, MA, USA: MIT Press, 2003, pp. 721–728.
[11] P. Dollar, V. Rabaud, and S. Belongie, "Learning to traverse image manifolds," in Advances in Neural Information Processing Systems 19. Cambridge, MA, USA: MIT Press, 2007, pp. 361–368.
[12] P. Dollar, V. Rabaud, and S. Belongie, "Non-isometric manifold learning: Analysis and an algorithm," in International Conference on Machine Learning, Corvallis, OR, USA, 2007, pp. 241–248.


[13] D. L. Donoho and C. Grimes, "Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data," Proceedings of the National Academy of Sciences, vol. 100, no. 10, pp. 5591–5596, 2003.
[14] J. Duchon, "Splines minimizing rotation-invariant semi-norms in Sobolev spaces," in Constructive Theory of Functions of Several Variables, A. Dold and B. Eckmann, Eds. Springer-Verlag, 1977, pp. 85–100.
[15] J. Einbeck, G. Tutz, and L. Evers, "Local principal curves," Statistics and Computing, vol. 15, no. 4, pp. 301–313, 2005.
[16] A. M. Farahmand, C. Szepesvari, and J.-Y. Audibert, "Manifold-adaptive dimension estimation," in Proceedings of the International Conference on Machine Learning, Corvallis, OR, USA, 2007, pp. 265–272.
[17] S. Gerber, T. Tasdizen, and R. Whitaker, "Robust non-linear dimensionality reduction using successive 1-dimensional Laplacian eigenmaps," in Proceedings of the International Conference on Machine Learning, Corvallis, OR, USA, 2007, pp. 281–288.
[18] G. H. Golub and C. F. van Loan, Matrix Computations, 3rd ed. Baltimore, MD, USA: The Johns Hopkins University Press, 1996.
[19] H. Gong, C. Pan, Q. Yang, H. Lu, and S. Ma, "A semi-supervised framework for mapping data to the intrinsic manifold," in Proceedings of the International Conference on Computer Vision, Beijing, China, 2005, pp. 98–105.
[20] A. Gorban, B. Kegl, D. Wunsch, and A. Zinovyev, Principal Manifolds for Data Visualization and Dimension Reduction. Berlin, Germany: Springer, 2007.
[21] A. Gorban and A. Zinovyev, "Elastic principal graphs and manifolds and their practical applications," Computing, vol. 79, pp. 359–379, 2005.
[22] J. Ham, D. D. Lee, S. Mika, and B. Schölkopf, "A kernel view of the dimensionality reduction of manifolds," in International Conference on Machine Learning, Banff, Canada, 2004, pp. 369–376.
[23] J. Ham, D. D. Lee, and L. K. Saul, "Semisupervised alignment of manifolds," in International Workshop on Artificial Intelligence and Statistics, Barbados, West Indies, 2004, pp. 120–127.
[24] T. Hastie and W. Stuetzle, "Principal curves," Journal of the American Statistical Association, vol. 84, no. 406, pp. 502–516, 1989.
[25] M. Hein and M. Maier, "Manifold denoising," in Advances in Neural Information Processing Systems 19. Cambridge, MA, USA: MIT Press, 2007, pp. 1–8.
[26] O. C. Jenkins and M. J. Mataric, "A spatio-temporal extension to Isomap nonlinear dimension reduction," in International Conference on Machine Learning, Banff, Alberta, Canada, 2004, pp. 441–448.
[27] I. T. Jolliffe, Principal Component Analysis. New York, USA: Springer, 1986.
[28] B. Kegl, A. Krzyzak, T. Linder, and K. Zeger, "Learning and design of principal curves," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 3, pp. 281–297, 2000.
[29] T. Kohonen, Self-Organizing Maps, 3rd ed. Berlin, Germany: Springer-Verlag, 2001.

[30] D. MacKay, "Introduction to Gaussian processes," University of Cambridge, Tech. Rep., 1997. Available electronically at http://www.inference.phy.cam.ac.uk/mackay/abstracts/gpB.html.
[31] J. Mao and A. K. Jain, "Artificial neural networks for feature extraction and multivariate data projection," IEEE Transactions on Neural Networks, vol. 6, no. 2, pp. 296–317, 1995.
[32] J. Meinguet, "Multivariate interpolation at arbitrary points made simple," Journal of Applied Mathematics and Physics, vol. 30, 1979.
[33] C. A. Micchelli, "Interpolation of scattered data: Distance matrices and conditionally positive functions," Constructive Approximation, vol. 2, pp. 11–22, 1986.


[34] W. Min, K. Lu, and X. He, "Locality pursuit embedding," Pattern Recognition, vol. 37, no. 4, pp. 781–788, 2004.
[35] S. Mosci, L. Rosasco, and A. Verri, "Dimensionality reduction and generalization," in Proceedings of the International Conference on Machine Learning, Corvallis, OR, USA, 2007, pp. 657–664.
[36] A. W. Naylor and G. R. Sell, Linear Operator Theory in Engineering and Science. Berlin, Germany: Springer-Verlag, 1982.
[37] J. Nilsson, F. Sha, and M. I. Jordan, "Regression on manifolds using kernel dimension reduction," in International Conference on Machine Learning, Corvallis, OR, USA, 2007, pp. 697–704.
[38] M. Polito and P. Perona, "Grouping and dimensionality reduction by locally linear embedding," in Advances in Neural Information Processing Systems 14. Cambridge, MA, USA: MIT Press, 2002, pp. 1255–1262.
[39] S. Roweis and L. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, pp. 2323–2326, 2000.
[40] J. W. Sammon, "A nonlinear mapping for data structure analysis," IEEE Transactions on Computers, vol. 18, no. 5, pp. 401–409, 1969.
[41] L. K. Saul and S. T. Roweis, "Think globally, fit locally: Unsupervised learning of low dimensional manifolds," Journal of Machine Learning Research, vol. 4, pp. 119–155, 2003.
[42] B. Schölkopf, A. J. Smola, and K. R. Müller, "Nonlinear component analysis as a kernel eigenvalue problem," Neural Computation, vol. 10, no. 5, pp. 1299–1319, 1998.
[43] B. Schölkopf and A. J. Smola, Learning with Kernels. Cambridge, MA, USA: MIT Press, 2002.
[44] G. Seber, Multivariate Observations. New York, USA: John Wiley and Sons, 1984.
[45] F. Sha and L. K. Saul, "Analysis and extension of spectral methods for nonlinear dimensionality reduction," in International Conference on Machine Learning, Bonn, Germany, 2005, pp. 784–791.
[46] A. J. Smola, S. Mika, B. Schölkopf, and R. C. Williamson, "Regularized principal manifolds," Journal of Machine Learning Research, vol. 1, no. 3, pp. 179–209, 2001.
[47] Y. W. Teh and S. Roweis, "Automatic alignment of local representations," in Advances in Neural Information Processing Systems 15. Cambridge, MA, USA: MIT Press, 2003, pp. 841–848.
[48] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, pp. 2319–2323, 2000.
[49] R. Tibshirani, "Principal curves revisited," Statistics and Computing, vol. 2, pp. 183–190, 1992.
[50] M. Vlachos, C. Domeniconi, and D. Gunopulos, "Non-linear dimensionality reduction techniques for classification and visualization," in International Conference on Knowledge Discovery and Data Mining, Edmonton, Canada, 2002, pp. 645–651.
[51] G. Wahba, Spline Models for Observational Data. SIAM Press, 1990.
[52] F. Wang and C. Zhang, "Label propagation through linear neighborhoods," in International Conference on Machine Learning, Pittsburgh, Pennsylvania, USA, 2006, pp. 985–992.
[53] K. Q. Weinberger and L. K. Saul, "Unsupervised learning of image manifolds by semidefinite programming," in IEEE Conference on Computer Vision and Pattern Recognition, Washington DC, USA, 2004, pp. 988–995.
[54] K. Q. Weinberger, F. Sha, and L. K. Saul, "Learning a kernel matrix for nonlinear dimensionality reduction," in International Conference on Machine Learning, Banff, Canada, 2004, pp. 888–905.
[55] S. M. Xiang, F. P. Nie, C. S. Zhang, and C. X. Zhang, "Spline embedding for nonlinear dimensionality reduction," in European Conference on Machine Learning, Berlin, Germany, 2006, pp. 825–832.


[56] S. Yan, D. Xu, B. Zhang, and H. Zhang, "Graph embedding: A general framework for dimensionality reduction," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 2005, pp. 830–837.
[57] X. Yang, H. Fu, H. Zha, and J. Barlow, "Semi-supervised nonlinear dimensionality reduction," in International Conference on Machine Learning, Pittsburgh, USA, 2006, pp. 1065–1072.
[58] J. Yoon, "Spectral approximation orders of radial basis function interpolation on the Sobolev space," SIAM Journal on Mathematical Analysis, vol. 33, no. 4, pp. 946–958, 2001.
[59] H. Zha and Z. Zhang, "Isometric embedding and continuum Isomap," in Proceedings of the International Conference on Machine Learning, Washington DC, USA, 2003, pp. 864–871.
[60] Z. Zhang and J. Wang, "MLLE: Modified locally linear embedding using multiple weights," in Advances in Neural Information Processing Systems 19. Cambridge, MA, USA: MIT Press, 2007, pp. 1593–1600.
[61] Z. Zhang and H. Zha, "Principal manifolds and nonlinear dimensionality reduction via tangent space alignment," SIAM Journal on Scientific Computing, vol. 26, no. 1, pp. 313–338, 2004.
[62] D. Zhou, J. Weston, A. Gretton, O. Bousquet, and B. Schölkopf, "Ranking on data manifolds," in Advances in Neural Information Processing Systems 15, 2003.

