Predoctoral Project Report

Cue Fusion using Information Theoretic Learning

Nikolay Chumerin

Supervisor: Prof. Dr. Marc M. Van Hulle

Computational Neuroscience Research Group Laboratorium voor Neuro- en Psychofysiologie Katholieke Universiteit Leuven

April 28, 2006

Table of Contents

Summary
1  Introduction
2  Methods
   2.1  Early Visual Cues Extraction
   2.2  Feature Extraction
   2.3  Methodology of FE methods comparison
   2.4  Classification
3  Results
   3.1  Feature extraction methods comparison
   3.2  IMO segmentation
4  Discussion & Conclusion
5  Future work
   5.1  Incorporating new methods
   5.2  Future directions
6  Publication
Bibliography

Summary

In order to segment independently moving objects (IMOs) in a visual scene, neurons in the higher processing stages of the visual cortex are able to fuse visual attributes (cue fusion). The modeling of cue fusion is the objective of this predoctoral project and is developed in the context of Information Theoretic Learning. The biologically-inspired computation of the visual cues from real-world video sequences is done with a software tool that is currently being developed in the MCCOOP European project (Chapter 2, Section 2.1). In order to perform cue fusion, we have adopted Feature Extraction (FE) methods that are based on the Maximization of Mutual Information (MMI). We have studied these methods in depth and performed a detailed comparison of two of them (Chapter 2, Section 2.2). We have used real-world stereo video sequences with manually labeled IMOs for feature extraction learning. The features extracted in this way were then used in a classification stage that segments the IMOs. This result could then be used in a recurrent system for modifying, in a clearly defined region of interest (ROI) (i.e., the IMO), the tuning curves of the neurons responsible for extracting the cues. Results of the evaluation of this approach are presented in Chapter 3 and discussed in Chapter 4. The future direction of the research, as part of the doctoral project, is presented in Chapter 5.


Chapter 1: Introduction

There are still many unresolved issues in how we perceive the visual world. Data from anatomical, neurophysiological and psychophysical studies have been combined to shed light on the visual system. An emerging consensus is that the visual system operates on the basis of semi-independent channels that separately convey information about the various attributes (cues) of the visual scene. It is unlikely that individual channels are capable of generating a full and complete representation of the visual scene. Rather, each channel carries only a small portion of the full range of attribute information that characterizes the external world.

Mainstream biologically-inspired techniques for the processing of visual inputs are restricted to single-channel modelling, namely the processing of only one cue or type of input, and have failed to bridge the gap between local connectivity and spatial layout at the assembly level. For instance, the orientation selectivity of simple cells in V1 and the functional properties of their receptive fields (RFs) have been employed in several models for edge detection [21, 18]. These models employ segregated ON- and OFF-streams interacting via a mechanism of opponent inhibition. Grossberg et al. [8] have extended these models by incorporating long-range horizontal connections and feedback projections from V2 to V1 [9]. An extended Gabor energy operator with an inhibition term [7] has been used to reproduce the inhibitory behaviour of most simple cells outside their classical RFs, and it accounts for the suppression of edges that are part of the surrounding texture. Such suppressive surrounds have also been observed in area MT, which is crucial in motion processing; they were modelled by pooling among neighboring cells [6]. All these models, however, do not incorporate the responses of neurons from other processing channels.

In light of the now well-accepted view that each channel of a system of separate channels conveys only a part of the full range of visual information, it is logical to assume that some kind of fusion (recombination) process must be present. There is ample evidence that cells in area IT are sensitive to stimuli defined by various cues [26, 29]. Investigating the possible mechanisms of this fusion process has become an active field of research in recent years.

In order to reveal the mechanism of cue fusion, we have modeled this process using a rigorous technique with a strong mathematical background: Information Theoretic Learning based on the Maximization of Mutual Information (MMI), which is heavily researched in Machine Learning. To model cue fusion, we have employed MMI-based Feature Extraction (FE). FE is a widespread pre-processing technique in high-dimensional data analysis, visualization and modeling. It should be regarded as complementary to Feature Selection (FS), which reduces the dimensionality of the original data by selecting only those dimensions that contain the most relevant information for solving the particular problem. FE, instead, develops a transformation of the input space onto a lower-dimensional subspace that preserves most of the relevant information. Using this approach, we have tried to extract implicit, previously unknown, and potentially useful information from the original high-dimensional data (visual cues).

A scheme of the suggested model is presented in Fig. 1.1. The real-world video sequence is processed by the Framework in order to obtain the visual cues.
As the main visual cues we consider binocular disparity, orientation, and motion. These cues were chosen because they are important in experimental neuroscience: visual areas exist whose neurons are responsive to them. The cues are extracted in the Framework using biological models of individual neurons. Motion is represented by optic flow (a two-dimensional vector field) and by motion in depth. Disparity is represented by static and dynamic disparity. In addition, the model uses a number of other cues: luminance, and horizontal and vertical symmetry.


Figure 1.1: Flow diagram of the proposed model.

These visual cues are quite dense (defined for almost every pixel of each original image) and high-dimensional, so using them directly for IMO segmentation (classification) is problematic. As mentioned above, we perform cue fusion, which reduces the data dimensionality from D to d, with D > d: the cue outputs are projected onto a linear subspace of dimensionality d whose coordinate axes are the extracted cue combinations (cue fusion). But first we have to teach our model to perform cue fusion by setting forth a clear learning goal. We have trained the cue fusion model on real-world video data with manually marked IMOs. The cue combinations should develop so that the IMOs are optimally segmented from the environment. Hence, cue fusion is regarded as a classification problem with two classes: IMOs and environment. The outputs of this stage can then be projected back (recurrent link) to the Framework to adapt the neurons' tuning curves in the regions of interest (presumably the IMOs).


Chapter 2: Methods

2.1 Early Visual Cues Extraction

2.1.1 Background on early visual cues extraction

In order to extract the early vision cues, we have used a multichannel architecture based on the linear filtering of stereo video sequences with spatiotemporal kernels that mimic the receptive fields of visual cortical neurons.

Spatial filters

In conformity with the disparity energy (phase-based) models, we have filtered the stereo visual signal with a quadrature pair of Gabor kernels (receptive fields):

$$h_C^L(x, y) = g(x, y, 0), \quad h_C^R(x, y) = g(x, y, 0), \quad h_S^L(x, y) = g(x, y, \pi/2), \quad h_S^R(x, y) = g(x, y, \pi/2), \tag{2.1}$$

where g(·, ·, ·) is a vertically oriented Gabor kernel centered at the origin:

$$g(x, y, \psi) = \frac{A}{2\pi\sigma_x\sigma_y} \exp\left( -\frac{x^2}{2\sigma_x^2} - \frac{y^2}{2\sigma_y^2} \right) \cos(k_0 x + \psi) = A \cdot G(x, y) \cos(k_0 x + \psi), \tag{2.2}$$

where k0 is the peak tuning frequency, σx and σy determine the x and y receptive field (RF) dimensions, ψ is the phase parameter of the sinusoidal modulation, and A is an application-dependent normalization constant. To generate RFs with orientation θ (measured from the positive horizontal axis), we can rotate the vertically oriented RF (2.2) by θ − π/2 with respect to the RF center (a positive angle means counterclockwise rotation):

$$g(x, y, \psi, \theta) = \frac{A}{2\pi\sigma_x\sigma_y} \exp\left( -\frac{x_\theta^2}{2\sigma_x^2} - \frac{y_\theta^2}{2\sigma_y^2} \right) \cos(k_0 x_\theta + \psi), \tag{2.3}$$

where

$$x_\theta = x \cos(\theta - \pi/2) + y \sin(\theta - \pi/2), \qquad y_\theta = -x \sin(\theta - \pi/2) + y \cos(\theta - \pi/2).$$

Here k0 can be considered the radial peak frequency, and the corresponding frequencies projected onto the Cartesian axes are k0x = k0 cos(θ − π/2) and k0y = k0 sin(θ − π/2).

Spatiotemporal receptive fields

Starting from a separable space-time receptive field

$$h_0(x, y, t) = g(x, y, \psi) f(t) = G(x, y) F(t) \cos(k_0 x + \psi) \cos(\omega_0 t), \tag{2.4}$$

where ω0 is the temporal frequency, g(x, y, ψ) is the spatial Gabor function defined in (2.2), and F(t) is a causal decaying temporal function (e.g., an exponential), a direction-selective receptive field can be written as:

$$h(x, y, t) = g(x, y, \psi) f(t) + \eta\, \bar{g}(x, y, \psi) \bar{f}(t), \tag{2.5}$$

where the functions ḡ and f̄ are obtained from the corresponding g and f by replacing all the cosine terms by sine terms, and where the constant weighting factor η ∈ [−1, 1] is introduced to model various degrees of directional sensitivity. In the following, for the sake of simplicity, we will fix |η| = 1, thus modeling "pure" directional units. In general, we can define RFs for opposite directions:

$$h^+(x, y, t) = G(x, y) F(t) \cos(k_0 (x - vt) + \psi) = g(x, y, \psi) f(t) + \bar{g}(x, y, \psi) \bar{f}(t), \tag{2.6}$$

$$h^-(x, y, t) = G(x, y) F(t) \cos(k_0 (x + vt) + \psi) = g(x, y, \psi) f(t) - \bar{g}(x, y, \psi) \bar{f}(t), \tag{2.7}$$

where + and − indicate a selectivity for "rightward" and "leftward" motion, respectively, and v = ω0/k0 is the preferred velocity. If we consider different spatial orientations, we can write:

$$h_k^+(x, y, t) = g(x, y, \psi, \theta_k) f(t) + \bar{g}(x, y, \psi, \theta_k) \bar{f}(t), \tag{2.8}$$

$$h_k^-(x, y, t) = g(x, y, \psi, \theta_k) f(t) - \bar{g}(x, y, \psi, \theta_k) \bar{f}(t), \tag{2.9}$$

or, alternatively, as:

$$h_k^+(x, y, t) = h_k^+(\mathbf{x}, t) = G(\mathbf{x}) F(t) \cos(\mathbf{k}_0 \cdot (\mathbf{x} - \mathbf{v}_k t) + \psi), \tag{2.10}$$

$$h_k^-(x, y, t) = h_k^-(\mathbf{x}, t) = G(\mathbf{x}) F(t) \cos(\mathbf{k}_0 \cdot (\mathbf{x} + \mathbf{v}_k t) + \psi), \tag{2.11}$$

where k0 = (k0x, k0y) is the wave vector of the spatial RF and vk = ω0/k0 is the preferred velocity in the direction orthogonal to the RF orientation (θ − π/2). Accordingly, as a first approximation, by using 8 orientations and one temporal frequency ω0, we can obtain a bank of 16 cells selective to 16 directions of motion. The spatial scale can be used to determine the magnitude of the preferred velocity.

Static disparity

Formally, the intensities measured by the left and right eyes, I^L(x) and I^R(x), respectively, are related as:

$$I^L(x) = I^R(x + \delta(x)), \tag{2.12}$$

where δ(x) is the (horizontal) binocular disparity. Following [25], disparity can be estimated in terms of phase differences in the spectral components of the stereo image pair. Since the two images are locally related by a shift, in the neighbourhood of each image point the local spectral components of I^L(x) and I^R(x) at frequency k are related by a phase difference Δφ(k) = φ^L(k) − φ^R(k) = kδ. Spatially localized phase measures can be obtained by a filtering operation with the quadrature pair of Gabor filters (2.1).

Dynamic disparity

Static disparity can be extended to dynamic disparity by employing spatiotemporal filters instead of purely spatial ones.
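To make the filtering stage concrete, the following sketch builds the quadrature pair of Gabor kernels of Eqs. (2.1)–(2.3) and extracts the local phase used by the phase-based disparity scheme above. It is a minimal illustration in Python/numpy; the kernel size and the parameter values (k0, σx, σy) are our assumptions, not the Framework's actual settings.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, k0, sigma_x, sigma_y, psi, theta=np.pi / 2, A=1.0):
    """Oriented Gabor RF of Eq. (2.3); theta = pi/2 reduces to the vertical RF of Eq. (2.2)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    rot = theta - np.pi / 2
    x_t = x * np.cos(rot) + y * np.sin(rot)      # rotated coordinates (x_theta, y_theta)
    y_t = -x * np.sin(rot) + y * np.cos(rot)
    envelope = A / (2 * np.pi * sigma_x * sigma_y) * np.exp(
        -x_t**2 / (2 * sigma_x**2) - y_t**2 / (2 * sigma_y**2))
    return envelope * np.cos(k0 * x_t + psi)

# Quadrature pair of Eq. (2.1): even (cosine) and odd (sine) phase.
h_C = gabor_kernel(size=21, k0=0.6, sigma_x=4.0, sigma_y=4.0, psi=0.0)
h_S = gabor_kernel(size=21, k0=0.6, sigma_x=4.0, sigma_y=4.0, psi=np.pi / 2)

img = np.random.rand(64, 64)            # stand-in for one camera frame
c = convolve2d(img, h_C, mode='same')   # even response
s = convolve2d(img, h_S, mode='same')   # odd response
phase = np.arctan2(s, c)                # local phase, the quantity used for disparity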


Figure 2.1: Framework.

Motion in depth

In [24, 23], a cortical model was proposed that generates binocular motion-in-depth selective cells as a hierarchical combination of binocular energy complex cells. All computations rely on spatiotemporal differentials of the left and right retinal phases, which can be approximated by linear filtering operations with spatiotemporal RFs.

Edges

For biologically inspired edge detection, we have implemented in the Framework an extension [15] of an iterative model for contrast detection [16]. The extended model takes into account not only the contrast invariance of the orientation preference, but also incorporates the cross-orientation suppression of simple cells in the primary visual cortex (V1).

2.1.2 Framework

For the biologically-inspired computation of the early visual cues from real-world video sequences, we have used a software tool (the Framework) that is currently being developed in the MCCOOP European project. The input to the Framework consists of a sequence of stereo image frames, all of the same dimensions. The linear filter stage processes one (stereo) frame at a time by applying a set of linear spatial filters. These filters model the receptive fields of the input cells of the orientation, disparity and motion channels. By connecting together the built-in blocks of the Framework, it is possible to construct the network responsible for computing a particular visual cue.

C++ has been chosen as the implementation language for the Framework since it is object oriented, on the one hand, and generates high-performing code, on the other. The blitz++ and fftw libraries are used for performance-critical matrix operations; both are widely used and well optimized for a wide variety of platforms. The user interface is implemented using Qt4 by Trolltech, a well-known C++ GUI library that also offers some convenient object-oriented extensions to C++ and is available under both free and commercial licenses on all major platforms. The output of the Framework can be saved as a sequence of images (BMP or PGM) or as a Matlab variable. Additional modelling flexibility is offered by the Framework's ability to run simulations via simple scripting.

2.2 Feature Extraction

We will further focus on linear FE methods, i.e., methods that can be represented by a linear transformation W : R^D → R^d, D > d. (Feature Selection can be obtained as a special case of Feature Extraction by choosing a matrix W whose columns are distinct unit vectors, each with a single nonzero component equal to 1.) Feature Extraction methods can be supervised or unsupervised, depending on whether or not class labels are used. Among the unsupervised methods, Principal Component Analysis (PCA) [13], Independent Component Analysis (ICA) [12], and Multidimensional Scaling (MDS) [31] are the most popular ones. Supervised FE methods (and also FS methods) either use information about the current classification performance, and are then called wrappers, or use some other, indirect measure, and are then called filters. One expects that, in the case of a classification problem, supervised methods will perform better than unsupervised ones.

Recently, a method has been introduced by Torkkola [30] that has attracted a lot of attention. Consider the data set {x_i, c_i}, i = 1, ..., N, with x_i ∈ R^D the data points and c_i the class labels taken from the discrete set C = {c_p}, p = 1, ..., N_c. The objective is to find a linear transformation W ∈ R^{D×d} for which the mutual information (MI) between the transformed data points Y = {y_i} = {W^T x_i} and the corresponding labels C = {c_i} is maximized. The objective differs from ICA's, where the MI between the transformed data components is minimized; the presence of the labels C also makes the objective different. Torkkola derived an expression for MI based on Rényi's quadratic entropy [22], instead of Shannon's entropy, and a plug-in density estimate based on Parzen windowing.

Prior to Torkkola, Bollacker and Ghosh [2] proposed an incremental approach to MI maximization, derived by rewriting the original MI objective function as a sum of MI terms between the one-dimensional projections and the corresponding class labels. A polytope algorithm was used for the optimization and histograms for estimating the probabilities. Very recently, a method based on the same reformulation of the MI objective function was introduced by Leiva-Murillo and Artés-Rodríguez (2006) [1]. However, they used gradient descent as the optimization strategy, and expressed the one-dimensional MI terms as one-dimensional negentropies, which were then estimated using Hyvärinen's robust estimator [11].
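To make the relation between linear FE and FS concrete, here is a minimal numpy sketch (all shapes and index choices are illustrative assumptions): both operations are a multiplication by a D × d matrix W, and FS corresponds to the special case where each column of W is a unit vector.

```python
import numpy as np

D, d, N = 9, 3, 1000
X = np.random.randn(N, D)                        # N samples of D-dimensional cue vectors

# Feature extraction: any D x d matrix W projects X onto a d-dim subspace.
W_fe = np.linalg.qr(np.random.randn(D, d))[0]    # random orthonormal projection
Y_fe = X @ W_fe                                  # y_i = W^T x_i, shape (N, d)

# Feature selection as a special case: each column of W has a single 1.
selected = [0, 4, 7]                             # hypothetical cue indices
W_fs = np.zeros((D, d))
W_fs[selected, range(d)] = 1.0
Y_fs = X @ W_fs                                  # simply picks columns 0, 4, 7
```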


2.2.1 Torkkola's method

Given two random variables X1 and X2 with joint probability density p(x1, x2) and marginal probability densities p1(x1) and p2(x2), the mutual information (MI) can be expressed as:

$$I(X_1, X_2) = K\big(p(x_1, x_2),\; p_1(x_1)\, p_2(x_2)\big), \tag{2.13}$$

with K(·, ·) the Kullback-Leibler divergence. In order to estimate MI, Torkkola and Campbell [30] use the quadratic measures K_C or K_T originally introduced by Principe and co-workers [22]:

$$K_C(f, g) = \log \frac{\int f^2(x)\,dx \,\int g^2(x)\,dx}{\left( \int f(x)\, g(x)\,dx \right)^2}, \tag{2.14}$$

$$K_T(f, g) = \int \big(f(x) - g(x)\big)^2\,dx. \tag{2.15}$$

For continuous-valued Y and discrete-valued C, using (2.13), (2.14) and (2.15), one can derive two types of MI estimates:

$$I_C(Y, C) = \log \frac{V_{(cy)^2}\, V_{c^2 y^2}}{(V_{cy})^2}, \tag{2.16}$$

$$I_T(Y, C) = V_{(cy)^2} + V_{c^2 y^2} - 2 V_{cy}, \tag{2.17}$$

where:

$$V_{(cy)^2} = \sum_{c \in C} \int_y p^2(y, c)\,dy, \qquad V_{c^2 y^2} = \sum_{c \in C} \int_y p^2(c)\, p^2(y)\,dy, \qquad V_{cy} = \sum_{c \in C} \int_y p(y, c)\, p(c)\, p(y)\,dy. \tag{2.18}$$

The class probability can be evaluated as p(c_p) = J_p / N, where J_p is the number of samples in class c_p. The density of the projected data p(y) and the joint density p(y, c) are estimated with the Parzen window approach [19]:

$$p(y) = \frac{1}{N} \sum_{i=1}^{N} G(y - y_i, \sigma^2 I), \qquad p(y, c_p) = \frac{1}{N} \sum_{j=1}^{J_p} G(y - y_j^p, \sigma^2 I), \tag{2.19}$$

with G(x, Σ) the Gaussian kernel with center x and covariance matrix Σ, and y_j^p the j-th sample in class c_p. In order to reduce the number of parameters to optimize, Torkkola proposes a parametrization of the desired matrix W in terms of Givens rotations in R^D. As a result, there are only d(D − d) parameters to optimize instead of D^2. Obviously, the maximal number of parameters to estimate occurs for d near D/2. The computational complexity of the method is claimed to be O(N^2).
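Because the Gaussian kernel satisfies ∫ G(t − a, σ²) G(t − b, σ²) dt = G(a − b, 2σ²), all integrals in (2.18) reduce to double sums over samples, which is what makes the method O(N²). The sketch below computes the I_T estimate (2.17) for a one-dimensional projection; it is a minimal illustration under assumed names and a hand-picked kernel width, not Torkkola's actual implementation.

```python
import numpy as np

def gauss(d2, var):
    """1-D Gaussian kernel evaluated at squared distances d2, with variance var."""
    return np.exp(-d2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def quadratic_mi_T(y, labels, sigma=0.5):
    """I_T(Y, C) of Eq. (2.17) for a 1-D projection y, with Parzen densities (2.19).

    All integrals in Eq. (2.18) collapse to pairwise kernel sums via
    int G(t - a, s) G(t - b, s) dt = G(a - b, 2s).
    """
    y = np.asarray(y, dtype=float)
    labels = np.asarray(labels)
    N = len(y)
    d2 = (y[:, None] - y[None, :]) ** 2          # pairwise squared distances
    K = gauss(d2, 2 * sigma ** 2)                # N x N interaction matrix
    classes = np.unique(labels)
    priors = np.array([(labels == c).mean() for c in classes])

    V_cy2 = sum(K[np.ix_(labels == c, labels == c)].sum() for c in classes) / N ** 2
    V_c2y2 = (priors ** 2).sum() * K.sum() / N ** 2
    V_cy = sum(p * K[labels == c, :].sum()
               for c, p in zip(classes, priors)) / N ** 2
    return V_cy2 + V_c2y2 - 2 * V_cy
```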

2.2.2 Artés-Rodríguez method

In the Artés-Rodríguez method, an objective function in terms of the sum of individual MIs is considered:

$$I_{AR}(Y, C) = \sum_{i=1}^{d} I(y_i, c) = \sum_{i=1}^{d} I(w_i^T x, c), \tag{2.20}$$

with y_i = w_i^T x the data projected onto direction w_i, and w_i ∈ R^D the i-th column of the desired orthonormal matrix W. According to this cost function, one should preserve the orthonormality of W when sequentially obtaining the projection directions w_i with decreasing degrees of relevance (top-down scheme), or when sequentially removing the directions with minimum individual MI between the variables and the classes (bottom-up scheme). Assuming the original data is whitened, each individual MI can be estimated as:

$$I(y_i, c) = \sum_{p=1}^{N_c} p(c_p) \big( J(y_i | c_p) - \log \sigma(y_i | c_p) \big) - J(y_i), \tag{2.21}$$

with y_i | c_p the projection of the p-th class' data points onto the w_i direction, J(·) the negentropy, and σ(·) the standard deviation. Hyvärinen's robust estimator [11] for the negentropy is used:

$$J(z) \approx k_1 \left( E\big[ z \exp(-z^2/2) \big] \right)^2 + k_2 \left( E\big[ \exp(-z^2/2) \big] - \sqrt{1/2} \right)^2, \tag{2.22}$$

with $k_1 = 36/(8\sqrt{3} - 9)$ and $k_2 = 24/(16\sqrt{3} - 27)$.

Dataset name     Dimension (D)   Samples (N)   Classes (Nc)
Iris             4               150           3
Pima             8               500           2
Glass            9               214           7
Pipeline flow    12              1000          3
Wine             13              178           3

Table 2.1: Characteristics of the real data sets used.
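The per-direction estimator (2.21)–(2.22) is straightforward to implement; the sketch below is our reading of it (the function names and the internal standardization of z are assumptions, and Hyvärinen's approximation presumes zero-mean, unit-variance input):

```python
import numpy as np

K1 = 36.0 / (8 * np.sqrt(3) - 9)
K2 = 24.0 / (16 * np.sqrt(3) - 27)

def negentropy(z):
    """Hyvarinen's robust negentropy approximation, Eq. (2.22)."""
    z = (z - z.mean()) / z.std()                 # the estimator assumes standardized z
    t1 = np.mean(z * np.exp(-z ** 2 / 2))
    t2 = np.mean(np.exp(-z ** 2 / 2)) - np.sqrt(0.5)
    return K1 * t1 ** 2 + K2 * t2 ** 2

def mi_single_direction(y, labels):
    """I(y_i, c) of Eq. (2.21) for one projection y_i of whitened data."""
    total = -negentropy(y)                       # the -J(y_i) term
    for c in np.unique(labels):
        yc = y[labels == c]
        p_c = len(yc) / len(y)                   # class prior p(c_p) = J_p / N
        total += p_c * (negentropy(yc) - np.log(yc.std()))
    return total
```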

2.3 Methodology of FE methods comparison

In order to have a fair comparison, we use the original source code of the Artés-Rodríguez algorithm (courtesy of Leiva-Murillo and Artés-Rodríguez) and the publicly available implementation of Torkkola's approach by Kenneth E. Hild II (MeRMaId-SIG) [10]. For the Artés-Rodríguez algorithm, we choose the top-down scheme.

We consider both synthetic and real-world data sets. The synthetic data sets consist of a variable number of equal-sized, normally distributed clusters (modes) in R^D. The cluster centers are Gaussianly distributed with variance equal to 3. All data sets are centered and whitened before applying the respective FE methods. We consider Nc = 3, ..., 10 clusters, and use 1000 data sets with d = 1, ..., D − 1 subspace dimensions. The MI estimators' means and standard deviations over the 1000 data sets are then plotted as a function of the subspace dimensionality d. For the real-world data sets, we compute the MI estimates for each possible subspace dimension d. The Pipeline Flow data set was taken from Aston University (http://www.ncrg.aston.ac.uk/GTM/3PhaseData.html); the remaining real-world data sets were taken from the UCI Machine Learning Repository (http://www.ics.uci.edu/~mlearn). If the data dimensionality was more than 9, we did not evaluate IB (the binned estimator) due to memory limitations.

In order to compare the speeds of the algorithms, the number of floating-point operations (flops) needed for one gradient evaluation is determined. This is a more relevant measure than the average computing time because it does not depend on the optimization techniques used by the algorithms. Recall that in the Artés-Rodríguez approach a scalar gradient is computed at each iteration, whereas in Torkkola's approach the gradient is a d(D − d)-dimensional vector. The flops were obtained using the flops function (in Matlab 5.3) on data sets with N ∈ {1000, 2000, 3000, 4000} and D ∈ {4, 8, 12, 16}.
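For reproducibility, here is a minimal sketch of the synthetic data generator and the centering/whitening step described above. The within-cluster variance of 1 and the sample counts are our assumptions; the report only fixes the variance of the cluster centers at 3.

```python
import numpy as np

def make_clusters(D=6, n_clusters=5, n_per_cluster=200, center_var=3.0, rng=None):
    """Equal-sized Gaussian clusters whose centers are themselves Gaussian
    with variance center_var (Sec. 2.3)."""
    rng = rng or np.random.default_rng(0)
    centers = rng.normal(0.0, np.sqrt(center_var), size=(n_clusters, D))
    X = np.vstack([rng.normal(c, 1.0, size=(n_per_cluster, D)) for c in centers])
    labels = np.repeat(np.arange(n_clusters), n_per_cluster)
    return X, labels

def center_and_whiten(X):
    """Centering and PCA whitening, applied before both FE methods."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt.T / (S / np.sqrt(len(X) - 1))   # unit covariance output

X, labels = make_clusters()
Xw = center_and_whiten(X)
```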

2.4 Classification

IMO segmentation can be treated as a classification problem: the system should classify each pixel of the entire image into one of two classes, environment or IMO. For this purpose, we have implemented a Multilayer Perceptron (MLP) classifier, and are still working on the implementation of a Support Vector Machine (SVM) classifier.

The MLP is a feedforward neural network consisting of multiple layers of computational units; each neuron in a layer has directed connections to the neurons of the subsequent layer. The universal approximation theorem for neural networks states that every continuous function that maps intervals of real numbers to some output interval of real numbers can be approximated arbitrarily closely by a multilayer perceptron with just one hidden layer. This result holds only for restricted classes of activation functions, e.g., the sigmoidal functions. Classification is one of the applications of MLPs. In our first attempt, we have used a simple two-layered MLP with eight nonlinear neurons in the first layer and one linear neuron in the second (output) layer. For the simulations, we have used the Neural Network Toolbox 4.0.6 in MATLAB 7.0.3 and selected the Levenberg-Marquardt algorithm for training the MLP.
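For illustration, here is a minimal numpy sketch of the same architecture (eight tanh hidden units, one linear output) trained on squared error. The report trains with Levenberg-Marquardt in MATLAB; this sketch substitutes plain batch gradient descent to stay short, and the learning rate, epoch count and initialization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_init(d_in, n_hidden=8):
    """Two-layer MLP as in the report: 8 nonlinear (tanh) hidden units, 1 linear output."""
    return {"W1": rng.normal(0.0, 0.5, (d_in, n_hidden)), "b1": np.zeros(n_hidden),
            "w2": rng.normal(0.0, 0.5, n_hidden), "b2": 0.0}

def mlp_forward(p, X):
    H = np.tanh(X @ p["W1"] + p["b1"])   # hidden layer activations
    return H @ p["w2"] + p["b2"], H      # linear output

def mlp_train(p, X, t, lr=0.01, epochs=500):
    """Batch gradient descent on the mean squared error (the report used
    Levenberg-Marquardt; plain gradient descent keeps this sketch short)."""
    N = len(X)
    for _ in range(epochs):
        y, H = mlp_forward(p, X)
        e = y - t                                   # residuals
        dH = np.outer(e, p["w2"]) * (1.0 - H ** 2)  # backprop through tanh
        p["W1"] -= lr * X.T @ dH / N
        p["b1"] -= lr * dH.mean(axis=0)
        p["w2"] -= lr * H.T @ e / N
        p["b2"] -= lr * e.mean()
    return p
```

Targets t can be set to 1 for IMO pixels and 0 for environment pixels; the trained network's continuous output is then thresholded (at 0.9 in Section 3.2) to produce the binary segmentation mask.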


Chapter 3: Results

3.1 Feature extraction methods comparison

3.1.1 Results of FE methods comparison

For the synthetic data sets, we show only the case D = 6, N = 1000 and Nc = 5 (Figs. 3.1(a)–3.1(d)). The results for the real-world data sets are shown in Figs. 3.1(e)–3.1(h) and in Table 3.1. The speed comparison results are shown in Table 3.2. The case D = 8 for Torkkola's method is shown in Fig. 3.2(a) in more detail. We do not show the corresponding plots for the Artés-Rodríguez approach because each gradient evaluation needs the same number of flops for all d = 1, ..., D − 1.

When performing the speed comparison, we noticed that both methods needed almost the same number of floating-point operations for a gradient evaluation on data sets with a fixed number of samples N and fixed dimension D but different numbers of clusters Nc: the deviation in flops for constant N, D and Nc ∈ {5, 10, 20} was less than 1%. Hence we decided to perform the speed comparison for fixed Nc = 5 and varying N and D. One would expect the CPU time to grow with increasing Nc; however, it stays almost constant. We explain this by the highly optimized manner in which Matlab treats matrix computations: for fixed N, the more classes we have, the more portions of the data (of smaller sizes) are processed in a vectorized manner.

3.2 IMO segmentation

In our simulations, we have used the real-world stereo video sequence City3. In order to reduce the computations, and to achieve the same resolution for the optic flow computations and all extracted early vision cues, each high-resolution (1276 × 1016 pixels) image was downscaled to a lower-resolution 320 × 256 pixels image. Only pixels with sufficiently high confidence levels (with respect to optic flow) were chosen as data samples for training and evaluation. This is also one of the reasons why the validation frames look so patchy: in the validation stage, all pixels with insufficient optic-flow confidence have been ignored. The result of cue fusion (from 9-D to 3-D) is shown in Fig. 3.3. We have used 8 frames for training and 50 frames for validation. Some of the validation frames are shown in Figs. 3.4(a)–3.4(d). The output of the MLP was thresholded at 0.9. The misclassification error was less than 5%.

Data set   Approach            ⟨IB⟩     ⟨IC⟩     ⟨IAR⟩    ⟨I(2)⟩
Iris       Artés-Rodríguez     1.0391   1.1541   5.0251   0.9944
           Torkkola            1.0181   1.0565   4.0409   0.9561
Pima       Artés-Rodríguez     0.3428   0.2089   0.8528   0.1628
           Torkkola            0.3461   0.2026   0.4140   0.1678
Glass      Artés-Rodríguez     0.7078   0.4212   6.8401   0.5952
           Torkkola            0.7430   0.4409   3.8368   0.4764
Pipeline   Artés-Rodríguez     1.0814   1.4331   20.461   1.0668
           Torkkola            1.0749   1.4889   5.9973   1.0605
Wine       Artés-Rodríguez     1.0668   1.5019   8.3007   0.8798
           Torkkola            0.9009   1.2422   3.0588   0.7194

Table 3.1: Averages of the estimated MI over d = 1, ..., D − 1 for all real data sets considered. Note that the ⟨IB⟩ estimates were computed only for d < 9 (see text).


Figure 3.1: Values of the MI estimators vs. the extracted-feature subspace dimension d. Panels (a)–(d): means of IB, I(2), IAR and IC vs. d for the synthetic data (D = 6, N = 1000, Nc = 5). Panels (e)–(h): IC vs. d for the Iris Plants Database (D = 4, N = 150, Nc = 3), IB vs. d for the Pima Indians Diabetes Database (D = 8, N = 768, Nc = 2), IAR vs. d for the Glass Database (D = 9, N = 214, Nc = 6), and IC vs. d for the Pipeline Flow data (D = 12, N = 1000, Nc = 3). Each panel compares FE_Artes_Rodriguez with FE_Torkkola.

D    N      Torkkola   Artés-Rodríguez
4    1000   0.165      0.253 ... 0.390
4    2000   0.329      0.505 ... 0.778
4    3000   0.493      0.757 ... 1.166
4    4000   0.657      1.009 ... 1.554
8    1000   0.438      1.976 ... 5.121
8    2000   0.874      3.900 ... 9.977
8    3000   1.310      5.824 ... 14.833
8    4000   1.746      7.748 ... 19.689
12   1000   0.841      6.977 ... 27.540
12   2000   1.677      13.517 ... 50.544
12   3000   2.513      20.057 ... 73.548
12   4000   3.349      26.597 ... 96.552
16   1000   1.372      17.556 ... 104.446
16   2000   2.736      33.192 ... 175.022
16   3000   4.100      48.828 ... 245.598
16   4000   5.464      64.464 ... 316.174

Table 3.2: Comparison of floating-point operations (in Mflops) needed for one gradient evaluation.

Figure 3.2: Floating-point operations (in Mflops) required for Torkkola's gradient evaluation, for D = 8 and D = 12 and different N ∈ {1000, 2000, 3000, 4000}. It should not come as a surprise that the shapes of the plots reflect the quadratic nature of the number of parameters to optimize (see text).



Figure 3.3: Result of cue fusion (feature extraction) from the 9-D early vision feature space to 3-D space (the two classes are IMO and background). Only 10000 samples are plotted for the sake of exposition.


Figure 3.4: Results of the validation. The left images are the original frames (20, 30, 35 and 40) with superimposed IMO masks; the right images show the binary (thresholded) outputs of the classifier.


Chapter 4: Discussion & Conclusion

The results show that, for most data sets, the Artés-Rodríguez approach yields better results. In our view, the better performance of the Artés-Rodríguez algorithm can be explained by the fact that IAR is much smoother than the other measures, including the IC metric used in Torkkola's method, while having almost coinciding maxima. This is illustrated in Fig. 4.1. Another issue is data preprocessing: in Torkkola's method, only PCA is used as the initial preprocessing, whereas the Artés-Rodríguez method employs a more sophisticated preprocessing, PCA followed by SIR (Sliced Inverse Regression), which by itself already yields a quite good MI result. One should also mention that the Artés-Rodríguez algorithm employs a simple adaptation of the learning rate during optimization, while MeRMaId-SIG uses pure gradient ascent with a constant learning rate. This makes the Artés-Rodríguez algorithm much faster. In summary, the Artés-Rodríguez approach is not only robust but also fast and reliable, and holds promise for a successful cue fusion application.


Figure 4.1: Plots of IB, IAR, IC and I(2) (the latter doubled for the sake of exposition) as a function of the angle of the direction onto which the data points are projected, for a two-dimensional data set consisting of three equal-sized Gaussian clusters. For each plot, the direction of the maximum is indicated with a line segment. It can be clearly seen that IAR is much smoother than the other measures, with almost coinciding maxima.


Chapter 5: Future work

5.1 Incorporating new methods

5.1.1 Population methods for early vision cues extraction

Disparity information could be encoded by the population response of a set of cells with different disparity tunings, i.e., with different phase relationships Δψ between the sinusoidal modulations of the left and right RFs of their binocular receptive fields:

$$h^L(x, y) = g(x, y, \psi), \qquad h^R(x, y) = g(x, y, \psi + \Delta\psi). \tag{5.1}$$

The distributed representation of disparity (and, in general, of the other early vision cues) requires a larger number of RFs, but it allows maximal flexibility in modulating the cells' responses through feedback (i.e., from higher levels) and recurrent interactions. In this way, it will be possible to adjust (enhance or inhibit) the cells' responses in accordance with signals coming from spatial neighborhoods of the same channel or from different channels. From this perspective, in the following we will focus on population methods, albeit most of the proposed filter design criteria apply to the direct methods too.
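As a concrete illustration, the sketch below builds a population of phase-shift binocular units from Eq. (5.1). The sum-then-square energy combination, the parameter values, and the reuse of the gabor_kernel function sketched in Section 2.1.1 are our assumptions: the report specifies the population code, not how the responses are combined, and the classical binocular energy model is used here as a stand-in.

```python
import numpy as np
from scipy.signal import convolve2d

def binocular_energy(img_L, img_R, dpsi, k0=0.6, sigma=4.0):
    """Energy response of one binocular cell with interocular phase shift dpsi (Eq. 5.1).

    gabor_kernel is the function sketched in Section 2.1.1; the sum-then-square
    combination is the classical binocular energy model (an assumption here).
    """
    hL = (gabor_kernel(21, k0, sigma, sigma, psi=0.0)
          + 1j * gabor_kernel(21, k0, sigma, sigma, psi=np.pi / 2))
    hR = (gabor_kernel(21, k0, sigma, sigma, psi=dpsi)
          + 1j * gabor_kernel(21, k0, sigma, sigma, psi=dpsi + np.pi / 2))
    rL = convolve2d(img_L, hL.real, mode='same') + 1j * convolve2d(img_L, hL.imag, mode='same')
    rR = convolve2d(img_R, hR.real, mode='same') + 1j * convolve2d(img_R, hR.imag, mode='same')
    return np.abs(rL + rR) ** 2

# Population over phase shifts; the preferred disparity of each unit is dpsi / k0.
left = np.random.rand(64, 64)
right = np.roll(left, 2, axis=1)      # toy stereo pair with a 2-pixel horizontal shift
population = [binocular_energy(left, right, dpsi)
              for dpsi in np.linspace(-np.pi, np.pi, 9)]
```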

5.1.2 Support Vector Machines

Support vector machines (SVMs) are a family of supervised learning methods for classification and regression. Their common factor is the use of a technique known as the kernel trick, which applies linear classification techniques to nonlinear classification problems: the original observations are implicitly mapped into a higher-dimensional space, so that linear classification in the new space is equivalent to nonlinear classification in the original space. SVMs are also characterized by the absence of local minima, the sparseness of the solution, and the capacity control obtained by acting on the margin or on other "dimension independent" quantities such as the number of support vectors. They were invented by Vladimir Vapnik and his co-workers, and introduced in [5].

We are currently integrating LS-SVMlab 1.5 [28], a MATLAB/C implementation of the Least Squares Support Vector Machine (LS-SVM) classifier, into our system. LS-SVM reformulates the standard SVM so that training amounts to solving a linear KKT (Karush-Kuhn-Tucker) system. LS-SVM-like primal-dual formulations have been given to kernel PCA (Principal Component Analysis), kernel CCA (Canonical Correlation Analysis) and kernel PLS (Partial Least Squares), thereby extending the class of primal-dual kernel machines. Links with kernel versions of classical pattern recognition algorithms, such as kernel Fisher discriminant analysis, and extensions to unsupervised learning, recurrent networks and control are also available.
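To show what "solving a linear KKT system" means in practice, here is a minimal sketch of LS-SVM classifier training following the formulation of Suykens et al. [28]. The RBF kernel choice and the hyperparameter values are illustrative assumptions; this is not the LS-SVMlab code itself.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM classifier: training reduces to the linear KKT system
    [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],
    with Omega_kl = y_k y_l K(x_k, x_l); labels y must be +/-1."""
    N = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # RBF kernel matrix (an assumption)
    Omega = np.outer(y, y) * K
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(N) / gamma
    rhs = np.concatenate([[0.0], np.ones(N)])
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                      # alpha, b

def lssvm_predict(X_train, y, alpha, b, X_test, sigma=1.0):
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return np.sign(K @ (alpha * y) + b)
```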


5.2 Future directions

In this section we present the future direction of the research, as part of the doctoral project.

5.2.1 Modelling orientation selectivity and selective contour detection

This workpackage will investigate the impact of selective feedback projections on the response of simple cells. The goal is to develop a model for the selective detection of contours of interest, such as the contours of moving objects. This will require the design of a dedicated interaction mechanism with neuronal units from the motion and depth processing channels. This interaction will activate feedback projections to the orientation-processing neuronal unit, which, in turn, will trigger recursive local interactions of simple cells, enhancing their responses to the moving object. In the end, the contours of the selected moving object will be enhanced whereas other contours will be suppressed.

The developed model for selective edge detection will be adapted to the needs of image processing. The model should fulfill the specific requirements of computer vision applications, such as time and memory constraints, robustness with respect to noisy images, etc. In practice, the model parameters will be optimized by a training procedure running on real images. Another aspect to be investigated is the model's selective ability to extract the edges of moving objects, and the role of amplifying recursive interactions within the motion processing channel.

5.2.2 Joint motion and disparity representation

When the stereopsis problem is extended to include time-varying images, one has to deal with the problem of tracking the monocular point descriptions, or the 3-D descriptions they represent, through time. Therefore, in general, dynamic stereopsis is the integration of two problems: static stereopsis and temporal correspondence [14]. Considering jointly the binocular spatiotemporal constraints posed by moving objects in 3-D space, the disparity assigned to a point as a function of time is related to the trajectories, in the left and right (monocular) images, of the corresponding point in the 3-D scene. Therefore, dynamic stereopsis implies knowledge of the position of objects in the scene as a function of time. In general, solutions to these problems rely upon a global analysis of the optic flow or on token-matching techniques, which combine stereo correspondence and visual tracking. This part of the work is devoted to the design of local operators (e.g., binocular motion energy units) capable of providing a full description of a 3-D motion event (e.g., a spatially extended object moving in 3-D space) by projecting it onto a subspace of elemental features (motion, disparity and orientation). Such a description will rely upon space and time phase information gathered from a band-pass spatiotemporal transformation of the binocular visual signal.

5.2.3 Integration of visual processing units

The interaction of the different processing channels starts at early processing stages and will tune the neuronal units to selected patterns. This tuning amplifies the responses of the neuronal units to these patterns, shaping the originally broadly tuned response into a stable percept. The cross-channel interaction allows the detection of complex visual patterns and of their motion across the visual field. Each neuronal channel specializes in the processing of particular visual features and transmits the responses of its neurons to other processing channels via interactions with the neurons of those channels. In this way, each channel provides indispensable information to the processing pool, contributing to a common percept of complex patterns. The mechanism of cross-channel interaction of multiple neuronal units is recursive and is complemented by feedback projections transmitting signals to lower processing layers.

The above-mentioned mechanisms will be used to model the cross-channel interaction so as to integrate the responses of orientation-selective and motion- and depth-selective neuronal units. The phase temporal derivative components provided by binocular energy units will be combined at a higher level to yield a specific motion-in-depth selectivity. The stability of the estimation will be achieved by exploiting the topological organization of binocular energy units characterized by partially overlapping receptive fields and different ocular dominance indices. Integrating the responses of orientation-selective neuronal units, which carry information about local contour orientations, will further enhance the estimation and stabilize the overall percept.

5.2.4 Development and testing of computer vision techniques

This workpackage will be the testbed for the biologically-inspired computer vision techniques that we intend to develop. Its purpose is twofold:

• the testing of the developed techniques on real video data,
• the comparison with standard computer vision techniques.

In the first case, a database will be compiled of image sequences taken from a stereo video camera installed in a car close to the driver's viewpoint. Inner-city as well as outer-city driving situations will be considered, since both supply important real-world cases of tracking the heading of the driver in the presence of distractors, such as oncoming traffic, and of segmenting independently moving objects with possibly, but not necessarily, different headings.

In the second case, other algorithms will be used for segmenting and tracking independently moving objects, and their results will be compared with those of our multi-channel approach. A variety of techniques could be considered here, such as ones that:

• use optic flow to segment objects by their difference in heading from the observer [20],
• use optic flow to segment objects by clustering egomotion constraints [17],
• use normal flow and minimize the local variance of depth estimates [3],
• use image intensity values and combine 2D-feature tracking (corners) with 1D boundaries (edges) [27].

Such a comparison will allow us to critically judge the progress made by our biologically-inspired multi-channel vision system.


Chapter 6: Publication

The results of the comparison of the two feature extraction methods [4] will be reported at the IEEE International Workshop on Machine Learning for Signal Processing (formerly the IEEE Workshop on Neural Networks for Signal Processing), September 6–8, 2006, Maynooth, Ireland.


Bibliography

[1] A. Artés-Rodríguez and J.M. Leiva-Murillo. Maximization of mutual information for supervised linear feature extraction. IEEE Transactions on Neural Networks, 2006. (in press).

[2] K.D. Bollacker and J. Ghosh. Linear feature extractors based on mutual information. In Proceedings of the 13th International Conference on Pattern Recognition, volume 2, pages 720–724, 1996.

[3] T. Brodský, C. Fermüller, and Y. Aloimonos. Structure from motion: beyond the epipolar constraint. International Journal of Computer Vision, 37(3):231–258, 2000.

[4] N. Chumerin and M.M. Van Hulle. Comparison of two feature extraction methods based on maximization of mutual information. IEEE Machine Learning for Signal Processing (MLSP) Workshop, 2006. (in press).

[5] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.

[6] T. Gautama and M.M. Van Hulle. Function of center-surround antagonism for motion in visual area MT/V5: a modeling study. Vision Research, 41(28):3917–3930, 2001.

[7] C. Grigorescu, N. Petkov, and M.A. Westenberg. Contour detection based on nonclassical receptive field inhibition. IEEE Transactions on Image Processing, 12(7):729–739, 2003.

[8] S. Grossberg, E. Mingolla, and W.D. Ross. Visual brain and visual perception: how does the cortex do perceptual grouping? Trends in Neurosciences, 20(3):106–111, 1997.

[9] S. Grossberg and R. Raizada. Contrast-sensitive perceptual grouping and object-based attention in the laminar circuits of primary visual cortex. Vision Research, 40(10):1413–1432, 2000.

[10] K.E. Hild II, D. Erdogmus, K. Torkkola, and J.C. Principe. Sequential feature extraction using information-theoretic learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006. (in press).

[11] A. Hyvärinen. New approximations of differential entropy for independent component analysis and projection pursuit. In M.I. Jordan, M.J. Kearns, and S.A. Solla, editors, Advances in Neural Information Processing Systems, volume 10, pages 273–279. The MIT Press, 1998.

[12] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. John Wiley & Sons, 2001.

[13] J.E. Jackson. A User's Guide to Principal Components. Wiley, New York, 1991.

[14] M. Jenkin and J.K. Tsotsos. Applying temporal constraints to the dynamic stereo problem. Computer Vision, Graphics, and Image Processing, 33(1):16–32, 1986.

[15] M. Kolesnik and A. Barlit. Iterative orientation tuning in V1: a simple cell circuit with cross-orientation suppression. Lecture Notes in Computer Science, pages 232–238, 2003.

[16] M. Kolesnik, A. Barlit, and E. Zubkov. Iterative tuning of simple cells for contrast invariant edge enhancement. In Proceedings of the 2nd International Workshop on Biologically Motivated Computer Vision, pages 27–37, 1993.

[17] J. MacLean. Recovery of Egomotion and Segmentation of Independent Object Motion Using the EM-Algorithm. National Library of Canada / Bibliothèque nationale du Canada, 1996.

[18] H. Neumann, L. Pessoa, and T. Hansen. Interaction of ON and OFF pathways for visual contrast measurement. Biological Cybernetics, 81(5):515–532, 1999.

[19] E. Parzen. On the estimation of a probability density function and mode. Annals of Mathematical Statistics, 33:1065–1076, 1962.

[20] K. Pauwels and M.M. Van Hulle. Segmenting independently moving objects from egomotion flow fields. Isle of Skye, Scotland, 2004.

[21] L. Pessoa, E. Mingolla, and H. Neumann. A contrast- and luminance-driven multiscale network model of brightness perception. Vision Research, 35(15):2201–2223, 1995.

[22] J.C. Principe, J.W. Fisher III, and D. Xu. Information theoretic learning. In S. Haykin, editor, Unsupervised Adaptive Filtering. Wiley, New York, 2000.

[23] S.P. Sabatini and F. Solari. Emergence of motion-in-depth selectivity in the visual cortex through linear combination of binocular energy complex cells with different ocular dominance. 2003.

[24] S.P. Sabatini, F. Solari, and G.M. Bisio. A cortical architecture for the binocular perception of motion-in-depth. 2001.

[25] T.D. Sanger. Stereo disparity computation using Gabor filters. Biological Cybernetics, 59(6):405–418, 1988.

[26] G. Sary, R. Vogels, and G.A. Orban. Cue-invariant shape selectivity of macaque inferior temporal neurons. Science, 260(5110):995–997, 1993.

[27] S.M. Smith and J.M. Brady. ASSET-2: real-time motion segmentation and shape tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(8):814–820, 1995.

[28] J.A.K. Suykens, T. Van Gestel, J. De Brabanter, B. De Moor, and J. Vandewalle. Least Squares Support Vector Machines. World Scientific Publishing Company, 2002.

[29] K. Tanaka. Mechanisms of visual object recognition: monkey and human studies. Current Opinion in Neurobiology, 7:523–529, 1997.

[30] K. Torkkola and W.M. Campbell. Mutual information in learning feature transformations. In Proceedings of the 17th International Conference on Machine Learning (ICML), pages 1015–1022, 2000.

[31] F.W. Young and R.M. Hamer. Multidimensional Scaling: History, Theory, and Applications. L. Erlbaum Associates, 1987.

