Fast Conditional Kernel Density Estimation

Niels Stender
University of Aarhus
[email protected]

December 15, 2006

Abstract

A conjectured O(n) method for computation of unconditional kernel density estimates is extended to conditional density estimation. Empirical calculation time is investigated on simulated data with a limited dependent variable.

Contents

1 Introduction
  1.1 Note on Dimensionality
2 Kernel Methods
3 Algorithm
  3.1 Unconditional Density Algorithm
  3.2 Conditional Density Algorithm
  3.3 Outliers
4 Experiment
  4.1 Results
5 Conclusion

1 Introduction

When conducting many types of inference in the presence of a large number of observations, kernel density estimation and regression can be an attractive choice of method due to their nearly assumption-free nature. A kernel estimator is preferred in situations with a lot of local structure and a high number of observations, while other methods such as spline smoothing might be preferred in scenarios with smoother surfaces and a lower number of observations. However, computing time is a significant barrier to the usage of kernel methods: for a naive implementation, computing time scales with the number of observations $n$ as $O(n^2)$. In Gray & Moore (2001, 2003a,b) a new algorithm is introduced for unconditional kernel density estimation that is conjectured to scale as $O(n)$ and seems to do so on a range

of simulated and real-world datasets. In this paper, the Gray algorithm is extended to conditional kernel density estimation.

1.1 Note on Dimensionality

It is sometimes argued that nonparametric methods are useless in higher dimensions, since the number of observations needed to attain a fixed level of estimate confidence grows exponentially with the number of dimensions. The empty space phenomenon (Scott & Thomsen 1983) illustrates the problem: a ten-dimensional multivariate normal distribution will on average have some 99% of observations falling outside the one-standard-deviation hypercube. However, many real-world high-dimensional datasets are spanned by lower-dimensional manifolds, and so can be said to be of a lower intrinsic dimensionality.
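As an illustration, the following sketch (not part of the original paper; it assumes NumPy is available) estimates by simulation the share of draws from a ten-dimensional standard normal that fall outside the hypercube $[-1, 1]^{10}$:

    import numpy as np

    # Monte Carlo check of the empty space phenomenon: share of draws from a
    # 10-dimensional standard normal falling outside the [-1, 1]^10 hypercube.
    rng = np.random.default_rng(0)
    d, n = 10, 100_000
    x = rng.standard_normal((n, d))
    inside = np.all(np.abs(x) <= 1.0, axis=1)
    print(f"share outside the unit hypercube: {1 - inside.mean():.3f}")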

2 Kernel Methods

This section is based on Pagan & Ullah (1999) and Racine & Li (2004). The goal of density estimation is to approximate a proper density function $f : \mathbb{R}^D \rightarrow \mathbb{R}^+_0$, $D \in \mathbb{N}$, given a set of $n$ realisations $\{x_i\}_{i=1}^n$ from $f$. The main idea behind kernel density estimation is as follows. Given a point of interest $x^* \in \mathbb{R}^D$ we estimate $f(x^*)$ by calculating the proportion of observations falling in the neighborhood of $x^*$, each observation $x_i$ weighted proportionally to its distance to $x^*$. Given a weight function, also called a kernel, $K(\cdot)$, the kernel density estimate $\hat{f}$ for $f$ is calculated by eq. (1).

$$\hat{f}(x^*) = \frac{1}{n} \sum_{i=1}^{n} K(x^* - x_i) \tag{1}$$

A kernel is any function for which $\int_{\mathbb{R}^D} K(x)\,dx = 1$ and is often taken to be a product of univariate smooth functions $K_d(\cdot)$, such as Gaussians. The distance metric is scaled by a bandwidth $h_d$ along each dimension, resulting in eq. (2). Bandwidth scaling of distance necessitates the inclusion of additional normalization terms $\frac{1}{h_d}$, to ensure $\int_{\mathbb{R}^D} \hat{f}(x)\,dx = 1$.

$$\hat{f}(x^* \mid h) = \frac{1}{n} \sum_{i=1}^{n} \prod_{d=1}^{D} \frac{1}{h_d} K_d\!\left(\frac{x_{*,d} - x_{i,d}}{h_d}\right), \tag{2}$$

$x_{*,d}$ and $x_{i,d}$ being the $d$'th element of vectors $x^*$ and $x_i$, respectively.

Conditional kernel density estimation is a straightforward extension of plain estimation. Given a point of interest $z^* = (y^*, x^*)$ we can estimate $f(y^* \mid x^*)$ by eq. (3).

$$\hat{f}(y^* \mid x^*, h) = \begin{cases} \dfrac{\hat{f}(z^* \mid h)}{\hat{f}(x^* \mid h)} & \text{for } \hat{f}(x^* \mid h) > 0 \\ 0 & \text{for } \hat{f}(x^* \mid h) = 0 \end{cases} \tag{3}$$

The possibility of zero probability in eq. (3) can in practice lead to optimization problems, so a modified version will often be used, as described in section 3.3. Limited dependent variables can be accommodated by discrete kernels. The simplest possible is the counting measure, $K_{count}(y_i, y_j) = \mathbf{1}(y_i = y_j)$. The unordered categorical kernel shown in eq. (4) is a more sophisticated choice in allowing for a variable degree of smoothing.


$$K_{uc}(y_i, y_j \mid \lambda) = \begin{cases} \lambda & \text{for } y_i = y_j \\ \dfrac{1-\lambda}{c-1} & \text{for } y_i \neq y_j \end{cases}, \qquad \lambda \in \left[\tfrac{1}{c}; 1\right], \; y_i, y_j \in \{0, \ldots, c-1\}, \tag{4}$$

$c$ being the number of categories for $y_i$. For $\lambda = 1$ eq. (4) equals the counting measure, while $\lambda = \frac{1}{c}$ results in an indiscriminate kernel giving equal weight to matching and non-matching observations.

Existing Implementations. Returning to the issue of existing implementations, by a naive approach it is meant that the program code calculates eq. (3) directly, in the sense that each $x$ in the data is compared to every other point, with no regard to the structure of the data. In any realistic case, the bandwidths $h_d$ will be much smaller than the range of the data, so two points $x_i$ and $x_j$ that are very far apart will yield a kernel value of approximately zero (exactly zero for some kernels). Any smarter kernel density implementation tries to use that fact to its advantage, by including heuristics that avoid the added computational burden of comparing data points that do not add probability mass. A typical implementation is to use binning, the process of grouping data into predefined intervals, or a grid in the two-dimensional case. The kernel density estimate is then calculated using pairs of points within the same bin, while pairs of data points in different bins are ignored or approximated by a representative point for each bin. Traditional binning methods suffer from the fact that the bins do not match the local structure of the data, so the approximation error will be very large in some areas. On the other hand, if the bins are chosen to be too small, little is gained in terms of computational speed. The algorithm presented in this paper is aware of the local structure of the data and is more intelligent in deciding which pairs of $x_i$ and $x_j$ should be compared directly and which pairs to approximate.
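For concreteness, here is a minimal sketch (my own, not the paper's code; it assumes NumPy and SciPy, and the function names naive_ckde and k_unordered are hypothetical) of the naive estimator implied by eqs. (2), (3) and (4): a product Gaussian kernel over the continuous regressors and the unordered categorical kernel over the dependent variable, with every query point compared to every training point.

    import numpy as np
    from scipy.stats import norm

    def k_unordered(y_i, y_j, lam, c):
        """Unordered categorical kernel, eq. (4); lam = 1 gives the counting measure."""
        return lam if y_i == y_j else (1.0 - lam) / (c - 1)

    def naive_ckde(y_star, x_star, y_train, x_train, h, lam, c):
        """Naive conditional density estimate f(y*|x*): every query point is
        compared to every training point, which is the O(n^2) cost the
        dual-tree algorithm of the next section tries to avoid."""
        # Product Gaussian kernel weights over the continuous dimensions, eq. (2).
        u = (x_star - x_train) / h                      # (n, D)
        w_x = np.prod(norm.pdf(u) / h, axis=1)          # (n,)
        # Discrete kernel weights over the dependent variable, eq. (4).
        w_y = np.array([k_unordered(y_star, y_t, lam, c) for y_t in y_train])
        denom = w_x.sum()                               # ratio of eq. (3)
        return (w_x * w_y).sum() / denom if denom > 0.0 else 0.0

    rng = np.random.default_rng(4)
    x_train = rng.uniform(0, 2, size=(1000, 2))
    y_train = rng.integers(0, 2, size=1000)
    print(naive_ckde(1, np.array([0.5, 0.5]), y_train, x_train,
                     h=np.array([0.3, 0.3]), lam=0.9, c=2))

The common 1/n and 1/h factors cancel in the ratio of eq. (3), which is why only the kernel sums appear.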

3 Algorithm

The following is a simplified walk-through of the mechanisms of the unconditional kernel density algorithm, based on Gray & Moore (2003a). The new method is based on a form of sophisticated binning. Instead of clustering observations on an evenly spaced grid, a binary tree stores observations hierarchically in clusters of similar observations. In the root node, all observations are available. Then, along the widest dimension, data is split at the midpoint of this dimension into two groups, thus creating two new nodes. Each node is decorated with information on the bounds of all data it contains. The process is iterated until each node contains at most a preset maximum number of observations. Thus the further down the tree one moves, the more similar observations will be. In computer science such a tree is called a kd-tree, for k-dimensional tree.
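The following Python sketch (an illustration under my own assumptions, not the paper's implementation; KDNode and build_kdtree are hypothetical names) builds such a tree: each node stores the bounding box of its data and is split at the midpoint of its widest dimension until a leaf-size threshold is reached.

    import numpy as np

    class KDNode:
        def __init__(self, points):
            self.points = points                 # (n, D) observations in this node
            self.lo = points.min(axis=0)         # lower corner of bounding box
            self.hi = points.max(axis=0)         # upper corner of bounding box
            self.left = self.right = None

    def build_kdtree(points, leaf_size=25):
        """Recursively split along the widest dimension at its midpoint."""
        node = KDNode(points)
        if len(points) > leaf_size:
            widths = node.hi - node.lo
            d = int(np.argmax(widths))           # widest dimension
            cut = 0.5 * (node.lo[d] + node.hi[d])
            mask = points[:, d] <= cut
            # Guard against degenerate splits where all points fall on one side.
            if mask.any() and (~mask).any():
                node.left = build_kdtree(points[mask], leaf_size)
                node.right = build_kdtree(points[~mask], leaf_size)
        return node

    rng = np.random.default_rng(2)
    root = build_kdtree(rng.uniform(0, 2, size=(200, 2)), leaf_size=25)
    print(root.lo, root.hi, root.left is not None)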


[Figure: kd-tree built on a two-dimensional sample (root node, n = 20). The root splits at x1 <= 1.0006 into two nodes of 10 observations each; deeper levels split at x2 <= 0.992579, x2 <= 0.885561 and x1 <= 1.51517. The panels labelled "Tree", "Level 2", "Level 3" and "Level 4" show the corresponding partitions of the (x1, x2) plane.]

Assume the existence of two datasets: a query set and a training set. Suppose we would like to calculate the likelihood of the query set, using the training set to form the kernel density estimate at each query point. First, a kd-tree is generated for each set. Pairs of nodes in the trees are now processed iteratively. Given a node Q from the query tree and a node T from the training tree, we can infer a lower bound $d_l$ and an upper bound $d_u$ on the likelihood contribution of T to Q by comparing the pre-calculated data boundaries saved in the nodes. If $|d_u - d_l|$ is below some preset threshold $\delta$, the likelihood contribution is approximated by letting each $x_t \in T$ contribute, say, $(d_l + d_u)/2$ to each $x_q \in Q$. Now all pairwise comparisons of the child nodes of T and Q can be pruned. In other words, chunks of data in Q are compared against chunks of data in T; pairs of chunks with high relative heterogeneity are examined more closely in smaller chunks, while those with low heterogeneity can be put aside.
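The bounds come from the bounding boxes stored in the nodes. A sketch (my own, assuming a Gaussian kernel; mindist, maxdist and node_bounds are hypothetical names) of how the minimum and maximum possible distances between two hyperrectangles translate into $d_l$ and $d_u$:

    import numpy as np
    from scipy.stats import norm

    def mindist(lo_q, hi_q, lo_t, hi_t):
        """Smallest possible Euclidean distance between the two boxes."""
        gap = np.maximum(0.0, np.maximum(lo_q - hi_t, lo_t - hi_q))
        return np.sqrt(np.sum(gap ** 2))

    def maxdist(lo_q, hi_q, lo_t, hi_t):
        """Largest possible Euclidean distance between the two boxes."""
        far = np.maximum(np.abs(hi_q - lo_t), np.abs(hi_t - lo_q))
        return np.sqrt(np.sum(far ** 2))

    def node_bounds(lo_q, hi_q, lo_t, hi_t, n_t, h=1.0):
        """Lower/upper bounds on the contribution of n_t training points to any
        query point in Q: the kernel is smallest at the maximum distance and
        largest at the minimum distance."""
        d_l = n_t * norm.pdf(maxdist(lo_q, hi_q, lo_t, hi_t) / h) / h
        d_u = n_t * norm.pdf(mindist(lo_q, hi_q, lo_t, hi_t) / h) / h
        return d_l, d_u

    print(node_bounds(np.array([0., 0.]), np.array([1., 1.]),
                      np.array([2., 2.]), np.array([3., 3.]), n_t=10, h=0.5))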

3.1 Unconditional Density Algorithm

Below is a simplified version of the Gray algorithm, while the conditional density version is presented in the following section. The iterative scheme is started by calling dualtree on the two root nodes of the query and training trees.

    dualtree(Q, T)
      dl = NT K(maxdist(Q, T))
      du = NT K(mindist(Q, T))
      if |du - dl| < δ
        for each q ∈ Q
          lq = lq + dl
          uq = uq + du - NT
        return
      if leaf(Q) and leaf(T)
        dualtree_base(Q, T)
        return
      dualtree(Q.left, T.left)
      dualtree(Q.left, T.right)
      dualtree(Q.right, T.left)
      dualtree(Q.right, T.right)
    end dualtree

    dualtree_base(Q, T)
      for each q ∈ Q
        for each t ∈ T
          c = K(xq, xt)
          lq = lq + c
          uq = uq + c - 1
    end dualtree_base

Notation. maxdist and mindist calculate the maximum and minimum distances between points in the supplied nodes, using the boundary information of the supplied nodes. leaf is true if the supplied node has no children. lq and uq are lower and upper bounds on the likelihood for individual observation q. NT is the number of observations contained in node T.
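A self-contained Python rendering of this control flow might look as follows. This is my own sketch, not the paper's code: it is restricted to one-dimensional data and a Gaussian kernel, both bounds are accumulated from zero rather than using the pseudocode's initialization trick, and it handles leaf/non-leaf node pairs explicitly; Node and dualtree are hypothetical names.

    import numpy as np
    from scipy.stats import norm

    class Node:
        """kd-tree node over a 1-D sample; idx holds positions into the data array."""
        def __init__(self, idx, x, leaf_size=32):
            self.idx = idx
            self.lo, self.hi = x[idx].min(), x[idx].max()
            self.left = self.right = None
            if len(idx) > leaf_size:
                cut = 0.5 * (self.lo + self.hi)
                m = x[idx] <= cut
                if m.any() and (~m).any():
                    self.left = Node(idx[m], x, leaf_size)
                    self.right = Node(idx[~m], x, leaf_size)

    def dualtree(Q, T, xq, xt, h, lower, upper, delta=1e-4):
        """Accumulate lower/upper bounds on sum_t K(x_q - x_t) for each query point,
        pruning node pairs whose bound gap is below delta."""
        n_t = len(T.idx)
        mind = max(0.0, max(Q.lo - T.hi, T.lo - Q.hi))     # closest possible pair
        maxd = max(abs(Q.hi - T.lo), abs(T.hi - Q.lo))     # farthest possible pair
        dl = n_t * norm.pdf(maxd / h) / h
        du = n_t * norm.pdf(mind / h) / h
        if du - dl < delta:                                # approximate and prune
            lower[Q.idx] += dl
            upper[Q.idx] += du
            return
        if Q.left is None and T.left is None:              # leaf pair: exact sums
            k = norm.pdf((xq[Q.idx, None] - xt[None, T.idx]) / h) / h
            s = k.sum(axis=1)
            lower[Q.idx] += s
            upper[Q.idx] += s
            return
        for q in ([Q] if Q.left is None else [Q.left, Q.right]):
            for t in ([T] if T.left is None else [T.left, T.right]):
                dualtree(q, t, xq, xt, h, lower, upper, delta)

    rng = np.random.default_rng(3)
    xt = rng.normal(size=2000); xq = rng.normal(size=500)
    Troot, Qroot = Node(np.arange(len(xt)), xt), Node(np.arange(len(xq)), xq)
    lower = np.zeros(len(xq)); upper = np.zeros(len(xq))
    dualtree(Qroot, Troot, xq, xt, h=0.2, lower=lower, upper=upper)
    fhat = 0.5 * (lower + upper) / len(xt)                 # midpoint density estimate
    print(fhat[:3])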

3.2 Conditional Density Algorithm

Each node in the training tree is now decorated with an additional piece of information: the empirical density of the dependent variable based on the information available at the node level, $f_{Y,T}(y)$.

    dualtree(Q, T)
      dl = NT KX(maxdist(Q, T))
      du = NT KX(mindist(Q, T))
      if |du - dl| < δ
        for each q ∈ Q
          lx,q = lx,q + dl
          ux,q = ux,q + du - NT
          for each y* ∈ Y
            cxy = KY(yq, y*) × fY,T(y*)
            lxy,q = lxy,q + dl × cxy
            uxy,q = uxy,q + du × cxy
        return
      if leaf(Q) and leaf(T)
        dualtree_base(Q, T)
        return
      dualtree(Q.left, T.left)
      dualtree(Q.left, T.right)
      dualtree(Q.right, T.left)
      dualtree(Q.right, T.right)
    end dualtree

    dualtree_base(Q, T)
      for each q ∈ Q
        for each t ∈ T
          cx = KX(xq, xt)
          lx,q = lx,q + cx
          ux,q = ux,q + cx - 1
          cxy = cx × KY(yq, yt)
          lxy,q = lxy,q + cxy
          uxy,q = uxy,q + cxy
    end dualtree_base

Lower and upper bounds on the global log-likelihood are calculated as a function of the individual boundaries $l_{x,i}$, $u_{x,i}$, $l_{xy,i}$ and $u_{xy,i}$:

$$(L_l, L_u) = \left( \sum_{i=1}^{N} \log\frac{l_{xy,i}}{u_{x,i}}, \;\; \sum_{i=1}^{N} \log\frac{u_{xy,i}}{l_{x,i}} \right). \tag{5}$$
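A small sketch (mine, under the reconstruction of eq. (5) given above; loglik_bounds is a hypothetical name) of how the per-observation bounds combine into the global log-likelihood bounds: the lower bound pairs the smallest joint mass with the largest marginal mass, and vice versa for the upper bound.

    import numpy as np

    def loglik_bounds(l_x, u_x, l_xy, u_xy):
        """Global bounds (L_l, L_u) on the conditional log-likelihood, eq. (5).
        The arrays hold per-observation bounds on the marginal (x) and joint
        (x, y) kernel sums."""
        L_l = np.sum(np.log(l_xy / u_x))
        L_u = np.sum(np.log(u_xy / l_x))
        return L_l, L_u

    # Example with three observations whose bounds have already been accumulated.
    print(loglik_bounds(np.array([0.8, 1.1, 0.9]), np.array([1.0, 1.3, 1.1]),
                        np.array([0.3, 0.6, 0.4]), np.array([0.5, 0.8, 0.6])))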

3.3 Outliers

A situation can arise in which one or more observations in Q has no neighbours in T. One can choose to ignore a fixed proportion of far-flung observations in the hope that no outliers will be included, as is often done in practice and in the literature. As an alternative, one can assume that a mixture model generates the data: with probability $\alpha$ an observation is an instance of the non-parametric model $f_{np}$, and with a small probability $(1 - \alpha)$ it is an instance from a multivariate uniform distribution $f_{unif}$ bounded by the range of T, giving the density

$$f_{mix}(x) = \alpha f_{np}(x) + (1 - \alpha) f_{unif}(x).$$

Note that $\alpha$ is fixed in advance and not considered a part of the estimation. Now all observations will be assigned some probability, though it will be very small for observations without neighbours. A third alternative exists for conditional density estimation. When an observation turns up with no probability mass, a crude nearest-neighbour principle can be used. Since a tree has already been generated, a "tree-nearest-neighbour" search is carried out by sweeping the parent leaf and possibly a few neighbouring leaves.
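A brief sketch (mine; mixture_density is a hypothetical name) of this outlier guard: wrap a nonparametric density estimate in a mixture with a uniform component over the bounding box of the training data, with α fixed in advance, so that queries inside the range of T never receive exactly zero mass.

    import numpy as np

    def mixture_density(f_np_values, train_lo, train_hi, alpha=0.99):
        """Mix a nonparametric density estimate with a uniform density over the
        bounding box of the training data. The sketch assumes the query points
        lie inside that box, where the uniform component is 1/volume."""
        volume = np.prod(train_hi - train_lo)
        f_unif = 1.0 / volume
        return alpha * f_np_values + (1.0 - alpha) * f_unif

    # A query point with essentially no neighbours keeps a small positive density.
    print(mixture_density(np.array([0.0, 0.8]),
                          np.array([0.0, 0.0]), np.array([2.0, 2.0])))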

4 Experiment

Artificial data is generated to test how algorithmic time performance scales with the number of observations, namely how long it takes to calculate a leave-one-out cross-validation statistic at the optimal bandwidth.

    n        min   max
    32000    0.60  8.00
    64000    0.42  5.66
    128000   0.30  4.00
    256000   0.21  2.83
    512000   0.40  1.00
    1024000  0.28  0.71
    2048000  0.20  0.50

Table 1: Ranges for bandwidth grid search

The running times are compared to the running time of a naive implementation of the conditional kernel density estimator. Other algorithms, such as methods using binning, could have been used, but their precision is fixed beforehand; comparison would involve matching the binning width to the $\delta$ in this paper's model. As mentioned in a previous section, the simulated data must have some local structure for any speed-up algorithm to have value. The optimal bandwidths must be much smaller than the range of the data, or else every pair of points will add probability mass to the estimates of eqs. (1) and (2). The simulated data is structured as an egg tray, a smooth surface with peaks and valleys distributed uniformly, as seen in fig. 1. To someone ignorant of the global structure of the surface, this surface seems to have a lot of local structure. A binary variable $y$ is generated as a realization from the density $g$ in eq. (7), a function of two continuous variables $x_1$ and $x_2$.

$$g^*(x_1, x_2) = \frac{1}{4}\left(\sin(\pi x_1) + \sin(\pi x_2)\right) + \frac{1}{2} \tag{6}$$

$$g(x_1, x_2) = \begin{cases} 1 & \text{with probability } g^*(x_1, x_2) \\ 0 & \text{otherwise} \end{cases} \tag{7}$$

A realization of $y$ is shown in fig. 2. Both figures are zoomed versions of the actual dataset, where $(x_1, x_2) \in [0; 128]^2$. Six datasets of size $n$ are simulated five times each, letting $n \in 1000 \times \{64, 128, 256, 512, 1024, 2048\}$. The experiment is carried out on a 2 GHz Intel Centrino processor with 2 GB of RAM running Windows XP. An approximate optimal bandwidth is found by grid-searching on an interval that shrinks as $n$ grows, examining the cross-validation log-likelihood at each iteration. The relation between CV log-likelihood and a large bandwidth range is shown in fig. 5 and is used to set the initial range. A large interval is set initially at the smallest sample size, and the interval is then shrunk around the expected optimal bandwidth according to table 1.
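A sketch (my own, matching the data-generating process of eqs. (6) and (7) on the stated domain; simulate_egg_tray is a hypothetical name) of how such a dataset could be simulated:

    import numpy as np

    def simulate_egg_tray(n, side=128.0, seed=0):
        """Simulate egg-tray data: x uniform on [0, side]^2 and a binary y drawn
        with success probability g*(x1, x2) from eq. (6)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(0.0, side, size=(n, 2))
        g_star = 0.25 * (np.sin(np.pi * x[:, 0]) + np.sin(np.pi * x[:, 1])) + 0.5
        y = (rng.uniform(size=n) < g_star).astype(int)
        return x, y

    x, y = simulate_egg_tray(64_000)
    print(x.shape, y.mean())   # y should be roughly balanced on average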

4.1 Results

The average leave-one-out cross-validation log-likelihood as a function of bandwidth is displayed in fig. 4 for 64k observations, while fig. 6 displays the CV log-likelihood for the largest sample size. The identification of an optimal bandwidth becomes easier as $n$ grows. Fig. 7 and table 2 display the average calculation time at the optimal average bandwidth for each of the dataset sizes. The speeds are shown next to the estimated time of a naive conditional kernel density estimation (CKDE) implementation. For the dualtree algorithm, the growth in time consumption as $n$ increases is steeper than linear, but much less than quadratic.

[Figure 1: Contour plot of function g* on a zoomed portion of the (x1, x2) domain.]

[Figure 2: A realization of y = g(x1, x2). Grey dots: y = 0, black dots: y = 1.]


    n        Dualtree CKDE t (sec.)   Naive CKDE t (approx.)
    64000    0.4                      4.2 m
    128000   1.0                      16.9* m
    256000   2.3                      1.1* h
    512000   5.4                      4.5* h
    1024000  13.3                     18.0* h
    2048000  35.1                     71.9* h

Table 2: Calculation time in seconds for the CV log-likelihood at the optimal bandwidth

The speed ratio degrades significantly when moving from 1M to 2M data points, but this could be a hardware effect of working memory being squeezed out of the memory cache. Nonetheless, with the number of observations ranging in the millions and calculation time ranging in seconds and minutes, conditional kernel density estimation is feasible on large datasets. From fig. 5 it is seen how time consumption increases with bandwidth. Time consumption will reach an upper plateau when the bandwidth makes the kernel span the entire range of the data, and then gradually fade again as approximation can kick in when the relative distances between observation points converge to zero. In Gray & Moore (2003a) it is conjectured that time consumption reaches a maximum at the optimal bandwidth, but for the conditional kernel density case this is clearly not so. A real-world dataset is likely to display some local structure along one or more dimensions, thus ensuring that the optimal bandwidth along those dimensions is unlikely to span the range. Another experiment is carried out to look at algorithmic time consumption versus increased dimensionality. A dataset of size $n = 64k$ is generated, letting $(x_1, \ldots, x_d) \in [0; 16]^d$. Time consumption is recorded at a fixed bandwidth of 0.35. As seen from fig. 8, time consumption appears nearly linear in the number of dimensions. However, this example does not give a clear indication of dimensional performance on real-world datasets, since the optimal bandwidth increases with the number of dimensions at a fixed dataset size, and the true dimensionality of real-world data is typically lower than its nominal dimensionality. In Gray & Moore (2003a), experiments on real-world data suggest that calculation time grows polynomially as dimensionality is increased.

5 Conclusion

The computational feasibility of the speed-up method in the presence of a large dataset has been demonstrated in this paper. The author conjectures that a calculation-time speed-up is possible in all datasets with local structure implying optimal kernel bandwidths that are much smaller than the range of the data, but this has not been proven in this paper. Conditional kernel density estimation is a statistically well understood non-parametric estimator, so any researcher with access to standard computing resources and a lot of data has little reason to avoid experimenting with kernel density methods. A similar approach will work for continuous and ordered categorical dependent variables, while the presence of categorical independent variables must be given more thought.

[Figure 3: Black line, left axis: time consumption in seconds. Grey line, right axis: CV log-likelihood. Both are based on one realization of a dataset of size n = 64k. Log scales on both axes.]

[Figure 4: Average CV log-likelihood from five realizations of a dataset of size n = 64k.]

[Figure 5: Black line, left axis: time consumption in seconds. Grey line, right axis: CV log-likelihood. Both are based on one realization of a dataset of size n = 128k.]

[Figure 6: Average CV log-likelihood from five realizations of a dataset of size n = 2,048k.]

[Figure 7: Calculation time in seconds for the CV log-likelihood at the optimal bandwidth.]

[Figure 8: Time consumption in seconds at varying numbers of independent dimensions.]

References

Gray, A. G. & Moore, A. W. (2001), N-body problems in statistical learning, in T. K. Leen & T. G. Dietterich, eds, 'Advances in Neural Information Processing Systems', MIT Press.

Gray, A. G. & Moore, A. W. (2003a), Nonparametric density estimation: Toward computational tractability, in D. Barbará & C. Kamath, eds, '2003 SIAM International Conference on Data Mining'.

Gray, A. G. & Moore, A. W. (2003b), Rapid evaluation of multiple density models, in 'Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics'.

Pagan, A. & Ullah, A. (1999), Nonparametric Econometrics, Themes in Modern Econometrics, Cambridge University Press.

Racine, J. & Li, Q. (2004), 'Nonparametric estimation of regression functions with both categorical and continuous data', Journal of Econometrics 119(1), 99–130.

Scott, D. & Thomsen, J. (1983), Probability density estimation in higher dimensions, in J. Gentle, ed., 'Computer Science and Statistics: Fifteenth Symposium on the Interface', North Holland-Elsevier Science Publishers, pp. 173–179.
