Large Scale Online Learning of Image Similarity Through Ranking

Gal Chechik∗, Varun Sharma∗, Samy Bengio∗
∗Google, Mountain View, CA, USA
{gal,vasharma,bengio}@google.com

Uri Shalit†
†Hebrew University, Jerusalem, Israel
[email protected]

Learning a measure of similarity between pairs of objects is a fundamental problem in machine learning. Pairwise similarity plays a crucial role in classification algorithms like nearest neighbors, and is practically important for applications like searching for images that are similar to a given image or finding videos that are relevant to a given video. In these tasks, users look for objects that are both visually similar and semantically related to a given object. Unfortunately, current approaches to learning semantic similarity are limited to small-scale datasets, because their complexity grows quadratically with the sample size, and because they impose costly positivity constraints on the learned similarity functions. To address real-world large-scale AI problems, like learning similarity over all images on the web, we need to develop new algorithms that scale to many samples, many classes, and many features. The current abstract presents OASIS, an Online Algorithm for Scalable Image Similarity learning, which learns a bilinear similarity measure over sparse representations. OASIS is an online dual approach based on the passive-aggressive family of learning algorithms, with a large-margin criterion and an efficient hinge-loss cost. Our experiments show that OASIS is both fast and accurate at a wide range of scales: for a dataset with thousands of images, it achieves better results than existing state-of-the-art methods, while being an order of magnitude faster. Comparing OASIS with different symmetric variants provides unexpected insights into the effect of symmetry on the quality of the similarity. For large, web-scale datasets, OASIS can be trained on more than two million images from 150K text queries within two days on a single CPU. Human evaluations showed that 35% of the top ten images ranked by OASIS were semantically relevant to a query image. This suggests that query-independent similarity can be accurately learned even for large-scale datasets that could not be handled before.

1 The Similarity Learning Model and Algorithm

We focus on a similarity learning problem that only assumes a supervised signal about the relative similarity of image pairs. Given a set of images, each represented as a vector of features $p_i \in \mathbb{R}^d$, we assume that for every image $p_i$ we have access to images that are similar to $p_i$ and images that are less similar. Formally, for a small fraction of image pairs we have a relevance measure available, $r_{ij} = r(p_i, p_j) \in \mathbb{R}$, which states how strongly $p_j$ is related to $p_i$. This relevance measure could encode the fact that two images share the same label or match the same query. We do not assume that the values of $r$ are precise, only that they correctly capture the ordering among pairs. Our goal is to learn a similarity measure $S_W$ of the form

$$S_W(p_i, p_j) \equiv p_i^T W p_j \tag{1}$$

with parameters $W \in \mathbb{R}^{d \times d}$. Importantly, if the image vectors $p_i$ are sparse, then $S_W$ can be computed very efficiently even when $d$ is large. We propose an online algorithm based on the Passive-Aggressive (PA) family of learning algorithms introduced by [CDK+06]. Here we consider an algorithm that uses triplets of images $(p_i, p_i^+, p_i^-)$ that obey $r(p_i, p_i^+) > r(p_i, p_i^-)$. We define the hinge loss function over all triplets:

$$L_W = \sum_{(p_i,\, p_i^+,\, p_i^-)} l_W(p_i, p_i^+, p_i^-) \quad \text{with} \quad l_W(p_i, p_i^+, p_i^-) = \max\left\{0,\; 1 - S_W(p_i, p_i^+) + S_W(p_i, p_i^-)\right\}. \tag{2}$$
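To make Eqs. 1 and 2 concrete, the following is a minimal Python sketch of the bilinear similarity and the triplet hinge loss. It is illustrative only, not the authors' implementation; sparse vectors are represented here as hypothetical (indices, values) pairs to show why sparsity makes $S_W$ cheap: only an $\mathrm{nnz}(p_i) \times \mathrm{nnz}(p_j)$ block of $W$ is ever read.

import numpy as np

def bilinear_similarity(W, idx_i, val_i, idx_j, val_j):
    # S_W(p_i, p_j) = p_i^T W p_j (Eq. 1), for sparse vectors given as
    # (indices, values) pairs. Only the block of W indexed by the
    # non-zeros is read, so the cost is nnz(p_i) * nnz(p_j), not d^2.
    return val_i @ W[np.ix_(idx_i, idx_j)] @ val_j

def triplet_hinge_loss(W, p, p_pos, p_neg):
    # l_W(p_i, p_i^+, p_i^-) of Eq. 2; each image is an (indices, values) pair.
    s_pos = bilinear_similarity(W, *p, *p_pos)
    s_neg = bilinear_similarity(W, *p, *p_neg)
    return max(0.0, 1.0 - s_pos + s_neg)

For dense features the same functions apply with idx set to np.arange(d).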

To minimize $L_W$, we apply the Passive-Aggressive algorithm iteratively to optimize $W$. First, $W$ is initialized to some value $W_0$. Then, at each training iteration $i$, we randomly select a triplet $(p_i, p_i^+, p_i^-)$ and solve the following convex problem with a soft margin:

$$W_i = \underset{W}{\operatorname{argmin}} \;\; \frac{1}{2}\,\|W - W_{i-1}\|_{Fro}^2 + C\xi \qquad \text{s.t.} \quad l_W(p_i, p_i^+, p_i^-) \le \xi \;\;\text{and}\;\; \xi \ge 0 \tag{3}$$

where $\|\cdot\|_{Fro}$ is the Frobenius norm. At each iteration $i$, $W_i$ optimizes a trade-off between remaining close to the previous parameters $W_{i-1}$ and minimizing the loss on the current triplet $l_W(p_i, p_i^+, p_i^-)$. The aggressiveness parameter $C$ controls this trade-off. Eq. 3 can be solved analytically and yields a very efficient parameter update rule. Unlike previous approaches to similarity learning, OASIS enforces neither positivity nor symmetry during learning, since projecting the learned matrix onto the set of symmetric or positive matrices after training yielded better generalization than enforcing these constraints throughout learning (not shown). The intuition is that positivity constraints help to regularize small datasets but harm learning with large data.
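The update rule itself is not spelled out in this abstract, but Eq. 3 admits the standard PA-I closed form of [CDK+06]: when the current triplet incurs a positive loss, $W$ moves along $V_i = p_i (p_i^+ - p_i^-)^T$, the negative gradient of $l_W$, with step size $\tau_i = \min\{C,\; l_{W_{i-1}} / \|V_i\|_{Fro}^2\}$. A dense sketch of one iteration under that derivation:

def oasis_update(W, p, p_pos, p_neg, C=0.1):
    # One passive-aggressive step on a single triplet (PA-I solution of Eq. 3).
    # p, p_pos, p_neg are dense 1-D numpy arrays here for brevity.
    loss = max(0.0, 1.0 - p @ W @ p_pos + p @ W @ p_neg)
    if loss == 0.0:
        return W                         # passive: triplet already satisfied
    V = np.outer(p, p_pos - p_neg)       # negative gradient of the hinge loss
    tau = min(C, loss / (V * V).sum())   # aggressive step, capped by C
    return W + tau * V

# Illustrative training loop. Starting from W0 = identity is one natural
# choice, since it makes S_W begin as the plain dot product:
# W = np.eye(d)
# for p, p_pos, p_neg in sample_triplets():   # hypothetical triplet sampler
#     W = oasis_update(W, p, p_pos, p_neg)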

2 Experiments

We first compared OASIS with small-scale methods on the standard Caltech256 benchmark. Fig. 1 compares the performance of OASIS to other recently proposed similarity learning approaches over 20 of the 256 Caltech classes. All hyper-parameters of all methods were selected using cross-validation. OASIS outperforms the other approaches, achieving higher precision across the full range from the first to the top-50 ranked images. Furthermore, OASIS was faster by 1-4 orders of magnitude than competing methods (Fig. 1B). For a fair comparison with competing approaches, we tested both a Matlab implementation and a C implementation of OASIS for this task. Finally, Fig. 1C compares the runtime of OASIS with a clever fast implementation of LMNN [WS08], which maintains a smaller active set of constraints but still scales quadratically; OASIS scales linearly on the web-scale dataset described below.
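The precision reported in Fig. 1A is precision at top $k$: the fraction of the $k$ images ranked highest by $S_W$ that are relevant to the query. A small sketch of this evaluation, reusing bilinear_similarity from above (the candidate set and relevance labels are assumptions for illustration):

def precision_at_k(W, query, candidates, relevant_ids, k=10):
    # Rank every candidate image by its learned similarity to the query,
    # then count how many of the k best-ranked candidates are relevant.
    scores = {cid: bilinear_similarity(W, *query, *vec)
              for cid, vec in candidates.items()}
    top_k = sorted(scores, key=scores.get, reverse=True)[:k]
    return sum(cid in relevant_ids for cid in top_k) / k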

[Figure 1 appears here. Panel A plots precision against the number of neighbors (0 to 50) for OASIS, MCML, LEGO, LMNN, Euclidean, and Random. Panel B is a table of training times in minutes: OASIS(M) 45±8, OASIS(M,C) 0.15±0.02, MCML(M,C) 7425±106, LEGO(M) 533±49, LMNN(M,C) 631±40. Panel C plots runtime (seconds to days, log scale) against the number of images (60 to 2M, log scale) for fast LMNN (MNIST, 10 categories), its projected extrapolation (2nd-order polynomial), and OASIS (Web data); the extrapolation reaches ~190 days at 2.3M images, where OASIS takes 2 days.]

Figure 1: A. Comparison of the precision of OASIS, LMNN [WBS06], MCML [GR06], LEGO [JKDG08], and the Euclidean metric in feature space. Each curve shows the precision at top k as a function of k neighbors. Results are averages across 5 train/test partitions (40 training images, 25 test images per class). B. Run time in minutes for the methods in panel A. M denotes a Matlab implementation; M,C denotes core components implemented in C. C. Run time as a function of dataset size for OASIS and a fast implementation of LMNN [WS08].

Our second set of experiments is two orders of magnitude larger than the previous ones. We collected a set of ∼150K text queries submitted to the Google Image Search system. For each of these queries, we had access to a set of relevant images, each associated with a numerical relevance score. This yielded a total of ∼2.7 million images, which we split into a training set of 2.3 million images and a test set of 0.4 million images. Overall, training took ∼3000 minutes (2 days) on a single CPU. Fig. 2 shows the top five images as ranked by OASIS for two example query images from the test set.

[Figure 2 appears here, showing for each query image the top 5 relevant images retrieved by OASIS.]

Figure 2: Examples of successful cases from the Web dataset using OASIS.

References

[CDK+06] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. Journal of Machine Learning Research (JMLR), 7:551-585, 2006.

[GR06] A. Globerson and S. Roweis. Metric learning by collapsing classes. Advances in Neural Information Processing Systems, 18:451, 2006.

[JKDG08] P. Jain, B. Kulis, I. Dhillon, and K. Grauman. Online metric learning and fast similarity search. In Advances in Neural Information Processing Systems, volume 22, 2008.

[WBS06] K. Weinberger, J. Blitzer, and L. Saul. Distance metric learning for large margin nearest neighbor classification. Advances in Neural Information Processing Systems, 18:1473, 2006.

[WS08] K. Weinberger and L. Saul. Fast solvers and efficient implementations for distance metric learning. In Proc. of the 25th International Conference on Machine Learning (ICML), 2008.
