Mixed Similarity Learning for Recommendation with Implicit Feedback

Mengsi Liu, Weike Pan#, Miao Liu, Yaofeng Chen, Xiaogang Peng∗ and Zhong Ming∗

College of Computer Science and Software Engineering, Shenzhen University


Introduction

Problem Definition and Illustration

Recommendation with implicit feedback
Input: implicit feedback in the form of (user, item) pairs
Output: a personalized ranked list of unexamined items for each user

Figure: Illustration of mixed similarity learning.


Notations

Table: Some notations.

\mathcal{U} : user set, u \in \mathcal{U}, |\mathcal{U}| = n
\mathcal{I} : item set, i, i', j \in \mathcal{I}, |\mathcal{I}| = m
\mathcal{U}_i : users that examined item i
\mathcal{I}_u : items examined by user u
\mathcal{N}_i : nearest neighbors of item i
\mathcal{R} = \{(u, i)\} : examination records
s^{(p)}_{ii'} : predefined similarity between items i and i'
s^{(\ell)}_{ii'} : learned similarity between items i and i'
s^{(m)}_{ii'} : mixed similarity between items i and i'
V_{i\cdot}, W_{i'\cdot} \in \mathbb{R}^{1 \times d} : item-specific latent feature vectors
b_i, b_j : item biases
\hat{r}^{(p)}_{ui}, \hat{r}^{(\ell)}_{ui}, \hat{r}^{(m)}_{ui} : predicted preferences
T : iteration number
\lambda_s, \alpha : tradeoff parameters


Overview of Our Solution

ICF [Deshpande and Karypis, TOIS 2004]: item-oriented collaborative filtering with predefined similarity
BPR [Rendle et al., UAI 2009]: recommendation with pairwise preference learning
FISMauc [Kabbur, Ning and Karypis, KDD 2013]: recommendation with learned similarity

We combine the predefined similarity, the learned similarity and pairwise preference learning in a single framework: P-FMSM (pairwise factored mixed similarity model).
P-FISM (pairwise factored item similarity model) is a special case of P-FMSM with learned similarity only.


Method

Prediction Rule of FISMauc and P-FISM

The predicted rating of user u on item i (i \in \mathcal{I}_u):

\hat{r}_{ui} = b_i + \frac{1}{\sqrt{|\mathcal{I}_u \setminus \{i\}|}} \sum_{i' \in \mathcal{I}_u \setminus \{i\}} s^{(\ell)}_{ii'}    (1)

where s^{(\ell)}_{ii'} = V_{i\cdot} W_{i'\cdot}^T is the learned similarity between item i and item i'.

The predicted rating of user u on item j (j \in \mathcal{I} \setminus \mathcal{I}_u):

\hat{r}_{uj} = b_j + \frac{1}{\sqrt{|\mathcal{I}_u|}} \sum_{i' \in \mathcal{I}_u} s^{(\ell)}_{ji'}    (2)

where s^{(\ell)}_{ji'} = V_{j\cdot} W_{i'\cdot}^T is the learned similarity between item j and item i'.
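To make the prediction rule concrete, here is a minimal NumPy sketch of Eqs. (1)-(2). It is an illustration under assumed toy parameters, not the authors' released code; the names (V, W, b) simply mirror the notation above.

```python
import numpy as np

def predict(u_items, k, V, W, b, examined):
    """Eq. (1) if k was examined by the user (k in I_u), Eq. (2) otherwise."""
    neighbors = [p for p in u_items if p != k] if examined else list(u_items)
    if not neighbors:
        return b[k]
    s_learned = V[k] @ W[neighbors].T          # s^(l)_{ki'} = V_k . W_{i'}^T
    return b[k] + s_learned.sum() / np.sqrt(len(neighbors))

# toy usage with random parameters
rng = np.random.default_rng(0)
m, d = 10, 4
V, W, b = rng.normal(size=(m, d)), rng.normal(size=(m, d)), rng.normal(size=m)
print(predict({0, 2, 5}, 2, V, W, b, examined=True))    # i in I_u, Eq. (1)
print(predict({0, 2, 5}, 7, V, W, b, examined=False))   # j not in I_u, Eq. (2)
```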


Prediction Rule of P-FMSM

The predicted rating of user u on item i (i \in \mathcal{I}_u):

\hat{r}_{ui} = b_i + \frac{1}{\sqrt{|\mathcal{I}_u \setminus \{i\}|}} \sum_{i' \in \mathcal{I}_u \setminus \{i\}} s^{(m)}_{ii'}    (3)

where s^{(m)}_{ii'} = (1 - \lambda_s) s^{(\ell)}_{ii'} + \lambda_s s^{(p)}_{ii'} s^{(\ell)}_{ii'} is the mixed similarity.

The predicted rating of user u on item j (j \in \mathcal{I} \setminus \mathcal{I}_u):

\hat{r}_{uj} = b_j + \frac{1}{\sqrt{|\mathcal{I}_u|}} \sum_{i' \in \mathcal{I}_u} s^{(m)}_{ji'}    (4)

where s^{(m)}_{ji'} = (1 - \lambda_s) s^{(\ell)}_{ji'} + \lambda_s s^{(p)}_{ji'} s^{(\ell)}_{ji'} is the mixed similarity.
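A one-function sketch of the mixed similarity, assuming s_pre is a precomputed predefined-similarity matrix (e.g., Cosine); the names are illustrative.

```python
import numpy as np

def mixed_similarity(k, neighbors, V, W, s_pre, lam_s):
    """s^(m) = (1 - lam_s) * s^(l) + lam_s * s^(p) * s^(l), factored as a weight."""
    s_learned = V[k] @ W[neighbors].T               # s^(l)_{ki'}
    return ((1.0 - lam_s) + lam_s * s_pre[k, neighbors]) * s_learned
```

Setting lam_s = 0 recovers the learned similarity only, i.e., P-FISM.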


Objective Function

The objective function of BPR, P-FISM and P-FMSM:

\min_{\Theta} \sum_{u \in \mathcal{U}} \sum_{i \in \mathcal{I}_u} \sum_{j \in \mathcal{I} \setminus \mathcal{I}_u} f_{uij}    (5)

where f_{uij} = -\ln \sigma(\hat{r}_{uij}) + \frac{\alpha}{2} \|V_{i\cdot}\|^2 + \frac{\alpha}{2} \|V_{j\cdot}\|^2 + \frac{\alpha}{2} \sum_{i' \in \mathcal{I}_u} \|W_{i'\cdot}\|_F^2 + \frac{\alpha}{2} \|b_i\|^2 + \frac{\alpha}{2} \|b_j\|^2, \hat{r}_{uij} = \hat{r}_{ui} - \hat{r}_{uj}, and \Theta = \{W_{i\cdot}, V_{i\cdot}, b_i, i = 1, 2, \ldots, m\} denotes the set of parameters to be learned.
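Below is a hedged sketch of the per-triple loss f_uij of Eq. (5) for the P-FISM case, reusing the predict() helper sketched earlier; alpha is the regularization weight, and all names are illustrative.

```python
import numpy as np

def f_uij(u_items, i, j, V, W, b, alpha):
    """Per-triple pairwise loss with L2 regularization (Eq. 5)."""
    r_uij = (predict(u_items, i, V, W, b, examined=True)
             - predict(u_items, j, V, W, b, examined=False))
    reg = 0.5 * alpha * (V[i] @ V[i] + V[j] @ V[j]
                         + sum(W[p] @ W[p] for p in u_items)
                         + b[i] ** 2 + b[j] ** 2)
    return -np.log(1.0 / (1.0 + np.exp(-r_uij))) + reg   # -ln sigma(r_uij) + reg
```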


Gradients of P-FISM

For a triple (u, i, j), we have the gradients:

\nabla b_j = \partial f_{uij} / \partial b_j = -\sigma(-\hat{r}_{uij}) (-1) + \alpha b_j,    (6)

\nabla V_{j\cdot} = \partial f_{uij} / \partial V_{j\cdot} = -\sigma(-\hat{r}_{uij}) (-\bar{U}_{u\cdot}) + \alpha V_{j\cdot},    (7)

\nabla b_i = \partial f_{uij} / \partial b_i = -\sigma(-\hat{r}_{uij}) + \alpha b_i,    (8)

\nabla V_{i\cdot} = \partial f_{uij} / \partial V_{i\cdot} = -\sigma(-\hat{r}_{uij}) \bar{U}^{-i}_{u\cdot} + \alpha V_{i\cdot},    (9)

\nabla W_{i'\cdot} = \partial f_{uij} / \partial W_{i'\cdot} = -\sigma(-\hat{r}_{uij}) \left( \frac{V_{i\cdot}}{\sqrt{|\mathcal{I}_u \setminus \{i\}|}} - \frac{V_{j\cdot}}{\sqrt{|\mathcal{I}_u|}} \right) + \alpha W_{i'\cdot}, \quad i' \in \mathcal{I}_u \setminus \{i\},    (10)

\nabla W_{i\cdot} = \partial f_{uij} / \partial W_{i\cdot} = -\sigma(-\hat{r}_{uij}) \frac{-V_{j\cdot}}{\sqrt{|\mathcal{I}_u|}} + \alpha W_{i\cdot},    (11)

where \bar{U}_{u\cdot} = \frac{1}{\sqrt{|\mathcal{I}_u|}} \sum_{i' \in \mathcal{I}_u} W_{i'\cdot} and \bar{U}^{-i}_{u\cdot} = \frac{1}{\sqrt{|\mathcal{I}_u \setminus \{i\}|}} \sum_{i' \in \mathcal{I}_u \setminus \{i\}} W_{i'\cdot}.
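One way to sanity-check such derivations is a finite-difference test. The sketch below compares the analytic gradient of Eq. (6) against a numerical derivative of the f_uij helper sketched earlier; this is an illustrative check under assumed toy parameters, not part of the authors' method.

```python
import numpy as np

def check_grad_bj(u_items, i, j, V, W, b, alpha, eps=1e-6):
    """Analytic gradient of f_uij w.r.t. b_j (Eq. 6) vs. central difference."""
    r_uij = (predict(u_items, i, V, W, b, examined=True)
             - predict(u_items, j, V, W, b, examined=False))
    # sigma(-r) = 1 / (1 + exp(r)); Eq. (6): -sigma(-r)(-1) + alpha * b_j
    analytic = -(1.0 / (1.0 + np.exp(r_uij))) * (-1.0) + alpha * b[j]
    b_hi, b_lo = b.copy(), b.copy()
    b_hi[j] += eps
    b_lo[j] -= eps
    numeric = (f_uij(u_items, i, j, V, W, b_hi, alpha)
               - f_uij(u_items, i, j, V, W, b_lo, alpha)) / (2 * eps)
    return analytic, numeric  # the two values should agree closely
```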


Gradients of P-FMSM

For a triple (u, i, j), we have the gradients:

\nabla b_j = \partial f_{uij} / \partial b_j = -\sigma(-\hat{r}_{uij}) (-1) + \alpha b_j,    (12)

\nabla V_{j\cdot} = \partial f_{uij} / \partial V_{j\cdot} = -\sigma(-\hat{r}_{uij}) (-\bar{U}_{u\cdot}) + \alpha V_{j\cdot},    (13)

\nabla b_i = \partial f_{uij} / \partial b_i = -\sigma(-\hat{r}_{uij}) + \alpha b_i,    (14)

\nabla V_{i\cdot} = \partial f_{uij} / \partial V_{i\cdot} = -\sigma(-\hat{r}_{uij}) \bar{U}^{-i}_{u\cdot} + \alpha V_{i\cdot},    (15)

\nabla W_{i'\cdot} = \partial f_{uij} / \partial W_{i'\cdot} = -\sigma(-\hat{r}_{uij}) \left( ((1 - \lambda_s) + \lambda_s s^{(p)}_{ii'}) \frac{V_{i\cdot}}{\sqrt{|\mathcal{I}_u \setminus \{i\}|}} - ((1 - \lambda_s) + \lambda_s s^{(p)}_{ji'}) \frac{V_{j\cdot}}{\sqrt{|\mathcal{I}_u|}} \right) + \alpha W_{i'\cdot}, \quad i' \in \mathcal{I}_u \setminus \{i\},    (16)

\nabla W_{i\cdot} = \partial f_{uij} / \partial W_{i\cdot} = -\sigma(-\hat{r}_{uij}) \left( -((1 - \lambda_s) + \lambda_s s^{(p)}_{ji}) \frac{V_{j\cdot}}{\sqrt{|\mathcal{I}_u|}} \right) + \alpha W_{i\cdot},    (17)

where \bar{U}_{u\cdot} = \frac{1}{\sqrt{|\mathcal{I}_u|}} \sum_{i' \in \mathcal{I}_u} ((1 - \lambda_s) + \lambda_s s^{(p)}_{ji'}) W_{i'\cdot} and \bar{U}^{-i}_{u\cdot} = \frac{1}{\sqrt{|\mathcal{I}_u \setminus \{i\}|}} \sum_{i' \in \mathcal{I}_u \setminus \{i\}} ((1 - \lambda_s) + \lambda_s s^{(p)}_{ii'}) W_{i'\cdot}.
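The only structural change relative to the P-FISM gradients is the per-neighbor mixing weight (1 - λ_s) + λ_s s^(p). A short sketch of the weighted aggregate \bar{U} used above, with illustrative names and s_pre as an assumed predefined-similarity matrix:

```python
import numpy as np

def weighted_U(k, neighbors, W, s_pre, lam_s):
    """Weighted aggregate of Eqs. (13)-(17) for target item k over its neighbors."""
    w = (1.0 - lam_s) + lam_s * s_pre[k, neighbors]   # per-neighbor mixing weights
    return (w[:, None] * W[neighbors]).sum(axis=0) / np.sqrt(len(neighbors))
```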


Update Rules of P-FISM and P-FMSM

For a triple (u, i, j), we have the update rules:

b_j = b_j - \gamma \nabla b_j    (18)

V_{j\cdot} = V_{j\cdot} - \gamma \nabla V_{j\cdot}    (19)

b_i = b_i - \gamma \nabla b_i    (20)

V_{i\cdot} = V_{i\cdot} - \gamma \nabla V_{i\cdot}    (21)

W_{i'\cdot} = W_{i'\cdot} - \gamma \nabla W_{i'\cdot}, \quad i' \in \mathcal{I}_u \setminus \{i\}    (22)

W_{i\cdot} = W_{i\cdot} - \gamma \nabla W_{i\cdot}    (23)

where \gamma is the learning rate.


Algorithm

1: Initialize the model parameters Θ
2: for t = 1, . . . , T do
3:    for t2 = 1, . . . , |R| do
4:       Randomly pick a pair (u, i) ∈ R
5:       Randomly pick an item j from I \ I_u
6:       Calculate the gradients via Eq. (12)-(17)
7:       Update the model parameters via Eq. (18)-(23)
8:    end for
9: end for

Figure: The SGD algorithm for P-FMSM.
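For illustration, a self-contained NumPy sketch of the SGD loop above for P-FMSM follows. It assumes small dense toy data and a precomputed predefined-similarity matrix s_pre; every name is illustrative rather than the authors' released implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_pfmsm(R, n_items, s_pre, d=20, T=100, gamma=0.01,
                alpha=0.01, lam_s=0.4, seed=0):
    """SGD for P-FMSM on implicit feedback R = [(u, i), ...] (sketch)."""
    rng = np.random.default_rng(seed)
    V = (rng.random((n_items, d)) - 0.5) * 0.01
    W = (rng.random((n_items, d)) - 0.5) * 0.01
    b = np.zeros(n_items)
    items_of = {}
    for u, i in R:
        items_of.setdefault(u, []).append(i)

    for _ in range(T):                                 # line 2
        for _ in range(len(R)):                        # line 3
            u, i = R[rng.integers(len(R))]             # line 4: sample (u, i)
            Iu = items_of[u]
            j = int(rng.integers(n_items))             # line 5: sample unexamined j
            while j in Iu:
                j = int(rng.integers(n_items))
            nb = [p for p in Iu if p != i]
            if not nb:
                continue
            wi = (1 - lam_s) + lam_s * s_pre[i, nb]    # mixing weights for r_ui
            wj = (1 - lam_s) + lam_s * s_pre[j, Iu]    # mixing weights for r_uj
            ci, cj = 1 / np.sqrt(len(nb)), 1 / np.sqrt(len(Iu))
            Ubar_i = ci * (wi[:, None] * W[nb]).sum(axis=0)   # weighted U^{-i}
            Ubar_j = cj * (wj[:, None] * W[Iu]).sum(axis=0)   # weighted U
            r_uij = (b[i] + V[i] @ Ubar_i) - (b[j] + V[j] @ Ubar_j)
            e = -sigmoid(-r_uij)                       # shared factor -sigma(-r_uij)
            # Eqs. (12)-(17): gradients, applied via Eqs. (18)-(23)
            gW = {}
            for k, p in enumerate(nb):                 # i' in I_u \ {i}, Eq. (16)
                gW[p] = (e * (wi[k] * ci * V[i] - wj[Iu.index(p)] * cj * V[j])
                         + alpha * W[p])
            gW[i] = e * (-wj[Iu.index(i)] * cj * V[j]) + alpha * W[i]   # Eq. (17)
            b[i] -= gamma * (e + alpha * b[i])
            b[j] -= gamma * (-e + alpha * b[j])
            V[i] -= gamma * (e * Ubar_i + alpha * V[i])
            V[j] -= gamma * (-e * Ubar_j + alpha * V[j])
            for p, g in gW.items():
                W[p] -= gamma * g
    return V, W, b
```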


Experiments

Datasets

For direct comparative empirical studies, we use four public datasets¹.

Table: Statistics of the data used in the experiments, including the numbers of users (|U|), items (|I|), and implicit feedback records (|R|) in the training (Tr.) and test (Te.) sets, and the densities (|R|/|U|/|I|) of the training records.

Data set        |U|    |I|    |R| (Tr.)   |R| (Te.)   |R|/|U|/|I| (Tr.)
MovieLens100K   943    1682   27688       27687       1.75%
MovieLens1M     6040   3952   287641      287640      1.21%
UserTag         3000   2000   123218      123218      2.05%
Netflix5K5K     5000   5000   77936       77936       0.31%

¹ https://sites.google.com/site/weikep/GBPRdata.zip


Evaluation Metrics

We use Prec@5 and NDCG@5 in the experiments:

Prec@5 = \frac{1}{|\mathcal{U}^{te}|} \sum_{u \in \mathcal{U}^{te}} \frac{1}{5} \sum_{p=1}^{5} \delta(L_u(p) \in \mathcal{I}_u^{te}),

NDCG@5 = \frac{1}{|\mathcal{U}^{te}|} \sum_{u \in \mathcal{U}^{te}} \frac{1}{Z_u} \sum_{p=1}^{5} \frac{2^{\delta(L_u(p) \in \mathcal{I}_u^{te})} - 1}{\log(p+1)}, \quad Z_u = \sum_{p=1}^{\min(5, |\mathcal{I}_u^{te}|)} \frac{1}{\log(p+1)},

where L_u(p) is the pth item for user u in the recommendation list, and \delta(x) is an indicator function with value 1 if x is true and 0 otherwise. Note that \mathcal{U}^{te} and \mathcal{I}_u^{te} denote the set of test users and the set of items examined by user u in the test data, respectively.
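A minimal sketch of both metrics as defined above, assuming top5 maps each test user to a ranked 5-item list and test_items to that user's examined test items (illustrative names):

```python
import numpy as np

def prec_ndcg_at_5(top5, test_items):
    precs, ndcgs = [], []
    for u, ranked in top5.items():
        hits = [item in test_items[u] for item in ranked[:5]]
        precs.append(sum(hits) / 5.0)
        # 2^delta - 1 is 1 on a hit and 0 otherwise; log(p+1) becomes log2(p+2)
        # here because positions p are 0-based (the log base cancels in the ratio)
        dcg = sum(1.0 / np.log2(p + 2) for p, hit in enumerate(hits) if hit)
        idcg = sum(1.0 / np.log2(p + 2)
                   for p in range(min(5, len(test_items[u]))))
        ndcgs.append(dcg / idcg if idcg > 0 else 0.0)
    return float(np.mean(precs)), float(np.mean(ndcgs))
```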


Baselines

Ranking based on items' popularity (PopRank)
Item-oriented collaborative filtering with Cosine similarity (ICF)
ICF with an amplifier on the similarity (ICF-A), which favors items with higher similarity
Factored item similarity model with AUC loss (FISMauc) and RMSE loss (FISMrmse)
Hierarchical Poisson factorization (HPF)
Bayesian personalized ranking (BPR)
A factored version of adaptive K-nearest-neighbors based recommendation (FA-KNN)
Group preference based BPR (GBPR)


Initialization of Model Parameters

We use the statistics of the training data to initialize the model parameters:

b_i = \sum_{u=1}^{n} y_{ui} / n - \mu

b_j = \sum_{u=1}^{n} y_{uj} / n - \mu

V_{ik} = (r - 0.5) \times 0.01, \quad k = 1, \ldots, d

W_{i'k} = (r - 0.5) \times 0.01, \quad k = 1, \ldots, d

where r (0 \le r < 1) is a random number, and \mu = \sum_{u=1}^{n} \sum_{i=1}^{m} y_{ui} / n / m.
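A sketch of this initialization, assuming for illustration that Y is a dense 0/1 user-item training matrix:

```python
import numpy as np

def init_params(Y, d, rng):
    """Item biases from popularity minus the global average; small random factors."""
    n, m = Y.shape
    mu = Y.sum() / (n * m)                    # global average mu
    b = Y.sum(axis=0) / n - mu                # b_i = sum_u y_ui / n - mu
    V = (rng.random((m, d)) - 0.5) * 0.01     # (r - 0.5) * 0.01
    W = (rng.random((m, d)) - 0.5) * 0.01
    return b, V, W

# usage: b, V, W = init_params(Y, d=20, rng=np.random.default_rng(0))
```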


Parameter Configurations

For ICF and ICF-A, we use Cosine similarity and |N_i| = 20 in neighborhood construction. For the amplifier in ICF-A, we use 2.5 as a typical value.

For the factorization-based methods, i.e., FISMauc, FISMrmse, BPR, FA-KNN, GBPR and our P-FMSM, we use d = 20 latent features, and search the tradeoff parameter on the regularization terms α ∈ {0.001, 0.01, 0.1} and the iteration number T ∈ {100, 500, 1000} via NDCG@5.

For the Bayesian method HPF, we use the public code and the default parameter configurations, i.e., 20 for the latent dimension number, 10 for the iteration number multiple, 0.3 for the hyperparameter, and true for bias and hierarchy.

The tradeoff parameter λ_s for the mixed similarity in P-FMSM is also searched via NDCG@5 from {0.2, 0.4, 0.6, 0.8, 1}. The learning rate γ is fixed at 0.01.


Main Results (1/4)

Table: Recommendation performance on MovieLens100K. Note that the results of GBPR are copied from [Pan and Chen, IJCAI 2013]. The significantly best results are marked in bold (p value < 0.01).

Method                                   Prec@5           NDCG@5
PopRank                                  0.2724±0.0094    0.2915±0.0072
ICF                                      0.3145±0.0018    0.3305±0.0010
ICF-A                                    0.2820±0.0048    0.2943±0.0091
FISMauc (α = 0.01, T = 100)              0.3651±0.0086    0.3788±0.0139
FISMrmse (α = 0.001, T = 100)            0.3987±0.0085    0.4153±0.0092
HPF                                      0.3412±0.0077    0.3523±0.0097
BPR (α = 0.1, T = 1000)                  0.3627±0.0079    0.3779±0.0079
FA-KNN (α = 0.1, T = 1000)               0.3679±0.0044    0.3868±0.0076
GBPR                                     0.4051±0.0038    0.4201±0.0031
P-FMSM (α = 0.001, T = 500, λ_s = 0.4)   0.4115±0.0059    0.4320±0.0054

Note that the sampling strategy of BPR and P-FMSM in this paper is more efficient than that of BPR and GBPR in [Pan and Chen, IJCAI 2013].


Main Results (2/4)

Table: Recommendation performance on MovieLens1M. Note that the results of GBPR are copied from [Pan and Chen, IJCAI 2013]. The significantly best results are marked in bold (p value < 0.01).

Method                                   Prec@5           NDCG@5
PopRank                                  0.2822±0.0019    0.2935±0.0010
ICF                                      0.3831±0.0021    0.3966±0.0020
ICF-A                                    0.3921±0.0016    0.4066±0.0022
FISMauc (α = 0.001, T = 100)             0.3502±0.0069    0.3569±0.0082
FISMrmse (α = 0.001, T = 100)            0.4227±0.0006    0.4366±0.0011
HPF                                      0.4131±0.0080    0.4262±0.0096
BPR (α = 0.01, T = 1000)                 0.4195±0.0013    0.4307±0.0008
FA-KNN (α = 0.01, T = 100)               0.3650±0.0026    0.3808±0.0017
GBPR                                     0.4494±0.0020    0.4636±0.0014
P-FMSM (α = 0.001, T = 500, λ_s = 0.4)   0.4562±0.0037    0.4731±0.0038

Note that the sampling strategy of BPR and P-FMSM in this paper is more efficient than that of BPR and GBPR in [Pan and Chen, IJCAI 2013].


Main Results (3/4)

Table: Recommendation performance on UserTag. Note that the results of GBPR are copied from [Pan and Chen, IJCAI 2013]. The significantly best results are marked in bold (p value < 0.01).

Method                                   Prec@5           NDCG@5
PopRank                                  0.2647±0.0012    0.2730±0.0014
ICF                                      0.2257±0.0051    0.2306±0.0045
ICF-A                                    0.2480±0.0028    0.2540±0.0032
FISMauc (α = 0.1, T = 500)               0.2536±0.0031    0.2619±0.0037
FISMrmse (α = 0.001, T = 100)            0.3006±0.0046    0.3092±0.0049
HPF                                      0.2684±0.0070    0.2756±0.0074
BPR (α = 0.1, T = 500)                   0.2849±0.0036    0.2931±0.0047
FA-KNN (α = 0.1, T = 1000)               0.2641±0.0046    0.2720±0.0036
GBPR                                     0.3011±0.0008    0.3104±0.0009
P-FMSM (α = 0.001, T = 500, λ_s = 0.6)   0.3129±0.0026    0.3218±0.0030

Note that the sampling strategy of BPR and P-FMSM in this paper is more efficient than that of BPR and GBPR in [Pan and Chen, IJCAI 2013].


Main Results (4/4)

Table: Recommendation performance on Netflix5K5K. Note that the results of GBPR are copied from [Pan and Chen, IJCAI 2013]. The significantly best results are marked in bold (p value < 0.01).

Method                                    Prec@5           NDCG@5
PopRank                                   0.1728±0.0012    0.1794±0.0004
ICF                                       0.2017±0.0012    0.2186±0.0008
ICF-A                                     0.1653±0.0032    0.1804±0.0041
FISMauc (α = 0.01, T = 500)               0.1932±0.0039    0.2022±0.0023
FISMrmse (α = 0.001, T = 100)             0.2217±0.0043    0.2413±0.0063
HPF                                       0.1899±0.0047    0.2044±0.0041
BPR (α = 0.01, T = 1000)                  0.2207±0.0051    0.2374±0.0055
FA-KNN (α = 0.001, T = 100)               0.2101±0.0020    0.2254±0.0029
GBPR                                      0.2411±0.0027    0.2611±0.0025
P-FMSM (α = 0.001, T = 1000, λ_s = 0.2)   0.2469±0.0027    0.2661±0.0027

Note that the sampling strategy of BPR and P-FMSM in this paper is more efficient than that of BPR and GBPR in [Pan and Chen, IJCAI 2013].


Main Results (Observations)

1. Our proposed P-FMSM performs significantly better than all the other methods in all cases, which clearly shows the advantage of integrating the predefined and learned similarities in the pairwise preference learning framework.

2. The recommendation methods based on the learned similarity (e.g., FISMauc, FISMrmse, FA-KNN and our P-FMSM) usually perform better than those based on the predefined similarity (i.e., ICF and ICF-A), which shows the helpfulness of similarity learning for a recommendation algorithm.

3. The recommendation methods based on the pairwise preference assumption (e.g., BPR, FA-KNN, GBPR and our P-FMSM) perform well, which shows its effectiveness for the uncertain implicit feedback.


Effect of Mixed Similarity

Figure: Recommendation performance (Prec@5 and NDCG@5) of P-FISM (i.e., P-FMSM with learned similarity only, λ_s = 0) and P-FMSM on MovieLens100K, MovieLens1M, UserTag and Netflix5K5K.

Observations: P-FMSM is better than P-FISM across all four datasets, which clearly shows the merit of the mixed similarity in exploiting the complementarity of the predefined similarity and the learned similarity in the pairwise preference learning framework.


Effect of Neighborhood Size (1/2)

Figure: Recommendation performance of Prec@5 using ICF and P-FMSM with different values of K (i.e., different neighborhood sizes, K ∈ {20, 30, 40, 50}), and the default P-FMSM with all the neighbors, on MovieLens100K, MovieLens1M, UserTag and Netflix5K5K.


Effect of Neighborhood Size (2/2)

Figure: Recommendation performance of NDCG@5 using ICF and P-FMSM with different values of K (i.e., different neighborhood sizes, K ∈ {20, 30, 40, 50}), and the default P-FMSM with all the neighbors, on MovieLens100K, MovieLens1M, UserTag and Netflix5K5K.


Effect of Neighborhood Size (Observations)

1. Both P-FMSM and P-FMSM (with different K) obtain significantly better results than ICF, by about 23% on MovieLens100K, 10% on MovieLens1M, 28% on UserTag, and 15% on Netflix5K5K w.r.t. the corresponding neighborhood sizes K ∈ {20, 30, 40, 50}, which shows the effectiveness of the developed algorithm.

2. The performance of P-FMSM and P-FMSM (with different K) is close, which shows that we can use a neighborhood of proper size instead of all the neighbors to achieve low space complexity. This is an appealing property for real deployment by industry practitioners.


Related Work

Related Work

We summarize some related works from the perspective of problem settings (explicit feedback, implicit feedback) and recommendation techniques (neighborhood-based and factorization-based).

Table: Summary of some related works.

                     Neighborhood-based recommendation   Factorization-based recommendation
Explicit feedback    predefined similarity: many         w/o similarity: many
                                                         learned similarity: some
                                                         mixed similarity: N/A
Implicit feedback    predefined similarity: many         w/o similarity: many
                                                         learned similarity: some
                                                         mixed similarity: P-FMSM


Conclusions

We study an important recommendation problem with implicit feedback from the perspective of item similarity.

We propose a novel mixed similarity learning model that exploits the complementarity of the predefined similarity and the learned similarity commonly used in state-of-the-art recommendation methods.

With the mixed similarity, we further develop a novel recommendation algorithm in the pairwise preference learning framework, i.e., the pairwise factored mixed similarity model (P-FMSM).

Our P-FMSM can recommend significantly better than the state-of-the-art methods with the predefined similarity or the learned similarity.


Thank you!

We thank the handling editor and reviewers for their expert comments and constructive suggestions. We thank the National Natural Science Foundation of China (No. 61502307 and No. 61672358) and the Natural Science Foundation of Guangdong Province (No. 2014A030310268 and No. 2016A030313038) for their support.
