Part I

Entity Linking

Outline - Entity Linking

- introduction
- methods
- evaluation
- test collections
- toolkits
- open challenges

Introduction

- Entity linking: enrich an input document with links, by identifying concepts in the text and linking them to the corresponding Wikipedia pages (e.g., distinguishing the hardly distinguishable meanings of the word "plant")

Figure 1: A sample Wikipedia page, with links to related articles. Wikipedia is the largest online encyclopedia, with millions of articles in more than 200 languages.

Image taken from Mihalcea and Csomai (2007). Wikify!: linking documents to encyclopedic knowledge. In CIKM '07.

Figure 4: Associating document phrases (Democrat, Delegate, President of the United States, Florida, Michigan, Hillary Rodham Clinton, Barack Obama, Nomination, Voting) with appropriate Wikipedia articles, e.g., Democrat → Democratic Party (United States).

Image taken from Milne and Witten (2008b). Learning to Link with Wikipedia. In CIKM '08.

Let’s learn something about Spin-Optical Metamaterial

See http://tagme.di.unipi.it

Microsoft Smart Tags

Google toolbar

Why do we need entity linking?
- Enable
  - semantic search
  - advanced UI/UX
  - automatic document enrichment; "go-read-here" inline annotations (microformats, RDFa)
  - ontology learning, KB population
- "Use as feature"
  - to improve classification; retrieval; word sense disambiguation; semantic similarity; ...
  - dimensionality reduction (e.g., term vectors)

A little bit of history
- Text classification
- NER
- WSD
- NED/NEN
- {person name, geo, movie name, ...} disambiguation
- (Cross-document) coreference resolution
- Automatic link generation
- Entity linking

Entity linking?
- NE normalization / canonicalization / sense disambiguation
- DB record linkage / schema mapping
  - (not the focus here, but see [Demartini et al. 2013])
- Knowledge base population
- Entity linking
  - D2W
  - Wikification
  - Semantic linking

Main problem

Main problem - Linking free text to entities
- Any piece of text
  - news documents
  - blog posts
  - tweets
  - queries
  - ...
- Entities: typically taken from a knowledge base
  - Wikipedia
  - Freebase
  - ...

Common steps
1. Determine "linkable" phrases
   - mention detection – MD
2. Rank/select candidate entity links
   - link generation – LG
   - may include NILs (null values, i.e., no target in KB)
3. (Use "context" to disambiguate/filter/improve)
   - disambiguation – DA

Example text: "… degeneracy is removed …"
- MD: … degeneracy is removed …
- LG: … degeneracy …
- DA: … degeneracy …
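The three steps above can be sketched end-to-end. Everything below (the anchor dictionary, names, and scores) is invented for illustration, and the DA step is a trivial placeholder for the context-based methods discussed later:

```python
# Toy MD -> LG -> DA pipeline; all names and scores are invented.

ANCHORS = {  # surface form -> candidate entities with commonness-like scores
    "bulls":  {"Chicago Bulls": 0.7, "Bull": 0.3},
    "jordan": {"Michael Jordan": 0.6, "Jordan": 0.4},
}

def mention_detection(text):
    """MD: flag tokens that occur in the anchor dictionary."""
    return [t for t in text.lower().split() if t in ANCHORS]

def link_generation(mentions):
    """LG: rank each mention's candidates by score (NILs would surface here)."""
    return {m: sorted(ANCHORS[m], key=ANCHORS[m].get, reverse=True) for m in mentions}

def disambiguation(candidates):
    """DA: trivially keep the top candidate (real systems use context here)."""
    return {m: ranked[0] for m, ranked in candidates.items()}

links = disambiguation(link_generation(mention_detection("Jordan played for the Bulls")))
print(links)
```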

Methods

Preliminaries

- Knowledge bases...

- Wikipedia-based methods
  - commonness
  - relatedness
  - keyphraseness

Wikipedia
- Basic element: article (proper)
- But also
  - redirect pages
  - disambiguation pages
  - category/template pages
  - admin pages
- Hyperlinks
  - use "unique identifiers" (URLs)
  - [[United States]] or [[United States|American]]
  - [[United States (TV series)]] or [[United States (TV series)|TV show]]

Disambiguation pages

- Senses of an ambiguous phrase

- Short description

- (Possible) categorization

- Non-exhaustive

Some statistics
- WordNet
  - 80k "entity" definitions
  - 115k surface forms
  - 142k senses (entity – surface form combinations)
- Wikipedia (only)
  - ~4M entity definitions
  - ~12M surface forms
  - ~24M senses

Wikipedia-based methods

Wikipedia-based methods
- keyphraseness(w) [Mihalcea & Csomai 2007]

  keyphraseness(w) = CF(w^link) / CF(w)

  where CF(w^link) is the collection frequency of term w used as a link to another Wikipedia article, and CF(w) is the collection frequency of term w overall.

Wikipedia-based methods
- commonness(w, c) [Medelyan et al. 2008]

  commonness(w, c) = |L_{w,c}| / Σ_{c'} |L_{w,c'}|

  where |L_{w,c}| is the number of links with target c and anchor text w, and the sum runs over all targets c' linked with anchor text w.
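Both statistics are simple ratios over link counts harvested from a Wikipedia dump. A minimal sketch, with made-up counts for the anchor "tree":

```python
from collections import Counter

# Made-up anchor statistics for the term "tree", standing in for counts
# harvested from a Wikipedia dump.
link_counts = Counter({          # (anchor text, target article) -> link count
    ("tree", "Tree"): 9282,
    ("tree", "Tree (data structure)"): 257,
    ("tree", "Tree (graph theory)"): 294,
})
term_freq = Counter({"tree": 500_000})  # CF(w): occurrences anywhere
link_freq = Counter({"tree": 10_000})   # CF(w^link): occurrences as a link anchor

def keyphraseness(w):
    """P(link | w): how often term w is used as a link when it appears at all."""
    return link_freq[w] / term_freq[w]

def commonness(w, c):
    """P(c | w): fraction of links with anchor text w that point to target c."""
    total = sum(n for (anchor, _), n in link_counts.items() if anchor == w)
    return link_counts[(w, c)] / total if total else 0.0

print(keyphraseness("tree"), round(commonness("tree", "Tree"), 3))
```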

Commonness and keyphraseness

FIG. 1. Sample article with phrases disambiguated to Wikipedia topics. Ambiguous phrases are highlighted in red and boldface. The top 10 unambiguous phrases in terms of keyphraseness are highlighted in gradual changing green colors. The top five unambiguous phrases are highlighted in boldface with their corresponding Wikipedia topics labeled. The most probable candidate topics are listed for the ambiguous phrases along with the corresponding commonness values, and their correct Wikipedia topics are highlighted in boldface. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

Image taken from Li et al. (2013). TSDW: Two-stage word sense disambiguation using Wikipedia. In JASIST 2013.

Wikipedia-based methods
- Of course, these can also be based on other data, e.g.,
  - (focused) web crawls, with anchor text in the links to Wikipedia articles
  - click logs [Pantel et al. 2011]

Baseline methods

Recall the steps

1. mention detection – MD

2. link generation – LG

3. (disambiguation) – DA

Wikify!

[Mihalcea & Csomai 2007]

- First paper on actual entity linking

- Identifies two steps

1. identify important concepts in the text ~ "keyword extraction" (MD)
2. link these to corresponding Wikipedia pages ~ "word sense disambiguation" (LG/DA)

Wikify!

[Mihalcea & Csomai 2007]

- MD
  - tf·idf, χ², keyphraseness

- LG/DA
  1. Overlap between definition (Wikipedia page) and context (paragraph) [Lesk 1986]
  2. Naive Bayes [Mihalcea 2007]
     - context, POS, entity-specific terms
  3. Voting between (1) and (2)

Learning to Link with Wikipedia [Milne & Witten 2008b]

- Key idea: disambiguation informs detection

- start with unambiguous senses
- compare each possible sense with its relatedness to the context sense candidates
- so, first LG, then base MD on these results

As figure 2 demonstrates, this is not always the best decision. Here tree clearly refers to one of the less common senses—the hierarchical data structure—because it is surrounded by computer science concepts. Our algorithm identifies these cases by comparing each possible sense with its surrounding context. This is a cyclic problem because these terms may also be ambiguous. Fortunately in a sufficiently long piece of text one generally finds terms that do not require any disambiguation at all, because they are only ever used to link to one Wikipedia article. There are four unambiguous links in the text of Figure 2, including algorithm, uninformed search and LIFO stack. We use every unambiguous link in the document as context to disambiguate ambiguous ones.

The link probability feature helps to identify such cases; there are millions of articles that mention "the" but do not use it as a link. Weighting context terms on this feature emphasizes those that are most likely a priori—ones that are almost always used as a link within the articles where they are found, and always link to the same destination.

Learning to Link with Wikipedia [Milne & Witten 2008b]


Secondly, many of the context terms will be outliers that do not relate to the central thread of the document. We can determine how closely a term relates to this central thread by calculating its average semantic relatedness to all other context terms, using the measure described previously. These two variables—link probability and relatedness—are averaged to provide a weight for each context term. This is then used when calculating the weighted average of a candidate sense to the context articles.

sense                 | commonness | relatedness
Tree                  | 92.82%     | 15.97%
Tree (graph theory)   |  2.94%     | 59.91%
Tree (data structure) |  2.57%     | 63.26%
Tree (set theory)     |  0.15%     | 34.04%
Phylogenetic tree     |  0.07%     | 20.33%
Christmas tree        |  0.07%     |  0.0%
Binary tree           |  0.04%     | 62.43%
Family tree           |  0.04%     | 16.31%

Figure 2: Disambiguating tree using surrounding unambiguous links as context.

Image taken from Milne and Witten (2008b). Learning to Link with Wikipedia. In CIKM '08.
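One way to read Figure 2: commonness alone would pick the wrong sense, so candidates can first be pruned by commonness and then scored by relatedness to the context. A sketch using the values from the figure (the prune-then-score strategy and the threshold are illustrative choices, not Milne and Witten's trained classifier):

```python
# Sense selection over the Figure 2 values: prune rare senses by commonness,
# then pick the one most related to the surrounding context.
candidates = {  # sense: (commonness, relatedness to context)
    "Tree":                  (0.9282, 0.1597),
    "Tree (graph theory)":   (0.0294, 0.5991),
    "Tree (data structure)": (0.0257, 0.6326),
    "Binary tree":           (0.0040, 0.6243),
}

def best_sense(cands, min_commonness=0.01):
    """Drop senses below the commonness threshold, then maximize relatedness."""
    kept = {s: rel for s, (com, rel) in cands.items() if com >= min_commonness}
    return max(kept, key=kept.get)

print(best_sense(candidates))
```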

semantic relatedness measures from Wikipedia, which we call the Wikipedia Link-based Measure (WLM). The central difference between this and other Wikipedia-based approaches is the use of Wikipedia's hyperlink structure to define relatedness. This theoretically offers a measure that is both cheaper and more accurate than ESA: cheaper, because Wikipedia's extensive textual content can largely be ignored, and more accurate, because it is more closely tied to the manually defined semantics of the resource. Wikipedia's extensive network of cross-references, portals, categories and info-boxes provides a huge amount of explicitly defined semantics. Despite the name, Explicit Semantic Analysis takes advantage of only one property: the way in which Wikipedia's text is segmented into individual topics. Its central component—the weight between a term and an article—is automatically derived rather than explicitly specified. In contrast, the central component of our approach is the link: a manually-defined connection between two manually disambiguated concepts. Wikipedia provides millions of these connections.

Strube and Ponzetto's approach—WikiRelate—took familiar techniques that had previously been applied to WordNet and modified them to suit Wikipedia. Their most accurate approach is based on Leacock & Chodorow's (1998) path-length measure, which takes into account the depth within WordNet at which the concepts are found. WikiRelate's implementation does much the same for Wikipedia's hierarchical category structure. While the results are similar in terms of accuracy to thesaurus-based techniques, the collaborative nature of Wikipedia offers a much larger—and constantly evolving—vocabulary. Gabrilovich and Markovitch (2007) achieve extremely accurate results with ESA, a technique that is somewhat reminiscent of the vector space model widely used in information retrieval. Instead of comparing vectors of term weights to evaluate the similarity between queries and documents, they compare weighted vectors of the Wikipedia articles related to each term. The name of the approach—Explicit Semantic Analysis—stems from the way these vectors are comprised of manually defined concepts.

Wikipedia-based measures
- relatedness(c, c') [Milne & Witten 2008a]

Figure 1: Obtaining a semantic relatedness measure between Automobile and Global Warming from Wikipedia links: incoming and outgoing links to articles such as Diesel Engine, Petrol Engine, Fossil Fuel, Emission Standard, Greenhouse Gas, Carbon Dioxide, Air Pollution, Kyoto Protocol, Transport, Vehicle, Henry Ford, and others.

Image taken from Milne and Witten (2008a). An Effective, Low-Cost Measure of Semantic Relatedness Obtained from Wikipedia Links. In AAAI WikiAI Workshop.

Wikipedia-based measures
- relatedness(c, c') [Milne & Witten 2008a]

  distance(c, c') = [log(max(|L_c|, |L_c'|)) − log(|L_c ∩ L_c'|)] / [log(|WP|) − log(min(|L_c|, |L_c'|))]

  where L_c is the set of articles linking to c (its inlinks), |L_c ∩ L_c'| is the number of articles linking to both c and c', and |WP| is the total number of Wikipedia articles.
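In code, over toy inlink sets (the formula above is a distance, so a relatedness score is usually taken as 1 minus it, floored at 0; the corpus size is an assumption, ~4M articles per the statistics slide):

```python
import math

# Sketch of the Wikipedia Link-based Measure (WLM) over toy inlink sets.
WP_SIZE = 4_000_000  # assumed total number of Wikipedia articles

def wlm_relatedness(inlinks_a, inlinks_b, wp_size=WP_SIZE):
    """1 minus the normalized link distance; 0.0 when no inlinks are shared."""
    a, b = set(inlinks_a), set(inlinks_b)
    common = a & b
    if not common:
        return 0.0
    distance = ((math.log(max(len(a), len(b))) - math.log(len(common)))
                / (math.log(wp_size) - math.log(min(len(a), len(b)))))
    return max(0.0, 1.0 - distance)

# Articles sharing most of their inlinks come out as highly related.
print(wlm_relatedness({1, 2, 3, 4}, {2, 3, 4, 5}))
```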

Learning to Link with Wikipedia [Milne & Witten 2008b]

- MD

- ...

- LG
  - machine learning
    - keyphraseness, average relatedness, sum of average weights

Learning to Link with Wikipedia [Milne & Witten 2008b]

- MD
  - machine learning
    - link probability, relatedness, confidence of LG, generality, frequency, location, spread

- LG
  - machine learning
    - keyphraseness, average relatedness, sum of average weights

Learning to Link with Wikipedia [Milne & Witten 2008b]

- Some heuristics
  - filter non-informative, non-ambiguous candidates (e.g., "the")
    - based on keyphraseness, i.e., link probability
  - filter non-central candidates
    - based on average relatedness to all other context senses

Context

Main intuition
- Leverage "context" as signal for disambiguation
  - query: history; session; interests; ...
  - phrase: sentence; paragraph; document; ...
- But also candidate entity context
  - e.g., most central entities in the candidate entity graph, relatedness, ...

Local versus global context
- "Global"
  - i.e., disambiguation of the candidate entity graph
  - NP-hard
- Optimization
  - reduce the search space to a "disambiguation context"
  - all plausible (reciprocal) disambiguations [Cucerzan 2007]
  - unambiguous surface forms, pair-wise comparisons, and/or averages [Milne & Witten 2008b]
  - hill-climbing, integer linear programs [Kulkarni et al. 2009]
  - hybrid + ML [Ratinov et al. 2011, Ferragina & Scaiella 2010]

Collective annotation of Wikipedia entities in web text [Kulkarni et al. 2009]

- Contribution
  - determine a collective score based on a trade-off between local compatibility and global topical coherence between candidate entities
  - use ILP or hill-climbing (ILP beats HC, but is slower)
- Also
  - new test collection (web pages), including NILs
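A minimal hill-climbing sketch of this trade-off; all mentions, candidates, and scores below are invented, and the objective is a bare sum rather than the paper's weighted formulation:

```python
import itertools
import random

# Collective disambiguation by greedy hill-climbing over a toy instance.

local = {  # compatibility(mention, candidate entity)
    "Jordan": {"Michael Jordan": 0.6, "Michael I. Jordan": 0.5},
    "Bulls":  {"Chicago Bulls": 0.7, "Bull": 0.3},
}
coherence = {  # relatedness(entity, entity); missing pairs score 0
    frozenset({"Michael Jordan", "Chicago Bulls"}): 0.9,
    frozenset({"Michael I. Jordan", "Chicago Bulls"}): 0.1,
}

def objective(assignment):
    """Collective score: local compatibilities plus pairwise coherence."""
    score = sum(local[m][e] for m, e in assignment.items())
    for e1, e2 in itertools.combinations(assignment.values(), 2):
        score += coherence.get(frozenset({e1, e2}), 0.0)
    return score

def hill_climb(rounds=10, seed=0):
    """Start from a random assignment; greedily re-pick one mention at a time."""
    rng = random.Random(seed)
    state = {m: rng.choice(sorted(cands)) for m, cands in local.items()}
    for _ in range(rounds):
        for m in local:
            state[m] = max(local[m], key=lambda e: objective({**state, m: e}))
    return state

print(hill_climb())
```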

Local and Global Algorithms for Disambiguation to Wikipedia [Ratinov et al. 2011]

- Main contribution, in steps – MD + DA
  1. use "local" approach (e.g., commonness) to generate a disambiguation context
  2. apply "global" machine learning approach on pairs
     - relatedness, PMI
     - {inlinks, outlinks} in various combinations (c and c')
     - {avg, max}
- (Apply another round of machine learning) – LG

TAGME: On-the-fly Annotation of Short Text Fragments [Ferragina & Scaiella 2010]

- MD

- keyphraseness [Mihalcea & Csomai 2007]

- LG

- use "local" approach to generate a disambiguation context, very similar to [Ratinov et al. 2011]
- heavy pruning: mentions; candidate links; coherence

- Accessible at http://tagme.di.unipi.it

Graph-based named entity linking with Wikipedia [Hachey et al. 2011]

- MD

- generate disambiguation context - based on unambiguous entity links

- edges defined by wikilinks (articles & categories) - max step size: 2 (articles), 3 (categories)

- LG

- use degree centrality and PageRank to reweigh cosine-based similarity scores

- Evaluation on TAC KBP
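The reweighting idea can be sketched with a small power-iteration PageRank over a toy candidate-entity graph; all names, edges, and similarity scores are invented, and 0.85 is the conventional damping factor:

```python
# PageRank over a toy wikilink graph among candidate entities, used to break a
# tie between two candidates with equal text similarity.

def pagerank(adj, damping=0.85, iters=50):
    nodes = list(adj)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        rank = {
            n: (1 - damping) / len(nodes)
               + damping * sum(rank[m] / len(adj[m]) for m in nodes if n in adj[m])
            for n in nodes
        }
    return rank

adj = {  # outgoing wikilinks restricted to entities mentioned in the document
    "Chicago Bulls": {"Michael Jordan"},
    "Space Jam": {"Michael Jordan"},
    "Michael Jordan": {"Chicago Bulls"},
    "Michael I. Jordan": set(),
}
cosine = {"Michael Jordan": 0.5, "Michael I. Jordan": 0.5}  # tied text similarity
pr = pagerank(adj)
best = max(cosine, key=lambda e: cosine[e] * pr[e])
print(best)
```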

Collective Entity Linking in Web Text: A Graph-Based Method [Han et al. 2011]

- Global approach, main contribution
  - random walk on the Referent Graph

Now we have two relations: the Compatible relation between name mention and entity and the Semantic-Related relation between entities. In this way, the interdependence between the EL decisions in a document can be best represented as a graph, which we refer to as Referent Graph. Concretely, a Referent Graph is a weighted graph G=(V, E), where the node set V contains all name mentions in a document and all the possible referent entities of these name mentions, with each node representing a name mention or an entity; each edge between a name mention and an entity represents a Compatible relation between them; each edge between two entities represents a Semantic-Related relation between them.

Figure 2. The Referent Graph of Example 1: the mentions Space Jam, Bulls, and Jordan are connected to candidate entities (Space Jam, Chicago Bulls, Bull, Michael Jordan, Michael I. Jordan, Michael B. Jordan) by weighted Compatible and Semantic-Related edges.

Image taken from Han et al. (2011). Collective Entity Linking in Web Text: A Graph-Based Method. In SIGIR '11.

Topic modeling methods

From Names to Entities using Thematic Context Distance [Pilz et al. 2011]

- Main contribution

- "extend" previous BOW approaches for disambiguation with LDA topics
- compare topic distributions of source document with candidate entities

Table 1: Topics for entities with name John Taylor (excerpt)

disambiguation term     | topic i | p(t_i)  | important words (titles) of the topic
South Carolina governor | 109     | 0.3805  | unit state, state senat, lieuten governor, hous repres, elect governor, ...
South Carolina governor | 120     | 0.2477  | north carolina, south carolina, unit state, west virginia, civil war, ...
athlete                 | 80      | 0.4190  | summer olymp, gold medal, world record, silver medal, world championship, ...
athlete                 | 135     | 0.1047  | unit state, rhode island, baltimor maryland, new hampshir, georg washington, ...
racing driver           | 129     | 0.7407  | grand prix, race driver, motor race, formula, race team, sport car, ...
jazz                    | 141     | 0.5781  | jazz musician, big band, new york, duke ellington, jazz band, ...
bass guitarist          | 18      | 0.2964  | rock band, solo album, play guitar, band member, rock roll, ...
bass guitarist          | 70      | 0.1594  | album releas, studio album, debut album, record label, music video, ...
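Comparing topic distributions can be sketched with Jensen-Shannon divergence, one reasonable distance choice (not necessarily the paper's exact one); the distributions below are invented, loosely echoing the John Taylor candidates of Table 1:

```python
import math

# Toy distributions over three LDA topics (roughly: governor, racing, jazz).

def kl(p, q):
    """Kullback-Leibler divergence, skipping zero-probability components of p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_distance(p, q):
    """Symmetrized, smoothed divergence between two topic distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

doc = [0.05, 0.85, 0.10]  # source document: mostly the "racing" topic
candidates = {            # topic mixture of each John Taylor candidate entity
    "John Taylor (governor)":      [0.90, 0.05, 0.05],
    "John Taylor (racing driver)": [0.05, 0.80, 0.15],
    "John Taylor (jazz)":          [0.10, 0.10, 0.80],
}
print(min(candidates, key=lambda e: js_distance(doc, candidates[e])))
```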

Recap - Essential ingredients
- MD
  - commonness
  - keyphraseness
- LG
  - commonness
  - machine learning
- DA
  - relatedness
  - machine learning
  - topic modeling
  - graph-based methods

Outline - Part I – Entity Linking
- introduction
- methods
- evaluation
- test collections
- toolkits
- open challenges

Evaluation

Entity linking evaluation

- Ingredients
  - knowledge base
  - document(s)
  - gold-standard annotations
  - evaluation metrics

Knowledge bases

DBpedia - Extract structured information from Wikipedia
- infoboxes, categories, and more
- crowd-sourced community effort

- Open source
  - written in Scala, Java, and VSP (Virtuoso Universal Server)

- See http://dbpedia.org/About

Freebase
- Initially seeded from high-quality open data
  - now composed mainly by community
  - harvested from many sources: Wikipedia, MusicBrainz, and others
- Acquired by Google in 2010 (GKG)
- See http://www.freebase.com/

Wikipedia vs Freebase
- Freebase 5x larger than Wikipedia (in terms of the number of entities)
- Geared towards entertainment
- For 85% of Freebase entities there's no text...
  - but there are Wikipedia-Freebase links (for some entities)
  - initial work trying to ameliorate this problem [Zheng et al. 2012]

YAGO
- Accuracy manually evaluated
  - confirmed accuracy of 95%
  - each relation is annotated with its confidence value
- Anchored in time and space
- Thematic domains (e.g., "music" or "science")
- Includes WordNet
- See http://www.mpi-inf.mpg.de/yago-naga/yago/

Sense of Scale
- YAGO: 10 million entities and 120 million facts
- Freebase: 37 million topics, 1,998 types, and more than 30,000 properties
- DBpedia: 3.77 million things
  - 2.35 million classified in the ontology, including:
    - 764,000 persons, 573,000 places
    - 333,000 creative works, 192,000 organizations
    - 202,000 species and 5,500 diseases
  - 111 languages, together 20.8 million things

Evaluation metrics

Evaluation metrics - What is the task?

Figure 4 (Milne and Witten (2008b), Learning to Link with Wikipedia, CIKM '08): associating document phrases with appropriate Wikipedia articles.

Evaluation metrics

- Compounded problem
  - tagging/spanning – MD
  - entity linking – LG/DA

Evaluation metrics
- Set-based (similar to WSD)
  - "How many correct links were retrieved?"
  - precision, recall, F-measure
- Rank-based
  - "Were the correct link(s) retrieved with a high score?"
  - precision@k, recall@k, P@1, R-prec, MRR, MAP, etc.
- macro/micro
  - per anchor phrase
  - per tweet, query, sentence, document
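The micro/macro distinction for set-based scores can be sketched as follows; the gold and predicted link sets are toy examples:

```python
# Micro pools all (document, link) decisions; macro averages per-document
# scores. With toy data the two F1 values come out different.

def prf(gold, pred):
    """Precision, recall, F1 for one set of gold vs. predicted links."""
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

docs = [  # (gold links, predicted links) per document
    ({"Chicago Bulls", "Michael Jordan", "Space Jam"}, {"Michael Jordan"}),
    ({"Space Jam"}, {"Space Jam", "Bull"}),
]

gold_all = {(i, e) for i, (g, _) in enumerate(docs) for e in g}
pred_all = {(i, e) for i, (_, s) in enumerate(docs) for e in s}
micro_f = prf(gold_all, pred_all)[2]
macro_f = sum(prf(g, s)[2] for g, s in docs) / len(docs)
print(round(micro_f, 3), round(macro_f, 3))
```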

Test collections

Gold-standard annotations

- Human annotators, labeling or judging links from input “documents” to entities in the KB

Entity linking test collections - Wikipedia

- MSNBC

- AQUAINT

- ACE

- Twitter

- AIDA (CoNLL)

- IITB (web data)

- INEX link-the-wiki

- TREC knowledge base acceleration (KBA)

- TAC knowledge base population (KBP)

- Yahoo Webscope: web search queries (in sessions)

Wikipedia (for evaluation) - Widely used

- Pros

- cheap and easy; the links are already provided

- Cons

- biased (style guides!) - specific scenario - unbalanced

TAC [McNamee et al. 2010]
- Target: KB from Wikipedia (~800k instances)

- infoboxes; article text; type

- “Query”

- document ID (news, web, blog)
- mention string (occurring at least once in that doc)

- Focus on ambiguous mentions

- collected by cherry-picking ‘interesting’ mentions, rather than systematically annotating all mentions

- Explicit NILs (> 50% of the queries)


Evaluation - recap - Even with so many test collections to choose from, there’s still quite some variation

- People create their own “extracts” from WP

- Same method, same test collection, but different results in different papers

- tokenization, normalization, ...

- We need meta-evaluations...

Meta-evaluations

- [Hachey et al. 2013] - [Cornolti et al. 2013]

Evaluating Entity Linking with Wikipedia [Hachey et al. 2013]

- Named entity linking, a.k.a., “NEL”

- include NILs
- Wikipedia articles are not always named entities

- Explicit focus on separating “search” (LG) and “disambiguation” (DA)

- Reimplement and evaluate three NEL systems
  - [Bunescu & Paşca 2006]
  - [Cucerzan 2007]
  - [Varma et al. 2009] (TAC system paper)

A Framework for Benchmarking Entity-Annotation Systems [Cornolti et al. 2013]

- Compare five publicly available entity linkers
  - [Hoffart et al. 2011] (AIDA)
  - [Ratinov et al. 2011]
  - [Ferragina & Scaiella 2010] (TAGME)
  - [Milne & Witten 2008] (wikipedia-miner)
  - DBpedia Spotlight
- And also investigate parameter/cut-off settings

See http://acube.di.unipi.it/bat-framework/

A Framework for Benchmarking Entity-Annotation Systems [Cornolti et al. 2013]

- On five publicly available test collections
  - AIDA [Hoffart et al. 2011]
    - based on CoNLL 2003: noun annotations
    - 1393 Reuters newswire articles
    - hand-annotated all nouns with entities in YAGO2
  - AQUAINT [Milne & Witten 2008]
  - MSNBC [Cucerzan 2007]
  - IITB [Kulkarni et al. 2009] (web data)
  - Twitter [Meij et al. 2012]

A Framework for Benchmarking Entity-Annotation Systems [Cornolti et al. 2013]

- Benchmarking framework

- Introduces “fuzzy” evaluation measures

- Main findings

- different systems perform well in different scenarios - AIDA and TagMe seem to be the winners overall

DIY Entity Linking – footnotes - ClueWeb annotated with Freebase (FACC1)

- wiki-links


From Words to Concepts and Back: Dictionaries for Linking Text, Entities and Ideas

- Many surface forms, in many languages, map to a single concept (word-cloud figure):
  - concept "soccer" (Association football): football, Football, Soccer, soccer, fútbol, Fútbol, futbol, Futbol, voetbal, voetbalclub, footballeur, footballer, piłkarz, Fußball, Fußballspieler, bóng đá, sepak bola, futebol, futebolista, футболист, كرة القدم, כדורגל, fotbal, ฟุตบอล, ...
  - concept "American football": American football, football americano, fútbol americano, Amerikan futbolu, football américain, Le Football Américain, futbolu amerykańskiego, американского футбола, Amerikai futball, američki fudbal, ...

Outline - Entity Linking

- introduction
- methods
- evaluation
- test collections
- toolkits
- open challenges

Toolkits

Public Toolkits and Web Services for Entity Linking
- Wikipedia Miner
- TagMe
- DBpedia Spotlight
- Illinois Wikifier
- AIDA
- (OpenCalais)

Service           | Programming Language | Availability         | Available Languages | Open Source
Wikipedia Miner   | Java                 | Web API, Application | any WP              | yes
TagMe             | Java                 | Web API              | EN, IT              | no
DBpedia Spotlight | Java                 | Web API, Application | EN + any WP         | yes
Illinois Wikifier | Java                 | Application          | EN                  | yes
AIDA              | Java                 | Web API              | EN                  | yes
OpenCalais        | ?                    | Web API              | EN, FR, SP          | no

Service           | Matching | Target KB | Context             | Comment
Wikipedia Miner   | Lexical  | Wikipedia | ML on Relatedness   |
TagMe             | Lexical  | Wikipedia | Vote on Relatedness | Focus on short texts
DBpedia Spotlight | Lexical? | DBpedia   | Cosine Similarity   | Structure
Illinois Wikifier | NER      | Wikipedia | Global Coherence    |
AIDA              | NER      | YAGO2     | Multiple            | Structure
OpenCalais        | ?        | Calais    | ?                   |

Open challenges

Open challenges
- Difficulty prediction
  - similar to ambiguity, but not the same
  - dependent on context, candidate links, ...
- Multi/cross-lingual entity linking
  - [Wang et al. 2013]
  - CrossLink-2 (NTCIR-9), CJK-EN [Tang et al. 2013]
  - TAC ...
- Cross-KB entity linking ("Freebase")
  - directly? use Wikipedia as pivot?

Open challenges
- Generic test collections
  - What's the task? User model? Evaluation?
  - TAC? set-based? ranking? known-item finding? top-k?
  - exhaustive linking? first mention only?
  - "aboutness"
- Moving beyond entities
  - events/news, concepts, relations
- Moving beyond "ad hoc" entity linking
  - incorporate contextual evidence in the task (and evaluation)
  - {users, history, profile, social, trending, ...}

Follow-up reading
- Detecting unlinkable entities [Lin et al. 2012a]
- Linking entities to any database [Sil et al. 2012]
- Hyperlinking for multimedia data [Eskevich et al. 2013]
- Automatically generating Wikipedia articles [Sauper & Barzilay 2009]
- Scaling up to the web [Lin et al. 2012b]
- Serendipitous suggestions based on personalized entity links [Bordino et al. 2013]
- Actionable entities/queries [Lin et al. 2012]

Outline - Entity Linking

- introduction
- methods
- evaluation
- test collections
- toolkits
- open challenges

References – Entity linking

http://www.mendeley.com/groups/3339761/entity-linking-and-retrieval-tutorial/papers/added/0/tag/entity+linking/

20140615 Entity Linking and Retrieval for Semantic Search ... - GitHub

106, no. 50, pp. 21 068–21 073, 2009. [23] G. Salton, A. Wong, and C. S. Yang, “A vector space model for automatic indexing,” Communications of the ACM, vol. 18, no. 11, pp. 613–620, 1975. [24] P. McNamee and J. Mayfield, “Character п-gram