Introduction to Information Retrieval
Boolean retrieval
Unstructured data in 1680
Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia?
One could grep all of Shakespeare’s plays for Brutus and Caesar, then strip out lines containing Calpurnia.
Why is that not the answer?
Slow (for large corpora)
NOT Calpurnia is non-trivial
Other operations (e.g., find the word Romans near countrymen) not feasible
Ranked retrieval (best documents to return) is not supported; more in later lectures
Term-document incidence
           Antony and Cleopatra  Julius Caesar  The Tempest  Hamlet  Othello  Macbeth
Antony                        1              1            0       0        0        1
Brutus                        1              1            0       1        0        0
Caesar                        1              1            0       1        1        1
Calpurnia                     0              1            0       0        0        0
Cleopatra                     1              0            0       0        0        0
mercy                         1              0            1       1        1        1
worser                        1              0            1       1        1        0

Entry is 1 if the play contains the word, 0 otherwise.
Query: Brutus AND Caesar BUT NOT Calpurnia.
Incidence vectors
So we have a 0/1 vector for each term.
To answer the query: take the vectors for Brutus, Caesar, and Calpurnia (complemented), then bitwise AND them:
110100 AND 110111 AND 101111 = 100100.
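A minimal Python sketch of this computation, with the incidence vectors read off the rows of the table above (leftmost bit = Antony and Cleopatra):

```python
plays = ["Antony and Cleopatra", "Julius Caesar", "The Tempest",
         "Hamlet", "Othello", "Macbeth"]

# 0b literals mirror the 0/1 rows of the incidence matrix above.
incidence = {
    "Brutus":    0b110100,
    "Caesar":    0b110111,
    "Calpurnia": 0b010000,
}

mask = (1 << len(plays)) - 1  # 0b111111; keeps the complement to 6 bits

# Brutus AND Caesar AND NOT Calpurnia
result = (incidence["Brutus"]
          & incidence["Caesar"]
          & (~incidence["Calpurnia"] & mask))
print(bin(result))  # 0b100100

# Decode the answer bits back to play names.
answers = [play for i, play in enumerate(plays)
           if result >> (len(plays) - 1 - i) & 1]
print(answers)  # ['Antony and Cleopatra', 'Hamlet']
```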
Answers to query
Antony and Cleopatra, Act III, Scene ii
Agrippa [Aside to DOMITIUS ENOBARBUS]: Why, Enobarbus,
When Antony found Julius Caesar dead,
He cried almost to roaring; and he wept
When at Philippi he found Brutus slain.
Hamlet, Act III, Scene ii
Lord Polonius: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.
Basic assumptions of Information Retrieval
Collection: Fixed set of documents
Goal: Retrieve documents with information that is
relevant to the user’s information need and helps the
user complete a task
The classic search model
Task:        Get rid of mice in a politically correct way
                 ↓  (misconception?)
Info need:   Info about removing mice without killing them
                 ↓  (mistranslation?)
Verbal form: How do I trap mice alive?
                 ↓  (misformulation?)
Query:       mouse trap
The query goes to the search engine, which matches it against the corpus and returns results; examining the results leads to query refinement.
How good are the retrieved docs?
Precision: fraction of retrieved docs that are relevant to the user’s information need
Recall: fraction of relevant docs in the collection that are retrieved
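In symbols (a standard set formulation):

$$\text{Precision} = \frac{|\,\text{relevant} \cap \text{retrieved}\,|}{|\,\text{retrieved}\,|}, \qquad \text{Recall} = \frac{|\,\text{relevant} \cap \text{retrieved}\,|}{|\,\text{relevant}\,|}$$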
More precise definitions and measurements to
follow in later lectures
Bigger collections
Consider N = 1 million documents, each with about 1000 words.
At an average of 6 bytes/word (including spaces/punctuation), that is 10^6 docs × 10^3 words × 6 bytes ≈ 6GB of data in the documents.
Say there are M = 500K distinct terms among these.
Can’t build the matrix
A 500K x 1M matrix has half-a-trillion 0’s and 1’s.
But it has no more than one billion 1’s: each of the 1M documents contains at most ~1000 distinct terms, so at most 10^9 entries can be 1.
The matrix is extremely sparse.
What’s a better representation?
We only record the 1 positions.
Inverted index
For each term t, we must store a list of all documents
that contain t.
Identify each doc by a docID, a document serial number.
Can we use fixed-size arrays for this?

Brutus    → 1  2  4  11  31  45  173  174
Caesar    → 1  2  4  5   6   16  57   132
Calpurnia → 2  31  54  101

What happens if the word Caesar is added to document 14?
Inverted index
We need variable-size postings lists:
On disk, a continuous run of postings is normal and best.
In memory, can use linked lists or variable length arrays: some tradeoffs in size/ease of insertion.
Each docID in a postings list is called a posting.

Brutus    → 1  2  4  11  31  45  173  174
Caesar    → 1  2  4  5   6   16  57   132
Calpurnia → 2  31  54  101

The dictionary (terms) is on the left; the postings lists are on the right, sorted by docID (more later on why).
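An in-memory sketch of this representation (the dict-of-lists layout is an illustrative assumption, not prescribed by the slide): each term maps to a variable-length, docID-sorted list, and insertion keeps the order.

```python
import bisect

# Dictionary -> postings, using the lists shown above.
index = {
    "Brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
    "Caesar":    [1, 2, 4, 5, 6, 16, 57, 132],
    "Calpurnia": [2, 31, 54, 101],
}

def add_posting(index, term, doc_id):
    """Insert doc_id into term's postings list, keeping it sorted."""
    postings = index.setdefault(term, [])
    i = bisect.bisect_left(postings, doc_id)
    if i == len(postings) or postings[i] != doc_id:  # skip duplicates
        postings.insert(i, doc_id)

# The question from the previous slide: Caesar appears in document 14.
add_posting(index, "Caesar", 14)
print(index["Caesar"])  # [1, 2, 4, 5, 6, 14, 16, 57, 132]
```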
Inverted index construction
Documents to be indexed:   Friends, Romans, countrymen.
        ↓  Tokenizer
Token stream:              Friends  Romans  Countrymen
        ↓  Linguistic modules
Modified tokens:           friend  roman  countryman
        ↓  Indexer
Inverted index:            friend     → 2  4
                           roman      → 1  2
                           countryman → 13  16
Indexer steps: Token sequence
Sequence of (Modified token, Document ID) pairs.
Doc 1: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.
Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.
Indexer steps: Sort
Sort by terms, and then by docID.
This is the core indexing step.
Indexer steps: Dictionary & Postings
Multiple term entries in a single document are merged.
Split into Dictionary and Postings.
Document frequency information is added.
Why frequency? Will discuss later.
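Putting the three indexer steps together, a minimal sketch (the whitespace tokenizer and lowercasing stand in for the real tokenizer and linguistic modules):

```python
def build_index(docs):
    # Step 1: sequence of (modified token, docID) pairs.
    pairs = []
    for doc_id, text in docs.items():
        for token in text.lower().split():
            term = token.strip(".,;:'")
            if term:
                pairs.append((term, doc_id))

    # Step 2: sort by term, then docID (the core indexing step).
    pairs.sort()

    # Step 3: merge same-document entries; record document frequency.
    index = {}  # term -> [doc_freq, postings]
    for term, doc_id in pairs:
        entry = index.setdefault(term, [0, []])
        if not entry[1] or entry[1][-1] != doc_id:
            entry[0] += 1
            entry[1].append(doc_id)
    return index

docs = {
    1: "I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.",
    2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.",
}
print(build_index(docs)["brutus"])  # [2, [1, 2]]
```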
Where do we pay in storage?
We pay for the postings (lists of docIDs), and for the dictionary (terms and counts, plus pointers to the postings lists).
The index we just built
How do we process a query with it? (Today’s focus.)
Later: what kinds of queries can we process?
Query processing: AND
Consider processing the query:
Brutus AND Caesar
Locate Brutus in the Dictionary;
Retrieve its postings.
Locate Caesar in the Dictionary;
Retrieve its postings.
“Merge” the two postings:
Brutus → 2  4  8  16  32  64  128
Caesar → 1  2  3  5   8   13  21  34
The merge
Walk through the two postings simultaneously, in
time linear in the total number of postings entries
Brutus       → 2  4  8  16  32  64  128
Caesar       → 1  2  3  5   8   13  21  34
Intersection → 2  8
If the list lengths are x and y, the merge takes O(x+y)
operations.
Crucial: postings sorted by docID.
Intersecting two postings lists
(a “merge” algorithm)
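The textbook presents this as INTERSECT(p1, p2) pseudocode; a Python rendering of the same two-pointer walk:

```python
def intersect(p1, p2):
    """Intersect two docID-sorted postings lists in O(x + y) time."""
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:          # docID in both lists: keep it
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:         # advance the pointer that lags
            i += 1
        else:
            j += 1
    return answer

print(intersect([2, 4, 8, 16, 32, 64, 128],
                [1, 2, 3, 5, 8, 13, 21, 34]))  # [2, 8]
```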
Boolean queries: Exact match
In the Boolean retrieval model we are able to ask any query that is a Boolean expression:
Boolean queries are queries that use AND, OR and NOT to join query terms.
Views each document as a set of words
Is precise: document matches condition or not.
Perhaps the simplest model to build an IR system on
Primary commercial retrieval tool for 3 decades.
Many search systems you still use are Boolean:
Email, library catalog, Mac OS X Spotlight
Example: WestLaw http://www.westlaw.com/
Largest commercial (paying subscribers) legal search service (started 1975; ranking added 1992)
Tens of terabytes of data; 700,000 users
Majority of users still use boolean queries
Example query:
  What is the statute of limitations in cases involving the federal tort claims act?
  LIMIT! /3 STATUTE ACTION /S FEDERAL /2 TORT /3 CLAIM
  (/3 = within 3 words, /S = in same sentence)
Example: WestLaw http://www.westlaw.com/
Another example query:
  Requirements for disabled people to be able to access a workplace
  disabl! /p access! /s work-site work-place (employment /3 place)
Note that SPACE is disjunction, not conjunction!
Long, precise queries; proximity operators;
incrementally developed; not like web search
Many professional searchers still like Boolean search
You know exactly what you are getting
But that doesn’t mean it actually works better….
Query optimization (Sec. 1.3)
What is the best order for query processing?
Consider a query that is an AND of n terms.
For each of the n terms, get its postings, then
AND them together.
Brutus    → 2  4  8  16  32  64  128
Caesar    → 1  2  3  5   8   16  21  34
Calpurnia → 13  16

Query: Brutus AND Calpurnia AND Caesar
Query optimization example (Sec. 1.3)
Process in order of increasing freq:
start with smallest set, then keep cutting further.
This is why we kept document frequency in the dictionary.

Brutus    → 2  4  8  16  32  64  128
Caesar    → 1  2  3  5   8   16  21  34
Calpurnia → 13  16

Execute the query as (Calpurnia AND Brutus) AND Caesar.
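A sketch of frequency-ordered AND processing, reusing intersect() from the merge sketch above (here document frequency is simply the postings-list length):

```python
def intersect_many(index, terms):
    # Start with the shortest (rarest) postings list, then keep
    # cutting the intermediate result.
    postings = sorted((index[t] for t in terms), key=len)
    result = postings[0]
    for p in postings[1:]:
        result = intersect(result, p)
        if not result:   # early exit: the AND can only shrink
            break
    return result

index = {
    "Brutus":    [2, 4, 8, 16, 32, 64, 128],
    "Caesar":    [1, 2, 3, 5, 8, 16, 21, 34],
    "Calpurnia": [13, 16],
}
# Effectively executes (Calpurnia AND Brutus) AND Caesar.
print(intersect_many(index, ["Brutus", "Calpurnia", "Caesar"]))  # [16]
```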
More general optimization (Sec. 1.3)
e.g., (madding OR crowd) AND (ignoble OR strife)
Get doc. freq.’s for all terms.
Estimate the size of each OR by the sum of its doc. freq.’s (a conservative upper bound).
Process in increasing order of OR sizes.
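A small sketch of that estimate (the document frequencies here are hypothetical, purely for illustration):

```python
def or_size_estimate(doc_freq, disjuncts):
    """Upper bound on an OR's result size: sum of its terms' document
    frequencies (conservative, since overlapping docs count twice)."""
    return sum(doc_freq[t] for t in disjuncts)

# Hypothetical doc. freq.'s, for illustration only.
doc_freq = {"madding": 10, "crowd": 5000, "ignoble": 20, "strife": 300}
query = [["madding", "crowd"], ["ignoble", "strife"]]
query.sort(key=lambda group: or_size_estimate(doc_freq, group))
print(query)  # [['ignoble', 'strife'], ['madding', 'crowd']]
```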
Exercise
Recommend a query processing order for:
  (tangerine OR trees) AND (marmalade OR skies) AND (kaleidoscope OR eyes)

Term          Freq
eyes          213312
kaleidoscope   87009
marmalade     107913
skies         271658
tangerine      46653
trees         316812
Query processing exercises
Exercise: If the query is friends AND romans AND
(NOT countrymen), how could we use the freq of
countrymen?
Exercise: Extend the merge to an arbitrary Boolean
query. Can we always guarantee execution in time
linear in the total postings size?
Hint: Begin with the case of a Boolean formula query:
in this, each query term appears only once in the
query.
What’s ahead in IR?
Beyond term search
What about phrases, e.g., “Stanford University”?
Proximity: Find Gates NEAR Microsoft.
Need index to capture position information in docs.
Zones in documents: Find documents with (author =
Ullman) AND (text contains automata).
Evidence accumulation
1 vs. 0 occurrence of a search term
2 vs. 1 occurrence
3 vs. 2 occurrences, etc.
Usually more seems better
Need term frequency information in docs
Ranking search results
Boolean queries give inclusion or exclusion of docs.
Often we want to rank/group results
Need to measure proximity from query to each doc.
Need to decide whether docs presented to user are
singletons, or a group of docs covering various aspects of
the query.
IR vs. databases:
Structured vs unstructured data
Structured data tends to refer to information in
“tables”
Employee   Manager   Salary
Smith      Jones      50000
Chang      Smith      60000
Ivy        Smith      50000
Typically allows numerical range and exact match
(for text) queries, e.g.,
Salary < 60000 AND Manager = Smith.
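For contrast with the postings merges above, here is that structured query run as a toy Python filter over the table rows (a sketch; a real database would use SQL and indexes):

```python
# The table above, as (employee, manager, salary) tuples.
employees = [
    ("Smith", "Jones", 50000),
    ("Chang", "Smith", 60000),
    ("Ivy",   "Smith", 50000),
]

# Salary < 60000 AND Manager = Smith: a numeric range plus exact match.
hits = [(emp, mgr, sal) for emp, mgr, sal in employees
        if sal < 60000 and mgr == "Smith"]
print(hits)  # [('Ivy', 'Smith', 50000)]
```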
Unstructured data
Typically refers to free text
Allows
Keyword queries including operators
More sophisticated “concept” queries e.g.,
find all web pages dealing with drug abuse
Classic model for searching text documents
Semi-structured data
In fact almost no data is “unstructured”
E.g., this slide has distinctly identified zones such as
the Title and Bullets
Facilitates “semi-structured” search such as
Title contains data AND Bullets contain search
… to say nothing of linguistic structure
More sophisticated semi-structured search
Title is about Object Oriented Programming AND
Author something like stro*rup
where * is the wild-card operator
Issues:
how do you process “about”?
how do you rank results?
Clustering, classification and ranking
Clustering: Given a set of docs, group them into
clusters based on their contents.
Classification: Given a set of topics, plus a new doc D,
decide which topic(s) D belongs to.
Ranking: Can we learn how to best order a set of documents, e.g., a set of search results?
The web and its challenges
Unusual and diverse documents
Unusual and diverse users, queries, information
needs
Beyond terms, exploit ideas from social networks
link analysis, clickstreams ...
How do search engines work? And how can we
make them better?
More sophisticated information retrieval
Cross-language information retrieval
Question answering
Summarization
Text mining
…