Lecture 2 - Boolean Retrieval
Vasily Sidorov
Information Retrieval
• Information Retrieval (IR) is finding material (usually
documents) of an unstructured nature (usually text)
that satisfies an information need from within large
collections (usually stored on computers)
Related Definitions
– Information need: The topic about which the user
desires to know more
– Query: What the user conveys to the computer in
an attempt to communicate the information need
– Relevant document: a document that the user
perceives as containing information of value with
respect to the information need
Unstructured (text) vs. structured (database)
data in 1996
[Bar chart: data volume and market cap of unstructured vs. structured data in 1996]
Unstructured (text) vs. structured (database)
data in 2006
[Bar chart: data volume and market cap of unstructured vs. structured data in 2006]
Boolean Retrieval
• The Boolean model is arguably the simplest
model to base an information retrieval system
on
• Queries are Boolean expressions
– Example: Brutus AND Caesar
• The search engine returns all documents that
satisfy the Boolean expression
– the result is simply the set of matching documents, with no ranking
Term-document incidence matrix (1 = the play contains the word):

             Antony and Cleopatra  Julius Caesar  The Tempest  Hamlet  Othello  Macbeth
Antony                1                  1             0          0       0        1
Brutus                1                  1             0          1       0        0
Caesar                1                  1             0          1       1        1
Calpurnia             0                  1             0          0       0        0
Cleopatra             1                  0             0          0       0        0
mercy                 1                  0             1          1       1        1
worser                1                  0             1          1       1        0
Incidence vectors
• So we have a 0/1 vector for each term.
• To answer the query Brutus AND Caesar AND NOT Calpurnia:
take the vectors for Brutus, Caesar and Calpurnia
(complemented!) and bitwise AND them.
– 110100 AND 110111 AND 101111 = 100100
– The 1s in the result correspond to Antony and Cleopatra and Hamlet.
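To make the bitwise AND concrete, here is a small Python sketch (not from the lecture; the toy matrix is the one shown above) that answers Brutus AND Caesar AND NOT Calpurnia over incidence vectors:

```python
# A minimal sketch of Boolean retrieval over term incidence vectors,
# using the toy Shakespeare matrix from the slide above.
plays = ["Antony and Cleopatra", "Julius Caesar", "The Tempest",
         "Hamlet", "Othello", "Macbeth"]

# 0/1 incidence vector per term, encoded as an int (one bit per play, MSB = first play).
incidence = {
    "Brutus":    0b110100,
    "Caesar":    0b110111,
    "Calpurnia": 0b010000,
}

# Brutus AND Caesar AND NOT Calpurnia: bitwise AND with the complemented vector.
mask = (1 << len(plays)) - 1                 # keep only the 6 play bits
result = (incidence["Brutus"] & incidence["Caesar"]
          & (~incidence["Calpurnia"] & mask))

# Decode the answer vector back into play titles.
answer = [plays[i] for i in range(len(plays))
          if (result >> (len(plays) - 1 - i)) & 1]
print(f"{result:06b}", answer)   # 100100 ['Antony and Cleopatra', 'Hamlet']
```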
Bigger collections
• Consider N = 1 million documents, each with
about 1000 words.
• Avg 6 bytes/word including spaces and
punctuation
– 6 GB of data in the documents overall
• Say there are M = 500K distinct terms among
these.
• A 500K × 1M term-document matrix would have half a
trillion entries and is far too large to build, and it would be
extremely sparse (no more than one billion 1s).
• A better representation: the inverted index!
Inverted index
• For each term t, we must store a list of all documents
that contain t.
– Identify each doc by a docID, a document serial number
• Can we use fixed-size arrays for this?
Inverted index
• We need variable-size postings lists
– On disk, a contiguous run of postings is normal and best
– In memory, can use linked lists or variable-length arrays
• Some tradeoffs in size/ease of insertion
• The index has two parts: a dictionary of terms and, for each term, its postings list
– Postings are sorted by docID (more later on why)
Inverted index construction
• Pipeline: documents to be indexed ("Friends, Romans, countrymen.")
→ Tokenizer → Linguistic modules → Indexer
• Resulting inverted index (term → postings):
– friend → 2, 4
– roman → 1, 2
– countryman → 13, 16
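As an illustration of this pipeline, here is a minimal Python sketch of index construction. The two-document collection is invented for the example, tokenization is crude (lowercase letter runs only), and the linguistic modules (stemming etc.) are omitted:

```python
import re
from collections import defaultdict

def build_inverted_index(docs):
    """Build term -> sorted list of docIDs from a {docID: text} collection."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in re.findall(r"[a-z]+", text.lower()):
            index[token].add(doc_id)
    # Keep postings sorted by docID so they can be merged linearly later.
    return {term: sorted(ids) for term, ids in index.items()}

# Hypothetical two-document collection (docIDs 1 and 2).
docs = {1: "Friends, Romans, countrymen, lend me your ears.",
        2: "So let it be with Caesar."}
index = build_inverted_index(docs)
print(index["romans"])   # [1]  (no stemming here, so the term is "romans", not "roman")
```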
Initial stages of text processing
• Tokenization
– Cut character sequence into word tokens
• Deal with “John’s”, a state-of-the-art solution
• Normalization
– Map text and query terms to the same form
• You want U.S.A. and USA to match
• Stemming
– We may wish different forms of a root to match
• authorize, authorization
• Stop words
– We may omit very common words (or not)
• the, a, to, of
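A toy Python sketch of these steps; the stop list, the acronym handling, and the single suffix rule are illustrative only and far cruder than a real tokenizer or Porter stemmer:

```python
# Illustrative normalization pipeline: lowercase, strip periods ("U.S.A." -> "usa"),
# drop possessives ("John's" -> "john"), one toy stemming rule, and a tiny stop list.
STOP_WORDS = {"the", "a", "an", "to", "of"}

def normalize(token):
    token = token.lower()
    token = token.replace(".", "")            # U.S.A. and USA map to the same form
    if token.endswith("'s") or token.endswith("’s"):
        token = token[:-2]                    # crude handling of "John's"
    if token.endswith("ation"):               # toy stemming: authorization -> authorize
        token = token[:-5] + "e"
    return token

def tokenize(text):
    tokens = [normalize(t) for t in text.split()]
    return [t for t in tokens if t and t not in STOP_WORDS]

print(tokenize("The U.S.A. requires authorization"))   # ['usa', 'requires', 'authorize']
```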
[Index figure: the dictionary holds the terms and their counts; each term points to its postings, a list of docIDs]
Later in the course
• How do we index efficiently?
• How much storage do we need?
The merge
• Consider processing the query Brutus AND Caesar
• Walk through the two postings lists
simultaneously, in time linear in the total
number of postings entries
– Brutus: 2 → 4 → 8 → 16 → 32 → 64 → 128
– Caesar: 1 → 2 → 3 → 5 → 8 → 13 → 21 → 34
– Intersection: 2 → 8
• Crucially, this works only because the postings are sorted by docID
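A Python rendering of this merge, using the two postings lists above (the textbook's INTERSECT pseudocode walks linked lists, but the logic is the same):

```python
def intersect(p1, p2):
    """Intersect two postings lists sorted by docID in O(len(p1) + len(p2)) time."""
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:          # docID in both lists: part of the result
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:         # advance the pointer on the smaller docID
            i += 1
        else:
            j += 1
    return answer

brutus = [2, 4, 8, 16, 32, 64, 128]
caesar = [1, 2, 3, 5, 8, 13, 21, 34]
print(intersect(brutus, caesar))    # [2, 8]
```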
Boolean queries:
More general merges
• Exercise: Adapt the merge for the queries:
Brutus AND NOT Caesar
Brutus OR NOT Caesar
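One possible answer for the first query, sketched in Python (Brutus AND NOT Caesar stays linear in the total postings size; Brutus OR NOT Caesar is the awkward one, since NOT Caesar by itself can match almost the entire collection):

```python
def and_not(p1, p2):
    """p1 AND NOT p2 over postings lists sorted by docID: keep docIDs of p1
    that do not occur in p2, in time linear in the total number of postings."""
    answer = []
    i = j = 0
    while i < len(p1):
        if j == len(p2) or p1[i] < p2[j]:   # p1[i] cannot be in p2
            answer.append(p1[i])
            i += 1
        elif p1[i] == p2[j]:                # present in both: skip it
            i += 1
            j += 1
        else:                               # p1[i] > p2[j]: advance in p2
            j += 1
    return answer

print(and_not([2, 4, 8, 16, 32, 64, 128], [1, 2, 3, 5, 8, 13, 21, 34]))
# [4, 16, 32, 64, 128]
```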
Merging
What about an arbitrary Boolean formula?
(Brutus OR Caesar) AND NOT
(Antony OR Cleopatra)
• Can we always merge in “linear” time?
– Linear in what?
• Can we do better?
Query optimization
• What is the best order for processing a query that is an
AND of several terms, e.g., Brutus AND Caesar AND Calpurnia?
– Brutus: 2 → 4 → 8 → 16 → 32 → 64 → 128
– Caesar: 1 → 2 → 3 → 5 → 8 → 16 → 21 → 34
– Calpurnia: 13 → 16
• Heuristic: process terms in order of increasing document
frequency, starting with the shortest postings list, so that
intermediate results stay small
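A sketch of the heuristic in Python on the toy postings above. For brevity the pairwise step uses a set lookup; in practice you would reuse the linear intersect() from the merge sketch earlier:

```python
def intersect_many(postings_lists):
    """AND together several postings lists (each sorted by docID), shortest first."""
    ordered = sorted(postings_lists, key=len)          # increasing document frequency
    result = ordered[0]
    for postings in ordered[1:]:
        members = set(postings)
        result = [d for d in result if d in members]   # stays sorted by docID
        if not result:                                 # empty intermediate result:
            break                                      # no need to look further
    return result

brutus    = [2, 4, 8, 16, 32, 64, 128]
caesar    = [1, 2, 3, 5, 8, 16, 21, 34]
calpurnia = [13, 16]
print(intersect_many([brutus, caesar, calpurnia]))     # [16]
```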
Exercise
• Recommend a query processing order for
(tangerine OR trees) AND (marmalade OR skies) AND (kaleidoscope OR eyes)
given these term frequencies:

Term           Freq
eyes           213,312
kaleidoscope    87,009
marmalade      107,913
skies          271,658
tangerine       46,653
trees          316,812
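A sketch of one common answer: estimate each OR-group by the sum of its term frequencies (a conservative upper bound on the size of the union) and process the groups from the smallest estimate to the largest:

```python
# Term frequencies from the table above.
freq = {"eyes": 213_312, "kaleidoscope": 87_009, "marmalade": 107_913,
        "skies": 271_658, "tangerine": 46_653, "trees": 316_812}

groups = [("tangerine", "trees"), ("marmalade", "skies"), ("kaleidoscope", "eyes")]
# Upper bound on |A OR B| is freq(A) + freq(B); process the smallest bound first.
for group in sorted(groups, key=lambda g: sum(freq[t] for t in g)):
    print(group, sum(freq[t] for t in group))
# ('kaleidoscope', 'eyes') 300321
# ('tangerine', 'trees') 363465
# ('marmalade', 'skies') 379571
```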
Query processing exercises
• Exercise: If the query is friends AND romans AND
(NOT countrymen), how could we use the freq of
countrymen?
• Exercise: Extend the merge to an arbitrary
Boolean query. Can we always guarantee
execution in time linear in the total postings size?
• Hint: Begin with the case of a Boolean formula
query: in this, each query term appears only once
in the query.
Phrase queries
• We want to be able to answer queries such as
“stanford university” – as a phrase
• Thus the sentence “I went to university at
Stanford” is not a match.
– The concept of phrase queries has proven easily
understood by users;
– one of the few “advanced search” ideas that works
– Many more queries are implicit phrase queries
• For this, it no longer suffices to store only
<term : docs> entries
Positional index example
<be: 993427;
 1: 7, 18, 33, 72, 86, 231;
 2: 3, 149;
 4: 17, 191, 291, 430, 434;
 …>
– Format: <term: document count; then, for each doc, docID: positions of the term in that doc>
• Which of docs 1, 2, 4, 5 could contain "to be or not to be"?
Proximity queries
• LIMIT! /3 STATUTE /3 FEDERAL /2 TORT
– Again, here, /k means “within k words of”.
• Clearly, positional indexes can be used for
such queries; biword indexes cannot.
• Exercise: Adapt the linear merge of postings to
handle proximity queries. Can you make it
work for any value of k?
– This is a little tricky to do correctly and efficiently
– See Figure 2.12 of IIR
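A simplified Python sketch of such a proximity merge. The positional postings below are invented, and the quadratic inner loop is deliberately naive; Figure 2.12 of IIR keeps a sliding window over the positions to stay efficient, and a phrase query would additionally require the positions to be adjacent and in order:

```python
def positional_intersect(pp1, pp2, k):
    """Proximity merge over positional postings ({docID: sorted positions}).
    Returns {docID: [(pos1, pos2), ...]} where the two terms occur within k words."""
    answer = {}
    for doc_id in sorted(set(pp1) & set(pp2)):     # docs containing both terms
        pairs = [(p1, p2)
                 for p1 in pp1[doc_id]
                 for p2 in pp2[doc_id]
                 if abs(p1 - p2) <= k]
        if pairs:
            answer[doc_id] = pairs
    return answer

# Hypothetical positional postings for two terms.
to_postings = {1: [6, 18], 4: [8, 429]}
be_postings = {1: [7, 25], 4: [17, 191]}
print(positional_intersect(to_postings, be_postings, k=1))   # {1: [(6, 7)]}
```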
Rules of thumb
• A positional index is 2–4x as large as a non-
positional index
Combination schemes
• The two approaches (biword indexes and positional
indexes) can be profitably combined
– For particular phrases (“Michael Jackson”, “Britney
Spears”) it is inefficient to keep on merging positional
postings lists
• Even more so for phrases like “The Who”
• Williams et al. (2004) evaluate a more
sophisticated mixed indexing scheme
– A typical web query mixture was executed in ¼ of the
time of using just a positional index
– It required 26% more space than having a positional
index alone
IR vs. databases:
Structured vs unstructured data
• Structured data tends to refer to information
in “tables”
Employee   Manager   Salary
Smith      Jones     50,000
Chang      Smith     60,000
Ivy        Smith     50,000
Semi-structured data
• In fact almost no data is “unstructured”
• E.g., this slide has distinctly identified zones such
as the Title and Bullets
• … to say nothing of linguistic structure
• Facilitates “semi-structured” search such as
– Title contains data AND Bullets contain search
• Or even
– Title is about Object Oriented Programming AND
Author something like stro*rup
– where * is the wild-card operator