Chapter 12:  Indexing and Hashing Basic Concepts Ordered Indices  B+-Tree Index Files B-Tree Index Files Static Hashing Dynamic Hashing  Comparison of Ordered Indexing and Hashing  Index Definition in SQL Multiple-Key Access
Basic Concepts Indexing mechanisms used to speed up access to desired data. E.g., author catalog in library. Search Key - attribute or set of attributes used to look up records in a file. An  index file   consists of records (called  index entries ) of the form (search-key, pointer). Index files are typically much smaller than the original file.  Two basic kinds of indices: Ordered indices:  search keys are stored in sorted order. Hash indices:   search keys are distributed uniformly across “buckets” using a “hash function”.
Index Evaluation Metrics Access types supported efficiently.  E.g.,  records with a specified value in the attribute or records with an attribute value falling in a specified range of values. Access time Insertion time Deletion time Space overhead
Ordered Indices In an  ordered index ,  index entries are stored sorted on the search key value.  E.g., author catalog in library. Primary index :  in a sequentially ordered file, the index whose search key specifies the sequential order of the file. Also called  clustering index The search key of a primary index is usually but not necessarily the primary key. Secondary index :   an index whose search key specifies an order different from the sequential order of the file.  Also called  non-clustering index . Index-sequential file :  ordered sequential file with a primary index. Indexing techniques evaluated on basis of:
Dense Index Files Dense index  — Index record appears for every search-key value in the file.
Sparse Index Files Sparse Index :  contains index records for only some search-key values. Applicable when records are sequentially ordered on search-key To locate a record with search-key value  K  we: Find index record with largest search-key value <  K Search file sequentially starting at the record to which the index record points Less space and less maintenance overhead for insertions and deletions. Generally slower than dense index for locating records. Good tradeoff: sparse index with an index entry for every block in file, corresponding to least search-key value in the block.
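As a rough illustration of the sparse-index lookup just described, here is a minimal Python sketch. It assumes an in-memory stand-in for the real structures: the sparse index is a sorted list of (least key in block, block number) pairs and the file is a list of blocks, each a sorted list of (key, record) pairs; all names are hypothetical. It uses "largest key less than or equal to K" so a key that happens to be a block's least key is still found.

```python
import bisect

def sparse_index_lookup(sparse_index, blocks, k):
    """Locate a record with search-key k via a sparse index.

    sparse_index: sorted list of (least_key_in_block, block_number) pairs,
                  one entry per block.
    blocks:       list of blocks; each block is a sorted list of (key, record).
    """
    keys = [key for key, _ in sparse_index]
    # Index entry with the largest search-key value <= k.
    pos = bisect.bisect_right(keys, k) - 1
    if pos < 0:
        return None                      # k precedes every key in the file
    _, block_no = sparse_index[pos]
    # Search the file sequentially starting at that block.
    for block in blocks[block_no:]:
        for key, record in block:
            if key > k:
                return None              # passed the position where k would be
            if key == k:
                return record
    return None
```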
Example of Sparse Index Files
Multilevel Index If primary index does not fit in memory, access becomes expensive. To reduce number of disk accesses to index records, treat primary index kept on disk as a sequential file and construct a sparse index on it. outer index – a sparse index of primary index inner index – the primary index file If even outer index is too large to fit in main memory, yet another level of index can be created, and so on. Indices at all levels must be updated on insertion or deletion from the file.
Multilevel Index (Cont.)
Index Update:  Deletion If deleted record was the only record in the file with its particular search-key value, the search-key is deleted from the index also. Single-level index deletion: Dense indices – deletion of search-key is similar to file record deletion. Sparse indices – if an entry for the search key exists in the index, it is deleted by replacing the entry in the index with the next search-key value in the file (in search-key order).  If the next search-key value already has an index entry, the entry is deleted instead of being replaced.
Index Update:  Insertion Single-level index insertion: Perform a lookup using the search-key value appearing in the record to be inserted. Dense indices – if the search-key value does not appear in the index, insert it. Sparse indices – if index stores an entry for each block of the file, no change needs to be made to the index unless a new block is created.  In this case, the first search-key value appearing in the new block is inserted into the index. Multilevel insertion (as well as deletion) algorithms are simple extensions of the single-level algorithms
Secondary Indices Frequently, one wants to find all the records whose values in a certain field (which is not the search-key of the primary index) satisfy some condition. Example 1: In the  account  database stored sequentially by account number, we may want to find all accounts in a particular branch. Example 2: as above, but where we want to find all accounts with a specified balance or range of balances. We can have a secondary index with an index record for each search-key value; index record points to a bucket that contains pointers to all the actual records with that particular search-key value.
Secondary Index on  balance  field of  account
Primary and Secondary Indices Secondary indices have to be dense. Indices offer substantial benefits when searching for records. When a file is modified, every index on the file must be updated; updating indices imposes overhead on database modification. Sequential scan using primary index is efficient, but a sequential scan using a secondary index is expensive: each record access may fetch a new block from disk.
B + -Tree Index Files B + -tree indices are an alternative to indexed-sequential files. Disadvantage of indexed-sequential files: performance degrades as file grows, since many overflow blocks get created.  Periodic reorganization of entire file is required. Advantage of B + -tree   index files:  automatically reorganizes itself with small, local changes in the face of insertions and deletions.  Reorganization of entire file is not required to maintain performance. Disadvantage of B + -trees: extra insertion and deletion overhead, space overhead. Advantages of B + -trees outweigh disadvantages, and they are used extensively.
B + -Tree Index Files (Cont.) A B + -tree is a rooted tree satisfying the following properties: All paths from root to leaf are of the same length. Each node that is not a root or a leaf has between ⌈ n /2⌉ and  n  children. A leaf node has between ⌈( n –1)/2⌉ and  n –1 values. Special cases:  If the root is not a leaf, it has at least 2 children. If the root is a leaf (that is, there are no other nodes in the tree), it can have between 0 and ( n –1) values.
B + -Tree Node Structure Typical node K i  are the search-key values  P i  are pointers to children (for non-leaf nodes) or pointers to records or buckets of records (for leaf nodes). The search-keys in a node are ordered    K 1  <  K 2  <  K 3  <  . . .   <  K n– 1
Leaf Nodes in B + -Trees For  i  = 1, 2, . . .,  n– 1, pointer  P i  either points to a file record with search-key value  K i , or to a bucket of pointers to file records, each record having search-key value  K i .  Only need bucket structure if search-key does not form a primary key. If  L i , L j  are leaf nodes and  i  <  j, L i ’s search-key values are less than  L j ’s search-key values P n  points to next leaf node in search-key order Properties of a leaf node:
Non-Leaf Nodes in B + -Trees Non-leaf nodes form a multi-level sparse index on the leaf nodes.  For a non-leaf node with  m  pointers: All the search-keys in the subtree to which  P 1  points are less than  K 1 . For 2 ≤  i  ≤  m  – 1, all the search-keys in the subtree to which  P i  points have values greater than or equal to  K i –1  and less than  K i . All the search-keys in the subtree to which  P m  points have values greater than or equal to  K m –1 .
Example of a B + -tree B + -tree for  account  file ( n =  3)
Example of B + -tree Leaf nodes must have between 2 and 4 values  (⌈( n –1)/2⌉ and  n  –1, with  n  = 5). Non-leaf nodes other than root must have between 3 and 5 children (⌈ n /2⌉ and  n , with  n  = 5). Root must have at least 2 children. B + -tree for  account  file ( n  = 5)
Observations about B + -trees Since the inter-node connections are done by pointers, “logically” close blocks need not be “physically” close. The non-leaf levels of the B + -tree form a hierarchy of sparse indices. The B + -tree contains a relatively small number of levels (logarithmic in the size of the main file), thus searches can be conducted efficiently. Insertions and deletions to the main file can be handled efficiently, as the index can be restructured in logarithmic time (as we shall see).
Queries on B + -Trees Find all records with a search-key value of  k. Start with the root node. Examine the node for the smallest search-key value >  k. If such a value exists, assume it is  K j .  Then follow  P j  to the child node. Otherwise  k  ≥  K m –1 , where there are  m  pointers in the node.  Then follow  P m  to the child node. If the node reached by following the pointer above is not a leaf node, repeat the above procedure on the node, and follow the corresponding pointer. Eventually reach a leaf node.  If for some  i , key  K i  =  k,  follow pointer  P i   to the desired record or bucket.  Else no record with search-key value  k  exists.
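The lookup procedure above can be sketched in Python. This is a simplified in-memory model (the node class and its field names are invented for illustration), not the on-disk structure:

```python
class Node:
    """Simplified B+-tree node: keys are in ascending order; for internal
    nodes, children[i] plays the role of pointer P(i+1) in the slide's notation."""
    def __init__(self, keys, children=None, records=None, is_leaf=False):
        self.keys = keys
        self.children = children or []   # child nodes (internal nodes only)
        self.records = records or []     # record/bucket pointers (leaf nodes only)
        self.is_leaf = is_leaf

def bplus_lookup(root, k):
    node = root
    while not node.is_leaf:
        # Find the smallest search-key value > k; if none, fall through
        # to the last pointer of the node.
        i = 0
        while i < len(node.keys) and node.keys[i] <= k:
            i += 1
        node = node.children[i]
    # At the leaf, return the record/bucket for k if it is present.
    for key, rec in zip(node.keys, node.records):
        if key == k:
            return rec
    return None
```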
Queries on B +- Trees (Cont.) In processing a query, a path is traversed in the tree from the root to some leaf node. If there are  K  search-key values in the file, the path is no longer than ⌈log ⌈ n /2⌉ ( K )⌉. A node is generally the same size as a disk block, typically 4 kilobytes, and  n  is typically around 100 (40 bytes per index entry). With 1 million search-key values and  n  = 100, at most ⌈log 50 (1,000,000)⌉ = 4 nodes are accessed in a lookup. Contrast this with a balanced binary tree with 1 million search-key values: around 20 nodes are accessed in a lookup. The above difference is significant since every node access may need a disk I/O, costing around 20 milliseconds!
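A quick check of the slide's arithmetic in Python, using the values given above:

```python
import math

n, K = 100, 1_000_000                     # fan-out and number of search-key values
height = math.ceil(math.log(K, n // 2))   # ceil(log_50(1,000,000))
print(height)                             # 4 node accesses at most, as stated
```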
Updates on B + -Trees:  Insertion Find the leaf node in which the search-key value would appear If the search-key value is already there in the leaf node, record is added to file and if necessary a pointer is inserted into the bucket. If the search-key value is not there, then add the record to the main file and create a bucket if necessary.  Then: If there is room in the leaf node, insert (key-value, pointer) pair in the leaf node Otherwise, split the node (along with the new (key-value, pointer) entry) as discussed in the next slide.
Updates on B + -Trees:  Insertion (Cont.) Splitting a node: take the  n  (search-key value, pointer) pairs (including the one being inserted) in sorted order.  Place the first ⌈ n /2⌉ in the original node, and the rest in a new node. Let the new node be  p,  and let  k  be the least key value in  p.  Insert ( k, p ) in the parent of the node being split. If the parent is full, split it and propagate the split further up. The splitting of nodes proceeds upwards till a node that is not full is found.  In the worst case the root node may be split, increasing the height of the tree by 1.  Result of splitting node containing Brighton and Downtown on  inserting Clearview
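A minimal Python sketch of the leaf-split rule described above, assuming entries are (key, pointer) pairs held in memory; inserting the returned separator into the parent is left to the caller:

```python
import math

def split_leaf(entries, new_entry, n):
    """entries:   the (key, pointer) pairs already in the leaf, sorted by key
    new_entry: the (key, pointer) pair being inserted
    n:         the maximum number of pointers per node
    Returns (left, right, separator) where separator is the least key of the
    new right node, to be inserted into the parent as (k, p)."""
    all_pairs = sorted(entries + [new_entry], key=lambda e: e[0])  # n pairs, sorted
    keep = math.ceil(n / 2)                                        # first ceil(n/2) stay
    left, right = all_pairs[:keep], all_pairs[keep:]
    return left, right, right[0][0]
```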
Updates on B + -Trees:  Insertion (Cont.) B + -Tree before and after insertion of “Clearview”
Updates on B + -Trees: Deletion Find the record to be deleted, and remove it from the main file and from the bucket (if present) Remove (search-key value, pointer) from the leaf node if there is no bucket or if the bucket has become empty If the node has too few entries due to the removal, and the entries in the node and a sibling fit into a single node, then  Insert all the search-key values in the two nodes into a single node (the one on the left), and delete the other node. Delete the pair ( K i– 1 ,  P i ),  where  P i  is the pointer to the deleted node, from its parent, recursively using the above procedure.
Updates on B + -Trees:  Deletion Otherwise, if the node has too few entries due to the removal, but the entries in the node and a sibling do not fit into a single node, then Redistribute the pointers between the node and a sibling such that both have more than the minimum number of entries. Update the corresponding search-key value in the parent of the node. The node deletions may cascade upwards till a node which has ⌈ n /2⌉ or more pointers is found.  If the root node has only one pointer after deletion, it is deleted and the sole child becomes the root.
Examples of B + -Tree Deletion The removal of the leaf node containing “Downtown” did not result in its parent having too few pointers.  So the cascaded deletions stopped with the deleted leaf node’s parent. Before and after deleting “Downtown”
Examples of B + -Tree Deletion (Cont.) Node with “Perryridge” becomes underfull (actually empty, in this special case) and merged with its sibling. As a result “Perryridge” node’s parent became underfull, and was merged with its sibling (and an entry was deleted from their parent) Root node then had only one child, and was deleted and its child became the new root node Deletion of “Perryridge” from result of previous example
Example of B + -tree Deletion (Cont.) Parent  of leaf containing Perryridge became underfull, and borrowed a pointer from its left sibling Search-key value in the parent’s parent changes as a result Before and after deletion of “Perryridge” from earlier example
B + -Tree File Organization Index file degradation problem is solved by using B + -Tree indices.  Data file degradation problem is solved by using B + -Tree File Organization. The leaf nodes in a B + -tree file organization store records, instead of pointers. Since records are larger than pointers, the maximum number of records that can be stored in a leaf node is less than the number of pointers in a nonleaf node. Leaf nodes are still required to be half full. Insertion and deletion are handled in the same way as insertion and deletion of entries in a B + -tree index.
B + -Tree File Organization (Cont.) Good space utilization is important since records use more space than pointers.  To improve space utilization, involve more sibling nodes in redistribution during splits and merges. Involving 2 siblings in redistribution (to avoid split / merge where possible) results in each node having at least ⌊2 n /3⌋ entries. Example of B + -tree File Organization
B-Tree Index Files Similar to B+-tree, but B-tree allows search-key values to appear only once; eliminates redundant storage of search keys. Search keys in nonleaf nodes appear nowhere else in the B-tree; an additional pointer field for each search key in a nonleaf node must be included. Generalized B-tree leaf node. Nonleaf node – pointers  B i  are the bucket or file record pointers.
B-Tree Index File Example B-tree (above) and B+-tree (below) on same data
B-Tree Index Files (Cont.) Advantages of B-Tree indices: May use fewer tree nodes than a corresponding B + -Tree. Sometimes possible to find search-key value before reaching leaf node. Disadvantages of B-Tree indices: Only a small fraction of all search-key values are found early.  Non-leaf nodes are larger, so fan-out is reduced.  Thus B-Trees typically have greater depth than the corresponding  B + -Tree. Insertion and deletion are more complicated than in B + -Trees.  Implementation is harder than B + -Trees. Typically, advantages of B-Trees do not outweigh disadvantages.
Static Hashing A  bucket  is a unit of storage containing one or more records (a bucket is typically a disk block).  In a  hash file organization  we obtain the bucket of a record directly from its search-key value using a  hash   function. Hash function  h  is a function from the set of all search-key values  K  to the set of all bucket addresses  B. Hash function is used to locate records for access, insertion as well as deletion. Records with different search-key values may be mapped to the same bucket; thus entire bucket has to be searched sequentially to locate a record.
Example of Hash File Organization (Cont.) There are 10 buckets. The binary representation of the  i th character is assumed to be the integer  i. The hash function returns the sum of the binary representations of the characters modulo 10. E.g., h(Perryridge) = 5, h(Round Hill) = 3, h(Brighton) = 3. Hash file organization of  account  file, using  branch-name  as key  (See figure in next slide.)
Example of Hash File Organization  Hash file organization of  account  file, using  branch-name  as key   (see previous slide for details).
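A small Python sketch that reproduces the example's bucket numbers, assuming each letter is mapped to its position in the alphabet and non-letter characters (the space in "Round Hill") contribute nothing:

```python
def h(branch_name, n_buckets=10):
    """Sum the per-character integers (a=1, ..., z=26) modulo the bucket count."""
    total = sum(ord(c) - ord('a') + 1 for c in branch_name.lower() if c.isalpha())
    return total % n_buckets

print(h("Perryridge"))   # 5
print(h("Round Hill"))   # 3
print(h("Brighton"))     # 3
```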
Hash Functions Worst hash function maps all search-key values to the same bucket; this makes access time proportional to the number of search-key values in the file. An ideal hash function is  uniform ,  i.e., each bucket is assigned the same number of search-key values from the set of  all  possible values. Ideal hash function is  random , so each bucket will have the same number of records assigned to it irrespective of the  actual distribution  of search-key values in the file. Typical hash functions perform computation on the internal binary representation of the search-key.  For example, for a string search-key, the binary representations of all the characters in the string could be added and the sum modulo the number of buckets could be returned.
Handling of Bucket Overflows Bucket overflow can occur because of  Insufficient buckets  Skew in distribution of records.  This can occur due to two reasons: multiple records have same search-key value chosen hash function produces non-uniform distribution of key values Although the probability of bucket overflow can be reduced, it cannot be eliminated; it is handled by using  overflow buckets .
Handling of Bucket Overflows (Cont.) Overflow chaining  – the overflow buckets of a given bucket are chained together in a linked list. Above scheme is called  closed hashing .   An alternative, called  open hashing , which does not use overflow buckets,  is not suitable for database applications.
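A minimal Python sketch of overflow chaining (closed hashing). The class and function names are invented for illustration; a real system would store each bucket in a disk block:

```python
class Bucket:
    def __init__(self, capacity):
        self.capacity = capacity
        self.records = []
        self.overflow = None             # next bucket in the overflow chain

def bucket_insert(bucket, record):
    """Append to the first bucket in the chain with free space, adding an
    overflow bucket at the end of the chain if every bucket is full."""
    while len(bucket.records) >= bucket.capacity:
        if bucket.overflow is None:
            bucket.overflow = Bucket(bucket.capacity)
        bucket = bucket.overflow
    bucket.records.append(record)

def bucket_lookup(bucket, key, key_of):
    """Scan the bucket and its whole overflow chain for matching records."""
    matches = []
    while bucket is not None:
        matches.extend(r for r in bucket.records if key_of(r) == key)
        bucket = bucket.overflow
    return matches
```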
Hash Indices Hashing can be used not only for file organization, but also for index-structure creation.  A  hash index  organizes the search keys, with their associated record pointers, into a hash file structure. Strictly speaking, hash indices are always secondary indices; if the file itself is organized using hashing, a separate primary hash index on it using the same search-key is unnecessary.  However, we use the term hash index to refer to both secondary index structures and hash organized files.
Example of Hash Index
Deficiencies of Static Hashing In static hashing, function  h  maps search-key values to a fixed set  B  of bucket addresses. Databases grow with time.  If initial number of buckets is too small, performance will degrade due to too many overflows. If file size at some point in the future is anticipated and number of buckets allocated accordingly, significant amount of space will be wasted initially. If database shrinks, again space will be wasted. One option is periodic re-organization of the file with a new hash function, but it is very expensive. These problems can be avoided by using techniques that allow the number of buckets to be modified dynamically.
Dynamic Hashing Good for database that grows and shrinks in size. Allows the hash function to be modified dynamically. Extendable hashing  – one form of dynamic hashing.  Hash function generates values over a large range, typically  b -bit integers, with  b  = 32. At any time use only a prefix of the hash function to index into a table of bucket addresses.  Let the length of the prefix be  i  bits,  0 ≤  i  ≤ 32.  Bucket address table size = 2^ i .   Initially  i  = 0. Value of  i  grows and shrinks as the size of the database grows and shrinks. Multiple entries in the bucket address table may point to a bucket.  Thus, actual number of buckets is < 2^ i . The number of buckets also changes dynamically due to coalescing and splitting of buckets.
General Extendable Hash Structure  In this structure,  i 2  =  i 3  =  i , whereas  i 1  =  i  – 1 (see next slide for details)
Use of Extendable Hash Structure Each bucket  j  stores a value  i j ;  all the entries that point to the same bucket have the same values on the first  i j  bits.   To locate the bucket containing search-key  K j : 1. Compute  h(K j ) = X 2. Use the first  i  high order bits of  X  as a displacement into bucket address table, and follow the pointer to appropriate bucket To insert a record with search-key value  K j   follow same procedure as look-up and locate the bucket, say  j .  If there is room in the bucket  j  insert record in the bucket.  Else the bucket must be split and insertion re-attempted (next slide.) Overflow buckets used instead in some cases (will see shortly)
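A sketch of the lookup step in Python, assuming the bucket address table is a plain list of size 2^i and h returns b-bit integers; all names are illustrative:

```python
def extendable_lookup(table, i, h, key, b=32):
    """Locate the bucket for a search key: compute X = h(key) and use the
    first i high-order bits of X as an index into the bucket address table."""
    x = h(key) & ((1 << b) - 1)            # keep exactly b bits of the hash value
    index = x >> (b - i) if i > 0 else 0   # first i high-order bits
    return table[index]
```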
Updates in Extendable Hash Structure To split a bucket  j  when inserting a record with search-key value  K j : If  i  >  i j  (more than one pointer to bucket  j ): allocate a new bucket  z , and set  i j   and  i z  to the old  i j  + 1. Make the second half of the bucket address table entries pointing to  j  point to  z . Remove and reinsert each record in bucket  j. Recompute the new bucket for  K j   and insert the record in the bucket (further splitting is required if the bucket is still full). If  i = i j   (only one pointer to bucket  j ): increment  i  and double the size of the bucket address table. Replace each entry in the table by two entries that point to the same bucket. Recompute the new bucket address table entry for  K j . Now  i  >  i j ,  so use the first case above.
Updates in Extendable Hash Structure (Cont.) When inserting a value, if the bucket is full after several splits (that is,  i  reaches some limit  b ), create an overflow bucket instead of splitting the bucket address table further. To delete a key value,  locate it in its bucket and remove it.  The bucket itself can be removed if it becomes empty (with appropriate updates to the bucket address table).  Coalescing of buckets can be done (can coalesce only with a “buddy” bucket having the same value of i j  and the same i j  – 1 bit prefix, if it is present).  Decreasing bucket address table size is also possible. Note: decreasing bucket address table size is an expensive operation and should be done only if the number of buckets becomes much smaller than the size of the table.
Use of Extendable Hash Structure:  Example  Initial Hash structure, bucket size = 2
Example (Cont.) Hash structure after  insertion of one Brighton and two Downtown records
Example (Cont.) Hash structure after insertion of Mianus record
Example (Cont.) Hash structure after insertion of  three Perryridge records
Example (Cont.) Hash structure after insertion of Redwood and Round Hill records
Extendable Hashing vs. Other Schemes Benefits of extendable hashing:  Hash performance does not degrade with growth of file Minimal space overhead Disadvantages of extendable hashing Extra level of indirection to find desired record Bucket address table may itself become very big (larger than memory) Need a tree structure to locate desired record in the structure! Changing size of bucket address table is an expensive operation Linear hashing  is an alternative mechanism which avoids these disadvantages at the possible cost of more bucket overflows
Comparison of Ordered Indexing and Hashing Cost of periodic re-organization Relative frequency of insertions and deletions Is it desirable to optimize average access time at the expense of worst-case access time? Expected type of queries: Hashing is generally better at retrieving records having a specified value of the key. If range queries are common, ordered indices are to be preferred
Index Definition in SQL Create an index: create index  <index-name>  on  <relation-name> (<attribute-list>) E.g.:  create index  b-index  on  branch(branch-name) Use  create unique index  to indirectly specify and enforce the condition that the search key is a candidate key. Not really required if the SQL  unique  integrity constraint is supported. To drop an index:  drop index  <index-name>
Multiple-Key Access Use multiple indices for certain types of queries. Example:  select  account-number from  account where  branch-name  = “Perryridge”  and  balance  = 1000 Possible strategies for processing query using indices on single attributes: 1. Use index on  branch-name  to find accounts with  branch-name = “ Perryridge”;  test  balance = 1000.   2. Use index   on  balance  to find accounts with balances of $1000; test  branch-name =  “Perryridge”. 3. Use  branch-name  index to find pointers to all records pertaining to the Perryridge branch.  Similarly use index on  balance .  Take intersection of both sets of pointers obtained.
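Strategy 3 can be sketched in Python, modelling each single-attribute secondary index as a mapping from an attribute value to a set of record pointers (hypothetical structures, for illustration only):

```python
def lookup_branch_and_balance(branch_index, balance_index, branch, balance):
    """Fetch the pointer sets from the two indices and intersect them;
    only the records in the intersection need to be read from the file."""
    branch_ptrs = branch_index.get(branch, set())
    balance_ptrs = balance_index.get(balance, set())
    return branch_ptrs & balance_ptrs

# Example:
#   branch_index  = {"Perryridge": {1, 4, 7}}
#   balance_index = {1000: {4, 9}}
#   lookup_branch_and_balance(branch_index, balance_index, "Perryridge", 1000) -> {4}
```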
Indices on Multiple Attributes Suppose we have an index on combined search-key ( branch-name, balance ). With the  where  clause where  branch-name =  “Perryridge”  and   balance =  1000 the index on the combined search-key will fetch only records that satisfy both conditions. Using separate indices is less efficient: we may fetch many records (or pointers) that satisfy only one of the conditions. Can also efficiently handle  where  branch-name  = “Perryridge”  and  balance  < 1000 But cannot efficiently handle where  branch-name  < “Perryridge”  and   balance =  1000 May fetch many records that satisfy the first but not the second condition.
Grid Files Structure used to speed the processing of general multiple search-key queries involving one or more comparison operators. The  grid file  has a single grid array and one linear scale for each search-key attribute.  The grid array has number of dimensions equal to number of search-key attributes. Multiple cells of grid array can point to same bucket To find the bucket for a search-key value, locate the row and column of its cell using the linear scales and follow pointer
Example Grid File for  account
Queries on a Grid File A grid file on two attributes  A  and  B  can handle queries of all the following forms with reasonable efficiency:  ( a 1  ≤  A  ≤  a 2 ), ( b 1  ≤  B  ≤  b 2 ), ( a 1  ≤  A  ≤  a 2  ∧  b 1  ≤  B  ≤  b 2 ). E.g., to answer ( a 1  ≤  A  ≤  a 2  ∧  b 1  ≤  B  ≤  b 2 ), use linear scales to find corresponding candidate grid array cells, and look up all the buckets pointed to from those cells.
Grid Files (Cont.) During insertion, if a bucket becomes full, new bucket can be created if more than one cell points to it.  Idea similar to extendable hashing, but on multiple dimensions If only one cell points to it, either an overflow bucket must be created or the grid size must be increased Linear scales must be chosen to uniformly distribute records across cells.  Otherwise there will be too many overflow buckets. Periodic re-organization to increase grid size will help. But reorganization can be very expensive. Space overhead of grid array can be high. R-trees (Chapter 23) are an alternative
Bitmap Indices Bitmap indices are a special type of index designed for efficient querying on multiple keys Records in a relation are assumed to be numbered sequentially from, say, 0 Given a number  n  it must be easy to retrieve record  n Particularly easy if records are of fixed size Applicable on attributes that take on a relatively small number of distinct values E.g. gender, country, state, … E.g. income-level (income broken up into a small number of  levels such as 0-9999, 10000-19999, 20000-50000, 50000- infinity) A bitmap is simply an array of bits
Bitmap Indices (Cont.) In its simplest form a bitmap index on an attribute has a bitmap for each value of the attribute Bitmap has as many bits as records In a bitmap for value v, the bit for a record is 1 if the record has the value v for the attribute, and is 0 otherwise
Bitmap Indices (Cont.) Bitmap indices are useful for queries on multiple attributes  not particularly useful for single attribute queries Queries are answered using bitmap operations Intersection (and) Union (or) Complementation (not)  Each operation takes two bitmaps of the same size and applies the operation on corresponding bits to get the result bitmap E.g.  100110  AND 110011 = 100010 100110  OR  110011 = 110111   NOT 100110  = 011001 Males with income level L1:  10010 AND 10100 = 10000 Can then retrieve required tuples. Counting number of matching tuples is even faster
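The bitmap operations above, sketched in Python with bitmaps written as strings of '0'/'1' for readability (a real implementation packs them into machine words, as the next slide discusses):

```python
def bit_and(a, b):
    return ''.join('1' if x == '1' and y == '1' else '0' for x, y in zip(a, b))

def bit_or(a, b):
    return ''.join('1' if x == '1' or y == '1' else '0' for x, y in zip(a, b))

def bit_not(a):
    return ''.join('1' if x == '0' else '0' for x in a)

print(bit_and('100110', '110011'))   # 100010
print(bit_or('100110', '110011'))    # 110111
print(bit_not('100110'))             # 011001
print(bit_and('10010', '10100'))     # 10000  (males with income level L1)
```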
Bitmap Indices (Cont.) Bitmap indices generally very small compared with relation size E.g. if record is 100 bytes, space for a single bitmap is 1/800 of space used by relation.  If number of distinct attribute values is 8, bitmap is only 1% of relation size Deletion needs to be handled properly Existence bitmap  to note if there is a valid record at a record location Needed for complementation not( A=v ):  (NOT bitmap-A-v) AND ExistenceBitmap Should keep bitmaps for all values, even null value To correctly handle SQL null semantics for  NOT( A=v ): intersect above result with  (NOT  bitmap-A-Null )
Efficient Implementation of Bitmap Operations Bitmaps are packed into words;  a single word AND (a basic CPU instruction) computes the AND of 32 or 64 bits at once. E.g., a 1-million-bit map can be ANDed with just 31,250 instructions. Counting the number of 1s can be done fast by a trick: Use each byte to index into a precomputed array of 256 elements, each storing the count of 1s in the binary representation. Can use pairs of bytes to speed up further at a higher memory cost. Add up the retrieved counts. Bitmaps can be used instead of Tuple-ID lists at leaf levels of  B + -trees, for values that have a large number of matching records. Worthwhile if > 1/64 of the records have that value, assuming a tuple-id is 64 bits. Above technique merges benefits of bitmap and B + -tree indices.
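The byte-lookup trick for counting 1s, sketched in Python; the 256-entry table is precomputed once, and the example bitmap is invented:

```python
# Precomputed count of 1 bits for every possible byte value (256 entries).
ONES_IN_BYTE = [bin(v).count('1') for v in range(256)]

def count_ones(packed_bitmap):
    """packed_bitmap: a bytes object holding the bitmap, 8 bits per byte."""
    return sum(ONES_IN_BYTE[byte] for byte in packed_bitmap)

print(count_ones(bytes([0b10011010, 0b00001111])))   # 8
```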
End of Chapter
Partitioned Hashing Hash values are split into segments that depend on each attribute of the search-key: ( A 1 , A 2 , . . . ,  A n )  for an  n -attribute search-key. Example:  n =  2, for  customer,  search-key being  ( customer-street, customer-city ):
search-key value → hash value
(Main, Harrison) → 101 111
(Main, Brooklyn) → 101 001
(Park, Palo Alto) → 010 010
(Spring, Brooklyn) → 001 001
(Alma, Palo Alto) → 110 010
To answer an equality query on a single attribute, need to look up multiple buckets.  Similar in effect to grid files.
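A sketch in Python of how the per-attribute segments compose into a bucket address; the segment hash functions and 3-bit widths are stand-ins matching the example's layout:

```python
def partitioned_hash(values, segment_hashes):
    """Concatenate one hash segment per attribute, e.g. h1("Main") = '101'
    and h2("Harrison") = '111' give the bucket address '101111'."""
    return ''.join(h(v) for v, h in zip(values, segment_hashes))

# An equality query on customer-city alone fixes only the second 3-bit
# segment, so all 2**3 = 8 buckets sharing that segment must be examined.
```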
Sequential File For  account  Records
Deletion of “Perryridge” From the B + -Tree of Figure 12.12
Sample  account  File

More Related Content

What's hot (20)

DBMS Keys
DBMS KeysDBMS Keys
DBMS Keys
Tarun Maheshwari
 
Dinive conquer algorithm
Dinive conquer algorithmDinive conquer algorithm
Dinive conquer algorithm
Mohd Arif
 
Tree - Data Structure
Tree - Data StructureTree - Data Structure
Tree - Data Structure
Ashim Lamichhane
 
B trees in Data Structure
B trees in Data StructureB trees in Data Structure
B trees in Data Structure
Anuj Modi
 
I.BEST FIRST SEARCH IN AI
I.BEST FIRST SEARCH IN AII.BEST FIRST SEARCH IN AI
I.BEST FIRST SEARCH IN AI
vikas dhakane
 
Asymptotic notations
Asymptotic notationsAsymptotic notations
Asymptotic notations
Nikhil Sharma
 
Transaction management DBMS
Transaction  management DBMSTransaction  management DBMS
Transaction management DBMS
Megha Patel
 
File organization 1
File organization 1File organization 1
File organization 1
Rupali Rana
 
Dbms architecture
Dbms architectureDbms architecture
Dbms architecture
Shubham Dwivedi
 
Dbms Notes Lecture 9 : Specialization, Generalization and Aggregation
Dbms Notes Lecture 9 : Specialization, Generalization and AggregationDbms Notes Lecture 9 : Specialization, Generalization and Aggregation
Dbms Notes Lecture 9 : Specialization, Generalization and Aggregation
BIT Durg
 
Dynamic storage allocation techniques in Compiler design
Dynamic storage allocation techniques in Compiler designDynamic storage allocation techniques in Compiler design
Dynamic storage allocation techniques in Compiler design
kunjan shah
 
stack & queue
stack & queuestack & queue
stack & queue
manju rani
 
State space search
State space searchState space search
State space search
chauhankapil
 
Dbms relational model
Dbms relational modelDbms relational model
Dbms relational model
Chirag vasava
 
Relational model
Relational modelRelational model
Relational model
Dabbal Singh Mahara
 
Data Structures : hashing (1)
Data Structures : hashing (1)Data Structures : hashing (1)
Data Structures : hashing (1)
Home
 
Distributed dbms architectures
Distributed dbms architecturesDistributed dbms architectures
Distributed dbms architectures
Pooja Dixit
 
1.2 steps and functionalities
1.2 steps and functionalities1.2 steps and functionalities
1.2 steps and functionalities
Krish_ver2
 
Concurrency Control in Database Management System
Concurrency Control in Database Management SystemConcurrency Control in Database Management System
Concurrency Control in Database Management System
Janki Shah
 
11. Storage and File Structure in DBMS
11. Storage and File Structure in DBMS11. Storage and File Structure in DBMS
11. Storage and File Structure in DBMS
koolkampus
 
Dinive conquer algorithm
Dinive conquer algorithmDinive conquer algorithm
Dinive conquer algorithm
Mohd Arif
 
B trees in Data Structure
B trees in Data StructureB trees in Data Structure
B trees in Data Structure
Anuj Modi
 
I.BEST FIRST SEARCH IN AI
I.BEST FIRST SEARCH IN AII.BEST FIRST SEARCH IN AI
I.BEST FIRST SEARCH IN AI
vikas dhakane
 
Asymptotic notations
Asymptotic notationsAsymptotic notations
Asymptotic notations
Nikhil Sharma
 
Transaction management DBMS
Transaction  management DBMSTransaction  management DBMS
Transaction management DBMS
Megha Patel
 
File organization 1
File organization 1File organization 1
File organization 1
Rupali Rana
 
Dbms Notes Lecture 9 : Specialization, Generalization and Aggregation
Dbms Notes Lecture 9 : Specialization, Generalization and AggregationDbms Notes Lecture 9 : Specialization, Generalization and Aggregation
Dbms Notes Lecture 9 : Specialization, Generalization and Aggregation
BIT Durg
 
Dynamic storage allocation techniques in Compiler design
Dynamic storage allocation techniques in Compiler designDynamic storage allocation techniques in Compiler design
Dynamic storage allocation techniques in Compiler design
kunjan shah
 
State space search
State space searchState space search
State space search
chauhankapil
 
Dbms relational model
Dbms relational modelDbms relational model
Dbms relational model
Chirag vasava
 
Data Structures : hashing (1)
Data Structures : hashing (1)Data Structures : hashing (1)
Data Structures : hashing (1)
Home
 
Distributed dbms architectures
Distributed dbms architecturesDistributed dbms architectures
Distributed dbms architectures
Pooja Dixit
 
1.2 steps and functionalities
1.2 steps and functionalities1.2 steps and functionalities
1.2 steps and functionalities
Krish_ver2
 
Concurrency Control in Database Management System
Concurrency Control in Database Management SystemConcurrency Control in Database Management System
Concurrency Control in Database Management System
Janki Shah
 
11. Storage and File Structure in DBMS
11. Storage and File Structure in DBMS11. Storage and File Structure in DBMS
11. Storage and File Structure in DBMS
koolkampus
 

Viewers also liked (20)

Indexing and hashing
Indexing and hashingIndexing and hashing
Indexing and hashing
Jeet Poria
 
DBMS - Normalization
DBMS - NormalizationDBMS - Normalization
DBMS - Normalization
Jitendra Tomar
 
ER Model in DBMS
ER Model in DBMSER Model in DBMS
ER Model in DBMS
Kabindra Koirala
 
Relational algebra in dbms
Relational algebra in dbmsRelational algebra in dbms
Relational algebra in dbms
shekhar1991
 
Set operators
Set  operatorsSet  operators
Set operators
Manuel S. Enverga University Foundation
 
Relational keys
Relational keysRelational keys
Relational keys
Sana2020
 
Sql Authorization
Sql AuthorizationSql Authorization
Sql Authorization
Fhuy
 
Database management system basic, database, database management, learn databa...
Database management system basic, database, database management, learn databa...Database management system basic, database, database management, learn databa...
Database management system basic, database, database management, learn databa...
University of Science and Technology Chitttagong
 
6. Integrity and Security in DBMS
6. Integrity and Security in DBMS6. Integrity and Security in DBMS
6. Integrity and Security in DBMS
koolkampus
 
Architecture of-dbms-and-data-independence
Architecture of-dbms-and-data-independenceArchitecture of-dbms-and-data-independence
Architecture of-dbms-and-data-independence
Anuj Modi
 
Relational Algebra-Database Systems
Relational Algebra-Database SystemsRelational Algebra-Database Systems
Relational Algebra-Database Systems
jakodongo
 
Slide 5 keys
Slide 5 keysSlide 5 keys
Slide 5 keys
Visakh V
 
View of data DBMS
View of data DBMSView of data DBMS
View of data DBMS
Rahul Narang
 
PLM Introduction
PLM IntroductionPLM Introduction
PLM Introduction
Jayakumar Vadivelu
 
Database language
Database languageDatabase language
Database language
University of Science and Technology Chitttagong
 
2. Entity Relationship Model in DBMS
2. Entity Relationship Model in DBMS2. Entity Relationship Model in DBMS
2. Entity Relationship Model in DBMS
koolkampus
 
Database design & Normalization (1NF, 2NF, 3NF)
Database design & Normalization (1NF, 2NF, 3NF)Database design & Normalization (1NF, 2NF, 3NF)
Database design & Normalization (1NF, 2NF, 3NF)
Jargalsaikhan Alyeksandr
 
Trigger
TriggerTrigger
Trigger
Slideshare
 
15. Transactions in DBMS
15. Transactions in DBMS15. Transactions in DBMS
15. Transactions in DBMS
koolkampus
 
ERP Implementation Life Cycle
ERP Implementation Life CycleERP Implementation Life Cycle
ERP Implementation Life Cycle
Apurv Gourav
 
Ad

Similar to 12. Indexing and Hashing in DBMS (20)

Indexing and Hashing
Indexing and HashingIndexing and Hashing
Indexing and Hashing
sathish sak
 
DOC-20240804-WA0006..pdforaclesqlindexing
DOC-20240804-WA0006..pdforaclesqlindexingDOC-20240804-WA0006..pdforaclesqlindexing
DOC-20240804-WA0006..pdforaclesqlindexing
storage2ndyr
 
ch12
ch12ch12
ch12
KITE www.kitecolleges.com
 
Indexing and-hashing
Indexing and-hashingIndexing and-hashing
Indexing and-hashing
Ami Ranjit
 
UNIT-6.ppt discusses about indexing aand hashing techniques
UNIT-6.ppt discusses about indexing aand hashing techniquesUNIT-6.ppt discusses about indexing aand hashing techniques
UNIT-6.ppt discusses about indexing aand hashing techniques
DrRBullibabu
 
Ardbms
ArdbmsArdbms
Ardbms
guestcc2d29
 
DBMS-Unit5-PPT.pptx important for revision
DBMS-Unit5-PPT.pptx important for revisionDBMS-Unit5-PPT.pptx important for revision
DBMS-Unit5-PPT.pptx important for revision
yuvivarmaa
 
Ch12
Ch12Ch12
Ch12
Welly Dian Astika
 
Furnish an Index Using the Works of Tree Structures
Furnish an Index Using the Works of Tree StructuresFurnish an Index Using the Works of Tree Structures
Furnish an Index Using the Works of Tree Structures
ijceronline
 
Index Structures.pptx
Index Structures.pptxIndex Structures.pptx
Index Structures.pptx
MBablu1
 
Indexing.ppt mmmmmmmmmmmmmmmmmmmmmmmmmmmmm
Indexing.ppt mmmmmmmmmmmmmmmmmmmmmmmmmmmmmIndexing.ppt mmmmmmmmmmmmmmmmmmmmmmmmmmmmm
Indexing.ppt mmmmmmmmmmmmmmmmmmmmmmmmmmmmm
RAtna29
 
Indexing.ppt
Indexing.pptIndexing.ppt
Indexing.ppt
KalsoomTahir2
 
9910559 jjjgjgjfs lke lwmerfml lew we.ppt
9910559 jjjgjgjfs lke   lwmerfml lew  we.ppt9910559 jjjgjgjfs lke   lwmerfml lew  we.ppt
9910559 jjjgjgjfs lke lwmerfml lew we.ppt
abduganiyevbekzod011
 
Indexing.pptvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
Indexing.pptvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvIndexing.pptvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
Indexing.pptvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
shesnasuneer
 
Indexing_DATA STRUCTURE FOR ENGINEERING STUDENTS ppt
Indexing_DATA STRUCTURE FOR ENGINEERING STUDENTS pptIndexing_DATA STRUCTURE FOR ENGINEERING STUDENTS ppt
Indexing_DATA STRUCTURE FOR ENGINEERING STUDENTS ppt
ssuser99ca78
 
Cs437 lecture 14_15
Cs437 lecture 14_15Cs437 lecture 14_15
Cs437 lecture 14_15
Aneeb_Khawar
 
Database Management Systems full lecture
Database Management Systems full lectureDatabase Management Systems full lecture
Database Management Systems full lecture
thiru12741550
 
A41001011
A41001011A41001011
A41001011
ijceronline
 
indexing and hashing
indexing and hashingindexing and hashing
indexing and hashing
University of Potsdam
 
Indexing and Hashing.ppt
Indexing and Hashing.pptIndexing and Hashing.ppt
Indexing and Hashing.ppt
vedantihp21
 
Indexing and Hashing
Indexing and HashingIndexing and Hashing
Indexing and Hashing
sathish sak
 
DOC-20240804-WA0006..pdforaclesqlindexing
DOC-20240804-WA0006..pdforaclesqlindexingDOC-20240804-WA0006..pdforaclesqlindexing
DOC-20240804-WA0006..pdforaclesqlindexing
storage2ndyr
 
Indexing and-hashing
Indexing and-hashingIndexing and-hashing
Indexing and-hashing
Ami Ranjit
 
UNIT-6.ppt discusses about indexing aand hashing techniques
UNIT-6.ppt discusses about indexing aand hashing techniquesUNIT-6.ppt discusses about indexing aand hashing techniques
UNIT-6.ppt discusses about indexing aand hashing techniques
DrRBullibabu
 
DBMS-Unit5-PPT.pptx important for revision
DBMS-Unit5-PPT.pptx important for revisionDBMS-Unit5-PPT.pptx important for revision
DBMS-Unit5-PPT.pptx important for revision
yuvivarmaa
 
Furnish an Index Using the Works of Tree Structures
Furnish an Index Using the Works of Tree StructuresFurnish an Index Using the Works of Tree Structures
Furnish an Index Using the Works of Tree Structures
ijceronline
 
Index Structures.pptx
Index Structures.pptxIndex Structures.pptx
Index Structures.pptx
MBablu1
 
Indexing.ppt mmmmmmmmmmmmmmmmmmmmmmmmmmmmm
Indexing.ppt mmmmmmmmmmmmmmmmmmmmmmmmmmmmmIndexing.ppt mmmmmmmmmmmmmmmmmmmmmmmmmmmmm
Indexing.ppt mmmmmmmmmmmmmmmmmmmmmmmmmmmmm
RAtna29
 
9910559 jjjgjgjfs lke lwmerfml lew we.ppt
9910559 jjjgjgjfs lke   lwmerfml lew  we.ppt9910559 jjjgjgjfs lke   lwmerfml lew  we.ppt
9910559 jjjgjgjfs lke lwmerfml lew we.ppt
abduganiyevbekzod011
 
Indexing.pptvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
Indexing.pptvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvIndexing.pptvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
Indexing.pptvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
shesnasuneer
 
Indexing_DATA STRUCTURE FOR ENGINEERING STUDENTS ppt
Indexing_DATA STRUCTURE FOR ENGINEERING STUDENTS pptIndexing_DATA STRUCTURE FOR ENGINEERING STUDENTS ppt
Indexing_DATA STRUCTURE FOR ENGINEERING STUDENTS ppt
ssuser99ca78
 
Cs437 lecture 14_15
Cs437 lecture 14_15Cs437 lecture 14_15
Cs437 lecture 14_15
Aneeb_Khawar
 
Database Management Systems full lecture
Database Management Systems full lectureDatabase Management Systems full lecture
Database Management Systems full lecture
thiru12741550
 
Indexing and Hashing.ppt
Indexing and Hashing.pptIndexing and Hashing.ppt
Indexing and Hashing.ppt
vedantihp21
 
Ad

More from koolkampus (20)

Local Area Networks in Data Communication DC24
Local Area Networks in Data Communication DC24Local Area Networks in Data Communication DC24
Local Area Networks in Data Communication DC24
koolkampus
 
Bit Oriented Protocols in Data Communication DC23
Bit Oriented Protocols in Data Communication DC23Bit Oriented Protocols in Data Communication DC23
Bit Oriented Protocols in Data Communication DC23
koolkampus
 
Data Link Control in Data Communication DC20
Data Link Control in Data Communication DC20Data Link Control in Data Communication DC20
Data Link Control in Data Communication DC20
koolkampus
 
Error Detection and Correction in Data Communication DC18
Error Detection and Correction in Data Communication DC18Error Detection and Correction in Data Communication DC18
Error Detection and Correction in Data Communication DC18
koolkampus
 
TDM in Data Communication DC16
TDM in Data Communication DC16TDM in Data Communication DC16
TDM in Data Communication DC16
koolkampus
 
Radio Communication Band(Data Communication) DC14
Radio Communication Band(Data Communication) DC14Radio Communication Band(Data Communication) DC14
Radio Communication Band(Data Communication) DC14
koolkampus
 
Connectors in Data Communication DC12
Connectors in Data Communication DC12Connectors in Data Communication DC12
Connectors in Data Communication DC12
koolkampus
 
Transmission of Digital Data(Data Communication) DC11
Transmission of Digital Data(Data Communication) DC11Transmission of Digital Data(Data Communication) DC11
Transmission of Digital Data(Data Communication) DC11
koolkampus
 
Analog to Digital Encoding in Data Communication DC9
Analog to Digital Encoding in Data Communication DC9Analog to Digital Encoding in Data Communication DC9
Analog to Digital Encoding in Data Communication DC9
koolkampus
 
Signal with DC Component(Data Communication) DC7
Signal with DC Component(Data Communication) DC7Signal with DC Component(Data Communication) DC7
Signal with DC Component(Data Communication) DC7
koolkampus
 
Layer Examples in Data Communication CD4
Layer Examples in Data Communication CD4Layer Examples in Data Communication CD4
Layer Examples in Data Communication CD4
koolkampus
 
OSI Model (Data Communication) DC3
OSI Model (Data Communication) DC3OSI Model (Data Communication) DC3
OSI Model (Data Communication) DC3
koolkampus
 
Basic Concepts in Data Communication DC1
Basic Concepts in Data Communication DC1Basic Concepts in Data Communication DC1
Basic Concepts in Data Communication DC1
koolkampus
 
Token Passing in Data Communication DC25
Token Passing in Data Communication DC25Token Passing in Data Communication DC25
Token Passing in Data Communication DC25
koolkampus
 
Data Link Protocols in Data Communication DC22
Data Link Protocols in Data Communication DC22Data Link Protocols in Data Communication DC22
Data Link Protocols in Data Communication DC22
koolkampus
 
Flow Control in Data Communication DC21
Flow Control in Data Communication DC21Flow Control in Data Communication DC21
Flow Control in Data Communication DC21
koolkampus
 
CRC in Data Communication DC19
CRC in Data Communication DC19CRC in Data Communication DC19
CRC in Data Communication DC19
koolkampus
 
Telephone Networn in Data Communication DC17
Telephone Networn in Data Communication DC17Telephone Networn in Data Communication DC17
Telephone Networn in Data Communication DC17
koolkampus
 
Multiplexing in Data Communication DC15
Multiplexing in Data Communication DC15Multiplexing in Data Communication DC15
Multiplexing in Data Communication DC15
koolkampus
 
Transmission Media in Data Communication DC13
Transmission Media in Data Communication DC13Transmission Media in Data Communication DC13
Transmission Media in Data Communication DC13
koolkampus
 
Local Area Networks in Data Communication DC24
Local Area Networks in Data Communication DC24Local Area Networks in Data Communication DC24
Local Area Networks in Data Communication DC24
koolkampus
 
Bit Oriented Protocols in Data Communication DC23
Bit Oriented Protocols in Data Communication DC23Bit Oriented Protocols in Data Communication DC23
Bit Oriented Protocols in Data Communication DC23
koolkampus
 
Data Link Control in Data Communication DC20
Data Link Control in Data Communication DC20Data Link Control in Data Communication DC20
Data Link Control in Data Communication DC20
koolkampus
 
Error Detection and Correction in Data Communication DC18
Error Detection and Correction in Data Communication DC18Error Detection and Correction in Data Communication DC18
Error Detection and Correction in Data Communication DC18
koolkampus
 
TDM in Data Communication DC16
TDM in Data Communication DC16TDM in Data Communication DC16
TDM in Data Communication DC16
koolkampus
 
Radio Communication Band(Data Communication) DC14
Radio Communication Band(Data Communication) DC14Radio Communication Band(Data Communication) DC14
Radio Communication Band(Data Communication) DC14
koolkampus
 
Connectors in Data Communication DC12
Connectors in Data Communication DC12Connectors in Data Communication DC12
Connectors in Data Communication DC12
koolkampus
 
Transmission of Digital Data(Data Communication) DC11
Transmission of Digital Data(Data Communication) DC11Transmission of Digital Data(Data Communication) DC11
Transmission of Digital Data(Data Communication) DC11
koolkampus
 
Analog to Digital Encoding in Data Communication DC9
Analog to Digital Encoding in Data Communication DC9Analog to Digital Encoding in Data Communication DC9
Analog to Digital Encoding in Data Communication DC9
koolkampus
 
Signal with DC Component(Data Communication) DC7
Signal with DC Component(Data Communication) DC7Signal with DC Component(Data Communication) DC7
Signal with DC Component(Data Communication) DC7
koolkampus
 
Layer Examples in Data Communication CD4
Layer Examples in Data Communication CD4Layer Examples in Data Communication CD4
Layer Examples in Data Communication CD4
koolkampus
 
OSI Model (Data Communication) DC3
OSI Model (Data Communication) DC3OSI Model (Data Communication) DC3
OSI Model (Data Communication) DC3
koolkampus
 
Basic Concepts in Data Communication DC1
Basic Concepts in Data Communication DC1Basic Concepts in Data Communication DC1
Basic Concepts in Data Communication DC1
koolkampus
 
Token Passing in Data Communication DC25
Token Passing in Data Communication DC25Token Passing in Data Communication DC25
Token Passing in Data Communication DC25
koolkampus
 
Data Link Protocols in Data Communication DC22
Data Link Protocols in Data Communication DC22Data Link Protocols in Data Communication DC22
Data Link Protocols in Data Communication DC22
koolkampus
 
Flow Control in Data Communication DC21
Flow Control in Data Communication DC21Flow Control in Data Communication DC21
Flow Control in Data Communication DC21
koolkampus
 
CRC in Data Communication DC19
CRC in Data Communication DC19CRC in Data Communication DC19
CRC in Data Communication DC19
koolkampus
 
Telephone Networn in Data Communication DC17
Telephone Networn in Data Communication DC17Telephone Networn in Data Communication DC17
Telephone Networn in Data Communication DC17
koolkampus
 
Multiplexing in Data Communication DC15
Multiplexing in Data Communication DC15Multiplexing in Data Communication DC15
Multiplexing in Data Communication DC15
koolkampus
 
Transmission Media in Data Communication DC13
Transmission Media in Data Communication DC13Transmission Media in Data Communication DC13
Transmission Media in Data Communication DC13
koolkampus
 

Recently uploaded (20)

Agentic AI Explained: The Next Frontier of Autonomous Intelligence & Generati...
Agentic AI Explained: The Next Frontier of Autonomous Intelligence & Generati...Agentic AI Explained: The Next Frontier of Autonomous Intelligence & Generati...
Agentic AI Explained: The Next Frontier of Autonomous Intelligence & Generati...
Aaryan Kansari
 
The case for on-premises AI
The case for on-premises AIThe case for on-premises AI
The case for on-premises AI
Principled Technologies
 
Palo Alto Networks Cybersecurity Foundation
Palo Alto Networks Cybersecurity FoundationPalo Alto Networks Cybersecurity Foundation
Palo Alto Networks Cybersecurity Foundation
VICTOR MAESTRE RAMIREZ
 
Agentic AI - The New Era of Intelligence
Agentic AI - The New Era of IntelligenceAgentic AI - The New Era of Intelligence
Agentic AI - The New Era of Intelligence
Muzammil Shah
 
AI Emotional Actors: “When Machines Learn to Feel and Perform"
AI Emotional Actors:  “When Machines Learn to Feel and Perform"AI Emotional Actors:  “When Machines Learn to Feel and Perform"
AI Emotional Actors: “When Machines Learn to Feel and Perform"
AkashKumar809858
 
Let’s Get Slack Certified! 🚀- Slack Community
Let’s Get Slack Certified! 🚀- Slack CommunityLet’s Get Slack Certified! 🚀- Slack Community
Let’s Get Slack Certified! 🚀- Slack Community
SanjeetMishra29
 
ELNL2025 - Unlocking the Power of Sensitivity Labels - A Comprehensive Guide....
ELNL2025 - Unlocking the Power of Sensitivity Labels - A Comprehensive Guide....ELNL2025 - Unlocking the Power of Sensitivity Labels - A Comprehensive Guide....
ELNL2025 - Unlocking the Power of Sensitivity Labels - A Comprehensive Guide....
Jasper Oosterveld
 
Measuring Microsoft 365 Copilot and Gen AI Success
Measuring Microsoft 365 Copilot and Gen AI SuccessMeasuring Microsoft 365 Copilot and Gen AI Success
Measuring Microsoft 365 Copilot and Gen AI Success
Nikki Chapple
 
Contributing to WordPress With & Without Code.pptx
Contributing to WordPress With & Without Code.pptxContributing to WordPress With & Without Code.pptx
Contributing to WordPress With & Without Code.pptx
Patrick Lumumba
 
SDG 9000 Series: Unleashing multigigabit everywhere
SDG 9000 Series: Unleashing multigigabit everywhereSDG 9000 Series: Unleashing multigigabit everywhere
SDG 9000 Series: Unleashing multigigabit everywhere
Adtran
 
End-to-end Assurance for SD-WAN & SASE with ThousandEyes
End-to-end Assurance for SD-WAN & SASE with ThousandEyesEnd-to-end Assurance for SD-WAN & SASE with ThousandEyes
End-to-end Assurance for SD-WAN & SASE with ThousandEyes
ThousandEyes
 
Jira Administration Training – Day 1 : Introduction
Jira Administration Training – Day 1 : IntroductionJira Administration Training – Day 1 : Introduction
Jira Administration Training – Day 1 : Introduction
Ravi Teja
 
Nix(OS) for Python Developers - PyCon 25 (Bologna, Italia)
Nix(OS) for Python Developers - PyCon 25 (Bologna, Italia)Nix(OS) for Python Developers - PyCon 25 (Bologna, Italia)
Nix(OS) for Python Developers - PyCon 25 (Bologna, Italia)
Peter Bittner
 
Cyber Security Legal Framework in Nepal.pptx
Cyber Security Legal Framework in Nepal.pptxCyber Security Legal Framework in Nepal.pptx
Cyber Security Legal Framework in Nepal.pptx
Ghimire B.R.
 
Introducing FME Realize: A New Era of Spatial Computing and AR

12. Indexing and Hashing in DBMS

  • 12. Secondary Indices Frequently, one wants to find all the records whose values in a certain field (which is not the search-key of the primary index) satisfy some condition. Example 1: In the account database stored sequentially by account number, we may want to find all accounts in a particular branch. Example 2: as above, but where we want to find all accounts with a specified balance or range of balances. We can have a secondary index with an index record for each search-key value; the index record points to a bucket that contains pointers to all the actual records with that particular search-key value.
  • 13. Secondary Index on balance field of account
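A minimal sketch of such a secondary index as a dictionary of buckets, where each bucket is a list of record pointers (rids); the sample account records and names are invented for illustration.

    from collections import defaultdict

    # rid -> (account-number, branch-name, balance); the rid stands in for a record pointer
    account = {
        0: ("A-217", "Brighton", 500),
        1: ("A-101", "Downtown", 500),
        2: ("A-110", "Downtown", 600),
        3: ("A-215", "Mianus", 700),
    }

    balance_index = defaultdict(list)            # search-key value -> bucket of rids
    for rid, (_, _, balance) in account.items():
        balance_index[balance].append(rid)

    def accounts_with_balance(value):
        """Follow the bucket of pointers for one search-key value."""
        return [account[rid] for rid in balance_index.get(value, [])]

    print(accounts_with_balance(500))   # both records with balance 500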
  • 14. Primary and Secondary Indices Secondary indices have to be dense. Indices offer substantial benefits when searching for records. When a file is modified, every index on the file must be updated; updating indices imposes overhead on database modification. Sequential scan using the primary index is efficient, but a sequential scan using a secondary index is expensive: each record access may fetch a new block from disk.
  • 15. B + -Tree Index Files Disadvantage of indexed-sequential files: performance degrades as file grows, since many overflow blocks get created. Periodic reorganization of entire file is required. Advantage of B + -tree index files: automatically reorganizes itself with small, local, changes, in the face of insertions and deletions. Reorganization of entire file is not required to maintain performance. Disadvantage of B + -trees: extra insertion and deletion overhead, space overhead. Advantages of B + -trees outweigh disadvantages, and they are used extensively. B + -tree indices are an alternative to indexed-sequential files.
  • 16. B + -Tree Index Files (Cont.) A B + -tree is a rooted tree satisfying the following properties: All paths from root to leaf are of the same length. Each node that is not a root or a leaf has between ⌈n/2⌉ and n children. A leaf node has between ⌈(n–1)/2⌉ and n–1 values. Special cases: If the root is not a leaf, it has at least 2 children. If the root is a leaf (that is, there are no other nodes in the tree), it can have between 0 and (n–1) values.
  • 17. B + -Tree Node Structure Typical node K i are the search-key values P i are pointers to children (for non-leaf nodes) or pointers to records or buckets of records (for leaf nodes). The search-keys in a node are ordered K 1 < K 2 < K 3 < . . . < K n– 1
  • 18. Leaf Nodes in B + -Trees For i = 1, 2, . . ., n– 1, pointer P i either points to a file record with search-key value K i , or to a bucket of pointers to file records, each record having search-key value K i . Only need bucket structure if search-key does not form a primary key. If L i , L j are leaf nodes and i < j, L i ’s search-key values are less than L j ’s search-key values P n points to next leaf node in search-key order Properties of a leaf node:
  • 19. Non-Leaf Nodes in B + -Trees Non-leaf nodes form a multi-level sparse index on the leaf nodes. For a non-leaf node with m pointers: All the search-keys in the subtree to which P 1 points are less than K 1 . For 2 ≤ i ≤ m – 1, all the search-keys in the subtree to which P i points have values greater than or equal to K i–1 and less than K i . All the search-keys in the subtree to which P m points are greater than or equal to K m–1 .
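To make the node layout concrete, here is a minimal sketch in Python; the class and field names are illustrative and not part of the slides.

    class BPlusNode:
        """Illustrative B+-tree node: sorted keys K1 < K2 < ... and pointers P1, P2, ..."""
        def __init__(self, leaf=False):
            self.leaf = leaf
            self.keys = []       # sorted search-key values
            self.pointers = []   # children (non-leaf) or record/bucket pointers (leaf)
            self.next = None     # Pn of a leaf: the next leaf in search-key order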
  • 20. Example of a B + -tree B + -tree for account file ( n = 3)
  • 21. Example of B + -tree Leaf nodes must have between 2 and 4 values ( ⌈(n–1)/2⌉ and n–1, with n = 5). Non-leaf nodes other than the root must have between 3 and 5 children ( ⌈n/2⌉ and n, with n = 5). Root must have at least 2 children. B + -tree for account file ( n = 5)
  • 22. Observations about B + -trees Since the inter-node connections are done by pointers, “logically” close blocks need not be “physically” close. The non-leaf levels of the B + -tree form a hierarchy of sparse indices. The B + -tree contains a relatively small number of levels (logarithmic in the size of the main file), thus searches can be conducted efficiently. Insertions and deletions to the main file can be handled efficiently, as the index can be restructured in logarithmic time (as we shall see).
  • 23. Queries on B + -Trees Find all records with a search-key value of k. Start with the root node. Examine the node for the smallest search-key value > k. If such a value exists, assume it is K j . Then follow P j to the child node. Otherwise k ≥ K m–1 , where there are m pointers in the node; then follow P m to the child node. If the node reached by following the pointer above is not a leaf node, repeat the above procedure on the node, and follow the corresponding pointer. Eventually reach a leaf node. If for some i , key K i = k, follow pointer P i to the desired record or bucket. Else no record with search-key value k exists.
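A compact sketch of this lookup procedure, assuming the illustrative BPlusNode layout sketched earlier (keys kept sorted in each node); bisect stands in for "find the smallest search-key value > k".

    import bisect

    def bplus_find(root, k):
        node = root
        while not node.leaf:
            # position of the smallest K j > k; pointers[j] is the child to follow
            # (if no such key exists, this falls through to the last pointer P m)
            j = bisect.bisect_right(node.keys, k)
            node = node.pointers[j]
        # at a leaf: if some K i equals k, P i leads to the record or bucket
        i = bisect.bisect_left(node.keys, k)
        if i < len(node.keys) and node.keys[i] == k:
            return node.pointers[i]
        return None   # no record with search-key value k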
  • 24. Queries on B +- Trees (Cont.) In processing a query, a path is traversed in the tree from the root to some leaf node. If there are K search-key values in the file, the path is no longer than ⌈log ⌈n/2⌉ (K)⌉. A node is generally the same size as a disk block, typically 4 kilobytes, and n is typically around 100 (40 bytes per index entry). With 1 million search-key values and n = 100, at most log 50 (1,000,000) = 4 nodes are accessed in a lookup. Contrast this with a balanced binary tree with 1 million search-key values: around 20 nodes are accessed in a lookup. The difference is significant, since every node access may need a disk I/O, costing around 20 milliseconds!
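A quick arithmetic check of the bound above (plain Python, purely illustrative):

    import math

    K, n = 1_000_000, 100
    # path length is at most ceil(log base ceil(n/2) of K) = ceil(log_50(1,000,000))
    print(math.ceil(math.log(K, n // 2)))   # 4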
  • 25. Updates on B + -Trees: Insertion Find the leaf node in which the search-key value would appear If the search-key value is already there in the leaf node, record is added to file and if necessary a pointer is inserted into the bucket. If the search-key value is not there, then add the record to the main file and create a bucket if necessary. Then: If there is room in the leaf node, insert (key-value, pointer) pair in the leaf node Otherwise, split the node (along with the new (key-value, pointer) entry) as discussed in the next slide.
  • 26. Updates on B + -Trees: Insertion (Cont.) Splitting a node: take the n (search-key value, pointer) pairs (including the one being inserted) in sorted order. Place the first ⌈n/2⌉ in the original node, and the rest in a new node. Let the new node be p, and let k be the least key value in p. Insert (k, p) in the parent of the node being split. If the parent is full, split it and propagate the split further up. The splitting of nodes proceeds upwards till a node that is not full is found. In the worst case the root node may be split, increasing the height of the tree by 1. Result of splitting the node containing Brighton and Downtown on inserting Clearview
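A sketch of just the leaf-splitting step, reusing the illustrative BPlusNode class from above; it assumes distinct search-key values and leaves the recursive parent update to the caller.

    import math

    def split_leaf(leaf, key, pointer, n):
        # gather the n (search-key value, pointer) pairs, including the new one,
        # in sorted order (assumes distinct keys)
        entries = sorted(list(zip(leaf.keys, leaf.pointers)) + [(key, pointer)])
        cut = math.ceil(n / 2)                          # first ceil(n/2) stay in place
        new = BPlusNode(leaf=True)
        leaf.keys, leaf.pointers = (list(t) for t in zip(*entries[:cut]))
        new.keys, new.pointers = (list(t) for t in zip(*entries[cut:]))
        new.next, leaf.next = leaf.next, new            # keep the leaf chain intact
        return new.keys[0], new                         # (k, p) to insert into the parent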
  • 27. Updates on B + -Trees: Insertion (Cont.) B + -Tree before and after insertion of “Clearview”
  • 28. Updates on B + -Trees: Deletion Find the record to be deleted, and remove it from the main file and from the bucket (if present) Remove (search-key value, pointer) from the leaf node if there is no bucket or if the bucket has become empty If the node has too few entries due to the removal, and the entries in the node and a sibling fit into a single node, then Insert all the search-key values in the two nodes into a single node (the one on the left), and delete the other node. Delete the pair ( K i– 1 , P i ), where P i is the pointer to the deleted node, from its parent, recursively using the above procedure.
  • 29. Updates on B + -Trees: Deletion Otherwise, if the node has too few entries due to the removal, but the entries in the node and a sibling do not fit into a single node, then Redistribute the pointers between the node and a sibling such that both have more than the minimum number of entries. Update the corresponding search-key value in the parent of the node. The node deletions may cascade upwards till a node which has ⌈n/2⌉ or more pointers is found. If the root node has only one pointer after deletion, it is deleted and the sole child becomes the root.
  • 30. Examples of B + -Tree Deletion The removal of the leaf node containing “Downtown” did not result in its parent having too few pointers. So the cascaded deletions stopped with the deleted leaf node’s parent. Before and after deleting “Downtown”
  • 31. Examples of B + -Tree Deletion (Cont.) Node with “Perryridge” becomes underfull (actually empty, in this special case) and merged with its sibling. As a result “Perryridge” node’s parent became underfull, and was merged with its sibling (and an entry was deleted from their parent) Root node then had only one child, and was deleted and its child became the new root node Deletion of “Perryridge” from result of previous example
  • 32. Example of B + -tree Deletion (Cont.) Parent of leaf containing Perryridge became underfull, and borrowed a pointer from its left sibling Search-key value in the parent’s parent changes as a result Before and after deletion of “Perryridge” from earlier example
  • 33. B + -Tree File Organization Index file degradation problem is solved by using B + -Tree indices. Data file degradation problem is solved by using B + -Tree File Organization. The leaf nodes in a B + -tree file organization store records, instead of pointers. Since records are larger than pointers, the maximum number of records that can be stored in a leaf node is less than the number of pointers in a nonleaf node. Leaf nodes are still required to be half full. Insertion and deletion are handled in the same way as insertion and deletion of entries in a B + -tree index.
  • 34. B + -Tree File Organization (Cont.) Good space utilization is important since records use more space than pointers. To improve space utilization, involve more sibling nodes in redistribution during splits and merges. Involving 2 siblings in redistribution (to avoid split / merge where possible) results in each node having at least ⌊2n/3⌋ entries. Example of B + -tree File Organization
  • 35. B-Tree Index Files Similar to B + -tree, but a B-tree allows search-key values to appear only once; this eliminates redundant storage of search keys. Search keys in nonleaf nodes appear nowhere else in the B-tree; an additional pointer field for each search key in a nonleaf node must be included. Generalized B-tree leaf node. Nonleaf node – pointers B i are the bucket or file record pointers.
  • 36. B-Tree Index File Example B-tree (above) and B+-tree (below) on same data
  • 37. B-Tree Index Files (Cont.) Advantages of B-Tree indices: May use fewer tree nodes than a corresponding B + -Tree. Sometimes possible to find the search-key value before reaching a leaf node. Disadvantages of B-Tree indices: Only a small fraction of all search-key values are found early. Non-leaf nodes are larger, so fan-out is reduced; thus B-Trees typically have greater depth than the corresponding B + -Tree. Insertion and deletion are more complicated than in B + -Trees. Implementation is harder than B + -Trees. Typically, the advantages of B-Trees do not outweigh the disadvantages.
  • 38. Static Hashing A bucket is a unit of storage containing one or more records (a bucket is typically a disk block). In a hash file organization we obtain the bucket of a record directly from its search-key value using a hash function. Hash function h is a function from the set of all search-key values K to the set of all bucket addresses B. Hash function is used to locate records for access, insertion as well as deletion. Records with different search-key values may be mapped to the same bucket; thus entire bucket has to be searched sequentially to locate a record.
  • 39. Example of Hash File Organization (Cont.) There are 10 buckets. The binary representation of the i-th character is assumed to be the integer i. The hash function returns the sum of the binary representations of the characters modulo 10. E.g., h(Perryridge) = 5, h(Round Hill) = 3, h(Brighton) = 3. Hash file organization of account file, using branch-name as key (see figure in next slide).
  • 40. Example of Hash File Organization Hash file organization of account file, using branch-name as key (see previous slide for details).
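One consistent reading of the hash function described above is sketched below: assign the i-th letter of the alphabet the integer i (a = 1, ..., z = 26), ignore case and non-letter characters, sum, and take the result modulo the 10 buckets. That reading is an assumption, but under it the function reproduces the slide's example values.

    def h(branch_name, buckets=10):
        # sum of the per-character integers, modulo the number of buckets
        total = sum(ord(c) - ord('a') + 1 for c in branch_name.lower() if c.isalpha())
        return total % buckets

    print(h("Perryridge"))   # 5
    print(h("Round Hill"))   # 3
    print(h("Brighton"))     # 3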
  • 41. Hash Functions The worst hash function maps all search-key values to the same bucket; this makes access time proportional to the number of search-key values in the file. An ideal hash function is uniform , i.e., each bucket is assigned the same number of search-key values from the set of all possible values. An ideal hash function is also random , so each bucket will have the same number of records assigned to it irrespective of the actual distribution of search-key values in the file. Typical hash functions perform computation on the internal binary representation of the search-key. For example, for a string search-key, the binary representations of all the characters in the string could be added and the sum modulo the number of buckets could be returned.
  • 42. Handling of Bucket Overflows Bucket overflow can occur because of Insufficient buckets Skew in distribution of records. This can occur due to two reasons: multiple records have same search-key value chosen hash function produces non-uniform distribution of key values Although the probability of bucket overflow can be reduced, it cannot be eliminated; it is handled by using overflow buckets .
  • 43. Handling of Bucket Overflows (Cont.) Overflow chaining – the overflow buckets of a given bucket are chained together in a linked list. Above scheme is called closed hashing . An alternative, called open hashing , which does not use overflow buckets, is not suitable for database applications.
  • 44. Hash Indices Hashing can be used not only for file organization, but also for index-structure creation. A hash index organizes the search keys, with their associated record pointers, into a hash file structure. Strictly speaking, hash indices are always secondary indices: if the file itself is organized using hashing, a separate primary hash index on it using the same search-key is unnecessary. However, we use the term hash index to refer to both secondary index structures and hash-organized files.
  • 46. Deficiencies of Static Hashing In static hashing, function h maps search-key values to a fixed set B of bucket addresses. Databases grow with time. If the initial number of buckets is too small, performance will degrade due to too many overflows. If the file size at some point in the future is anticipated and the number of buckets allocated accordingly, a significant amount of space will be wasted initially. If the database shrinks, again space will be wasted. One option is periodic re-organization of the file with a new hash function, but it is very expensive. These problems can be avoided by using techniques that allow the number of buckets to be modified dynamically.
  • 47. Dynamic Hashing Good for a database that grows and shrinks in size. Allows the hash function to be modified dynamically. Extendable hashing – one form of dynamic hashing. The hash function generates values over a large range, typically b-bit integers, with b = 32. At any time use only a prefix of the hash value to index into a table of bucket addresses. Let the length of the prefix be i bits, 0 ≤ i ≤ 32. Bucket address table size = 2^i. Initially i = 0. The value of i grows and shrinks as the size of the database grows and shrinks. Multiple entries in the bucket address table may point to the same bucket; thus, the actual number of buckets is < 2^i. The number of buckets also changes dynamically due to coalescing and splitting of buckets.
  • 48. General Extendable Hash Structure In this structure, i 2 = i 3 = i , whereas i 1 = i – 1 (see next slide for details)
  • 49. Use of Extendable Hash Structure Each bucket j stores a value i j ; all the entries that point to the same bucket have the same values on the first i j bits. To locate the bucket containing search-key K j : 1. Compute h(K j ) = X 2. Use the first i high order bits of X as a displacement into bucket address table, and follow the pointer to appropriate bucket To insert a record with search-key value K j follow same procedure as look-up and locate the bucket, say j . If there is room in the bucket j insert record in the bucket. Else the bucket must be split and insertion re-attempted (next slide.) Overflow buckets used instead in some cases (will see shortly)
  • 50. Updates in Extendable Hash Structure To split a bucket j when inserting a record with search-key value K j : If i > i j (more than one pointer to bucket j ), allocate a new bucket z , and set i j and i z to the old i j + 1. Make the second half of the bucket address table entries pointing to j point to z . Remove and reinsert each record in bucket j . Recompute the new bucket for K j and insert the record in the bucket (further splitting is required if the bucket is still full). If i = i j (only one pointer to bucket j ), increment i and double the size of the bucket address table. Replace each entry in the table by two entries that point to the same bucket. Recompute the new bucket address table entry for K j . Now i > i j , so use the first case above.
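A simplified sketch of these rules, not the textbook's exact algorithm: directory entries are indexed by the first i high-order bits of a 32-bit hash, each bucket carries its local depth i_j, and overflow buckets, deletion, and coalescing are omitted. The class names, the bucket capacity of 2, and the use of Python's built-in hash are all assumptions made for illustration.

    class Bucket:
        def __init__(self, local_depth):
            self.local_depth = local_depth   # i_j: prefix bits shared by entries here
            self.items = {}                  # search-key -> record

    class ExtendableHash:
        BITS = 32                            # h produces b-bit values, b = 32
        CAPACITY = 2                         # records per bucket (tiny, for the demo)

        def __init__(self):
            self.i = 0                       # global prefix length in use
            self.directory = [Bucket(0)]     # bucket address table, size 2**i

        def _h(self, key):
            return hash(key) & 0xFFFFFFFF    # illustrative 32-bit hash value

        def _index(self, key):
            # first i high-order bits of h(key) index the bucket address table
            return self._h(key) >> (self.BITS - self.i) if self.i else 0

        def lookup(self, key):
            return self.directory[self._index(key)].items.get(key)

        def insert(self, key, record):
            bucket = self.directory[self._index(key)]
            if key in bucket.items or len(bucket.items) < self.CAPACITY:
                bucket.items[key] = record
                return
            if bucket.local_depth == self.i:
                # only one directory entry points here: double the address table
                self.directory = [b for b in self.directory for _ in (0, 1)]
                self.i += 1
            # allocate bucket z; the second half of the entries for j now point to z
            bucket.local_depth += 1
            new = Bucket(bucket.local_depth)
            for idx, b in enumerate(self.directory):
                if b is bucket and (idx >> (self.i - bucket.local_depth)) & 1:
                    self.directory[idx] = new
            # remove and reinsert j's records, then retry the new record
            # (the overflow-bucket case when i reaches its limit is not handled here)
            old_items, bucket.items = bucket.items, {}
            for k, v in old_items.items():
                self.insert(k, v)
            self.insert(key, record)

    eh = ExtendableHash()
    for name in ["Brighton", "Downtown", "Mianus", "Perryridge", "Redwood"]:
        eh.insert(name, {"branch": name})
    print(eh.i, len(eh.directory), eh.lookup("Mianus"))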
  • 51. Updates in Extendable Hash Structure (Cont.) When inserting a value, if the bucket is full after several splits (that is, i reaches some limit b ), create an overflow bucket instead of splitting the bucket address table further. To delete a key value, locate it in its bucket and remove it. The bucket itself can be removed if it becomes empty (with appropriate updates to the bucket address table). Coalescing of buckets can be done (can coalesce only with a “buddy” bucket having the same value of i j and the same i j – 1 prefix, if it is present). Decreasing the bucket address table size is also possible. Note: decreasing the bucket address table size is an expensive operation and should be done only if the number of buckets becomes much smaller than the size of the table.
  • 52. Use of Extendable Hash Structure: Example Initial Hash structure, bucket size = 2
  • 53. Example (Cont.) Hash structure after insertion of one Brighton and two Downtown records
  • 54. Example (Cont.) Hash structure after insertion of Mianus record
  • 55. Example (Cont.) Hash structure after insertion of three Perryridge records
  • 56. Example (Cont.) Hash structure after insertion of Redwood and Round Hill records
  • 57. Extendable Hashing vs. Other Schemes Benefits of extendable hashing: Hash performance does not degrade with growth of file Minimal space overhead Disadvantages of extendable hashing Extra level of indirection to find desired record Bucket address table may itself become very big (larger than memory) Need a tree structure to locate desired record in the structure! Changing size of bucket address table is an expensive operation Linear hashing is an alternative mechanism which avoids these disadvantages at the possible cost of more bucket overflows
  • 58. Comparison of Ordered Indexing and Hashing Cost of periodic re-organization Relative frequency of insertions and deletions Is it desirable to optimize average access time at the expense of worst-case access time? Expected type of queries: Hashing is generally better at retrieving records having a specified value of the key. If range queries are common, ordered indices are to be preferred
  • 59. Index Definition in SQL Create an index: create index <index-name> on <relation-name> (<attribute-list>) E.g.: create index b-index on branch(branch-name) Use create unique index to indirectly specify and enforce the condition that the search key is a candidate key. Not really required if the SQL unique integrity constraint is supported. To drop an index: drop index <index-name>
  • 60. Multiple-Key Access Use multiple indices for certain types of queries. Example: select account-number from account where branch-name = “Perryridge” and balance = 1000 Possible strategies for processing the query using indices on single attributes: 1. Use index on branch-name to find accounts with branch name “Perryridge”; test balance = 1000. 2. Use index on balance to find accounts with balances of $1000; test branch-name = “Perryridge”. 3. Use the branch-name index to find pointers to all records pertaining to the Perryridge branch. Similarly use the index on balance . Take the intersection of both sets of pointers obtained.
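A tiny sketch of strategy 3: fetch the two rid (record-pointer) sets from single-attribute indices and intersect them before touching any records. The index contents here are made-up stand-ins.

    branch_index  = {"Perryridge": {4, 7, 9}, "Downtown": {1, 2}}   # branch-name -> rids
    balance_index = {1000: {3, 7, 9, 12}, 500: {1}}                 # balance -> rids

    candidate_rids = branch_index.get("Perryridge", set()) & balance_index.get(1000, set())
    print(sorted(candidate_rids))   # only these records need to be fetched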
  • 61. Indices on Multiple Attributes Suppose we have an index on the combined search-key ( branch-name, balance ). With the where clause where branch-name = “Perryridge” and balance = 1000 the index on the combined search-key will fetch only records that satisfy both conditions. Using separate indices is less efficient: we may fetch many records (or pointers) that satisfy only one of the conditions. Can also efficiently handle where branch-name = “Perryridge” and balance < 1000. But cannot efficiently handle where branch-name < “Perryridge” and balance = 1000: may fetch many records that satisfy the first but not the second condition.
  • 62. Grid Files Structure used to speed the processing of general multiple search-key queries involving one or more comparison operators. The grid file has a single grid array and one linear scale for each search-key attribute. The grid array has number of dimensions equal to number of search-key attributes. Multiple cells of grid array can point to same bucket To find the bucket for a search-key value, locate the row and column of its cell using the linear scales and follow pointer
  • 63. Example Grid File for account
  • 64. Queries on a Grid File A grid file on two attributes A and B can handle queries of all the following forms with reasonable efficiency: (a1 ≤ A ≤ a2), (b1 ≤ B ≤ b2), and (a1 ≤ A ≤ a2 ∧ b1 ≤ B ≤ b2). E.g., to answer (a1 ≤ A ≤ a2 ∧ b1 ≤ B ≤ b2), use the linear scales to find the corresponding candidate grid-array cells, and look up all the buckets pointed to from those cells.
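A minimal sketch of the cell lookup: each linear scale is searched (here with bisect) to get a row and a column, and the grid array maps the cell to a bucket. The scales, cells, and bucket ids below are invented for illustration.

    import bisect

    branch_scale  = ["Central", "Downtown", "Mianus", "Perryridge"]  # column boundaries
    balance_scale = [1000, 2000, 5000]                               # row boundaries

    def cell_for(branch, balance):
        col = bisect.bisect_right(branch_scale, branch)
        row = bisect.bisect_right(balance_scale, balance)
        return row, col                            # index into the grid array

    # several cells may point to the same bucket
    grid = {(0, 2): "Bucket-1", (1, 2): "Bucket-1", (0, 3): "Bucket-2"}
    print(grid.get(cell_for("Downtown", 800)))     # Bucket-1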
  • 65. Grid Files (Cont.) During insertion, if a bucket becomes full, new bucket can be created if more than one cell points to it. Idea similar to extendable hashing, but on multiple dimensions If only one cell points to it, either an overflow bucket must be created or the grid size must be increased Linear scales must be chosen to uniformly distribute records across cells. Otherwise there will be too many overflow buckets. Periodic re-organization to increase grid size will help. But reorganization can be very expensive. Space overhead of grid array can be high. R-trees (Chapter 23) are an alternative
  • 66. Bitmap Indices Bitmap indices are a special type of index designed for efficient querying on multiple keys. Records in a relation are assumed to be numbered sequentially from, say, 0. Given a number n, it must be easy to retrieve record n. Particularly easy if records are of fixed size. Applicable on attributes that take on a relatively small number of distinct values. E.g. gender, country, state, … E.g. income-level (income broken up into a small number of levels such as 0-9999, 10000-19999, 20000-50000, 50000-infinity). A bitmap is simply an array of bits.
  • 67. Bitmap Indices (Cont.) In its simplest form a bitmap index on an attribute has a bitmap for each value of the attribute Bitmap has as many bits as records In a bitmap for value v, the bit for a record is 1 if the record has the value v for the attribute, and is 0 otherwise
  • 68. Bitmap Indices (Cont.) Bitmap indices are useful for queries on multiple attributes not particularly useful for single attribute queries Queries are answered using bitmap operations Intersection (and) Union (or) Complementation (not) Each operation takes two bitmaps of the same size and applies the operation on corresponding bits to get the result bitmap E.g. 100110 AND 110011 = 100010 100110 OR 110011 = 110111 NOT 100110 = 011001 Males with income level L1: 10010 AND 10100 = 10000 Can then retrieve required tuples. Counting number of matching tuples is even faster
  • 69. Bitmap Indices (Cont.) Bitmap indices generally very small compared with relation size E.g. if record is 100 bytes, space for a single bitmap is 1/800 of space used by relation. If number of distinct attribute values is 8, bitmap is only 1% of relation size Deletion needs to be handled properly Existence bitmap to note if there is a valid record at a record location Needed for complementation not( A=v ): (NOT bitmap-A-v) AND ExistenceBitmap Should keep bitmaps for all values, even null value To correctly handle SQL null semantics for NOT( A=v ): intersect above result with (NOT bitmap-A-Null )
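A sketch of these bitmap operations using Python integers as bit arrays (bit r represents record r); the five-record relation and the attribute values are illustrative. It reproduces the 10010 AND 10100 example from the earlier slide and shows the existence-bitmap correction for NOT.

    gender_m   = 0b10010   # bitmap for gender = m
    income_L1  = 0b10100   # bitmap for income-level = L1
    existence  = 0b11111   # existence bitmap: which record slots hold valid records

    males_L1 = gender_m & income_L1
    print(format(males_L1, "05b"))        # 10000, as in the slide
    print(bin(males_L1).count("1"))       # counting matching tuples is even faster

    # NOT(gender = m) must not resurrect deleted record slots:
    not_male = ~gender_m & existence
    print(format(not_male, "05b"))        # 01101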
  • 70. Efficient Implementation of Bitmap Operations Bitmaps are packed into words; a single word AND (a basic CPU instruction) computes the AND of 32 or 64 bits at once. E.g., 1-million-bit bitmaps can be ANDed with just 31,250 instructions. Counting the number of 1s can be done fast by a trick: Use each byte to index into a precomputed array of 256 elements, each storing the count of 1s in the binary representation. Can use pairs of bytes to speed up further at a higher memory cost. Add up the retrieved counts. Bitmaps can be used instead of Tuple-ID lists at leaf levels of B + -trees, for values that have a large number of matching records. Worthwhile if > 1/64 of the records have that value, assuming a tuple-id is 64 bits. The above technique merges the benefits of bitmap and B + -tree indices.
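The byte-table counting trick above, sketched in Python (the sample bitmap is illustrative):

    ONES_IN_BYTE = [bin(b).count("1") for b in range(256)]   # precomputed, 256 entries

    def popcount(bitmap: bytes) -> int:
        # index each byte into the table and add up the retrieved counts
        return sum(ONES_IN_BYTE[b] for b in bitmap)

    print(popcount(bytes([0b10011001, 0b11110000])))          # 8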
  • 72. Partitioned Hashing Hash values are split into segments that depend on each attribute of the search-key ( A 1 , A 2 , . . . , A n ) for an n-attribute search-key. Example: n = 2, for customer, with search-key ( customer-street, customer-city ):
        search-key value        hash value
        (Main, Harrison)        101 111
        (Main, Brooklyn)        101 001
        (Park, Palo Alto)       010 010
        (Spring, Brooklyn)      001 001
        (Alma, Palo Alto)       110 010
    To answer an equality query on a single attribute, need to look up multiple buckets. Similar in effect to grid files.
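A sketch of the idea: hash each attribute into its own fixed-width segment and concatenate the segments. The 3-bit segment width and the use of Python's built-in hash are assumptions, so the codes produced will not match the table above.

    def partitioned_hash(values, bits_per_attr=3):
        code = 0
        for v in values:                           # one segment per search-key attribute
            segment = hash(v) % (1 << bits_per_attr)
            code = (code << bits_per_attr) | segment
        return code

    key = ("Main", "Harrison")                     # (customer-street, customer-city)
    print(format(partitioned_hash(key), "06b"))

    # an equality query on customer-city alone fixes only the second segment,
    # so every value of the first segment (multiple buckets) must be probed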
  • 73. Sequential File For account Records
  • 74. Deletion of “Perryridge” From the B + -Tree of Figure 12.12