Unit 3
1. No dependence on internet connection: Physical storage devices do not require an internet connection to store or access data.
2. Data can therefore be accessed at any time, regardless of internet connectivity.
Cache:- The cache is the fastest and most costly form of storage. Cache memory is small; its use
is managed by the computer system hardware. We shall not be concerned about managing cache
storage in the database system.
Main memory:-The storage medium used for data that are available to be operated on is main
memory. The general-purpose machine instructions operate on main memory. Although main
memory may contain many megabytes of data, or even gigabytes of data in large server systems,
it is generally too small (or too expensive) for storing the entire database. The contents of main
memory are usually lost if a power failure or system crash occurs.
Flash memory:- Flash memory has found popularity as a replacement for magnetic disks for storing small volumes of data (5 to 10 megabytes) in low-cost computer systems, such as computer systems that are embedded in other devices, in hand-held computers, and in other digital electronic devices such as digital cameras.
Magnetic-disk storage:-The primary medium for the long-term on-line storage of data is the
magnetic disk. Usually, the entire database is stored on magnetic disk. The system must move
the data from disk to main memory so that they can be accessed. After the system has performed
the designated operations, the data that have been modified must be written to disk.
Tape storage:- Tape storage is used primarily for backup and archival data. Although magnetic
tape is much cheaper than disks, access to data is much slower, because the tape must be
accessed sequentially from the beginning.
For this reason, tape storage is referred to as sequential-access storage. In contrast, disk storage is
referred to as direct-access storage because it is possible to read data from any location on disk.
Fig: Storage Hierarchy
Example (data blocks 15-30 distributed across four disks, two alternative layouts):

Disk 0   Disk 1   Disk 2   Disk 3
15       16       17       18
19       20       21       22
23       24       25       26
27       28       29       30

Disk 0   Disk 1   Disk 2   Disk 3
15       17       19       21
16       18       20       22
23       25       27       29
24       26       28       30

Example (mirroring - each block is duplicated on a second disk):

Disk 0   Disk 1   Disk 2   Disk 3
S        S        T        T
U        U        V        V
W        W        X        X
Y        Y        Z        Z

In the above table, there is duplication of data. Hence only half of the space is utilized to store data.
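The striping and mirroring layouts can be sketched as simple block-to-disk mappings. This is an illustrative Python sketch, not tied to any particular RAID implementation; the function names are ours.

```python
# Sketch: mapping logical blocks to physical disks for striping (RAID 0)
# and mirroring (RAID 1). Illustrative only.

def stripe_location(block_no, num_disks):
    """Striping: block i goes to disk (i mod n), at stripe row (i div n)."""
    return block_no % num_disks, block_no // num_disks

def mirror_locations(block_no, num_disks=2):
    """Mirroring: every block is written to all mirrored disks at the same offset."""
    return [(disk, block_no) for disk in range(num_disks)]

# Striping spreads consecutive blocks across disks:
print(stripe_location(5, 4))     # block 5 of a 4-disk array -> (1, 1)
# Mirroring duplicates each block, so only half the raw capacity holds unique data:
print(mirror_locations(3))       # [(0, 3), (1, 3)]
```

The mirroring function makes the capacity cost visible: every logical block occupies one physical block per mirror.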
Pros
It provides 100% redundancy thus making the system fault tolerant.
In case of a single drive failure, you have your data safely stored in the other drive.
This also means the RAID drive group will function even in case of a single drive failure.
Cons
Because data is mirrored, it occupies double the space, so only half of the raw storage capacity is usable.
Since every record requires twice the storage, RAID 1 turns out to be expensive.
RAID 2
RAID 2 is rarely used. This level stripes data at a bit level and each bit is stored in a separate
drive. It requires a disk separately for storing ECC code of data. The level uses the Hamming
code for error correction. You would agree that it is complex and expensive.
Pros
It uses a selected drive for uniformity in storing data.
It detects errors through the Hamming code.
It can be a good answer to data-security problems.
Cons
It uses an extra drive for error detection.
The need for the Hamming code makes it inconvenient for commercial use.
RAID 3
The RAID 3 level stripes data at the byte level. It requires a separate parity disk which stores the
parity information for each byte. When a disk fails, data can be recovered with the help of parity
bytes corresponding to them. Then the retrieved data can be stored in a new disk. It also has a
high read speed.
Example:
Disk 0   Disk 1   Disk 2   Disk 3
M        N        O        P (M, N, O)
R        S        T        P (R, S, T)
U        V        W        P (U, V, W)
X        Y        Z        P (X, Y, Z)
Pros
It enables high-speed transmission of data.
In case of a disk failure, data can be reconstructed using the corresponding parity byte.
Data can be accessed in parallel.
It might be used where few users are referring to large files.
Cons
It needs an extra drive to store the parity bytes.
Its performance is slow in case of files of small size.
It can be said that it is not a reliable or cheap solution to storage problems.
RAID 4
RAID 4 is quite popular. It combines ideas from RAID 0 and RAID 3: it stripes data at the block level, similar to RAID 0, and, just like RAID 3, it uses a dedicated parity disk. It stripes data at the block level and stores the corresponding parity blocks on the parity disk. In case of a single disk failure, the lost data can be recovered using this parity disk.
Example:
Disk 0   Disk 1   Disk 2   Disk 3
M        N        O        P0
R        S        T        P1
U        V        W        P2
X        Y        Z        P3

Each parity block holds the XOR (even parity) of the data bits in its stripe, for example:

0 1 0 0 -> parity 1
0 0 1 1 -> parity 0

If any one bit is lost, we can recover it from the parity bit and the other columns.
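Parity recovery can be sketched in Python: the parity bit is the XOR of the data bits, so any single lost bit equals the XOR of the surviving bits and the parity. The bit values follow the rows above.

```python
# Sketch: even-parity recovery as used in RAID 3/4. The parity bit is the XOR
# of the data bits; any single lost bit equals the XOR of the rest.

from functools import reduce

def parity(bits):
    return reduce(lambda a, b: a ^ b, bits)

row = [0, 1, 0, 0]          # data bits on four disks
p = parity(row)             # stored on the parity disk -> 1

# Suppose the bit on the second disk is lost; XOR parity with the survivors:
surviving = [row[0], row[2], row[3]]
recovered = parity(surviving + [p])
print(recovered)            # 1, the lost bit
```

This is why a single failed disk is recoverable but two simultaneous failures are not: with two unknowns, one XOR equation is no longer enough.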
Pros
In case of a single disk failure, the lost data is recovered from the parity disk.
It can be useful for large files.
Cons
It does not solve the problem of more than one disk failure.
The level needs at least 3 disks as well as hardware support for doing parity calculations.
It might seem to be slow in case of small files.
RAID 5
RAID 5 is one of the most popular levels, especially for systems with three or more drives. It is similar to RAID 4, but it distributes the parity blocks across all drives instead of dedicating one disk to parity. It stripes data at the block level, with the parity block rotating across the drives from stripe to stripe.
Example:
Disk 0   Disk 1   Disk 2   Disk 3   Disk 4
10       11       12       13       P0
15       16       17       P1       14
20       21       P2       18       19
25       P3       22       23       24
P4       26       27       28       29
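The rotating parity placement in the table can be sketched as a small function. The left-rotation rule is inferred from the layout above; real controllers may rotate differently.

```python
# Sketch: RAID 5 rotating parity placement. In the layout above, the parity
# block moves one disk to the left on each successive stripe.

def parity_disk(stripe_no, num_disks):
    """Disk index holding the parity block for a given stripe."""
    return (num_disks - 1 - stripe_no) % num_disks

for stripe in range(5):
    print("stripe", stripe, "-> parity on disk", parity_disk(stripe, 5))
```

Distributing parity this way avoids the dedicated-parity-disk bottleneck of RAID 4, since parity writes are spread over all drives.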
RAID 6
RAID 6 is also known as double-parity RAID. It is an enhanced version of RAID 5. It stripes data at the block level and stores two corresponding parity blocks per stripe, distributed across all disks. Because the two parity blocks are computed independently, the array can survive two simultaneous disk failures.
Example:
Disk 1   Disk 2   Disk 3   Disk 4
J0       K0       Q0       P0
J1       Q1       P1       M1
Q2       P2       L2       M2
P3       K3       L3       Q3
Pros
It can keep data safe even in case of 2 simultaneous disk failures.
Cons
It requires a minimum of 4 drives, and the capacity of two of them is consumed by parity rather than data.
It needs to write two parity blocks per stripe and hence is slower than RAID 5.
It has inadequate adaptability.
Conclusion
There are a number of other RAID levels too, such as RAID 10, RAID 5EE, RAID 50 and RAID 60. Each RAID level combines different qualities and can be judged on the basis of redundancy, read performance, write performance, minimum disks required and usage of the disk drives. The best RAID level for you depends on the storage space as well as the performance and reliability you are looking for. Generally, a greater number of drives results in better performance. Each RAID level has its own set of advantages and disadvantages, so you have to decide whether you are looking for safety, speed or storage space.
Summary
Given below is a summary of the RAID levels −

RAID-0 − It is the fastest and most efficient array type but offers no fault tolerance.
RAID-2 − It is rarely used today because ECC is embedded in almost all modern disk drives.
RAID-3 − It is used in single-user environments which access long sequential records, to speed up data transfer.
RAID-4 − It offers no advantages over RAID-5 and does not support multiple simultaneous write operations.
RAID-5 − It is the best choice in a multi-user environment; however, at least three drives are required for a RAID-5 array.
File Organization
A file is a collection of records. Using the primary key, we can access the records. The type and frequency of access are determined by the type of file organization that was used for a given set of records.
File organization is a logical relationship among various records. This method defines
how file records are mapped onto disk blocks.
File organization is used to describe the way in which the records are stored in terms of
blocks, and the blocks are placed on the storage medium.
A good file organization allows records to be selected as fast as possible.
Insert, delete and update transactions on the records should be quick and easy.
Duplicate records should not be introduced as a result of insert, update or delete operations.
Records should be stored efficiently, for a minimal cost of storage.
Types of file organization:
File organization contains various methods. These particular methods have pros and cons on the
basis of access or selection. In the file organization, the programmer decides the best-suited file
organization method according to his requirement.
Heap File Organization
It is a quite simple method. In this method, we store the records in sequence, i.e., one after another. Records are inserted in the order in which they arrive, with no sorting or ordering.
In case of updating or deleting a record, the record is searched for in the memory blocks. When it is found, it is marked for deletion, and (for an update) the new record is inserted.
If the database is very large then searching, updating or deleting of record will be time-
consuming because there is no sorting or ordering of records. In the heap file organization, we
need to check all the data until we get the requested record.
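A heap-file lookup can be sketched as a linear scan; the records and keys below are invented for illustration.

```python
# Sketch: searching a heap file means scanning records in insertion order,
# since there is no sorting. Records here are (key, payload) tuples.

heap_file = [(7, "a"), (3, "b"), (9, "c"), (1, "d")]  # insertion order

def heap_search(records, key):
    for i, (k, payload) in enumerate(records):
        if k == key:
            return i, payload      # found after scanning i+1 records
    return None                    # a full scan is needed to conclude absence

print(heap_search(heap_file, 9))   # (2, 'c')
print(heap_search(heap_file, 5))   # None - every record had to be checked
```

The `None` case shows why heap files get slow as the table grows: an unsuccessful search always reads the whole file.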
Hash File Organization
When a record has to be retrieved using the hash key columns, the address is generated, and the whole record is retrieved using that address. In the same way, when a new record has to be inserted, the address is generated using the hash key and the record is directly inserted. The same process applies in the case of delete and update.
In this method, there is no effort spent on searching and sorting the entire file. Each record is stored at an effectively random location in memory, determined by the hash function.
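The address-based access can be sketched in Python; the number of blocks and the mod hash function are assumptions for illustration.

```python
# Sketch: hash file organization computes a record's block address directly
# from its key; no scan over the file is needed. NUM_BLOCKS and the mod
# hash function are illustrative assumptions.

NUM_BLOCKS = 8
blocks = {i: [] for i in range(NUM_BLOCKS)}

def block_address(key):
    return key % NUM_BLOCKS        # a simple hash function on the key

def insert(key, record):
    blocks[block_address(key)].append((key, record))

def fetch(key):
    for k, rec in blocks[block_address(key)]:
        if k == key:
            return rec

insert(103, "Alice")
print(block_address(103), fetch(103))   # 7 Alice
```

Insert, delete and update all start with the same `block_address` computation, which is exactly the symmetry the text describes.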
B+ File Organization
B+ tree file organization is the advanced method of an indexed sequential access method. It
uses a tree-like structure to store records in File.
It uses the same concept of key-index where the primary key is used to sort the records. For
each primary key, the value of the index is generated and mapped with the record.
The B+ tree is similar to a binary search tree (BST), but it can have more than two children. In
this method, all the records are stored only at the leaf node. Intermediate nodes act as a pointer
to the leaf nodes. They do not contain any records.
In this method, searching becomes very easy, as all the records are stored only in the leaf nodes and are sorted in a sequential linked list.
Traversing through the tree structure is easier and faster.
The size of the B+ tree has no restrictions, so the number of records can increase or decrease
and the B+ tree structure can also grow or shrink.
It is a balanced tree structure, and any insert/update/delete does not affect the performance of
tree.
Cons of B+ tree file organization
This method is inefficient for static tables.
If any record has to be retrieved based on its index value, then the address of the data block is fetched and the record is retrieved from memory.
1. Indexed Clusters:
In an indexed cluster, records are grouped based on the cluster key and stored together. The EMPLOYEE and DEPARTMENT relationship is an example of an indexed cluster: here, all the records are grouped based on the cluster key, DEP_ID, and stored together.
2. Hash Clusters:
It is similar to the indexed cluster. In hash cluster, instead of storing the records based on the
cluster key, we generate the value of the hash key for the cluster key and store the records with
the same hash key value.
Indexing in DBMS
Indexing is used to optimize the performance of a database by minimizing the number of disk
accesses required when a query is processed.
The index is a type of data structure. It is used to locate and access the data in a database table
quickly.
Index structure:
The first column of the index is the search key, which contains a copy of the primary key or candidate key of the table. These values are stored in sorted order so that the corresponding data can be accessed easily.
The second column of the index is the data reference. It contains a set of pointers holding the address of the disk block where the value of the particular key can be found.
Indexing Methods
Ordered indices
The indices are usually sorted to make searching faster. The indices which are sorted are known
as ordered indices.
Example: Suppose we have an employee table with thousands of records, each of which is 10 bytes long. If the IDs start with 1, 2, 3, ... and so on, and we have to search for the record with ID 543:
In the case of a database with no index, we have to scan the disk blocks from the beginning until we reach 543. The DBMS will find the record after reading 543 * 10 = 5430 bytes.
In the case of an index (with, say, 2-byte index entries), we search using the index, and the DBMS will find the record after reading 542 * 2 = 1084 bytes, which is far less than in the previous case.
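The byte counts in this example can be verified with a line of arithmetic; the 2-byte index entry size is an assumption implied by the figures.

```python
# Sketch of the arithmetic above: scanning 10-byte records up to ID 543 versus
# walking 2-byte index entries (record and entry sizes as assumed in the text).

RECORD_SIZE = 10   # bytes per data record
INDEX_SIZE = 2     # bytes per index entry (assumption from the example)

scan_bytes = 543 * RECORD_SIZE      # read every record up to ID 543
index_bytes = 542 * INDEX_SIZE      # read the index entries before the match
print(scan_bytes, index_bytes)      # 5430 1084
```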
Primary Index
If the index is created on the basis of the primary key of the table, then it is known as primary indexing. Primary keys are unique for each record, so there is a 1:1 relation between index entries and records.
As primary keys are stored in sorted order, the performance of the searching operation is quite
efficient.
The primary index can be classified into two types: Dense index and Sparse index.
Dense index
The dense index contains an index record for every search key value in the data file. It makes
searching faster.
In this, the number of records in the index table is the same as the number of records in the main table.
It needs more space to store index record itself. The index records have the search key and a
pointer to the actual record on the disk.
Sparse index
In the data file, an index record appears only for a few items. Each index entry points to a block.
Instead of pointing to each record in the main table, the index points to records at intervals, leaving gaps; the block is then scanned for the exact record.
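The difference between dense and sparse indexes can be sketched as follows; the block layout and keys are invented for illustration.

```python
# Sketch: a dense index keeps one entry per record; a sparse index keeps one
# entry per block and scans inside the located block.

import bisect

blocks = [[(1, "a"), (2, "b")], [(3, "c"), (4, "d")], [(5, "e"), (6, "f")]]

# Dense index: one entry per record -> (block number, slot in block).
dense = {key: (b, i) for b, blk in enumerate(blocks)
                     for i, (key, _) in enumerate(blk)}

# Sparse index: only the first key of each block.
sparse_keys = [blk[0][0] for blk in blocks]

def sparse_lookup(key):
    b = bisect.bisect_right(sparse_keys, key) - 1  # block covering the key
    for k, val in blocks[b]:                       # scan inside that block
        if k == key:
            return val

print(dense[4], sparse_lookup(4))   # both locate record 4
```

The trade-off is visible in the structures themselves: `dense` has one entry per record, while `sparse_keys` has one entry per block at the cost of an extra in-block scan.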
Clustering Index
A clustered index can be defined as an ordered data file. Sometimes the index is created on non-
primary key columns which may not be unique for each record.
In this case, to identify the record faster, we will group two or more columns to get the unique
value and create index out of them. This method is called a clustering index.
The records which have similar characteristics are grouped, and indexes are created for these groups.
Example: suppose a company contains several employees in each department. Suppose we use a
clustering index, where all employees which belong to the same Dept_ID are considered within a
single cluster, and index pointers point to the cluster as a whole. Here Dept_Id is a non-unique
key.
The previous scheme is a little confusing because one disk block may be shared by records which belong to different clusters. Using a separate disk block for each cluster is the better technique.
Secondary Index
In sparse indexing, as the size of the table grows, the size of the mapping also grows. These mappings are usually kept in primary memory so that address fetches are fast. The secondary memory is then searched for the actual data using the address obtained from the mapping. If the mapping grows too large, fetching the address itself becomes slow, and the sparse index is no longer efficient. To overcome this problem, secondary indexing is introduced.
In secondary indexing, to reduce the size of mapping, another level of indexing is introduced. In
this method, the huge range for the columns is selected initially so that the mapping size of the
first level becomes small. Then each range is further divided into smaller ranges. The mapping of
the first level is stored in the primary memory, so that address fetch is faster. The mapping of the
second level and actual data are stored in the secondary memory (hard disk).
For example:
If you want to find the record with roll number 111 in the diagram, the search first finds the highest entry which is smaller than or equal to 111 in the first-level index; it gets 100 at this level.
Then, in the second-level index, it again finds the largest entry <= 111 and gets 110. Using address 110, it goes to the data block and scans each record sequentially until it finds 111.
This is how a search is performed in this method. Inserting, updating or deleting is also done in
the same manner.
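The two-level lookup for roll 111 can be sketched in Python; the range boundaries mirror the example, and the data values are assumed.

```python
# Sketch of the two-level lookup: find the largest first-level entry <= 111,
# then the largest second-level entry <= 111, then scan that data block.

import bisect

first_level = [1, 100, 200]             # coarse ranges, kept in primary memory
second_level = {100: [100, 110, 120]}   # finer sub-ranges for range 100
data_blocks = {110: [110, 111, 112]}    # records in the block starting at 110

def lookup(key):
    lo1 = first_level[bisect.bisect_right(first_level, key) - 1]   # -> 100
    subs = second_level[lo1]
    lo2 = subs[bisect.bisect_right(subs, key) - 1]                 # -> 110
    return key in data_blocks[lo2]      # sequential scan inside the block

print(lookup(111))   # True
```

Only `first_level` needs to stay in primary memory; the second level and the data blocks can live on disk, which is the point of the scheme.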
B+ Tree
o The B+ tree is a balanced search tree in which a node can have more than two children. It follows a multi-level index format.
o In the B+ tree, leaf nodes denote actual data pointers. B+ tree ensures that all leaf nodes
remain at the same height.
o In the B+ tree, the leaf nodes are linked together in a linked list. Therefore, a B+ tree can support random access as well as sequential access.
Structure of B+ Tree
o In the B+ tree, every leaf node is at equal distance from the root node. The B+ tree is of
the order n where n is fixed for every B+ tree.
o It contains an internal node and leaf node.
Internal node
o An internal node of the B+ tree can contain at least n/2 record pointers except the root
node.
o At most, an internal node of the tree contains n pointers.
Leaf node
o The leaf node of the B+ tree can contain at least n/2 record pointers and n/2 key values.
o At most, a leaf node contains n record pointers and n key values.
o Every leaf node of the B+ tree contains one block pointer P to point to the next leaf node.
Searching in B+ Tree
Suppose we want to search for 55. In the intermediary node, we will find a branch between 50 and 75. We will then be redirected to the third leaf node, where the DBMS performs a sequential search to find 55.
B+ Tree Insertion
Suppose we want to insert a record 60 in the below structure. It will go to the 3rd leaf
node after 55. It is a balanced tree, and a leaf node of this tree is already full, so we
cannot insert 60 there.
In this case, we have to split the leaf node, so that it can be inserted into tree without
affecting the fill factor, balance and order.
The 3rd leaf node has the values (50, 55, 60, 65, 70), and the key for it in the intermediate node is 50. We will split the leaf node of the tree in the middle so that its balance is not altered. So we can group (50, 55) and (60, 65, 70) into two leaf nodes.
If these two are to be leaf nodes, the intermediate node cannot branch from 50 alone. It should have 60 added to it, and then we can have a pointer to the new leaf node.
This is how we can insert an entry when there is overflow. In a normal scenario, it is very
easy to find the node where it fits and then place it in that leaf node.
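The leaf split described above can be sketched as a small function: the overfull leaf is cut in the middle, and the first key of the new right leaf is pushed up into the parent node.

```python
# Sketch of a B+ tree leaf split: an overfull leaf (50, 55, 60, 65, 70) is
# split in the middle, and the first key of the right half (60) is promoted
# into the intermediate (parent) node.

def split_leaf(keys):
    mid = len(keys) // 2
    left, right = keys[:mid], keys[mid:]
    return left, right, right[0]       # key promoted to the parent

left, right, promoted = split_leaf([50, 55, 60, 65, 70])
print(left, right, promoted)   # [50, 55] [60, 65, 70] 60
```

This matches the example: the groups (50, 55) and (60, 65, 70) become the two leaves, and 60 is added to the intermediate node.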
B+ Tree Deletion
Suppose we want to delete 60 from the above example. In this case, we have to remove
60 from the intermediate node as well as from the 4th leaf node too. If we remove it
from the intermediate node, then the tree will not satisfy the rule of the B+ tree. So we
need to modify it to have a balanced tree.
After deleting node 60 from above B+ tree and re-arranging the nodes, it will show as
follows:
Hashing in DBMS
In a huge database structure, it is very inefficient to search all the index values and reach
the desired data. Hashing technique is used to calculate the direct location of a data
record on the disk without using index structure.
In this technique, data is stored at the data blocks whose address is generated by using
the hashing function. The memory location where these records are stored is known as
data bucket or data blocks.
In this, a hash function can choose any of the column values to generate the address. Most of the time, the hash function uses the primary key to generate the address of the
data block. A hash function can range from a simple mathematical function to a complex one. We can even consider the primary key itself as the address of the data block; that means each row is stored in the data block whose address is the same as its primary key.
The above diagram shows data block addresses that are the same as the primary key values. The hash function can also be a simple mathematical function like exponential, mod, cos, sin, etc.
Suppose we use the mod(5) hash function to determine the address of the data block. Applying mod(5) to the primary keys generates 3, 3, 1, 4 and 2 respectively, and the records are stored at those data block addresses.
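The mod(5) hash function can be sketched directly; the EMP_ID below is taken from the static-hashing example that follows.

```python
# Sketch: a mod(5) hash function mapping a primary key to a data bucket
# address, as in the static-hashing example with EMP_ID = 103.

def bucket(emp_id):
    return emp_id % 5

print(bucket(103))   # 3
```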
Types of Hashing:
Static Hashing
In static hashing, the resultant data bucket address will always be the same. That means
if we generate an address for EMP_ID =103 using the hash function mod (5) then it will
always result in same bucket address 3. Here, there will be no change in the bucket
address.
Hence in this static hashing, the number of data buckets in memory remains constant
throughout. In this example, we will have five data buckets in the memory used to store
the data.
Operations of Static Hashing
o Searching a record
When a record needs to be searched, then the same hash function retrieves the address
of the bucket where the data is stored.
o Insert a Record
When a new record is inserted into the table, we generate an address for the new record based on the hash key, and the record is stored at that location.
o Delete a Record
To delete a record, we will first fetch the record which is supposed to be deleted. Then
we will delete the records for that address in memory.
o Update a Record
To update a record, we will first search it using a hash function, and then the data record
is updated.
If we want to insert a new record into the file but the data bucket address generated by the hash function is not empty (data already exists at that address), the new record cannot be placed there. This situation in static hashing is known as bucket overflow, and it is a critical situation for this method.
To overcome this situation, there are various methods. Some commonly used methods
are as follows:
1. Open Hashing
When a hash function generates an address at which data is already stored, the next bucket is allocated to it. This mechanism is called Linear Probing.
For example: suppose R3 is a new record which needs to be inserted, and the hash function generates address 112 for R3. But the generated address is already full, so the system searches for the next available data bucket, 113, and assigns R3 to it.
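Linear probing can be sketched in Python; the bucket numbers follow the example, and the record names are illustrative.

```python
# Sketch of linear probing: if the target bucket is occupied, try the next
# bucket until a free one is found. Bucket numbering follows the example.

buckets = {110: "R0", 111: "R1", 112: "R2"}   # bucket 112 is already full

def insert_linear(addr, record, table):
    while addr in table:          # probe the next bucket until a free one
        addr += 1
    table[addr] = record
    return addr

print(insert_linear(112, "R3", buckets))   # 113, the next available bucket
```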
2. Close Hashing
When buckets are full, then a new data bucket is allocated for the same hash result and
is linked after the previous one. This mechanism is known as Overflow chaining.
For example: suppose R3 is a new record which needs to be inserted into the table, and the hash function generates address 110 for it. But this bucket is full. In this case, a new bucket is inserted at the end of bucket 110's chain and is linked to it.
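Overflow chaining can be sketched with per-bucket lists; bucket 110 and the record names follow the example.

```python
# Sketch of overflow chaining: each bucket holds a chain of records, and an
# overflow record is linked after the existing ones in the same bucket.

from collections import defaultdict

table = defaultdict(list)
table[110] = ["R1", "R2"]        # bucket 110 is considered full

def insert_chained(addr, record):
    table[addr].append(record)   # link the new record after the previous ones

insert_chained(110, "R3")
print(table[110])                # ['R1', 'R2', 'R3']
```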
Dynamic Hashing
o The dynamic hashing method is used to overcome the problems of static hashing like
bucket overflow.
o In this method, data buckets grow or shrink as the number of records increases or decreases. This method is also known as the extendable hashing method.
o This method makes hashing dynamic, i.e., it allows insertion or deletion without resulting
in poor performance.
In the diagram's example, the last two bits of the hash values for keys 2 and 4 are 00, so they go into bucket B0. The last two bits for 5 and 6 are 01, so they go into bucket B1. The last two bits for 1 and 3 are 10, so they go into bucket B2. The last two bits for 7 are 11, so it goes into B3.
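Selecting a bucket by the last two bits of a hash value can be sketched as a bitwise mask; the hash values below are illustrative.

```python
# Sketch of directory lookup in extendable hashing: the last two bits of a
# hash value select the bucket. The hash values here are illustrative.

def bucket_id(hash_value, bits=2):
    return hash_value & ((1 << bits) - 1)   # keep only the last `bits` bits

for h in [0b1100, 0b1001, 0b0110, 0b0111]:
    print(format(h, "04b"), "-> bucket B%d" % bucket_id(h))
```

When a bucket overflows, extendable hashing increases `bits` by one, doubling the directory, which is how the structure grows with the data.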