Data Structures Notes With Question Bank
Prepared By:
Mrs. M. PAVITHRA, MCA., M.Phil., B.Ed.
Assistant Professor,
PG & Research Department of Computer Science & Applications,
Sri Vidya Mandir Arts & Science College (Autonomous),
Katteri, Uthangarai.
Name : _______________________
Class : _______________________
Data Structures Syllabus
UNIT – I
Algorithms (Analysis and Design): Problem Solving – Top-Down and Bottom-Up –
Design, Implementation, Verification of Algorithm – Efficiency Analysis of Algorithms:
Space, Time Complexity, and Frequency Count – Introduction: Definitions – Concepts –
Overview – Abstract Data Types (ADTs).
UNIT – II
Arrays: Definition – Terminology – One Dimensional Array – Multi Dimensional Array.
Linked List: Definition – Single Linked List – Double Linked List – Circular Linked List –
Applications: Sparse Matrix – Polynomial Representation – Dynamic Storage
Management.
UNIT – III
Stack ADT – Operations – Applications – Evaluating arithmetic expressions – Conversion
of infix to postfix expression – Queue ADT – Operations – Circular Queue – Priority
Queue – Deque – applications of queues.
UNIT – IV
Tree ADT – tree traversals – Binary Tree ADT – expression trees – applications of trees –
binary search tree ADT – Heap Tree. Graph: Definition – Representation of Graph –
Types of graph – Breadth first traversal – Depth first traversal – Topological sort –
Applications of graphs.
UNIT – V
Searching – Linear Search Techniques with Array, Linked List, and Ordered List –
Binary search – Sorting – Bubble sort – Selection sort – Insertion sort – Shell sort – Radix
sort – Quick Sort – Merge Sort.
Text Book
1. Mark Allen Weiss ― Data Structures and Algorithm Analysis in C++, Pearson
Education, 2014, 4th Edition.
2. Reema Thareja ― Data Structures Using C, Oxford University Press, 2014, 2nd
Edition.
UNIT – 1
ALGORITHMS (ANALYSIS & DESIGN)
1.1 PROBLEM SOLVING:
Problem solving is the process of breaking down a problem into smaller parts,
each of which can be solved step by step to obtain the final solution.
In order to solve a problem using a computer, one needs to write the step-by-step
solution first; this may be done by writing instructions for each operation.
1.1.1 Basic Steps for Solving A Problem:
➢ Formulating the problem and deciding the data types to be entered
➢ Identifying the steps of computation that are necessary for getting the solution
➢ Identifying decision parts
➢ Finding the result and verifying the values
1.1.2 Procedure for Problem Solving:
Problem solving is simply writing the basic steps and putting them into the correct
sequence to find the result.
The procedure for solving a problem involves six steps. They are,
Step 1: Understanding the problem
Step 2: Construction of the list of variables
Step 3: Decide the layout for the output
Step 4: Select a programming method best suited to solve a problem
Step 5: Test the program
Step 6: Validating the program
Step 1: Understanding the problem:
In order to solve the problem, do not start drawing a flowchart or decision table
straight away. Instead, read each statement of the problem slowly and carefully. Use
paper and pencil to solve the problem manually for some test data.
Example:
Find the sum of first 6 even numbers?
Solution:
The first six even numbers are 2, 4, 6, 8, 10, and 12.
The sum of first 6 even numbers is 42
The purpose of the procedure Main is to coordinate the three branch operations, e.g. the
Get, Process and Put routines. These three routines communicate only through Main.
Similarly, Sub1 and Sub2 can communicate only through the Process routine.
Advantages:
➢ Increased comprehension of the problem
➢ Unnecessary lower-level details are removed
➢ Reduced debugging time
Greedy algorithm:
In each step, this algorithm selects the best available option until all options are
exhausted. This approach is widely used in many places for designing algorithms.
For example,
Shortest path algorithm
Divide and Conquer:
In divide and conquer, the big problem is divided into smaller problems of the
same type, and the algorithm combines the solutions of these smaller problems to solve
the bigger problem.
For example,
In quick sort, we divide the initial list into several smaller lists; after sorting those
smaller lists, we combine them to get the final sorted list.
When this method is applied, it often leads to large improvements in time
complexity.
Recursive algorithm:
A set of instructions that perform a logical operation can be grouped together as a
function.
If a function calls itself, then it is called direct recursion.
If a function calls another function, which in turn invokes the calling function, then
the technique is called indirect recursion.
Randomized algorithm:
In a randomized algorithm, we use random numbers instead of fixed values; as a
result, the algorithm may behave differently on different runs.
Backtrack algorithm:
An algorithmic technique to find solutions by trying one of several choices. If the
choice proves incorrect, computation backtracks or restarts at the point of that choice and
tries another choice.
Example:
Game tree
Modular programming approach:
In industry and commerce, the problems that are to be solved with the help of a
computer need thousands of lines of code or even more.
O(1) means constant time, that is, the time taken does not depend on the size of
the input; we get the data in the first step itself.
O(n) is called linear, when all the elements of the linear list are traversed once. For
example, the best case of the bubble sort method.
O(n²) is called quadratic, when the complete list is traversed for each element. For
example, the worst case of the bubble sort method.
O(log n) arises when we divide the list in half each time and examine only the
middle element. For example, the method used in binary searching.
O(n log n) arises when we divide the list in half each time and traverse each half.
For example, the best case of quick sort takes O(n log n) time. O(n log n) is better than
O(n²) but not as good as O(n).
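For example, the following small C sketch of binary search illustrates the O(log n) case; the array contents and the function name here are only illustrative, not part of the notes.

    #include <stdio.h>

    /* Binary search on a sorted array: each comparison halves the portion of
       the list still under consideration, so it needs O(log n) comparisons. */
    int binary_search(const int a[], int n, int key)
    {
        int low = 0, high = n - 1;
        while (low <= high) {
            int mid = (low + high) / 2;
            if (a[mid] == key)
                return mid;        /* found */
            else if (a[mid] < key)
                low = mid + 1;     /* search the right half */
            else
                high = mid - 1;    /* search the left half */
        }
        return -1;                 /* not found */
    }

    int main(void)
    {
        int a[] = {2, 4, 6, 8, 10, 12};
        printf("%d\n", binary_search(a, 6, 10));  /* prints 4 */
        return 0;
    }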
INTRODUCTION:
1.5 DEFINITIONS:
The organized collection of data is known as data structure.
Data Structure = Organized data + Operations
The basic terminologies of data structure are,
Data
Entity
Information
Data type
Data:
The term data means a value or a set of values.
Entity:
An entity is one that has certain attributes and which may be assigned values.
For example,
An employee in an organization is an entity. The possible attributes and their
corresponding values for an entity,
Entity: EMPLOYEE
Attributes: NAME DOB SEX DESIGNATION
Values: ANAND 11/12/1990 MALE TEAM LEADER
Information:
The term information means meaningful or processed data, that is, data that has
been generated as a result of processing. It is used for data together with its attributes.
An ADT in the data structure can be thought of as a set of operations that can be
performed on a set of values. This set of operations actually defines the behavior of the
data structure, and they are used to manipulate the data in a way that suits the needs of
the program.
ADTs are often used to abstract away the complexity of a data structure and to
provide a simple and intuitive interface for accessing and manipulating the data. This
makes it easier for programmers to reason about the data structure, and to use it correctly
in their programs.
Examples of abstract data type in data structures are List, Stack, Queue, etc.
1.6 CONCEPTS:
A digital computer can manipulate only primitive data, that is, data in terms of 0s
and 1s. Manipulation of user data requires,
o Storage representation of user data
o Retrieval of stored data
o Transformation of user data
1.6.1 Storage representation of user data:
User data should be stored in such a way that the computer can understand it.
Types of array:
o One dimensional array
o Two dimensional array
o Three dimensional array
One dimensional array:
A one dimensional array is a collection of elements of a similar data type, where an
element is referred to by one subscript.
Example:
int A[i];
Two dimensional array:
A two dimensional array is a collection of elements of a similar data type, where an
element is referred to by two subscripts. The two dimensions are termed row and column.
Example:
int A[i][j];
Linked list:
The linked list is a linear collection of data items called as nodes. Node is divided
into two fields.
1) INFO Field
2) LINK Field
Advantage:
• Quick insertion
• Quick deletion
Disadvantage:
• Slow search
Stack:
It is a linear data structure in which data is inserted and deleted at one end called
as top of the stack that is data is stored and retrieved in Last In First Out (LIFO) order.
Advantage:
• Provides last in first out access
Disadvantage:
• Slow access to other items
Queues:
Queue is a linear list of elements in which deletion can take place only at one end
called the FRONT and insertion can take place only at the other end called REAR.
Queues are also called First In First Out (FIFO) or First Come First Serve (FCFS).
Advantage:
• Provides first in first out access
Disadvantage:
• Slow access to other items
1.7.2 Non Linear Data Structures:
Data structure is said to be nonlinear if its elements do not form a sequence, ie,
where insertion and deletion is not possible in a linear fashion.
The nonlinear data structures are
Trees
Graphs
Trees:
A tree is a finite set of nodes such that there is a specially designated node called the
root, and the remaining nodes form a collection of sub-trees.
Graphs:
A graph is a non-empty, nonlinear set of nodes together with a set of edges.
List ADT:
Lists are linear data structures that hold data in a non-contiguous structure. The
list is made up of data storage containers known as "nodes." These nodes are linked to
one another, which means that each node contains the address of another node. All of the
nodes are thus connected to one another via these links.
Some of the most essential operations defined in List ADT are listed below.
• front(): returns the value of the node present at the front of the list.
• back(): returns the value of the node present at the back of the list.
• push_front(int val): creates a node with value = val and places this node
at the front of the linked list.
• push_back(int val): creates a node with value = val and places this node
at the back of the linked list.
• pop_front(): removes the front node from the list.
• pop_back(): removes the last node from the list.
• empty(): returns true if the list is empty, otherwise returns false.
• size(): returns the number of nodes that are present in the list.
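As an illustration (not a prescribed implementation), a minimal C sketch of a list node together with two of the operations above, push_front() and size(), might look as follows; the field names are assumptions made only for this example.

    #include <stdlib.h>

    /* One node of the list: a value and a link to the next node. */
    struct node {
        int data;
        struct node *next;
    };

    /* push_front: create a node with value val and make it the new first node. */
    void push_front(struct node **head, int val)
    {
        struct node *n = malloc(sizeof *n);
        if (n == NULL)
            return;            /* memory is insufficient: no insertion */
        n->data = val;
        n->next = *head;       /* the old first node follows the new one */
        *head = n;
    }

    /* size: count the nodes by walking the links until NULL. */
    int size(const struct node *head)
    {
        int count = 0;
        for (; head != NULL; head = head->next)
            count++;
        return count;
    }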
Stack ADT:
A stack is a linear data structure that only allows data to be accessed from the top.
It has just two main operations: push (to insert data at the top of the stack) and pop (to
remove data from the top of the stack).
Some of the most essential operations defined in Stack ADT are listed below.
• top(): returns the value of the node present at the top of the stack.
• push(int val): creates a node with value = val and puts it at the stack top.
• pop(): removes the node from the top of the stack.
• empty(): returns true if the stack is empty, otherwise returns false.
• size(): returns the number of nodes that are present in the stack.
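A possible array-based sketch of this interface in C is given below; SIZE, the variable names and the -1 value returned on underflow are assumptions made only for illustration.

    #define SIZE 100

    int stack[SIZE];
    int top = -1;                 /* -1 means the stack is empty */

    int is_empty(void)   { return top == -1; }
    int stack_size(void) { return top + 1;  }

    /* push: place val at the top of the stack, if there is room. */
    void push(int val)
    {
        if (top < SIZE - 1)
            stack[++top] = val;
    }

    /* pop: remove and return the value at the top of the stack. */
    int pop(void)
    {
        return is_empty() ? -1 : stack[top--];
    }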
Queue ADT:
A queue is a linear data structure that allows data to be accessed from both ends.
There are two main operations in the queue: push (this operation inserts data to the back
of the queue) and pop (this operation is used to remove data from the front of the queue).
Some of the most essential operations defined in Queue ADT are listed below.
• front(): returns the value of the node present at the front of the queue.
• back(): returns the value of the node present at the back of the queue.
• push(int val): creates a node with value = val and puts it at the back (rear) of
the queue.
• pop(): removes the node from the front of the queue.
• empty(): returns true if the queue is empty, otherwise returns false.
• size(): returns the number of nodes that are present in the queue.
Advantages of ADT in Data Structures:
• Provides abstraction, which simplifies the complexity of the data structure and
allows users to focus on the functionality.
• Enhances program modularity by allowing the data structure implementation to
be separate from the rest of the program.
• Enables code reusability as the same data structure can be used in multiple
programs with the same interface.
• Promotes the concept of data hiding by encapsulating data and operations into a
single unit, which enhances security and control over the data.
• Supports polymorphism, which allows the same interface to be used with different
underlying data structures, providing flexibility and adaptability to changing
requirements.
Disadvantages of ADT in Data Structures:
• Overhead: Using ADTs may result in additional overhead due to the need for
abstraction and encapsulation.
• Limited control: ADTs can limit the level of control that a programmer has over
the data structure, which can be a disadvantage in certain scenarios.
• Performance impact: Depending on the specific implementation, the performance
of an ADT may be lower than that of a custom data structure designed for a
specific application.
UNIT – 2
ARRAYS
2.1 ARRAY:
An array is a finite, ordered collection of homogeneous data elements which
are stored in adjacent cells in memory.
The data are referenced by a single name and identified by a subscript.
Example,
An array of integers to store the age of all students in a class.
int age[40];
An array is known as a linear data structure because all elements of the array are
stored in a linear order.
2.2 TERMINOLOGY:
The array terminologies are,
Size
Type
Base
Index
Range of Indices
Word
Size:
The number of elements in an array is called the size of the array. The size is also
called length (or) dimension.
Type:
The type of an array represents the kind of data type. For example, an array of
integers, an array of float, etc.
Base:
The base of an array is the address of the memory location where the first element
of the array is located.
Index:
All the elements in an array are referenced by a subscript, which is known as the index.
Let M be the memory location of the first element of the array. If each element
requires one word, then the location of any element A[i] in the array can be obtained as,
Address of A[i] = M + (i - 1)
An array can be written as A[L…..U], where L and U denote the lower bound and
upper bound of the index.
If the array is stored starting from the memory location M, and each element
requires W words, then the address of A[i] will be,
Address of A[i] = M + (i - L) x W
The above formula is known as the indexing formula, which is used to map the
logical representation of an array to its physical representation.
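The indexing formula can be verified with a small C program; the base address, word size, bounds and index used below are assumed values chosen only for illustration.

    #include <stdio.h>

    int main(void)
    {
        int M = 1000;   /* assumed base address of the array          */
        int W = 4;      /* assumed size of one element (in words)     */
        int L = 1;      /* lower bound of the index                   */
        int i = 5;      /* index of the element we are interested in  */

        /* Address of A[i] = M + (i - L) x W */
        int address = M + (i - L) * W;
        printf("Address of A[%d] = %d\n", i, address);   /* prints 1016 */
        return 0;
    }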
Step 3: Exit
Step 4: Else
Step 5: i = U
Step 6: While i > LOCATION do
Step 7: A[i] = A[i – 1]
Step 8: i = i – 1
Step 9: Endwhile
Step 10: A[LOCATION] = KEY
Step 11: Endif
Step 12: Stop
2.4.3 Deletion:
This operation is used to delete a particular element from an array. The element
is deleted by overwriting it with its subsequent element, and that subsequent element is in
turn overwritten by the one following it, and so on.
Algorithm:
Input: KEY the element to be deleted.
Output: Slimmed array without KEY.
Data Structures: An array A[L…U].
Step 1: i = SearchArray (A, KEY)
Step 2: if (i = 0) then
Step 3: print “KEY is not found: No deletion”
Step 4: Exit
Step 5: Else
Step 6: while i < U do
Step 7: A[i] = A[i + 1]
Step 8: i = i + 1
Step 9: End while
Step 10: End if
Step 11: A[U] = NULL
Step 12: U = U – 1
Step 13: Stop
2.4.4 Searching:
This operation is applied to search an element of interest in an array.
Algorithm:
Input: KEY is the element to be searched.
Output: Index of KEY in A or a message on failure.
Data Structures: An array A[L…U]
Step 1: i = L, found = 0, Location = 0
Step 2: while (i ≤ U) and (found = 0) do
Step 3: if compare (A[i], KEY) = TRUE then
Step 4: found = 1
Step 5: Location = i
Step 6: Else
Step 7: i = i + 1
Step 8: End if
Step 9: End while
Step 10: if found = 0 then
Step 11: print “Search is unsuccessful: KEY is not in the array”
Step 12: Else
Step 13: print “Search is successful: KEY is in location”, location.
Step 14: End if
Step 15: return (location)
Step 16: Stop
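The same search can be written in C as a small function; the signature below is only a sketch, using 0-based indexing instead of the bounds L and U used in the algorithm.

    /* Linear search: return the index of key in a[0..n-1], or -1 on failure. */
    int search_array(const int a[], int n, int key)
    {
        for (int i = 0; i < n; i++) {
            if (a[i] == key)
                return i;    /* search is successful: key is at location i */
        }
        return -1;           /* search is unsuccessful: key is not in the array */
    }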
2.4.5 Sorting:
Sorting is arranging the given numbers in a specified order, either ascending or
descending.
The following algorithm is used to sort the elements of an integer array in
ascending order.
Algorithm:
Input: An array with integer data.
Output: An array with sorted elements in an order according to Order ().
Data Structures: An array A[L…U]
Step 1: i = U
Step 2: while i ≥ L do
Step 3: j = L
Step 4: while j < i do
Step 5: if order ( A[j], A[j+1]) = FALSE
Step 6: Swap (A[j], A[j+1])
Step 7: End if
Step 8: j = j+1
Step 9: End while
Step 10: i = i – 1
Step 11: End while
Step 12: Stop
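The algorithm above is essentially the bubble sort method; a C sketch, again using 0-based indexing and with names chosen only for illustration, is shown below.

    /* Bubble sort in ascending order: on each outer pass the largest remaining
       element moves to the end of the unsorted portion. */
    void bubble_sort(int a[], int n)
    {
        for (int i = n - 1; i > 0; i--) {       /* i plays the role of U down to L */
            for (int j = 0; j < i; j++) {
                if (a[j] > a[j + 1]) {          /* out of order: swap */
                    int temp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = temp;
                }
            }
        }
    }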
2.4.6 Merging:
Merging is used to combine the elements from two different arrays.
Algorithm:
Input: Two arrays A1[L1….U1] , A2[L2….U2]
Output: Resultant array A[L…U] where, L= L1, and U = U1 +(U2 - L2 + 1) when
A2 is appended after A1.
Data Structures: Array structure
Step 1: i1 = L1, i2 = L2
Step 2: L = L1, U = U1 + U2 – L2 + 1
Step 3: i = L
Step 4: Allocate Memory (Size (U – L + 1))
Step 5: while i1 ≤ U1 do
Step 6: A[i] = A1[i1]
Step 7: i = i + 1, i1 = i1 + 1
Step 8: End while
Step 9: while i2 ≤ U2 do
Step 10: A[i] = A2[i2]
Step 11: i = i + 1, i2 = i2 + 1
Step 12: End while
Step 13: Stop
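A small C sketch of this append-style merging, with 0-based indexing and illustrative names, is given below; the destination array is assumed to have room for all the elements.

    /* Append-merge: copy a1[0..n1-1] and then a2[0..n2-1] into a[0..n1+n2-1]. */
    void merge_append(const int a1[], int n1, const int a2[], int n2, int a[])
    {
        int i = 0;
        for (int i1 = 0; i1 < n1; i1++)
            a[i++] = a1[i1];     /* elements of the first array come first */
        for (int i2 = 0; i2 < n2; i2++)
            a[i++] = a2[i2];     /* elements of the second array follow    */
    }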
The subscripts of any arbitrary element, say (Aij) represent the ith row and jth
column.
Memory representation of a Matrix:
Matrices are also stored in continuous memory locations. There are two
conventions of storing any matrix in the memory.
➢ Row-major order
➢ Column-major order
Row Major Order:
In row major order, the elements of a matrix are stored on a row-by-row basis, ie,
all the elements in the first row, then in second row and so on.
Example:
If an array A has 3 x 3 order
The following formula is used to find the address of any element in 2-Dimensional
array.
ADDRESS A[i][j] = Base(A) + W [ N(i-LB) + (j-LB) ]
Where,
Base(A) - Base Address of an Array A
W - Data type size
N - Number of columns
LB - Lower bound
i, j - Subscripts
Example:
To find the address of A[1][2],
Where,
i=1
j=2
N=3
ADDRESS A[1][2] = 4000 + 2 [ 3(1-0) + (2-0) ]
= 4000 + 2 [ 3(1) + 2 ]
= 4000 + 2 [3 + 2]
= 4000 + 2 [5]
= 4000 + 10
= 4010
Column Major Order:
In column major order, all elements are stored column by column, ie, all elements
in the first column are stored then in second column, third column and so on.
The following formula is used to find the address of any element in 2-Dimensional
array.
ADDRESS A[i][j] = Base(A) + W [ M(j-LB) + (i-LB) ]
Where,
Base(A) - Base Address of an Array A
W - Data type size
M - Number of rows
LB - Lower bound
i, j - Subscripts
Example:
To find the address of A[1][2],
Where, i = 1
j=2
M=3
ADDRESS A[1][2] = 6000 + 2 [ 3(2-0) + (1-0) ]
= 6000 + 2 [ 3(2) + 1 ]
= 6000 + 2 [6 + 1]
= 6000 + 2 [7]
= 6000 + 14
= 6014
2.6.1.1 SPARSE MATRICES:
Matrices with relatively high proportions of zero elements are called sparse
matrices.
Let A be a sparse matrix given below.
There exist 6 rows and 6 columns, totally 36 elements. There exists 8 non-zero
entries and the remaining 28 entries are zeros.
Example:
Convert the following sparse matrix into alternate form
Solution:
In the above sparse matrix, there exists only 5 non-zero entries.
t = Number of non-zero entries
t=5
In the first row, the elements 5, 4 and 5 indicate the number of rows, the number of
columns and the number of non-zero entries in the given sparse matrix.
2.6.1.2 Types of sparse matrix:
Storing this n dimensional array in memory, any element can be referenced using
the following formula:
Here N1, N2, N3, N4 are the nodes in the list. HEADER is an empty node used only
to store a pointer to the first node N1; thus, if one knows the address of the HEADER
node, the next node can be traced from the link field of this node, and so on.
A single linked list can be traversed from left to right only; that is why a single
linked list is also called a one-way list.
2.8.1 Representation of a Linked List in memory:
There are two ways to represent a linked list in memory.
Static representation using Array
Dynamic representation using free pool of storage
Static representation:
In static representation of a single linked list, two arrays are maintained, one array
for data and the other for links.
Two parallel arrays of equal size are allocated which should be sufficient to store
the entire linked list.
Dynamic representation:
In this method, there is a memory bank (collection of free memory spaces) and
memory manager (program).
During the creation of a linked list, when a node is required the request is placed
to the memory manager, free memory manager will search the memory bank, if found,
grants the desired block to the caller.
There is also another program called the garbage collector; it comes into play
whenever a node is no longer in use and returns the unused node to the memory bank.
Such memory management is known as dynamic memory management.
A list of available memory spaces is stored in AVAIL; for a request of a node, the
list AVAIL is searched for a block of the right size.
If AVAIL is null or if the block of desired size is not found, the memory manager
will return a message accordingly.
Suppose the block is found and let it be XY then the memory manager will return
the pointer of XY to the caller in a temporary buffer i.e., NEW.
The newly availed node XY then can be inserted at any position in the linked list
by changing the pointers of the concerned nodes.
The pointers which are required to be manipulated while returning a node are
shown with dotted arrows.
2.8.2 OPERATIONS ON A SINGLE LINKED LIST:
The operations on a single linked list are,
❖ Traversing the list
❖ Inserting a node into the list
❖ Deleting a node from the list
❖ Copying a list to make a duplicate of it
❖ Merging the linked list with another one to make a larger list
❖ Searching for an element in the list
Traversing a Single Linked List:
In traversing a single linked list, we visit every node in the list starting from the
first node to the last node.
Algorithm:
Step 1: ptr = HEADER➔LINK
Step 2: While (ptr ≠ NULL) do
Algorithm:
Step 1: new = GetNode(NODE)
Step 2: If(new = NULL) then
Step 3: Print “Memory underflow: No insertion”
Step 4: Exit
Step 5: Else
Step 6: new➔LINK = HEADER➔LINK
Step 7: new➔DATA = X
Step 8: HEADER➔LINK = new
Step 9: End if
Step 10: Stop
The above algorithm is used to insert a node at the front of a single linked list.
Inserting a node at the end of the Single Linked List:
In this case, a node will be inserted at the end of a linked list.
Algorithm:
Step 1: new = GetNode (NODE)
Step 2: If (new = NULL) then
Step 3: Print “Memory is insufficient: Insertion is not possible”
Step 4: Exit
Step 5: Else
Step 6: ptr = HEADER
Step 7: While (ptr➔LINK ≠ NULL) do
Step 8: ptr = ptr➔LINK
Step 9: End while
Step 10: ptr➔LINK = new
Step 11: new➔DATA = X
Step 12: End if
Step 13: Stop
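A hedged C sketch of the same insertion at the end of a single linked list is given below; the node layout (a data field and a link field) and the use of a header node follow the notes, while the function name is only illustrative.

    #include <stdlib.h>

    struct node { int data; struct node *link; };

    /* Insert a new node carrying x at the end of the list. header is an empty
       anchor node whose link field points to the first data node. */
    void insert_end(struct node *header, int x)
    {
        struct node *new_node = malloc(sizeof *new_node);
        if (new_node == NULL)
            return;                       /* memory is insufficient: no insertion */
        new_node->data = x;
        new_node->link = NULL;

        struct node *ptr = header;
        while (ptr->link != NULL)         /* walk to the last node */
            ptr = ptr->link;
        ptr->link = new_node;             /* attach the new node at the end */
    }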
Inserting a node into a Single Linked List at any position in the list:
Algorithm:
Step 1: new = GetNode (NODE)
Step 2: If (new = NULL) do
Step 3: Print “Memory is insufficient: No insertion is possible”
Step 4: Exit
Step 5: Else
Step 6: ptr = HEADER
Step 7: While (ptr➔DATA ≠ KEY) and (ptr➔LINK ≠ NULL) do
Step 8: ptr = ptr➔LINK
Step 9: End while
Step 10: If (ptr➔LINK = NULL) then
Step 11: Print “KEY is not available in the list”
Algorithm:
Step 1: ptr = HEADER➔LINK
Step 2: If (ptr = NULL) then
Step 3: Print “The list is empty: No deletion”
Step 4: Exit
Step 5: Else
Step 6: ptr1 = ptr➔LINK
Step 7: HEADER➔LINK = ptr1
Step 8: ReturnNode(ptr)
Step 9: End if
Step 10: Stop
Algorithm:
Step 1: ptr = HEADER
Step 2: If (ptr➔LINK = NULL) then
Step 3: Print “The list is empty: No deletion”
Step 4: Exit
Step 5: Else
Step 6: While (ptr➔LINK ≠ NULL) do
Step 7: ptr1 = ptr
Step 8: ptr = ptr➔LINK
Step 9: End while
Step 10: ptr1➔LINK = NULL
Step 11: ReturnNode(ptr)
Step 12: End if
Step 13: Stop
Deleting the node at any position of a Single Linked List:
Algorithm:
Step 1: ptr1 = HEADER
Step 2: ptr = ptr1➔LINK
Step 3: While (ptr ≠ NULL) do
Step 4: If (ptr➔DATA ≠ KEY) then
Step 5: ptr1 = ptr
Merging can be done by setting the pointer of the link field of the last node in the
list L1 with the pointer of the first node in L2.
Algorithm:
Step 1: ptr = HEADER1
Step 2: While (ptr➔LINK ≠ NULL) do
Step 3: ptr = ptr➔LINK
Step 4: End while
Step 5: ptr➔LINK = HEADER2➔LINK
Step 6: ReturnNode(HEADER2)
Step 7: HEADER = HEADER1
Step 8: Stop
Searching for an element in a Single Linked List:
It is used to search an item in a single linked list.
Algorithm:
Step 1: ptr = HEADER➔LINK
Step 2: flag = 0, LOCATION = NULL
Step 3: While (ptr ≠ NULL) and (flag = 0) do
Step 4: If (ptr➔DATA = KEY) then
Step 5: flag = 1
Step 6: LOCATION = ptr
Step 7: Print “Search is successful”
Step 8: Return(LOCATION)
Step 9: Else
Step 10: ptr = ptr➔LINK
Step 11: End if
Each node, except the header node and the last node, points to both its immediate
predecessor and its immediate successor.
2.10.1 Operations on a Double Linked List:
1. Inserting a node into a double linked list:
i. Inserting a node at the front of the list
ii. Inserting a node at the end of the list
iii. Inserting a node at any position of the list
2. Deleting a node from a double linked list
Algorithm:
Step 1: ptr = HEADER➔RLINK
Step 2: new = GetNode(NODE)
Step 3: If (new ≠ NULL) then
Step 4: new➔LLINK = HEADER
Step 5: HEADER➔RLINK = new
Step 6: new➔RLINK = ptr
Step 7: ptr➔LLINK = new
Step 8: new➔DATA = X
Step 9: Else
Step 10: Print “Unable to allocate memory: No insertion”
Step 11: End if
Step 12: Stop
Inserting a node at the end of a Double Linked List:
Algorithm:
Step 1: ptr = HEADER
Step 2: While (ptr➔RLINK ≠ NULL) do
Step 3: ptr = ptr➔RLINK
Algorithm:
Step 1: ptr = HEADER
Step 2: While (ptr➔DATA ≠ KEY) and (ptr➔RLINK ≠ NULL) do
Step 3: ptr = ptr➔RLINK
Step 4: End while
Step 5: new = GetNode(NODE)
Step 6: If (new = NULL) then
Step 7: Print “Memory is not available”
Step 8: Exit
Step 9: Else
Step 10: If (ptr➔RLINK = NULL) then
Step 11: new➔LLINK = ptr
Step 12: ptr➔RLINK = new
Step 13: new➔RLINK = NULL
Step 14: new➔DATA = x
Algorithm:
Step 1: ptr = HEADER➔RLINK
Step 2: If (ptr = NULL) then
Step 3: Print “List is empty: No deletion is possible”
Step 4: Exit
Step 5: Else
Step 6: ptr1 = ptr➔RLINK
Step 7: HEADER➔RLINK = ptr1
Step 8: If (ptr1 ≠ NULL) then
Step 9: ptr1➔LLINK = HEADER
Step 10: End if
Step 11: ReturnNode (ptr)
Step 12: End if
Step 13: Stop
Algorithm:
Step 1: ptr = HEADER
Step 2: While (ptr➔RLINK ≠ NULL) do
Step 3: ptr = ptr➔RLINK
Step 4: End while
Step 5: If (ptr = HEADER) then
Step 6: Print “List is empty: No deletion”
Step 7: Exit
Step 8: Else
Step 9: ptr1 = ptr➔LLINK
Step 10: ptr1➔RLINK = NULL
Step 11: ReturnNode (ptr)
Step 12: End if
Step 13: Stop
Deleting a node from any position of a Double Linked List:
Algorithm:
Step 1: ptr = HEADER➔RLINK
Step 2: If (ptr = NULL) then
Step 3: Print “List is empty: No deletion”
Step 4: Exit
Step 5: End if
Step 6: While (ptr➔DATA ≠ KEY) and (ptr➔RLINK ≠ NULL) do
Step 7: ptr = ptr➔RLINK
Circular linked lists have certain advantages over an ordinary linked list. They are:
Accessibility of a member node in the list
In an ordinary list, a member node is accessible only from a particular node, that is,
from the header node only. But in a circular linked list, every member node is accessible
from any node.
Null link problem
The null value in the link field may create problems during the execution of
programs; this is illustrated by two algorithms that perform search on an ordinary linked
list and on a circular linked list.
Disadvantage:
One main disadvantage is that without adequate care in processing, it is possible
to get trapped into an infinite loop. This problem occurs when we are unable to detect the
end of the list while moving from one node to the next.
2.11 APPLICATIONS OF LINKED LIST:
➢ Sparse Matrix Manipulation
➢ Polynomial Representation
➢ Dynamic Storage Management
2.11.1 SPARSE MATRIX MANIPULATION:
A sparse matrix is a two-dimensional array, where the majority of the elements
have the value NULL.
Structure of a node to represent sparse matrices,
The fields i and j store the row and column numbers for a matrix element.
DATA field stores the matrix element at the ith row and jth column.
The ROWLINK points the next node in the same row and COLLINK points the
next node in the same column.
To illustrate, a sparse matrix of order 6 x 5 is assumed.
Here CH1, CH2, CH3, CH4, CH5 are the 5 column headers, heading 5 columns. RH1, RH2,
RH3, RH4, RH5 and RH6 are the 6 row headers, heading 6 rows. HEADER is one additional
header node that keeps the starting address of the sparse matrix.
Algorithm for create Sparse Matrix_LL:
Step 1: Read m, n
Step 2: HEADER = GetNode (NODE)
Step 3: If (HEADER = NULL) then
Step 4: Print “Non availability of storage space: Quit”
Step 5: Exit
Step 6: Else
Step 7: HEADER➔i = m
Step 8: HEADER➔j = n
Step 9: HEADER➔ROWLINK = HEADER
Step 10: HEADER➔COLLINK = HEADER
Step 11: HEADER➔DATA = NULL
Step 12: ptr = HEADER
In the single linked list representation, a node should have three fields: COEFF,
EXP and a LINK.
Single linked list representation of the polynomial P(x) = 3x^8 – 7x^6 + 14x^3 + 10x – 5
would be stored as,
POLYNOMIAL ADDITION:
In order to add two polynomials say P and Q to get a resultant polynomial R.
There may arise three cases during the comparison between the terms of two
polynomials.
Case 1: The exponents of two terms are equal. In this case the coefficients in the two
nodes are added and a new term is created.
Rptr➔Coeff = Pptr➔Coeff + Qptr➔Coeff and
Rptr➔Exp = Pptr➔Exp
Case 2: Pptr➔Exp > Qptr➔Exp, i.e., the exponent of the current term in P is greater than the
exponent of the current term in Q. In this case, a duplicate of the current term in P
is created and inserted in the polynomial R.
Case 3: Pptr➔Exp < Qptr➔Exp, i.e., the exponent of the current term in P is less than the
exponent of the current term in Q. In this case, a duplicate of the current term in
Q is created and inserted in the polynomial R.
Algorithm for adding two polynomials:
Step 1: Pptr = PHEADER➔LINK, Qptr = QHEADER➔LINK
Step 2: RHEADER = GetNode(NODE)
Step 3: RHEADER➔LINK = NULL, RHEADER➔EXP = NULL,
RHEADER➔COEFF = NULL
Step 4: Rptr = RHEADER
Step 5: While (Pptr ≠ NULL) and (Qptr ≠ NULL) do
There are two memory management schemes for the storage allocations of data.
➢ Static Storage Management
➢ Dynamic Storage Management
In static storage management scheme, the net amount of memory required for
various data for a program is allocated before the start of the execution of the program.
Once memory is allocated, it can neither be extended nor be returned to the
memory bank.
The dynamic storage management scheme allows the user to allocate and
deallocate the memory during the execution of the program.
The dynamic storage management scheme is suitable in multiprogramming as
well as single user environment.
Allocation schemes:
There are two strategies for allocation,
❖ Fixed block allocation
❖ Variable block allocation
There are four strategies under variable block allocation,
i. First fit
ii. Next fit
iii. Best fit
iv. Worst fit
Deallocation schemes:
i. Random deallocation
ii. Ordered deallocation
UNIT - 3
STACKS
3.1 STACK:
A stack is an ordered collection of homogeneous data elements where the insertion
and deletion operations take place at only one end called as Top of the Stack, that is data
is stored and retrieved in Last In First Out (LIFO) order.
The insertion and deletion operations in the case of a stack are specially termed as
PUSH and POP respectively.
The position of the stack where these operations are performed is known as the top
of the stack.
An element in a stack is termed as ITEM. The maximum number of elements that
a stack can accommodate is termed SIZE.
Examples of stacks:
➢ Trains in a railway station
➢ Goods in a cargo
➢ Plates on a tray
3.2 REPRESENTATION OF STACK:
There are two main ways to represent a stack.
➢ Using a one-dimensional array
➢ Using linked list
3.2.1 Array representation of stacks:
First we have to allocate a memory block of sufficient size to accommodate the full
capacity of the stack then starting from the first location of the memory block, the items of
the stack can be stored in a sequential fashion.
Item i denotes the ith item in the stack; L and U denote the index range of the array
in use. TOP is a pointer that points to the top position of the array. With this representation,
the following two states can be stated:
EMPTY: Top < L
FULL: Top ≥ U
3.2.2 Linked List representation of stacks:
Array representation of stack is very easy and convenient but it allows the
representation of only fixed sized stacks. In several applications, the size of the stack may
vary during program execution. A solution to this problem is to represent a stack using a
linked list. A single linked list structure is sufficient to represents any stack. The DATA
field is for the ITEM and LINK field is to point the next item.
In the linked list representation, the first node on the list is the current item that is
the item at the top of the stack and the last node is the node containing the bottom most
item. PUSH operation will add a new node in the front and POP operation will remove a
node from the front of the list. The SIZE of the stack is not important here, because this
representation allows dynamic stacks instead of static stacks.
3.3 OPERATIONS ON STACKS:
The basic operations required to manipulate a stack are,
PUSH: To insert an item into a stack
POP: To remove an item from a stack
STATUS: To know the present state of a stack
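A minimal C sketch of PUSH and POP for the linked list representation of a stack (section 3.2.2) is given below; the node layout and the -1 value returned on underflow are assumptions made only for illustration.

    #include <stdlib.h>

    struct snode { int item; struct snode *link; };

    /* PUSH: the new node becomes the first node, i.e. the top of the stack. */
    void push(struct snode **top, int item)
    {
        struct snode *n = malloc(sizeof *n);
        if (n == NULL) return;             /* no free node: push fails */
        n->item = item;
        n->link = *top;
        *top = n;
    }

    /* POP: remove the first node and return its item (-1 signals underflow). */
    int pop(struct snode **top)
    {
        if (*top == NULL) return -1;       /* stack is empty */
        struct snode *n = *top;
        int item = n->item;
        *top = n->link;
        free(n);
        return item;
    }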
3.4.5 Recursion:
In recursion, stack is used to store the current value and return address of the
function call.
At each recursive procedure call, the stack is pushed to save the necessary values;
the stack is popped to restore the saved values of the preceding level.
Implementation of Recursion:
Recursion is the process of a function calling itself, either directly or through
another function that in turn calls the original function.
Example:
Calculation of the factorial value for an integer n.
n! = n x (n-1) x (n-2) x …. x 3 x 2 x 1
n! = n x (n-1)!
These two types of definitions are expressed as,
i) Iterative
ii) Recursive
Iterative definition of factorial
Algorithm:
Input: an integer number N
Output: the factorial value of N, i.e. N!
Step1: fact = 1
Step2: for i = 1 to N do
Step3: fact = i * fact
Step4: end for
Step5: return (fact)
Step6: stop
Recursive definition of factorial
Algorithm:
Input: an integer number N
Output: the factorial value of N, that is N!
Step1: if (N = 0) then
Step2: fact = 1
Step3: else
Here, it is required to push the intermediate calculations until the terminal condition
is reached. Here, steps 1 to 6 are PUSH operations, and in steps 7 to 11 subsequent POP
operations evaluate the intermediate calculations until the stack is empty.
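Both definitions of the factorial translate directly into C; the sketch below uses long for the result and is meant only as an illustration of the iterative and recursive forms.

    /* Iterative definition of N! */
    long fact_iter(int n)
    {
        long fact = 1;
        for (int i = 1; i <= n; i++)
            fact = fact * i;
        return fact;
    }

    /* Recursive definition of N!: each call is pushed on the run-time stack
       until the terminal condition n == 0 is reached, then the calls are
       popped and the intermediate products are combined. */
    long fact_rec(int n)
    {
        if (n == 0)
            return 1;
        return n * fact_rec(n - 1);
    }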
3.5 QUEUES:
A queue is an ordered collection of homogeneous data elements, in which deletion
can take place only at one end called the FRONT and insertion can take place only at the
other end called REAR.
Data in a queue is processed in the same order as it entered, that is, on a first-in,
first-out basis; this is why a queue is also termed first-in-first-out (FIFO).
A queue is also a linear data structure like a stack. The only difference between a
stack and a queue is that in the case of stack insertion and deletion (Push & Pop)
operations are at one end (top) only, but in a queue insertion(called ENQUEUE) and
deletion (called DEQUEUE) operations take place at two ends called the REAR and
FRONT of the queue.
An element in a queue is termed ITEM; the number of elements that a queue can
accommodate is termed as LENGTH.
With this representation, two pointers namely, FRONT and REAR are used to
indicate the two ends of queue.
The pointers FRONT and REAR point the first node and the last node in the list.
The two states of the queue are,
Queue is empty
FRONT=REAR=HEADER
HEADER→RLINK=NULL
Queue contains at least one element
Header→RLINK≠NULL
3.7 OPERATIONS ON QUEUE:
3.7.1 Operations on Queue Using Array:
Suppose the current state of the queue is FRONT=2, REAR=5.
Inserting:
Deleting:
2. If (FRONT=NULL)then
3. Print "Queue is empty"
4. Exit
5. Else
6. FRONT1=FRONT→RLINK
7. HEADER→RLINK=FRONT1
8. FRONT1→LLINK=HEADER
9. Endif
10. Return Node(FRONT)
11. Endif
12. Stop
3.8 VARIOUS QUEUE STRUCTURES:
3.8.1 CIRCULAR QUEUE:
In an ordinary queue, when the REAR pointer reaches the end, insertion is refused
even if space is available at the front. So the disadvantage of an ordinary queue is wastage
of memory. One way to avoid this is to use a circular array.
A circular array is the same as an ordinary array, i.e., A[1...N], but logically it
implies that A[1] comes after A[N], that is, after A[N], A[1] appears.
Both pointers will move in the clockwise direction. This is controlled by the MOD
operation.
For example,
If the current pointer is at i, then the shift to the next location will be (i MOD LENGTH) + 1.
With this principle, the two states of the queue, empty and full, can be detected.
9. Else
10. FRONT=(FRONT MOD LENGTH) + 1
11. Endif
12. Endif
13. Stop
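A hedged C sketch of a circular queue using the above MOD rule is given below; LENGTH, the 1-based indexing and the convention FRONT = 0 for an empty queue follow the notes, while the function names are only illustrative.

    #define LENGTH 10

    int CQ[LENGTH + 1];        /* CQ[1..LENGTH], as in the notes          */
    int FRONT = 0, REAR = 0;   /* FRONT = 0 means the queue is empty      */

    /* Enqueue: move REAR forward circularly; refuse when the next slot is FRONT. */
    void enqueue(int item)
    {
        if (FRONT == 0) {                      /* first element           */
            FRONT = REAR = 1;
            CQ[REAR] = item;
        } else {
            int next = (REAR % LENGTH) + 1;    /* i MOD LENGTH + 1        */
            if (next == FRONT)
                return;                        /* queue is full           */
            REAR = next;
            CQ[REAR] = item;
        }
    }

    /* Dequeue: take the item at FRONT and move FRONT forward circularly. */
    int dequeue(void)
    {
        if (FRONT == 0) return -1;             /* queue is empty          */
        int item = CQ[FRONT];
        if (FRONT == REAR)                     /* last element removed    */
            FRONT = REAR = 0;
        else
            FRONT = (FRONT % LENGTH) + 1;
        return item;
    }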
3.8.2 DEQUE:
A deque is a linear list where both insertion and deletion operations can be made
at either end of the structure. The term deque originates from double-ended queue.
A Deque Structure:
7. Ahead = FRONT - 1
8. Endif
9. If(ahead=REAR)then
10. Print "Deque is full"
11. Exit
12. Else
13. FRONT=ahead
14. DQ[FRONT]=ITEM
15. Endif
16. Endif
17. Stop
Algorithm Pop-DQ:
Steps:
1. If (FRONT=0)then
2. Print "queue is empty"
3. Exit
4. Else
5. ITEM=CQ[FRONT]
6. If(FRONT=REAR)then
7. FRONT=0
8. REAR=0
9. Else
10. FRONT=(FRONT MOD LENGTH)+1
11. Endif
12. Endif
13. Stop
Algorithm Inject
Steps:
1. If (FRONT=0)then
2. FRONT=1
3. REAR=1
4. CQ[FRONT]=ITEM
5. Else
6. next =(REAR MOD LENGTH)+1
7. If(next≠FRONT)then
8. REAR=next
9. CQ[REAR]=ITEM
10. Else
11. Print "Queue is full"
12. Endif
13. Endif
14. Stop
Algorithm Eject-DQ
Steps:
1. If (FRONT=0)then
2. Print "Deque is empty"
3. Exit
4. Else
5. If (FRONT=REAR)then
6. ITEM=DQ[REAR]
7. FRONT=REAR=0
8. Else
9. If (REAR=1)then
10. ITEM=DQ[REAR]
11. REAR=LENGTH
12. Else
13. If (REAR=LENGTH)then
14. ITEM=DQ[REAR]
15. REAR=1
16. Else
17. ITEM=DQ[REAR]
18. REAR=REAR-1
19. End if
20. End if
21. End if
22. End if
23. Stop
There are two variations of deque
Input-restricted deque
Output-restricted deque
Input-Restricted Deque:
In this case, deque allows insertion at one end only, but allows deletions at both
ends.
Example:
LLINK and RLINK are two usual link fields, DATA to store the actual content and
PRIORITY is to store priority value of the item.
With this structure, to delete an item having priority P, the list will be searched
starting from the node under pointer REAR and the first occurring node with
PRIORITY=P will be deleted.
Similarly, to insert a node containing an item with priority P, the search will begin
from the node under the pointer FRONT and the node will be inserted before a node
found first with priority value p or if not found then before a node with the next priority
value.
Algorithm Insert-PQ
Steps:
1. ptr=HEADER
2. new=GetNode(NODE)
3. new→DATA=ITEM
4. new→PRIORITY=P
5. while(ptr→RLINK≠NULL)and(ptr→PRIORITY<P)do
6. ptr=ptr→RLINK
7. Endwhile
8. If (ptr→RLINK=NULL)then
9. ptr→RLINK=new
10. new→LLINK=ptr
11. new→RLINK=NULL
12. REAR=new
13. Else
14. If(ptr→priority≥P)then
15. ptr1=ptr→LLINK
16. ptr1→RLINK=new
17. new→RLINK=ptr
18. ptr→LLINK=new
19. new→LLINK=ptr1
20. Endif
21. Endif
22. FRONT=HEADER→RLINK
23. STOP
Algorithm Delete-PQ:
Steps:
1. If (REAR=NULL)then
2. Print "Queue is empty"
3. Exit
4. Else
5. ptr=REAR
6. While(ptr→PRIORITY>P)and(ptr≠HEADER)do
7. ptr=ptr→LLINK
8. Endwhile
9. If (ptr=HEADER)or(ptr→PRIORITY<P)
10. Print "No item with priority", P
11. Exit
12. Else
13. If (ptr→priority=p)then
14. ptr1=ptr→LLINK
15. ptr2=ptr→RLINK
16. If(ptr=REAR)
17. REAR=ptr1
18. ptr1→RLINK=NULL
19. Else
20. ptr1→RLINK=ptr2
21. ptr2→LLINK=ptr1
22. Endif
23. Endif
24. Endif
25. item=ptr→DATA
26. ReturnNode(ptr)
27. Endif
28. Stop
3.9 APPLICATIONS OF QUEUES:
• Simulation is modeling of a real-life problem (or) it is the model of a real-life
situation in the form of a computer program.
• CPU scheduling in a multiprogramming environment
• Round Robin Algorithm: The Round Robin (RR) algorithm is a well-known
scheduling algorithm and is designed especially for time sharing systems.
UNIT - 4
TREES
4.1 DEFINITION:
A tree is a finite set of one or more nodes such that,
i. There is a specially designated node called the root.
ii. The remaining nodes are collection of sub trees.
Binary Tree:
A binary tree is a special form of a tree.
A binary tree T can also be defined as a finite set of nodes
such that,
i. T is empty, or
ii. T contains a specially designated node called the root of T, and the remaining nodes
of T form two disjoint binary trees T1 and T2, which are called the left sub-tree and
the right sub-tree.
Node:
A node of a tree stores the actual data and link to the other node.
Parent:
The parent of a node is the immediate predecessor of a node.
Fig (a)
Leaf:
A node which is at the end and does not have any child is called a leaf node. In the
above fig (a), H, I, K, L and M are the leaf nodes.
A leaf node is also called terminal node.
Level:
Level is the rank in the hierarchy. The root node has level 0.
If a node is at level L, then its child is at level L + 1 and the parent is at level L - 1.
Height:
The maximum number of nodes that is possible in a path starting from the root
node to a leaf node is called the height of a tree.
The longest path is A-C-F-J-M and hence the height of this tree is 5. (Fig (a))
Degree:
The maximum number of children that is possible for a node is known as the
degree of a node. For example, the degree of each node of the tree is 2. (Fig (a))
Siblings:
The nodes which have the same parent are called sibling.
For example, J & K are siblings.
Path:
A sequence of consecutive edges is called a path. The path from the root node A to M is A-
C-F-J-M, and the length of this path is 5. (Fig (a))
4.3 REPRESENTATION OF BINARY TREE:
There are the two common methods used for representing this conceptual
structure.
1. Sequential (linear) or Array Representation of a Binary tree.
2. Linked representation of a binary tree.
4.3.1 Linear Representation of Binary Tree:
This type of representation is static, in that a block of memory for an array is allocated
before storing the actual tree in it, and once the memory is allocated, the size of the tree is
restricted.
The representation is as follows:
1. The binary tree root node is stored at location 1.
2. The remaining nodes left and right are stored in following way.
The array blocks 8, 9, 10, 11, 12 and 13 are blank because the nodes A, B and C do not
have children.
Advantages:
1. Any node can be accessed from any other node by calculating the index and this is
efficient from execution point of view.
2. Only data are stored without any pointer.
3. It is efficient and convenient representation.
Disadvantages:
1. Other than the full binary tree, the majority of the array entries may be empty.
2. It allows only static representation; it is in no way possible to enhance the tree
structures if the array is limited.
3. Inserting a new node to the tree or deleting a node from it is inefficient with this
representation.
4. When the size of the tree structure is unpredictable, this representation, which uses
static allocation, leads to wastage of memory space.
4.3.2 Linked Representation of Binary Tree:
The linked representation of a binary tree is similar to the way in which a doubly linked
list is represented in memory; that is, the tree is maintained in memory by means of
linked nodes, and each node has three fields.
Advantages:
1. The linked representation of a binary tree makes efficient use of computer storage and
computer time.
2. The insertion and deletion operation may be performed more easily.
3. When the size of a tree structure is unpredictable, the linked allocation technique is
suitable, because it allocates memory space dynamically.
4.4 OPERATION ON BINARY TREE:
The major operations on a binary tree can be listed as follows:
1. Insertion
2. Deletion
3. Traversal
4. Merge
4.4.1 Insertion:
With this operation, a new node can be inserted into any position in a binary tree.
26. Endif
27. Stop
Algorithm Search-Seq:
Steps:
1. i=INDEX
2. If (A[i]≠KEY)then
3. If (2*i ≤ SIZE) then
4. Search_SEQ(2*i,KEY)
5. Else
6. If (2*i+1 ≤ SIZE) then
7. Search_SEQ(2*i+1,KEY)
8. Else
9. Return(0)
10. Endif
11. Endif
12. Else
13. Return(i)
14. Endif
15. Stop
Algorithm InsertBinaryTree-Link:
1. ptr=Search_LINK(ROOT,KEY)
2. If(ptr=NULL)then
3. Print "Search is unsuccessful:No insertion"
4. Exit
5. Endif
6. If (ptr→LC=NULL)or(ptr→RC=NULL)
7. Read option to insert as left (L) or right(R) child
8. If (option = L) then
9. If(ptr→LC=NULL)then
10. new=GetNode(NODE)
11. new→DATA=ITEM
12. new→LC=new→RC=NULL
13. ptr→LC=new
14. Else
15. Print "Insertion is not possible as left child"
16. Exit
17. Endif
18. Else
19. If (ptr→RC=NULL)
20. new=GetNode(NODE)
21. new→DATA=ITEM
22. new→LC=new→RC=NULL
23. ptr→RC=new
24. Else
25. Print "Insertion is not possible as right child"
26. Exit
27. Endif
28. Else
29. Print "The key node already has child"
30. Endif
31. Endif
32. Stop
Algorithm Search_Link
1. ptr=PTRO
2. If (ptr→DATA≠KEY)
3. If(ptr→LC≠NULL)
4. Search_LINK(ptr→LC)
5. Else
6. Return(0)
7. Endif
8. If (ptr→RC≠NULL)
9. Search_LINK(ptr→RC)
10. Else
11. Return(0)
4.4.2 Deletion:
This operation is used to delete any node from any non-empty binary tree.
3. ptr1=ptr→LC, ptr2=ptr→RC
4. If(ptr1≠NULL)
5. Searchparent(ptr1)
6. Else
7. Parent=NULL
8. Endif
9. If(ptr2=NULL)then
10. Searchparent(ptr2)
11. Else
12. Parent=NULL
13. Endif
14. Else
15. Return(parent)
16. Endif
17. Stop
4.4.3 Traversal:
This operation is used to visit each node in the tree exactly once.
A full traversal on a binary tree gives a linear ordering of the data in the tree.
The traversal of a tree is performed in three different ways
i. Preorder
ii. Inorder
iii. Postorder
Each of these methods involves visiting the root and traversing its left and right
sub-trees.
Preorder Traversal:
In this traversal, the root is visited first, then the left sub-tree, and then the
right sub-tree, that is,
❖ Visit the root node R
❖ Traverse the left sub-tree of R
❖ Traverse the right sub-tree of R
R Tl Tr
Example1:
Preorder Traversal: + - A B * C / D E
Example2:
Preorder Traversal: 18, 16, 11, 14, 81, 64, 26, 73, 143
Algorithm For Preorder:
Steps:
1. ptr=ROOT
2. If(ptr≠NULL)then
3. Visit(ptr)
4. Preorder(ptr→LC)
5. preorder(ptr→RC)
6. Endif
7. Stop
Inorder Traversal:
With this traversal, before visiting the root node, the left sub-tree of the root node is
visited, then the root node is visited, and after the visit of the root node the right sub-tree
of the root node is visited, i.e.,
❖ Traverse the left sub-tree of the root node R.
❖ Visit the root node R.
❖ Traverse the right sub-tree of the root node R.
Tl R Tr
Example1:
Inorder Traversal: A – B + C * D / E
Example2:
Inorder Traversal: 11, 14, 16, 18, 26, 64, 73, 81, 143
Algorithm For Inorder:
Steps:
1. ptr=ROOT
2. If(ptr≠NULL)then
3. Inorder(ptr→LC)
4. Visit(ptr)
5. Inorder(ptr→RC)
6. Endif
7. Stop
Postorder Traversal:
In this case, the root node is visited in the end, that is first visit the left sub-tree,
then the right sub-tree and finally the root node.
❖ Traverse the left sub-tree of the root R.
❖ Traverse the right sub-tree of the root R.
❖ Visit the root node R.
Tl Tr R
Example1:
Postorder Traversal: A B – C D E / * +
Example2:
Postorder Traversal: 14, 11, 16, 26, 73, 64, 143, 81, 18
Algorithm For Postorder:
Steps:
1. ptr=ROOT
2. If(ptr≠NULL)then
3. Postorder(ptr→LC)
4. postorder(ptr→RC)
5. Visit(ptr)
6. Endif
7. Stop
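The three traversals translate almost directly into C; the sketch below assumes a node with DATA, LC and RC fields as used in the algorithms above, and prints the data of each visited node.

    #include <stdio.h>

    struct tnode { int data; struct tnode *lc, *rc; };

    /* Preorder: root, left sub-tree, right sub-tree. */
    void preorder(const struct tnode *ptr)
    {
        if (ptr != NULL) {
            printf("%d ", ptr->data);
            preorder(ptr->lc);
            preorder(ptr->rc);
        }
    }

    /* Inorder: left sub-tree, root, right sub-tree. */
    void inorder(const struct tnode *ptr)
    {
        if (ptr != NULL) {
            inorder(ptr->lc);
            printf("%d ", ptr->data);
            inorder(ptr->rc);
        }
    }

    /* Postorder: left sub-tree, right sub-tree, root. */
    void postorder(const struct tnode *ptr)
    {
        if (ptr != NULL) {
            postorder(ptr->lc);
            postorder(ptr->rc);
            printf("%d ", ptr->data);
        }
    }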
4.4.4 Merging of Binary Tree:
This operation is applicable to trees which are represented using a linked structure.
There are two ways that this operation can be carried out.
Suppose T1 and T2 are two binary trees. T2 can be merged with T1 if all the nodes
from T2 are inserted into the binary tree T1 one by one.
There is another way, when the entire tree T2 (or) T1 can be included as a sub-tree
of T1 (or) T2.
Before performing the merge, we have to test for compatibility: if in both trees the
root node has both left and right sub-trees, then the merge will fail.
If T1 has its left sub-tree (or right sub-tree) empty, then T2 will be added as the left
(or right) sub-tree of T1.
T(n1+n2) = T1(n1) + T2(n2)
Where T is the resultant tree after merging T2 and T1
Algorithm
Steps:
1. If (ROOT1=NULL) then
2. ROOT=ROOT2
3. Exit
4. Else
5. If (ROOT2=NULL) then
6. ROOT=ROOT1
7. Exit
8. Else
9. If (ROOT1→LCHILD=NULL) then
10. Root1→LCHILD=ROOT2
11. ROOT=ROOT1
12. Else
13. If (ROOT1→RCHILD=NULL) then
14. ROOT1→RCHILD=ROOT2
15. ROOT=ROOT1
16. Else
Example
The value at N is greater than every value in the left sub-tree of N and is less than
every value in the right sub-tree of N.
Example
i) A binary search tree with numeric data ii) A binary search tree with alphabetic data
We start from the root node R; if ITEM is less than the value in the root node
R, we proceed to its left child; if ITEM is greater than the value in the node R, we
proceed to its right child. The process is continued till the ITEM is found or we
reach a dead end, that is, a leaf node.
Algorithm For Search-B.S.T
Steps:
1. Ptr = ROOT, flag = FALSE
6. Case:ITEM>ptr→DATA
7. Ptr1=ptr
8. Ptr=ptr→RCHILD
9. Case:ptr→DATA=ITEM
10. Flag=TRUE
11. Print”ITEM already exists”
12. Exit
13. Endwhile’
14. Endcase
15. If(ptr=NULL)then
16. New=Getnode(NODE)
17. new→DATA=ITEM
18. new→LCHILD=NULL
19. new→RCHILD=NULL
20. If (ptr1→DATA<ITEM)then
21. Ptr1→RCHILD=new
22. Else
23. Ptr1→LCHILD=new
24. Endif
25. Endif
26. Stop
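A compact C sketch of searching and inserting in a binary search tree, following the same left/right rule, is given below; the node layout and function names are assumptions made only for illustration.

    #include <stdlib.h>

    struct bnode { int data; struct bnode *lchild, *rchild; };

    /* Search: go left when item is smaller, right when it is larger. */
    struct bnode *bst_search(struct bnode *ptr, int item)
    {
        while (ptr != NULL && ptr->data != item)
            ptr = (item < ptr->data) ? ptr->lchild : ptr->rchild;
        return ptr;                      /* NULL when item is not present */
    }

    /* Insert: walk down as in search and attach a new leaf at the dead end. */
    struct bnode *bst_insert(struct bnode *root, int item)
    {
        if (root == NULL) {
            struct bnode *n = malloc(sizeof *n);
            if (n == NULL) return NULL;  /* memory is insufficient */
            n->data = item;
            n->lchild = n->rchild = NULL;
            return n;
        }
        if (item < root->data)
            root->lchild = bst_insert(root->lchild, item);
        else if (item > root->data)
            root->rchild = bst_insert(root->rchild, item);
        /* item == root->data: ITEM already exists, nothing to do */
        return root;
    }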
Deleting a Node from the Binary Search Tree:
Deletion of N can then be carried out under the various situations,
Case1: N is the leaf node
Case2: N has exactly one child
Case3: N has two children
Algorithm Delete-BST
Steps:
1. Ptr=ROOT,flag=FALSE
2. While (ptr≠NULL)and(flag=FALSE)do
3. Case:ITEM<ptr→DATA
4. Parent=ptr
5. Ptr=ptr→LCHILD
6. Case:ITEM>ptr→DATA
7. parent=ptr
8. Ptr=ptr→RCHILD
9. Case:ptr→DATA=ITEM
10. Flag=TRUE
11. Endcase
12. Endwhile
13. If (flag=FALSE) then
14. Print ”ITEM does not exist:No deletion”
15. Exit
16. Endif
17. If (ptr→LCHILD=NULL)and(ptr→RCHILD=NULL)then
18. Case=1
19. Else
20. If (ptr→LCHILD≠NULL)and(ptr→RCHILD≠NULL)then
21. Case=3
22. Else
23. Case=2
24. Endif
25. Endif
26. If (case=1)then
27. If(parent→LCHILD=ptr)then
28. parent→LCHILD=NULL
29. else
30. parent→RCHILD=NULL
31. endif
32. returnNode(ptr)
33. endif
34. If(case=2)then
35. If(parent→LCHILD=ptr)then
36. If(ptr→LCHILD=NULL)then
37. Parent→LCHILD=ptr→RCHILD
38. Else
39. parent→LCHILD=ptr→LCHILD
40. endif
41. else
42. If(parent→RCHILD=ptr)then
43. If(ptr→LCHILD=NULL)then
44. parent→RCHILD=ptr→RCHILD
45. else
46. parent→RCHILD=ptr→LCHILD
47. endif
48. endif
49. endif
50. return Node(ptr)
51. endif
52. If(case=3)
53. Ptr1=succ(ptr)
54. Item1=ptr1→DATA
55. Delete-BST(item1)
56. ptr→DATA=item1
57. Endif
58. Stop
4.5.3 HEAP TREES:
H is a complete binary tree. It will be termed a heap tree if it satisfies the following
properties:
✓ For each node N in H, the value at N is greater than or equal to the value of each of
the children of N.
✓ N has a value which is greater than or equal to the value of every successor of N.
✓ Such a heap tree is called a max heap.
Similarly, a min heap is possible, where any node N has a value less than or equal
to the value of any of the successors of N.
In a max heap, the root node contains the largest data, whereas in a min heap it
contains the smallest data.
The principle of insertion is that first we have to adjoin the data in the complete
binary tree. Next, we have to compare it with the data in its parent.
Algorithm Insert Max Heap:
Steps:
1. If(N ≥ SIZE)then
2. Print “insertion is not possible”
3. Exit
4. Else
5. N=N+1
6. A[N]=ITEM
7. i=N
8. p=i div 2
9. while (p>0) and (A[p] < A[i])do
10. temp=A[i]
11. A[i]=A[p]
12. A[p]=temp
13. i=p
14. p = p div 2
15. End while
16. Endif
17. Stop
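The same insertion can be sketched in C using an array A[1..SIZE], where the parent of A[i] is A[i div 2] as in the algorithm above; SIZE and the function name are only illustrative.

    #define SIZE 100

    int A[SIZE + 1];   /* heap stored in A[1..N]       */
    int N = 0;         /* current number of elements   */

    /* Insert item into a max heap: place it at the end of the complete binary
       tree, then move it up while it is larger than its parent. */
    void insert_max_heap(int item)
    {
        if (N >= SIZE)
            return;                    /* insertion is not possible */
        A[++N] = item;
        int i = N, p = i / 2;
        while (p > 0 && A[p] < A[i]) {
            int temp = A[i];           /* swap child with parent */
            A[i] = A[p];
            A[p] = temp;
            i = p;
            p = p / 2;
        }
    }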
Deletion of a Node from a Heap Tree:
Any node can be deleted from a heap tree, but deleting the root node has some
special importance.
Read the root node into a temporary storage ITEM.
Replace the root node by the last node in the heap tree, then reheap the tree as
stated below.
➢ Let the newly modified root node be the current node; compare its value with the
values of its two children.
➢ Let X be the child whose value is the largest. Interchange the value of X with the
value of the current node.
➢ Make X the current node.
➢ Continue the reheap as long as the current node is not a leaf node.
For Example,
The root node is 99. The last node is 26; it is in level 3, so 99 is replaced by 26,
and the node with data 26 is removed from the tree.
Next, 26 at the root node is compared with its two children 45 and 63. As 63 is
greater, they are interchanged. Now, 26 is compared with the children 57 and 42; as 57
is greater, they are interchanged. Now 26 appears as a leaf node, hence the reheap is
completed.
Algorithm Delete Max Heap:
Steps:
1. If(N=0)then
2. Print “heap deletion is not possible”
3. Exit
4. Endif
5. ITEM=A[1]
6. A[1]=A[N]
7. N=N-1
8. Flag=FALSE,i=1
9. While(flag=FALSE)and(i<N)do
10. lchild=2*i,rchild=2*i+1
11. If (lchild≤N)then
12. X=A[lchild]
13. Else
14. X=-∞
15. Endif
16. If(rchild≤N)then
17. Y=A[rchild]
18. Else
19. y=-∞
20. endif
21. If(A[i]≥x)and(A[i]≥y)then
22. Flag=TRUE
23. Else
24. If(x>y)and(A[i]<x)
25. Swap(A[i],A[lchild])
26. i=lchild
27. Else
28. If(y>x)and(A[i]<y)
29. Swap(A[i],A[rchild])
30. i=rchild
31. Endif
32. Endif
33. Endif
34. Endwhile
35. Stop
4.6 GRAPH & GRAPH TERMINOLOGIES:
Graph is another important non-linear data structure.
A graph G consists of two sets.
i. A set V called the set of all vertices (or nodes).
ii. A set E called the set of all edges (or arcs).
For Example,
Graph G1
Adjacent Vertices:
A vertex Vi is adjacent to another vertex say Vj if there is an edge from Vi to Vj.
Parallel Edges:
If there is more than one edge between the same pair of vertices, then they are
known as parallel edges.
Simple Graph:
A graph is called a simple graph if it does not have any self-loop or parallel edges.
Complete Graph:
A graph G is said to be complete if each vertex Vi is adjacent to every other vertex
Vj in G.
Acyclic Graph:
If there is a path containing one or more edges which starts from a vertex Vi and
terminates at the same vertex, then the path is known as a cycle. A graph that contains
no cycle is called an acyclic graph.
Isolated Vertex:
A vertex is isolated if there is no edge connected from any other vertex to the
vertex.
Degree Of Vertex:
The number of edges connected with vertex Vi is called the degree of vertex Vi and
is denoted by degree(Vi).
For a directed graph there are two degrees, the indegree and the outdegree.
The indegree of Vi, denoted indegree(Vi), is the number of edges incident into Vi.
Outdegree(Vi) is the number of edges starting from Vi.
i. Indegree(V1)=2, Outdegree(V1)=1
ii. Indegree(V2)=2, Outdegree(V2)=0
iii. Indegree(V3)=1, Outdegree(V3)=2
iv. Indegree(V4)=0, Outdegree(V4)=2
Pendant Vertex:
A vertex Vi is pendant if its indegree(Vi)=1 and outdegree(Vi)=0.
Connected Graph:
In a graph G, two vertices Vi and Vj are said to be connected if there is a path in G
from Vi to Vj.
4.7 REPRESENTATION OF GRAPHS:
A graph can be represented in many ways,
➢ Set representation
➢ Linked representation
➢ Sequential(matrix) representation
Types of Graphs (figure)
Representation of Graph G1 (figure)
4.8 OPERATIONS ON A GRAPH:
4.8.1 Insertion:
Inserting a vertex or an edge into a graph involves the steps described below; they differ for undirected and directed graphs.
In the case of inserting a vertex into an undirected graph, if Vx is inserted and Vi is an adjacent vertex of Vx, then Vi has to be added to the adjacency list of Vx and Vx has to be added to the adjacency list of Vi.
If it is a digraph and there is an edge from Vx to Vi, then we add a node for Vi to the adjacency list of Vx; if there is an edge from Vi to Vx, we add a node for Vx to the adjacency list of Vi.
Algorithm Insert Vertex_LL_DG:
Here X[1..m] are the vertices to which edges are to be established from the new vertex Vx, and Y[1..n] are the vertices from which edges come into Vx.
Steps:
1. N = N + 1, Vx = N
2. For i = 1 to m do
3. Let j = X[i]
4. If j ≥ N then
5. Print "No vertex labeled X[i] exists: edge from Vx to X[i] is not established"
6. Else
7. InsertEnd_SL(DGptr[N], X[i])
8. End if
9. End for
10. For i = 1 to n do
11. Let j = Y[i]
12. If j ≥ N then
13. Print "No vertex labeled Y[i] exists: edge from Y[i] to Vx is not established"
14. Else
15. InsertEnd_SL(DGptr[j], Vx)
16. End if
17. End for
18. Stop
Algorithm Insert Edge-LL_UG:
1. Let N=number of vertices in the graph.
2. If(Vi>N) or (Vj>N)then
3. Print “Edge is not possible between Vi and Vj”
4. Else
5. InsertEnd_SL(UGptr[Vi],Vj)
6. InsertEnd_SL(UGptr[Vj],Vi)
7. Endif
8. Stop
Algorithm Insert Edge-LL_DG:
Steps:
1. Let N=number of vertices in the graph.
2. If(Vi>N) or (Vj>N)then
3. Print “Edge is not possible between Vi and Vj”
4. Else
5. InsertEnd_SL(DGptr[Vi],Vj)
6. Endif
7. Stop
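A possible C realisation of the edge-insertion algorithms above is sketched below. InsertEnd_SL is realised as a helper that appends a node at the end of a vertex's adjacency list; the structure layout follows the representation sketch given earlier and the function names are assumptions, not code from the notes.

#include <stdio.h>
#include <stdlib.h>

struct Node { int label; struct Node *link; };

/* append vertex label v at the end of the list headed by *head (InsertEnd_SL) */
static void insert_end(struct Node **head, int v)
{
    struct Node *p = malloc(sizeof *p);
    p->label = v;
    p->link = NULL;
    if (*head == NULL) { *head = p; return; }
    struct Node *q = *head;
    while (q->link != NULL) q = q->link;
    q->link = p;
}

/* insert edge (vi, vj) into an undirected graph with n vertices */
void insert_edge_ug(struct Node *g[], int n, int vi, int vj)
{
    if (vi > n || vj > n) { printf("Edge is not possible between Vi and Vj\n"); return; }
    insert_end(&g[vi], vj);      /* vj appears in the list of vi */
    insert_end(&g[vj], vi);      /* and vi in the list of vj     */
}

/* for a digraph only one direction is stored */
void insert_edge_dg(struct Node *g[], int n, int vi, int vj)
{
    if (vi > n || vj > n) { printf("Edge is not possible between Vi and Vj\n"); return; }
    insert_end(&g[vi], vj);
}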
4.8.2 Deletion:
This operation is again different for an undirected graph and a directed graph.
If we want to delete the vertex V8 from the graph, we first look at the adjacency list of V8: for every vertex that appears in the adjacency list of V8, the node labeled V8 has to be deleted from that vertex's adjacency list.
For example, if the adjacency list of V8 contains two vertices, namely V1 and V4, then we have to delete the node labeled V8 from the adjacency lists of V1 and V4.
Algorithm Delete Vertex-LL-UG:
1. If (N = 0) then
2. Print "Graph is empty: no deletion"
3. Exit
4. End if
5. ptr = UGptr[Vx]→LINK
6. While (ptr ≠ NULL) do
7. j = ptr→LABEL
8. Deleteany_SL(UGptr[j], Vx)
9. Deleteany_SL(UGptr[Vx], j)
10. ptr = UGptr[Vx]→LINK
11. End while
12. UGptr[Vx]→LABEL = NULL
13. UGptr[Vx]→LINK = NULL
14. ReturnNode(ptr)
15. N = N - 1
16. Stop
Delete A Vertex_LL_DG:
We should delete the whole adjacency list of V8. This removes all the edges emanating from V8. The node labeled V8 must also be removed from the adjacency lists of the other vertices, for example from the adjacency list of V4.
Algorithm:
1. If (N=0)then
2. Print “Graph is empty: No deletion”
3. Exit
4. Endif
5. Ptr=DGptr[Vx]→LINK
6. DGptr[Vx]→LINK=NULL
7. DGptr[Vx]→LABEL=NULL
8. N=N-1
9. Return Node(ptr)
10. For i=1 to N do
11. Delete any_SL(DGptr[i],Vx)
12. Endfor
13. Stop
Algorithm Delete Edge_LL_UG:
1. Let N=number of vertices in the graph
2. If (Vi>N)or(Vj>N)then
3. Print "Vertex does not exist: error in edge removal"
4. Else
5. Deleteany_SL(UGptr[Vi], Vj)
6. Deleteany_SL(UGptr[Vj], Vi)
7. Endif
8. Stop
Algorithm Delete Edge_LL_DG:
1. Let N=number of vertices in the graph
2. If (Vi>N)or(Vj>N)then
3. Print “vertex does not exist: Error in edge removal”
4. Else
5. Deleteany_SL(DGptr[Vi], Vj)
6. Endif
7. Stop
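Correspondingly, edge deletion on the linked representation can be sketched in C as follows. Deleteany_SL is realised as a helper that removes the first node carrying a given label; the structure and function names are, again, only assumptions for this sketch.

#include <stdio.h>
#include <stdlib.h>

struct Node { int label; struct Node *link; };

/* remove the first node carrying label v from the list *head (Deleteany_SL) */
static void delete_any(struct Node **head, int v)
{
    struct Node *cur = *head, *prev = NULL;
    while (cur != NULL && cur->label != v) { prev = cur; cur = cur->link; }
    if (cur == NULL) return;                  /* label not present */
    if (prev == NULL) *head = cur->link;
    else prev->link = cur->link;
    free(cur);
}

/* delete edge (vi, vj) from an undirected graph with n vertices */
void delete_edge_ug(struct Node *g[], int n, int vi, int vj)
{
    if (vi > n || vj > n) { printf("Vertex does not exist: error in edge removal\n"); return; }
    delete_any(&g[vi], vj);
    delete_any(&g[vj], vi);
}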
4.8.3 Graph Traversal:
Traversing a graph means visiting all the vertices in the graph exactly once.
Several methods are known to traverse a graph systematically; out of them, two methods are accepted as standard:
❖ Depth First Search (DFS)
❖ Breadth First Search (BFS)
Depth First Search (DFS):
Depth first search (DFS) traversal is similar to the pre-order traversal of a binary tree.
Starting from a given node we can visit all the nodes which are reachable from that starting node. This traversal goes as deep as possible along a path before backtracking.
DFS (G1) = V1 – V2 – V5 – V7 – V4 – V8 – V6 – V3
DFS (G2) = V1 – V2 – V5 – V7 – V4 – V8 – V3 – V6
In this case the traversals take place up to the deepest level, for example V1-V2-V5-V7, then V4-V8 and V6-V3 (in G1), and V3-V6 (in G2).
DFS traversal beginning at a vertex V visits the vertex V first, and then visits all the vertices along a path which begins at V.
Visit the vertex V, then a vertex immediately adjacent to V, say Vx; if Vx has an immediately adjacent vertex, say Vy, then visit it, and so on till a 'dead end' is reached. This results in a path P: V-Vx-Vy-...
A dead end means a vertex which does not have an immediately adjacent vertex, or whose immediately adjacent vertices have already been visited.
After coming to a 'dead end', we backtrack along P towards V to see whether a vertex on the path has another adjacent vertex other than the one already explored.
A stack can be used to keep track of the paths from a vertex.
Initially, the starting vertex is pushed onto the stack OPEN. To visit a vertex, we pop a vertex from OPEN and then push all of its adjacent vertices onto the stack.
A list, VISIT, can be maintained to store the vertices already visited.
When a vertex is popped, whether it has already been visited or not can be known by searching the list VISIT. If the vertex has already been visited, we simply ignore it and pop the stack for the next vertex to be visited. This procedure is continued till the stack becomes empty.
Algorithm DFS (Informal Description):
1. Push the starting vertex onto the stack OPEN
2. While OPEN is not empty do
3. Pop a vertex V
4. If V is not in VISIT then
5. Add V to VISIT
6. Push all the vertices adjacent to V onto OPEN
7. End if
8. End while
9. Stop
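A minimal C sketch of this informal DFS procedure is given below. For brevity it assumes an adjacency-matrix representation (adj[i][j] = 1 if there is an edge) instead of the linked representation, with an explicit array used as the stack OPEN and a visited[] array playing the role of the list VISIT.

#include <stdio.h>

#define MAXV 20

/* Iterative DFS from vertex s over an adjacency matrix adj[1..n][1..n]. */
void dfs(int adj[MAXV + 1][MAXV + 1], int n, int s)
{
    int open[MAXV * MAXV], top = 0;     /* stack of vertices still to explore */
    int visited[MAXV + 1] = {0};        /* the VISIT list                     */

    open[++top] = s;                    /* push starting vertex */
    while (top > 0) {
        int v = open[top--];            /* pop a vertex */
        if (!visited[v]) {
            visited[v] = 1;
            printf("V%d ", v);
            /* push all adjacent, not-yet-visited vertices
               (highest label first, so lower labels are popped first) */
            for (int w = n; w >= 1; w--)
                if (adj[v][w] && !visited[w])
                    open[++top] = w;
        }
    }
    printf("\n");
}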
Breadth First Search (BFS):
BFS (G1) = V1 - V2 - V8 - V3 - V5 - V4 - V6 - V7
BFS (G2) = V1 - V2 - V3 - V5 - V4 - V6 - V7 - V8
In G1, V1 is in the first level and is visited first; then V2, V3 and V8, which are in the same level, are visited; similarly V4, V5 and V6, and so on.
In G2, V2 and V3 are in the same level; V4, V5 and V6 are in one level; again V7 and V8 are in one level.
The implementation idea of the BFS traversal is almost the same as that of the DFS traversal, except that in BFS we use a queue structure instead of a stack structure.
Algorithm BFS_LL:
1. If (Gptr = NULL) then
2. Print "Graph is empty"
3. Exit
4. End if
5. u = v
6. OPENQ.ENQUEUE(u)
7. While (OPENQ.STATUS() ≠ EMPTY) do
8. u = OPENQ.DEQUEUE()
9. If (Search_SL(VISIT, u) = FALSE) then
10. InsertEnd_SL(VISIT, u)
11. ptr = Gptr[u]
12. While (ptr→LINK ≠ NULL) do
13. vptr = ptr→LINK
14. OPENQ.ENQUEUE(vptr→LABEL)
15. ptr = vptr
16. End while
17. End if
18. End while
19. Return (VISIT)
20. Stop
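A corresponding C sketch of BFS is shown below. As with the DFS sketch, an adjacency matrix is assumed for brevity, and a simple array-based queue plays the role of OPENQ.

#include <stdio.h>

#define MAXV 20

/* BFS from vertex s over an adjacency matrix adj[1..n][1..n]. */
void bfs(int adj[MAXV + 1][MAXV + 1], int n, int s)
{
    int openq[MAXV + 1], front = 0, rear = 0;   /* array-based queue */
    int visited[MAXV + 1] = {0};                /* the VISIT list    */

    openq[rear++] = s;                  /* enqueue starting vertex */
    visited[s] = 1;
    while (front < rear) {
        int u = openq[front++];         /* dequeue */
        printf("V%d ", u);
        for (int w = 1; w <= n; w++)    /* enqueue unvisited neighbours */
            if (adj[u][w] && !visited[w]) {
                visited[w] = 1;
                openq[rear++] = w;
            }
    }
    printf("\n");
}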
4.8.4 Merging Two Graphs:
Consider two graphs G1 and G2; by merging we can combine these two graphs into
a single component. This can be accomplished by establishing one or more edges between
the vertices in G1 and G2.
Algorithm:
1. For i = 1 to N1 do
2. UGPTR[i]→LINK = UG1PTR[i]→LINK
3. End for
4. For i = 1 to N2 do
5. UGPTR[N1 + i]→LINK = UG2PTR[i]→LINK
6. End for
7. N = N1 + N2
8. While (S ≠ NULL) do
9. Read (V, W)
10. If (V ≤ N1) and (W ≤ N2) then
11. InsertEdge_UG_LL(UGPTR, V, N1 + W)
12. End if
13. End while
14. Return (UGPTR)
15. Stop
4.9 APPLICATION OF GRAPH:
A graph is an important data structure that finds extensive application in almost all areas.
Nowadays, many computational problems can be modelled efficiently with graph structures.
For example,
Consider two simple problems,
Transportation problem
Map coloring
These problems can be solved by using graph structures.
Transportation problem:
This is a well-known problem in shipping goods.
There are several warehouses which are located in different places. It is required to
transport goods from a given warehouse to another.
The problem is to find a path of transportation from warehouse A to warehouse B such that the cost of transportation is minimum.
This problem can easily be represented using a graph structure: each warehouse can be considered a vertex, each route between two warehouses an edge, and the weight of an edge the cost of transportation along that route.
If there are several paths from A to B, then we have to find the path for which the sum of the weights is minimum.
Map coloring:
We have to color a map so that no two adjacent regions have the same color.
This can be represented with the help of a graph: each region is represented as a vertex, and if two regions are adjacent, this is represented by an edge between the two vertices which represent those regions.
Shortest path problem:
Another important application is finding the shortest paths in a weighted graph. The different shortest paths, assuming V1 as the source vertex, are as listed below.
The algorithm to find such paths was first proposed by E. W. Dijkstra and is popularly known as Dijkstra's algorithm.
Assume that all the vertices in the graph are labeled as 1, 2, 3,……., N and the
graph is represented through an adjacency matrix.
Dijkstra’s algorithm requires three arrays as follows.
LENGTH [1….N] = Array of Distances
PATH [1…..N] = Array of Vertices
SET [1…..N] = Array of Boolean Tags
The shortest distance from the source to vertex i is stored in LENGTH[i]; PATH[i] contains the nearest predecessor of vertex i on the shortest path from the source to vertex i.
The Boolean array SET is used during the execution of the algorithm. SET[i] = 1 means that the shortest distance and the path from the source to vertex i have already been determined.
This algorithm consists of two major parts,
An initialization part
An iteration part
Algorithm:
Input: Gptr, the pointer to the graph S, the source vertex. Let N be the number of
vertices.
Output: LENGTH, an array of distances from S to all other vertices. PATH, an array of vertices giving the track of all the shortest paths.
Data structure: Matrix representation of graph with Gptr as the pointer to it.
Steps:
1. For i = 1 to N do
2. SET[i] = 0
3. End for
4. For i = 1 to N do
5. If Gptr[S][i] = 0 then
6. LENGTH[i] = ∞
7. PATH[i] = NULL
8. Else
9. LENGTH[i] = Gptr[S][i]
10. PATH[i] = S
11. End if
12. End for
13. SET[S] = 1
14. LENGTH[S] = 0
15. Complete = FALSE
16. While (not complete) do
17. j = SearchMin(LENGTH, SET)
18. SET[j] = 1
19. For i = 1 to N
20. If SET[i] = 1 then
21. i = i+1
22. Else
23. If Gptr[i][j] ≠ 0 then
24. If ((LENGTH[j] + Gptr[i][j]) < LENGTH[i]) then
25. LENGTH[i] = LENGTH[j] + Gptr[i][j]
26. PATH[i] = j
27. End if
28. End if
29. End if
30. End for
31. Complete = TRUE
32. For i = 1 to N do
33. If SET[i] = 0 then
34. Complete = FALSE
35. Break
36. Else
37. i = i+1
38. End if
39. End for
40. End while
41. Return (LENGTH, PATH)
42. Stop
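A compact C sketch of Dijkstra's algorithm following the LENGTH, PATH and SET arrays of the above steps is given below. It assumes an adjacency-matrix representation in which g[i][j] = 0 means that there is no edge, as in the notes; the names MAXV and INF are assumptions of this sketch.

#include <stdio.h>
#include <limits.h>

#define MAXV 20
#define INF  INT_MAX

/* Dijkstra's algorithm on an adjacency matrix g[1..n][1..n];
   s is the source vertex. length[] and path[] must hold at least n+1 ints. */
void dijkstra(int g[MAXV + 1][MAXV + 1], int n, int s,
              int length[], int path[])
{
    int set[MAXV + 1] = {0};                 /* 1 = distance finalised */

    for (int i = 1; i <= n; i++) {           /* initialization part */
        if (g[s][i] != 0) { length[i] = g[s][i]; path[i] = s; }
        else              { length[i] = INF;     path[i] = 0; }
    }
    length[s] = 0;
    set[s] = 1;

    for (int k = 1; k < n; k++) {            /* iteration part */
        /* pick the unfinalised vertex with the smallest LENGTH */
        int j = 0;
        for (int i = 1; i <= n; i++)
            if (!set[i] && (j == 0 || length[i] < length[j]))
                j = i;
        if (j == 0 || length[j] == INF) break;   /* remaining vertices unreachable */
        set[j] = 1;

        /* relax the edges leaving j */
        for (int i = 1; i <= n; i++)
            if (!set[i] && g[j][i] != 0 &&
                length[j] + g[j][i] < length[i]) {
                length[i] = length[j] + g[j][i];
                path[i] = j;
            }
    }
}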
The result of the application of Dijkstra's algorithm on the graph is given below.
The shortest path can be obtained from the PATH array by backward movement.
For example,
For the vertex 5, its immediate predecessor is 3 (at PATH[5]), the immediate predecessor of vertex 3 is 2 (at PATH[3]), the immediate predecessor of vertex 2 is 1 (at PATH[2]), and vertex 1 is the source vertex. Thus, the shortest path to vertex 5 from the source vertex 1 is 1-2-3-5, and the length of this shortest path is 5.
UNIT – 5
SEARCHING
5.1 SEARCHING:
Searching is the process of locating a particular element present in a given set of elements. The given set may be a record, a table, (or) a file.
The search is said to be successful or unsuccessful according to whether the
element does (or) does not belong to the list.
There are two types of searching,
i) Linear Search
ii) Non-Linear Search
Basic Terminologies:
1. Key
2. Item
3. Table
4. File
5. Database
6. Successful
7. Unsuccessful
Key:
Key is a special field in a record with which the record can be uniquely identified.
This is the element to be searched.
Item:
This is same as key. It is an element under search.
Table:
The collection of all records is called a table. A column in a table is called a field.
File:
It is similar to table. The file is used to indicate a very large table.
Database:
A large file (or) group of files is called a database.
Successful:
A search will be termed successful, if the key is found in the table, file (or) array of
search.
Unsuccessful:
When the entire table, array (or) file of search is exhausted and the key is not
available then the search will be termed unsuccessful (or) failure.
5.2 LINEAR SEARCH TECHNIQUES:
Searching methods involving data stored in the form of a linear data structure, such as an array or a linked list, are called linear search methods.
i) Linear Search with Array
ii) Linear Search with Linked List
iii) Linear Search with Ordered List
iv) Binary Search
5.2.1 Linear Search with Array:
The simplest searching method is the sequential search with an array. This
searching method is applicable when data are stored in an array.
This method searches for the element sequentially until either the required key is found (or) the end of the array is reached; the algorithm terminates on whichever occurs first.
Algorithm:
Step 1: i = 1
Step 2: While (i ≤ n) do
Step 3: If (K = A[i]) then
Step 4: Print "Search successful at location i"
Step 5: Exit
Step 6: End if
Step 7: i = i + 1
Step 8: End while
Step 9: Print "Search unsuccessful"
Step 10: Stop
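The same sequential search can be written as a short C function; this sketch uses C's 0-based array indexing (the algorithm above is written with 1-based indexing).

#include <stdio.h>

/* Sequential search for key k in a[0..n-1];
   returns the index of the first match, or -1 if k is absent. */
int linear_search(const int a[], int n, int k)
{
    for (int i = 0; i < n; i++)
        if (a[i] == k)
            return i;          /* search successful   */
    return -1;                 /* search unsuccessful */
}

int main(void)
{
    int a[] = {30, 20, 40, 10, 50};
    printf("%d\n", linear_search(a, 5, 10));   /* prints 3 */
    return 0;
}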
Complexity Analysis of the algorithm:
We consider the number of comparisons required for three cases: the best case, the worst case and the average case. In the best case the key is found at the first position, requiring only one comparison; in the worst case the key is at the last position or is absent, requiring n comparisons; on the average about (n + 1)/2 comparisons are required.
5.2.2 Linear Search with Linked List:
Here, H denotes the pointer to the header node. The linear search begins with the node pointed to by the header node H.
Starting from this node, the key value stored in it is compared with the required key. If the key matches, the search terminates successfully; otherwise the search moves to the next node.
The process is repeated for subsequent nodes until a key matches or the end of the list is reached.
Algorithm:
Step1: ptr = H→LINK
Step2: flag = FALSE
Step3: While (ptr ≠ NULL) && (flag = FALSE) do
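Since the remaining steps of this algorithm are not reproduced here, the following C sketch shows the complete search over a singly linked list with a header node H; the node layout (data and link fields) is an assumption of the sketch.

#include <stdlib.h>

struct Node { int data; struct Node *link; };

/* Sequential search for key k in a singly linked list with header node H
   (the first data node is H->link), as in the algorithm above.           */
struct Node *search_list(struct Node *H, int k)
{
    struct Node *ptr = H->link;
    while (ptr != NULL) {
        if (ptr->data == k)
            return ptr;          /* search successful */
        ptr = ptr->link;         /* move to the next node */
    }
    return NULL;                 /* key not present */
}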
Binary Search:
Binary search is applicable when the elements are stored in sorted order.
The middle entry of the list is located and its value is compared with the key value.
If the middle entry is bigger than the key value, the search continues in the lower part, i.e., from the first element to the middle of the list.
If the middle entry is smaller than the key value, the search continues in the upper part, i.e., from the middle to the last element.
Algorithm:
Step1: l = 1, u = n
Step2: flag = FALSE
Step3: While (flag ≠ TRUE) and (l ≤ u) do
Step4: mid = (l + u) /2
Step5: If (K = A[mid]) then
Step6: Print “Search Successful”
Step7: flag = TRUE
Step8: Return (mid)
Step9: End if
Step10: if (K < A[mid]) then
Step11: u = mid - 1
Step12: Else
Step13: l = mid + 1
Step14: End if
Step15: End while
Step16: If (flag = FALSE) then Print "Search Unsuccessful"
Step17: Stop
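A C sketch of binary search on a sorted array is given below; it uses 0-based indexing and returns the index of the key, or -1 on an unsuccessful search.

#include <stdio.h>

/* Binary search for key k in a sorted array a[0..n-1]. */
int binary_search(const int a[], int n, int k)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;
        if (a[mid] == k)
            return mid;              /* search successful          */
        if (k < a[mid])
            high = mid - 1;          /* continue in the lower part */
        else
            low = mid + 1;           /* continue in the upper part */
    }
    return -1;                       /* search unsuccessful        */
}

int main(void)
{
    int a[] = {10, 20, 30, 40, 50};
    printf("%d\n", binary_search(a, 5, 40));   /* prints 3 */
    return 0;
}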
Searching in a Binary Search Tree:
Consider that all the elements are stored in the form of a binary search tree and K is the element to be searched.
The searching operation begins at the root node. If K is the element stored at the root node, then the search is successful and stops there; otherwise, depending on whether K is less than (or) greater than the element at the root node, we repeat the same procedure on the left (or) right subtree respectively.
We assume that the binary search tree is represented with linked structure,
Here, ROOT is the pointer to the root node, and K is the item to be searched.
Algorithm:
Step1: ptr = ROOT
Step2: If (ptr = NULL) then
Step3: Print “Search Unsuccessful”
Step4: Return
Step5: End if
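The remaining steps of the algorithm are not reproduced here; the following C sketch carries out the same search over a linked binary search tree, assuming nodes with data, lchild and rchild fields.

#include <stddef.h>

struct TNode { int data; struct TNode *lchild, *rchild; };

/* Search for key k in a binary search tree rooted at root.
   Returns the node that holds k, or NULL if the search fails. */
struct TNode *bst_search(struct TNode *root, int k)
{
    struct TNode *ptr = root;
    while (ptr != NULL) {
        if (k == ptr->data)
            return ptr;              /* search successful               */
        if (k < ptr->data)
            ptr = ptr->lchild;       /* continue in the left subtree    */
        else
            ptr = ptr->rchild;       /* continue in the right subtree   */
    }
    return NULL;                     /* search unsuccessful             */
}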
5.4 SORTING:
Sorting is the process of arranging the elements of a list in a specified order (ascending, descending or lexicographic). The following terminologies are used in connection with sorting.
Ascending order:
An arrangement of data is called in ascending order if it satisfies the “less than or
equal to (≤)” relation between any two consecutive data.
For example, 10, 20, 30, 40, 50
Descending order:
An arrangement of data is said to be descending order if it satisfies the “greater
than or equal to (≥)” relation between any two consecutive data.
For example, 50, 40, 30, 20, 10
Lexicographic order:
If the data are in the form of characters or strings of characters and are arranged in the same order as in a dictionary, the arrangement is called lexicographic order.
For example, Apple, Axe, Bat, Camel, Cat, Dog, Dull
Collating Sequence:
This is an ordering set of characters that determines whether a character is in
higher, lower (or) same order compared to another.
For example, AmaZon, amaZon, amazon1
Random order:
If the data in a list do not follow any ordering mentioned above, then the list is
arranged in random order.
For example, 30, 10, 50, 20, 40 ,Bat, Dog, Cat, Axe, Apple, Camel, Dull
Swap:
Swap between two data storages implies the interchange of their contents. Swap is
also called as interchange.
Before swap: A[1] = 10, A[5] = 50
After swap: A[1] = 50, A[5] = 10
Stable sort:
A list of unsorted data may contain two or more equal data. If a sorting method
maintains the same relative position of their occurrences in the sorted list, then it is called
stable sort.
In place sort:
If a sorting method works within the array itself, that is, without using any other extra storage space, it is called an in-place sort. It does not require extra memory space other than the list itself, and so it is a memory-efficient method.
Item:
An item is a data value (or) element in the list to be sorted. An item may be an integer value, a string of characters, a record, etc. An item is also termed a key, data, or element.
5.5 SORTING TECHNIQUES:
Sorting can be classified into two categories.
1. Internal sorting
2. External sorting
5.5.1 Internal Sorting:
In internal sorting, all items to be sorted are kept entirely in the main (primary)
memory. Since the main storage is limited, internal sorting is restricted to sorting a small set of data items only.
Internal sorting allows a more flexible approach in the structuring and accessing of the items.
Internal sorting techniques are based on two principles:
❖ Sorting by comparison
❖ Sorting by distribution
5.5.1.1 Sorting by Comparison:
The basic operation involved in this type of sorting technique is comparison. A
data item is compared with other items in the list of items in order to find its place in the
sorted list. There are four choices in this technique,
❖ Insertion
❖ Exchange
❖ Selection
❖ Merge
Insertion:
In a list of items, one item is considered at a time and inserted into an appropriate
position relative to the previously sorted items. The item can be inserted into the same list
(or) a different list.
Exchange:
If two items are found to be out of order, they are interchanged. The process is
repeated until no more exchange is required.
Selection:
First the smallest (or) largest item is located and it is separated from the rest, then
the next smallest (or) largest is selected and so on until all items are separated.
Merge:
Two (or) more input lists are merged into an output list and while merging, the
items from an input list are chosen following the required sorting order.
5.5.1.2 Sorting by Distribution:
In this type of sorting, no key comparison takes place. All the items under sorting are distributed over an auxiliary storage space based on the constituent elements of each item and are then grouped together to get the sorted list.
Distributions of items are based on the following choices:
❖ Radix
❖ Counting
❖ Hashing
Radix:
An item is placed in a location decided by the bases (or) radixes of the components of which it is composed.
Counting:
Items are sorted based on their relative counts.
Hashing:
In this method, items are hashed, that is, dispersed into a list based on a hash
function. It is a calculation of a relative address of the item.
5.6 BUBBLE SORT:
In bubble sort, adjacent data items are compared and swapped if they are out of order; this is repeated over n-1 passes. With each pass, the largest remaining value moves like a bubble to the end of the array.
It is the simplest of all sorting algorithms. It is easy to understand and implement, and hence it is very popular.
Example:
30, 20, 40, 10, 50
Algorithm:
Step 1: For i = 1 to n-1 do
Step 2: For j = 1 to n-1 do
Step 3: If (A[j] > A[j+1]) then
Step 4: Swap(A[j], A[j+1])
Step 5: End if
Step 6: End for
Step 7: End for
Step 8: Stop
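A C version of the bubble sort algorithm is sketched below (0-based indexing); the inner loop is shortened by i positions in each pass, which does not change the result but avoids re-comparing the elements that are already in place.

#include <stdio.h>

/* Bubble sort: compare and swap adjacent items; after the i-th pass
   the i largest elements have "bubbled" to the end of the array.    */
void bubble_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) {
                int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
            }
}

int main(void)
{
    int a[] = {30, 20, 40, 10, 50};
    bubble_sort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);   /* 10 20 30 40 50 */
    printf("\n");
    return 0;
}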
5.7 SELECTION SORT:
In selection sort, the smallest element of the unsorted part of the list is located in each pass and moved to its correct position; the process is repeated until the whole list is sorted. In pass K, the subroutine Min given below returns the location LOC of the smallest element in A[K..N], and A[K] is then interchanged with A[LOC].
SelectionSort {A, N}
Step 1: Repeat Steps 2 to 4 for K = 1, 2, ........, N-1
Step 2: Call Min(A, K, N) to obtain LOC
Step 3: temp = A[K]
Step 4: A[K] = A[LOC]
A[LOC] = temp
Step 5: Exit
Min {A, K, N}
A is the array
N is the number of elements
K is the pass number
Step 1: Set MIN = A[K]
LOC = K
Step 2: Repeat for j = K+1, K+2, .........., N
If MIN > A[j] then
Set MIN = A[j]
and LOC = j
Step 3: Return (LOC)
Complexity of selection sort:
This algorithm is not efficient for large arrays, since the method relies heavily on a comparison mechanism: it performs O(n^2) comparisons in the best, average and worst cases.
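A C sketch of selection sort corresponding to the routine above is shown below (0-based indexing).

/* Selection sort: in pass k, find the minimum of a[k..n-1]
   and swap it into position k.                             */
void selection_sort(int a[], int n)
{
    for (int k = 0; k < n - 1; k++) {
        int loc = k;                       /* index of the current minimum */
        for (int j = k + 1; j < n; j++)
            if (a[j] < a[loc])
                loc = j;
        int t = a[k]; a[k] = a[loc]; a[loc] = t;
    }
}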
5.8 INSERTION SORT:
The insertion sort algorithm scans the array A from A[1] to A[N], inserting each element A[K] into its proper position in the previously sorted subarray A[1], A[2], ........., A[K-1].
The insertion sort algorithm functions as follows:
1. Initially the whole array is in a completely unsorted state. The second element is considered as the element to be inserted from the unordered part of the list.
2. The first element is considered to be in the ordered part. The second element is inserted either in the first or the second position, as appropriate.
3. In other words, insertion sort reads the array elements one by one, picks an item from the unsorted part of the list, and inserts it into its appropriate position in the previously sorted sub-array.
For example:
30, 20, 40, 10, 50
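A C sketch of insertion sort is given below (0-based indexing); it shifts the larger elements of the sorted part one position to the right and then drops the key into the hole.

#include <stdio.h>

/* Insertion sort: a[0..k-1] is already sorted; insert a[k]
   into its proper position within that sorted part.        */
void insertion_sort(int a[], int n)
{
    for (int k = 1; k < n; k++) {
        int key = a[k];
        int j = k - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];       /* shift larger items to the right */
            j--;
        }
        a[j + 1] = key;
    }
}

int main(void)
{
    int a[] = {30, 20, 40, 10, 50};
    insertion_sort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);   /* 10 20 30 40 50 */
    printf("\n");
    return 0;
}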
5.9 SHELL SORT:
Shell sort improves on insertion sort by comparing and sorting elements that are a fixed distance apart. The subsequences to be sorted are determined by a sequence ht, ht-1, ht-2, ........, h1; these parameters are called increments.
Shell sort is also known as the diminishing increment sort because each pass is defined by an increment hi, and the increments decrease from pass to pass.
For example, if at any pass the increment hi is 5, then the array is divided into 5 subsequences, as follows.
1. A[1], A[6], A[11]…………
2. A[2], A[7], A[12]…………
3. A[3], A[8 ], A[13]………...
4. A[4], A[9], A[14]…………
5. A[5], A[10], A[15]………..
A[1] is compared with A[6], A[6] is compared with A[11] and so on. Similarly A[2]
is compared with A[7], A[7] is compared with A[12], etc.
In general, with an increment hi the whole array is grouped into hi subsequences, and the sorting of each subsequence is done as if its elements were adjacent.
Comparisons and data movements are limited to elements within the same subsequence.
After all the subsequences for the increment hi have been processed (compared and exchanged), the next, smaller increment is taken up.
To illustrate shell sort, consider an array of 16 elements and the increments 7, 5, 3, 1.
In pass 1, h = 7.
In pass 3, h = 3, and the subsequences before sorting are
A[1],A[4],A[7],A[10],A[13],A[16] = {16, 32, 59, 73, 43, 94}
A[2],A[5],A[8],A[11],A[14] = {45, 56, 38, 72, 67}
A[3],A[6],A[9],A[12],A[15] = {24, 21, 60, 85, 91}
After sorting in pass 3,
A[1],A[4],A[7],A[10],A[13],A[16]={16,32,43,59,73,94}
A[2],A[5],A[8],A[11],A[14]={38,45,56,67,72}
A[3],A[6],A[9],A[12],A[15]={21,24,60,85,91}
In pass 4 h=1
Step 12: k = k - hi
Step 13: else
Step14: k = 0
Step 15: end if
Step 16: end while
Step 17: end while
Step 18: hi= (hi-1)/3
Step 19: end while
Step 20: stop
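A C sketch of shell sort is given below. Instead of the fixed increments 7, 5, 3, 1 used in the example, it uses the simple decreasing sequence n/2, n/4, ......, 1, which is one common choice; each increment pass is an insertion sort over elements h positions apart.

/* Shell sort (diminishing increment sort): insertion sort is applied
   to the subsequences formed by elements h positions apart, for a
   decreasing sequence of increments ending with h = 1.              */
void shell_sort(int a[], int n)
{
    for (int h = n / 2; h > 0; h /= 2)        /* decreasing increments */
        for (int i = h; i < n; i++) {
            int key = a[i];
            int j = i - h;
            while (j >= 0 && a[j] > key) {
                a[j + h] = a[j];              /* shift within the subsequence */
                j -= h;
            }
            a[j + h] = key;
        }
}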
5.10 RADIX SORT:
A sorting technique which is based on the radix (or) base of the constituent elements of the keys is called radix sort. For example, the radix of a decimal number is 10 and the radix of a binary number is 2.
Radix sort is a method frequently used by people when alphabetizing a large list of names. Specifically, the list of names is first sorted according to the first letter of each name.
The radix sort is also the method used by a card sorter. The sorter uses a reverse-digit radix sort on numbers.
The card sorter contains 13 receiving pockets labeled as follows: 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 11, 12, and a reject pocket.
Each pocket corresponds to a row on a card in which a hole can be punched.
Decimal numbers, where the radix is 10, are punched in the obvious way and hence use only the first ten pockets of the sorter.
The cards are first sorted according to the units digit; on the second pass, the cards are sorted according to the tens digit; and on the third pass, the cards are sorted according to the hundreds digit.
Example:
538,249,112,589,699,478,728,246,532.
First pass (distribution by the units digit):
Pocket 2: 112, 532
Pocket 6: 246
Pocket 8: 538, 478, 728
Pocket 9: 249, 589, 699
The elements are taken back pocket by pocket (column-wise) and become the input for the next pass: 112, 532, 246, 538, 478, 728, 249, 589, 699.
Second pass (distribution by the tens digit):
Pocket 1: 112
Pocket 2: 728
Pocket 3: 532, 538
Pocket 4: 246, 249
Pocket 7: 478
Pocket 8: 589
Pocket 9: 699
The elements are again taken back pocket by pocket and become the input for the next pass: 112, 728, 532, 538, 246, 249, 478, 589, 699.
Third pass (distribution by the hundreds digit):
Pocket 1: 112
Pocket 2: 246, 249
Pocket 4: 478
Pocket 5: 532, 538, 589
Pocket 6: 699
Pocket 7: 728
After taking back the elements pocket by pocket (column-wise), we get,
112, 246, 249, 478, 532, 538, 589, 699, 728
Before sorting: 538,249,112,589,699,478,728,246,532
After sorting: 112,246,249,478,532,538,589,699,728
Algorithm:
Step 1: For k = least significant digit to most significant digit do
Step 2: For i = 0 to n-1 do
Step 3: y = a[i]
Step 4: j = k-th digit of y
Step 5: Place y at the rear of queue[j]
Step 6: For x = 0 to 9 do
Step 7: Place the elements of queue[x] back into the array a
Step 8: Exit.
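A C sketch of the above procedure (least-significant-digit radix sort with ten pockets) is shown below; the pocket capacity used here is an assumption of the sketch.

#include <stdio.h>

/* LSD radix sort for non-negative integers using 10 "pockets"
   (queues), one per decimal digit, as in the card-sorter analogy. */
void radix_sort(int a[], int n)
{
    int max = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > max) max = a[i];

    for (int exp = 1; max / exp > 0; exp *= 10) {      /* units, tens, hundreds, ... */
        int bucket[10][64];                            /* pockets (capacity assumed >= n) */
        int count[10] = {0};
        for (int i = 0; i < n; i++) {
            int d = (a[i] / exp) % 10;                 /* current digit */
            bucket[d][count[d]++] = a[i];
        }
        int k = 0;                                     /* collect pockets 0..9 in order */
        for (int d = 0; d < 10; d++)
            for (int j = 0; j < count[d]; j++)
                a[k++] = bucket[d][j];
    }
}

int main(void)
{
    int a[] = {538, 249, 112, 589, 699, 478, 728, 246, 532};
    radix_sort(a, 9);
    for (int i = 0; i < 9; i++) printf("%d ", a[i]);
    printf("\n");   /* 112 246 249 478 532 538 589 699 728 */
    return 0;
}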
Time complexity of radix sort:
The run time of the radix sort algorithm is mainly due to two operations: the distribution of the key elements into the pockets (queues) and their combination (collection) back into the list.
Time requirement:
a = time to extract a component (digit) from an element
e = time to enqueue an element
d = time to dequeue an element
Time for one distribution pass = (a + e)n
Time for one combination pass = (d + e)n
If each key has c components (digits), the total time of computation is
T(n) = {(a + e)n + (d + e)n} * c = (a + d + 2e) * n * c.
5.11 QUICK SORT:
Quick sort selects a key (pivot) element, usually the first element of the list, and partitions the list so that all the elements smaller than the key lie to its left and all the larger elements lie to its right; the two parts are then sorted in the same way.
Consider the worked example below, where the list being partitioned is 45, 36, 15, 92, 35, 71:
Low = 0
High = 5
Key = A[Low]
Key = A[0] = 45
i = Low + 1, j = High
i = 1, j = 5
45 > 36 → true
If the comparison is true, increment i by 1.
Check 45 > 15 → true
Increment i by one.
Check 45 > 92 → false
So we stop incrementing the value of i.
Now compare the key with A[j]:
45 < 71 → true
Decrement j by one.
Now the element 45 has reached its correct position in the array, i.e., all the elements to the left of 45 are less than 45 and all the elements to its right are greater than 45.
Dividing the table into two parts, we get,
TABLE 1 TABLE 2
Apply the same procedure to Table 1 and Table 2 separately; at the end of every stage one element reaches its correct position.
TABLE 1:
Key = 35
Check 35 > 36 → false
So we stop incrementing the value of i.
Compare the key with A[j]:
35 < 15 → false, so we stop decrementing j; since i < j, A[i] and A[j] are exchanged.
Check 35 > 15 → true
Hence increment i by one.
TABLE 2:
Key = 92
Check 92 > 71 → true
There is no further element, so i cannot be incremented any more; we therefore apply the other condition.
Check 92 < 71 → false
Exchange A[j] and the key.
Finally, we merge the partitioned tables to get the sorted list.
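A C sketch of quick sort following the partitioning scheme of the worked example (first element as the key, i moving right and j moving left, the key finally exchanged into position j) is given below; the values in main mirror the example and are assumptions where the original figures are not reproduced.

#include <stdio.h>

/* Quick sort: the first element of each part is taken as the key (pivot);
   after partitioning, the key is in its final position with smaller
   elements on its left and larger elements on its right.                  */
void quick_sort(int a[], int low, int high)
{
    if (low >= high) return;

    int key = a[low];
    int i = low + 1, j = high;
    while (i <= j) {
        while (i <= j && a[i] <= key) i++;   /* move i right past smaller items */
        while (i <= j && a[j] >  key) j--;   /* move j left past larger items   */
        if (i < j) {                         /* out of place: exchange          */
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }
    a[low] = a[j];                           /* place the key in position j     */
    a[j] = key;

    quick_sort(a, low, j - 1);               /* sort the left part  */
    quick_sort(a, j + 1, high);              /* sort the right part */
}

int main(void)
{
    int a[] = {45, 36, 15, 92, 35, 71};
    quick_sort(a, 0, 5);
    for (int i = 0; i < 6; i++) printf("%d ", a[i]);
    printf("\n");   /* 15 35 36 45 71 92 */
    return 0;
}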
5.12 MERGE SORT:
Merge sort divides the list of n elements into two halves (sets).
These two sets are individually sorted in ascending order and are finally merged to produce a single sorted sequence of n elements.
The technique described above can be performed in the following steps:
1. Divide the sequence of elements into two equal parts.
2. Recursively sort the elements on the left part.
3. Recursively sort the elements on the right part.
4. Merge the sorted left and right parts into a single sorted array.
Eg: 35, 10, 15, 45, 25, 20, 50, 30, 40
Before Sorting: 35, 10, 15, 45, 25, 20, 50, 30, 40
After Sorting: 10, 15, 20, 25, 30, 35, 40, 45, 50
Algorithm:
MergeSort (A, low, high)
Step 1: If (low < high) then
Step 2: mid = (low + high) / 2
Step 3: Call MergeSort (A, low, mid)
Step 4: Call MergeSort (A, mid+1, high)
Step 5: Call Merge (A, low, mid, high)
Step 6: End if
Step 7: Exit
Merge (A, low, mid, high)
Step 1: i = low
Step 2: j = mid + 1
Step 3: k = low
Step 4: While ((i <= mid) and (j <= high)) do
Step 5: If (A[i] < A[j]) then
C[k] = A[i]
k = k + 1
i = i + 1
Else
Step 6: C[k] = A[j]
k = k + 1
j = j + 1
End if
End while
Step 7: While (i <= mid) do
C[k] = A[i]
k = k + 1
i = i + 1
End while
Step 8: While (j <= high) do
C[k] = A[j]
k = k + 1
j = j + 1
End while
Step 9: For i = low to k-1 do
A[i] = C[i]
End for
Step 10: Return
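A C sketch of merge sort corresponding to the algorithm above is given below; MAXN bounds the auxiliary array C and is an assumption of the sketch.

#include <stdio.h>

#define MAXN 100   /* assumed maximum array size for the auxiliary array */

/* Merge the sorted parts a[low..mid] and a[mid+1..high] into order. */
static void merge(int a[], int low, int mid, int high)
{
    int c[MAXN];
    int i = low, j = mid + 1, k = low;

    while (i <= mid && j <= high)
        c[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid)  c[k++] = a[i++];     /* copy the leftovers */
    while (j <= high) c[k++] = a[j++];

    for (i = low; i < k; i++)              /* copy back into a[] */
        a[i] = c[i];
}

/* Recursively sort the left part, sort the right part, then merge. */
void merge_sort(int a[], int low, int high)
{
    if (low < high) {
        int mid = (low + high) / 2;
        merge_sort(a, low, mid);
        merge_sort(a, mid + 1, high);
        merge(a, low, mid, high);
    }
}

int main(void)
{
    int a[] = {35, 10, 15, 45, 25, 20, 50, 30, 40};
    merge_sort(a, 0, 8);
    for (int i = 0; i < 9; i++) printf("%d ", a[i]);
    printf("\n");   /* 10 15 20 25 30 35 40 45 50 */
    return 0;
}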
Data Structures Question Bank
UNIT – 1
Multiple Choice Questions:
1. What is the primary focus of the top-down approach in problem solving?
A. Breaking down a problem into smaller sub-problems
B. Addressing the problem as a whole without decomposing it
C. Starting from the detailed implementation and working up to the broader
system
D. Solving the problem by integrating various sub-solutions
2. Which approach involves starting with the broadest possible view of a problem and
progressively refining it into more detailed components?
A. Bottom-Up Approach B. Top-Down Approach
C. Lateral Thinking D. Systematic Analysis
3. In which problem-solving approach are sub-solutions combined to create a solution to
the overall problem?
A. Top-Down Approach B. Bottom-Up Approach
C. Recursive Approach D. Iterative Approach
4. Which of the following is a characteristic of the bottom-up approach?
A. It begins with the identification of high-level objectives and then details the
components.
B. It involves working on individual components or modules and integrating
them to form a complete solution.
C. It emphasizes the analysis of a system from a broad perspective.
D. It is mainly used in theoretical problem-solving rather than practical
implementations.
5. Which problem-solving method is generally preferred when the solution requires
complex interactions between components?
A. Top-Down Approach B. Bottom-Up Approach
C. Analytical Approach D. Heuristic Approach
6. When would a top-down approach be more advantageous than a bottom-up approach?
A. When the problem is well-defined and can be decomposed into clear sub-
problems.
B. When dealing with a completely new problem with unknown sub-components.
13. What is typically done during the design phase to ensure the algorithm will work
correctly?
A. Coding and compiling the algorithm
B. Analyzing and modeling the problem to create a solution strategy
C. Running test cases to find errors
D. Debugging the algorithm’s code
14. In which phase is the algorithm translated into a programming language?
A. Design Phase B. Implementation Phase
C. Verification Phase D. Testing Phase
15. Which type of testing is performed during the verification phase to check whether an
algorithm produces the expected output?
A. Unit Testing B. Integration Testing
C. System Testing D. Regression Testing
16. What does time complexity of an algorithm measure?
A. The amount of memory required by the algorithm
B. The number of operations the algorithm performs as a function of the input
size
C. The time taken to execute the algorithm in a real-world scenario
D. The maximum input size the algorithm can handle
19. What does space complexity of an algorithm refer to?
A. The time required to execute the algorithm
B. The total amount of memory used by the algorithm, including both fixed and
variable parts
C. The number of variables used in the algorithm
D. The size of the input data
20. Which notation is commonly used to express the upper bound of an algorithm’s time
complexity?
A. Θ (Theta) Notation B. Ω (Omega) Notation
C. O (Big O) Notation D. ψ (Psi) Notation
21. Which of the following is the best-case time complexity of accessing an element in a
hash table with a good hash function?
A. O(n) B. O(log n) C. O(1) D. O(n^2)
22. In the context of algorithm analysis, what does frequency count involve?
A. Counting the number of times each operation is performed by the algorithm
B. Determining the total execution time of the algorithm
C. Measuring the maximum recursion depth
D. Calculating the total number of variables used
23. Which of the following algorithms has a time complexity of O(n log n) for the average
case?
A. Bubble Sort B. Quick Sort C. Selection Sort D. Insertion Sort
24. What is the space complexity of a recursive algorithm with a depth of recursion
proportional to the input size n, assuming each recursive call uses constant space?
A. O(1) B. O(n) C. O(n log n) D. O(n^2)
25. If an algorithm has a time complexity of O(n^2), which of the following statements is
true?
A. The algorithm’s execution time increases linearly with the input size.
B. The algorithm’s execution time increases exponentially with the input size.
C. The algorithm’s execution time increases quadratically with the input size.
D. The algorithm’s execution time remains constant regardless of input size.
26. Which of the following is an example of an Abstract Data Type (ADT)?
A) Array B) Linked List C) Stack D) Hash Table
27. Which ADT supports the operations of adding, removing, and accessing elements in a
Last In First Out (LIFO) order?
A) Queue B) Stack C) List D) Tree
28. What is the primary operation supported by a Queue ADT?
A) LIFO (Last In, First Out) B) FIFO (First In, First Out)
C) Random Access D) Ordered Insertion
29. Which of the following operations is not typically supported by the List ADT?
A) Insertion B) Deletion C) Access by Index D) Sorting
30. Which ADT would be most appropriate for implementing a priority-based scheduling
system?
A) Stack B) Queue C) Priority Queue D) Deque
31. What is the main difference between a Stack and a Queue ADT?
A) Stack is LIFO while Queue is FIFO B) Queue is LIFO while Stack is FIFO
5 Mark Questions:
1. Explain in detail about top down and bottom-up approaches of problem solving.
2. Write a note on how to measure the efficiency of algorithms.
3. Explain about the efficiency analysis of algorithms.
4. Write a note on Abstract Data Types (ADTs).
9 Mark Questions:
1. Describe in detail about problem solving.
2. Write a detailed note on design of algorithms.
3. Explain Design, Verification and Implementation of algorithms.
4. Describe in detail about data structures.
5. What is data structure? What are the types of data structure? Explain in detail.
UNIT – 2
Multiple Choice Questions:
1. What is the time complexity of accessing an element in an array by its index?
A) O(1) B) O(n) C) O(log n) D) O(n^2)
2. Which of the following operations is generally not efficient for arrays?
A) Accessing an element by index B) Inserting an element at the beginning
C) Deleting an element at the end D) Iterating through all elements
3. What is the primary limitation of arrays compared to linked lists?
A) Arrays support dynamic sizing. B) Arrays allow fast random access.
C) Arrays require contiguous memory allocation.
D) Arrays are generally more efficient for insertions and deletions.
4. In a two-dimensional array, what is the time complexity of accessing an element
located at (i, j) if the array is stored in row-major order?
A) O(1) B) O(i + j) C) O(i * j) D) O(n)
5. If an array is implemented with a fixed size of 100 and we need to expand its size
dynamically, what data structure is typically used to achieve this?
A) Linked List B) Stack C) Queue D) Priority Queue
6. In which of the following scenarios is a dynamic array preferred over a static array?
A) When the size of the array is known and fixed.
B) When frequent resizing of the array is required.
C) When the array is used for storing elements that are never modified.
D) When random access to elements is not necessary.
7. What is the purpose of the sizeof operator in the context of arrays in C/C++?
A) To find the number of elements in the array
B) To get the size of each element in the array
C) To get the total size of the array in bytes
D) To get the maximum index of the array
8. Which of the following is a disadvantage of using arrays for data storage?
A) Fixed size B) Fast access
C) Efficient memory usage D) Random access capability
9. If you need to implement a stack using arrays, which operation would require shifting
elements to maintain the stack order?
A) Push B) Pop C) Peek D) All of the above
10. What is the result of accessing an element at an index that is out of bounds in an
array?
A) The program crashes with an error.
B) The program returns a default value.
C) The program returns an undefined value.
D) The program automatically resizes the array.
11. What is a singly linked list?
A. A list where each node points to the next node
B. A list where each node points to the previous node
C. A list where each node points to both the next and previous nodes
D. A list where each node points to all other nodes in the list
12. What is the time complexity for inserting an element at the beginning of a singly
linked list?
A. O(1) B. O(n) C. O(log n) D. O(n^2)
13. What is the time complexity for inserting an element at the end of a singly linked list if
the list does not maintain a tail pointer?
A. O(1) B. O(n) C. O(log n) D. O(n^2)
14. What is the time complexity for searching an element in a singly linked list?
A. O(1) B. O(n) C. O(log n) D. O(n^2)
15. Which of the following operations can be implemented more efficiently using a singly
linked list than using an array?
A. Accessing the ith element B. Inserting an element at the end
C. Deleting an element at the beginning D. Finding the minimum element
16. How do you denote the end of a singly linked list?
A. By setting the last node's next pointer to NULL
B. By setting the first node's previous pointer to NULL
C. By having a circular reference to the first node
D. By having a tail pointer pointing to the last node
17. What happens if you try to access the next pointer of the last node in a singly linked
list?
A. It results in a segmentation fault or access violation
B. It points to the first node C. It returns NULL D. It throws an exception
18. In a singly linked list, what does the head pointer represent?
A. The last node in the list B. The middle node in the list
C. The first node in the list D. A node with a special value
19. Which of the following statements is true about singly linked lists?
A. Each node contains a data part and two pointers.
B. Each node contains a data part and a single pointer.
C. Each node contains a data part and no pointers.
D. Each node contains only a pointer.
20. How would you delete a node from a singly linked list given only a pointer to that
node?
A. Copy the data from the next node to the current node and delete the next
node.
B. Set the pointer to NULL.
C. Free the memory allocated to the node directly.
D. Move the head pointer to the next node.
21. What is a doubly linked list?
A. A list where each node points to the next node
B. A list where each node points to the previous node
C. A list where each node points to both the next and previous nodes
D. A list where each node points to all other nodes in the list
22. What is the time complexity for inserting an element at the beginning of a doubly
linked list?
A. O(1) B. O(n) C. O(log n) D. O(n^2)
23. What is the time complexity for inserting an element at the end of a doubly linked list
if the list maintains a tail pointer?
A. O(1) B. O(n) C. O(log n) D. O(n^2)
24. What is the time complexity for searching an element in a doubly linked list?
A. O(1) B. O(n) C. O(log n) D. O(n^2)
25. Which of the following operations can be implemented more efficiently using a
doubly linked list than using a singly linked list?
A. Accessing the ith element B. Inserting an element at the end
C. Deleting an element from the middle given only a pointer to that element
D. Finding the minimum element
26. How do you denote the end of a doubly linked list?
A. By setting the last node's next pointer to NULL
B. By setting the first node's previous pointer to NULL
C. By having a circular reference to the first node
D. By having a tail pointer pointing to the last node
27. What happens if you try to access the next pointer of the last node in a doubly linked
list?
A. It results in a segmentation fault or access violation
B. It points to the first node C. It returns NULL D. It throws an exception
28. In a doubly linked list, what does the head pointer represent?
A. The last node in the list B. The middle node in the list
C. The first node in the list D. A node with a special value
29. Which of the following statements is true about doubly linked lists?
A. Each node contains a data part and two pointers.
B. Each node contains a data part and a single pointer.
C. Each node contains a data part and no pointers.
D. Each node contains only a pointer.
30. How would you delete a node from a doubly linked list given only a pointer to that
node?
A. Copy the data from the next node to the current node and delete the next node.
B. Adjust the previous and next pointers of the adjacent nodes to bypass the
node to be deleted.
C. Set the pointer to NULL. D. Move the head pointer to the next node.
31. What is a circular linked list?
A. A list where each node points to the next node
B. A list where each node points to the previous node
C. A list where the last node points back to the first node
D. A list where each node points to both the next and previous nodes
32. What is a circular singly linked list?
A. A list where each node has only one pointer pointing to the previous node
B. A list where each node has only one pointer pointing to the next node, and
the last node points to the first node
C. A list where each node has two pointers, one to the next and one to the previous
node
D. A list that does not have a head node
33. What is a circular doubly linked list?
A. A list where each node has only one pointer pointing to the next node
B. A list where each node has two pointers, one to the next node and one to the
previous node, and the last node points to the first node
C. A list where each node points to the head node
D. A list where the first node points to the last node
34. What is the time complexity for inserting an element at the beginning of a circular
linked list?
A. O(1) B. O(n) C. O(log n) D. O(n^2)
35. What is the time complexity for inserting an element at the end of a circular linked list
if the list does not maintain a tail pointer?
A. O(1) B. O(n) C. O(log n) D. O(n^2)
36. What is the time complexity for searching an element in a circular linked list?
A. O(1) B. O(n) C. O(log n) D. O(n^2)
37. Which of the following is an advantage of a circular linked list over a singly linked
list?
A. It uses less memory per node
B. It allows easier traversal back to the head node from the last node
C. It simplifies deletion of a node D. It is easier to implement
38. How do you denote the end of a circular linked list?
A. By setting the last node's next pointer to NULL
B. By setting the first node's previous pointer to NULL
C. By having a circular reference to the first node
D. By having a tail pointer pointing to the last node
39. In a circular linked list, what does the head pointer represent?
A. The last node in the list B. The middle node in the list
C. The first node in the list D. A node with a special value
40. Which of the following statements is true about circular linked lists?
A. They cannot be used to implement stacks or queues
B. They allow for easy cyclic iteration over the list
C. They require more memory per node than doubly linked lists
D. They do not support insertion at the beginning
41. Which of the following is an application of linked lists?
A. Implementing array-based data structures
B. Implementing dynamic memory allocation
C. Implementing a static stack D. Implementing a fixed-size queue
42. Linked lists are particularly useful for which of the following applications?
A. Storing elements in a contiguous block of memory
B. Storing large amounts of data that frequently change
C. Storing fixed-size records D. Implementing static hashing
43. Which data structure is typically implemented using a linked list?
A. Binary Search Tree (BST) B. Heap C. Hash Table D. B-Tree
44. What type of linked list is most suitable for implementing a priority queue?
A. Singly linked list B. Doubly linked list
C. Circular linked list D. Skip list
45. Which of the following operations is more efficiently performed using a linked list
than an array?
A. Accessing the ith element directly B. Inserting an element at a specific position
C. Sorting the elements D. Accessing elements in reverse order
46. What is an advantage of using a linked list over an array for implementing a stack?
A. Faster access to elements B. Fixed size
C. Dynamic size D. Easier random access
47. Which of the following applications would benefit from using a circular linked list?
A. Implementing a binary search algorithm
B. Implementing a text editor's undo feature
C. Implementing a round-robin scheduler D. Implementing a priority queue
5 Mark Questions:
1. Write a note on array and its terminologies.
2. Explain briefly about one-dimensional array.
3. Write a note on sparse matrix.
4. What are the applications of array? Explain.
5. Explain three-dimensional array and n-dimensional array.
6. Write a note on single linked list and its representation.
7. Explain in detail about circular linked list.
9 Mark Questions:
1. What are the operations involved in array? Explain with algorithms.
2. Write a detailed note on two-dimensional array.
3. Describe in detail about the operations on single linked list.
4. Write a detailed note on double linked list.
5. Describe in detail about applications of linked list.
UNIT – 3
Multiple Choice Questions:
1. What is a stack?
A. A linear data structure that follows FIFO order
B. A linear data structure that follows LIFO order
C. A non-linear data structure that follows FIFO order
D. A non-linear data structure that follows LIFO order
2. Which operation is not associated with a stack?
A. Push B. Pop C. Peek D. Enqueue
3. What is the time complexity of the push operation in a stack implemented using an
array?
A. O(1) B. O(n) C. O(log n) D. O(n^2)
4. What is the time complexity of the pop operation in a stack implemented using a linked
list?
A. O(1) B. O(n) C. O(log n) D. O(n^2)
5. Which of the following applications can be implemented using a stack?
A. Breadth-First Search (BFS) B. Depth-First Search (DFS)
C. Dijkstra's algorithm D. Prim's algorithm
6. Which of the following is true about stack overflow?
A. It occurs when trying to pop an element from an empty stack
B. It occurs when trying to push an element into a full stack
C. It occurs when the stack is empty D. It occurs when the stack is full
7. What is the initial value of the top pointer in an empty stack?
A. 0 B. 1 C. -1 D. NULL
8. Which of the following applications can be implemented using a stack?
A. Function call management B. Sorting algorithms
C. Database indexing D. Graph traversal
9. In a stack, which of the following operations can be performed?
A. Insert at the end B. Insert at the beginning
C. Delete from the end D. Delete from the beginning
10. What data structure is used to evaluate a postfix expression?
A. Queue B. Stack C. Linked list D. Tree
20. Which of the following data structures is used to convert infix notation to postfix
notation?
A. Queue B. Stack C. Linked list D. Tree
21. What is a queue?
A. A linear data structure that follows LIFO order
B. A linear data structure that follows FIFO order
C. A non-linear data structure that follows LIFO order
D. A non-linear data structure that follows FIFO order
22. Which operation is not associated with a queue?
A. Enqueue B. Dequeue C. Peek D. Push
23. What is the time complexity of the enqueue operation in a queue implemented using
an array?
A. O(1) B. O(n) C. O(log n) D. O(n^2)
24. What is the time complexity of the dequeue operation in a queue implemented using a
linked list?
A. O(1) B. O(n) C. O(log n) D. O(n^2)
25. Which of the following applications can be implemented using a queue?
A. Breadth-First Search (BFS) B. Depth-First Search (DFS)
C. Dijkstra's algorithm D. Prim's algorithm
26. Which of the following is true about queue overflow?
A. It occurs when trying to dequeue from an empty queue
B. It occurs when trying to enqueue into a full queue
C. It occurs when the queue is empty
D. It occurs when the queue is full
27. What is the initial value of the front pointer in an empty queue?
A. 0 B. 1 C. -1 D. NULL
28. Which of the following applications can be implemented using a queue?
A. Function call management B. Sorting algorithms
C. CPU scheduling D. Graph traversal
29. In a queue, which of the following operations can be performed?
A. Insert at the end B. Insert at the beginning
C. Delete from the end D. Delete from the beginning
30. What data structure is used to evaluate a level-order traversal of a binary tree?
A. Queue B. Stack C. Linked list D. Tree
31. Which of the following is not an application of a queue?
A. Expression evaluation B. Job scheduling
C. Breadth-First Search D. Printer spooling
32. How can you check if a queue is empty?
A. Check if front is equal to the maximum size
B. Check if rear is greater than front
C. Check if front is less than 0 D. Check if front is equal to -1 or front > rear
33. What is the result of the following sequence of queue operations?
Enqueue(1), Enqueue(2), Dequeue(), Enqueue(3), Dequeue()
A. 1 B. 2 C. 3 D. 1 and 3
34. Which of the following statements is true for a queue?
A. Queues cannot be implemented using linked lists
B. Queues follow LIFO order
C. Queues can be used to reverse a word
D. Queues are used for breadth-first traversal
35. What is a real-life example of a queue?
A. A stack of plates B. A queue in a ticket counter
C. A line of people waiting D. A train with multiple compartments
36. What is the purpose of the peek operation in a queue?
A. To remove the front element B. To insert a new element at the rear
C. To return the front element without removing it
D. To check if the queue is full
37. Which of the following is a common use of queues in computing?
A. Function call management B. Job scheduling
C. Graph traversal D. Database management
38. How do you implement a queue using two stacks?
A. Use one stack for enqueue operations and the other for dequeue operations
B. Move elements between the stacks to simulate queue behavior
C. Use one stack for enqueue and dequeue operations
D. Implement queue using a single stack
5 Mark Questions:
1. Write a note on stack and its representation.
2. Explain how to evaluate an arithmetic expression.
3. Explain about the evaluation of postfix expression.
4. Write a note on recursion.
5. What is queue? How can we represent a queue using array and linked list?
6. Explain about circular queue.
7. Write a note on deque.
8. Explain priority queue in detail.
9 Mark Questions:
1. What is stack? What are the operations involved in stack? Explain.
2. Describe in detail about the operations of stack.
3. Elucidate how to convert an infix expression into postfix expression with an
example.
4. Describe about the operations on queue.
5. Elucidate briefly about the types of queue.
UNIT – 4
Multiple Choice Questions:
1. What is a tree?
A. A linear data structure B. A non-linear data structure
C. A linear data structure that follows LIFO order
D. A linear data structure that follows FIFO order
2. What is the degree of a node in a tree?
A. The number of nodes in the tree B. The number of edges in the tree
C. The number of children a node has D. The depth of the node
3. Which of the following is true about a binary tree?
A. Each node has at most one child B. Each node has at most two children
C. Each node has exactly two children D. Each node has exactly three children
4. What is a full binary tree?
A. A tree in which all nodes have at most one child
B. A tree in which all nodes have at most two children
C. A tree in which all nodes have two children except the leaf nodes
D. A tree in which all levels are completely filled
5. What is a complete binary tree?
A. A tree in which all levels are completely filled
B. A tree in which all nodes have at most two children
C. A tree in which all nodes have two children except the leaf nodes
D. A tree in which all levels, except possibly the last, are completely filled, and
all nodes are as far left as possible
6. Which traversal method is used to process the nodes of a tree in the order left-root-
right?
A. Pre-order B. In-order C. Post-order D. Level-order
7. Which traversal method is used to process the nodes of a tree in the order root-left-
right?
A. Pre-order B. In-order C. Post-order D. Level-order
8. Which traversal method is used to process the nodes of a tree in the order left-right-
root?
A. Pre-order B. In-order C. Post-order D. Level-order
32. Which of the following statements is true for a directed graph (digraph)?
A. The edges have no direction B. The edges have direction
C. The graph has no cycles D. The graph is always connected
33. What is the adjacency matrix representation of a graph?
A. A 2D array where each element represents the presence or absence of an edge
between two vertices
B. A list where each element represents a vertex and its adjacent vertices
C. A matrix where each element represents the degree of a vertex
D. A matrix where each element represents the path length between two vertices
34. What is the time complexity of the Depth-First Search (DFS) algorithm for a graph
with V vertices and E edges?
A. O(V) B. O(E) C. O(V + E) D. O(VE)
35. Which algorithm is used to detect cycles in a graph?
A. Kruskal's Algorithm B. Dijkstra's Algorithm
C. Floyd-Warshall Algorithm D. Depth-First Search (DFS)
5 Mark Questions:
1. Write a note on tree and its terminologies.
2. Explain in detail about representation of binary tree.
3. Explain the expression tree with an example.
4. What is graph? What are the terminologies of graph? Explain in detail.
5. Write a note on representation of graphs.
9 Mark Questions:
1. Elucidate the traversal on a binary tree.
2. Explain in detail about the operations on a binary tree.
3. Describe in detail about binary search tree.
4. Explain in detail about heap tree.
5. Elucidate the graph traversals with example.
6. What are the applications of graph? Explain in detail.
7. Elucidate the shortest path problem using Dijkstra’s algorithm.
UNIT – 5
Multiple Choice Questions:
1. Which of the following best describes linear search?
a) Searching in a sorted array using a divide-and-conquer approach
b) Searching sequentially through each element of an array until the desired
element is found
c) Searching using a binary tree structure
d) Searching in a hash table using a hash function
2. What is the time complexity of linear search in the worst case?
a) O(1) b) O(log n) c) O(n) d) O(n^2)
3. Which of the following is a key characteristic of linear search?
a) It requires the array to be sorted.
b) It can only be used on arrays with unique elements.
c) It can be applied to both sorted and unsorted arrays.
d) It has a logarithmic time complexity.
4. In which case does linear search perform better than binary search?
a) When the list is sorted b) When the list is very large
c) When the list is small or unsorted d) When the list contains repeated elements
5. Which of the following is true about linear search?
a) It always finds the element in constant time.
b) It is the most efficient searching algorithm.
c) It does not require additional memory.
d) It cannot be used for linked lists.
6. In linear search, how many comparisons are required on average to find an element in
a list of n elements?
a) O(1) b) O(log n) c) O(n) d) O(n^2)
7. What happens if the element being searched for in a linear search is not present in the
array?
a) The search will fail immediately. b) The algorithm will throw an error.
c) The search will go through the entire array before concluding that the element
is not present.
d) The search will restart from the beginning of the array.
37. Which of the following algorithms can be used to sort linked lists efficiently?
a) Quick Sort b) Merge Sort c) Bubble Sort d) Heap Sort
38. Which sorting algorithm repeatedly builds a sorted sublist one element at a time?
a) Bubble Sort b) Selection Sort c) Insertion Sort d) Heap Sort
39. Which sorting algorithm is often considered inefficient for large arrays due to its
O(n^2) time complexity?
a) Quick Sort b) Merge Sort c) Bubble Sort d) Heap Sort
40. In which case is the time complexity of Selection Sort the same as its average and
worst cases?
a) O(n log n) b) O(n^2) c) O(log n) d) O(n)
5 Mark Questions:
1. Write a note on searching and the terminologies used in searching.
2. Explain binary search.
3. Explain the nonlinear searching techniques.
4. What is sorting? What are the terminologies used in sorting? Explain.
5. Explain bubble sort with an example.
6. Explain insertion sort with an example.
7. Explain selection sort with an example.
9 Mark Questions:
1. What is searching? What are the techniques in linear search? Explain in detail.
2. Explain shell sort with an example.
3. Explain radix sort with an example.
4. Explain quick sort with an example.
5. Explain merge sort with an example.