Linked List Types and Operations Explained
Q1) What is Linked List in data structure? Explain its representation and working.
Ans.
o A Linked List can be defined as a collection of objects called nodes that are randomly stored in memory.
o A node contains two fields, i.e., the data field, which stores the actual value, and the address field, which stores the address of the next node.
o The last node of the list contains a pointer to NULL.
struct node
{
    int data;
    struct node *next;
};
Circular Singly Linked List
In a circular singly linked list, the last node of the list contains a pointer to the first node of the list. We traverse a circular singly linked list until we reach the same node where we started. The circular singly linked list has no beginning and no end; there is no NULL value present in the next part of any of the nodes.
Circular Doubly Linked List
A circular doubly linked list is a more complex type of data structure in which a node contains pointers to its previous node as well as the next node. A circular doubly linked list doesn't contain NULL in any of its nodes. The last node of the list contains the address of the first node of the list, and the first node of the list also contains the address of the last node in its previous pointer.
Representation:
1. Node: Each element in a linked list is called a node. A node typically consists of two
fields:
i) Data field: This field stores the actual data element, like an integer, a string, or an object.
ii) Pointer field: This field holds the memory address of the next node in the sequence. It can be NULL for the last node.
Working:
1. Head Node: To access the first element, you need a starting point, typically called the
head node. It has a pointer that points to the second node in the list.
2. Following the Chain: By following the pointers through each node, you can traverse the entire list and access all its elements (see the sketch after this list). Access is sequential: elements are reached by walking the chain from the head.
3. Dynamic Memory Allocation: Unlike arrays with fixed sizes, linked lists grow and
shrink dynamically. Adding and removing elements only involves adjusting the
pointers, making them efficient for manipulating data of unknown size.
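As a minimal, hedged illustration of this working, the following C sketch repeats the struct node definition above so it is self-contained, builds a small list, and traverses it from the head node (the helper name create_node and the values used are illustrative assumptions, not from the original):

#include <stdio.h>
#include <stdlib.h>

struct node
{
    int data;
    struct node *next;
};

/* Create a new node on the heap (illustrative helper name). */
struct node *create_node(int value)
{
    struct node *n = malloc(sizeof(struct node));
    n->data = value;
    n->next = NULL;
    return n;
}

int main(void)
{
    /* Build the list 10 -> 20 -> 30 by linking nodes through next pointers. */
    struct node *head = create_node(10);
    head->next = create_node(20);
    head->next->next = create_node(30);

    /* Traverse: follow the chain of next pointers until NULL (end of list). */
    for (struct node *cur = head; cur != NULL; cur = cur->next)
        printf("%d -> ", cur->data);
    printf("NULL\n");

    /* Release the nodes. */
    while (head != NULL)
    {
        struct node *next = head->next;
        free(head);
        head = next;
    }
    return 0;
}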
Q2) What is Doubly Linked List in data structure? Explain append(), addAtBeg(), addAfter() & delete() operations in detail.
Ans. A doubly linked list is a more complex type of linked list in which a node contains a pointer to the previous as well as the next node in the sequence. Therefore, in a doubly linked list, a node consists of three parts: the node data, a pointer to the next node in the sequence (next pointer), and a pointer to the previous node (previous pointer). A sample node in a doubly linked list is shown in the figure.
1. Data: This stores the actual data element like an integer, a string, or an object.
2. Next Pointer: This points to the next node in the sequence.
3. Previous Pointer: This points to the previous node in the sequence, enabling backward
traversal.
struct node
{
    struct node *prev;
    int data;
    struct node *next;
};
The prev part of the first node and the next part of the last node will always contain
null indicating end in each direction.
1. append(data):
o Create a new node with the given data and set both of its pointers to NULL.
o If the list is empty, make the new node the head node.
o Otherwise, find the last node by traversing the list using the next
pointers.
o Set the last node's next pointer to the new node.
o Update the new node's previous pointer to point to the last node.
2. addAtBeg(data):
o Create a new node with the given data and set its next pointer to the
current head node.
o Update the current head node's previous pointer to point to the new
node.
o Make the new node the new head node.
3. addAfter(data, target):
o Traverse the list using the next pointers to find the node containing target.
o If the target is not found, return an error.
o Create a new node with the given data; set its previous pointer to the target node and its next pointer to the target node's next node.
o If the target node has a successor, update the successor's previous pointer to the new node.
o Set the target node's next pointer to the new node.
4. delete(data):
o Find the node containing the target data by traversing the list using the
next pointers.
o If the target is not found, return an error.
o If the target is the head node, update the head to point to the next node and set the new head's previous pointer to NULL.
o If the target is the tail node, set its predecessor's next pointer to NULL.
o Otherwise, update the previous and next pointers of the surrounding nodes to bypass the deleted node.
o Free the memory allocated to the deleted node.
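A condensed C sketch of these four operations is given below, assuming a doubly linked node with prev, data, and next as defined above (the type name dnode and the function names are illustrative, not from the original):

#include <stdlib.h>

struct dnode
{
    struct dnode *prev;
    int data;
    struct dnode *next;
};

/* append: add a new node at the end of the list; returns the (possibly new) head. */
struct dnode *append(struct dnode *head, int data)
{
    struct dnode *n = malloc(sizeof(struct dnode));
    n->data = data; n->prev = n->next = NULL;
    if (head == NULL)
        return n;                       /* empty list: new node becomes the head */
    struct dnode *last = head;
    while (last->next != NULL)          /* find the last node */
        last = last->next;
    last->next = n;
    n->prev = last;
    return head;
}

/* addAtBeg: add a new node before the current head; returns the new head. */
struct dnode *addAtBeg(struct dnode *head, int data)
{
    struct dnode *n = malloc(sizeof(struct dnode));
    n->data = data; n->prev = NULL; n->next = head;
    if (head != NULL)
        head->prev = n;
    return n;
}

/* addAfter: insert a new node after the first node containing target. */
void addAfter(struct dnode *head, int target, int data)
{
    struct dnode *cur = head;
    while (cur != NULL && cur->data != target)
        cur = cur->next;
    if (cur == NULL) return;            /* target not found */
    struct dnode *n = malloc(sizeof(struct dnode));
    n->data = data; n->prev = cur; n->next = cur->next;
    if (cur->next != NULL)
        cur->next->prev = n;
    cur->next = n;
}

/* delete: remove the first node containing data; returns the (possibly new) head. */
struct dnode *delete_node(struct dnode *head, int data)
{
    struct dnode *cur = head;
    while (cur != NULL && cur->data != data)
        cur = cur->next;
    if (cur == NULL) return head;       /* not found */
    if (cur->prev != NULL) cur->prev->next = cur->next;
    else head = cur->next;              /* deleting the head node */
    if (cur->next != NULL) cur->next->prev = cur->prev;
    free(cur);
    return head;
}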
Q3) What is merging of Linked List? Show its working with example.
Ans. Merging Linked Lists: Combining Two Sorted Lists
Merging two linked lists refers to the process of combining them into a single sorted list while maintaining the ascending order of elements. This is a fundamental operation in algorithms such as merge sort.
Step 1: Define two sorted linked lists and initialize pointers for each (head1 and
head2).
LL1: 1 -> 3 -> 5 -> 7
LL2: 2 -> 4 -> 6 -> 8
head1 = 1 -> 3 -> 5 -> 7
head2 = 2 -> 4 -> 6 -> 8
Step 2: Create a new empty list to hold the merged elements (mergedList).
mergedList = null
Iteration 1:
compare 1 (head1) with 2 (head2) - 1 is smaller
add 1 to mergedList, advance head1
mergedList = 1 -> null
The remaining iterations proceed in the same way, always taking the smaller front element, until one list is exhausted; the rest of the other list is then appended, giving 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8.
Recursive Approach:
The recursive approach makes the same comparisons but builds the merged list through recursive calls: the smaller head becomes the current node, and its next pointer is set to the result of merging the rest of the lists. This is often more concise than the iterative approach, but deep recursion can overflow the call stack for very long lists.
Both approaches achieve the same result of creating a single sorted list by merging
two sorted linked lists. Choosing the optimal approach depends on your specific
needs and coding style.
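A compact C sketch of the iterative merge described above is shown here, assuming singly linked nodes with data and next fields (the names merge, dummy, and tail are illustrative):

struct node { int data; struct node *next; };

/* Merge two sorted lists into one sorted list (iterative, ascending order). */
struct node *merge(struct node *head1, struct node *head2)
{
    struct node dummy;                  /* dummy head simplifies appending */
    struct node *tail = &dummy;
    dummy.next = NULL;
    while (head1 != NULL && head2 != NULL)
    {
        if (head1->data <= head2->data) /* take the smaller front node */
        {
            tail->next = head1;
            head1 = head1->next;
        }
        else
        {
            tail->next = head2;
            head2 = head2->next;
        }
        tail = tail->next;
    }
    tail->next = (head1 != NULL) ? head1 : head2;  /* append the leftover list */
    return dummy.next;
}

For the example lists 1 -> 3 -> 5 -> 7 and 2 -> 4 -> 6 -> 8, this sketch produces 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8.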
Q4) How can recursion be applied to Linked List operations? Explain with examples.
Ans.
1. Traversal:
• Printing/Visiting Elements: A function can recursively visit each node in the
list, printing its data or performing any desired operation on it. It typically calls
itself on the next pointer of the current node, stopping at the empty list (null
pointer).
2. Searching:
• Finding a Specific Element: A function can compare the target value with the
current node's data and recursively search the next pointer if not found. If
found, it can return the matching node or its position.
3. Modification:
• Reversing the List: A function can recursively reverse the order of nodes by
creating a new node at each step and updating its next pointer to point to the
previously processed node. It stops at the head node of the original list,
resulting in a reversed list.
• Inserting Elements: A function can find the target position (e.g., before/after a
specific node) and recursively insert a new node, adjusting the next pointers
appropriately.
• Deleting Elements: A function can find the target node and bypass it by
adjusting the next pointers of surrounding nodes, then recursively delete the
isolated node.
4. Calculations:
• Finding Length: A function can recursively call itself on the next pointer and
increment a counter, returning the final count at the end.
• Sum of Elements: A function can sum the current node's data with the
recursively calculated sum from the remaining list.
The decision to use recursion vs. iteration depends on the specific operation, its
complexity, and performance requirements. Consider factors like readability,
potential for stack overflow, and desired efficiency before making a choice.
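As a hedged example of the Calculations described above, the length and sum operations can be written recursively in C as follows (the node type is repeated for self-containment; function names are illustrative):

struct node { int data; struct node *next; };

/* Length: 0 for the empty list, otherwise 1 plus the length of the rest. */
int length(const struct node *head)
{
    if (head == NULL)
        return 0;
    return 1 + length(head->next);
}

/* Sum: the current node's data plus the sum of the remaining list. */
int sum(const struct node *head)
{
    if (head == NULL)
        return 0;
    return head->data + sum(head->next);
}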
Q5) What is polynomial? Explain its representation of polynomial using linked list data
structure.
Ans.
A polynomial is a collection of different terms, each comprising a coefficient and an exponent. It can be represented using a linked list, and this representation makes polynomial manipulation efficient.
While representing a polynomial using a linked list, each polynomial term is a node in the linked list. To get better efficiency in processing, we assume that the terms of every polynomial are stored within the linked list in the order of decreasing exponents. Also, no two terms have the same exponent, and no term has a zero coefficient; for a term written without an explicit coefficient, the coefficient takes a value of 1.
o The first part contains the value of the coefficient of the term.
o The second part contains the value of the exponent.
o The third part, LINK points to the next term (next node).
o Consider the polynomial P(x) = 7x^4 + 15x^3 - 2x^2 + 9. Here 7, 15, -2, and 9 are the coefficients, and 4, 3, 2, 0 are the exponents of the terms in the polynomial. On representing this polynomial using a linked list, we have
o Observe that the number of nodes equals the number of terms in the polynomial, so we have 4 nodes. Moreover, the terms are stored in decreasing order of exponents in the linked list. Such a representation of a polynomial using a linked list makes operations like addition, subtraction, multiplication, etc., on polynomials very easy.
Benefits of Linked List Representation:
• Dynamic size: terms can be added or removed without reallocating a fixed-size structure.
• Space efficiency: only the non-zero terms need to be stored.
• Easy manipulation: operations such as addition, subtraction, and multiplication reduce to walking the lists in order of exponents.
Example:
Consider the polynomial 2x^3 + 4x^2 - 3x + 1. Its linked list representation would
look like this:
Head -> (2, 3) -> (4, 2) -> (-3, 1) -> (1, 0) -> null
Each node stores the coefficient and exponent of a term. Traversing the list allows
evaluating the polynomial for any given value of x.
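As a hedged sketch, a C node for one polynomial term and a simple evaluation routine might look like this (the names term, coeff, exp, link, and evaluate are assumptions for illustration; linking with the math library is required for pow):

#include <math.h>

struct term
{
    int coeff;           /* coefficient of the term */
    int exp;             /* exponent of the term */
    struct term *link;   /* pointer to the next term (next node) */
};

/* Evaluate the polynomial at x by walking the list term by term. */
double evaluate(const struct term *head, double x)
{
    double result = 0.0;
    for (const struct term *t = head; t != NULL; t = t->link)
        result += t->coeff * pow(x, t->exp);
    return result;
}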
Further Considerations:
• Some implementations store variables explicitly in each node, while others
might use a single variable reference throughout the list.
• Additional complexity can be added to handle polynomials with multiple
variables or complex coefficients.
Overall, representing polynomials with linked lists offers a dynamic and efficient way
to store and manipulate these mathematical expressions in computer programs.
Q6) What is stack in data structure? Explain in detail with example. (Explain its representation using an array or a Linked List.)
Ans. A Stack is a linear data structure that follows the LIFO (Last-In-First-Out) principle. A stack has only one open end, whereas a queue has two ends (front and rear). It contains only one pointer, the top pointer, which points to the topmost element of the stack. Whenever an element is added to the stack, it is added on the top of the stack, and an element can be deleted only from the top of the stack. In other words, a stack can be defined as a container in which insertion and deletion are done from one end, known as the top of the stack.
Working of Stack
Stack works on the LIFO pattern. As we can observe in the below figure there are five
memory blocks in the stack; therefore, the size of the stack is 5.
Suppose we want to store elements in a stack, and let's assume that the stack is initially empty. We take a stack of size 5, as shown below, and push the elements one by one until the stack becomes full.
Once five elements have been pushed, the stack is full, since its size is 5. The stack gets filled up from the bottom to the top, and each new element is placed above the previously inserted ones.
When we perform the delete operation on the stack, there is only one way for entry and exit, as the other end is closed. The stack follows the LIFO pattern, which means that the value entered first will be removed last: the element pushed first can be removed only after all the elements pushed above it have been deleted.
An array is a container that can hold a fixed number of elements and these elements
should be of the same type. Most of the data structures make use of arrays to
implement their algorithms.
A linked list is a linear data structure consisting of nodes, where each node contains a reference to the next node. To create a linked list we need a pointer that points to the first node of the list.
Approach: To create an array of linked lists, the main requirements are:
1. An array of pointers.
2. To keep track of the above-created array of pointers, another pointer is needed that points to the first pointer of the array. This pointer is called a pointer to pointer. Below is the pictorial representation of the array of linked lists:
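Returning to the stack itself, below is a minimal array-based push/pop sketch in C, consistent with the LIFO working and the size-5 example described above (the names push, pop, MAX, and top are illustrative assumptions):

#include <stdio.h>

#define MAX 5                  /* stack of size 5, as in the example above */

int stack[MAX];
int top = -1;                  /* -1 means the stack is empty */

/* push: add an element on the top of the stack. */
void push(int value)
{
    if (top == MAX - 1) { printf("Stack overflow\n"); return; }
    stack[++top] = value;
}

/* pop: remove and return the topmost element (LIFO). */
int pop(void)
{
    if (top == -1) { printf("Stack underflow\n"); return -1; }
    return stack[top--];
}

int main(void)
{
    push(1); push(2); push(3);
    printf("%d\n", pop());     /* prints 3: last in, first out */
    printf("%d\n", pop());     /* prints 2 */
    return 0;
}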
Q7) What is polish notations? Explain its features and give examples.
Ans. Polish Notation in Data Structures
Polish notation refers to prefix notation, in which the operator is written before its operands; its mirror form, reverse Polish (postfix) notation, writes the operator after the operands. Both are ways of representing mathematical and logical expressions without parentheses. Instead of using parentheses to define the order of operations, the position of operators relative to operands indicates precedence. For example, the infix expression (A + B) * C becomes * + A B C in prefix form and A B + C * in postfix form. This can lead to simpler and more concise expressions compared to infix notation (the standard mathematical notation with operators between operands).
Overall, Polish notation offers a powerful and efficient alternative to infix notation for
representing and manipulating expressions. Its simplicity, clear operator precedence,
and suitability for stack-based processing make it a valuable tool in various data
structures and algorithms.
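To illustrate the suitability for stack-based processing mentioned above, here is a small hedged C sketch that evaluates a postfix (reverse Polish) expression; operands are assumed to be single digits to keep the sketch short, and the names eval_postfix and stack are illustrative:

#include <stdio.h>
#include <ctype.h>

/* Evaluate a postfix expression such as "23*4+" over single-digit operands. */
int eval_postfix(const char *expr)
{
    int stack[64];
    int top = -1;
    for (const char *p = expr; *p != '\0'; p++)
    {
        if (isdigit((unsigned char)*p))
        {
            stack[++top] = *p - '0';        /* push operand */
        }
        else
        {
            int b = stack[top--];           /* pop right operand */
            int a = stack[top--];           /* pop left operand */
            switch (*p)
            {
                case '+': stack[++top] = a + b; break;
                case '-': stack[++top] = a - b; break;
                case '*': stack[++top] = a * b; break;
                case '/': stack[++top] = a / b; break;
            }
        }
    }
    return stack[top];                      /* final result */
}

int main(void)
{
    printf("%d\n", eval_postfix("23*4+"));  /* (2*3)+4 = 10 */
    return 0;
}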
Q8) What are queues in data structure? Explain all features with example.
Ans.
Queue
1. A queue can be defined as an ordered list which enables insert operations to be performed at one end, called REAR, and delete operations to be performed at the other end, called FRONT.
2. A queue therefore follows the FIFO (First In First Out) principle: the element inserted first is the one deleted first.
3. For example, people waiting in line for a rail ticket form a queue.
Applications of Queue
Because a queue processes elements on a first in, first out basis, it provides a fair ordering of actions. Various applications of queues are discussed below.
1. Queues are widely used as waiting lists for a single shared resource like printer,
disk, CPU.
2. Queues are used in the asynchronous transfer of data (where data is not being transferred at the same rate between two processes), e.g., pipes, file IO, sockets.
3. Queues are used as buffers in most of the applications like MP3 media player,
CD player, etc.
4. Queues are used to maintain the playlist in media players in order to add and remove songs from the playlist.
5. Queues are used in operating systems for handling interrupts.
Q9) Explain Queue representation using array with suitable example.
Ans. The above figure shows a queue of characters forming the English word "HELLO". Since no deletion has been performed in the queue so far, the value of front remains -1. However, the value of rear increases by one every time an insertion is performed in the queue. After inserting one more element into the queue shown in the above figure, the queue will look like the following: the value of rear becomes 5 while the value of front remains the same.
After deleting an element, the value of front will increase from -1 to 0, and the queue will look like the following.
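A small C sketch of this array representation, using front and rear indices exactly as described above (a simple linear queue; the names enqueue and dequeue are illustrative):

#include <stdio.h>

#define MAX 10

char queue[MAX];
int front = -1;           /* stays -1 until the first deletion, as in the example */
int rear  = -1;           /* index of the most recently inserted element */

/* enqueue: insert at the rear end. */
void enqueue(char value)
{
    if (rear == MAX - 1) { printf("Queue is full\n"); return; }
    queue[++rear] = value;
}

/* dequeue: delete from the front end. */
char dequeue(void)
{
    if (front == rear) { printf("Queue is empty\n"); return '\0'; }
    return queue[++front];
}

int main(void)
{
    const char *word = "HELLO";
    for (int i = 0; word[i] != '\0'; i++)
        enqueue(word[i]);                 /* rear moves from -1 up to 4 */
    printf("%c\n", dequeue());            /* prints H; front moves from -1 to 0 */
    return 0;
}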
Q10) Explain Queue representation using Linked List with suitable example.
In a linked queue, each node of the queue consists of two parts i.e. data part and the
link part. Each element of the queue points to its immediate next element in the
memory.
In the linked queue, there are two pointers maintained in the memory i.e. front pointer
and rear pointer. The front pointer contains the address of the starting element of the
queue while the rear pointer contains the address of the last element of the queue.
Insertion and deletions are performed at rear and front end respectively. If front and
rear both are NULL, it indicates that the queue is empty.
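A minimal linked-queue sketch in C matching this description, with global front and rear pointers (the type name qnode and the function names are illustrative assumptions):

#include <stdio.h>
#include <stdlib.h>

struct qnode
{
    int data;
    struct qnode *next;        /* link part pointing to the next element */
};

struct qnode *front = NULL;    /* address of the starting element */
struct qnode *rear  = NULL;    /* address of the last element */

/* Insertion is performed at the rear end. */
void enqueue(int value)
{
    struct qnode *n = malloc(sizeof(struct qnode));
    n->data = value;
    n->next = NULL;
    if (rear == NULL)          /* queue is empty: front and rear are both NULL */
        front = rear = n;
    else
    {
        rear->next = n;
        rear = n;
    }
}

/* Deletion is performed at the front end. */
int dequeue(void)
{
    if (front == NULL) { printf("Queue is empty\n"); return -1; }
    struct qnode *old = front;
    int value = old->data;
    front = front->next;
    if (front == NULL)         /* queue became empty */
        rear = NULL;
    free(old);
    return value;
}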
Q11) What is circular queue? Why is it needed? Explain with example.
Ans. There is one limitation in the array implementation of a queue: if the rear reaches the end position of the queue, there may be vacant spaces left at the beginning which cannot be utilized. To overcome this limitation, the concept of the circular queue was introduced.
As we can see in the above image, the rear is at the last position of the queue and the front is pointing somewhere other than the 0th position. In the above array, there are only two elements and the other three positions are empty. Because the rear is at the last position, if we try to insert another element it will show that there are no empty spaces in the queue. One solution to avoid such wastage of memory space is to shift all the elements to the left and adjust the front and rear ends accordingly, but this is not a practically good approach because shifting all the elements consumes a lot of time. The efficient approach to avoid the wastage of memory is to use the circular queue data structure.
Circular Queue Working Example: Buffering Data
Let's consider a circular queue used to buffer data between a sensor and a
processing unit in a real-time system. The sensor generates data points at a
constant rate, while the processing unit can only handle them at a slower pace.
Scenario:
Steps:
1. Initialization: The queue is initially empty, with both head and tail pointers
pointing to the same location.
2. Sensor Data Arrival: Every second, the sensor generates a new data point.
This data point is added to the queue after the current tail pointer. The tail
pointer is incremented to point to the newly added element.
3. Queue Buffering: As the sensor keeps generating data, the queue fills up. Since it is circular, the tail pointer wraps around to the start of the array and reuses the slots already freed by the processing unit, while the FIFO order of the remaining elements is preserved.
4. Processing Unit Data Consumption: Every 2 seconds, the processing unit
fetches the data point at the head of the queue. The head pointer is then
incremented to point to the next element.
5. Dynamic Size Adjustment: Through this process, the number of elements held in the queue adjusts dynamically to the data flow, even though the underlying array has a fixed capacity. If the sensor generates data faster than the processing unit consumes it, the queue fills up and may overflow; if the processing unit consumes data faster than the sensor generates it, the queue becomes empty.
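A brief C sketch of a circular queue that uses modulo arithmetic so the indices wrap around and reuse vacant slots at the beginning (fixed capacity; this version reports "full" rather than overwriting old data, and the names cq_enqueue and cq_dequeue are illustrative):

#include <stdio.h>

#define SIZE 5

int cq[SIZE];
int front = -1;
int rear  = -1;

/* Insert at the rear; the index wraps around with % SIZE. */
void cq_enqueue(int value)
{
    if ((rear + 1) % SIZE == front) { printf("Queue is full\n"); return; }
    if (front == -1)                  /* first element */
        front = 0;
    rear = (rear + 1) % SIZE;
    cq[rear] = value;
}

/* Delete from the front; the index also wraps around. */
int cq_dequeue(void)
{
    if (front == -1) { printf("Queue is empty\n"); return -1; }
    int value = cq[front];
    if (front == rear)                /* last element removed: reset to empty */
        front = rear = -1;
    else
        front = (front + 1) % SIZE;
    return value;
}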
Q12) What is Deque? Explain its types and operations with example.
Ans. The deque stands for Double Ended Queue. A deque is a linear data structure in which the insertion and deletion operations are performed at both ends. We can say that a deque is a generalized version of the queue.
Though insertion and deletion in a deque can be performed at both ends, it does not follow the FIFO rule. The representation of a deque is given as follows -
Types of deque
There are two types of deque -
Input restricted Queue
In an input restricted queue, the insertion operation can be performed at only one end, while deletion can be performed from both ends.
Output restricted Queue
In an output restricted queue, the deletion operation can be performed at only one end, while insertion can be performed from both ends.
Operations (Working):
A deque supports four basic operations: insertion at the front, insertion at the rear, deletion from the front, and deletion from the rear (often named PushFront, PushBack, PopFront, and PopBack, as in the example below).
Example:
Imagine a deque representing a waiting list for a restaurant. You can use it for the
following operations:
• Add new customers to the front of the line (PushFront): This prioritizes them
for immediate service.
• Add new customers to the back of the line (PushBack): This adds them to the
queue for later seating.
• Serve the customer at the front (PopFront): This removes the first customer
from the list and returns their information.
• Remove a customer who changed their mind (PopFront or PopBack): This
allows flexible manipulation of the waiting list.
This demonstrates the versatility of deques in managing data from both ends
efficiently. They can be used for various scenarios beyond waitlists, including:
• Managing browser history (forward and back navigation)
• Undo/redo functionality in editors
• Implementing backtracking algorithms
• Balancing parentheses in expressions
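A condensed C sketch of a deque built on a doubly linked list, supporting the four operations used in the waiting-list example above (the type names dqnode/deque and the function names push_front, push_back, pop_front, pop_back are illustrative assumptions):

#include <stdlib.h>

struct dqnode { int data; struct dqnode *prev, *next; };
struct deque  { struct dqnode *front, *rear; };

/* Insert at the front end. */
void push_front(struct deque *d, int value)
{
    struct dqnode *n = malloc(sizeof *n);
    n->data = value; n->prev = NULL; n->next = d->front;
    if (d->front) d->front->prev = n; else d->rear = n;
    d->front = n;
}

/* Insert at the rear end. */
void push_back(struct deque *d, int value)
{
    struct dqnode *n = malloc(sizeof *n);
    n->data = value; n->next = NULL; n->prev = d->rear;
    if (d->rear) d->rear->next = n; else d->front = n;
    d->rear = n;
}

/* Delete from the front end; returns -1 if the deque is empty. */
int pop_front(struct deque *d)
{
    if (d->front == NULL) return -1;
    struct dqnode *n = d->front;
    int value = n->data;
    d->front = n->next;
    if (d->front) d->front->prev = NULL; else d->rear = NULL;
    free(n);
    return value;
}

/* Delete from the rear end; returns -1 if the deque is empty. */
int pop_back(struct deque *d)
{
    if (d->rear == NULL) return -1;
    struct dqnode *n = d->rear;
    int value = n->data;
    d->rear = n->prev;
    if (d->rear) d->rear->next = NULL; else d->front = NULL;
    free(n);
    return value;
}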
Q13) What is priority queue in data structure? Explain need, working and
advantages/ disadvantages with example? Write applications of priority queue.
Ans. The priority queue supports only comparable elements, which means that the elements are either arranged in ascending or descending order.
For example, suppose we have the values 1, 3, 4, 8, 14, 22 inserted in a priority queue with an ordering imposed on the values from least to greatest. Therefore, the number 1 would have the highest priority while 22 would have the lowest priority.
o Every element in a priority queue has some priority associated with it.
o An element with a higher priority will be deleted before an element with a lower priority.
o If two elements in a priority queue have the same priority, they will be arranged using
the FIFO principle.
1, 3, 4, 8, 14, 22
All the values are arranged in ascending order. Now, we will observe how the priority
queue will look after performing the following operations:
o poll(): This function will remove the highest priority element from the priority
queue. In the above priority queue, the '1' element has the highest priority, so
it will be removed from the priority queue.
o add(2): This function will insert the element '2' into the priority queue. As 2 is the smallest element among all the numbers, it will obtain the highest priority.
o poll(): It will remove the element '2' from the priority queue, as it has the highest priority.
o add(5): It will insert the element 5 after 4, as 5 is larger than 4 and smaller than 8, so it will obtain the third highest priority in the priority queue.
We will create the priority queue by using the list given below in which INFO list
contains the data elements, PRN list contains the priority numbers of each data
element available in the INFO list, and LINK basically contains the address of the next
node.
In the case of priority queue, lower priority number is considered the higher
priority, i.e., lower priority number = higher priority.
Step 1: In the list, lower priority number is 1, whose data value is 333, so it will be
inserted in the list as shown in the below diagram:
Step 2: After inserting 333, priority number 2 has the next highest priority, and the data values associated with this priority are 222 and 111. This data will be inserted based on the FIFO principle; therefore 222 will be added first and then 111.
Step 3: After inserting the elements of priority 2, the next priority number is 4, and the data elements associated with priority number 4 are 444, 555, and 777. In this case, the elements are inserted based on the FIFO principle; therefore, 444 will be added first, then 555, and then 777.
Step 4: After inserting the elements of priority 4, the next priority number is 5, and the value associated with priority 5 is 666, so it will be inserted at the end of the queue.
Advantages:
• Efficient handling of urgent tasks: Prioritizes important elements for faster
processing.
• Dynamic data handling: Can accommodate elements with varying priorities
seamlessly.
• Versatility: Applicable in diverse domains like networking, scheduling, and
search algorithms.
Disadvantages:
• Implementation complexity: Maintaining order based on priorities can be more
complex than simple FIFO queues.
• Performance overhead: Comparing and adjusting priorities for each operation
might impact performance compared to simpler data structures.
• Potential for starvation: Low-priority elements might wait indefinitely if high-
priority tasks constantly arrive.
Applications of Priority Queues:
• Hospital emergency queue management: Prioritizing critical patients based on
their medical condition.
• Network traffic management: Prioritizing time-sensitive packets for smooth
communication.
• Operating system scheduling: Prioritizing high-priority processes for efficient
resource allocation.
• Search algorithms: Efficiently exploring promising paths in algorithms like
Dijkstra's shortest path.
• Event management: Processing urgent events first in systems handling real-
time data streams.
Example:
Imagine a download manager downloading multiple files simultaneously. You can
prioritize downloading a critical work document before less urgent movies. The
priority queue prioritizes the work document, ensuring it finishes first despite being
added later than the movies.
Q14) Explain representation of priority queue using array.
Ans. A priority queue can also be represented using an array in which each slot holds an element along with its priority. This simple representation has the following trade-offs.
Limitations:
• Performance overhead: Enqueue and dequeue operations involve searching
for empty slots and adjusting priorities, which can be slower than using heaps
or sorted lists.
• Fixed size: The array has a predefined size, limiting the maximum number of
elements the queue can hold. Resizing the array can be expensive and
inefficient.
• Wastage of space: Empty slots within the array represent wasted memory.
Advantages:
• Simple implementation: Array-based priority queues are easier to understand
and implement compared to other methods like heaps.
• Random access: You can access any element in the array directly using its
index, which can be useful in some scenarios.
Applications:
• Situations where a simple and efficient implementation is needed, and
performance overhead is not a major concern.
• Applications with a limited number of elements where the fixed size limitation
is not an issue.
Overall, while array representation can be used for priority queues, it's generally not
the preferred method due to its performance limitations and fixed size constraints.
Heaps and sorted linked lists offer better performance and flexibility for most
practical applications.
Q15) Explain representation of priority queue using Linked List data structure.
Ans. A priority queue is a type of queue in which each element in a queue is associated
with some priority, and they are served based on their priorities. If the elements have
the same priority, they are served based on their order in a queue.
Mainly, the value of the element can be considered for assigning the priority. For
example, the highest value element can be used as the highest priority element. We
can also assume the lowest value element to be the highest priority element. In other
cases, we can also set the priority based on our needs.
The following are the functions used to implement priority queue using linked
list:
The linked list of priority queue is created in such a way that the highest priority
element is always added at the head of the queue. The elements are arranged in a
descending order based on their priority so that it takes O(1) time in deletion. In case
of insertion, we need to traverse the whole list in order to find out the suitable position
based on their priority; so, this process takes O(N) time.
Suppose we want to add a node that contains the value 1. Since the value 1 has a higher priority than the other nodes, we will insert the node at the beginning of the list, as shown below:
Now we have to add the element 7 to the linked list. We will traverse the list to insert element 7. First, we compare element 7 with 1; since 7 has a lower priority than 1, it will not be inserted before 1. Element 7 is then compared with the next node, i.e., 2; since element 7 has a lower priority than 2, it will not be inserted before 2. Now, element 7 is compared with the next element, i.e., the existing 7; since both elements have the same priority, they will be served on a first come, first served basis. The new element 7 will therefore be added after the existing element 7, as shown below:
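A short C sketch of this linked-list priority queue, where a lower priority number means higher priority and ties keep FIFO order (the field names info/prn/link follow the INFO/PRN/LINK lists above; the function names are illustrative):

#include <stdlib.h>

struct pqnode
{
    int info;               /* data element (INFO) */
    int prn;                /* priority number (PRN): lower value = higher priority */
    struct pqnode *link;    /* address of the next node (LINK) */
};

/* Insert so that the list stays sorted by priority; equal priorities keep
   first-come-first-served order because we walk past nodes with prn <= new prn. */
struct pqnode *pq_insert(struct pqnode *head, int info, int prn)
{
    struct pqnode *n = malloc(sizeof(struct pqnode));
    n->info = info; n->prn = prn; n->link = NULL;

    if (head == NULL || prn < head->prn)   /* new highest priority: new head */
    {
        n->link = head;
        return n;
    }
    struct pqnode *cur = head;
    while (cur->link != NULL && cur->link->prn <= prn)
        cur = cur->link;
    n->link = cur->link;
    cur->link = n;
    return head;
}

/* Deletion always removes the head node, which has the highest priority (O(1)). */
struct pqnode *pq_delete(struct pqnode *head, int *out_info)
{
    if (head == NULL) return NULL;
    *out_info = head->info;
    struct pqnode *next = head->link;
    free(head);
    return next;
}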
Q16) What are binary trees? Explain representation of binary trees in memory.
Ans. The above tree is a binary tree because each node contains at most two children. The logical representation of the above tree is given below:
In the above tree, node 1 contains two pointers, i.e., a left and a right pointer pointing to the left and right node respectively. Node 2 has both children (left and right nodes); therefore, it also has two non-NULL pointers. Nodes 3, 5 and 6 are leaf nodes, so all these nodes contain NULL pointers in both their left and right parts.
o The full binary tree is also known as a strict binary tree. A tree is considered a full binary tree only if each node contains either 0 or 2 children. Equivalently, a full binary tree is a tree in which every node except the leaf nodes has exactly 2 children.
The complete binary tree is a tree in which all the nodes are completely filled except
the last level. In the last level, all the nodes must be as left as possible. In a complete
binary tree, the nodes should be added from the left.
The above tree is a complete binary tree because all the nodes are completely filled,
and all the nodes in the last level are added at the left first.
Perfect Binary Tree
A tree is a perfect binary tree if all the internal nodes have 2 children, and all the leaf
nodes are at the same level.
Degenerate Binary Tree
A degenerate binary tree is a tree in which every internal node has only one child. The above tree is a degenerate binary tree because all the nodes have only one child. It is also known as a left-skewed tree, as all the nodes have a left child only.
The balanced binary tree is a tree in which the heights of the left and right subtrees of every node differ by at most 1.
struct node
{
    int data;
    struct node *left, *right;
};
In the above structure, data is the value, left pointer contains the address of the left
node, and right pointer contains the address of the right node.
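As a small, hedged illustration of this linked in-memory representation, the following C sketch builds a three-node tree (the helper name new_node and the node values are illustrative assumptions):

#include <stdlib.h>

struct node
{
    int data;
    struct node *left, *right;
};

/* Allocate a leaf node whose left and right pointers are NULL. */
struct node *new_node(int value)
{
    struct node *n = malloc(sizeof(struct node));
    n->data = value;
    n->left = n->right = NULL;
    return n;
}

int main(void)
{
    struct node *root = new_node(1);   /* node 1 holds the left and right pointers */
    root->left  = new_node(2);         /* left child */
    root->right = new_node(3);         /* right child; both children are leaves */
    return 0;
}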
Q17) What are binary search tree? Explain basic operation with suitable examples.
Ans. In the above figure, we can observe that the root node is 40, all the nodes of the left subtree are smaller than the root node, and all the nodes of the right subtree are greater than the root node.
Similarly, the left child of the root node is itself greater than every node in its own left subtree and smaller than every node in its own right subtree, so it also satisfies the binary search tree property. Therefore, we can say that the tree in the above image is a binary search tree.
Advantages of Binary search tree
o Searching for an element in the binary search tree is easy, as we always have a hint as to which subtree contains the desired element.
o As compared to array and linked lists, insertion and deletion operations are faster in
BST.
Suppose the data elements are - 45, 15, 79, 90, 10, 55, 12, 20, 50
o First, we have to insert 45 into the tree as the root of the tree.
o Then, read the next element; if it is smaller than the node being compared, move into its left subtree and repeat the comparison until an empty position is found, where it is inserted.
o Otherwise, if the element is larger than the node being compared, move into its right subtree and insert it in the same way.
o Now, let's see the process of creating the Binary search tree using the given
data element. The process of creating the BST is shown below -
o Step 1 - Insert 45.
o Step 2 - Insert 15.
o As 15 is smaller than 45, so insert it as the root node of the left subtree.
o Step 3 - Insert 79.
o As 79 is greater than 45, so insert it as the root node of the right subtree.
o Step 4 - Insert 90.
o 90 is greater than 45 and 79, so it will be inserted as the right subtree of 79.
o Step 5 - Insert 10.
o 10 is smaller than 45 and 15, so it will be inserted as the left subtree of 15.
o Step 6 - Insert 55.
o 55 is larger than 45 and smaller than 79, so it will be inserted as the left subtree of 79.
o Step 7 - Insert 12.
o 12 is smaller than 45 and 15 but greater than 10, so it will be inserted as the right subtree of 10.
o Step 8 - Insert 20.
o 20 is smaller than 45 but greater than 15, so it will be inserted as the right subtree of 15.
o Step 9 - Insert 50.
o 50 is greater than 45 but smaller than 79 and 55. So, it will be inserted as a left
subtree of 55.
o Now, the creation of binary search tree is completed. After that, let's move
towards the operations that can be performed on Binary search tree.
o We can perform insert, delete and search operations on the binary search tree.
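A compact C sketch of recursive insert and search on a binary search tree, consistent with the insertion rules used above (the struct is repeated for self-containment; function names are illustrative):

#include <stdlib.h>

struct node
{
    int data;
    struct node *left, *right;
};

/* Insert: smaller keys go to the left subtree, larger keys to the right. */
struct node *insert(struct node *root, int key)
{
    if (root == NULL)
    {
        struct node *n = malloc(sizeof(struct node));
        n->data = key; n->left = n->right = NULL;
        return n;
    }
    if (key < root->data)
        root->left = insert(root->left, key);
    else
        root->right = insert(root->right, key);
    return root;
}

/* Search: the comparison tells us which subtree can contain the key. */
struct node *search(struct node *root, int key)
{
    if (root == NULL || root->data == key)
        return root;
    if (key < root->data)
        return search(root->left, key);
    return search(root->right, key);
}

Inserting 45, 15, 79, 90, 10, 55, 12, 20, 50 in that order with this sketch reproduces the tree built step by step above.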
Q.18) Explain operations on binary search trees.
Ans. Binary trees offer various operations for manipulating and accessing data
efficiently. Here are some common operations:
1. Traversal:
Traversing a binary tree involves visiting each node in a specific order. Different
traversal techniques offer different perspectives on the tree structure:
• Pre-order: Visit the current node, then its left child, then its right child (e.g.,
visit root, left subtree, right subtree).
• In-order: Visit the left child, then the current node, then the right child (e.g.,
visit left subtree, root, right subtree). This orders elements in
ascending/descending order for Binary Search Trees (BSTs).
• Post-order: Visit the left child, then the right child, then the current node (e.g.,
visit left subtree, right subtree, root).
2. Searching:
Searching for a specific element in a binary tree is efficient, especially in BSTs. The
comparison of the element's value with the current node's value guides the search
down the left or right subtree until the element is found or the search reaches an
empty leaf.
3. Insertion:
Inserting a new element into a binary tree involves finding the appropriate position
based on its value (less than the current node goes left, greater goes right). This
preserves the ordering property in BSTs.
4. Deletion:
Deleting an element from a binary tree requires finding the element and then
adjusting the tree structure to maintain its properties. Different cases arise
depending on the node's degree (number of children) and the structure of the
subtree.
5. Balancing:
Certain types of binary trees, like AVL trees and Red-Black trees, maintain a balance
factor to ensure efficient search and insertion operations. Balancing operations
involve rotations of nodes to adjust the height of subtrees and restore balance.
6. Additional operations:
Other operations include finding the minimum or maximum element, counting the
number of nodes, calculating the tree's height, and checking for specific properties
like completeness or balance.
These are just some of the common operations on binary trees. The specific
implementation and complexity of these operations vary depending on the chosen
representation (array vs. linked list) and the type of binary tree.
There are 3 tree traversals that are mostly based on the DFS. They are given below:
• Preorder Traversal
• Inorder Traversal
• Postorder Traversal
1. Inorder Tree Traversal
The inorder tree traversal of a binary tree is also known as the LNR traversal because, in this traversal, we first visit the left node (abbreviation L), followed by the root node (abbreviation N), and finally the right node (abbreviation R) of the tree.
In the inorder traversal, we start from the root node of the tree and go deeper and deeper into the left subtree in a recursive manner.
When we reach the left-most node of the tree with the above steps, we visit that current node and then go to the left-most node of its right subtree (if it exists).
Simply put, the inorder traversal visits the nodes in the order: left subtree, root node, right subtree.
2. Preorder Tree Traversal
It is performed in a similar way to the inorder traversal, but the order of visiting the nodes is different from that of the inorder traversal.
In the case of the preorder traversal of binary tree, we visit the current node first. After that,
we visit the leftmost subtree. Once we reach the leaf node(or have covered all the nodes of
the left subtree), we move towards the right sub-tree. In the right subtree, we recursively call
our function to do the traversal in a similar manner.
It follows the NLR structure. It means first visit the current node, followed by recursively
visiting the left subtree and then the right subtree. Below given is an image for the same.
The algorithm for the preorder traversal of a binary tree can be stated as: visit the current node, then recursively traverse the left subtree, and finally recursively traverse the right subtree.
3. Postorder Tree Traversal
Having discussed the inorder and preorder traversals of a binary tree, you can see how they apply DFS. Similar to them is the postorder traversal of a binary tree, where we visit the left subtree and the right subtree before visiting the current node in recursion.
In a nutshell, the postorder traversal of a binary tree visits the nodes in the order: left subtree, right subtree, and finally the root node (LRN).
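A short recursive C sketch of the three DFS traversals described above, assuming the struct node with left and right pointers used earlier (function names are illustrative):

#include <stdio.h>

struct node { int data; struct node *left, *right; };

/* Inorder (LNR): left subtree, current node, right subtree. */
void inorder(const struct node *root)
{
    if (root == NULL) return;
    inorder(root->left);
    printf("%d ", root->data);
    inorder(root->right);
}

/* Preorder (NLR): current node, left subtree, right subtree. */
void preorder(const struct node *root)
{
    if (root == NULL) return;
    printf("%d ", root->data);
    preorder(root->left);
    preorder(root->right);
}

/* Postorder (LRN): left subtree, right subtree, current node. */
void postorder(const struct node *root)
{
    if (root == NULL) return;
    postorder(root->left);
    postorder(root->right);
    printf("%d ", root->data);
}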
A binary tree of depth n can also be represented using an array of size 2^(n+1) - 1 (with the root at depth 0). If a parent element is at index p, then its left child is stored at index 2p + 1 and its right child at index 2p + 2.
Ans. Binary Search Trees (BSTs) are powerful data structures used in various
applications due to their efficient search and retrieval capabilities. Here are some of
their key applications:
2. Sorting Algorithms:
• Tree Sort: inserting the elements into a BST and then performing an inorder traversal visits them in ascending order, which is the basis of the tree sort algorithm.
• Sorted output on demand: because the inorder traversal of a BST is always sorted, a BST can maintain a changing collection from which a sorted listing can be produced at any time.
3. Set Operations:
• Union: Finding the union of two sets represented by BSTs can be achieved
efficiently by traversing and merging the trees while maintaining order.
• Intersection: Finding the intersection of two sets represented by BSTs can be
done by comparing elements in both trees and adding matches to a new BST.
4. In-Memory Caching:
• Applications can use BSTs to cache frequently accessed data in memory for
faster retrieval, improving performance and reducing database load.
5. Network Routing:
• Some routing algorithms utilize BSTs to efficiently map network addresses to
their destinations, enabling faster data routing.
Dictionaries:
Imagine a dictionary containing words and their definitions. A BST can be used to
store this information, where each node holds a word and its definition, and the
nodes are ordered alphabetically. To find the definition of a word, you can start at the
root of the tree and compare the word you're searching for with the word at the
current node.
• If the search word is alphabetically less than the current node's word, you
move to the left child subtree.
• If the search word is alphabetically greater, you move to the right child
subtree.
• If the search word matches the current node's word, you've found the
definition.
This process continues until you reach a leaf node or find the desired word. This
efficient searching is a key benefit of BSTs in dictionary applications.
BSTs offer a versatile and efficient solution for various data organization and
retrieval tasks. Understanding their applications and how they work can be valuable
for programmers and data structure enthusiasts alike.
Q22) What is Threaded binary tree? Write advantages and disadvantages of it. Explain it with
suitable example. Its application.
Ans. In the linked representation of binary trees, more than one half of the link fields contain
NULL values which results in wastage of storage space. If a binary tree consists of n nodes
then n+1 link fields contain NULL values. So in order to effectively manage the space, a
method was devised by Perlis and Thornton in which the NULL links are replaced with special
links known as threads. Such binary trees with threads are known as threaded binary trees.
Each node in a threaded binary tree either contains a link to its child node or thread to other
nodes in the tree.
One-way threaded Binary Trees:
In one-way threaded binary trees, a thread appears either in the right or the left link field of a node. If it appears in the right link field of a node, it points to the next node that appears on performing an inorder traversal. Such trees are called right threaded binary trees.
If a thread appears in the left field of a node, it points to the node's inorder predecessor. Such trees are called left threaded binary trees. Left threaded binary trees are used less often, as they don't yield the same advantages as right threaded binary trees. In one-way threaded binary trees, the right link field of the last node and the left link field of the first node contain NULL. In order to distinguish threads from normal links, threads are represented by dotted lines.
Two-way threaded Binary Trees:
In two-way threaded binary trees, the right link field of a node containing a NULL value is replaced by a thread that points to the node's inorder successor, and the left link field of a node containing a NULL value is replaced by a thread that points to the node's inorder predecessor.
Advantages:
o In a threaded binary tree, traversal of the nodes is linear and fast, so there is no requirement for a stack. If a stack were used, it would consume a lot of memory and time.
o It is more general, as one can efficiently determine the successor and predecessor of any node by simply following the threads and links. It almost behaves like a circular linked list.
Disadvantages:
o When implemented, the threaded binary tree needs to maintain extra information for each node to indicate whether each link field points to an ordinary child node or to the node's successor or predecessor.
o Insertion into and deletion from a threaded binary tree are more time consuming, since both threads and ordinary links need to be maintained.
Example:
Imagine a binary tree representing a small family tree with John as the root, Mary as John's left child, and Peter as John's right child. The inorder traversal of this tree is Mary, John, Peter. In a right-threaded version, Mary's right link, which would otherwise be NULL, becomes a thread pointing to her inorder successor John; Peter's right link remains NULL because Peter is the last node in inorder. In a two-way threaded version, Peter's left link would similarly become a thread pointing back to his inorder predecessor John. Threads are usually drawn as dotted lines to distinguish them from ordinary child links.
Applications:
• In-order traversal is dominant: if the main operation is traversing the tree in order, threaded trees can be faster and simpler.
• Simplicity is preferred: for basic data structures where functionality is limited but ease of implementation is important, threading can be a good choice.
However, threaded trees are not generally recommended for general-purpose data
structures due to their limitations in flexibility and error-proneness. Traditional binary trees
with explicit pointers offer more versatility and are better suited for diverse applications.
Q23) What is Heap Sort Method? Explain binary tree sorting using heap sort.
Ans. Graph
A graph can be defined as a group of vertices and the edges that are used to connect these vertices. A graph can be seen as a generalization of a tree in which cycles are allowed, where the vertices (nodes) can maintain any complex relationship among them instead of only a parent-child relationship.
Definition
A graph G(V, E) consists of a finite set of vertices V and a set of edges E, where each edge joins a pair of vertices. A graph G(V, E) with 5 vertices (A, B, C, D, E) and six edges ((A,B), (B,C), (C,E), (E,D), (D,B), (D,A)) is shown in the following figure.
Undirected Graph
In an undirected graph, edges are not associated with directions. The graph shown in the above figure is an undirected graph, since its edges are not attached to any direction. If an edge exists between vertices A and B, then the vertices can be traversed from B to A as well as from A to B.
Directed Graph
In a directed graph, edges form an ordered pair. An edge represents a specific path from some vertex A to another vertex B, where node A is called the initial node and node B the terminal node.
Path
A path can be defined as a sequence of nodes V0, V1, ..., VN such that each consecutive pair of nodes is connected by an edge.
Closed Path
A path is called a closed path if the initial node is the same as the terminal node, i.e., if V0 = VN.
Simple Path
If all the nodes of the path are distinct, the path is called a simple path; if all the nodes are distinct with the exception that V0 = VN, then the path P is called a closed simple path.
Cycle
A cycle can be defined as the path which has no repeated edges or vertices except the
first and last vertices.
Connected Graph
A connected graph is the one in which some path exists between every two vertices
(u, v) in V. There are no isolated nodes in connected graph.
Complete Graph
A complete graph is one in which every node is connected to all other nodes. A complete graph contains n(n-1)/2 edges, where n is the number of nodes in the graph.
Weighted Graph
In a weighted graph, each edge is assigned with some data such as length or weight.
The weight of an edge e can be given as w(e) which must be a positive (+) value
indicating the cost of traversing the edge.
Digraph
A digraph is a directed graph in which each edge of the graph is associated with some
direction and the traversing can be done only in the specified direction.
Loop
An edge whose two end points are the same node is called a loop.
Adjacent Nodes
If two nodes u and v are connected via an edge e, then the nodes u and v are called neighbours or adjacent nodes.
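As a hedged sketch, the example graph G with vertices A-E and the six edges listed above can be stored as an adjacency list in C (mapping A-E to indices 0-4; the names add_edge, pool, and adj are illustrative assumptions):

#include <stdio.h>

#define V 5                     /* vertices A, B, C, D, E mapped to indices 0..4 */

struct edge { int to; struct edge *next; };

struct edge pool[24];           /* static pool: 2 entries per undirected edge */
int used = 0;
struct edge *adj[V];            /* adjacency list heads, initially NULL */

/* Add an undirected edge u-v: v goes into u's list and u into v's list. */
void add_edge(int u, int v)
{
    pool[used].to = v;  pool[used].next = adj[u];  adj[u] = &pool[used++];
    pool[used].to = u;  pool[used].next = adj[v];  adj[v] = &pool[used++];
}

int main(void)
{
    /* Edges (A,B), (B,C), (C,E), (E,D), (D,B), (D,A) from the definition above. */
    add_edge(0, 1); add_edge(1, 2); add_edge(2, 4);
    add_edge(4, 3); add_edge(3, 1); add_edge(3, 0);

    /* Print each vertex followed by its adjacent (neighbour) vertices. */
    for (int u = 0; u < V; u++)
    {
        printf("%c:", 'A' + u);
        for (struct edge *e = adj[u]; e != NULL; e = e->next)
            printf(" %c", 'A' + e->to);
        printf("\n");
    }
    return 0;
}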
Ans.