--------------------------------------------------------- DS C++ ------------------------------------------------------------------

Q1) What is Linked List in data structure? Explain its representation and working.
Ans.

o A linked list can be defined as a collection of objects called nodes that are
randomly stored in memory.
o A node contains two fields, i.e. the first stores the data and the second stores
the address of the next node.
o The last node of the list contains a null pointer.

Why use linked list


1. It allocates memory dynamically. All the nodes of a linked list are non-
contiguously stored in memory and linked together with the help of
pointers.
2. Sizing is no longer a problem since we do not need to define the size at the time
of declaration. The list grows as per the program's demand and is limited only by
the available memory space.
Singly linked list:
Here, each node has data and pointer of the next node.

Representation of the node in a singly linked list

struct node
{
    int data;
    struct node *next;
};

Doubly linked list


A doubly linked list is a more complex type of linked list in which a node contains a pointer to the
previous as well as the next node in the sequence. Therefore, in a doubly linked list, a node
consists of three parts: node data, a pointer to the next node in the sequence (next pointer),
and a pointer to the previous node (previous pointer).
Circular Singly Linked List

In a circular Singly linked list, the last node of the list contains a pointer to the first node of
the list.

We traverse a circular singly linked list until we reach the same node where we started. The
circular singly linked list has no beginning and no ending. There is no null value present in the
next part of any of the nodes.
Circular Doubly Linked List
A circular doubly linked list is a more complex type of data structure in which a node
contains pointers to its previous node as well as the next node. A circular doubly linked list
does not contain NULL in any of its nodes. The last node of the list contains the address of the
first node of the list, and the first node of the list contains the address of the last node in its
previous pointer.

Representation:

1. Node: Each element in a linked list is called a node. A node typically consists of two
fields:
i) Data field: This field stores the actual data element, like an integer, a string, or an
object.

ii) Pointer field: This field holds the memory address of the next node in the
sequence. It is null for the last node.
Working:
1. Head Node: To access the first element, you need a starting point, typically called the
head node. It has a pointer that points to the second node in the list.
2. Following the Chain: By following the pointers through each node, you can traverse
the entire list and access all its elements. This sequential access is one of the main
strengths of linked lists.
3. Dynamic Memory Allocation: Unlike arrays with fixed sizes, linked lists grow and
shrink dynamically. Adding and removing elements only involves adjusting the
pointers, making them efficient for manipulating data of unknown size.
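As a concrete illustration of the traversal described above (following the chain of next pointers from the head), here is a minimal C sketch. It assumes the singly linked node structure given earlier; the function name printList is only illustrative.

#include <stdio.h>

struct node
{
    int data;
    struct node *next;
};

/* Visit every node by following the next pointers until NULL is reached. */
void printList(struct node *head)
{
    struct node *current = head;
    while (current != NULL)
    {
        printf("%d -> ", current->data);
        current = current->next;
    }
    printf("NULL\n");
}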
Q2) What is Doubly Linked List in data structure? Explain append(), addatBeg(),
addafter() & delete() operations in detail.

Ans. A doubly linked list is a complex type of linked list in which a node contains a pointer to
the previous as well as the next node in the sequence. Therefore, in a doubly linked list, a
node consists of three parts: node data, a pointer to the next node in the sequence (next pointer),
and a pointer to the previous node (previous pointer). A sample node in a doubly linked list is
shown below.

1. Data: This stores the actual data element like an integer, a string, or an object.
2. Next Pointer: This points to the next node in the sequence.
3. Previous Pointer: This points to the previous node in the sequence, enabling backward
traversal.

In C, structure of a node in doubly linked list can be given as:

struct node
{
    struct node *prev;
    int data;
    struct node *next;
};

The prev part of the first node and the next part of the last node will always contain
null, indicating the end of the list in each direction.
1. append(data):
o Create a new node with the given data and set both of its pointers to null.
o If the list is empty, make the new node the head node.
o Otherwise, find the last node by traversing the list using the next
pointers.
o Set the last node's next pointer to the new node.
o Update the new node's previous pointer to point to the last node.

2. addAtBeg(data):
o Create a new node with the given data and set its next pointer to the
current head node.
o Update the current head node's previous pointer to point to the new
node.
o Make the new node the new head node.

3. addAfter (data, target):


o Find the node containing the target data by traversing the list using the
next pointers.
o If the target is not found, return an error.
o Create a new node with the given data.
o Set the new node's next pointer to the target node's next pointer.
o Set the target node's next pointer to the new node.
o Update the new node's previous pointer to point to the target node.
o If the new node now has a next node, update that node's previous pointer to
point to the new node.

4. delete(data):
o Find the node containing the target data by traversing the list using the
next pointers.
o If the target is not found, return an error.
o If the target is the head node, make the next node the new head and set its
previous pointer to null.
o If the target is the tail node, make the previous node the new tail and set its
next pointer to null.
o Otherwise, update the previous and next pointers of the surrounding nodes to
bypass the deleted node.
o Free the memory allocated to the deleted node.
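The steps above can be sketched in C as follows. This is a minimal illustration assuming the doubly linked node structure shown earlier; names such as append and deleteNode are only for this example, and error handling is kept to a minimum.

#include <stdlib.h>

struct node
{
    struct node *prev;
    int data;
    struct node *next;
};

/* Append a new node at the end of the list; returns the (possibly new) head. */
struct node *append(struct node *head, int data)
{
    struct node *newNode = malloc(sizeof(struct node));
    newNode->data = data;
    newNode->prev = newNode->next = NULL;

    if (head == NULL)          /* empty list: new node becomes the head */
        return newNode;

    struct node *last = head;
    while (last->next != NULL) /* walk to the last node */
        last = last->next;

    last->next = newNode;      /* link the new node after the last node */
    newNode->prev = last;
    return head;
}

/* Delete the first node containing the given data; returns the new head. */
struct node *deleteNode(struct node *head, int data)
{
    struct node *cur = head;
    while (cur != NULL && cur->data != data)
        cur = cur->next;
    if (cur == NULL)           /* target not found */
        return head;

    if (cur->prev != NULL)
        cur->prev->next = cur->next;
    else
        head = cur->next;      /* deleting the head node */
    if (cur->next != NULL)
        cur->next->prev = cur->prev;

    free(cur);
    return head;
}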
Q3) What is merging of Linked List? Show its working with example.
Ans. Merging Linked Lists: Combining Two Sorted Lists
Merging two linked lists refers to the process of combining them into a single sorted
list while maintaining the ascending order of elements. This is a fundamental
operation in algorithms such as merge sort.

Two Key Approaches:


There are two main approaches to merging linked lists:
1. Iterative Approach: This method uses two pointers, one for each list, and
compares their values at each step. The smaller element is added to the
merged list, and its corresponding pointer is advanced. This process
continues until both lists are exhausted.
2. Recursive Approach: This method compares the heads of the two lists, picks
the smaller one as the head of the result, and recursively merges the
remaining nodes of that list with the other list.

Working Example (Iterative Approach):

Step 1: Define two sorted linked lists and initialize pointers for each (head1 and
head2).
LL1: 1 -> 3 -> 5 -> 7
LL2: 2 -> 4 -> 6 -> 8
head1 = 1 -> 3 -> 5 -> 7
head2 = 2 -> 4 -> 6 -> 8

Step 2: Create a new empty list to hold the merged elements (mergedList).
mergedList = null

Step 3: Repeat until both head pointers reach null:


• Compare the values pointed to by head1 and head2.
• If head1's value is smaller, add it to mergedList and advance head1.
• Otherwise, add head2's value to mergedList and advance head2.

Iteration 1:
compare 1 (head1) with 2 (head2) - 1 is smaller
add 1 to mergedList, advance head1
mergedList = 1 -> null

... remaining iterations follow the same logic ...


Final result:
mergedList = 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8
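A minimal C sketch of the iterative approach above, assuming the singly linked node from Q1; a dummy head node is used purely to simplify the pointer handling, and the function name mergeLists is illustrative.

#include <stdlib.h>

struct node
{
    int data;
    struct node *next;
};

/* Merge two sorted lists into one sorted list (iterative). */
struct node *mergeLists(struct node *head1, struct node *head2)
{
    struct node dummy;           /* dummy node to anchor the merged list */
    struct node *tail = &dummy;
    dummy.next = NULL;

    while (head1 != NULL && head2 != NULL)
    {
        if (head1->data <= head2->data)   /* take the smaller front element */
        {
            tail->next = head1;
            head1 = head1->next;
        }
        else
        {
            tail->next = head2;
            head2 = head2->next;
        }
        tail = tail->next;
    }
    /* Attach whatever remains of the non-empty list. */
    tail->next = (head1 != NULL) ? head1 : head2;
    return dummy.next;
}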

Recursive Approach:
The recursive approach performs the same comparisons, but after choosing the smaller
head it recursively merges the remaining nodes and links the result behind that node.
The code is usually shorter than the iterative version, but every comparison adds a
frame to the call stack, so it consumes extra memory for very long lists.
Both approaches achieve the same result of creating a single sorted list by merging
two sorted linked lists. Choosing the optimal approach depends on your specific
needs and coding style.

Q4) What are recursive operations on a linked list?


Ans. Recursive Operations on Linked Lists

Recursion is a powerful programming technique where a function calls itself, often


with smaller or simpler inputs. This makes it well-suited for manipulating data
structures like linked lists, where operations tend to involve traversing and
processing each element sequentially.

Here are some common recursive operations on linked lists:

1. Traversal:
• Printing/Visiting Elements: A function can recursively visit each node in the
list, printing its data or performing any desired operation on it. It typically calls
itself on the next pointer of the current node, stopping at the empty list (null
pointer).

2. Searching:
• Finding a Specific Element: A function can compare the target value with the
current node's data and recursively search the next pointer if not found. If
found, it can return the matching node or its position.
3. Modification:
• Reversing the List: A function can recursively reverse the remainder of the list and
then re-link the current node behind it, i.e. make the next node point back to the
current node and set the current node's next pointer to null. The recursion stops at
the last node, which becomes the head of the reversed list.
• Inserting Elements: A function can find the target position (e.g., before/after a
specific node) and recursively insert a new node, adjusting the next pointers
appropriately.
• Deleting Elements: A function can find the target node and bypass it by
adjusting the next pointers of surrounding nodes, then recursively delete the
isolated node.

4. Calculations:
• Finding Length: A function can recursively call itself on the next pointer and
increment a counter, returning the final count at the end.
• Sum of Elements: A function can sum the current node's data with the
recursively calculated sum from the remaining list.
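A minimal C sketch of the two calculations above, assuming the singly linked node from Q1; the base case (an empty list) returns 0 in both functions.

#include <stddef.h>

struct node
{
    int data;
    struct node *next;
};

/* Length = 1 (for this node) + length of the rest of the list. */
int length(struct node *head)
{
    if (head == NULL)
        return 0;
    return 1 + length(head->next);
}

/* Sum = this node's data + sum of the rest of the list. */
int sumList(struct node *head)
{
    if (head == NULL)
        return 0;
    return head->data + sumList(head->next);
}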

Benefits of Recursive Operations:


• Concise and Elegant Code: Recursive solutions can be shorter and more
readable than iterative loops, especially for complex operations.
• Natural Fit for Linked Lists: Recursion mirrors the sequential nature of linked
lists, making it intuitive and easy to reason about.

Drawbacks of Recursive Operations:


• Overhead and Stack Usage: Each recursive call adds a frame to the call
stack, which can lead to stack overflow for very large lists and deep recursion.
• Less Performant than Iterative Solutions: Iterative loops can be faster and
more memory-efficient, especially for simple operations.

Choosing the Right Approach:

The decision to use recursion vs. iteration depends on the specific operation, its
complexity, and performance requirements. Consider factors like readability,
potential for stack overflow, and desired efficiency before making a choice.
Q5) What is a polynomial? Explain the representation of a polynomial using the linked list data
structure.
Ans.
A polynomial is a collection of terms, each comprising a coefficient and an
exponent. It can be represented using a linked list, and this representation makes
polynomial manipulation efficient.

While representing a polynomial using a linked list, each polynomial term represents
a node in the linked list. To get better efficiency in processing, we assume that the
terms of the polynomial are stored within the linked list in the order of decreasing
exponents. Also, no two terms have the same exponent, and no term has a zero
coefficient; a term written without a coefficient is taken to have a coefficient of 1.

Each node of a linked list representing a polynomial consists of three parts:

o The first part contains the value of the coefficient of the term.
o The second part contains the value of the exponent.
o The third part, LINK points to the next term (next node).

The structure of a node of a linked list that represents a polynomial is


shown below:
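A minimal C sketch of such a node (the field names coeff, exp, and link simply follow the three parts described above):

struct polyNode
{
    int coeff;              /* coefficient of the term                 */
    int exp;                /* exponent of the term                    */
    struct polyNode *link;  /* LINK: pointer to the next term (node)   */
};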

o Consider the polynomial P(x) = 7x^4 + 15x^3 - 2x^2 + 9. Here 7, 15, -2, and 9 are the
coefficients, and 4, 3, 2, 0 are the exponents of the terms in the polynomial. On
representing this polynomial using a linked list, we have:

Head -> (7, 4) -> (15, 3) -> (-2, 2) -> (9, 0) -> null

o Observe that the number of nodes equals the number of terms in the
polynomial. So, we have 4 nodes. Moreover, the terms are stored in order of
decreasing exponents in the linked list. Such a representation of polynomials using
linked lists makes operations like addition, subtraction, multiplication, etc. on
polynomials very easy.
Benefits of Linked List Representation:

• Dynamic Size: Adding or removing terms only involves manipulating pointers,


making the representation flexible for polynomials of any degree.
• Efficient Operations: Performing operations like addition, subtraction, and
multiplication becomes easier by traversing the list and comparing terms with
matching exponents.
• Memory Efficiency: Only non-zero terms need to be stored, saving space
compared to fixed-size representations.

Example:
Consider the polynomial 2x^3 + 4x^2 - 3x + 1. Its linked list representation would
look like this:
Head -> (2, 3) -> (4, 2) -> (-3, 1) -> (1, 0) -> null
Each node stores the coefficient and exponent of a term. Traversing the list allows
evaluating the polynomial for any given value of x.

Further Considerations:
• Some implementations store variables explicitly in each node, while others
might use a single variable reference throughout the list.
• Additional complexity can be added to handle polynomials with multiple
variables or complex coefficients.

Overall, representing polynomials with linked lists offers a dynamic and efficient way
to store and manipulate these mathematical expressions in computer programs.
Q6) What is stack in data structure? Explain in detail with example. (Explain its
representation using an array or a linked list.)

Ans. A Stack is a linear data structure that follows the LIFO (Last-In-First-
Out) principle. A stack has only one open end, whereas a queue has two ends (front and rear).
It contains only one pointer, the top pointer, which points to the topmost element of the stack.
Whenever an element is added to the stack, it is added on the top of the stack, and an
element can be deleted only from the top of the stack. In other words, a stack can be defined
as a container in which insertion and deletion can be done from one end,
known as the top of the stack.

Some key points related to stack


o It is called a stack because it behaves like a real-world stack, e.g. a pile of plates or books.
o A Stack is an abstract data type with a pre-defined capacity, which means that it can
store only a limited number of elements.
o It is a data structure that follows some order to insert and delete the elements, and
that order can be LIFO or FILO.

Working of Stack
Stack works on the LIFO pattern. As we can observe in the below figure there are five
memory blocks in the stack; therefore, the size of the stack is 5.

Suppose we want to store the elements in a stack and let's assume that stack is empty.
We have taken the stack of size 5 as shown below in which we are pushing the
elements one by one until the stack becomes full.
Since the size of the stack is 5, the stack is now full. In the above cases, we can observe
that the stack gets filled up from the bottom to the top.

When we perform the delete operation on the stack, there is only one way for entry
and exit as the other end is closed. It follows the LIFO pattern, which means that the
value entered first will be removed last. In the above case, the value 5 is entered first,
so it will be removed only after the deletion of all the other elements.
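A minimal sketch of an array-based stack in C, assuming a fixed capacity of 5 as in the discussion above; the names push, pop, and MAX are illustrative.

#include <stdio.h>

#define MAX 5          /* capacity of the stack */

int stack[MAX];
int top = -1;          /* -1 indicates an empty stack */

/* Push: add an element on the top of the stack. */
void push(int value)
{
    if (top == MAX - 1)
    {
        printf("Stack overflow\n");
        return;
    }
    stack[++top] = value;
}

/* Pop: remove and return the topmost element. */
int pop(void)
{
    if (top == -1)
    {
        printf("Stack underflow\n");
        return -1;
    }
    return stack[top--];
}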

Representation using an array or a linked list


An array is a collection of similar data items stored at contiguous memory
locations, and its elements can be accessed randomly using indices. Arrays
can be used to store collections of primitive data types such as int, float, double,
char, etc. of any particular type. In addition, an array in C/C++ can store derived data
types such as structures, pointers, etc. Given below is a pictorial representation of an
array.

An array is a container that can hold a fixed number of elements and these elements
should be of the same type. Most of the data structures make use of arrays to
implement their algorithms.

A linked list is a linear data structure consisting of nodes where each node contains a
reference to the next node. To create a linked list we need a pointer that points to the first
node of the list.

Approach: To create an array of linked lists, the main requirements are:
1. An array of pointers.
2. Another pointer that keeps track of the above array of pointers by pointing
to its first pointer. This pointer is called a pointer to pointer. Below is the
pictorial representation of the array of linked lists:
Q7) What is polish notations? Explain its features and give examples.
Ans. Polish Notation in Data Structures
Polish notation (prefix notation) and its reverse, postfix notation, are ways of representing
mathematical and logical expressions without parentheses. Instead of using
parentheses to define the order of operations, these notations rely on the position
of operators relative to operands to indicate precedence. This can lead to simpler
and more concise expressions compared to infix notation (the standard
mathematical notation with operators between operands).

Types of Polish Notation:


• Prefix Notation (Polish Notation): Operators come before operands. Example:
+ 2 3, which is equivalent to 2 + 3 in infix notation.
• Postfix Notation (Reverse Polish Notation): Operators follow operands.
Example: 2 3 +, which is equivalent to 2 + 3 in infix notation.

Features of Polish Notation:


• Parenthesis-free: Eliminates the need for parentheses to define operation
order, making expressions more compact and easier to read.
• Explicit operator precedence: The position of the operator relative to operands
clearly defines its priority.
• Efficient evaluation: Expressions can be evaluated directly using a stack,
eliminating the need for complex parsing rules or operator precedence tables.
• Flexibility: Suited for a wide range of mathematical and logical operations,
including arithmetic, comparison, and Boolean operations.
Examples:
• Addition: + 2 3 (prefix) or 2 3 + (postfix)
• Multiplication: * 2 3 (prefix) or 2 3 * (postfix)
• Negation (unary minus): - 2 (prefix) or 2 - (postfix)
• More complex expressions: the infix expression (2 / 3) + 4 becomes + / 2 3 4 in
prefix and 2 3 / 4 + in postfix.
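Because operands appear before their operator, a postfix expression can be evaluated with a single stack, as noted in the features above. Below is a minimal C sketch; it assumes single-digit operands and only the operators +, -, * and /.

#include <stdio.h>
#include <ctype.h>

int stack[100];
int top = -1;

void push(int v) { stack[++top] = v; }
int  pop(void)   { return stack[top--]; }

/* Evaluate a postfix expression such as "23*4+" (i.e. 2*3 + 4). */
int evalPostfix(const char *expr)
{
    for (int i = 0; expr[i] != '\0'; i++)
    {
        char c = expr[i];
        if (isdigit((unsigned char)c))
        {
            push(c - '0');            /* operand: push its numeric value */
        }
        else
        {
            int b = pop();            /* second operand */
            int a = pop();            /* first operand  */
            switch (c)
            {
                case '+': push(a + b); break;
                case '-': push(a - b); break;
                case '*': push(a * b); break;
                case '/': push(a / b); break;
            }
        }
    }
    return pop();                     /* final result remains on the stack */
}

int main(void)
{
    printf("%d\n", evalPostfix("23*4+"));   /* prints 10 */
    return 0;
}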

Applications of Polish Notation:


• Compiler design: Polish notation is used internally in some compilers for
easier parsing and code generation.
• Stack machines: Certain computer architectures use a stack-based approach
and rely on Polish notation for efficient instruction execution.
• Calculators: Many scientific and stack-based calculators accept expressions in
Reverse Polish Notation and evaluate them directly without parentheses.
• Logic programming: Logic programming languages like Prolog often use
Polish notation for representing and manipulating logical expressions.

Overall, Polish notation offers a powerful and efficient alternative to infix notation for
representing and manipulating expressions. Its simplicity, clear operator precedence,
and suitability for stack-based processing make it a valuable tool in various data
structures and algorithms.

Q8) What are queues in data structure? Explain all features with example.
Ans.
Queue
1. A queue can be defined as an ordered list which enables insert operations to be
performed at one end called REAR and delete operations to be performed at another
end called FRONT.

2. A queue is referred to as a First In First Out (FIFO) list.

3. For example, people waiting in line for a rail ticket form a queue.
Applications of Queue
Queues perform actions on a first in first out basis, which is quite fair
for ordering actions. Various applications of queues are discussed
below.

1. Queues are widely used as waiting lists for a single shared resource like printer,
disk, CPU.
2. Queues are used in asynchronous transfer of data (where data is not being
transferred at the same rate between two processes) for eg. pipes, file IO,
sockets.
3. Queues are used as buffers in most of the applications like MP3 media player,
CD player, etc.
4. Queue are used to maintain the play list in media players in order to add and
remove the songs from the play-list.
5. Queues are used in operating systems for handling interrupts.
Q9) Explain Queue representation using array with suitable example.

Ans. Array representation of Queue


We can easily represent a queue by using a linear array. There are two variables, i.e. front
and rear, that are maintained for every queue. The front and rear variables
point to the positions from where deletions and insertions are performed in the queue.
Initially, the value of front and rear is -1, which represents an empty queue. The array
representation of a queue containing 5 elements, along with the respective values of
front and rear, is shown in the following figure.

The above figure shows a queue of characters forming the English word "HELLO".
Since no deletion has been performed in the queue so far, the value of front
remains -1. However, the value of rear increases by one every time an insertion is
performed in the queue. After inserting one more element into the queue shown in the above
figure, the queue will look like the following: the value of rear becomes 5
while the value of front remains the same.
After deleting an element, the value of front will increase from -1 to 0, and the
queue will look like the following.
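A minimal C sketch of this array representation; MAX, enqueue, and dequeue are illustrative names, and the simple front/rear scheme below has the usual drawback that freed slots at the beginning of the array are not reused (which motivates the circular queue in Q11).

#include <stdio.h>

#define MAX 10

int queue[MAX];
int front = -1, rear = -1;   /* -1, -1 means the queue is empty */

/* Insert at the rear end. */
void enqueue(int value)
{
    if (rear == MAX - 1)
    {
        printf("Queue overflow\n");
        return;
    }
    if (front == -1)
        front = 0;           /* first insertion */
    queue[++rear] = value;
}

/* Delete from the front end. */
int dequeue(void)
{
    if (front == -1 || front > rear)
    {
        printf("Queue underflow\n");
        return -1;
    }
    return queue[front++];
}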
Q10) Explain Queue representation using Linked List with suitable example.

Ans. Linked List implementation of Queue


Due to the drawbacks discussed in the previous section, the array
implementation cannot be used for large scale applications where queues are
implemented. One of the alternatives to the array implementation is the linked list
implementation of a queue.

The storage requirement of the linked representation of a queue with n elements is O(n),
while the time requirement for the operations is O(1).

In a linked queue, each node of the queue consists of two parts i.e. data part and the
link part. Each element of the queue points to its immediate next element in the
memory.

In the linked queue, there are two pointers maintained in the memory i.e. front pointer
and rear pointer. The front pointer contains the address of the starting element of the
queue while the rear pointer contains the address of the last element of the queue.

Insertion and deletions are performed at rear and front end respectively. If front and
rear both are NULL, it indicates that the queue is empty.

The linked representation of queue is shown in the following figure.
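A minimal C sketch of the linked queue described above; the node structure and the insert/delete functions follow the front/rear pointer scheme, with illustrative names.

#include <stdio.h>
#include <stdlib.h>

struct qnode
{
    int data;
    struct qnode *next;
};

struct qnode *front = NULL, *rear = NULL;   /* both NULL => empty queue */

/* Insertion is performed at the rear end. */
void enqueue(int value)
{
    struct qnode *newNode = malloc(sizeof(struct qnode));
    newNode->data = value;
    newNode->next = NULL;
    if (rear == NULL)
        front = rear = newNode;   /* first element */
    else
    {
        rear->next = newNode;
        rear = newNode;
    }
}

/* Deletion is performed at the front end. */
int dequeue(void)
{
    if (front == NULL)
    {
        printf("Queue underflow\n");
        return -1;
    }
    struct qnode *temp = front;
    int value = temp->data;
    front = front->next;
    if (front == NULL)
        rear = NULL;              /* queue became empty */
    free(temp);
    return value;
}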


Q11) What are Circular Queues? Explain working with example.
Ans. A circular queue is similar to a linear queue as it is also based on the FIFO (First In
First Out) principle except that the last position is connected to the first position in a
circular queue that forms a circle. It is also known as a Ring Buffer.

There was one limitation in the array implementation of a queue: if the rear reaches
the end position of the queue, there may be vacant spaces left at the beginning
which cannot be utilized. So, to overcome this limitation, the concept of the
circular queue was introduced.

As we can see in the above image, the rear is at the last position of the Queue and front is
pointing somewhere rather than the 0th position. In the above array, there are only two
elements and other three positions are empty. The rear is at the last position of the Queue; if
we try to insert the element then it will show that there are no empty spaces in the Queue.
One solution to avoid such wastage of memory space is to shift the elements to the left
and adjust the front and rear ends accordingly. This is not a practically good approach
because shifting all the elements consumes a lot of time. The efficient way to avoid
the wastage of memory is to use the circular queue data structure.
Circular Queue Working Example: Buffering Data
Let's consider a circular queue used to buffer data between a sensor and a
processing unit in a real-time system. The sensor generates data points at a
constant rate, while the processing unit can only handle them at a slower pace.
Scenario:

• The queue has a capacity of 5 elements.


• The sensor generates a data point every 1 second.
• The processing unit consumes data points from the queue every 2 seconds.

Steps:

1. Initialization: The queue is initially empty, with both head and tail pointers
pointing to the same location.
2. Sensor Data Arrival: Every second, the sensor generates a new data point.
This data point is added to the queue after the current tail pointer. The tail
pointer is incremented to point to the newly added element.
3. Queue Buffering: As the sensor keeps generating data, the queue fills up.
Since it is circular, the tail pointer wraps around to the beginning of the array
and reuses the slots freed by the processing unit.
4. Processing Unit Data Consumption: Every 2 seconds, the processing unit
fetches the data point at the head of the queue. The head pointer is then
incremented to point to the next element.
5. Dynamic Occupancy: Through this process, the number of elements in the
queue varies with the data flow. If the sensor generates data faster
than the processing unit consumes it, the queue fills up and may
overflow. If the processing unit consumes data faster than the sensor
generates it, the queue becomes empty.
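The wrap-around behaviour in the steps above is usually implemented with modulo arithmetic on the array indices. A minimal C sketch, assuming a capacity of 5 as in the scenario; the names are illustrative.

#include <stdio.h>

#define SIZE 5

int cq[SIZE];
int front = -1, rear = -1;

/* Insert an element; the rear index wraps around using modulo arithmetic. */
void enqueue(int value)
{
    if ((rear + 1) % SIZE == front)
    {
        printf("Circular queue is full\n");
        return;
    }
    if (front == -1)
        front = 0;
    rear = (rear + 1) % SIZE;
    cq[rear] = value;
}

/* Remove an element from the front; the front index also wraps around. */
int dequeue(void)
{
    if (front == -1)
    {
        printf("Circular queue is empty\n");
        return -1;
    }
    int value = cq[front];
    if (front == rear)
        front = rear = -1;        /* queue became empty */
    else
        front = (front + 1) % SIZE;
    return value;
}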

This example illustrates how circular queues can be implemented in real-world


scenarios to effectively manage data flow and buffer data between producers and
consumers with varying rates.
Q12) What is Dequeue in data structure. Explain its working with example.
Ans. A queue is a data structure in which whatever comes first will go out first, and it follows
the FIFO (First-In-First-Out) policy. Insertion in the queue is done from one end known as
the rear end or the tail, whereas the deletion is done from another end known as the front
end or the head of the queue.

The deque stands for Double Ended Queue. Deque is a linear data structure where the
insertion and deletion operations are performed from both ends. We can say that
deque is a generalized version of the queue.

Though the insertion and deletion in a deque can be performed on both ends, it does
not follow the FIFO rule. The representation of a deque is given as follows -

Types of deque
There are two types of deque -

o Input restricted queue


o Output restricted queue

Input restricted Queue

In input restricted queue, insertion operation can be performed at only one end, while
deletion can be performed from both ends.
Output restricted Queue

In output restricted queue, deletion operation can be performed at only one end, while
insertion can be performed from both ends.

Operations(Working):

1. Front Insertion (PushFront):


o Create a new node with the given data.
o Update the new node's next pointer to point to the current head, and the
current head's previous pointer (if the deque is not empty) to point to the new node.
o Update the head pointer to point to the newly created node.

2. Rear Insertion (PushBack):


o Create a new node with the given data.
o Update the new node's previous pointer to point to the current tail, and the
current tail's next pointer (if the deque is not empty) to point to the new node.
o Update the tail pointer to point to the newly created node.

3. Front Deletion (PopFront):


o Store the data pointed to by the head pointer.
o Update the head pointer to point to the next node (or null if the deque
becomes empty).
o Return the stored data.

4. Rear Deletion (PopBack):


o Store the data pointed to by the tail pointer.
o Update the tail pointer to point to the previous node (or null if the deque
becomes empty).
o Return the stored data.
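A minimal C sketch of the front operations above, on a doubly linked node; pushFront and popFront are illustrative names, and the rear operations are symmetric.

#include <stdio.h>
#include <stdlib.h>

struct dnode
{
    int data;
    struct dnode *prev, *next;
};

struct dnode *head = NULL, *tail = NULL;

/* Insert at the front of the deque. */
void pushFront(int value)
{
    struct dnode *n = malloc(sizeof(struct dnode));
    n->data = value;
    n->prev = NULL;
    n->next = head;
    if (head != NULL)
        head->prev = n;
    else
        tail = n;            /* deque was empty: new node is also the tail */
    head = n;
}

/* Delete from the front of the deque and return the stored data. */
int popFront(void)
{
    if (head == NULL)
    {
        printf("Deque is empty\n");
        return -1;
    }
    struct dnode *temp = head;
    int value = temp->data;
    head = head->next;
    if (head != NULL)
        head->prev = NULL;
    else
        tail = NULL;         /* deque became empty */
    free(temp);
    return value;
}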

Example:
Imagine a deque representing a waiting list for a restaurant. You can use it for the
following operations:

• Add new customers to the front of the line (PushFront): This prioritizes them
for immediate service.
• Add new customers to the back of the line (PushBack): This adds them to the
queue for later seating.
• Serve the customer at the front (PopFront): This removes the first customer
from the list and returns their information.
• Remove a customer who changed their mind (PopFront or PopBack): This
allows flexible manipulation of the waiting list.

Here's how these operations would look in action:


• PushFront("John"): Creates a new node with "John" as the data and inserts it
at the head, making John the first in line.
• PushBack("Alice"): Creates a node with "Alice" and inserts it at the tail, adding
her to the back of the line.
• PopFront(): Retrieves "John" from the head and removes him from the list.
• PopBack(): Retrieves "Alice" from the tail and removes her from the list.

This demonstrates the versatility of deques in managing data from both ends
efficiently. They can be used for various scenarios beyond waitlists, including:
• Managing browser history (forward and back navigation)
• Undo/redo functionality in editors
• Implementing backtracking algorithms
• Balancing parentheses in expressions
Q13) What is priority queue in data structure? Explain need, working and
advantages/ disadvantages with example? Write applications of priority queue.

Ans. What is a priority queue?


A priority queue is an abstract data type that behaves similarly to the normal queue
except that each element has some priority, i.e., the element with the highest priority
would come first in a priority queue. The priority of the elements in a priority queue
will determine the order in which elements are removed from the priority queue.

The priority queue supports only comparable elements, which means that the
elements are either arranged in an ascending or descending order.

For example, suppose the values 1, 3, 4, 8, 14, 22 are inserted in a priority
queue, with the ordering imposed on the values from least to greatest. Then
the number 1 would have the highest priority while 22 would have the lowest
priority.

Characteristics of a Priority queue


A priority queue is an extension of a queue that contains the following characteristics:

o Every element in a priority queue has some priority associated with it.
o An element with a higher priority will be deleted before an element with a lower
priority.
o If two elements in a priority queue have the same priority, they will be arranged using
the FIFO principle.

Let's understand the priority queue through an example.

We have a priority queue that contains the following values:

1, 3, 4, 8, 14, 22

All the values are arranged in ascending order. Now, we will observe how the priority
queue will look after performing the following operations:

o poll(): This function removes the highest priority element from the priority
queue. In the above priority queue, the element '1' has the highest priority, so
it will be removed from the priority queue.
o add(2): This function inserts the element '2' in the priority queue. As 2 is the
smallest element among all the remaining numbers, it obtains the highest priority.
o poll(): It removes the element '2' from the priority queue as it now has the
highest priority.
o add(5): It inserts the element 5 after 4, as 5 is larger than 4 and smaller than 8, so
it obtains the third highest priority in the priority queue.

Representation of priority queue


Now, we will see how to represent the priority queue through a one-way list.

We will create the priority queue by using the list given below in which INFO list
contains the data elements, PRN list contains the priority numbers of each data
element available in the INFO list, and LINK basically contains the address of the next
node.

Let's create the priority queue step by step.

In the case of priority queue, lower priority number is considered the higher
priority, i.e., lower priority number = higher priority.

Step 1: In the list, lower priority number is 1, whose data value is 333, so it will be
inserted in the list as shown in the below diagram:

Step 2: After inserting 333, priority number 2 is having a higher priority, and data
values associated with this priority are 222 and 111. So, this data will be inserted based
on the FIFO principle; therefore 222 will be added first and then 111.
Step 3: After inserting the elements of priority 2, the next higher priority number is 4
and data elements associated with 4 priority numbers are 444, 555, 777. In this case,
elements would be inserted based on the FIFO principle; therefore, 444 will be added
first, then 555, and then 777.

Step 4: After inserting the elements of priority 4, the next higher priority number is 5,
and the value associated with priority 5 is 666, so it will be inserted at the end of the
queue.
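A minimal C sketch of the node used in this one-way list representation, with the INFO, PRN, and LINK fields described above (the struct name is illustrative):

struct pqnode
{
    int info;               /* INFO: the data element                           */
    int prn;                /* PRN: priority number (lower = higher priority)   */
    struct pqnode *link;    /* LINK: address of the next node                   */
};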

Need for Priority Queues:


• Efficiently managing urgent tasks: Imagine a hospital emergency queue.
Patients are prioritized based on their severity, ensuring critical cases are
treated first. Priority queues offer a similar mechanism for handling tasks with
varying importance.
• Optimizing resource allocation: Imagine a network router handling data
packets. Prioritizing time-sensitive packets like voice calls over file transfers
ensures smooth communication.
• Implementing search algorithms: Priority queues can be used in algorithms
like Dijkstra's shortest path algorithm to prioritize exploring promising paths
first, leading to faster and more efficient solutions.
Working of Priority Queues:
• Data Representation: Elements can be represented as nodes containing both
data and a priority value.
• Ordering Mechanism: Different implementations exist, like binary heaps or
sorted lists, but all ensure elements with higher priorities are positioned closer
to the front for quicker access.
• Operations:
o Enqueue: Adds an element with its priority, maintaining the order
based on priority values.
o Dequeue: Removes the element with the highest priority, returning its
data.
o Peek: Retrieves the element with the highest priority without removing
it.

Advantages:
• Efficient handling of urgent tasks: Prioritizes important elements for faster
processing.
• Dynamic data handling: Can accommodate elements with varying priorities
seamlessly.
• Versatility: Applicable in diverse domains like networking, scheduling, and
search algorithms.

Disadvantages:
• Implementation complexity: Maintaining order based on priorities can be more
complex than simple FIFO queues.
• Performance overhead: Comparing and adjusting priorities for each operation
might impact performance compared to simpler data structures.
• Potential for starvation: Low-priority elements might wait indefinitely if high-
priority tasks constantly arrive.
Applications of Priority Queues:
• Hospital emergency queue management: Prioritizing critical patients based on
their medical condition.
• Network traffic management: Prioritizing time-sensitive packets for smooth
communication.
• Operating system scheduling: Prioritizing high-priority processes for efficient
resource allocation.
• Search algorithms: Efficiently exploring promising paths in algorithms like
Dijkstra's shortest path.
• Event management: Processing urgent events first in systems handling real-
time data streams.

Example:
Imagine a download manager downloading multiple files simultaneously. You can
prioritize downloading a critical work document before less urgent movies. The
priority queue prioritizes the work document, ensuring it finishes first despite being
added later than the movies.

Understanding the need, working principles, and applications of priority


queues equips you with a valuable tool for handling tasks with varying urgency in
various data structure and algorithm applications.
Q14) Explain Priority Queue using array representation.
Ans. A Priority Queue is an extension of the Queue data structure where each element
has a particular priority associated with it. Elements are deleted from the queue
based on their priority value.
Operations on Priority Queue:
1. enqueue(): This function is used to insert new data into the queue.
2. dequeue(): This function removes the element with the highest priority
from the queue.
3. peek()/top(): This function is used to get the highest priority element in
the queue without removing it from the queue.
Approach:
The idea is to create a structure to store the value and priority of the element and
then create an array of that structure to store elements. Below are the functionalities
that are to be implemented:
• enqueue(): It is used to insert the element at the end of the queue.
• peek(): Traverse the priority queue and find the element with the
highest priority and return its index. In the case of multiple elements with the
same priority, the element with the highest value is treated as having the highest
priority.
• dequeue(): Find the index with the highest priority using the peek() function;
call that position ind. Then shift all the elements after position ind one
position to the left and decrease the size by one.
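A minimal C sketch of this array-based approach; the struct item, the pq array, and the function names are illustrative. Only enqueue() and peek() are shown, following the convention above that a larger priority number means higher priority; dequeue() would shift elements left starting from the index returned by peek().

#include <stdio.h>

#define MAX 100

struct item
{
    int value;
    int priority;
};

struct item pq[MAX];
int size = 0;

/* enqueue(): insert the element at the end of the array. */
void enqueue(int value, int priority)
{
    pq[size].value = value;
    pq[size].priority = priority;
    size++;
}

/* peek(): return the index of the highest-priority element
   (ties broken by the larger value). */
int peek(void)
{
    int ind = 0;
    for (int i = 1; i < size; i++)
    {
        if (pq[i].priority > pq[ind].priority ||
            (pq[i].priority == pq[ind].priority && pq[i].value > pq[ind].value))
            ind = i;
    }
    return ind;
}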

Limitations:
• Performance overhead: The dequeue and peek operations must scan the whole
array to find the highest-priority element, and dequeue additionally shifts
elements, which can be slower than using heaps or sorted lists.
• Fixed size: The array has a predefined size, limiting the maximum number of
elements the queue can hold. Resizing the array can be expensive and
inefficient.
• Wastage of space: Empty slots within the array represent wasted memory.

Advantages:
• Simple implementation: Array-based priority queues are easier to understand
and implement compared to other methods like heaps.
• Random access: You can access any element in the array directly using its
index, which can be useful in some scenarios.
Applications:
• Situations where a simple and efficient implementation is needed, and
performance overhead is not a major concern.
• Applications with a limited number of elements where the fixed size limitation
is not an issue.

Overall, while array representation can be used for priority queues, it's generally not
the preferred method due to its performance limitations and fixed size constraints.
Heaps and sorted linked lists offer better performance and flexibility for most
practical applications.
Q15) Explain representation of priority queue using Linked List data structure.

Ans. A priority queue is a type of queue in which each element in a queue is associated
with some priority, and they are served based on their priorities. If the elements have
the same priority, they are served based on their order in a queue.

Mainly, the value of the element can be considered for assigning the priority. For
example, the highest value element can be used as the highest priority element. We
can also assume the lowest value element to be the highest priority element. In other
cases, we can also set the priority based on our needs.

The following are the functions used to implement priority queue using linked
list:

o push(): It is used to insert a new element into the Queue.


o pop(): It removes the highest priority element from the Queue.
o peep(): This function is used to retrieve the highest priority element from the queue
without removing it from the queue.

The linked list of priority queue is created in such a way that the highest priority
element is always added at the head of the queue. The elements are arranged in a
descending order based on their priority so that it takes O(1) time in deletion. In case
of insertion, we need to traverse the whole list in order to find out the suitable position
based on their priority; so, this process takes O(N) time.

Let's understand through an example.

Consider the below-linked list that consists of elements 2, 7, 13, 15.

Suppose we want to add the node that contains the value 1. Since the value 1 has
more priority than the other nodes so we will insert the node at the beginning of the
list shown as below:
Now we have to add element 7 to the linked list. We will traverse the list to insert
element 7. First, we compare element 7 with 1; since 7 has a lower priority than 1,
it will not be inserted before 1. Element 7 is then compared with the next node, i.e. 2;
since element 7 has a lower priority than 2, it will not be inserted before 2. Now,
element 7 is compared with the next element, i.e. 7; since both elements have the same
priority, they will be served on a first come, first served basis. The new element 7
will therefore be added after the existing element 7, as shown below:
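A minimal C sketch of the push() operation described above, assuming lower value = higher priority, so the list is kept in ascending order of value; the names are illustrative.

#include <stdlib.h>

struct pnode
{
    int data;
    struct pnode *next;
};

/* Insert while keeping the list sorted so that the highest-priority
   (smallest) element is always at the head; O(N) traversal, O(1) pop. */
struct pnode *push(struct pnode *head, int data)
{
    struct pnode *newNode = malloc(sizeof(struct pnode));
    newNode->data = data;
    newNode->next = NULL;

    if (head == NULL || data < head->data)
    {
        newNode->next = head;        /* new highest-priority element */
        return newNode;
    }

    struct pnode *cur = head;
    while (cur->next != NULL && cur->next->data <= data)
        cur = cur->next;             /* equal priorities: insert after (FIFO) */

    newNode->next = cur->next;
    cur->next = newNode;
    return head;
}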

Q16) What are binary trees? Explain representation of binary trees in memory.

Ans. Binary Tree


A binary tree is a tree in which a node can have a maximum of two children. The name
'binary' itself suggests 'two'; therefore, each node can have either 0, 1 or 2 children.

Let's understand the binary tree through an example.

The above tree is a binary tree because each node contains at most two children.
The logical representation of the above tree is given below:
In the above tree, node 1 contains two pointers, i.e., left and a right pointer pointing to the
left and right node respectively. The node 2 contains both the nodes (left and right node);
therefore, it has two pointers (left and right). The nodes 3, 5 and 6 are the leaf nodes, so all
these nodes contain NULL pointer on both left and right parts.

Types of Binary Tree


There are five types of Binary tree:

o Full/ proper/ strict Binary tree


o Complete Binary tree
o Perfect Binary tree
o Degenerate Binary tree
o Balanced Binary tree

1. Full/ proper/ strict Binary tree

o The full binary tree is also known as a strict binary tree. The tree can only be
considered as the full binary tree if each node must contain either 0 or 2
children. The full binary tree can also be defined as the tree in which each node
must contain 2 children except the leaf nodes.

o Let's look at the simple example of the Full Binary tree.


In the above tree, we can observe that each node is either containing zero or two children;
therefore, it is a Full Binary tree.

Complete Binary Tree

The complete binary tree is a tree in which all the nodes are completely filled except
the last level. In the last level, all the nodes must be as left as possible. In a complete
binary tree, the nodes should be added from the left.

Let's create a complete binary tree.

The above tree is a complete binary tree because all the nodes are completely filled,
and all the nodes in the last level are added at the left first.
Perfect Binary Tree

A tree is a perfect binary tree if all the internal nodes have 2 children, and all the leaf
nodes are at the same level.

Degenerate Binary Tree


The degenerate binary tree is a tree in which all the internal nodes have only one
child.

Let's understand the Degenerate binary tree through examples.


The above tree is a degenerate binary tree because all the nodes have only one child.
It is also known as a right-skewed tree as all the nodes have a right child only.

The above tree is also a degenerate binary tree because all the nodes have only one
child. It is also known as a left-skewed tree as all the nodes have a left child only.

Balanced Binary Tree

The balanced binary tree is a tree in which the heights of the left and right subtrees of
every node differ by at most 1.

Let's understand the balanced binary tree through examples.


The above tree is a balanced binary tree because the difference between the left
subtree and right subtree is zero.

Binary Tree Implementation


A Binary tree is implemented with the help of pointers. The first node in the tree is
represented by the root pointer. Each node in the tree consists of three parts, i.e., data,
left pointer and right pointer. To create a binary tree, we first need to create the node.
We will create a user-defined node structure as shown below:

struct node
{
    int data;
    struct node *left, *right;
};

In the above structure, data is the value, left pointer contains the address of the left
node, and right pointer contains the address of the right node.
Q17) What are binary search tree? Explain basic operation with suitable examples.

Ans. What is a tree?


A tree is a kind of data structure that is used to represent the data in hierarchical form.
It can be defined as a collection of objects or entities called as nodes that are linked
together to simulate a hierarchy. Tree is a non-linear data structure as the data in a
tree is not stored linearly or sequentially.

Now, let's start the topic, the Binary Search tree.

What is a Binary Search tree?


A binary search tree follows some order to arrange the elements. In a Binary search
tree, the value of left node must be smaller than the parent node, and the value of
right node must be greater than the parent node. This rule is applied recursively to the
left and right subtrees of the root.

Let's understand the concept of Binary search tree with an example.

In the above figure, we can observe that the root node is 40, and all the nodes of the
left subtree are smaller than the root node, and all the nodes of the right subtree are
greater than the root node.

Similarly, we can see the left child of root node is greater than its left child and smaller
than its right child. So, it also satisfies the property of binary search tree. Therefore, we
can say that the tree in the above image is a binary search tree.
Advantages of Binary search tree
o Searching an element in the Binary search tree is easy as we always have a hint that
which subtree has the desired element.
o As compared to array and linked lists, insertion and deletion operations are faster in
BST.

Example of creating a binary search tree


Now, let's see the creation of binary search tree using an example.

Suppose the data elements are - 45, 15, 79, 90, 10, 55, 12, 20, 50

o First, we have to insert 45 into the tree as the root of the tree.
o Then, read the next element; if it is smaller than the root node, insert it as the root of
the left subtree, and move to the next element.
o Otherwise, if the element is larger than the root node, then insert it as the root of the
right subtree.
o Now, let's see the process of creating the Binary search tree using the given
data element. The process of creating the BST is shown below -
o Step 1 - Insert 45.

o Step 2 - Insert 15.
o As 15 is smaller than 45, so insert it as the root node of the left subtree.

o Step 3 - Insert 79.
o As 79 is greater than 45, so insert it as the root node of the right subtree.

o Step 4 - Insert 90.
o 90 is greater than 45 and 79, so it will be inserted as the right subtree of 79.

o Step 5 - Insert 10.


o 10 is smaller than 45 and 15, so it will be inserted as a left subtree of 15.

o Step 6 - Insert 55.
o 55 is larger than 45 and smaller than 79, so it will be inserted as the left subtree
of 79.

Step 7 - Insert 12.

12 is smaller than 45 and 15 but greater than 10, so it will be inserted as the right
subtree of 10.
Step 8 - Insert 20.

20 is smaller than 45 but greater than 15, so it will be inserted as the right subtree of
15.

o Step 9 - Insert 50.
o 50 is greater than 45 but smaller than 79 and 55. So, it will be inserted as a left
subtree of 55.

o Now, the creation of binary search tree is completed. After that, let's move
towards the operations that can be performed on Binary search tree.
o We can perform insert, delete and search operations on the binary search tree.
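A minimal recursive C sketch of the insert operation used while building the tree above; createNode and insert are illustrative names, and duplicate keys are simply ignored. Calling insert repeatedly with 45, 15, 79, 90, 10, 55, 12, 20, 50 reproduces the tree constructed in the steps above.

#include <stdlib.h>

struct node
{
    int data;
    struct node *left, *right;
};

struct node *createNode(int data)
{
    struct node *n = malloc(sizeof(struct node));
    n->data = data;
    n->left = n->right = NULL;
    return n;
}

/* Insert a value, keeping smaller keys in the left subtree
   and larger keys in the right subtree. */
struct node *insert(struct node *root, int data)
{
    if (root == NULL)
        return createNode(data);
    if (data < root->data)
        root->left = insert(root->left, data);
    else if (data > root->data)
        root->right = insert(root->right, data);
    return root;                 /* duplicates are ignored */
}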
Q18) Explain operations on binary search trees.

Ans. Binary trees offer various operations for manipulating and accessing data
efficiently. Here are some common operations:

1. Traversal:
Traversing a binary tree involves visiting each node in a specific order. Different
traversal techniques offer different perspectives on the tree structure:
• Pre-order: Visit the current node, then its left child, then its right child (e.g.,
visit root, left subtree, right subtree).
• In-order: Visit the left child, then the current node, then the right child (e.g.,
visit left subtree, root, right subtree). This orders elements in
ascending/descending order for Binary Search Trees (BSTs).
• Post-order: Visit the left child, then the right child, then the current node (e.g.,
visit left subtree, right subtree, root).

2. Searching:
Searching for a specific element in a binary tree is efficient, especially in BSTs. The
comparison of the element's value with the current node's value guides the search
down the left or right subtree until the element is found or the search reaches an
empty leaf.
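A minimal recursive search sketch in C, assuming the BST node structure from Q16/Q17; it returns the matching node or NULL if the key is absent.

#include <stddef.h>

struct node
{
    int data;
    struct node *left, *right;
};

/* Compare the key with the current node and descend left or right. */
struct node *search(struct node *root, int key)
{
    if (root == NULL || root->data == key)
        return root;               /* found, or reached an empty subtree */
    if (key < root->data)
        return search(root->left, key);
    return search(root->right, key);
}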

3. Insertion:
Inserting a new element into a binary tree involves finding the appropriate position
based on its value (less than the current node goes left, greater goes right). This
preserves the ordering property in BSTs.

4. Deletion:
Deleting an element from a binary tree requires finding the element and then
adjusting the tree structure to maintain its properties. Different cases arise
depending on the node's degree (number of children) and the structure of the
subtree.

5. Balancing:
Certain types of binary trees, like AVL trees and Red-Black trees, maintain a balance
factor to ensure efficient search and insertion operations. Balancing operations
involve rotations of nodes to adjust the height of subtrees and restore balance.
6. Additional operations:
Other operations include finding the minimum or maximum element, counting the
number of nodes, calculating the tree's height, and checking for specific properties
like completeness or balance.

These are just some of the common operations on binary trees. The specific
implementation and complexity of these operations vary depending on the chosen
representation (array vs. linked list) and the type of binary tree.

Q19) Explain Traversal of binary tree.

Ans. What is Traversing?


Traversing is the process by which we can visit and access each and every element present in
any data structure like an array, linked list, or tree. It is an operation which can be
implemented on any data structure. Traversal is the most basic of the operations that can be
performed on any data structure. In this article, we will learn about the traversal of binary
tree.

What is Traversal of a Binary Tree?


In computer science, traversal of binary tree (also known as tree search) refers to the process
of visiting (checking or updating) each node in a tree data structure, exactly once. Such
traversals are classified by the order in which the nodes are visited. There are different ways (
and order ) of visiting a tree data structure.

There are 3 tree traversals that are mostly based on the DFS. They are given below:

• Preorder Traversal
• Inorder Traversal
• Postorder Traversal
1. Inorder Tree Traversal

The inorder tree traversal of a binary tree is also known as LNR traversal because, in this
traversal, we first visit the left node (abbreviation L), followed by the root
node (abbreviation N), and finally the right node (abbreviation R) of the tree.

In the inorder traversal, we first start from the root node of the tree and go deeper and deeper
into the left subtree in a recursive manner.

When we reach the left-most node of the tree with the above steps, then we visit that current
node and go to the left-most node of its right subtree (if exists).

Simply put:

• Go to the left subtree


• Visit the Current Node
• Go to the right subtree
2. Preorder Tree Traversal

It is performed in a similar way as the inorder traversal is performed, but here the order of
visiting the nodes is different than that of the inorder traversal.

In the case of the preorder traversal of binary tree, we visit the current node first. After that,
we visit the leftmost subtree. Once we reach the leaf node(or have covered all the nodes of
the left subtree), we move towards the right sub-tree. In the right subtree, we recursively call
our function to do the traversal in a similar manner.

It follows the NLR structure. It means first visit the current node, followed by recursively
visiting the left subtree and then the right subtree.

The algorithm for the preorder traversal of binary tree can be stated as:

• Visit the current node


• Recursively traverse the current node's left subtree.
• Recursively traverse the current node's right subtree.
3. Postorder Traversal

After having discussed a lot on inorder and preorder traversal of binary tree, you might have
actually understood how they apply dfs for their traversal. Similar to them is the Postorder
traversal of binary tree, where, we basically visit the left subtree and the right subtree
before visiting the current node in recursion.

Postorder follows the L->R->N order of traversing the binary tree.

In a nutshell, postorder traversal of binary tree follows the following order of visiting the
nodes in a tree:

• Recursively traverse the current node's left subtree.


• Recursively traverse the current node's right subtree.
• Visit the current node.
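Minimal recursive C sketches of the three traversals, assuming the binary tree node from Q16; each prints the visited node's data.

#include <stdio.h>

struct node
{
    int data;
    struct node *left, *right;
};

/* Inorder (LNR): left subtree, current node, right subtree. */
void inorder(struct node *root)
{
    if (root == NULL) return;
    inorder(root->left);
    printf("%d ", root->data);
    inorder(root->right);
}

/* Preorder (NLR): current node, left subtree, right subtree. */
void preorder(struct node *root)
{
    if (root == NULL) return;
    printf("%d ", root->data);
    preorder(root->left);
    preorder(root->right);
}

/* Postorder (LRN): left subtree, right subtree, current node. */
void postorder(struct node *root)
{
    if (root == NULL) return;
    postorder(root->left);
    postorder(root->right);
    printf("%d ", root->data);
}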
Q20) Explain array representation of binary tree.
Ans. Array Representation :

A binary tree of depth n can be represented using an array of size 2^(n+1) - 1. If the parent
element is at index p, then the left child will be stored at index (2p)+1, and the right child
will be stored at index (2p)+2.

The array representation of the above binary tree is :


As in the above binary tree, A is the root node, so it is stored at index 0. The left child of A is
stored at index 2(0)+1 = 1, so B is stored at index 1. Similarly, the right child of A is stored at
index 2(0)+2 = 2. For every node, the left and right children are stored accordingly.
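A minimal C sketch of these index calculations; the small array with nodes A, B, C is only illustrative.

#include <stdio.h>

/* For a node stored at index p in the array representation: */
int leftChild(int p)  { return 2 * p + 1; }
int rightChild(int p) { return 2 * p + 2; }
int parent(int p)     { return (p - 1) / 2; }

int main(void)
{
    char tree[] = { 'A', 'B', 'C' };   /* A at index 0, children B and C */
    printf("Left child of A: %c\n",  tree[leftChild(0)]);   /* B */
    printf("Right child of A: %c\n", tree[rightChild(0)]);  /* C */
    return 0;
}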

Q21) Explain Linked List representation of binary tree.


Ans. For the linked list representation, we use a node with two pointers (similar to a doubly
linked list node) so that we can point to the left and right children of a binary tree node. NULL is
stored in a pointer when no child is connected.
The Linked list representation of the above binary tree is:

Q22) Application of binary search trees. Explain one of them.

Ans. Binary Search Trees (BSTs) are powerful data structures used in various
applications due to their efficient search and retrieval capabilities. Here are some of
their key applications:

1. Data Organization and Retrieval:


• Dictionaries: BSTs are a common choice for implementing dictionaries,
allowing efficient lookup of key-value pairs based on the keys' sorted order.
• Database Indexing: Databases use BSTs to index data based on specific
fields, enabling fast retrieval of records based on indexed values.
• Autocompletion: Applications like search engines and text editors can utilize
BSTs for autocompletion suggestions based on the user's typed characters.

2. Sorting Algorithms:
• Tree Sort: Inserting all the elements into a BST and then performing an in-order
traversal yields them in sorted order.
• Relation to Quick Sort: The shape of a BST mirrors the recursion of quick sort,
with each node acting as a pivot that separates smaller elements (left subtree)
from larger ones (right subtree).
3. Set Operations:
• Union: Finding the union of two sets represented by BSTs can be achieved
efficiently by traversing and merging the trees while maintaining order.
• Intersection: Finding the intersection of two sets represented by BSTs can be
done by comparing elements in both trees and adding matches to a new BST.

4. In-Memory Caching:
• Applications can use BSTs to cache frequently accessed data in memory for
faster retrieval, improving performance and reducing database load.

5. Network Routing:
• Some routing algorithms utilize BSTs to efficiently map network addresses to
their destinations, enabling faster data routing.

Let's explore one specific application in detail:

Dictionaries:
Imagine a dictionary containing words and their definitions. A BST can be used to
store this information, where each node holds a word and its definition, and the
nodes are ordered alphabetically. To find the definition of a word, you can start at the
root of the tree and compare the word you're searching for with the word at the
current node.

• If the search word is alphabetically less than the current node's word, you
move to the left child subtree.
• If the search word is alphabetically greater, you move to the right child
subtree.
• If the search word matches the current node's word, you've found the
definition.
This process continues until you reach a leaf node or find the desired word. This
efficient searching is a key benefit of BSTs in dictionary applications.
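A minimal C++ sketch of this lookup, assuming each node stores a word together with its definition (the names are illustrative):

#include <string>

struct DictNode {
    std::string word;        // key, kept in alphabetical order across the tree
    std::string definition;  // value associated with the key
    DictNode *left;
    DictNode *right;
};

// Returns the node holding 'key', or nullptr if the word is not in the dictionary.
DictNode* lookup(DictNode *root, const std::string &key) {
    while (root != nullptr) {
        if (key < root->word)      root = root->left;   // search word is alphabetically smaller: go left
        else if (key > root->word) root = root->right;  // search word is alphabetically larger: go right
        else                       return root;         // exact match found
    }
    return nullptr;                                     // ran off the tree without a match
}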
BSTs offer a versatile and efficient solution for various data organization and
retrieval tasks. Understanding their applications and how they work can be valuable
for programmers and data structure enthusiasts alike.
Q22) What is Threaded binary tree? Write advantages and disadvantages of it. Explain it with
suitable example. Its application.

Ans. In the linked representation of binary trees, more than one half of the link fields contain
NULL values which results in wastage of storage space. If a binary tree consists of n nodes
then n+1 link fields contain NULL values. So in order to effectively manage the space, a
method was devised by Perlis and Thornton in which the NULL links are replaced with special
links known as threads. Such binary trees with threads are known as threaded binary trees.
Each node in a threaded binary tree either contains a link to its child node or thread to other
nodes in the tree.

Types of Threaded Binary Tree

There are two types of threaded Binary Tree:

o One-way threaded Binary Tree


o Two-way threaded Binary Tree
1. One-way threaded Binary trees:

In one-way threaded binary trees, a thread appears in either the right or the left link field of a
node. If it appears in the right link field, it points to the node's inorder successor, i.e. the next
node visited in an inorder traversal. Such trees are called right threaded binary trees. If the
thread appears in the left link field of a node, it points to the node's inorder predecessor. Such
trees are called left threaded binary trees. Left threaded binary trees are used less often as they
do not yield the same advantages as right threaded binary trees. In one-way threaded binary
trees, the right link field of the last node and the left link field of the first node contain NULL.
In order to distinguish threads from normal links, threads are represented by dotted lines.
Two-way threaded Binary Trees:

In two-way threaded binary trees, a right link field containing NULL is replaced by a thread
that points to the node's inorder successor, and a left link field containing NULL is replaced
by a thread that points to the node's inorder predecessor.
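A minimal sketch of a two-way threaded node in C++, assuming boolean flags mark whether each link is a real child pointer or a thread, and that the right link of the last inorder node is NULL (the names are illustrative). It also shows how an inorder traversal can follow the threads without a stack:

#include <iostream>

struct ThreadedNode {
    int data;
    ThreadedNode *left;    // left child, or a thread to the inorder predecessor
    ThreadedNode *right;   // right child, or a thread to the inorder successor
    bool leftIsThread;     // true when 'left' is a thread rather than a child link
    bool rightIsThread;    // true when 'right' is a thread rather than a child link
};

// Leftmost node reachable from n by real child links (the start of an inorder walk).
ThreadedNode* leftmost(ThreadedNode *n) {
    while (n != nullptr && !n->leftIsThread && n->left != nullptr)
        n = n->left;
    return n;
}

// Inorder traversal that follows the threads instead of using a stack.
void inorder(ThreadedNode *root) {
    for (ThreadedNode *cur = leftmost(root); cur != nullptr; ) {
        std::cout << cur->data << ' ';      // visit the current node
        if (cur->rightIsThread)
            cur = cur->right;               // the thread jumps straight to the inorder successor
        else
            cur = leftmost(cur->right);     // otherwise descend into the right subtree
    }
}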

Advantages of Threaded Binary Tree:

o A threaded binary tree allows linear and fast traversal of the nodes, so no stack is
required. Using a stack would consume extra memory and time.
o It is more general, as one can efficiently determine the successor and
predecessor of any node by simply following the threads and links. It almost
behaves like a circular linked list.

Disadvantages of Threaded Binary Tree:

o When implemented, a threaded binary tree needs to maintain extra information
for each node to indicate whether each link field points to an ordinary child node
or is a thread to the node's inorder successor or predecessor.
o Insertion into and deletion from a threaded binary tree are more time
consuming, since both the threads and the ordinary links need to be maintained.
Example:

Imagine a binary tree representing a family tree with a name stored in each node. A right
threaded version of a small tree might look like this:

        John
       /    \
    Mary    Peter
      \.....(thread to John)

The inorder traversal of this tree visits Mary, John, Peter. Mary has no right child, so her NULL
right link is replaced by a thread (drawn as a dotted line) pointing to John, her inorder
successor. Peter is the last node in inorder, so his right link remains NULL. After visiting Mary,
the traversal simply follows her thread to reach John, with no stack required.

Applications:

Threaded trees are primarily used in situations where:

Memory conservation is crucial: threading can save memory on resource-constrained systems.

In-order traversal is dominant: if the main operation is traversing the tree in order, threaded
trees can be faster and simpler.

Simplicity is preferred: for basic data structures where functionality is limited but ease of
implementation is important, threading can be a good choice.

However, threaded trees are not generally recommended for general-purpose data
structures due to their limitations in flexibility and error-proneness. Traditional binary trees
with explicit pointers offer more versatility and are better suited for diverse applications.
Q23) What is Heap Sort Method? Explain binary tree sorting using heap sort.

Ans. What is a heap?


A heap is a complete binary tree. A binary tree is a tree in which each node can have at
most two children. A complete binary tree is a binary tree in which all the levels, except
possibly the last, are completely filled, and the nodes of the last level are left-justified.

What is heap sort?


Heapsort is a popular and efficient sorting algorithm. The concept of heap sort is to
eliminate the elements one by one from the heap part of the list, and then insert them
into the sorted part of the list.

Heapsort is an in-place sorting algorithm.

Now, let's see the algorithm of heap sort.

Binary Tree Sorting with Heap Sort


Heap Sort is a powerful sorting algorithm that utilizes a special binary tree called a
heap to efficiently organize and sort elements. While heaps can be implemented in
various ways, understanding how they work in the context of binary tree sorting
provides valuable insight into this efficient sorting technique.
Key Concepts:
• Heap: A complete binary tree that satisfies the heap property. In a max-heap,
each node's value is greater than or equal to its children's values (largest
element at the root); in a min-heap, each node's value is less than or equal to
its children's values (smallest element at the root). We'll focus on max-heaps
for sorting.
• Heapify: An operation that rearranges a sub-tree that violates the heap
property so that the property is restored.
Steps of Heap Sort using Binary Trees:
1. Build Heap: Convert the input array into a max-heap. This can be done by
iteratively applying heapify operations to the sub-trees starting from the
bottom level and moving upwards.
2. Extract and Reheapify: Extract the largest element (the root) from the heap; it
takes its final position at the end of the sorted part of the array.
3. Move the last element of the heap to the root, reduce the heap size by one, and
reheapify from the root to restore the heap property.
4. Repeat steps 2 and 3 until only one element remains in the heap; the array is
then fully sorted (see the sketch below).
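A compact C++ sketch of these steps, assuming an integer array sorted in ascending order using a max-heap (the function names are illustrative):

#include <algorithm>  // std::swap
#include <iostream>

// Restore the max-heap property for the subtree rooted at index i,
// considering only the first n elements of the array.
void heapify(int a[], int n, int i) {
    int largest = i;
    int l = 2 * i + 1, r = 2 * i + 2;          // children of i in the array representation
    if (l < n && a[l] > a[largest]) largest = l;
    if (r < n && a[r] > a[largest]) largest = r;
    if (largest != i) {
        std::swap(a[i], a[largest]);
        heapify(a, n, largest);                // fix the affected subtree further down
    }
}

void heapSort(int a[], int n) {
    // Step 1: build a max-heap by heapifying from the last internal node upwards.
    for (int i = n / 2 - 1; i >= 0; --i) heapify(a, n, i);
    // Steps 2-4: repeatedly move the root (largest) to the end and reheapify the rest.
    for (int i = n - 1; i > 0; --i) {
        std::swap(a[0], a[i]);                 // the extracted maximum takes its final slot
        heapify(a, i, 0);                      // restore the heap property on the reduced heap
    }
}

int main() {
    int a[] = {4, 10, 3, 5, 1};
    heapSort(a, 5);
    for (int x : a) std::cout << x << ' ';     // prints: 1 3 4 5 10
    return 0;
}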
Benefits:
• Efficient: Heap Sort has a time complexity of O(n log n), making it comparable
in performance to other sorting algorithms like Merge Sort and Quick Sort.
• In-place: Heap Sort only requires modifying the input array itself, making it
memory-efficient compared to algorithms that require additional data
structures.
• Not stable: Heap Sort does not preserve the original relative order of elements
with equal values; when stability is required, Merge Sort is usually preferred.
Visualization:
Imagine a binary tree representing a pile of coins. The largest coin sits at the top
(root), and smaller coins are arranged in a cascading fashion, satisfying the heap
property. Heap Sort works by repeatedly removing the largest coin (root) from the
pile and maintaining the heap structure by rearranging the remaining coins.
Applications:
Heap Sort finds various applications in real-world scenarios:
• Priority Queues: the heap structure at the core of Heap Sort is the basis for
efficient priority queue implementations, where elements with higher priorities
are processed first.
• Network algorithms: routing algorithms can use such heaps to prioritize data
packets based on their importance.
• Scheduling algorithms: Resource allocation and task scheduling problems can
benefit from Heap Sort's efficient sorting capabilities.
Understanding the connection between binary trees and Heap Sort provides a
deeper appreciation for its efficiency and versatility. By leveraging the power of
heaps, Heap Sort offers a robust solution for various sorting tasks across diverse
domains.
Q24) What is graph in data structure? Explain with suitable examples. (Different
Terminologies).

Ans. Graph
A graph can be defined as a group of vertices and the edges that connect these vertices.
Unlike a tree, a graph may contain cycles, and its vertices (nodes) can maintain any complex
relationship among them instead of a strict parent-child relationship.

Definition
A Graph G(V, E) with 5 vertices (A, B, C, D, E) and six edges ((A,B), (B,C), (C,E), (E,D), (D,B),
(D,A)) is shown in the following figure.

In an undirected graph, edges are not associated with any direction. The graph shown in the
figure above is undirected, since its edges are not attached to any direction. If an edge exists
between vertices A and B, then it can be traversed from B to A as well as from A to B.

In a directed graph, each edge is an ordered pair and represents a specific path from some
vertex A to another vertex B. Node A is called the initial node, while node B is called the
terminal node.

A directed graph is shown in the following figure.


Graph Terminology
Path
A path can be defined as the sequence of nodes that are followed in order to reach
some terminal node from the initial node.

Closed Path
A path is called a closed path if the initial node is the same as the terminal node, i.e., if
V0 = VN.

Simple Path
If all the nodes of the path are distinct, the path is called a simple path. If all the nodes are
distinct except that V0 = VN, the path is called a closed simple path.

Cycle
A cycle is a closed path in which no edges or vertices are repeated, except that the first and
last vertices are the same.

Connected Graph
A connected graph is the one in which some path exists between every two vertices
(u, v) in V. There are no isolated nodes in connected graph.
Complete Graph
A complete graph is one in which every node is connected to all other nodes. A complete
graph contains n(n-1)/2 edges, where n is the number of nodes in the graph.

Weighted Graph
In a weighted graph, each edge is assigned some data such as a length or weight. The weight
of an edge e is written w(e), and it must be a positive value indicating the cost of traversing
the edge.

Digraph
A digraph is a directed graph in which each edge of the graph is associated with some
direction and the traversing can be done only in the specified direction.

Loop
An edge whose two endpoints are the same node is called a loop.

Adjacent Nodes
If two nodes u and v are connected via an edge e, then u and v are called neighbours or
adjacent nodes.

Degree of the Node
The degree of a node is the number of edges connected to that node. A node with degree 0
is called an isolated node.
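For instance, a small sketch that computes the degree of every node of the sample graph G(V, E) defined above, using its edge list (vertices A..E are mapped to indices 0..4):

#include <iostream>
#include <utility>
#include <vector>

int main() {
    const int n = 5;  // vertices 0..4 stand for A..E
    // Undirected edges of the sample graph: (A,B), (B,C), (C,E), (E,D), (D,B), (D,A)
    std::vector<std::pair<int,int>> edges = {{0,1},{1,2},{2,4},{4,3},{3,1},{3,0}};

    std::vector<int> degree(n, 0);
    for (const auto &e : edges) {   // each undirected edge adds 1 to the degree of both endpoints
        degree[e.first]++;
        degree[e.second]++;
    }
    for (int v = 0; v < n; ++v)
        std::cout << "degree(" << char('A' + v) << ") = " << degree[v] << '\n';
    return 0;
}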

Q25) Explain Linked List representation of graph.

Ans.
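A minimal sketch of the linked list (adjacency list) representation in C++, where each vertex keeps a singly linked list of the vertices adjacent to it; the sample edges are those of the graph G(V, E) above, and the names are illustrative:

#include <iostream>

// One entry in a vertex's adjacency list.
struct AdjNode {
    int vertex;       // the adjacent vertex
    AdjNode *next;    // next node in this vertex's list (NULL at the end)
};

const int V = 5;              // vertices 0..4 stand for A..E
AdjNode *adj[V] = {nullptr};  // adj[u] is the head of u's adjacency list

// Insert v at the front of u's adjacency list.
void addDirectedEdge(int u, int v) {
    adj[u] = new AdjNode{v, adj[u]};
}

// For an undirected graph, the edge is stored in both lists.
void addEdge(int u, int v) {
    addDirectedEdge(u, v);
    addDirectedEdge(v, u);
}

int main() {
    // Edges of the sample graph: (A,B), (B,C), (C,E), (E,D), (D,B), (D,A)
    addEdge(0, 1); addEdge(1, 2); addEdge(2, 4);
    addEdge(4, 3); addEdge(3, 1); addEdge(3, 0);

    for (int u = 0; u < V; ++u) {
        std::cout << char('A' + u) << " -> ";
        for (AdjNode *p = adj[u]; p != nullptr; p = p->next)
            std::cout << char('A' + p->vertex) << ' ';
        std::cout << '\n';
    }
    return 0;
}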
