
DATA STRUCTURES AND ALGORITHMS

UNIT I

INTRODUCTION TO DATA STRUCTURES

A data structure is a particular way of organising data in a computer so that it can be used
effectively. The idea is to reduce the space and time complexities of different tasks.

Need for Data Structures:

The structure of the data and the design of the algorithm are closely related to each other. Data
presentation must be easy to understand so that the developer, as well as the user, can implement
the operations efficiently.
Data structures provide an easy way of organising, retrieving, managing, and storing data.
● Data structure modification is easy.
● They require less time for common operations.
● They save storage/memory space.
● Data representation is easy.
● They give easy access to large databases.
Classification/Types of Data Structures:
1. Linear Data Structure
2. Non-Linear Data Structure.
Linear Data Structure:
● Elements are arranged in one dimension, also known as a linear sequence.
● Examples: lists, stacks, queues, etc.
Non-Linear Data Structure:
● Elements are arranged in one-to-many, many-to-one, and many-to-many relationships.
● Examples: trees, graphs, tables, etc.

Most Popular Data Structures:

1. Array:
An array is a collection of data items stored at contiguous memory locations. The idea is to
store multiple items of the same type together. This makes it easier to calculate the position
of each element by simply adding an offset to a base value, i.e., the memory location of the
first element of the array (generally denoted by the name of the array).

2. Linked Lists:
Like arrays, a linked list is a linear data structure. Unlike arrays, linked list elements are not
stored at contiguous locations; the elements are linked using pointers.

3. Stack:
A stack is a linear data structure which follows a particular order in which the operations are
performed. The order may be LIFO (Last In First Out) or FILO (First In Last Out). In a stack, all
insertions and deletions are permitted at only one end of the list.

Stack Operations:
● push(): Inserts an element onto the top of the stack.
● pop(): Removes the element at the top of the stack and returns it.
● top(): Returns the most recently inserted element, i.e. the top element, without removing it.
● size(): Returns the size of the stack, i.e. the total number of elements present in the stack.
● isEmpty(): Indicates whether the stack is empty or not.
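The following is a minimal array-based sketch of these operations in C, assuming a fixed capacity; the names Stack and CAPACITY are illustrative, not part of the notes.

#include <stdbool.h>
#include <stdio.h>

#define CAPACITY 100

typedef struct {
    int items[CAPACITY];
    int top;               /* index of the top element, -1 when empty */
} Stack;

void push(Stack *s, int value) {
    if (s->top < CAPACITY - 1)
        s->items[++s->top] = value;   /* insert at the top */
}

int pop(Stack *s) {
    return s->items[s->top--];        /* remove and return the top element */
}

int top(const Stack *s)      { return s->items[s->top]; }
int size(const Stack *s)     { return s->top + 1; }
bool isEmpty(const Stack *s) { return s->top == -1; }

int main(void) {
    Stack s = { .top = -1 };
    push(&s, 10); push(&s, 20);
    printf("%d\n", pop(&s));   /* prints 20: last in, first out */
    return 0;
}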
4. Queue:
Like Stack, Queue is a linear structure which follows a particular order in which the
operations are performed. The order is First In First Out (FIFO). In the queue, items are
inserted at one end and deleted from the other end. A good example of the queue is any queue
of consumers for a resource where the consumer that came first is served first. The difference
between stacks and queues is in removing. In a stack we remove the item the most recently
added; in a queue, we remove the item the least recently added.

Queue Operations:
● Enqueue(): Adds (stores) an element at the rear end of the queue.
● Dequeue(): Removes an element from the front of the queue.
● Peek() or front(): Returns the data element at the front node of the queue without deleting it.
● rear(): Returns the element at the rear end without removing it.
● isFull(): Checks whether the queue is full.
● isNull(): Checks whether the queue is empty.
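A minimal circular-array sketch of these operations in C; the names Queue and CAPACITY are assumptions for illustration, not part of the notes.

#include <stdbool.h>
#include <stdio.h>

#define CAPACITY 100

typedef struct {
    int items[CAPACITY];
    int front;   /* index of the front element            */
    int count;   /* number of elements currently stored   */
} Queue;

bool isNull(const Queue *q) { return q->count == 0; }
bool isFull(const Queue *q) { return q->count == CAPACITY; }

void enqueue(Queue *q, int value) {
    if (!isFull(q))
        q->items[(q->front + q->count++) % CAPACITY] = value;  /* add at the rear */
}

int dequeue(Queue *q) {
    int value = q->items[q->front];              /* remove from the front (FIFO)   */
    q->front = (q->front + 1) % CAPACITY;
    q->count--;
    return value;
}

int main(void) {
    Queue q = { .front = 0, .count = 0 };
    enqueue(&q, 1); enqueue(&q, 2);
    printf("%d\n", dequeue(&q));   /* prints 1: first in, first out */
    return 0;
}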

5. Binary Tree:
Unlike Arrays, Linked Lists, Stack and queues, which are linear data structures, trees are
hierarchical data structures. A binary tree is a tree data structure in which each node has at
most two children, which are referred to as the left child and the right child. It is implemented
mainly using Links.

A Binary Tree is represented by a pointer to the topmost node in the tree. If the tree is empty,
then the value of root is NULL. A Binary Tree node contains the following parts.
1. Data
2. Pointer to left child
3. Pointer to the right child
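As a sketch, such a node can be declared in C as follows (the struct name TreeNode is illustrative):

struct TreeNode {
    int data;                 /* 1. Data                   */
    struct TreeNode *left;    /* 2. Pointer to left child  */
    struct TreeNode *right;   /* 3. Pointer to right child */
};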

6. Binary Search Tree:


A Binary Search Tree is a Binary Tree with the following additional properties:
● The left subtree of a node contains only keys less than the node's key.
● The right subtree of a node contains only keys greater than the node's key.
● No duplicate keys are present in the tree.
A binary tree having these properties is known as a Binary Search Tree (BST).
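These properties make searching efficient. A minimal recursive search sketch in C, using the TreeNode struct from the earlier sketch (illustrative, not part of the notes):

struct TreeNode *search(struct TreeNode *root, int key) {
    if (root == NULL || root->data == key)
        return root;                      /* found the key, or reached an empty subtree */
    if (key < root->data)
        return search(root->left, key);   /* smaller keys live in the left subtree      */
    return search(root->right, key);      /* larger keys live in the right subtree      */
}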

7. Heap:
A Heap is a special tree-based data structure in which the tree is a complete binary tree.
Generally, heaps can be of two types:
● Max-Heap: In a Max-Heap, the key at the root node must be the greatest among the keys of all of its children. The same property must be recursively true for all subtrees of that binary tree.
● Min-Heap: In a Min-Heap, the key at the root node must be the smallest among the keys of all of its children. The same property must be recursively true for all subtrees of that binary tree.
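Because the tree is complete, a heap is commonly stored in an array, with parent and child positions computed from indices. A small C sketch of that index arithmetic (the helper names are illustrative):

int parent(int i)     { return (i - 1) / 2; }
int leftChild(int i)  { return 2 * i + 1; }
int rightChild(int i) { return 2 * i + 2; }

/* For a max-heap stored in heap[0..n-1], every node i must satisfy
   heap[i] >= heap[leftChild(i)] and heap[i] >= heap[rightChild(i)]
   whenever those children exist; a min-heap reverses the inequality. */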

8. Hashing Data Structure:


Hashing is an important data structure designed to use a special function, called the hash
function, to map a given value to a particular key for faster access to elements. The efficiency
of the mapping depends on the efficiency of the hash function used.
Let a hash function H(x) map the value x to the index x % 10 in an array. For example, if the
list of values is [11, 12, 13, 14, 15], they will be stored at positions {1, 2, 3, 4, 5} in the array
or hash table, respectively.
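A minimal sketch of this H(x) = x % 10 example in C; collisions are ignored for simplicity, and the names table and insert are illustrative, not part of the notes.

#include <stdio.h>

#define TABLE_SIZE 10

int table[TABLE_SIZE];

int hash(int x)    { return x % TABLE_SIZE; }
void insert(int x) { table[hash(x)] = x; }        /* store x at index x % 10 */

int main(void) {
    int values[] = {11, 12, 13, 14, 15};
    for (int i = 0; i < 5; i++)
        insert(values[i]);
    printf("13 is stored at index %d\n", hash(13));   /* index 3 */
    return 0;
}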

9. Matrix:
A matrix represents a collection of numbers arranged in rows and columns. The elements of a
matrix are conventionally enclosed in parentheses or brackets. For example, a 3 × 3 matrix has
9 elements.
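A sketch of such a 3 × 3 matrix stored as a two-dimensional array in C:

int matrix[3][3] = {
    {1, 2, 3},
    {4, 5, 6},
    {7, 8, 9}
};
/* matrix[1][2] refers to the element in row 1, column 2 (value 6). */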

10. Graph:
A graph is a data structure that consists of a collection of nodes (vertices) connected by edges.
Graphs are used to represent relationships between objects and are widely used in computer
science, mathematics, and other fields. Graphs can be used to model a wide variety of real-
world systems, such as social networks, transportation networks, and computer networks.

11. Applications of Data Structures:


Data structures are used in various fields such as:
● Operating system
● Graphics
● Computer Design
● Blockchain
● Genetics
● Image Processing
● Simulation

12. Define Algorithm

An algorithm is a set of step-by-step instructions for solving a problem or completing a task. It tells us
exactly what to do and how to get the final result. Computers use algorithms to help them make
decisions, process data, or perform actions automatically. They can be very simple, like sorting a list
of numbers, or very complex, like recommending videos on YouTube.

An algorithm needs to be clear, precise, and finish after a certain number of steps. It should
not go on forever without reaching an answer.

13. Singly Linked Lists

Singly linked lists contain two "buckets" in one node; one bucket holds the data and the other
bucket holds the address of the next node of the list. Traversals can be done in one direction
only as there is only a single link between two nodes of the same list.

14.Doubly Linked Lists

Doubly linked lists contain three "buckets" in one node; one bucket holds the data and the
other buckets hold the addresses of the previous and next nodes in the list. The list can be
traversed in both directions, as the nodes are linked to each other from both sides.

15.Circular Linked Lists

Circular linked lists can be formed from both singly linked lists and doubly linked lists.

Since the last node and the first node of a circular linked list are connected, a traversal of this
list will go on forever unless it is explicitly stopped.

What Is Time Complexity?

Time complexity is defined in terms of the number of basic operations it takes to run a given
algorithm, as a function of the length of the input. Time complexity is not a measurement of
the wall-clock time it takes to execute a particular algorithm, because factors such as the
programming language, operating system, and processing power are not considered in the analysis.

How is Time complexity computed?


To estimate the time complexity, we need to consider the cost of each fundamental
instruction and the number of times the instruction is executed.
Statements with basic operations such as comparisons, return statements, assignments, and
reading a variable can each be assumed to take constant time, O(1).

Statement 1: int a = 5;                // reading a variable
Statement 2: if (a == 5) return true;  // return statement
Statement 3: int x = 4 > 5 ? 1 : 0;    // comparison
Statement 4: bool flag = true;         // assignment

The overall time complexity is then calculated by adding up the time taken by every statement:

total time = time(statement1) + time(statement2) + ... + time(statementN)

Assuming that n is the size of the input, let's use T(n) to represent the overall time and t to
represent the amount of time that a statement or collection of statements takes to execute.

T(n) = t(statement1) + t(statement2) + ... + t(statementN);


Overall, T(n)= O(1), which means constant complexity.
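By contrast, a statement inside a loop is counted once per iteration. A minimal sketch in C (the function name sumArray is illustrative, not from the notes):

/* Counting operations for a simple loop: the body runs n times, so the
   total work grows linearly with the input size, i.e. T(n) = O(n). */
int sumArray(const int arr[], int n) {
    int sum = 0;                      /* 1 assignment: O(1)          */
    for (int i = 0; i < n; i++)       /* loop executes n times       */
        sum += arr[i];                /* constant work per iteration */
    return sum;                       /* O(1)                        */
}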

What Is Space Complexity?


When an algorithm is run on a computer, it requires a certain amount of memory space.
The space complexity of a program is the amount of memory it uses while executing.
Because a program needs memory to store input data as well as temporary values while
running, space complexity is the sum of auxiliary space and input space.

How is space complexity computed?


The space complexity of an algorithm is the total space taken by the algorithm with respect
to the input size. Space complexity includes both auxiliary space and the space used by the input.
Space complexity is a parallel concept to time complexity. If we need to create an array of
size n, this will require O(n) space. If we create a two-dimensional array of size n × n, this will
require O(n²) space.
In recursive calls, stack space also counts.

Example:
int add(int n) {
    if (n <= 0) {
        return 0;
    }
    return n + add(n - 1);
}

Here each call adds a level to the stack:
1. add(4)
2.  -> add(3)
3.   -> add(2)
4.    -> add(1)
5.     -> add(0)
Each of these calls is added to the call stack and takes up actual memory, so the function
uses O(n) space.

However, just because an algorithm makes n calls in total does not mean it takes O(n) space;
if the calls are not all on the call stack at the same time, the space used can be smaller.

Time Complexity vs. Space Complexity

● Time complexity calculates the time required; space complexity estimates the memory space required.
● For time complexity, time is counted for all statements; for space complexity, memory space is counted for all variables, inputs, and outputs.
● Time complexity is primarily determined by the size of the input data; space complexity is primarily determined by the size of the auxiliary variables.
● Time complexity is more crucial when optimizing how fast a solution runs; space complexity is more essential when optimizing how much memory it uses.
● Example: T(n) = O(1) means constant time complexity; creating an array of size n requires O(n) space.

What Are Asymptotic Notations?

Asymptotic notations are mathematical notations that allow you to analyze an algorithm's
running time by describing its behavior as the input size grows. This is also referred to as an
algorithm's growth rate. When the input size increases, does the algorithm become incredibly
slow? Is it able to maintain its fast run time as the input size grows? You can answer these
questions thanks to asymptotic notation.

As a result, you compare space and time complexity using asymptotic analysis. It compares
two algorithms based on changes in their performance as the input size is increased or
decreased.

Asymptotic notations are classified into three types:

1. Big-Oh (O) notation

2. Big Omega ( Ω ) notation

3. Big Theta ( Θ ) notation
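For a quick worked example: if an algorithm performs T(n) = 3n² + 2n + 1 operations, then
T(n) = O(n²), T(n) = Ω(n²), and T(n) = Θ(n²), because the n² term dominates the growth rate
for large n.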

Best Case, Worst Case, and Average Case in Asymptotic Analysis

Best Case: It is defined as the condition that allows an algorithm to complete statement
execution in the shortest amount of time. In this case, the execution time serves as a lower
bound on the algorithm's time complexity.

Average Case: You add the running times for each possible input combination and take the
average in the average case. Here, the execution time serves as both a lower and upper bound
on the algorithm's time complexity.

Worst Case: It is defined as the condition that causes an algorithm to take the longest possible
time to complete statement execution. In this case, the execution time serves as an upper
bound on the algorithm's time complexity.

1. REPRESENTATION OF ARRAYS

The representation of an array can be defined by its declaration. A declaration means
allocating memory for an array of a given size.

Arrays can be declared in various ways in different languages. For better illustration, below
are some language-specific array declarations.

int arr[5];    // This array will store integer type elements

char arr[10];  // This array will store char type elements

float arr[20]; // This array will store float type elements

However, the above declaration is static or compile-time memory allocation, which means
that the array elements' memory is allocated when the program is compiled.
Here only a fixed size (i.e. the size mentioned in the square brackets []) of memory will be
allocated for storage. This works only when we know the size of the array in advance, and
there are cases where we do not. If we declare a larger size and store fewer elements, memory
is wasted; if we declare a smaller size, there will not be enough memory to store the rest of
the elements. In such cases, static memory allocation is not preferred.
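In such cases the memory can instead be allocated at run time. A minimal sketch in C using malloc (the variable names are illustrative):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n;
    scanf("%d", &n);                        /* size known only at run time */

    int *arr = malloc(n * sizeof(int));     /* allocate exactly n integers */
    if (arr == NULL) return 1;              /* allocation may fail         */

    for (int i = 0; i < n; i++)
        arr[i] = i * i;

    free(arr);                              /* release the memory          */
    return 0;
}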

1.1 APPLICATIONS OF ARRAY

Applications of Array Data Structure:


Below are some applications of arrays.
● Storing and accessing data: Arrays are used to store and retrieve data in a specific order.
For example, an array can be used to store the scores of a group of students, or the
temperatures recorded by a weather station.
● Sorting: Arrays can be used to sort data in ascending or descending order. Sorting
algorithms such as bubble sort, merge sort, and quicksort rely heavily on arrays.
● Searching: Arrays can be searched for specific elements using algorithms such as linear
search and binary search.
● Matrices: Arrays are used to represent matrices in mathematical computations such as
matrix multiplication, linear algebra, and image processing.
● Stacks and queues: Arrays are used as the underlying data structure for implementing
stacks and queues, which are commonly used in algorithms and data structures.
● Graphs: Arrays can be used to represent graphs in computer science. Each element in the
array represents a node in the graph, and the relationships between the nodes are
represented by the values stored in the array.

● Dynamic programming: Dynamic programming algorithms often use arrays to store
intermediate results of subproblems in order to solve a larger problem.

Real-Time Applications of Array:


● Signal Processing: Arrays are used in signal processing to represent a set of samples that
are collected over time. This can be used in applications such as speech recognition,
image processing, and radar systems.
● Multimedia Applications: Arrays are used in multimedia applications such as video and
audio processing, where they are used to store the pixel or audio samples. For example,
an array can be used to store the RGB values of an image.
● Data Mining: Arrays are used in data mining applications to represent large datasets.
This allows for efficient data access and processing, which is important in real-time
applications.
● Robotics: Arrays are used in robotics to represent the position and orientation of objects
in 3D space. This can be used in applications such as motion planning and object
recognition.
● Real-time Monitoring and Control Systems: Arrays are used in real-time monitoring
and control systems to store sensor data and control signals. This allows for real-time
processing and decision-making, which is important in applications such as industrial
automation and aerospace systems.
● Financial Analysis: Arrays are used in financial analysis to store historical stock prices
and other financial data. This allows for efficient data access and analysis, which is
important in real-time trading systems.
● Scientific Computing: Arrays are used in scientific computing to represent numerical
data, such as measurements from experiments and simulations. This allows for efficient
data processing and visualization, which is important in real-time scientific analysis and
experimentation.

2.SPARSE MATRIX AND ITS REPRESENTATIONS


A matrix is a two-dimensional data object made of m rows and n columns, therefore having
total m x n values. If most of the elements of the matrix have 0 value, then it is called a
sparse matrix.
Why use a sparse matrix instead of a simple matrix?
● Storage: There are fewer non-zero elements than zeros, so less memory is needed when
only those elements are stored.
● Computing time: Computing time can be saved by logically designing a data structure
that traverses only the non-zero elements.
Example:
0 0 3 0 4
0 0 5 7 0
0 0 0 0 0
0 2 6 0 0
Representing a sparse matrix by a 2D array wastes a lot of memory, as the zeros in the matrix
are of no use in most cases. So, instead of storing zeros along with the non-zero elements, we
store only the non-zero elements, as triples (Row, Column, Value).
Sparse matrix representation can be done in many ways; the following are two common
representations:
1. Array representation

2. Linked list representation

Method 1: Using Arrays:


A 2D array is used to represent a sparse matrix, in which there are three rows named:
● Row: index of the row where the non-zero element is located
● Column: index of the column where the non-zero element is located
● Value: value of the non-zero element located at index (row, column)
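As a sketch, the example matrix above could be stored as triples in C (the array name and layout are illustrative, not part of the notes):

/* Triple (array) representation of the 4 x 5 example matrix above:
   one row per non-zero element, columns are {row, column, value}. */
int sparse[6][3] = {
    {0, 2, 3},
    {0, 4, 4},
    {1, 2, 5},
    {1, 3, 7},
    {3, 1, 2},
    {3, 2, 6}
};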

Method 2: Using Linked Lists


In the linked list representation, each node has four fields, defined as:
● Row: index of the row where the non-zero element is located
● Column: index of the column where the non-zero element is located
● Value: value of the non-zero element located at index (row, column)
● Next node: address of the next node
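A sketch of such a node in C with the four fields listed above (the struct name SparseNode is illustrative):

struct SparseNode {
    int row;                  /* row index of the non-zero element    */
    int col;                  /* column index of the non-zero element */
    int value;                /* the non-zero value itself            */
    struct SparseNode *next;  /* address of the next node             */
};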

3.LINEAR LIST
Linear data structures are a type of data structure in computer science where data elements
are arranged sequentially, or linearly. Each element has a previous and a next adjacent element,
except for the first and last elements.

Characteristics of Linear Data Structure:


● Sequential Organization: In linear data structures, data elements are arranged
sequentially, one after the other. Each element has a unique predecessor (except for the
first element) and a unique successor (except for the last element)

● Order Preservation: The order in which elements are added to the data structure is
preserved. This means that the first element added will be the first one to be accessed or
removed, and the last element added will be the last one to be accessed or removed.
● Fixed or Dynamic Size: Linear data structures can have either fixed or dynamic sizes.
Arrays typically have a fixed size when they are created, while other structures like linked
lists, stacks, and queues can dynamically grow or shrink as elements are added or
removed.
● Efficient Access: Accessing elements within a linear data structure is typically efficient.
For example, arrays offer constant-time access to elements using their index.
Linear data structures are commonly used for organising and manipulating data in a
sequential fashion.

Some of the most common linear data structures include:


● Arrays: A collection of elements stored in contiguous memory locations.
● Linked Lists: A collection of nodes, each containing an element and a reference to the
next node.
● Stacks: A collection of elements with Last-In-First-Out (LIFO) order.
● Queues: A collection of elements with First-In-First-Out (FIFO) order

SINGLY LINKED LIST IMPLEMENTATION

What is Singly Linked List?


A singly linked list in data structures is essentially a series of connected elements where
each element, known as a node, contains a piece of data and a reference to the next
node in the sequence.

This structure allows for easy and efficient addition or removal of elements without
rearranging the entire data structure, making it a flexible choice for many applications.

Example of Singly Linked List


Consider an application like a dynamic to-do list where users can add and remove tasks.
Here, a singly linked list allows new tasks to be added or old tasks to be deleted without
the need to shift other elements.

This flexibility and efficiency make singly linked lists a popular choice for implementing
other foundational data structures, such as stacks and queues, and for applications like
memory management, where the allocation and deallocation of memory happen
frequently and in varying sizes.

Representation of Singly Linked List


Node:
The fundamental part of a singly linked list. Each node consists of two components:

● Data: This part of the node stores the actual data that the list is meant to hold. It
can be any type of data—numbers, characters, or even more complex data
structures.

● Next: This is a pointer (or reference) to the next node in the list. It's how the list
maintains its sequence, by linking each node to the subsequent one.

Head:
The first node in a linked list is called the head. It is the entry point to the list and used
as a reference point to traverse it.

Null:
The last node of a linked list, which points to null, indicating the end of the list.
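A minimal sketch of such a node in C (the struct name is illustrative; the later examples in these notes use an equivalent Java class):

/* A singly linked list node: data plus a pointer to the next node.
   The head pointer refers to the first node; the last node's next is NULL. */
struct Node {
    int data;            /* the data stored in this node        */
    struct Node *next;   /* reference to the next node, or NULL */
};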

Functionality of Singly Linked List:


The list starts with a head pointer, which points to the first node in the list. The last node
in the list points to null, indicating that there are no more nodes after it. This null marking
the end of the list is crucial—it tells any process or algorithm when to stop iterating
through the list.

Singly Linked List Operations in Data Structure
Following are the primary operations you can perform on a singly linked list:

1. Insertion Operations
● Insertion at the Beginning: Also known as "Insertion at Head." This operation
involves adding a new node right at the start of the list. The new node then
becomes the head of the list. This operation is quick because it simply involves
pointing the new node to the current head of the list and then updating the head
to this new node, making it a constant time operation, O(1).

● Insertion at the End: Known as "Insertion at Tail." This requires traversing the
entire list to find the last node and then adding the new node after this. The new
node's next pointer is set to null, indicating the end of the list. Since you need to
traverse the entire list, this operation has a time complexity of O(n), where n is
the number of nodes in the list.

● Insertion after a Given Node: If you need to insert a new node after a specified
node, you connect the new node to the list by adjusting the pointers. The new
node points to the next node of the current node, and the current node's next
pointer is updated to point to the new node. This operation generally requires
O(n) in the worst case because you might need to traverse the list to find the
specified node.

2. Deletion Operations
● Deletion at the Beginning: Removing the first node (head) of the list can be
done by simply updating the head to point to the second node. This is also a
constant time operation, O(1).

● Deletion from the Middle or Specific Node: To delete a node after a specific
node, you update the next pointer of the preceding node to skip the node to be
deleted and point to the following node. Depending on the position of the node to
delete, this can also involve traversing the list, making it O(n).

● Deletion at the End: Deleting the last node requires traversing the list to find the
second last node and updating its next pointer to null. This operation takes O(n)
time as well.

3. Display Operation
To display or traverse the singly linked list, start from the head and move through each
node until you reach the end. Each node's data is accessed as you traverse, and this
operation has a time complexity of O(n).

4. Search Operation
Searching involves traversing the list from the head and checking each node's data
against what you are searching for. This operation also has a time complexity of O(n) in
the worst case.

Advantages of Singly Linked Lists


● Singly linked lists in data structures can grow and shrink during runtime as
needed without a predefined size.

● They only allocate memory for nodes that are actually in use, reducing memory
waste compared to pre-allocated data structures like arrays.

● Adding or removing nodes doesn't require shifting elements, which can be a
costly operation in arrays. This is particularly beneficial at the beginning of the
list.

● Unlike arrays, linked lists don’t reserve memory in advance, which can be more
efficient for certain types of applications where the size of the data structure
fluctuates.

Disadvantages of Singly Linked Lists


● Accessing an element in a singly linked list requires traversal from the head to
the point of interest, which can be time-consuming.

● Each node in a singly linked list requires additional memory for the pointer
alongside the actual data.

● Implementing a singly linked list is more complex than using an array, particularly
when it comes to handling pointers, which can introduce bugs like memory leaks.

A linked list is a linear data structure which can store a collection of "nodes" connected
together via links, i.e. pointers. Linked list nodes are not stored at contiguous locations;
rather, they are linked using pointers to different memory locations. A node consists of the
data value and a pointer to the address of the next node within the linked list.

A linked list is a dynamic linear data structure whose memory can be allocated or de-allocated
at run time based on insertion or deletion operations; this helps in using system memory
efficiently. Linked lists can be used to implement various data structures like stacks,
queues, graphs, hash maps, etc.

A linked list starts with a head pointer which points to the first node. Every node consists of
data which holds the actual data (value) associated with the node and a next pointer which
holds the memory address of the next node in the linked list. The last node is called the tail
node in the list which points to null indicating the end of the list.

Basic Operations in Linked List

The basic operations in the linked lists are insertion, deletion, searching, display, and deleting
an element at a given key. These operations are performed on Singly Linked Lists as given
below −

● Insertion − Adds an element at the beginning of the list.

● Deletion − Deletes an element at the beginning of the list.
● Display − Displays the complete list.
● Search − Searches an element using the given key.
● Delete − Deletes an element using the given key.

LINKED LIST - INSERTION OPERATION

Adding a new node to a linked list is a multi-step activity. We shall learn this with diagrams
here. First, create a node using the same structure and find the location where it has to be
inserted.

Imagine that we are inserting a node B (NewNode) between A (LeftNode) and C (RightNode).
Then point B.next to C −

NewNode.next -> RightNode;

and point A.next to B −

LeftNode.next -> NewNode;

INSERTION AT BEGINNING

In this operation, we are adding an element at the beginning of the list.

Algorithm

1. START
2. Create a node to store the data
3. Check if the list is empty
4. If the list is empty, add the data to the node and
assign the head pointer to it.
5. If the list is not empty, add the data to a node and link to the
current head. Assign the head to the newly added node.
6. END

Example:
public class Node
{
int data;
Node next;

// Constructor to initialize the node with data


public Node(int data)
{
this.data = data;
this.next = null;
}
}
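The example above only defines the node itself. A minimal C sketch of the insertion step, following the algorithm given earlier (the function name insertAtBeginning is illustrative, not part of the notes):

/* Insert a new node at the beginning of the list: the new node points to
   the current head, and the head is updated to the new node.  O(1). */
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

void insertAtBeginning(struct Node **head_ref, int data) {
    struct Node *newNode = malloc(sizeof(struct Node));
    newNode->data = data;
    newNode->next = *head_ref;   /* link the new node to the current head      */
    *head_ref = newNode;         /* the new node becomes the head of the list  */
}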

INSERTION AT ENDING

In this operation, we are adding an element at the ending of the list.

Algorithm

1. START
2. Create a new node and assign the data
3. Find the last node
4. Point the last node to new node
5. END
Example
class Node {
public int data;
public Node next;

// Constructor with both data and next node


public Node(int data1, Node next1) {
data = data1;
next = next1;
}

// Constructor with only data (assuming next is initially null)


public Node(int data1) {
data = data1;
next = null;
}
}
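Again, the class above only defines the node. A minimal C sketch of the insertion step itself, using the same struct Node (and #include <stdlib.h>) as in the previous sketch; the function name insertAtEnd is illustrative:

/* Insert a new node at the end: walk to the last node, then link it to the
   new node, whose next pointer is NULL.  O(n). */
void insertAtEnd(struct Node **head_ref, int data) {
    struct Node *newNode = malloc(sizeof(struct Node));
    newNode->data = data;
    newNode->next = NULL;            /* the new node will be the last node     */

    if (*head_ref == NULL) {         /* empty list: new node becomes the head  */
        *head_ref = newNode;
        return;
    }
    struct Node *last = *head_ref;
    while (last->next != NULL)       /* traverse to the current last node      */
        last = last->next;
    last->next = newNode;            /* point the last node to the new node    */
}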

LINKED LIST - DELETION OPERATION

Deletion is also a multi-step process. We shall learn it with a pictorial representation.
First, locate the target node to be removed by using searching algorithms.

The left (previous) node of the target node now should point to the next node of the target
node −

LeftNode.next -> TargetNode.next;

This will remove the link that was pointing to the target node. Now, using the following code,
we will remove what the target node is pointing at.

TargetNode.next -> NULL;

If we need to use the deleted node, we can keep it in memory; otherwise, we can simply
deallocate the memory and wipe off the target node completely.

DELETION AT BEGINNING

You can delete either from the beginning, end or from a particular position.

1. Delete from beginning


● Point head to the second node

head = head->next;

2. Delete from end


● Traverse to second last element
● Change its next pointer to null

struct node* temp = head;

while (temp->next->next != NULL) {
    temp = temp->next;
}

temp->next = NULL;

3. Delete from middle


● Traverse to element before the element to be deleted
● Change next pointers to exclude the node from the chain

struct node* temp = head;

for (int i = 2; i < position; i++) {
    if (temp->next != NULL) {
        temp = temp->next;
    }
}

temp->next = temp->next->next;

Search an Element on a Linked List

You can search for an element in a linked list using a loop, with the following steps. We are
searching for item in the linked list.
● Make head the current node.
● Run a loop until the current node is NULL, because the last element points to NULL.
● In each iteration, check if the key of the node is equal to item. If the key matches the item,
return true; if the loop ends without a match, return false.

// Search a node

bool searchNode(struct Node** head_ref, int key) {

    struct Node* current = *head_ref;

    while (current != NULL) {

        if (current->data == key) return true;

        current = current->next;
    }

    return false;
}
Doubly Linked List

A doubly linked list is a type of linked list in which each node consists of 3 components:
● *prev - address of the previous node
● data - data item
● *next - address of next node

Representation of Doubly Linked List


Let's see how we can represent a doubly linked list on an algorithm/code. Suppose we have a
doubly linked list:

struct node {
    int data;
    struct node *next;
    struct node *prev;
};

Insertion on a Doubly Linked List

Pushing a node to a doubly-linked list is similar to pushing a node to a linked list, but extra
work is required to handle the pointer to the previous node.

We can insert elements at 3 different positions of a doubly-linked list:

1. Insertion at the beginning
2. Insertion in-between nodes
3. Insertion at the end
1. Insertion at the Beginning
Let's add a node with value 6 at the beginning of the doubly linked list we made above.
1. Create a new node
● allocate memory for newNode
● assign the data to newNode.


2. Set prev and next pointers of the new node
● point next of newNode to the first node of the doubly linked list
● point prev of newNode to null

3. Make the new node the head node
● Point prev of the first node to newNode (now the previous head is the second node)
● Point head to newNode
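A minimal C sketch of these three steps, using the struct node shown above (the function name push is illustrative, not part of the notes):

/* Insert a node with the given data at the beginning of a doubly linked list. */
#include <stdlib.h>

void push(struct node **head_ref, int data) {
    /* 1. Create a new node and assign the data */
    struct node *newNode = malloc(sizeof(struct node));
    newNode->data = data;

    /* 2. Set prev and next pointers of the new node */
    newNode->next = *head_ref;        /* next points to the current first node  */
    newNode->prev = NULL;             /* prev points to null                    */

    /* 3. Make the new node the head node */
    if (*head_ref != NULL)
        (*head_ref)->prev = newNode;  /* old head's prev now points to newNode  */
    *head_ref = newNode;              /* head points to newNode                 */
}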
