FDS Unit 2 Notes
Unit – II
Linear Data Structure Using Sequential
Organization
1. Concept of Sequential Organization
Sequential organization refers to the way data is stored and accessed in a linear order, one
element after another. In this method, data elements are arranged sequentially, and to retrieve
or manipulate a particular element, you often have to go through other elements before it.
Advantages:
1. Simplicity: Easy to implement and understand, especially for simple data structures like
arrays or sequential files.
2. Efficient for Sequential Access: Best suited for operations where all the data needs to be
accessed or processed in order (e.g., reading a file or traversing a list).
3. Low Overhead: No additional data structures or pointers are needed to keep track of
elements, as in linked data structures, so there is less memory overhead.
Disadvantages:
1. Inefficient for Random Access: Searching for a specific element requires a linear search
(O(n) time complexity), which can be inefficient for large datasets, especially if the
element is near the end.
2. Insertion and Deletion Overhead: Inserting or deleting an element in the middle of a
sequentially organized structure (like an array) requires shifting the subsequent elements,
which can be time-consuming (O(n)).
3. Fixed Size: In structures like arrays, the size is fixed at the time of creation, which makes
resizing difficult. Allocating more memory than needed leads to wasted space, while
under-allocating may result in running out of space.
4. Not Suitable for Dynamic Data: Sequential organization is less efficient for dynamic
data where frequent insertions, deletions, or reordering is required, as these operations are
expensive.
Syntax:
type variable_name[size];
Example:
int A[n];
Here the valid indices run from start = 0 to end = n - 1.
Viewed as an abstract data type, an array provides two basic operations:
declare CREATE() → array
The function RETRIEVE() takes as input an array and an index, and either returns the
appropriate value or an error.
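A minimal Python sketch of these two abstract operations (the concrete error behaviour, raising IndexError, is an assumption):

```python
def create(n):
    # CREATE: return a new array of n slots, each initially empty (None)
    return [None] * n

def retrieve(arr, index):
    # RETRIEVE: return the value at index, or signal an error if out of range
    if 0 <= index < len(arr):
        return arr[index]
    raise IndexError("index out of range")

a = create(3)
a[0] = 10
print(retrieve(a, 0))   # 10
```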
Arrays are a basic data structure that support a variety of operations. Here are the common
operations performed on arrays:
1. Inserting an Element: Insert an element at a specific position in the array. If the position is
not the last index, the subsequent elements must be shifted one place to the right.
2. Deleting an Element: Remove an element from a specific position, and shift the subsequent
elements left to fill the gap.
3. Traversing an Array: Access and process each element of the array, typically using a loop.
4. Searching for an Element: Find the position of a specific value in the array. This can be
done either by a linear search (scan every element, O(n)) or, if the array is sorted, by a binary
search (O(log n)).
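The shifting behaviour described above can be sketched in Python (the function names and the capacity convention, where `arr` has fixed length and `n` counts the elements in use, are illustrative):

```python
def insert_at(arr, n, pos, value):
    # Shift elements right from the end down to pos, then place value.
    # arr has capacity len(arr); n is the current number of elements.
    for k in range(n, pos, -1):
        arr[k] = arr[k - 1]
    arr[pos] = value
    return n + 1

def delete_at(arr, n, pos):
    # Shift elements left to close the gap at pos.
    for k in range(pos, n - 1):
        arr[k] = arr[k + 1]
    return n - 1

def linear_search(arr, n, target):
    # O(n) scan; returns the index of target, or -1 if absent.
    for k in range(n):
        if arr[k] == target:
            return k
    return -1

a = [10, 20, 30, None]          # capacity 4, three elements in use
n = insert_at(a, 3, 1, 15)
print(a, n)                     # [10, 15, 20, 30] 4
```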
Merging refers to the process of combining two arrays into a single array. The elements from
both arrays are combined while maintaining their original order, either sorted or unsorted,
depending on the arrays' characteristics.
When merging two sorted arrays A[] and B[] into C[], the goal is to maintain the sorted order in the final array:
1. Keep one pointer per array (i for A[], j for B[]) and compare the elements they point to.
2. Append the smaller element to C[] and move the pointer of the respective array forward.
3. When one array is exhausted, append the remaining elements of the other array to C[].
Pseudocode:
i = 0; j = 0; C = []
while i < length(A) and j < length(B):
    if A[i] <= B[j]:
        C.append(A[i]); i += 1
    else:
        C.append(B[j]); j += 1
while i < length(A):
    C.append(A[i]); i += 1
while j < length(B):
    C.append(B[j]); j += 1
return C
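The same merge, written as runnable Python (a direct translation; the names A, B, and C follow the text):

```python
def merge_sorted(A, B):
    # Merge two sorted arrays A and B into a single sorted array C.
    C = []
    i = j = 0
    while i < len(A) and j < len(B):
        if A[i] <= B[j]:
            C.append(A[i]); i += 1
        else:
            C.append(B[j]); j += 1
    # One array is exhausted; copy the leftover elements of the other.
    C.extend(A[i:])
    C.extend(B[j:])
    return C

print(merge_sorted([1, 3, 5], [2, 4, 6]))   # [1, 2, 3, 4, 5, 6]
```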
Storage representation refers to how data structures (like arrays) are stored in memory, and
address calculation refers to the method used to compute the memory address of any specific
element within the data structure.
If the elements are stored in a row-wise manner, then it is called row-major representation.
For example, in row-major representation a two-dimensional array is stored row by row: all
the elements of the first row are placed first, followed by all the elements of the second row,
and so on.
If elements are stored in a column-wise manner, then it is called column-major representation.
For example, in column-major representation a two-dimensional array is stored column by
column: all the elements of the first column are placed first, followed by the second column,
and so on.
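For an m × n array with base address Base and element size `size` (0-based indices assumed), the standard formulas are Address(A[i][j]) = Base + (i × n + j) × size in row-major order, and Base + (j × m + i) × size in column-major order. A small Python sketch checks them:

```python
def row_major_address(base, i, j, n_cols, elem_size):
    # A[i][j] in row-major order: rows are laid out one after another.
    return base + (i * n_cols + j) * elem_size

def column_major_address(base, i, j, n_rows, elem_size):
    # A[i][j] in column-major order: columns are laid out one after another.
    return base + (j * n_rows + i) * elem_size

# A 3 x 4 array of 4-byte ints starting at (hypothetical) address 1000:
print(row_major_address(1000, 1, 2, 4, 4))     # 1000 + (1*4 + 2)*4 = 1024
print(column_major_address(1000, 1, 2, 3, 4))  # 1000 + (2*3 + 1)*4 = 1028
```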
Declaration:
data_type array_name[rows][columns];
Example:
int A[3][4];
This creates a 2D array with 3 rows and 4 columns, capable of holding 12 elements.
Storage in Memory:
Declaration:
data_type array_name[size1][size2][size3];
Where:
size1, size2, and size3 give the number of elements along each of the three dimensions.
Example:
int A[3][4][2];
Storage in Memory:
Row-major order: Elements are stored so that the last (rightmost) index varies fastest; the
array is laid out plane by plane, and each plane row by row.
Column-major order: Elements are stored so that the first (leftmost) index varies fastest;
consecutive elements in memory differ in the first index.
Characteristics:
1. Order of Elements: The elements in an ordered list are arranged based on some logical
order (e.g., numerical, alphabetical).
2. Position Matters: The position of each element is important and defines the order in
the list.
3. Access by Index: Elements are often accessed by their position (index) in the list.
Example:
function printEvenNumbers():
    evenNumbers = []
    for i = 1 to n:
        if i mod 2 == 0:
            append i to evenNumbers
    print evenNumbers
Explanation:
evenNumbers = [] initializes an empty list.
The loop checks each number and appends it to evenNumbers if it is even.
After the loop, the complete list of even numbers is printed at once.
A polynomial is the sum of terms, where each term consists of a variable, a coefficient, and
an exponent. A single-variable polynomial can be represented with a one-dimensional array:
the index of the array acts as the exponent, and the coefficient is stored at that particular
index, which can be represented as follows:
Advantages:
1. Efficient Access: Accessing the coefficient of any term with a given exponent is fast,
as it involves direct index access (i.e., O(1) time complexity).
Disadvantages:
1. Space Inefficiency: If the polynomial has large gaps between exponents (e.g.,
x^100 + x^5 + 1), a lot of space is wasted by storing zero coefficients for the missing
exponents. This leads to inefficient memory usage.
2. Fixed Size: The size of the array is fixed based on the highest degree of the polynomial.
For dynamic polynomial manipulation (e.g., multiplication or division), adjusting the
array size may be required.
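A small Python sketch of this representation, where the array index is the exponent (the polynomial 3x^2 + 2x + 1 used here is illustrative):

```python
def poly_to_array(terms, degree):
    # terms: list of (coefficient, exponent) pairs; array index = exponent.
    coeffs = [0] * (degree + 1)
    for coefficient, exponent in terms:
        coeffs[exponent] = coefficient
    return coeffs

# 3x^2 + 2x + 1 -> coefficient of x^e stored at index e
p = poly_to_array([(3, 2), (2, 1), (1, 0)], degree=2)
print(p)       # [1, 2, 3]
print(p[2])    # coefficient of x^2, looked up in O(1)
```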
Let P1 = [(5, 3), (4, 2), (2, 0)] represent 5x^3 + 4x^2 + 2, and P2 = [(3, 3), (2,
1)] represent 3x^3 + 2x.
Step-by-step:
1. Compare (5, 3) and (3, 3): Exponents are equal → Add coefficients →
Result: (5+3, 3) = (8, 3).
2. Compare (4, 2) and (2, 1): Exponent 2 > 1 → Append (4, 2) to result.
3. Compare remaining (2, 0) and (2, 1): Exponent 1 > 0 → Append (2, 1) to
result.
4. Append the remaining term (2, 0) of P1 to the result.
Final result: [(8, 3), (4, 2), (2, 1), (2, 0)], representing 8x^3 + 4x^2 + 2x + 2.
Time Complexity:
The time complexity is O(n + m), where n is the number of terms in P1 and m is the
number of terms in P2.
This is because we iterate through both polynomials exactly once, merging them.
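The merge-based addition above can be sketched in Python, on (coefficient, exponent) lists sorted by decreasing exponent:

```python
def add_poly(P1, P2):
    # P1, P2: lists of (coefficient, exponent) pairs, sorted by decreasing exponent.
    result, i, j = [], 0, 0
    while i < len(P1) and j < len(P2):
        c1, e1 = P1[i]
        c2, e2 = P2[j]
        if e1 == e2:                      # like terms: add the coefficients
            if c1 + c2 != 0:
                result.append((c1 + c2, e1))
            i += 1; j += 1
        elif e1 > e2:                     # P1's term has the larger exponent
            result.append((c1, e1)); i += 1
        else:
            result.append((c2, e2)); j += 1
    result.extend(P1[i:])                 # leftover terms of whichever remains
    result.extend(P2[j:])
    return result

print(add_poly([(5, 3), (4, 2), (2, 0)], [(3, 3), (2, 1)]))
# [(8, 3), (4, 2), (2, 1), (2, 0)]
```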
Algorithm to Find the Multiplication of Two Single-Variable Polynomials Using an Array:
Step 1: Start.
Step 2: Input the two polynomials P1[] and P2[], each containing pairs
of coefficients and exponents.
Step 3: Initialize an empty result polynomial R[].
Step 4: For each term (c1, e1) in P1[], repeat Steps 5 and 6 for each
term (c2, e2) in P2[].
Step 5: Compute the product term (c1 × c2, e1 + e2).
Step 6: If R[] already contains a term with exponent e1 + e2, add
c1 × c2 to its coefficient; otherwise, append the product term to R[].
Step 7: Stop.
Explanation:
1. The algorithm starts by taking two input polynomials represented as arrays of terms
(each term consists of a coefficient and an exponent).
2. It then iterates over each term in the first polynomial (P1[]), multiplying it with every
term in the second polynomial (P2[]).
3. The result of multiplying two terms involves multiplying their coefficients and adding
their exponents.
4. If a term with the same exponent already exists in the result, the coefficients are added
together (to combine like terms).
Time Complexity:
The outer loop iterates over all terms in the first polynomial P1[], and the inner loop
iterates over all terms in the second polynomial P2[], so the total time complexity is
O(n × m), where n and m are the numbers of terms in P1[] and P2[].
Example:
Let:
P1(x) = 3x^2 + 2x + 1
P2(x) = x + 1
Multiplying term by term and combining like terms gives:
P1(x) × P2(x) = 3x^3 + 5x^2 + 3x + 1
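The multiplication algorithm can be sketched in Python; a dictionary keyed by exponent is used here (an implementation choice, in place of scanning the result array) to combine like terms:

```python
def multiply_poly(P1, P2):
    # P1, P2: lists of (coefficient, exponent) pairs.
    terms = {}
    for c1, e1 in P1:
        for c2, e2 in P2:
            # Multiply the coefficients, add the exponents, combine like terms.
            terms[e1 + e2] = terms.get(e1 + e2, 0) + c1 * c2
    # Return pairs sorted by decreasing exponent, dropping zero coefficients.
    return [(c, e) for e, c in sorted(terms.items(), reverse=True) if c != 0]

# (3x^2 + 2x + 1) * (x + 1) = 3x^3 + 5x^2 + 3x + 1
print(multiply_poly([(3, 2), (2, 1), (1, 0)], [(1, 1), (1, 0)]))
# [(3, 3), (5, 2), (3, 1), (1, 0)]
```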
3. Sparse Matrix
For example, suppose a matrix is of size 100 x 100 and only 10 elements are non-zero. To
access these 10 elements, one has to scan up to 100 x 100 = 10,000 positions, yet only 10 of
them hold non-zero values; the remaining positions are filled with zeros. Assuming 2 bytes
per element, we still have to allocate 100 x 100 x 2 = 20,000 bytes of memory.
Hence sparse matrix representation is a kind of representation in which only the non-zero
elements, along with their row and column indices, are stored.
A 2D array is used to represent a sparse matrix, in which each non-zero element is recorded
as a triplet with three fields named row, column, and value.
Time Complexity: O(nm), where n is the number of rows in the sparse matrix, and m is the
number of columns in the sparse matrix.
Auxiliary Space: O(nm), where n is the number of rows in the sparse matrix, and m is the
number of columns in the sparse matrix.
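Building the triplet representation from a full 2D matrix can be sketched as:

```python
def to_triplets(matrix):
    # Scan the full matrix once (O(n*m)) and keep only the non-zero entries
    # as (row, column, value) triplets, in row-major order.
    triplets = []
    for r, row in enumerate(matrix):
        for c, value in enumerate(row):
            if value != 0:
                triplets.append((r, c, value))
    return triplets

M = [[0, 0, 3],
     [0, 5, 0],
     [0, 0, 0]]
print(to_triplets(M))   # [(0, 2, 3), (1, 1, 5)]
```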
This algorithm adds two sparse matrices, represented using triplet representation (each non-
zero element is stored as a triplet of row index, column index, and value).
Assume two matrices A and B of size m×n are given in the form of a triplet representation:
Each matrix is represented by an array of triplets, where each triplet is {row, column,
value}.
The matrices are already sorted in row-major order (first by row index, then by column
index).
Step 1: Start.
Step 2: Input sparse matrices A[] and B[] (both stored as triplets).
Step 3: Initialize i = 0, j = 0, and an empty result matrix C[].
Step 4: While i < length(A) and j < length(B), repeat Steps 5 to 8.
Step 5: If A[i] comes before B[j] in row-major order, copy A[i] to C[]
and increment i.
Step 6: Else, if B[j] comes before A[i], copy B[j] to C[] and
increment j.
Step 7: Else (same row and column), add the two values.
Step 8: If the sum is non-zero, store the triplet in C[]; increment
both i and j.
Step 9: While i < length(A), add remaining elements of A[] to C[].
Step 10: While j < length(B), add remaining elements of B[] to C[].
Step 11: Stop.
Explanation:
1. The two input sparse matrices A[] and B[] are traversed using two pointers i and j.
2. For each triplet, the algorithm compares the row and column indices to determine which
element (from A[] or B[]) to add to the result matrix C[].
3. If the indices match, the corresponding values are added, and if the sum is non-zero, it
is stored in C[].
4. After one matrix is completely traversed, the remaining elements of the other matrix
are added directly to the result.
Time Complexity:
The algorithm iterates through both matrices in parallel, so the time complexity is
O(k1 + k2), where:
k1 is the number of non-zero elements in A[], and
k2 is the number of non-zero elements in B[].
This ensures that the addition is performed efficiently by skipping over zero elements,
focusing only on the non-zero entries.
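The two-pointer addition over triplets can be sketched in Python (row-major order of the triplets is assumed, as stated above):

```python
def add_sparse(A, B):
    # A, B: lists of (row, column, value) triplets in row-major order.
    C, i, j = [], 0, 0
    while i < len(A) and j < len(B):
        ra, ca, va = A[i]
        rb, cb, vb = B[j]
        if (ra, ca) == (rb, cb):        # same position: add the values
            if va + vb != 0:
                C.append((ra, ca, va + vb))
            i += 1; j += 1
        elif (ra, ca) < (rb, cb):       # A's triplet comes first in row-major order
            C.append(A[i]); i += 1
        else:
            C.append(B[j]); j += 1
    C.extend(A[i:])                     # leftovers from whichever matrix remains
    C.extend(B[j:])
    return C

A = [(0, 0, 1), (1, 2, 4)]
B = [(0, 0, 2), (2, 1, 7)]
print(add_sparse(A, B))   # [(0, 0, 3), (1, 2, 4), (2, 1, 7)]
```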
The simple transpose of a sparse matrix involves swapping the rows and columns of the non-
zero elements and then arranging them in row-major order.
Assume that the sparse matrix is stored in triplet form, where each element is represented as
{row, column, value}.
Step 1: Start.
Step 2: Input sparse matrix A[] (stored as triplets) with k non-zero
elements.
Step 3: For each triplet in A[], repeat Steps 4.1 and 4.2.
Step 4.1: Swap the row index and the column index of the triplet.
Step 4.2: Insert the transposed element into result matrix T[].
Step 5: Sort the matrix T[] by row (now the original columns) and by
column (now the original rows).
Step 6: Output T[].
Step 7: Stop.
Explanation:
1. Swapping Rows and Columns: For each non-zero element in the matrix, the row index
and column index are swapped, converting the original sparse matrix into its transposed
form.
2. Sorting: After the swap, the matrix needs to be sorted by row (original column index)
and column (original row index) to maintain the proper row-major order.
Time Complexity:
1. Swapping rows and columns: This operation takes O(k), where k is the number of
non-zero elements.
2. Sorting: Sorting the matrix T[] takes O(k log k) time, where k is the number of non-
zero elements.
This approach is simple but less efficient compared to the fast transpose method, which avoids
the need for sorting.
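The simple transpose can be sketched in Python (swap each triplet, then sort):

```python
def simple_transpose(triplets):
    # Swap row and column in every triplet, then sort back into row-major order.
    swapped = [(c, r, v) for (r, c, v) in triplets]
    swapped.sort(key=lambda t: (t[0], t[1]))   # the O(k log k) sorting step
    return swapped

A = [(0, 2, 3), (1, 0, 5), (1, 2, 8)]
print(simple_transpose(A))   # [(0, 1, 5), (2, 0, 3), (2, 1, 8)]
```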
The Fast Transpose of a sparse matrix improves upon the simple transpose by avoiding the
need to sort the elements after transposing them. It efficiently computes the transposed
positions using the row counts and cumulative positions of elements.
Given a sparse matrix in triplet representation, where each triplet represents {row, column,
value}, the fast transpose algorithm rearranges the matrix in O(k + n) time, where k is the
number of non-zero elements, and n is the number of columns.
Step 1: Start.
Step 2: Input sparse matrix A[] with m rows, n columns, and k non-
zero elements.
Step 3: Initialize row_count[0..n-1] to 0.
Step 4: For i = 0 to k-1, repeat Step 5.
Step 5: Increment row_count[A[i].column].
Step 6: Set position[0] = 0.
Step 7: Compute the starting position for each column in the transposed
matrix: for c = 1 to n-1, position[c] = position[c-1] + row_count[c-1].
Step 8: For i = 0 to k-1, repeat Steps 8.1.1 and 8.1.2.
Step 8.1.1: Let c = A[i].column.
Step 8.1.2: Place the element (with row and column swapped)
in T[] at index position[c], then increment position[c].
Step 9: Stop.
Explanation:
1. The counting pass determines how many non-zero elements are in each column of matrix
A (these columns will become the rows of the transposed matrix).
2. The cumulative-sum pass calculates the starting index for each column in the transposed
matrix from the counts in row_count[].
3. The placement pass puts each element of A in its correct position in the transposed matrix
T[], using the position[] array, which tells where the next element from each column goes.
Time Complexity:
The counting pass (non-zero elements per column) takes O(k), where k is the number of
non-zero elements; the cumulative-sum pass takes O(n), where n is the number of columns;
and the placement pass takes O(k).
Thus, the total time complexity is O(k + n), which is optimal compared to the simple
transpose's O(k log k) sorting step. This makes the Fast Transpose more efficient for large
sparse matrices.
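The fast transpose can be sketched in Python (count per column, cumulative starting positions, then direct placement, with no sorting):

```python
def fast_transpose(triplets, n_cols):
    # Pass 1: count the non-zero elements in each column of the original matrix.
    count = [0] * n_cols
    for _, c, _ in triplets:
        count[c] += 1
    # Pass 2: cumulative sum gives each column's starting slot in the result.
    position = [0] * n_cols
    for c in range(1, n_cols):
        position[c] = position[c - 1] + count[c - 1]
    # Pass 3: place each element directly, swapping row and column.
    T = [None] * len(triplets)
    for r, c, v in triplets:
        T[position[c]] = (c, r, v)
        position[c] += 1
    return T

A = [(0, 2, 3), (1, 0, 5), (1, 2, 8)]
print(fast_transpose(A, 3))   # [(0, 1, 5), (2, 0, 3), (2, 1, 8)]
```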
The time and space tradeoff refers to a situation where you can reduce the time complexity
of an algorithm by using more space (memory), or reduce the space complexity by allowing
the algorithm to take more time. This tradeoff is a common consideration in algorithm design
and optimization.
Key Concepts:
Using extra memory (lookup tables, caches, precomputed results) can speed an
algorithm up, while recomputing values on demand saves memory at the cost of
extra time.
Example:
o Hashing: In searching problems, you can use a hash table (which requires extra
space) to find elements in O(1) average time, instead of using a linear search,
which takes O(n) time.
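A small Python illustration of this tradeoff (Python's built-in set is hash-based; the data size here is illustrative):

```python
data = list(range(1_000_000))

# Linear search: no extra space, O(n) time per lookup.
def linear_contains(arr, target):
    for x in arr:
        if x == target:
            return True
    return False

# Hash-based lookup: extra O(n) space, O(1) average time per lookup.
lookup = set(data)

print(linear_contains(data, 999_999))   # True, but scans the whole list
print(999_999 in lookup)                # True, found in constant time on average
```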
Practical Consideration:
If memory is limited, you may need to choose an algorithm that uses less space, even
if it takes longer to execute.
If speed is critical and memory is abundant, you might choose an algorithm that uses
more space to achieve faster execution times.