FDS Unit 2 Notes


Unit – II
Linear Data Structure Using Sequential
Organization
1. Concept of Sequential Organization
Sequential organization refers to the way data is stored and accessed in a linear order, one
element after another. In this method, data elements are arranged sequentially, and to retrieve
or manipulate a particular element, you often have to go through other elements before it.

1.1 Advantages and Disadvantages of Sequential Organization

Advantages:

1. Simplicity: Easy to implement and understand, especially for simple data structures like
arrays or sequential files.

2. Efficient for Sequential Access: Best suited for operations where all the data needs to be
accessed or processed in order (e.g., reading a file or traversing a list).

3. Memory Efficiency: Data is stored in contiguous memory locations, which reduces memory fragmentation and makes memory management simpler.

4. Low Overhead: No additional data structures or pointers are needed to keep track of
elements, as in linked data structures, so there is less memory overhead.

Disadvantages:

1. Inefficient for Random Access: Searching for a specific element requires a linear search
(O(n) time complexity), which can be inefficient for large datasets, especially if the
element is near the end.
2. Insertion and Deletion Overhead: Inserting or deleting an element in the middle of a
sequentially organized structure (like an array) requires shifting the subsequent elements,
which can be time-consuming (O(n)).

3. Fixed Size: In structures like arrays, the size is fixed at the time of creation, which makes
resizing difficult. Allocating more memory than needed leads to wasted space, while
under-allocating may result in running out of space.
4. Not Suitable for Dynamic Data: Sequential organization is less efficient for dynamic
data where frequent insertions, deletions, or reordering is required, as these operations are
expensive.

1.2 Overview of Array

An array is a fundamental data structure in programming used to store a collection of elements of the same type, such as integers, characters, or floating-point numbers. The elements in an array are stored in contiguous memory locations and can be accessed using an index.

Syntax:

type variable_name[size]

Example:

int arr[5] = {10, 20, 30, 40, 50};

• arr[0] gives 10, arr[1] gives 20, and so on.

• The array has a size of 5, which cannot be changed after initialization.

One dimensional Array:

The one-dimensional array 'a' is declared as int a[6];

Pseudocode to reverse the numbers in a one-dimensional array:

function reverseArray(arr, n):
    start = 0
    end = n - 1

    while start < end:
        // Swap arr[start] and arr[end]
        temp = arr[start]
        arr[start] = arr[end]
        arr[end] = temp

        // Move the pointers
        start = start + 1
        end = end - 1
1.3 Array as an Abstract Data Type

ADT ARRAY can be declared as below:

Structure ARRAY(value, index)

declare

CREATE() → array

RETRIEVE(array, index) → value

STORE(array, index, value) → array

The function CREATE() produces an empty array.

The function RETRIEVE() takes as input an array and an index, and either returns the appropriate value or an error.

The function STORE() is used to enter new index-value pairs.

1.4 Operations on Array

Arrays are a basic data structure that supports a variety of operations. Here are the common operations performed on arrays:

1. Accessing an Element: Retrieve an element at a specific index.

element = arr[i]; // Access element at index i



2. Updating an Element: Modify the value of an element at a specific index.

arr[i] = new_value; // Update element at index i

3. Inserting an Element: Insert an element at a specific position in the array. If the position is not the last index, subsequent elements must be shifted (see the C sketch after this list).

insert(arr, pos, value); // Insert value at position pos

4. Deleting an Element: Remove an element from a specific position, and shift elements to fill
the gap.

delete(arr, pos); // Delete element at position pos

5. Traversing an Array: Access and process each element of the array, typically using a loop.

for (int i = 0; i < n; i++) { process(arr[i]); }

6. Searching for an Element: Find the position of a specific value in the array. This can be done by a linear search or, if the array is sorted, by a binary search:

// Linear search example:


for (int i = 0; i < n; i++) {
if (arr[i] == value) { return i; } // Found at index i
}
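A hedged C sketch of insertion, deletion, and linear search with explicit shifting (insertAt, deleteAt, and linearSearch are illustrative helper names, not a standard API):

/* Insert value at index pos, shifting later elements right.
   Assumes the array has capacity for at least n + 1 elements. */
void insertAt(int arr[], int *n, int pos, int value) {
    for (int i = *n; i > pos; i--)
        arr[i] = arr[i - 1];          /* O(n) shift in the worst case */
    arr[pos] = value;
    (*n)++;
}

/* Delete the element at index pos, shifting later elements left. */
void deleteAt(int arr[], int *n, int pos) {
    for (int i = pos; i < *n - 1; i++)
        arr[i] = arr[i + 1];          /* fill the gap left by the deletion */
    (*n)--;
}

/* Linear search: return the index of value, or -1 if it is not present. */
int linearSearch(const int arr[], int n, int value) {
    for (int i = 0; i < n; i++)
        if (arr[i] == value)
            return i;
    return -1;
}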
1.5 Merging of two arrays

Merging refers to the process of combining two arrays into a single array. The elements from
both arrays are combined while maintaining their original order, either sorted or unsorted,
depending on the arrays' characteristics.

Steps to Merge Two Arrays:

When merging two sorted arrays, the goal is to maintain the sorted order in the final array:

1. Compare the current elements of A[] and B[].

2. Append the smaller element to C[] and move the pointer of the respective array forward.

3. Repeat until all elements of both arrays are added to C[].

Pseudocode:

function mergeSorted(A, B):
    i = j = 0
    C = []

    while i < length(A) and j < length(B):
        if A[i] <= B[j]:
            C.append(A[i])
            i += 1
        else:
            C.append(B[j])
            j += 1

    // Append remaining elements
    while i < length(A):
        C.append(A[i])
        i += 1
    while j < length(B):
        C.append(B[j])
        j += 1

    return C
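A C version of the same merge, assuming the caller provides a result array C large enough to hold n + m elements:

/* Merge two sorted arrays A (size n) and B (size m) into C, keeping sorted order. */
void mergeSorted(const int A[], int n, const int B[], int m, int C[]) {
    int i = 0, j = 0, k = 0;
    while (i < n && j < m) {
        if (A[i] <= B[j])
            C[k++] = A[i++];          /* take the smaller element from A */
        else
            C[k++] = B[j++];          /* take the smaller element from B */
    }
    while (i < n) C[k++] = A[i++];    /* append any remaining elements of A */
    while (j < m) C[k++] = B[j++];    /* append any remaining elements of B */
}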

1.6 Storage Representation and their Address Calculation

Storage representation refers to how data structures (like arrays) are stored in memory, and
address calculation refers to the method used to compute the memory address of any specific
element within the data structure.

1. Row Major Representation

If the elements are stored in a row-wise manner, it is called row major representation. For example, in a two-dimensional array stored in row-major order, all elements of row 0 are placed first, followed by all elements of row 1, and so on.

Formula for Address Calculation in Row-Major Order:

Address of A[i][j] = Base Address + [(i × number of columns + j) × size of each element]

2. Column Major Representation

If the elements are stored in a column-wise manner, it is called column major representation. For example, in a two-dimensional array stored in column-major order, all elements of column 0 are placed first, followed by all elements of column 1, and so on.

Formula for Address Calculation in Column-Major Order:

Address of A[i][j] = Base Address + [(j × number of rows + i) × size of each element]
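A quick worked example under assumed values (base address 1000, 4-byte integers, array A with 3 rows and 4 columns, zero-based indices):

Row-major:    Address of A[1][2] = 1000 + (1 × 4 + 2) × 4 = 1000 + 24 = 1024
Column-major: Address of A[1][2] = 1000 + (2 × 3 + 1) × 4 = 1000 + 28 = 1028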

1.7 Multidimensional Arrays

1. Two Dimensional Arrays:

A two-dimensional (2D) array is a collection of elements arranged in rows and columns, much like a matrix or table. Each element in the array is identified by two indices: one for the row and one for the column.

Declaration:

In most programming languages, a 2D array can be declared as:

data_type array_name[rows][columns];

Example:

int A[3][4];

This creates a 2D array with 3 rows and 4 columns, capable of holding 12 elements.

Storage in Memory:

A 2D array is stored in memory either in:

• Row-major order: Rows are stored one after the other.

• Column-major order: Columns are stored one after the other.

2. Three Dimensional Arrays:

A three-dimensional (3D) array is an extension of a 2D array that adds another dimension, often visualized as a collection of 2D arrays stacked on top of each other, forming a cube-like structure. It is used to store data in a multi-dimensional space.

Declaration:

In most programming languages, a 3D array can be declared as:

data_type array_name[size1][size2][size3];

Where:

• size1 represents the number of 2D arrays.

• size2 represents the number of rows in each 2D array.

• size3 represents the number of columns in each 2D array.

Example:

int A[3][4][2];

This creates a 3D array with:

• 3 layers (2D arrays),

• 4 rows per layer,

• 2 columns per row.

In total, the array holds 3 × 4 × 2 = 24 elements.

Storage in Memory:

• Row-major order: Elements are stored with the column index varying fastest, then the row index, then the layer index.

• Column-major order: Elements are stored with the row index varying fastest, then the column index, then the layer index.
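Extending the same idea, a sketch of the row-major address formula for the declaration A[size1][size2][size3], assuming zero-based indices:

Address of A[i][j][k] = Base Address + [((i × size2 + j) × size3 + k) × size of each element]

For the int A[3][4][2] example with an assumed base address of 1000 and 4-byte integers, the address of A[1][2][1] is 1000 + ((1 × 4 + 2) × 2 + 1) × 4 = 1000 + 52 = 1052.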

1.8 Concept of Ordered List

An ordered list is a collection of elements arranged in a specific sequential order based on a predefined criterion, such as ascending or descending value, or by the order of insertion.

Characteristics:

1. Order of Elements: The elements in an ordered list are arranged based on some logical
order (e.g., numerical, alphabetical).

2. Uniqueness: Depending on the application, elements may be unique (no duplicates) or can allow repeated values.

3. Position Matters: The position of each element is important and defines the order in
the list.

4. Access by Index: Elements are often accessed by their position (index) in the list.

Example:

• A list of numbers sorted in ascending order: [2, 5, 7, 10].

• A list of words arranged alphabetically: ["apple", "banana", "cherry"].

Example: Pseudocode to Print a List of Even Numbers from 0 to 10

function printEvenNumbers():
    evenNumbers = []               // Initialize an empty list

    for i = 0 to 10:
        if i % 2 == 0:             // Check if the number is even
            evenNumbers.append(i)  // Add the even number to the list

    print evenNumbers              // Print the list of even numbers

Explanation:
• evenNumbers = [] initializes an empty list.

• The loop adds even numbers to the list using append(i).

• After the loop, the complete list of even numbers is printed at once.

2. Single Variable Polynomial

2.1 Representation using arrays

A polynomial is a sum of terms, where each term consists of a coefficient, a variable, and an exponent.

A single variable polynomial can be represented using a one-dimensional array: the array index acts as the exponent, and the coefficient is stored at that index, as follows:

For example, 3x^4 + 5x^3 + 7x^2 + 10x - 19

This polynomial can be stored in a single-dimensional array.
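A sketch of that mapping for this example (array index = exponent, stored value = coefficient):

Index:        0    1    2    3    4
Coefficient: -19   10    7    5    3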



Advantages of Representing Single Variable Polynomials Using 1-D Array:

1. Simplicity: The representation is straightforward and easy to understand. You can directly map coefficients to their respective exponents using indices.

2. Efficient Access: Accessing the coefficient of any term with a given exponent is fast,
as it involves direct index access (i.e., O(1) time complexity).

3. Ease of Arithmetic Operations: Basic polynomial operations like addition and subtraction are easy to implement, since polynomials are stored in a single array and coefficients can be matched directly by their indices (exponents).

Disadvantages of Representing Single Variable Polynomials Using 1-D Array:

1. Space Inefficiency: If the polynomial has large gaps between exponents (e.g., x^100 + x^5 + 1), a lot of space is wasted by storing zero coefficients for the missing exponents. This leads to inefficient memory usage.

2. Fixed Size: The size of the array is fixed based on the highest degree of the polynomial.
For dynamic polynomial manipulation (e.g., multiplication or division), adjusting the
array size may be required.

3. Limited Flexibility: Insertion or deletion of terms (e.g., during dynamic polynomial operations) can be difficult since the array size must be fixed in advance.

2.2 Polynomial Addition

Algorithm to find the addition of two single variable polynomials using arrays:


Step 1: Start.
Step 2: Input two arrays P1 and P2, where:
    Step 2.1: Each element in the arrays is a pair: (coefficient, exponent).
    Step 2.2: The arrays are sorted by exponent in descending order.
Step 3: Initialize two pointers, i = 0 (for P1) and j = 0 (for P2),
        and an empty result array result.
Step 4: While both P1 and P2 have terms left:
    Step 4.1: If P1[i].exponent > P2[j].exponent:
        Step 4.1.1: Append P1[i] to result.
        Step 4.1.2: Increment i.
    Step 4.2: Else if P1[i].exponent < P2[j].exponent:
        Step 4.2.1: Append P2[j] to result.
        Step 4.2.2: Increment j.
    Step 4.3: Else (the exponents are equal):
        Step 4.3.1: Add the coefficients of P1[i] and P2[j].
        Step 4.3.2: If the sum is non-zero, append the term
                    (coefficient sum, exponent) to result.
        Step 4.3.3: Increment both i and j.
Step 5: Append any remaining terms from P1 or P2 to result.
Step 6: Output the result array.
Step 7: Stop.
Example:

Let P1 = [(5, 3), (4, 2), (2, 0)] represent 5x^3 + 4x^2 + 2, and P2 = [(3, 3), (2, 1)] represent 3x^3 + 2x.

Step-by-step:

1. Compare (5, 3) and (3, 3): Exponents are equal → Add coefficients → Result: (5+3, 3) = (8, 3).

2. Compare (4, 2) and (2, 1): Exponent 2 > 1 → Append (4, 2) to result.

3. Compare remaining (2, 0) and (2, 1): Exponent 1 > 0 → Append (2, 1) to result.

4. Append remaining term (2, 0) to result.

Final result: [(8, 3), (4, 2), (2, 1), (2, 0)], representing 8x^3 + 4x^2 + 2x + 2.

Time Complexity:

• The time complexity is O(n + m), where n is the number of terms in P1 and m is the number of terms in P2.

• This is because we iterate through both polynomials exactly once, merging them.
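A hedged C sketch of this merge-style addition over (coefficient, exponent) pairs sorted by descending exponent (the Term struct and function names are illustrative):

#include <stdio.h>

typedef struct { int coeff; int expo; } Term;

/* Add two polynomials whose terms are sorted by exponent in descending order.
   Returns the number of terms written to result. */
int addPoly(const Term P1[], int n, const Term P2[], int m, Term result[]) {
    int i = 0, j = 0, k = 0;
    while (i < n && j < m) {
        if (P1[i].expo > P2[j].expo)
            result[k++] = P1[i++];
        else if (P1[i].expo < P2[j].expo)
            result[k++] = P2[j++];
        else {                                   /* equal exponents: add coefficients */
            int sum = P1[i].coeff + P2[j].coeff;
            if (sum != 0) {
                result[k].coeff = sum;
                result[k].expo = P1[i].expo;
                k++;
            }
            i++; j++;
        }
    }
    while (i < n) result[k++] = P1[i++];         /* copy any remaining terms */
    while (j < m) result[k++] = P2[j++];
    return k;
}

int main(void) {
    Term P1[] = {{5, 3}, {4, 2}, {2, 0}};        /* 5x^3 + 4x^2 + 2 */
    Term P2[] = {{3, 3}, {2, 1}};                /* 3x^3 + 2x       */
    Term R[8];
    int k = addPoly(P1, 3, P2, 2, R);
    for (int t = 0; t < k; t++)
        printf("(%d, %d) ", R[t].coeff, R[t].expo);   /* (8, 3) (4, 2) (2, 1) (2, 0) */
    return 0;
}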

2.3 Polynomial Multiplication



Algorithm to Find the Multiplication of Two Single Variable Polynomials Using Array:

Step 1: Start.
Step 2: Input the two polynomials P1[] and P2[], each containing pairs of
        coefficients and exponents.
Step 3: Initialize an empty array result[] to store the result of the
        multiplication.
Step 4: For each term in P1[] (with index i):
    Step 4.1: For each term in P2[] (with index j):
        Step 4.1.1: Multiply the coefficients:
                    coeff = P1[i].coefficient * P2[j].coefficient.
        Step 4.1.2: Add the exponents:
                    expo = P1[i].exponent + P2[j].exponent.
        Step 4.1.3: Check if a term with the same exponent already exists
                    in result[]:
                    • If yes, add the new coefficient to the existing term.
                    • If no, add a new term {coeff, expo} to result[].
Step 5: Combine like terms (if necessary).
Step 6: Output the result array result[].
Step 7: Stop.

Explanation:

1. The algorithm starts by taking two input polynomials represented as arrays of terms
(each term consists of a coefficient and an exponent).

2. It then iterates over each term in the first polynomial (P1[]), multiplying it with every
term in the second polynomial (P2[]).

3. The result of multiplying two terms involves multiplying their coefficients and adding
their exponents.

4. If a term with the same exponent already exists in the result, the coefficients are added
together (to combine like terms).

5. After processing all terms, the result is output as a new polynomial.

Time Complexity:

• The outer loop iterates over all terms in the first polynomial P1[], and the inner loop iterates over all terms in the second polynomial P2[].

• Time Complexity: O(n × m), where:

o n is the number of terms in P1[],

o m is the number of terms in P2[].

Example:

Let:

• P1(x) = 3x^2 + 2x + 1

• P2(x) = x + 1

The result of multiplying these two polynomials would be:

P(x) = (3x^2 + 2x + 1) × (x + 1) = 3x^3 + 5x^2 + 3x + 1

The algorithm would output the polynomial as:

result[] = { {3, 3}, {5, 2}, {3, 1}, {1, 0} }
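A compact C sketch of the same nested-loop multiplication, reusing the Term struct from the addition sketch above (the function name is illustrative):

/* Multiply two polynomials stored as (coefficient, exponent) terms.
   Combines like terms by searching result[] for an existing exponent.
   Returns the number of terms written to result. */
int mulPoly(const Term P1[], int n, const Term P2[], int m, Term result[]) {
    int k = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            int coeff = P1[i].coeff * P2[j].coeff;   /* multiply coefficients */
            int expo  = P1[i].expo  + P2[j].expo;    /* add exponents         */
            int found = 0;
            for (int t = 0; t < k; t++) {            /* combine like terms    */
                if (result[t].expo == expo) {
                    result[t].coeff += coeff;
                    found = 1;
                    break;
                }
            }
            if (!found) {
                result[k].coeff = coeff;
                result[k].expo = expo;
                k++;
            }
        }
    }
    return k;
}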

3. Sparse Matrix

A sparse matrix is a matrix where most of the elements are zero.

For example, if the matrix is of size 100 × 100 and only 10 elements are non-zero, accessing those 10 elements may still require scanning all 10,000 cells, and memory must be allocated for the entire matrix (e.g., 100 × 100 × 2 = 20,000 bytes at 2 bytes per element), even though everything other than the 10 non-zero entries is zero.

Hence a sparse matrix representation stores only the non-zero elements, along with their row and column indices.

3.1 Sparse matrix representation using array

A 2D array in triplet form is used to represent a sparse matrix, where each non-zero element is described by three values:

• Row: index of the row where the non-zero element is located

• Column: index of the column where the non-zero element is located

• Value: value of the non-zero element located at index (row, column)

Time Complexity: O(nm), where n is the number of rows in the sparse matrix, and m is the
number of columns in the sparse matrix.

Auxiliary Space: O(nm), where n is the number of rows in the sparse matrix, and m is the
number of columns in the sparse matrix.
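A minimal C sketch of building the triplet representation by scanning a small matrix (the matrix contents and sizes are illustrative):

#include <stdio.h>

#define ROWS 4
#define COLS 5

int main(void) {
    int A[ROWS][COLS] = {
        {0, 0, 3, 0, 4},
        {0, 0, 5, 7, 0},
        {0, 0, 0, 0, 0},
        {0, 2, 6, 0, 0}
    };

    int triplet[ROWS * COLS][3];       /* each entry: {row, column, value} */
    int k = 0;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            if (A[i][j] != 0) {        /* keep only the non-zero elements */
                triplet[k][0] = i;
                triplet[k][1] = j;
                triplet[k][2] = A[i][j];
                k++;
            }

    printf("Row Col Val\n");
    for (int t = 0; t < k; t++)
        printf("%3d %3d %3d\n", triplet[t][0], triplet[t][1], triplet[t][2]);
    return 0;
}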

3.2 Sparse Matrix Addition

This algorithm adds two sparse matrices, represented using triplet representation (each non-
zero element is stored as a triplet of row index, column index, and value).

Assume two matrices A and B of size m×n are given in the form of a triplet representation:

• Each matrix is represented by an array of triplets, where each triplet is {row, column, value}.

• The matrices are already sorted in row-major order (first by row index, then by column index).

Algorithm to Perform Sparse Matrix Addition

Step 1: Start.
Step 2: Input sparse matrices A[] and B[] (both stored as triplets).
Step 3: Initialize an empty result matrix C[].
Step 4: Initialize i = 0, j = 0 (pointers for A and B).
Step 5: While i < length(A) and j < length(B), repeat steps 5.1 to 5.3:
    Step 5.1: If A[i].row < B[j].row OR (A[i].row == B[j].row AND
              A[i].column < B[j].column):
        Step 5.1.1: Add A[i] to result C[].
        Step 5.1.2: Increment i by 1.
    Step 5.2: Else if A[i].row > B[j].row OR (A[i].row == B[j].row AND
              A[i].column > B[j].column):
        Step 5.2.1: Add B[j] to result C[].
        Step 5.2.2: Increment j by 1.
    Step 5.3: Else (A[i].row == B[j].row AND A[i].column == B[j].column):
        Step 5.3.1: Add the values of A[i] and B[j].
        Step 5.3.2: If the sum is not zero, add the result as a new
                    triplet to C[].
        Step 5.3.3: Increment both i and j by 1.
Step 6: While i < length(A), add the remaining elements of A[] to C[].
Step 7: While j < length(B), add the remaining elements of B[] to C[].
Step 8: Output the result matrix C[].
Step 9: Stop.

Explanation:

1. The two input sparse matrices A[] and B[] are traversed using two pointers i and j.

2. For each triplet, the algorithm compares the row and column indices to determine which
element (from A[] or B[]) to add to the result matrix C[].

3. If the indices match, the corresponding values are added, and if the sum is non-zero, it
is stored in C[].

4. After one matrix is completely traversed, the remaining elements of the other matrix
are added directly to the result.

Time Complexity:

• The algorithm iterates through both matrices in parallel, so the time complexity is O(k1 + k2), where:

o k1 is the number of non-zero elements in matrix A[].

o k2 is the number of non-zero elements in matrix B[].

This ensures that the addition is performed efficiently by skipping over zero elements, only
focusing on non-zero entries.
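A hedged C sketch of this merge over triplets (the Triplet struct and function name are illustrative):

typedef struct { int row, col, val; } Triplet;

/* Add two sparse matrices given as triplet arrays sorted in row-major order.
   Returns the number of triplets written to C. */
int addSparse(const Triplet A[], int ka, const Triplet B[], int kb, Triplet C[]) {
    int i = 0, j = 0, k = 0;
    while (i < ka && j < kb) {
        if (A[i].row < B[j].row ||
            (A[i].row == B[j].row && A[i].col < B[j].col)) {
            C[k++] = A[i++];                      /* A's element comes first   */
        } else if (B[j].row < A[i].row ||
                   (A[i].row == B[j].row && B[j].col < A[i].col)) {
            C[k++] = B[j++];                      /* B's element comes first   */
        } else {                                  /* same position: add values */
            int sum = A[i].val + B[j].val;
            if (sum != 0) {
                C[k].row = A[i].row;
                C[k].col = A[i].col;
                C[k].val = sum;
                k++;
            }
            i++; j++;
        }
    }
    while (i < ka) C[k++] = A[i++];               /* copy remaining triplets of A */
    while (j < kb) C[k++] = B[j++];               /* copy remaining triplets of B */
    return k;
}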

3.3 Simple Transpose of Sparse Matrix

Algorithm to Find the Simple Transpose of a Sparse Matrix (Using Triplet Representation)

The simple transpose of a sparse matrix involves swapping the rows and columns of the non-
zero elements and then arranging them in row-major order.

Assume that the sparse matrix is stored in triplet form, where each element is represented as
{row, column, value}.

Step 1: Start.
Step 2: Input sparse matrix A[] in triplet form (with m rows, n columns,
        and k non-zero elements).
Step 3: Initialize an empty result matrix T[] for the transpose.
Step 4: For each element A[i] in A (where i ranges from 1 to k):
    Step 4.1: Swap A[i].row with A[i].column.
    Step 4.2: Insert the transposed element into result matrix T[].
Step 5: Sort the matrix T[] by row (now the original columns) and by
        column (now the original rows).
Step 6: Output the transposed matrix T[].
Step 7: Stop.

Explanation:

1. Swapping Rows and Columns: For each non-zero element in the matrix, the row index
and column index are swapped, converting the original sparse matrix into its transposed
form.

2. Sorting: After the swap, the matrix needs to be sorted by row (original column index)
and column (original row index) to maintain the proper row-major order.

Time Complexity:

1. Swapping rows and columns: This operation takes O(k), where k is the number of
non-zero elements.

2. Sorting: Sorting the matrix T[] takes O(k log k) time, where k is the number of non-
zero elements.

Thus, the overall time complexity is O(k log k).

This approach is simple but less efficient compared to the fast transpose method, which avoids
the need for sorting.
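A short C sketch of the simple transpose, reusing the Triplet struct from the addition sketch above and the standard qsort for the sorting step:

#include <stdlib.h>

static int cmpTriplet(const void *a, const void *b) {
    const Triplet *x = (const Triplet *)a, *y = (const Triplet *)b;
    if (x->row != y->row) return x->row - y->row;   /* order by row, then column */
    return x->col - y->col;
}

/* Simple transpose: swap row/column of every triplet, then sort the result
   into row-major order. O(k log k) because of the sort. */
void simpleTranspose(const Triplet A[], int k, Triplet T[]) {
    for (int i = 0; i < k; i++) {
        T[i].row = A[i].col;    /* swap the indices */
        T[i].col = A[i].row;
        T[i].val = A[i].val;
    }
    qsort(T, k, sizeof(Triplet), cmpTriplet);
}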

3.4 Fast Transpose of Sparse Matrix

Algorithm to Find Fast Transpose of Sparse Matrix

The Fast Transpose of a sparse matrix improves upon the simple transpose by avoiding the
need to sort the elements after transposing them. It efficiently computes the transposed
positions using the row counts and cumulative positions of elements.

Given a sparse matrix in triplet representation, where each triplet represents {row, column, value}, the fast transpose algorithm rearranges the matrix in O(k + n) time, where k is the number of non-zero elements and n is the number of columns.

Step 1: Start.
Step 2: Input sparse matrix A[] with m rows, n columns, and k non-zero
        elements.
Step 3: Initialize result matrix T[] to store the transposed matrix.
Step 4: Create an array row_count[] of size n (the number of columns of A)
        to store the count of non-zero elements in each column of A.
Step 5: Initialize an array position[] of size n to store the starting
        position of each column in the transposed matrix T[].
Step 6: Count the number of non-zero elements in each column of A:
    Step 6.1: For each element A[i] in A:
        Step 6.1.1: row_count[A[i].column]++
Step 7: Compute the starting position for each column in the transposed
        matrix:
    Step 7.1: Set position[0] = 0.
    Step 7.2: For each column j from 1 to n-1:
        Step 7.2.1: position[j] = position[j-1] + row_count[j-1]
Step 8: Place the transposed elements in the correct positions:
    Step 8.1: For each element A[i] in A:
        Step 8.1.1: Find where the element should be placed using the
                    position[] array.
        Step 8.1.2: Place the element (with row and column swapped) in
                    T[] at index position[A[i].column].
        Step 8.1.3: Increment position[A[i].column].
Step 9: Output the transposed matrix T[].
Step 10: Stop.

Explanation:

1. The counting pass (Step 6) finds how many non-zero elements are in each column of matrix A (these will become the rows of the transposed matrix).

2. The position pass (Step 7) calculates the starting index for each column in the transposed matrix using a cumulative sum of the counts from row_count[].

3. The placement pass (Step 8) places each element of A in its correct position in the transposed matrix T[], based on the position[] array, which tells where to place the next element from each column.

Time Complexity:

• Counting non-zero elements per column (Step 6): O(k), where k is the number of non-zero elements.

• Calculating starting positions (Step 7): O(n), where n is the number of columns.

• Placing elements in the result matrix (Step 8): O(k).

Thus, the total time complexity is O(k + n), which is optimal compared to the simple
transpose's O(k log k) sorting step. This makes Fast Transpose more efficient for large sparse
matrices.
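A C sketch of the fast transpose over the same Triplet struct (MAX_COLS is an assumed upper bound on the number of columns):

#define MAX_COLS 100   /* assumed limit; size the arrays as needed */

/* Fast transpose: count elements per column, compute starting positions,
   then place each element directly. Runs in O(k + n) time. */
void fastTranspose(const Triplet A[], int k, int n, Triplet T[]) {
    int row_count[MAX_COLS] = {0};
    int position[MAX_COLS];

    for (int i = 0; i < k; i++)              /* Step 6: count per column   */
        row_count[A[i].col]++;

    position[0] = 0;                         /* Step 7: starting positions */
    for (int j = 1; j < n; j++)
        position[j] = position[j - 1] + row_count[j - 1];

    for (int i = 0; i < k; i++) {            /* Step 8: place elements     */
        int p = position[A[i].col]++;
        T[p].row = A[i].col;                 /* row and column are swapped */
        T[p].col = A[i].row;
        T[p].val = A[i].val;
    }
}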

3.5 Time and Space Tradeoff

The time and space tradeoff refers to a situation where you can reduce the time complexity
of an algorithm by using more space (memory), or reduce the space complexity by allowing
the algorithm to take more time. This tradeoff is a common consideration in algorithm design
and optimization.

Key Concepts:

• Time Efficiency: How fast an algorithm runs (time complexity).

• Space Efficiency: How much memory an algorithm requires (space complexity).

Example:

1. Using Extra Space to Save Time:

o Hashing: In searching problems, you can use a hash table (which requires extra space) to find elements in O(1) time, instead of using a linear search, which takes O(n) time. A small sketch of this idea follows the list below.

2. Using Less Space but More Time:

o In-Place Sorting Algorithms: Algorithms like bubble sort or selection sort use little extra memory but have a higher time complexity (O(n²)). On the other hand, more efficient sorting algorithms like merge sort require extra space but run faster (O(n log n)).
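A minimal sketch of trading space for time, using a direct-address lookup table as a simplified stand-in for a hash table (assumes keys are small non-negative integers):

#include <stdbool.h>
#include <stdio.h>

#define MAX_KEY 1000   /* assumed upper bound on key values */

int main(void) {
    int data[] = {12, 7, 430, 55, 7, 999};
    int n = 6;

    /* Extra space: one flag per possible key value. */
    bool seen[MAX_KEY + 1] = {false};
    for (int i = 0; i < n; i++)
        seen[data[i]] = true;

    /* Each membership query is now O(1) instead of an O(n) linear scan. */
    printf("contains 55?  %s\n", seen[55] ? "yes" : "no");
    printf("contains 100? %s\n", seen[100] ? "yes" : "no");
    return 0;
}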

Practical Consideration:

• If memory is limited, you may need to choose an algorithm that uses less space, even if it takes longer to execute.

• If speed is critical and memory is abundant, you might choose an algorithm that uses more space to achieve faster execution times.
