DAAFile

1. Insertion Sort:
Insertion Sort is a simple sorting algorithm that builds the final sorted array one item at a time. It is much
less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort.
However, insertion sort provides several advantages: it is simple, adaptive, and stable.

Algorithm:
The algorithm works by iteratively building a sorted portion of the array. It starts with an assumption that
the first element (index 0) is already sorted. Then, for each subsequent element, it is compared with the
elements in the sorted portion, and if it is smaller, it is moved to the appropriate position in the sorted
portion.

Start with the second element (index 1) and compare it with the first element.
If the second element is smaller, swap them.
Move to the third element and compare it with the second element, then the first element.
Repeat this process for each element in the array until the entire array is sorted.
The pseudocode for the insertion sort algorithm is as follows:

for i = 1 to length(array) - 1:
    key = array[i]
    j = i - 1
    while j >= 0 and array[j] > key:
        array[j + 1] = array[j]
        j = j - 1
    array[j + 1] = key

Time Complexity:
The worst-case time complexity of insertion sort is O(n^2), which occurs when the input array is in
reverse order. The best-case time complexity is O(n) when the input array is already sorted. On average,
insertion sort has a time complexity of O(n^2).

Space Complexity:
The space complexity of insertion sort is O(1) since it only requires a constant amount of additional
space for the key variable and the loop indices. It is an in-place sorting algorithm.
Program:-

#include<stdio.h>
void insertionsort(int arr[], int n)
{
    int i, j, key;
    for (i = 1; i < n; i++)
    {
        key = arr[i];
        j = i - 1;
        while (j >= 0 && arr[j] > key)
        {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}
void printarray(int arr[], int n)
{
    int i;
    printf("Sorted Array is : ");
    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
}
int main()
{
    int arr[] = {18, 12, 6, 45, 5, 7};
    int n = 6;
    insertionsort(arr, n);
    printarray(arr, n);
    return 0;
}
Output:
2. Selection Sort:
Selection Sort is a simple sorting algorithm that works by repeatedly finding the minimum (or
maximum) element from the unsorted part of the array and putting it at the beginning (or end) of the
sorted portion. The algorithm divides the array into two parts: the sorted part, initially empty, and the
unsorted part, which contains all the elements. In each iteration, the minimum element from the
unsorted part is selected and swapped with the first element of the unsorted part.

Algorithm:

Initial State: The entire array is considered unsorted.


Find the Minimum: Iterate through the unsorted part to find the minimum element.
Swap: Swap the minimum element with the first element of the unsorted part.
Expand Sorted Part: Move the boundary between the sorted and unsorted parts one element to the
right.
Repeat: Repeat steps 2-4 until the entire array is sorted.
The pseudocode for the Selection Sort algorithm is as follows:

for i = 0 to length(array) - 2:
    // Find the minimum element in the unsorted part
    min_index = i
    for j = i + 1 to length(array) - 1:
        if array[j] < array[min_index]:
            min_index = j
    // Swap the minimum element with the first element of the unsorted part
    swap(array[min_index], array[i])

Time Complexity:

The time complexity of Selection Sort is O(n^2), where 'n' is the number of elements in the array.
This is because, in each iteration, the algorithm finds the minimum element in the unsorted part, and
this process is repeated for each element in the array.

Space Complexity:

The space complexity of Selection Sort is O(1), as it only requires a constant amount of additional
space for variables like min_index and i. Selection Sort is an in-place sorting algorithm, meaning it
doesn't require additional memory proportional to the size of the input array.
Program:-

#include<stdio.h>
int main()
{
    int arr[] = {15, 14, 7, 28, 13, 22, 5};
    int n = 7;
    int i, j, position, temp;
    for (i = 0; i < n - 1; i++)
    {
        position = i;
        for (j = i + 1; j < n; j++)
        {
            if (arr[position] > arr[j])
                position = j;
        }
        if (position != i)
        {
            temp = arr[i];
            arr[i] = arr[position];
            arr[position] = temp;
        }
    }
    for (i = 0; i < n; i++)
        printf("%d\n", arr[i]);
    return 0;
}
Output:
3. Merge Sort:

Merge Sort is a popular and efficient sorting algorithm that follows the divide-and-conquer paradigm. It
works by recursively dividing the unsorted array into two halves, sorting each half, and then merging the
sorted halves to produce a fully sorted array. The key step in the algorithm is the merging process, where
two sorted arrays are combined to form a single sorted array.

Algorithm:
Divide: Split the unsorted array into two halves.
Recursively Sort: Apply the merge sort algorithm to each of the two halves.
Merge: Merge the sorted halves to produce a single sorted array. The merging process involves comparing
elements from both halves and placing them in the correct order.
The pseudocode for the Merge Sort algorithm is as follows:
merge_sort(array):
    if length(array) <= 1:
        return array
    // Divide the array into two halves
    middle = length(array) / 2
    left_half = merge_sort(array[0:middle])
    right_half = merge_sort(array[middle:end])
    // Merge the sorted halves
    return merge(left_half, right_half)

merge(left, right):
    result = []
    while left is not empty and right is not empty:
        if left[0] <= right[0]:
            result.append(left[0])
            left = left[1:]
        else:
            result.append(right[0])
            right = right[1:]
    // Append the remaining elements (if any)
    result.extend(left)
    result.extend(right)
    return result
Time Complexity:

The time complexity of Merge Sort is O(n log n), where 'n' is the number of elements in the array. This
makes it more efficient than simple quadratic algorithms like bubble sort or insertion sort, especially for
large datasets. The logarithmic factor comes from the recursive division of the array, and the linear factor is
due to the merging step.

Space Complexity:

The space complexity of Merge Sort is O(n) because it requires additional space to store the two halves
during the recursive calls. This additional space is a drawback when compared to in-place sorting
algorithms, but it provides stability and consistent performance across different inputs.

Code:-
#include<stdio.h>
#include<stdlib.h>
void merge(int arr[], int l, int m, int r)
{
    int i, j, k;
    int n1 = m - l + 1;
    int n2 = r - m;
    int L[n1], R[n2];
    for (i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];
    i = 0;
    j = 0;
    k = l;
    while (i < n1 && j < n2)
    {
        if (L[i] <= R[j])
        {
            arr[k] = L[i];
            i++;
        }
        else
        {
            arr[k] = R[j];
            j++;
        }
        k++;
    }
    while (i < n1)
    {
        arr[k] = L[i];
        i++;
        k++;
    }
    while (j < n2)
    {
        arr[k] = R[j];
        j++;
        k++;
    }
}
void mergeSort(int arr[], int l, int r)
{
    if (l < r)
    {
        int m = l + (r - l) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}
void printArray(int A[], int size)
{
    int i;
    for (i = 0; i < size; i++)
        printf("%d\t", A[i]);
    printf("\n");
}
int main()
{
    int arr[] = {15, 10, 9, 14, 17, 22, 20, 45};
    int arr_size = sizeof(arr) / sizeof(arr[0]);
    printf("Unsorted Array is:\n");
    printArray(arr, arr_size);
    mergeSort(arr, 0, arr_size - 1);
    printf("\nSorted array is:\n");
    printArray(arr, arr_size);
    return 0;
}

Output:-
4. Quick Sort:

Quick Sort is a widely used and efficient sorting algorithm that falls under the category of comparison-based
sorting. It employs a divide-and-conquer strategy to sort an array. The key idea is to partition the array into
smaller segments based on a chosen pivot element, such that elements less than the pivot are on one side,
and elements greater than the pivot are on the other. The process is then applied recursively to the subarrays.
Algorithm:
Choose a Pivot: Select a pivot element from the array. Common choices include the first element, last
element, or a randomly chosen element.
Partitioning: Rearrange the elements in the array such that all elements less than the pivot are on the left, and
all elements greater than the pivot are on the right. The pivot itself is now in its final sorted position.
Recursively Sort Subarrays: Apply the quick sort algorithm recursively to the subarrays formed on the left
and right of the pivot.
The pseudocode for the Quick Sort algorithm is as follows:
quick_sort(array, low, high):
    if low < high:
        // Partition the array and get the pivot index
        pivot_index = partition(array, low, high)
        // Recursively sort the subarrays
        quick_sort(array, low, pivot_index - 1)
        quick_sort(array, pivot_index + 1, high)

partition(array, low, high):
    // Choose the pivot (for example, the last element)
    pivot = array[high]
    // Initialize the index of the smaller element
    i = low - 1
    // Iterate through the array and rearrange elements
    for j = low to high - 1:
        if array[j] <= pivot:
            i = i + 1
            swap(array[i], array[j])
    // Swap the pivot with the element at (i + 1), putting it in its final sorted position
    swap(array[i + 1], array[high])
    // Return the index of the pivot
    return i + 1
Time Complexity:
The average and best-case time complexity of Quick Sort is O(n log n), making it highly efficient. However,
in the worst case (if an unfavorable pivot selection occurs), the time complexity can degrade to O(n^2).
Various strategies, such as random pivot selection, are employed to mitigate the likelihood of encountering
the worst-case scenario.

Space Complexity:
Quick Sort has a space complexity of O(log n) for the recursive call stack in the average and best cases. In
the worst case, the space complexity may be O(n), but this is rare in practice due to the efficient pivot
selection and partitioning steps. Quick Sort is an in-place sorting algorithm.
Code:-
#include<stdio.h>
void swap(int *a, int *b)
{
    int t = *a;
    *a = *b;
    *b = t;
}
int partition(int array[], int low, int high)
{
    int pivot = array[high];
    int i = (low - 1);
    for (int j = low; j < high; j++)
    {
        if (array[j] <= pivot)
        {
            i++;
            swap(&array[i], &array[j]);
        }
    }
    swap(&array[i + 1], &array[high]);
    return (i + 1);
}
void quicksort(int array[], int low, int high)
{
    if (low < high)
    {
        int pi = partition(array, low, high);
        quicksort(array, low, pi - 1);
        quicksort(array, pi + 1, high);
    }
}
void printArray(int array[], int size)
{
    for (int i = 0; i < size; i++)
        printf("%d\t", array[i]);
    printf("\n");
}
int main()
{
    int A[] = {12, 10, 5, 14, 18, 22, 2};
    int n = sizeof(A) / sizeof(A[0]);
    printf("Unsorted array is:\n");
    printArray(A, n);
    quicksort(A, 0, n - 1);
    printf("\nSorted Array is:\n");
    printArray(A, n);
    return 0;
}
Output:-
5. Binary Search and Linear Search
Binary Search:
Binary Search is an efficient search algorithm that works on sorted arrays. It repeatedly divides the search
interval in half, narrowing down the possibilities until the element is found or the interval becomes empty.
The key to binary search is that it can discard half of the remaining elements at each step.

Algorithm:
Compare the target value to the middle element of the array.
If they are equal, the search is successful.
If the target value is less than the middle element, continue the search on the left half of the array.
If the target value is greater, continue the search on the right half of the array.
Repeat the process until the target is found or the interval becomes empty.
Complexity:
Binary Search has a time complexity of O(log n) in the average and worst cases, where 'n' is the number of
elements in the array. It's significantly more efficient than linear search for large sorted datasets.

C Code:

#include <stdio.h>
int binarySearch(int arr[], int low, int high, int target) {
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[mid] == target)
            return mid;
        if (arr[mid] < target)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1;
}
int main() {
    int arr[] = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20};
    int n = sizeof(arr) / sizeof(arr[0]);
    int target = 14;
    int result = binarySearch(arr, 0, n - 1, target);
    if (result == -1)
        printf("Element %d is not present in the array\n", target);
    else
        printf("Element %d is present at index %d\n", target, result);
    return 0;
}
Output:-

Linear Search:

Linear Search is a simple search algorithm that sequentially checks each element in a list until a match is
found or the entire list has been searched. It is suitable for small datasets or unordered lists.

Algorithm:
Start from the beginning of the array.
Compare the target value with each element in the array.
If a match is found, return the index.
If the end of the array is reached without finding the target, return -1.
Complexity:
Linear Search has a time complexity of O(n), where 'n' is the number of elements in the array: in the
worst case it examines every element of the array exactly once.
Code:

#include <stdio.h>
int linearSearch(int arr[], int n, int target) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == target)
            return i; // Target found, return index
    }
    return -1; // Target not present in the array
}
int main() {
    int arr[] = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20};
    int n = sizeof(arr) / sizeof(arr[0]);
    int target = 14;
    int result = linearSearch(arr, n, target);
    if (result == -1)
        printf("Element %d is not present in the array\n", target);
    else
        printf("Element %d is present at index %d\n", target, result);
    return 0;
}
Output:-
6. Kruskal's Algorithm:

Kruskal's Algorithm is a greedy algorithm used to find the minimum spanning tree (MST) of a connected,
undirected graph. The minimum spanning tree is a subset of the edges of the graph that connects all the
vertices with the minimum possible total edge weight and without forming any cycles. Kruskal's algorithm
starts with an empty graph and successively adds the smallest edge that does not form a cycle with the edges
already chosen.
Algorithm:
Sort Edges: Sort all the edges in the graph in non-decreasing order of their weights.
Initialize MST: Create an empty graph to represent the minimum spanning tree.
Iterate through Edges: Starting with the smallest edge, add each edge to the MST unless it forms a cycle.
Cycle Detection: To check for cycles, use a data structure such as a disjoint-set (union-find) to keep track of
connected components.
Repeat: Continue adding edges until the MST has (V-1) edges, where V is the number of vertices in the
graph.

Complexity:
The time complexity of Kruskal's Algorithm is O(E log E), where E is the number of edges in the graph. The
dominant factor is the sorting of the edges. The algorithm is efficient for sparse graphs (where E is much
less than V^2).

Code:
#include <stdio.h>
#include <stdlib.h>
struct Edge {
    int src, dest, weight;
};
struct Subset {
    int parent, rank;
};
int find(struct Subset subsets[], int i);
void unionSets(struct Subset subsets[], int x, int y);
int compareEdges(const void* a, const void* b);
void kruskal(struct Edge edges[], int V, int E);
int main() {
    int V = 4;
    int E = 5;
    struct Edge edges[] = {
        {0, 1, 10},
        {0, 2, 6},
        {0, 3, 5},
        {1, 3, 15},
        {2, 3, 4}
    };
    kruskal(edges, V, E);
    return 0;
}
int find(struct Subset subsets[], int i) {
    if (subsets[i].parent != i)
        subsets[i].parent = find(subsets, subsets[i].parent); // path compression
    return subsets[i].parent;
}
void unionSets(struct Subset subsets[], int x, int y) {
    int xroot = find(subsets, x);
    int yroot = find(subsets, y);
    if (subsets[xroot].rank < subsets[yroot].rank)
        subsets[xroot].parent = yroot;
    else if (subsets[xroot].rank > subsets[yroot].rank)
        subsets[yroot].parent = xroot;
    else {
        subsets[yroot].parent = xroot;
        subsets[xroot].rank++;
    }
}
int compareEdges(const void* a, const void* b) {
    return ((struct Edge*)a)->weight - ((struct Edge*)b)->weight;
}
void kruskal(struct Edge edges[], int V, int E) {
    struct Subset* subsets = (struct Subset*)malloc(V * sizeof(struct Subset));
    for (int i = 0; i < V; i++) {
        subsets[i].parent = i;
        subsets[i].rank = 0;
    }
    // Sort edges in non-decreasing order of weight
    qsort(edges, E, sizeof(edges[0]), compareEdges);
    struct Edge result[V - 1];
    int count = 0;
    // Take the smallest remaining edge each time, skipping edges that
    // would close a cycle
    for (int e = 0; count < V - 1 && e < E; e++) {
        struct Edge next_edge = edges[e];
        int x = find(subsets, next_edge.src);
        int y = find(subsets, next_edge.dest);
        if (x != y) {
            result[count++] = next_edge;
            unionSets(subsets, x, y);
        }
    }
    printf("Edges in the Minimum Spanning Tree:\n");
    for (int i = 0; i < count; i++)
        printf("(%d, %d) weight=%d\n", result[i].src, result[i].dest, result[i].weight);
    free(subsets);
}
Output:-
7. 0/1 Knapsack Problem:

The 0/1 Knapsack Problem is a classic optimization problem in computer science and mathematics. In this
problem, you are given a set of items, each with a weight and a value, and a knapsack with a maximum
weight capacity. The goal is to determine the most valuable combination of items to include in the knapsack
without exceeding its weight capacity. The "0/1" in the name indicates that an item can either be included (1)
or excluded (0), not partially included.

Algorithm:
The dynamic programming approach for solving the 0/1 Knapsack Problem involves creating a table to store
intermediate results. The table is filled iteratively, considering the optimal solution for each subproblem.
Create a table (2D array) to store the maximum value that can be obtained for different combinations of
items and weights.
Initialize the table with zeros.
Iterate through each item and each possible weight capacity, filling in the table based on the following
recurrence relation:
table[i][w] = max(value[i] + table[i-1][w-weight[i]], table[i-1][w])
where i is the current item index, w is the current weight capacity, value[i] is the value of the current item,
weight[i] is the weight of the current item, and table[i-1][w] represents the maximum value obtained without
including the current item.
The final cell of the table, table[n][W], where n is the total number of items and W is the maximum weight
capacity of the knapsack, contains the maximum value that can be obtained.
To reconstruct the solution, trace back the table from the bottom-right corner to the top-left corner,
considering the choices made for each item.

Complexity:

The time complexity of the dynamic programming solution for the 0/1 Knapsack Problem is O(nW), where
'n' is the number of items and 'W' is the maximum weight capacity of the knapsack. The space complexity is
also O(nW) due to the need to store intermediate results in the table.
Code:

#include <stdio.h>
int max(int a, int b) {
    return (a > b) ? a : b;
}
int knapsack(int W, int weight[], int value[], int n) {
    int i, w;
    int table[n + 1][W + 1];
    for (i = 0; i <= n; i++) {
        for (w = 0; w <= W; w++) {
            if (i == 0 || w == 0)
                table[i][w] = 0;
            else if (weight[i - 1] <= w)
                table[i][w] = max(value[i - 1] + table[i - 1][w - weight[i - 1]], table[i - 1][w]);
            else
                table[i][w] = table[i - 1][w];
        }
    }
    return table[n][W];
}
int main() {
    int value[] = {60, 100, 120};
    int weight[] = {10, 20, 30};
    int W = 50; // Knapsack capacity
    int n = sizeof(value) / sizeof(value[0]);
    printf("Maximum value that can be obtained = %d\n", knapsack(W, weight, value, n));
    return 0;
}
Output:-
8. Traveling Salesman Problem (TSP):

The Traveling Salesman Problem is a well-known combinatorial optimization problem. In this problem, a
salesman is given a set of cities, and the task is to find the shortest possible route that visits each city exactly
once and returns to the original city. The problem is NP-hard, meaning that there is no known polynomial-
time algorithm to solve it optimally, and it is often approached using heuristics and approximation
algorithms.

Algorithm: One common approach to solving the TSP is to use the brute-force method, which involves
generating all possible permutations of the cities and calculating the total distance for each permutation. The
permutation with the minimum total distance represents the optimal tour.
However, the brute-force approach becomes impractical for a large number of cities because the number of
permutations grows factorially with the number of cities. For more efficient solutions, heuristic algorithms
like the nearest neighbor algorithm, the minimum spanning tree approach, or dynamic programming (with
memoization) are often employed.
Complexity:
The time complexity of the brute-force approach to the TSP is O(n!), where 'n' is the number of cities. This
makes it impractical for large instances of the problem. Heuristic approaches, while not guaranteed to find
the optimal solution, can significantly reduce the computation time to a more manageable level.
Code (Brute-force Approach):
#include <stdio.h>
#include <limits.h>
#include <stdbool.h>
#define V 4
int tsp_bruteforce(int graph[V][V], bool visited[], int n, int current);
void tsp(int graph[V][V]);
int main() {
    int graph[V][V] = {
        {0, 10, 15, 20},
        {10, 0, 35, 25},
        {15, 35, 0, 30},
        {20, 25, 30, 0}
    };
    tsp(graph);
    return 0;
}
/* Returns the cost of the cheapest route that visits the n remaining
   cities (counting the current one) and then returns to city 0. */
int tsp_bruteforce(int graph[V][V], bool visited[], int n, int current) {
    if (n == 1)
        return graph[current][0]; /* all cities visited; return to start */
    int min_path_cost = INT_MAX;
    for (int i = 0; i < V; i++) {
        if (!visited[i]) {
            visited[i] = true;
            int cost = graph[current][i]
                     + tsp_bruteforce(graph, visited, n - 1, i);
            visited[i] = false;
            if (cost < min_path_cost)
                min_path_cost = cost;
        }
    }
    return min_path_cost;
}
void tsp(int graph[V][V]) {
    bool visited[V] = {false};
    visited[0] = true;
    printf("Minimum cost of the TSP tour: %d\n",
           tsp_bruteforce(graph, visited, V, 0));
}

Output:-
9. N-Queens Problem:

The N-Queens problem is a classical combinatorial problem that involves placing N chess queens on an
N×N chessboard so that no two queens threaten each other. In other words, no two queens should be in the
same row, column, or diagonal.

Algorithm:
The most common algorithm to solve the N-Queens problem is the backtracking algorithm. The basic idea is
to place queens on the board one by one, making sure that no two queens threaten each other. If, at any
point, it is not possible to place a queen without violating the constraint, the algorithm backtracks to the
previous queen and explores other possibilities.
Start with an empty chessboard.
Begin with the first row and place a queen in the first column.
Move to the next row and place a queen in a column where it is safe from attacks.
Continue this process until all queens are placed or no safe positions are left.
If all queens are placed, a solution is found. Otherwise, backtrack to the previous queen and explore other
possibilities.
Complexity:
The time complexity of the N-Queens problem is O(N!), where N is the number of queens. This is because
there are N possibilities for the first queen, (N-1) for the second, (N-2) for the third, and so on. The
backtracking algorithm efficiently prunes the search space by avoiding invalid configurations.

Code:
Below is a simple implementation of the N-Queens problem in C. This code finds and prints one solution for
an 8x8 chessboard. You can adjust the value of N to find solutions for larger or smaller chessboards.

#include <stdio.h>
#include <stdbool.h>
#define N 8
void printSolution(int board[N][N]) {
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%2d ", board[i][j]);
        printf("\n");
    }
}
bool isSafe(int board[N][N], int row, int col) {
    int i, j;
    for (i = 0; i < col; i++)          // same row, to the left
        if (board[row][i])
            return false;
    for (i = row, j = col; i >= 0 && j >= 0; i--, j--)   // upper-left diagonal
        if (board[i][j])
            return false;
    for (i = row, j = col; j >= 0 && i < N; i++, j--)    // lower-left diagonal
        if (board[i][j])
            return false;
    return true;
}
bool solveNQUtil(int board[N][N], int col) {
    if (col >= N)
        return true;
    for (int i = 0; i < N; i++) {
        if (isSafe(board, i, col)) {
            board[i][col] = 1;
            if (solveNQUtil(board, col + 1))
                return true;
            board[i][col] = 0; // backtrack
        }
    }
    return false;
}
bool solveNQ() {
    int board[N][N] = {0};
    if (!solveNQUtil(board, 0)) {
        printf("Solution does not exist");
        return false;
    }
    printSolution(board);
    return true;
}
int main() {
    solveNQ();
    return 0;
}
Output:-
