
UNIT-I

DATA STRUCTURES-Algorithmic Complexities, Recursion


Algorithms, Searching & Sorting Techniques

Preliminaries of algorithm:

Algorithms are well-defined procedures for solving problems. In computing, algorithms are essential because they serve as the systematic procedures that computers require. A good algorithm is like using the right tool in a workshop: it does the job with the right amount of effort, building up a solution in small consecutive steps.

One of the most common tools for defining algorithms is pseudocode. Pseudocode is an English-like representation of the algorithm's logic: part English and part structured code. The English part provides a relaxed syntax that describes what must be done without showing unnecessary details such as error messages. The code part consists of an extended version of the basic algorithmic constructs: sequence, selection and iteration. Three reasons for using formal algorithms are efficiency, abstraction and reusability.

Algorithm Analysis and Complexity

To analyze an algorithm is to determine the amount of resources (such as time and storage) necessary to execute it. Most algorithms are designed to work with inputs of arbitrary length. Usually the efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity).

Time complexity
The time complexity of an algorithm quantifies the amount of time taken by the algorithm to run as a function of the size (length) of the input. The time complexity of an algorithm is commonly expressed using Big-O notation, which excludes coefficients and lower-order terms.
Space Complexity

The amount of computer memory required during program execution, as a function of the input size.

Data Structures
A data structure is a collection of organized data elements that are related to each other.
Data structures can be classified into two types:

Linear data structures:


In linear data structures, the elements are logically organized in a sequential order and are stored in contiguous locations in memory.
Examples of linear data structures are arrays, linked lists, stacks and queues.
An array stores data in consecutive memory locations and is a good building block for implementing simple data structures. Arrays are effective for random access to a fixed amount of data, and can also be used to build other data structures such as stacks and queues. Stacks and queues are known as linear data structures because their data items are arranged in a linear sequence.

10 32 45 54 60

a[1] a[2] a[3] a[4] a[5]

Non-linear data structures:


In non-linear data structures, the elements are logically organized in a non-linear order. The elements need not be stored in contiguous locations in memory.
Examples of non-linear data structures are trees and graphs.

Fig: Tree (root node A with children B and C; D, E, F and G are leaves)

        A
      /   \
     B     C
    / \   / \
   D   E F   G
Recursion

A user-defined function that calls itself is known as a recursive function, and this technique is known as recursion.

In many programming situations we need to execute the same function several times; in such cases we can use the recursion technique. Generally, recursion is used in smaller functions.

#include <stdio.h>

int fact(int n);              /* function declaration */

int main(void)
{
    int n, m;
    printf("enter n value: ");
    scanf("%d", &n);
    m = fact(n);              /* function called by main */
    printf("factorial of %d is %d\n", n, m);
    return 0;
}

int fact(int n)
{
    if (n <= 1)
        return 1;
    else
        return n * fact(n - 1);   /* function calls itself */
}

Design methodology and implementation of Recursion algorithm


All recursive algorithms have two elements:
one solves the problem (the base case), and
the other reduces the size of the problem.
Let us consider the factorial of a given number (n is non-negative):

1 if (n <= 1)
1.1 return 1;
2 else
2.1 return n * fact(n-1);
 In the above algorithm, statement 1.1 solves a small piece of the problem directly: fact(1) is 1.
 Statement 2.1, on the other hand, reduces the size of the problem by recursively calling fact with n-1. Once the solution to fact(n-1) is known, statement 2.1 provides a solution to the general problem by returning a value to the calling function.

Consider fact(3). Each call reduces n by one until the base case is reached, and the partial results are multiplied on the way back:

fact(3) → 3 * fact(2)
fact(2) → 2 * fact(1)
fact(1) → 1            (base case)
fact(2) = 2 * 1 = 2
fact(3) = 3 * 2 = 6

Then the value 6 is returned to the main function.

 The statement that “solves” the problem is known as the base case (here, if (n <= 1)).
 Every recursive algorithm must have a base case; the rest of the algorithm is known as the general case. In our factorial example the base case returns 1 and the general case is n * fact(n-1).
 So, finally, the following points are needed for designing a recursive algorithm:
o First determine the base case.
o Then determine the general case.
o Combine the base case and the general case into an algorithm.

Linear and Binary recursion:

Linear Recursion:
In linear recursion a function calls itself exactly once each time it is invoked, so the number of calls grows linearly in proportion to the size of the problem.
Example: factorial of a given number using recursion
Factorial (int n)
{
if (n <= 1)
return 1;
else
return n * Factorial(n – 1);
}
Binary Recursion:
In binary recursion a function may call itself twice instead of once as in linear recursion. This is useful in situations such as binary trees and the Fibonacci sequence.
Example: to find the nth Fibonacci number in the series using recursion
fib (int n)
{
if (n <= 1)
return n;
else
return (fib(n-1) + fib(n-2));
}

Recursive Algorithm for Factorial Function

Remember that n! = n*(n-1)*(n-2)*...*2*1, and that 0! = 1. In other words, n! = n * (n-1)!.

The recursive algorithm for the factorial function using pseudocode:

Algorithm fact (int n)


1. if (n less than or equals to 1)
1.1 return 1
2. else
2.1 return (n * fact(n-1));
3. end if
4. end fact.

The C-language representation of the above pseudocode is:

int fact(int n)
{
    if (n <= 1)
        return 1;
    else
        return (n * fact(n - 1));
}

Here fact() is the recursive function

Recursive Algorithm for GCD computation

(Greatest Common Divisor)


We use Euclid's algorithm to determine the GCD of two non-negative integers.
The mathematical form of the Euclidean algorithm for GCD is as follows:

gcd(m, n) = n                  if m mod n = 0
gcd(m, n) = gcd(n, m mod n)    otherwise        (n not equal to 0)

The recursive algorithm for GCD computation using pseudocode:


Algorithm gcd (m, n)
1. if ((m%n) equals to 0)
1.1 return n
2. else
2.1 return gcd(n, m%n)
3. end if
4. end gcd.

The C-language representation of the above pseudocode is:

int gcd(int m, int n)
{
    if ((m % n) == 0)
        return n;
    else
        return gcd(n, m % n);
}

Here gcd() is the recursive function.

Recursive Algorithm for Fibonacci sequence

The Fibonacci sequence can be defined as follows: each number in the series is the sum of the previous two numbers, and the series starts from zero and one.
The first few numbers in the Fibonacci series are
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
The mathematical definition of the Fibonacci series is:

fib(n) = n                        if n = 0 or n = 1
fib(n) = fib(n-1) + fib(n-2)      otherwise

The recursive algorithm for the Fibonacci sequence using pseudocode:
Algorithm fib(n)
1. if (n equals to 0 or n equals to 1)
1.1 return n
2. else
2.1 return fib (n-1) + fib (n-2);
3. end if
4. end fib.
The C-language representation of the above pseudocode is:

int fib(int n)
{
    if (n == 0 || n == 1)
        return n;
    else
        return (fib(n - 1) + fib(n - 2));
}
Here fib() is the recursive function.
This combines results from two different recursive calls. This is sometimes known as "deep" recursion or, in other cases, "divide and conquer".
Let us consider all the recursive calls for fib(5).

Recursive Algorithm for Towers of Hanoi


The Tower of Hanoi is a mathematical puzzle invented by the French mathematician Edouard Lucas in 1883. The game starts with a number of disks stacked in order of size. The number of disks can vary, but there are only three pegs.
The disks are moved from one peg to another subject to the following rules:
1. Only one disk may be moved at a time.
2. A larger disk must never be placed on top of a smaller disk.
3. Only one auxiliary (intermediate) peg may be used between the source and destination pegs.

Source Auxiliary Destination


The recursive algorithm for Towers of Hanoi using pseudocode:
Algorithm towers(int N, char source, char dest, char intermediate)
1. static int step = 0
2. if (N equals to 0)
2.1 print "please enter the number of disks and try again"
2.2 return
3. end if
4. if (N equals to 1)
4.1 print "move from source to destination peg"
4.2 return
5. end if
6. else
6.1 towers(N-1, source, intermediate, dest)
6.2 print "move from source to destination peg"
6.3 towers(N-1, intermediate, dest, source)
6.4 return
7. end else
8. end towers

The C-language representation of the above pseudocode is:

void towers(int N, char source, char dest, char intermediate)
{
    static int step = 0;
    if (N == 0)
    {
        printf("please enter the number of disks and try again\n");
        return;
    }
    if (N == 1)
    {
        printf("\tStep %2d: move disk from peg %c to peg %c\n", ++step, source, dest);
        return;
    }
    else
    {
        towers(N - 1, source, intermediate, dest);
        printf("\tStep %2d: move from %c to %c\n", ++step, source, dest);
        towers(N - 1, intermediate, dest, source);
        return;
    }
}

Here towers() is the recursive function.
For N = 3 the moves are as follows:
Step 1: move disk from Source to Destination
Step 2: move disk from Source to Auxiliary
Step 3: move disk from Destination to Auxiliary
Step 4: move disk from Source to Destination
Step 5: move disk from Auxiliary to Source
Step 6: move disk from Auxiliary to Destination
Step 7: move disk from Source to Destination

Tail Recursion:

Tail recursion is a special case of recursion: if the recursive call is the last executable statement in the algorithm, it is called tail recursion. Tail recursion is so named because the return point of each call is at the end of the algorithm; thus, there are no executable statements to be executed after each call.
A function call is said to be tail recursive if there is nothing to do after the function returns except return its value.
Recursion Algorithm for Factorial
factorial (n)
{
if (n == 0)
return 1;
return n * factorial(n - 1);
}
The above factorial function is not tail-recursive: because its recursive call is not in tail position (it is not the last executable statement in the function), it builds up deferred multiplication operations that must be performed after the final recursive call completes. A compiler or interpreter that treats tail-recursive calls as jumps (GOTO) rather than function calls can, however, execute a tail-recursive version without growing the call stack.
Tail-recursion algorithm for Factorial

factorial(n)
{
return fact(n, 1);
}
fact(n, accumulator)
{
if (n == 0)
return accumulator;
return fact(n - 1, n * accumulator);
}
The inner function fact() calls itself as the last executable statement in the function.

call factorial (3)


replace arguments with (3 1), jump to "fact"
replace arguments with (2 3), jump to "fact"
replace arguments with (1 6), jump to "fact"
replace arguments with (0 6), jump to "fact"
return 6

Significance of Tail-Recursion
The significance of tail recursion is that when making a tail-recursive call, the caller's return position need not be saved on the call stack; when the recursive call returns, it branches directly to the previously saved return position. Therefore, on compilers that support tail-call optimization, tail recursion saves both space and time.

Linear Search

Linear search or sequential search is a method for finding a particular value in a list that consists of checking every one of its elements, one at a time and in sequence, until the desired one is found.

Recursive algorithm for linear search

Algorithm linearsearch(int data[], int length, int val)
1. --length;
2. if (length < 0)
2.1 return -1;
3. end if
4. else if (data[length] == val)
4.1 return (length);
5. end else if
6. else
6.1 return linearsearch(data,length, val);
7. end else
8. end linearsearch

Here linearsearch () is the recursive function.

Analysis for Linear search

For a list with n items, the best case is when the value is equal to the first element of
the list, in which case only one comparison is needed.

The worst case is when the value is not in the list (or occurs only once at the
end of the list), in which case n comparisons are needed.

Binary Search

The binary search algorithm is a method of searching an ordered array for a single element by cutting the array in half with each pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched for, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.
Recursion is used in this algorithm because each pass conceptually creates a new array by cutting the old one in half; the binary search procedure is then called recursively on that new (and smaller) array. Typically the array's size is adjusted by manipulating a beginning and an ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.

Recursive algorithm for Binary search

Algorithm binrecursive(int data[], int val, int first, int last)

1. if (first greater than last)
1.1 return -1
2. end if
3. mid = (first + last) / 2
4. if (data[mid] == val)
4.1 return mid
5. end if
6. if (data[mid] < val)
6.1 return binrecursive(data, val, mid+1, last)
7. else
7.1 return binrecursive(data, val, first, mid-1)
8. end if else
9. end binrecursive

Here binrecursive() is the recursive function.

Fibonacci Search

The Fibonacci search technique is a method of searching a sorted array using a divide-and-conquer algorithm that narrows down possible locations with the help of Fibonacci numbers. Compared to binary search, Fibonacci search examines locations whose addresses have lower dispersion. Therefore, when the elements being searched have non-uniform access memory storage (i.e., the time needed to access a storage location varies depending on the location previously accessed), Fibonacci search has an advantage over binary search in slightly reducing the average time needed to access a storage location.
Recursive algorithm for Fibonacci search
Algorithm fibrecursive(int data[], int length, int val, int nn, int inf, int k)
1. Declare pos
2. Declare fib[] = {0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233,
377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657,
46368, 75025, 121393, 196418, 317811, 514229, 832040, 1346269,
2178309, 3524578, 5702887, 9227465, 14930352, 24157817,
39088169, 63245986, 102334155, 165580141}
3. if (nn not equal to length)
3.1 while (fib[k] < length)
3.1.1 k++
3.1.2 nn equal to length
4. end if
5. if (k == -1)
5.1 return -1
6. end if
7. pos equal to inf + fib[--k]
8. if ((pos >= length) || (val < data[pos]))
8.1 return fibrecursive(data, length, val, nn, inf, k)
9. end if
10. if (val > data[pos])
10.1 return fibrecursive(data, length, val, nn, pos+1, k--)
11. end if
12. if (val == data[pos])
12.1 return pos
13. end if
14. end fibrecursive
Here fibrecursive() is the recursive function.
Sorting Techniques
It is a technique to rearrange the elements of a list in ascending or descending
order, which can be numerical, lexicographical, or any user-defined order.

Ex: Ranking of students is a process of sorting in descending order.

EMCET ranking is an example of sorting with a user-defined order.
EMCET ranking is done with the following priorities:
First priority is the marks obtained in EMCET.
If the marks are the same, the ranking is done by comparing the marks obtained in the Mathematics subject.
If the marks in Mathematics are also the same, then the dates of birth are compared.

Internal Sorting:
If all the data to be sorted can be accommodated in memory at one time, the sorting is called internal sorting.

External Sorting:

External sorting is applied to huge amounts of data that cannot be accommodated in memory all at once. Data on disk or in a file is therefore loaded into memory part by part; each part is sorted separately and stored in an intermediate file, and finally all parts are merged into one single sorted list.

INSERTION SORT

It is a very simple sorting algorithm, in which the sorted array is built one element at a time.
The main idea behind insertion sort is that it inserts each item into its proper place in the final list.

Features:
 Sorts by considering one item at a time.
 Efficient on small sets of data.
 Roughly twice as fast as bubble sort.
 About 40% faster than selection sort.
 No swapping is required.
 It is said to be an online sort because it can continue sorting a list as and when it receives new elements.
 It does not change the relative order of elements with equal keys (it is stable).
 Reduces unnecessary travel through the array.
 Requires a low and constant amount of extra memory space.
 Less efficient for larger lists.

Insertion sort works as follows:

 The array of values to be sorted is divided into two sets: one stores sorted values, the other contains unsorted values.

 The sorting algorithm proceeds until there are no elements left in the unsorted set.

 Suppose there are n elements in the array. Initially the element with index 0 (Lower Bound = 0) is in the sorted set; all the other elements are in the unsorted set.

 The first element of the unsorted partition has array index 1 (if LB = 0).

 During each iteration of the algorithm, the first element of the unsorted set is picked up and inserted into the correct position in the sorted set.

Example:

ALGORITHM:
Insertion_Sort ( A [ ] , N )
Step 1 : Repeat For K = 1 to N – 1
Begin
Step 2 : Set Temp = A [ K ]
Step 3 : Set J = K – 1
Step 4 : Repeat While Temp < A [ J ] AND J >= 0
Begin
Set A [ J + 1 ] = A [ J ]
Set J = J – 1
End While
Step 5 : Set A [ J + 1 ] = Temp
End For
Step 6 : Exit
Complexity of Insertion Sort
Best Case : O ( n )
Average Case : O ( n² )
Worst Case : O ( n² )

SELECTION SORT

It is a sorting algorithm that is independent of the original order of the elements in the array. In pass 1, selecting the element with the smallest value calls for scanning all n elements; thus, n-1 comparisons are required in the first pass. Then the smallest value is swapped with the element in the first position. In pass 2, selecting the second smallest value requires scanning the remaining n-1 elements, and so on. Therefore the total number of comparisons is

(n-1) + (n-2) + … + 2 + 1 = n(n-1)/2 = O(n²)

Selection Sort (Select the smallest and Exchange)


Features:

 The number of swaps is minimized, i.e., one swap per pass.
 Generally used for sorting files with large objects and small keys.
 It is about 60% more efficient than bubble sort and about 40% less efficient than insertion sort.
 It is preferred over bubble sort for a jumbled array, as it requires fewer items to be exchanged.
 It is an internal sort, so the whole list must fit in memory.
 It cannot recognize a sorted list; when new elements are added to the list, the sorting is carried out again from the beginning.

Advantages:

 It is simple and easy to implement.
 It can be used for small data sets.
 It is about 60% more efficient than the bubble sort algorithm.

Disadvantages:

 It is inefficient for large data sets.
 Insertion sort performs better than selection sort and bubble sort.

Example: Sorting of elements

Algorithm:
Selection_Sort ( A [ ] , N )
Step 1 : Repeat For K = 0 to N – 2
Begin
Step 2 : Set POS = K
Step 3 : Repeat For J = K + 1 to N – 1
Begin
Step 4 : If A [ J ] < A [ POS ]
Set POS = J
End For
Step 5 : Swap A [ K ] with A [ POS ]
End For
Step 6 : Exit
Complexity of Selection Sort

Best Case : O ( n² )

Average Case : O ( n² )
Worst Case : O ( n² )

BUBBLE SORT

It is a very simple method that sorts the array elements by repeatedly moving the largest element to the highest index position of the array. In bubble sorting, consecutive adjacent pairs of elements in the array are compared with each other. If the element at the lower index is greater than the element at the higher index, the two elements are interchanged so that the smaller element is placed before the bigger one. This procedure is called bubble sorting because the smaller elements “bubble” to the top of the list.

Features:

 A very primitive algorithm, like linear search, and the least efficient.
 The number of swaps is higher compared with other sorting techniques.
 It is not capable of minimizing travel through the array the way insertion sort does.

Example:
Let us take an array that has the following elements

A[ ] = { 23, 19, 54, 12, 47, 10 };

Pass 1:

 Compare 23 and 19,Since 23 > 19, swapping is done 19,23,54,12,47,10


 Compare 23 and 54, Since 23 < 54, no swapping is done
 Compare 54 and 12,Since 54 > 12, swapping is done 19,23,12,54,47,10
 Compare 54 and 47,Since 54 > 47, swapping is done 19,23,12,47,54,10
 Compare 54 and 10,Since 54 > 10, swapping is done 19,23,12,47,10,54

Pass 2: 19,23,12,47,10,54

 Compare 19 and 23, Since 19 < 23, no swapping is done 19,23,12,47,10,54


 Compare 23 and 12, Since 23 > 12, swapping is done 19,12,23,47,10,54
 Compare 23 and 47,Since 23 < 47, no swapping is done 19,12,23,47,10,54
 Compare 47 and 10,Since 47 > 10, swapping is done 19,12,23,10,47,54
 Compare 47 and 54,Since 47 < 54, no swapping is done 19,12,23,10,47,54

Pass 3: 19,12,23,10,47,54

 Compare 19 and 12,Since 19 > 12, swapping is done 12,19,23,10,47,54


 Compare 19 and 23, Since 19 < 23, no swapping is done 12,19,23,10,47,54
 Compare 23 and 10,Since 23 > 10, swapping is done 12,19,10,23,47,54
 Compare 23 and 47,Since 23 < 47, no swapping is done 12,19,10,23,47,54
 Compare 47 and 54,Since 47 < 54, no swapping is done 12,19,10,23,47,54

Pass 4: 12,19,10,23,47,54

 Compare 12 and 19,Since 12 < 19, no swapping is done 12,19,10,23,47,54


 Compare 19 and 10, Since 19 > 10, swapping is done 12,10,19,23,47,54
 Compare 19 and 23,Since 19 < 23, no swapping is done 12,10,19,23,47,54
 Compare 23 and 47,Since 23 < 47, no swapping is done 12,10,19,23,47,54
 Compare 47 and 54,Since 47 < 54, no swapping is done 12,10,19,23,47,54
Pass 5: 12,10,19,23,47,54

 Compare 12 and 10,Since 12 > 10, swapping is done 10,12,19,23,47,54


 Compare 12 and 19, Since 12 < 19, no swapping is done 10,12,19,23,47,54
 Compare 19 and 23,Since 19 < 23, no swapping is done 10,12,19,23,47,54
 Compare 23 and 47,Since 23 < 47, no swapping is done 10,12,19,23,47,54
 Compare 47 and 54,Since 47 < 54, no swapping is done 10,12,19,23,47,54

ALGORITHM:

Bubble_Sort ( A [ ] , N )
Step 1 : Repeat For P = 1 to N – 1
Begin
Step 2 : Repeat For J = 1 to N – P
Begin
Step 3 : If ( A [ J ] < A [ J – 1 ] )
Swap ( A [ J ] , A [ J – 1 ] )
End For
End For
Step 4 : Exit
Complexity of Bubble Sort

The complexity of a sorting algorithm depends upon the number of comparisons that are made.
Total comparisons in bubble sort: n(n – 1)/2 = (n² – n)/2 ≈ O(n²)

Best Case : O ( n )

Average Case : O ( n² )
Worst Case : O ( n² )

QUICK SORT:

It is a widely used sorting algorithm, developed by C. A. R. Hoare, that makes O(n log n) comparisons in the average case to sort an array of n elements. However, in the worst case the quick sort algorithm has quadratic running time, O(n²). In practice, quick sort is often faster than other O(n log n) algorithms, because an efficient implementation can minimize the probability of requiring quadratic time.
Quick Sort – a recursive process of sorting
The procedure:
The procedure:

The quick sort algorithm works as follows:

1) Select an element, the pivot, from the array elements.
2) Re-arrange the elements in the array in such a way that all elements less than the pivot appear before it and all elements greater than the pivot come after it (equal values can go either way). After such a partitioning, the pivot is placed in its final position. This is called the partition operation.
3) Recursively sort the two sub-arrays thus obtained (one with the sub-list of values less than the pivot, and the other with the values greater than it).

Example:
ALGORITHM:

Step 1: algorithm QuickSort(list)
Step 2: Pre: list != 0
Step 3: Post: list has been sorted into values of ascending order
Step 4: if list.Count = 1 // already sorted
Step 5: return list
end if
Step 6: pivot ← MedianValue(list)
Step 7: for i ← 0 to list.Count - 1
Step 8: if list[i] = pivot
Step 9: equal.Insert(list[i])
end if
Step 10: if list[i] < pivot
Step 11: less.Insert(list[i])
end if
Step 12: if list[i] > pivot
Step 13: greater.Insert(list[i])
end if
end for
Step 14: return Concatenate(QuickSort(less), equal, QuickSort(greater))
Step 15: end Quicksort
Complexity of Quick Sort
Best Case : O (n log n)

Average Case : O (n log n)

Worst Case : O ( n² )

MERGE SORT:

It is a sorting algorithm that uses the divide, conquer and combine algorithmic
paradigm. Where,
 Divide means partitioning the n-element array to be sorted into two sub-
arrays of n/2 elements in each sub-array.(If A is an array containing zero
or one element, then it is already sorted. However, if there are more
elements in the array, divide A into two sub-arrays, A1 and A2, each
containing about half of the elements of A).
 Conquer means sorting the two sub-arrays recursively using merge sort.
 Combine means merging the two sorted sub-arrays of size n/2 each to
produce the sorted array of n elements.
 Merge sort sorts a given set of values by combining two sorted arrays into one larger sorted array.
 A small list takes fewer steps to sort than a large list.
 Fewer steps are required to construct a sorted list from two sorted lists than from two unsorted lists.
 You only have to traverse each list once if they're already sorted.

MERGE SORT (DIVIDE AND CONQUER)

Example: Sort the array given below using the merge sort.

39 9 81 45 90 27 72 18
ALGORITHM:

Step 1: algorithm Mergesort(list)
Step 2: Pre: list != 0
Step 3: Post: list has been sorted into values of ascending order
Step 4: if list.Count = 1 // already sorted
Step 5: return list
end if
Step 6: m ← list.Count / 2
Step 7: left ← list(m)
Step 8: right ← list(list.Count - m)
Step 9: for i ← 0 to left.Count - 1
Step 10: left[i] ← list[i]
end for
Step 11: for i ← 0 to right.Count - 1
Step 12: right[i] ← list[m + i]
end for
Step 13: left ← Mergesort(left)
Step 14: right ← Mergesort(right)
Step 15: return MergeOrdered(left, right)
Step 16: end Mergesort
Complexity of Merge Sort

Worst case - O (n logn)

Best case - O (n logn)

Average case - O (n logn)

Radix sort:

Each key is first figuratively dropped into one level of buckets corresponding to
the value of the rightmost digit. Each bucket preserves the original order of the keys
as the keys are dropped into the bucket. There is a one-to-one correspondence
between the number of buckets and the number of values that can be represented by
a digit. Then, the process repeats with the next neighboring digit until there are no
more digits to process. In other words:
 Take the least significant digit (or group of bits, both being examples of
radices) of each key.
 Group the keys based on that digit, but otherwise keep the original order
of keys. (This is what makes the LSD radix sort a stable sort).
 Repeat the grouping process with each more significant digit.

The grouping in the second step is usually done using bucket sort or counting sort, which are efficient in this case since there are usually only a small number of digits.
After the last pass the list is sorted: for example, radix-sorting the keys 721, 537, 123, 478, 38, 67, 3 and 9 yields 3, 9, 38, 67, 123, 478, 537, 721.

ALGORITHM:

Step 1: algorithm Radix(list, maxKeySize)
Step 2: Pre: list != 0
Step 3: maxKeySize ≥ 0 and represents the largest key size in the list
Step 4: Post: list has been sorted
Step 5: queues ← Queue[10]
Step 6: indexOfKey ← 1
Step 7: for i ← 0 to maxKeySize - 1
Step 8: foreach item in list
Step 9: queues[GetQueueIndex(item, indexOfKey)].Enqueue(item)
end foreach
Step 10: list ← CollapseQueues(queues)
Step 11: ClearQueues(queues)
Step 12: indexOfKey ← indexOfKey * 10
end for
Step 13: return list
Step 14: end Radix

Complexity of Radix Sort


Worst case - O (nk)
Best case - O (nk)

Average case – O (nk)
