ADA Lab Manual
Program- 1
Iterative Binary Search Algorithm
Binary search requires an ordered list of elements and is applicable only to arrays. A
binary search, or half-interval search, locates the position of an item in a sorted array.
Binary search works by comparing an input value to the middle element of the array. The
comparison determines whether the middle element is equal to, less than, or greater than the input.
When the middle element equals the input, the search stops and typically returns the
position of that element. Otherwise, the comparison determines whether the input lies in the
lower or the upper half of the array, and the search continues there. Binary search thus
halves the number of items to check with each successive iteration, locating the
given item (or determining its absence) in logarithmic time.
Note:
1. It is applicable to arrays, not to linked lists, because the middle element of a linked list
cannot be located directly.
2. The elements in the array must be sorted.
3. Performance is good when searching a large collection of elements, but the advantage over
linear search is small for very few elements.
Iterative Algorithm: An iterative method attempts to solve a problem (for example, finding the
root of an equation or system of equations) by finding successive approximations to the solution
starting from an initial guess. This approach is in contrast to direct methods, which attempt to
solve the problem by a finite sequence of operations, and, in the absence of rounding errors,
would deliver an exact solution.
Example: Find 103 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Algorithm
Algorithm BinarySearch ( a, n, key )
{
low = 1; high = n;
while ( low <= high )
{
mid = (low + high)/2; // middle of the current range
if ( key == a[mid] )
return mid; // found at position mid
else if ( key < a[mid] )
high = mid - 1; // continue in the left half
else
low = mid + 1; // continue in the right half
}
return null; // not found
}//end binary search
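A minimal C sketch of the iterative search above, using the array from the example (0-based indices; the function name is illustrative):

#include <stdio.h>

/* Iterative binary search: returns the index of key in the sorted
   array a[0..n-1], or -1 if the key is absent. */
int binary_search(const int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* avoids overflow of (low+high)/2 */
        if (a[mid] == key)
            return mid;                    /* found */
        else if (a[mid] < key)
            low = mid + 1;                 /* continue in the right half */
        else
            high = mid - 1;                /* continue in the left half */
    }
    return -1;                             /* not found */
}

int main(void)
{
    int a[] = {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114};
    int n = sizeof a / sizeof a[0];
    printf("%d\n", binary_search(a, n, 103));  /* prints -1: 103 is absent */
    return 0;
}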
IMPLEMENTATION:
Program- 2
Recursive Binary Search Algorithms
Recursive Algorithm: A recursive algorithm is an algorithm which calls itself with "smaller (or
simpler)" input values, and which obtains the result for the current input by applying simple
operations to the returned value for the smaller (or simpler) input. More generally if a problem
can be solved utilizing solutions to smaller versions of the same problem, and the smaller versions
reduce to easily solvable cases, then one can use a recursive algorithm to solve that problem. In
general, recursive computer programs require more memory and computation than
iterative algorithms, but they are simpler and, in many cases, a natural way of thinking about the
problem. Every recursive algorithm must include a stopping criterion.
In the recursive binary search algorithm, the list of elements is divided into two halves of (nearly)
equal size, and the search then continues recursively in only the half where the element can
possibly be present.
Algorithm
Complexity: The complexity of the recursive binary search algorithm is O(log₂ n), where n is the
number of elements in the array.
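A minimal recursive sketch in C (0-based indices; the function name and signature are illustrative):

/* Recursive binary search: returns the index of key in the sorted
   range a[low..high], or -1 if the key is absent. */
int rbinary_search(const int a[], int low, int high, int key)
{
    if (low > high)
        return -1;                    /* stopping criterion: empty range */
    int mid = low + (high - low) / 2;
    if (a[mid] == key)
        return mid;
    else if (key < a[mid])
        return rbinary_search(a, low, mid - 1, key);   /* left half */
    else
        return rbinary_search(a, mid + 1, high, key);  /* right half */
}

Each call discards half of the remaining range, which is where the O(log₂ n) bound comes from.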
IMPLEMENTATION:
Program- 3
Merge Sort Algorithm
The sorting algorithm Mergesort produces a sorted sequence by sorting its two halves and
merging them.
Idea:
The Mergesort algorithm is based on divide and conquer strategy. First, the sequence to be sorted
is decomposed into two halves (Divide). Each half is sorted independently (Conquer). Then the
two sorted halves are merged to a sorted sequence (Combine).
The procedure mergesort sorts a sequence from index “low” to index “high”. First, index “mid” in
the middle between “low” and “high” is determined. Then the first part of the sequence (from low
to mid) and the second part (from mid+1 to high) are sorted by recursive calls of mergesort. Then
the two sorted halves are merged by procedure merge. Recursion ends when low = high, i.e. when
a subsequence consists of only one element.
The main work of the Mergesort algorithm is performed by the function merge. It is
usually implemented in the following way: the two halves are first copied into an auxiliary
array b. Then the two halves are scanned by pointers i and j, and at each step the smaller of
the two elements currently pointed to is copied back to array a.
At the end a situation occurs where one index has reached the end of its half, while the other has
not. Then, in principle, the rest of the elements of the corresponding half have to be copied back.
Actually, this is not necessary for the second half, since (copies of) the remaining elements are
already at their proper places.
Algorithm
Algorithm Merge-Sort ( low, high )
// Sorts the subsequence a[low..high].
{
if ( low < high ) // more than one element?
{
mid=(low + high)/2;// Split the list from the middle.
// Solve the subproblems.
Merge-Sort(low, mid);
Merge-Sort(mid+1, high);
// Combine the solutions.
Merge(low, mid, high);
}
}
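A runnable C sketch of the scheme just described (the array bound MAX and the sample data are illustrative):

#include <stdio.h>

#define MAX 100

/* Merge the sorted runs a[low..mid] and a[mid+1..high]: copy both
   halves into the auxiliary array b, then scan with pointers i and j. */
void merge(int a[], int low, int mid, int high)
{
    int b[MAX];
    int i, j, k;
    for (i = low; i <= high; i++)
        b[i] = a[i];                  /* copy both halves into b */
    i = low; j = mid + 1; k = low;
    while (i <= mid && j <= high)     /* copy back the smaller element */
        a[k++] = (b[i] <= b[j]) ? b[i++] : b[j++];
    while (i <= mid)                  /* leftovers of the first half */
        a[k++] = b[i++];
    /* leftovers of the second half are already at their proper places */
}

void merge_sort(int a[], int low, int high)
{
    if (low < high) {
        int mid = (low + high) / 2;
        merge_sort(a, low, mid);
        merge_sort(a, mid + 1, high);
        merge(a, low, mid, high);
    }
}

int main(void)
{
    int a[] = {38, 27, 43, 3, 9, 82, 10};
    int n = sizeof a / sizeof a[0];
    merge_sort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);          /* prints: 3 9 10 27 38 43 82 */
    printf("\n");
    return 0;
}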
IMPLEMENTATION:
Program- 4
Quick Sort Algorithm
Quick sort was discovered by Tony Hoare in 1962. In this algorithm, the hard work is splitting the
array into subsets so that merging the final result is trivial.
Divide: Rearrange the elements and split the array into two subarrays and an element in between,
so that each element in the left subarray is less than or equal to the middle element and
each element in the right subarray is greater than the middle element.
Conquer: Recursively sort the two subarrays.
Combine: None
Example:
Sort the numbers: 65, 70, 75, 80, 85, 60, 55, 50, 45
The first element, 65, is the pivot; +∞ in position (10) acts as a sentinel.
(1)  (2)  (3)  (4)  (5)  (6)  (7)  (8)  (9)  (10)    i   j
65   70   75   80   85   60   55   50   45   +∞      2   9
65   45   75   80   85   60   55   50   70   +∞      3   8
65   45   50   80   85   60   55   75   70   +∞      4   7
65   45   50   55   85   60   80   75   70   +∞      5   6
65   45   50   55   60   85   80   75   70   +∞      6   5
60   45   50   55   65   85   80   75   70   +∞
\______________/         \___________________/
    Sublist-1                  Sublist-2
Algorithm
Algorithm Quicksort (a, low, high)
{
/* Termination condition! */
if ( high > low )
{
pivot = Partition( a, low, high );
Quicksort( a, low, pivot-1 );
Quicksort( a, pivot+1, high );
}
}
Algorithm Partition ( a, low, high )
// Partitions a[low..high] about the pivot v = a[low]; a[high+1]
// must hold a value >= every element (e.g. +∞) as a sentinel.
{
v := a[low]; i := low; j := high + 1;
repeat
{
/* Move left while item < pivot */
repeat i := i + 1; until ( a[i] >= v );
/* Move right while item > pivot */
repeat j := j - 1; until ( a[j] <= v );
if ( i < j )
{
temp:=a[i];
a[i]:=a[j];
a[j]:=temp;
}
} until ( i >= j );
a[low] := a[j]; a[j] := v; // put the pivot into its final position
return j;
}
Complexity:
Best Case: O(n log n)
Worst Case: O(n²)
Average Case: O(n log n)
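A compact C sketch of the method; this variant bounds-checks the left scan instead of relying on a +∞ sentinel, so no extra array slot is needed:

#include <stdio.h>

static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Partition a[low..high] about the pivot a[low]; returns the
   pivot's final position. */
static int partition(int a[], int low, int high)
{
    int v = a[low];
    int i = low, j = high + 1;
    for (;;) {
        do { i++; } while (i <= high && a[i] < v);  /* move left pointer  */
        do { j--; } while (a[j] > v);               /* move right pointer */
        if (i >= j)
            break;
        swap(&a[i], &a[j]);
    }
    swap(&a[low], &a[j]);             /* pivot into its final position */
    return j;
}

static void quicksort(int a[], int low, int high)
{
    if (low < high) {
        int p = partition(a, low, high);
        quicksort(a, low, p - 1);
        quicksort(a, p + 1, high);
    }
}

int main(void)
{
    int a[] = {65, 70, 75, 80, 85, 60, 55, 50, 45};  /* the example data */
    int n = sizeof a / sizeof a[0];
    quicksort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);          /* prints: 45 50 55 60 65 70 75 80 85 */
    printf("\n");
    return 0;
}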
IMPLEMENTATION:
Program- 5
The Greedy Knapsack Problem
Greedy algorithm: A greedy algorithm for an optimization problem always makes the choice
that looks best at the moment and adds it to the current subsolution. A greedy algorithm obtains
an optimal solution to a problem by making a sequence of choices. At each decision point, the
algorithm chooses the locally optimal solution. In other words, when we are considering which
choice to make, we make the choice that looks best in the current situation, without considering
results from subproblems.
Knapsack Problem: Given a set of items, each with a weight and a value, determine the number
of each item to include in a collection so that the total weight is less than or equal to a given limit
and the total value is as large as possible. It derives its name from the problem faced by someone
who is constrained by a fixed-size knapsack and must fill it with the most useful items.
We have n kinds of items, 1 through n. Each kind of item i has a profit value pi and a weight wi.
We usually assume that all values and weights are nonnegative. To simplify the representation,
we can also assume that the items are listed in increasing order of weight. The maximum weight
that we can carry in the bag is W.
Concept:
1. Calculate pi/wi for each item i.
2. Then Sort the items in decreasing order of their pi/wi.
3. Select the next item from the sorted list and place it into the bag, so that the total weight of the
objects does not exceed the total capacity of the knapsack. Perform this step repeatedly.
Example:
i: (1, 2, 3, 4) pi: (5, 9, 4, 8) wi: (1, 3, 2, 2) and W = 4
now pi/wi: (5, 3, 2, 4)
Solution (items considered in decreasing order of pi/wi):
1st: all of item 1, x[1] = 1, x[1]*w[1] = 1
2nd: all of item 4, x[4] = 1, x[4]*w[4] = 2
3rd: 1/3 of item 2, x[2] = 1/3, x[2]*w[2] = 1
Now the total weight is 4 = W (and x[3] = 0);
the total profit is 5 + 8 + 9*(1/3) = 16.
The Algorithm:
Algorithm GreedyKnapsack ( p, w, n, W )
// Items are assumed sorted in decreasing order of p[i]/w[i].
{
for ( i = 1 to n )
do x[i] = 0
weight = 0
while ( weight < W )
do {
i = best remaining item // the unchosen item with largest p[i]/w[i]
if ( weight + w[i] <= W )
then x[i] = 1 // take all of item i
weight = weight + w[i]
else
x[i] = (W - weight) / w[i] // take only a fraction of item i
weight = W
}
return x
}
Complexity
If the items are already sorted into decreasing order of pi/wi, then the while-loop takes time
in O(n); therefore, the total time including the sort is in O(n log n).
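A runnable C sketch under the same assumption (items pre-sorted by p[i]/w[i] in decreasing order; the data are the example's items in that order):

#include <stdio.h>

/* Greedy fractional knapsack. x[i] receives the fraction of item i
   taken; the total profit is returned. */
double greedy_knapsack(const double p[], const double w[], int n,
                       double W, double x[])
{
    double weight = 0.0, profit = 0.0;
    for (int i = 0; i < n; i++)
        x[i] = 0.0;
    for (int i = 0; i < n && weight < W; i++) {
        if (weight + w[i] <= W) {         /* the whole item fits */
            x[i] = 1.0;
            weight += w[i];
            profit += p[i];
        } else {                          /* take only a fraction */
            x[i] = (W - weight) / w[i];
            profit += p[i] * x[i];
            weight = W;
        }
    }
    return profit;
}

int main(void)
{
    /* items 1, 4, 2, 3 of the example, sorted by p/w: 5, 4, 3, 2 */
    double p[] = {5, 8, 9, 4}, w[] = {1, 2, 3, 2}, x[4];
    printf("profit = %.2f\n", greedy_knapsack(p, w, 4, 4.0, x));
    /* prints: profit = 16.00 */
    return 0;
}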
IMPLEMENTATION:
Program-6
Optimal merge patterns using Greedy method
Given n sorted files, there are many ways to pair-wise merge them into a single sorted
file. Different pairings require different amounts of computing time. The optimal way to
pair-wise merge n sorted files can be found using the greedy method.
A greedy attempt to obtain an optimal merge pattern is easy to formulate. Since merging an n-
record file and an m-record file requires possibly n+m record moves, the obvious
selection criterion is: at each step, merge the two smallest files together.
For Example: Suppose there are 3 sorted lists L1, L2, and L3, of sizes 30, 20, and 10, respectively,
which need to be merged into a combined sorted list, but we can merge only two at a time. We
intend to find an optimal merge pattern which minimizes the total number of comparisons. For
example, we can merge L1 and L2, which uses 30 + 20 = 50 comparisons resulting in a list of size
50. We can then merge this list with list L3, using another 50 + 10 = 60 comparisons, so the total
number of comparisons is 50 + 60 = 110. Alternatively, we can merge lists L2 and L3, using 20 +
10 = 30 comparisons, the resulting list (size 30) can then be merged with list L1, for another 30 +
30 = 60 comparisons. So the total number of comparisons is 30 + 60 = 90. It doesn’t take long to
see that this latter merge pattern is the optimal one.
Binary Merge Trees: We can depict the merge patterns using a binary tree, built from the leaf
nodes (the initial lists) towards the root in which each merge of two nodes creates a parent node
whose size is the sum of the sizes of the two children. For example, the two previous merge
patterns are depicted in the following two figures:
Fig 7.2: Construction of optimal merge tree
If di is the distance from the root to the external node for file xi, and qi is the length of file xi, then
the total number of record moves for this binary merge tree is given by:

sum over i = 1 to n of di * qi

This sum is called the weighted external path length of the tree.
Algorithm
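A compact C sketch of the greedy rule (repeatedly merge the two smallest lists); the list sizes come from the example above, and the linear scan for the two smallest lists is a simplification of the min-heap a production version would use:

#include <stdio.h>

/* Total record moves for the greedy two-way merge pattern. */
long optimal_merge_cost(long size[], int n)
{
    long total = 0;
    while (n > 1) {
        /* find the indices of the two smallest list sizes */
        int s1 = (size[0] <= size[1]) ? 0 : 1, s2 = 1 - s1;
        for (int i = 2; i < n; i++) {
            if (size[i] < size[s1]) { s2 = s1; s1 = i; }
            else if (size[i] < size[s2]) s2 = i;
        }
        long merged = size[s1] + size[s2];
        total += merged;              /* record moves for this merge */
        size[s1] = merged;            /* the merged list replaces one... */
        size[s2] = size[--n];         /* ...and the other leaves the pool */
    }
    return total;
}

int main(void)
{
    long sizes[] = {30, 20, 10};      /* the lists L1, L2, L3 from the text */
    printf("%ld\n", optimal_merge_cost(sizes, 3));   /* prints 90 */
    return 0;
}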
IMPLEMENTATION:
Program-7
Huffman Coding
This problem is that of finding a minimum-length bit string which can be used to encode a
string of symbols. One application is text compression.
Prefix(-free) code: no codeword is a prefix of any other codeword (so decoding is unambiguous).
An optimal data compression achievable by a character code can always be achieved with a prefix
code. An optimal code is always represented by a full binary tree, in which every non-leaf node
has two children.
Huffman's scheme uses a table of frequency of occurrence for each symbol (or character) in the
input. This table may be derived from the input itself or from data which is representative of the
input. For instance, the frequency of occurrence of letters in normal English might be derived
from processing a large number of text documents and then used for encoding all text documents.
We then need to assign a variable-length bit string to each character that unambiguously
represents that character. This means that no character's encoding may be a prefix of
another's. If the characters to be encoded are arranged in a binary tree:
An encoding for each character is found by following the tree from the root to the character's
leaf: the encoding is the string of bits labelling each branch followed.
For example:
String Encoding
TEA 10 00 010
SEA 011 00 010
TEN 10 00 110
Algorithm
Algorithm Huffman ( C )
// C is the set of n characters, each with a frequency freq.
{
n = |C|
Q = C // min-priority queue (heap) keyed on frequency
for ( i = 1 to n-1 )
{
z = new node
z.left = x = ExtractMin(Q)
z.right = y = ExtractMin(Q)
z.freq = x.freq + y.freq
Insert(Q, z) ; O(nlgn)
}
return ExtractMin(Q) // root of the Huffman tree
}
Complexity: The time complexity of the Huffman algorithm is O(n log n). Using a heap to store
the weight of each tree, each iteration requires O(log n) time to determine the cheapest weight
and insert the new weight. There are O(n) iterations, one for each item.
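A self-contained C sketch that builds the tree and prints each symbol's code. The alphabet and frequencies are assumptions for illustration, and a linear scan for the two lightest trees replaces the heap, so this version is O(n²):

#include <stdio.h>
#include <stdlib.h>

struct node {
    int freq;
    char sym;                         /* '\0' for internal nodes */
    struct node *left, *right;
};

static struct node *make(int freq, char sym, struct node *l, struct node *r)
{
    struct node *p = malloc(sizeof *p);
    p->freq = freq; p->sym = sym; p->left = l; p->right = r;
    return p;
}

/* Print the code of every leaf: 0 = left branch, 1 = right branch. */
static void print_codes(const struct node *t, char *buf, int depth)
{
    if (!t->left) {                   /* leaf node */
        buf[depth] = '\0';
        printf("%c: %s\n", t->sym, buf);
        return;
    }
    buf[depth] = '0'; print_codes(t->left,  buf, depth + 1);
    buf[depth] = '1'; print_codes(t->right, buf, depth + 1);
}

int main(void)
{
    char syms[]  = {'A', 'E', 'N', 'S', 'T'};   /* illustrative alphabet */
    int  freqs[] = { 10,  15,   7,   8,  12 };  /* assumed frequencies   */
    struct node *q[5];
    int n = 5;
    for (int i = 0; i < n; i++)
        q[i] = make(freqs[i], syms[i], NULL, NULL);
    while (n > 1) {                   /* n-1 merges build the tree */
        int s1 = (q[0]->freq <= q[1]->freq) ? 0 : 1, s2 = 1 - s1;
        for (int i = 2; i < n; i++) {
            if (q[i]->freq < q[s1]->freq) { s2 = s1; s1 = i; }
            else if (q[i]->freq < q[s2]->freq) s2 = i;
        }
        struct node *z = make(q[s1]->freq + q[s2]->freq, '\0', q[s1], q[s2]);
        q[s1] = z;                    /* merged tree replaces one child... */
        q[s2] = q[--n];               /* ...and the other leaves the pool  */
    }
    char buf[16];
    print_codes(q[0], buf, 0);
    return 0;
}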
IMPLEMENTATION:
Program-8
Minimum Spanning Trees using Kruskal’s algorithm
Let G = (V, E) be an undirected connected graph. A subgraph T = (V, E’) of G is a spanning
tree of G iff T is a tree.
• A spanning tree is a minimal subgraph G’ of G such that V (G’) = V (G) and G’ is connected
• Any connected graph with n vertices must have at least n − 1 edges
• All connected graphs with n vertices and n − 1 edges are trees
Kruskal's algorithm is an algorithm in graph theory that finds a minimum spanning tree for a
connected weighted graph. This means it finds a subset of the edges that forms a tree that
includes every vertex, where the total weight of all the edges in the tree is minimized. If the
graph is not connected, then it finds a minimum spanning forest (a minimum spanning tree for
each connected component). Kruskal's algorithm is an example of a greedy algorithm.
Example:
Solution:
Iteration   Edge     Components
0           -        {1}; {2}; {3}; {4}; {5}; {6}; {7}
1           {1,2}    {1,2}; {3}; {4}; {5}; {6}; {7}
2           {2,3}    {1,2,3}; {4}; {5}; {6}; {7}
3           {4,5}    {1,2,3}; {4,5}; {6}; {7}
4           {6,7}    {1,2,3}; {4,5}; {6,7}
5           {1,4}    {1,2,3,4,5}; {6,7}
6           {2,5}    Not included (adds cycle)
7           {4,7}    {1,2,3,4,5,6,7}
Algorithm
Algorithm MST_Kruskal ( G, t )
{
// G is the graph, with edges E(G) and vertices V(G).
// w(u,v) gives the weight of edge (u,v).
// t is the set of edges in the minimum spanning tree.
// Build a heap out of the edges using edge cost as the comparison criteria
// using the heapify algorithm
heap = heapify ( E(G) )
t = NULL;
// Change the parent of each vertex to a NULL
// Each vertex is in different set
for ( i = 0; i < |V(G)|; i++ )
{
parent[i] = NULL
}
i = 0
while ( ( i < |V(G)| - 1 ) AND (heap is not empty) ) // a spanning tree has |V|-1 edges
{
e = delete ( heap ) // Get minimum cost edge from heap
adjust ( heap ) // Reheapify heap
// Find both sides of edge e = (u,v) in the tree grown so far
j = find ( u(e), t )
k = find ( v(e), t )
if ( j != k )// Both are in different sets
{
i++
t[i,1] = u
t[i,2] = v
union ( j, k )
}
}
}
Complexity: First sort the edges by weight using a comparison sort in O(E log E) time; this
allows the step "remove an edge with minimum weight" to operate in constant time. Next,
we use a disjoint-set data structure (union & find) to keep track of which vertices are in which
components. We need to perform O(E) operations: two 'find' operations and possibly one 'union' for
each edge. Even a simple disjoint-set data structure, such as disjoint-set forests with union by rank,
can perform O(E) operations in O(E log V) time. Thus the total time is O(E log E) = O(E log V).
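A compact C sketch (the edge list and vertex count are illustrative; qsort stands in for the heap of the pseudocode, and a plain disjoint-set forest does the find/union work):

#include <stdio.h>
#include <stdlib.h>

struct edge { int u, v, w; };

static int parent[100];

static int find(int x)                /* representative of x's component */
{
    while (parent[x] != x)
        x = parent[x];
    return x;
}

static int cmp(const void *a, const void *b)
{
    return ((const struct edge *)a)->w - ((const struct edge *)b)->w;
}

int main(void)
{
    struct edge e[] = {{0,1,2},{1,2,3},{0,2,7},{2,3,4},{1,3,8}};
    int nv = 4, ne = 5, cost = 0, taken = 0;

    qsort(e, ne, sizeof e[0], cmp);   /* cheapest edges first */
    for (int i = 0; i < nv; i++)
        parent[i] = i;                /* each vertex in its own set */

    for (int i = 0; i < ne && taken < nv - 1; i++) {
        int j = find(e[i].u), k = find(e[i].v);
        if (j != k) {                 /* different components: no cycle */
            parent[j] = k;            /* union the two sets */
            cost += e[i].w;
            taken++;
            printf("edge (%d,%d)\n", e[i].u, e[i].v);
        }
    }
    printf("MST cost = %d\n", cost);  /* prints: MST cost = 9 */
    return 0;
}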
IMPLEMENTATION:
Program-9
Minimum Spanning Trees using Prim’s algorithm
Prim's algorithm is a greedy algorithm that finds a minimum spanning tree for a connected
weighted undirected graph. This means it finds a subset of the edges that forms a tree that
includes every vertex, where the total weight of all the edges in the tree is minimized. The
algorithm was developed in 1930 by the Czech mathematician Vojtěch Jarník, later
independently by the computer scientist Robert C. Prim in 1957, and rediscovered by Edsger
Dijkstra in 1959. Therefore it is also sometimes called the DJP algorithm, the Jarník
algorithm, or the Prim–Jarník algorithm.
The Concept:
The algorithm continuously increases the size of a tree, one edge at a time, starting with a tree
consisting of a single vertex, until it spans all vertices.
▪ Initialize: Vnew = {x}, where x is an arbitrary node (starting point) from V; Enew = {}
▪ Repeat until Vnew = V:
  ▪ Choose an edge (u, v) with minimal weight such that u is in Vnew and v is not
    (if there are multiple edges with the same weight, any of them may be picked)
  ▪ Add v to Vnew, and (u, v) to Enew
Algorithm
Complexity
A simple implementation using an adjacency matrix graph representation and searching an array
of weights to find the minimum-weight edge to add requires O(V²) running time. Using a simple
binary heap data structure and an adjacency list representation, Prim's algorithm can be shown to
run in time O(E log V), where E is the number of edges and V is the number of vertices. Using a
more sophisticated Fibonacci heap, this can be brought down to O(E + V log V), which is
asymptotically faster when the graph is dense enough that E is ω(V).
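A minimal O(V²) C sketch of the concept above, using an adjacency matrix (the graph is illustrative; 0 means "no edge"):

#include <stdio.h>

#define V 5
#define INF 1000000

int main(void)
{
    int g[V][V] = {
        {0, 2, 0, 6, 0},
        {2, 0, 3, 8, 5},
        {0, 3, 0, 0, 7},
        {6, 8, 0, 0, 9},
        {0, 5, 7, 9, 0},
    };
    int dist[V], in_tree[V], parent[V], cost = 0;

    for (int i = 0; i < V; i++) {
        dist[i] = INF; in_tree[i] = 0; parent[i] = -1;
    }
    dist[0] = 0;                      /* Vnew starts as {vertex 0} */

    for (int k = 0; k < V; k++) {
        int u = -1;
        for (int i = 0; i < V; i++)   /* cheapest vertex not yet in tree */
            if (!in_tree[i] && (u == -1 || dist[i] < dist[u]))
                u = i;
        in_tree[u] = 1;
        cost += dist[u];
        if (parent[u] >= 0)
            printf("edge (%d,%d)\n", parent[u], u);
        for (int i = 0; i < V; i++)   /* update fringe weights via u */
            if (g[u][i] && !in_tree[i] && g[u][i] < dist[i]) {
                dist[i] = g[u][i];
                parent[i] = u;
            }
    }
    printf("MST cost = %d\n", cost);  /* prints: MST cost = 16 */
    return 0;
}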
IMPLEMENTATION:
Program-10
Single Source Shortest path algorithm using Greedy
Method
Dijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in 1956 and
published in 1959, is a graph search algorithm that solves the single-source shortest path
problem for a graph with nonnegative edge path costs, producing a shortest path tree. This
algorithm is often used in routing. An equivalent algorithm was developed by Edward F. Moore
in 1957.
For a given source vertex (node) in the graph, the algorithm finds the path with lowest cost (i.e.
the shortest path) between that vertex and every other vertex. It can also be used for finding
costs of shortest paths from a single vertex to a single destination vertex by stopping the
algorithm once the shortest path to the destination vertex has been determined. For example, if
the vertices of the graph represent cities and edge path costs represent driving distances between
pairs of cities connected by a direct road, Dijkstra's algorithm can be used to find the shortest
route between one city and all other cities.
Concept:
Let the node at which we are starting be called the initial node. Let the distance of node Y be the
distance from the initial node to Y. Dijkstra's algorithm will assign some initial distance values
and will try to improve them step by step.
1. Assign to every node a distance value: set it to zero for our initial node and to infinity for all
other nodes.
2. Mark all nodes as unvisited. Set initial node as current.
3. For the current node, consider all its unvisited neighbors and calculate their tentative distances.
For example, if the current node A has a distance of 6, and the edge connecting it with another
node B has weight 2, then the distance to B through A is 6 + 2 = 8. If this is less than B's
previously recorded distance, overwrite that distance.
4. When we are done considering all neighbors of the current node, mark it as visited. A visited
node will not be checked ever again; its distance recorded now is final and minimal.
5. If all nodes have been visited, finish. Otherwise, set the unvisited node with the smallest distance
(from the initial node, considering all nodes in graph) as the next "current node" and continue
from step 3.
Algorithm:
Algorithm ShortestPaths ( v, C, D, n )
// v is the source vertex; C[1:n, 1:n] is the cost adjacency matrix.
// Here D[1:n] is a distance vector that indicates the cost of each
// destination from the source.
{
S = { v };
for (i = 1; i <= n; i++)
D[i] = C[v,i];
for (i = 1; i <= n-1; i++)
{
choose a vertex w Є V-S such that D[w] is a minimum;
S = S U { w };
for each vertex v Є V-S
D[v] = min (D[v], D[w] + C[w, v])
}
}
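A runnable C sketch of the same procedure with 0-based indices (the cost matrix is illustrative; INF marks a missing edge):

#include <stdio.h>

#define N 5
#define INF 1000000

int main(void)
{
    int C[N][N] = {
        {0,   10,  INF, 30,  100},
        {INF, 0,   50,  INF, INF},
        {INF, INF, 0,   INF, 10 },
        {INF, INF, 20,  0,   60 },
        {INF, INF, INF, INF, 0  },
    };
    int D[N], visited[N] = {0};
    int src = 0;

    for (int i = 0; i < N; i++)
        D[i] = C[src][i];             /* initial distances from the source */
    visited[src] = 1;

    for (int k = 0; k < N - 1; k++) {
        int w = -1;
        for (int i = 0; i < N; i++)   /* unvisited vertex with minimum D */
            if (!visited[i] && (w == -1 || D[i] < D[w]))
                w = i;
        visited[w] = 1;
        for (int i = 0; i < N; i++)   /* D[i] = min(D[i], D[w] + C[w][i]) */
            if (!visited[i] && D[w] + C[w][i] < D[i])
                D[i] = D[w] + C[w][i];
    }
    for (int i = 0; i < N; i++)
        printf("D[%d] = %d\n", i, D[i]);
    /* prints: D[0]=0  D[1]=10  D[2]=50  D[3]=30  D[4]=60 */
    return 0;
}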
IMPLEMENTATION: