Ada 1

The document discusses the Tower of Hanoi problem and its recursive solution, and the analysis of the efficiency of recursive and non-recursive algorithms. The Tower of Hanoi problem involves moving disks of different sizes between rods, where only one disk can be moved at a time and a larger disk cannot be placed on top of a smaller one. The problem is solved recursively by moving all but the largest disk to the temporary rod, then moving the largest disk to the destination rod, and finally moving the remaining disks from the temporary rod to the destination rod. The document also gives steps for mathematically analyzing the efficiency of recursive and non-recursive algorithms, such as identifying the basic operation and determining whether the operation count depends on the input.

Uploaded by kavithaangappan
Copyright © Attribution Non-Commercial (BY-NC)

ADA

INTERNAL 1

[Figure: three rods labelled 1 (source), 2 (temp), 3 (dest)]

The Tower of Hanoi problem requires moving a given number of disks from a source rod to a destination rod such that at any given time only one disk is moved, and a larger disk is never placed on top of a smaller disk. The recursive solution (for n disks) is:

1. Recursively move (n-1) disks from source to temp.
2. Move the nth (largest) disk from source to dest.
3. Recursively move (n-1) disks from temp to dest.

If n = 1, move the disk directly from source to dest.

To calculate C(n):
1. The input size is n (the number of disks).
2. The basic operation performed is the MOVEMENT of a disk.
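The recursive solution above can be sketched as a short runnable program (a sketch, not from the document; the rod names and the recorded move list are illustrative):

```python
def hanoi(n, source, temp, dest, moves):
    """Recursively move n disks from source to dest, recording each move."""
    if n == 1:
        moves.append((source, dest))  # base case: move the single disk directly
        return
    hanoi(n - 1, source, dest, temp, moves)  # step 1: n-1 disks onto the temp rod
    moves.append((source, dest))             # step 2: largest disk to dest
    hanoi(n - 1, temp, source, dest, moves)  # step 3: n-1 disks onto the largest

moves = []
hanoi(3, "1", "2", "3", moves)
print(len(moves))  # 7 moves for n = 3, matching M(n) = 2^n - 1
```

Running it for n = 3 performs 7 disk movements, which agrees with the count M(n) = 2^n - 1 derived below.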

3. Yes, the number of times the basic operation is performed depends on the input.
4. Set up and solve the recurrence relation:

M(n) = M(n-1) + 1 + M(n-1) = 2M(n-1) + 1

By the method of backward substitution:
Substituting M(n-1) = 2M(n-2) + 1 gives
M(n) = 2[2M(n-2) + 1] + 1 = 2^2 M(n-2) + 2 + 1
Substituting M(n-2) = 2M(n-3) + 1 gives
M(n) = 2^3 M(n-3) + 2^2 + 2 + 1
...
= 2^i M(n-i) + 2^(i-1) + ... + 2 + 1
Taking i = n:
M(n) = 2^n M(0) + 2^(n-1) + ... + 2 + 1
Further, M(0) = 0, so
M(n) = 2^(n-1) + 2^(n-2) + ... + 2^2 + 2 + 1

This is a geometric progression with sum S = a(r^n - 1)/(r - 1).

Here a = 1, r = 2 and there are n terms, so S = 2^n - 1.
M(n) = 2^n - 1 ∈ Θ(2^n).

1.b Discuss the average-case efficiency of the sequential search algorithm.
Solution: The average-case efficiency of an algorithm gives its efficiency on a random input. For sequential search, let the probability of a successful search be p, and assume that the first match is equally likely to occur at each position i.

For a successful search, the probability that the first match occurs in position i is p/n for every i, and the number of comparisons made in that case is i. For an unsuccessful search, n comparisons are made, and such a search occurs with probability (1 - p). Hence

Cavg(n) = [1·p/n + 2·p/n + ... + n·p/n] + n(1 - p)
        = (p/n)[1 + 2 + ... + n] + n(1 - p)
        = (p/n) · n(n+1)/2 + n(1 - p)
        = p(n+1)/2 + n(1 - p)

This is the general formula for the average-case efficiency of sequential search. For a successful search (p = 1), Cavg(n) = (n+1)/2, which holds good. For an unsuccessful search (p = 0), Cavg(n) = n, i.e. n comparisons have to be made.

2.a Give the algorithm for the sieve of Eratosthenes, find the prime numbers up to 7 by iterating through the algorithm, and prove why p*p <= n.
Solution:
//ALGORITHM Sieve(n)
//INPUT: an integer n >= 2
//OUTPUT: array L of all prime numbers less than or equal to n
for p <- 2 to n do
    A[p] <- p
for p <- 2 to FLOOR(sqrt(n)) do
    if A[p] != 0          //p hasn't been previously eliminated from the list
        j <- p * p
        while j <= n do
            A[j] <- 0     //mark element as eliminated
            j <- j + p
i <- 0
for p <- 2 to n do
    if A[p] != 0
        L[i] <- A[p]
        i <- i + 1
return L

TRACE (n = 7):
Initial list: 2 3 4 5 6 7
After eliminating multiples of 2: 2 3 5 7
Primes: 2 3 5 7
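A runnable sketch of the pseudocode above (the function name and use of a Python list for A are illustrative):

```python
import math

def sieve(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    A = list(range(n + 1))            # A[p] = p for p = 2..n; A[0], A[1] unused
    for p in range(2, math.isqrt(n) + 1):
        if A[p] != 0:                 # p hasn't been eliminated yet
            j = p * p                 # smaller multiples were removed in earlier passes
            while j <= n:
                A[j] = 0              # mark j as eliminated
                j += p
    return [A[p] for p in range(2, n + 1) if A[p] != 0]

print(sieve(7))  # [2, 3, 5, 7], matching the trace
```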

Now, to determine the largest number p whose multiples can still remain on the list: if p is a number whose multiples are being eliminated on the current pass, then the first multiple to consider is p*p, because all of its smaller multiples 2p, ..., p(p-1) were already eliminated in previous passes. This observation avoids eliminating the same element more than once. Furthermore, p*p should not be greater than n, and hence p should not exceed sqrt(n) rounded down (i.e. floor(sqrt(n))). Hence the condition p*p <= n.

2.b Define the following:
Bag: A bag or multiset is an unordered collection of items that are not necessarily distinct.
Connected component: If a graph is not connected, it consists of several connected pieces that are called the connected components of the graph.

3.a If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}). Prove the theorem.
Given: t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)).
The principle used to prove the theorem is as follows: let a1, b1, a2, b2 be four numbers such that a1 <= b1 and a2 <= b2. Then a1 + a2 <= 2·max{b1, b2}.
Since t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), there exist constants c1, c2 and thresholds n1, n2 such that
t1(n) <= c1·g1(n) for all n >= n1 and t2(n) <= c2·g2(n) for all n >= n2.
Let c3 = max{c1, c2}. Then for all n >= max{n1, n2}:
t1(n) + t2(n) <= c1·g1(n) + c2·g2(n)
             <= c3·g1(n) + c3·g2(n)
             <= 2·c3·max{g1(n), g2(n)}
Hence t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}), with constant 2·c3 and threshold max{n1, n2}. Hence proved.

3.b Write the steps to mathematically analyze the efficiency of non-recursive algorithms.
Solution: Steps to mathematically analyze the efficiency of non-recursive algorithms:

1. Decide on a parameter n indicating the input size.
2. Identify the algorithm's basic operation.
3. Determine whether the basic operation count depends only on n or also on other properties of the input (if so, worst, best and average cases must be investigated separately).
4. Set up a summation for C(n) reflecting the algorithm's loop structure, i.e. express the number of times the algorithm's basic operation is executed.
5. Simplify the summation using standard formulas and rules of sum manipulation, i.e. find a closed-form formula for the count or, at the very least, establish its order of growth.
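As an illustration of these steps (the example is supplied here, not taken from the document): for a maximum-finding loop the basic operation is the comparison A[i] > max, executed exactly n - 1 times, so C(n) = n - 1 ∈ Θ(n). A sketch with an explicit counter:

```python
def max_with_count(A):
    """Find the maximum of A while counting the basic operation (comparisons)."""
    count = 0
    maxval = A[0]
    for i in range(1, len(A)):
        count += 1            # one basic operation per loop iteration
        if A[i] > maxval:
            maxval = A[i]
    return maxval, count

val, c = max_with_count([3, 1, 4, 1, 5, 9, 2, 6])
print(val, c)  # maximum 9, after 7 comparisons (n - 1 for n = 8)
```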

3. Discuss the asymptotic notations with respect to definitions and graphs, and prove that f(n) = n(n+1)/2 belongs to all three notations when g(n) = n^2.
Solution: The asymptotic notations are:
1. Big oh (O): A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e. if there exist some positive constant c and some non-negative integer n0 such that t(n) <= c·g(n) for all n >= n0.

Graphical representation: [graph showing t(n) lying below c·g(n) for all n >= n0]

2. Big omega (Ω):

A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some constant multiple of g(n) for all large n, i.e. if there exist some positive constant c and some non-negative integer n0 such that t(n) >= c·g(n) for all n >= n0.

[graph showing t(n) lying above c·g(n) for all n >= n0]

3. Big theta (Θ): A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by constant multiples of g(n) for all large n, i.e. if there exist some positive constants c1 and c2 and some non-negative integer n0 such that c2·g(n) <= t(n) <= c1·g(n) for all n >= n0.

[graph showing t(n) lying between c2·g(n) and c1·g(n) for all n >= n0]
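A standard proof that f(n) = n(n+1)/2 ∈ Θ(n^2) (and hence also O(n^2) and Ω(n^2)), which the question asks for, can be read off the definitions:

```latex
f(n) = \frac{n(n+1)}{2} = \frac{n^2}{2} + \frac{n}{2}
\quad\text{Upper bound: for } n \ge 1,\ \frac{n^2}{2} + \frac{n}{2} \le \frac{n^2}{2} + \frac{n^2}{2} = n^2
\ \Rightarrow\ f(n) \in O(n^2)\ \text{with}\ c = 1,\ n_0 = 1.
\quad\text{Lower bound: for } n \ge 1,\ f(n) \ge \frac{n^2}{2}
\ \Rightarrow\ f(n) \in \Omega(n^2)\ \text{with}\ c = \tfrac{1}{2},\ n_0 = 1.
\quad\text{Hence } \tfrac{1}{2}\,n^2 \le f(n) \le n^2 \ \Rightarrow\ f(n) \in \Theta(n^2)
\ \text{with}\ c_2 = \tfrac{1}{2},\ c_1 = 1,\ n_0 = 1.
```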

4. Derive and compare the efficiencies of selection sort and bubble sort, and write the algorithms for the same.

//ALGORITHM: SelectionSort(A[0..n-1])
//INPUT: An array A[0..n-1] of orderable elements
//OUTPUT: The array A[0..n-1] sorted in ascending order
for i <- 0 to n-2 do
    min <- i
    for j <- i+1 to n-1 do
        if A[j] < A[min]
            min <- j
    swap A[i] and A[min]

//ALGORITHM: BubbleSort(A[0..n-1])
//INPUT: An array A[0..n-1] of orderable elements
//OUTPUT: The array A[0..n-1] sorted in ascending order
for i <- 0 to n-2 do
    for j <- 0 to n-2-i do
        if A[j+1] < A[j]
            swap A[j] and A[j+1]

In selection sort there is at most one swap per pass, so even in the worst case it requires only n-1 swaps in total. In bubble sort the number of swaps is larger: in the worst case it requires n(n-1)/2 exchanges. On the other hand, the number of comparisons made by selection sort does not depend on the order of the elements, whereas for bubble sort it does matter. The efficiency of selection sort is C(n) = n(n-1)/2 ∈ Θ(n^2), and the efficiency of bubble sort is likewise C(n) = n(n-1)/2 ∈ Θ(n^2).
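The two algorithms, sketched directly from the pseudocode above (the swap counters are added for illustration and are not part of the original algorithms):

```python
def selection_sort(A):
    """Selection sort; returns the number of swaps performed (at most n-1)."""
    n, swaps = len(A), 0
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):
            if A[j] < A[m]:
                m = j
        if m != i:
            A[i], A[m] = A[m], A[i]   # at most one swap per pass
            swaps += 1
    return swaps

def bubble_sort(A):
    """Bubble sort; returns the number of swaps (up to n(n-1)/2)."""
    n, swaps = len(A), 0
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if A[j + 1] < A[j]:
                A[j], A[j + 1] = A[j + 1], A[j]
                swaps += 1
    return swaps

a = [5, 4, 3, 2, 1]
b = a[:]
print(selection_sort(a), a)  # only 2 swaps on this input
print(bubble_sort(b), b)     # 10 swaps: n(n-1)/2 for reversed input
```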

4. Write an algorithm for string matching by the brute-force method, check whether the pattern 010010 occurs in the text 0101001101001010101001 using the same strategy, and give its efficiency.
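A sketch of the brute-force matcher in the standard style (the function name is illustrative; the algorithm tries every alignment of the pattern against the text):

```python
def brute_force_match(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):           # try every alignment of the pattern
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:                        # all m characters matched
            return i
    return -1

print(brute_force_match("0101001101001010101001", "010010"))  # 8
```

On the given input it finds the pattern starting at position 8 of the text, in agreement with the trace below.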

Efficiency: in the worst case, all m comparisons of the pattern against the text are made at each of the n-m+1 possible alignments:

C(n) = Σ(i=0 to n-m) Σ(j=0 to m-1) 1 = Σ(i=0 to n-m) m = m(n-m+1) ≈ mn for n much larger than m

Hence C(n) ∈ O(mn).

TRACE: align the pattern 010010 with the text 0101001101001010101001 at successive shifts i = 0, 1, 2, .... The alignments at shifts 0 through 7 each end in a mismatch; at shift 8 all six characters match, so the pattern occurs in the text starting at position 8.

5. Define brute force and discuss the analysis framework of an algorithm.

Brute force is a straightforward approach to solving a problem, usually based directly on the problem statement and the definitions of the concepts involved.

Analysis framework of an algorithm:

Measuring input size: The choice of an appropriate size metric can be influenced by the operations of the algorithm; the metric should give a good idea of the efficiency of the algorithm in question. Both time and space efficiencies are measured as functions of the algorithm's input size.

Units for measuring running time: A standard unit of time measurement (a second or millisecond) can be used, but the result depends on the speed of the particular computer, the quality of the program implementing the algorithm, and the compiler used. One possible approach is to count the number of times each of the algorithm's operations is executed; this is difficult, so instead we identify the most important operation of the algorithm, called the basic operation. Let cop be the execution time of the algorithm's basic operation on a particular computer and let C(n) be the number of times this operation needs to be executed for this algorithm; then the running time is estimated as T(n) ≈ cop · C(n).

Orders of growth: Differences in running times on small inputs are not what really distinguishes efficient algorithms from inefficient ones, so we compare orders of growth for large input sizes. The logarithmic function grows slowly; exponential functions, on the other hand, grow so fast that their values become astronomically large even for rather small values of n.

Worst-case, best-case, average-case and amortized efficiencies: The worst-case efficiency of an algorithm for an input of size n is its efficiency on the input of that size on which the algorithm runs longest among all possible inputs of that size. The best case is the input of size n on which the algorithm runs fastest among all possible inputs of that size.

The average case describes an algorithm's behavior on a typical or random input; to analyze it, we have to make assumptions about the possible inputs of size n. Amortized efficiency applies not to a single run of an algorithm but rather to a sequence of operations performed on the same data structure.

5. Give reasons:
1. An adjacency linked-list representation is used for a sparse graph. If a graph is sparse, we use the adjacency linked-list representation because it uses less space than an adjacency matrix, since a sparse graph has few edges.
2. Scientists rarely trade off between time and space. Scientists are prepared to spend more time, since they want accurate values, and they are also not deterred if an algorithm demands more space, since they deal with important, critical problems.
3. Order of growth is a very important component of the analysis framework of an algorithm. When comparing two different algorithms that solve the same problem, if the input size is very small it is not possible to distinguish between their time complexities, and so not possible to choose between them. As the input size increases, some algorithms take less time and some take more, so the algorithm that takes less time can be chosen; hence the order of growth is what matters.
4. The C(n) value is smaller when the Fibonacci sequence is determined iteratively. Computing iteratively, we find the 1st value, then the 2nd, then the 3rd, and so on: each value is computed only once, so the number of basic operations grows only linearly with n, whereas the naive recursive version recomputes the same values repeatedly. Hence the iterative C(n) is smaller.
5. When and why is the number of edges less than or equal to |V|(|V|-1)/2? This bound holds for an undirected graph. At most, every vertex is connected to every other vertex, giving |V|(|V|-1) ordered pairs; since this counts each edge twice, we divide by 2.
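The iterative-versus-recursive Fibonacci point (reason 4) can be checked with explicit counters (a sketch; counting one addition per combination step is an assumption made here for illustration):

```python
def fib_iter(n):
    """Iterative Fibonacci; returns (F(n), number of additions performed)."""
    a, b, adds = 0, 1, 0
    for _ in range(n):
        a, b = b, a + b   # one addition per step: n additions in total
        adds += 1
    return a, adds

def fib_rec(n, counter):
    """Naive recursive Fibonacci; counter[0] counts additions."""
    if n < 2:
        return n
    counter[0] += 1
    return fib_rec(n - 1, counter) + fib_rec(n - 2, counter)

val, adds = fib_iter(10)
c = [0]
fib_rec(10, c)
print(val, adds, c[0])  # F(10) = 55; 10 additions iteratively vs 88 recursively
```

The recursive count grows like the Fibonacci numbers themselves, i.e. exponentially, while the iterative count is linear in n.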
