Fundamentals of Algorithmic Problem Solving
-- Depicts the sequence of steps to be followed in designing and analyzing any algorithm.
-- Algorithms are procedural solutions to problems.
Step 1: Understanding the Problem
• Completely understand the given problem.
• Apply a known algorithm if one exists; otherwise design a new algorithm.
• An input to an algorithm specifies an instance of the problem the algorithm solves.
• A correct algorithm should work for all possible inputs.
Step 2: Ascertaining the Capabilities of the Computational Device
Determine the capabilities of the computational device based on the following factors.
• Architecture of the device: based on this we may need to design two types of algorithms:
-- Sequential algorithms (von Neumann architecture, RAM model)
-- Parallel algorithms (parallel machines)
• Speed of the device (critical for real-time products, military applications)
• Memory space available
Step 3: Choosing between Exact and Approximate Problem Solving
• There are two types of algorithms based on the result obtained:
Exact algorithms: knapsack, TSP, searching, sorting, string matching, etc.
Approximate algorithms: finding a square root, solving non-linear equations
• An algorithm design technique is a general approach to solving a problem.
Algorithms + Data Structures = Efficient Programs
Decide on a method of specifying the algorithm: natural language, flowchart, or pseudocode.
Select an appropriate design technique (brute force, divide and conquer, etc.)
Step 4: Proving the Algorithm's Correctness: typically by mathematical induction
Step 5: Analyzing the Algorithm
Time efficiency: how fast the algorithm executes
Space efficiency: how much extra memory it uses
• Coding the Algorithm
-- Select a suitable programming language; it should support the features assumed in the design.
Program testing: the process of identifying errors in the program and finding out how well the program works.
Chapter 2: Fundamentals of the Analysis of Algorithm Efficiency
Note: Components that affect space efficiency: program space, data space, and stack space.
Note: Components that affect time efficiency: choice of algorithm, number of inputs, and size of the input.
Common Computing Time Functions
• The change in the behavior of an algorithm as the value of n increases is called the order of growth.
1: the running time of the program is constant.
log n: the running time is logarithmic. Such algorithms solve a problem by reducing the problem size by a constant factor in each iteration. Ex: binary search.
n: the running time is linear. Ex: sequential search.
n log n: divide-and-conquer algorithms such as quicksort and mergesort have this running time.
n²: the running time is quadratic; normally two nested loops. Ex: bubble sort, selection sort, subtraction of two matrices.
n³: the running time is cubic; normally three nested loops. Ex: matrix multiplication, solving simultaneous equations by Gaussian elimination.
2ⁿ: the running time is exponential. Ex: generating all subsets of a set.
n!: the running time is factorial. Ex: generating all permutations of a set (brute-force technique).
Sequential Search
i = 0
while i < n and a[i] != k do
    i = i + 1
if i < n return i
else return -1

Example: n = 5, values a = [10, 20, 30, 40, 50] at indices 0 to 4
Case 1: key = 10 (best case)
Case 2: key = 50 (worst case)
Case 3: key = 30 (intermediate case)
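The pseudocode above can be sketched in C as follows (the function and variable names are illustrative):

```c
#include <stddef.h>

/* Sequential search: returns the index of key k in a[0..n-1], or -1 if absent. */
int seq_search(const int a[], int n, int k) {
    int i = 0;
    while (i < n && a[i] != k)   /* basic operation: the comparison a[i] != k */
        i = i + 1;
    return (i < n) ? i : -1;
}
```

For the example list above, searching for 10 returns index 0, for 50 returns index 4, and for an absent key returns -1.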
Best-case efficiency: the efficiency of an algorithm for the input of size n for which the algorithm takes the least time to execute among all possible inputs of that size.
Worst-case efficiency: the efficiency of an algorithm for the input of size n for which the algorithm takes the longest time to execute among all possible inputs of that size.
• Average-case efficiency: the efficiency for a "typical" or random input of size n.
• Amortized efficiency: applies to a sequence of operations performed on the same data structure and is used to smooth out occasional worst-case operations.
Example: Sequential Search
-- Standard assumptions:
a) The probability of a successful search is p (0 ≤ p ≤ 1).
b) The probability of the first match occurring in the ith position of the list is the same for every i.
With these assumptions we can compute the average-case efficiency Cavg(n) as follows.
• In a successful search, the probability of the first match occurring in the ith position of the list is p/n for every i, and the number of comparisons made by the algorithm in that case is obviously i.
• In an unsuccessful search, the number of comparisons is n, with probability 1 − p. Therefore
Cavg(n) = [1·p/n + 2·p/n + 3·p/n + ... + i·p/n + ... + n·p/n] + n·(1 − p)
        = p/n · [1 + 2 + 3 + ... + i + ... + n] + n·(1 − p)
        = p/n · [n(n + 1)/2] + n·(1 − p) = p(n + 1)/2 + n(1 − p)
For example:
if p = 1, the search must be successful, and the average number of comparisons made by sequential search is (n + 1)/2;
if p = 0, the search must be unsuccessful, and the number of comparisons made is n.
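The p = 1 case can be checked numerically: averaging the comparison count of sequential search over every possible key position should give (n + 1)/2. The following is a minimal sketch (function names are illustrative):

```c
/* Count the comparisons made by sequential search for a given key. */
int seq_search_count(const int a[], int n, int k) {
    int i = 0, count = 0;
    while (i < n) {
        count++;                    /* one comparison a[i] != k */
        if (a[i] == k) break;
        i++;
    }
    return count;
}

/* Average comparisons over all n successful searches; expected: (n + 1)/2. */
double avg_successful(int n) {
    int a[100], total = 0;          /* assumes n <= 100, distinct values 0..n-1 */
    for (int i = 0; i < n; i++) a[i] = i;
    for (int k = 0; k < n; k++) total += seq_search_count(a, n, k);
    return (double)total / n;
}
```

For n = 5 the average is (5 + 1)/2 = 3 comparisons, matching the formula.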
Asymptotic Notations and Basic Efficiency Classes
• The value of a function may increase or decrease as the value of n increases.
• Asymptotic analysis is the study of how the value of a function varies for large values of n, where n is the size of the input.
• To compare and rank orders of growth, the following three notations are used:
O notation (big-oh): worst case. Informally, t(n) ∈ O(g(n)) means t(n) grows no faster than some constant multiple of g(n).
Ω notation (big-omega): best case. Using this we can denote the shortest amount of time taken by an algorithm; t(n) ∈ Ω(g(n)) means t(n) grows at least as fast as some constant multiple of g(n).
Θ notation (big-theta): average case (a tight bound); t(n) ∈ Θ(g(n)) means t(n) is bounded both above and below by constant multiples of g(n).
Problems: consider the following f(n)'s and express them using O, Ω, and Θ.
1. Let f(n) = 10n³ + 8.
   Find a g(n) slightly greater than f(n): replace 10n³ + 8 with 10n³ + n³, i.e. 11n³.
   Therefore c = 11, g(n) = n³, and f(n) ∈ O(n³) for n ≥ 2.
Other problems (refer class notes):
1. log n + √n
2. n + n log n
3. 2n + 2
4. 6·2ⁿ + n²
5. 100n + 5
6. 2n² + 3
Prove that n(n − 1)/2 = O(n²)
Prove that 2n(n − 1)/2 = O(n³)
Prove that (1/2)n(n − 1) = Θ(n²)
1. Prove that 3n³ + 2n² = O(n³).
   If we can write f(n) ≤ c·g(n), then f(n) ∈ O(g(n)).
   Assume f(n) = 3n³ + 2n² and g(n) = n³.
   For n ≥ 2 and c = 4, f(n) ≤ c·g(n) evaluates true.
   Check at n = 2, c = 4: f(2) = 3·8 + 2·4 = 32 and c·g(2) = 4·8 = 32, so LHS = RHS, and the bound holds for all n ≥ 2.
2. Prove that 3ⁿ ≠ O(2ⁿ).
   Let f(n) = 3ⁿ and g(n) = 2ⁿ; let us try to find n₀ and c such that 3ⁿ ≤ c·2ⁿ.
   Dividing gives 3ⁿ/2ⁿ = (3/2)ⁿ ≤ c, but (3/2)ⁿ grows without bound, so no constant c ≥ (3/2)ⁿ exists for all n.
   Hence 3ⁿ ≠ O(2ⁿ).
Let f(n) = 100n + 5. Express f(n) using big theta.
We know that the constraint to be satisfied is c₁·g(n) ≤ f(n) ≤ c₂·g(n) for n ≥ n₀.
Therefore 100n ≤ 100n + 5 ≤ 105n for n ≥ 1 (since 5 ≤ 5n for n ≥ 1),
where c₁ = 100, c₂ = 105, n₀ = 1, g(n) = n.
So by definition
f(n) ∈ Θ(n).
Useful Property Involving Asymptotic Notations:
• Useful in analyzing algorithms that comprise two consecutively executed parts.
THEOREM
If t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then
t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).
(Here t(n) is the algorithm's running time and g(n) is the function it is compared with.)
PROOF
The proof extends to orders of growth the following simple fact about four arbitrary real numbers a₁, b₁, a₂, b₂:
if a₁ ≤ b₁ and a₂ ≤ b₂, then a₁ + a₂ ≤ 2·max{b₁, b₂}.
Ex: a₁ = 3, a₂ = 5, b₁ = 7, b₂ = 12: then 3 + 5 = 8 ≤ 2·max{7, 12} = 24.
Example
Algorithm to check whether an array has equal elements:
1. Sort the array: use a sorting algorithm (such as quicksort, mergesort, or heapsort) to arrange the array elements in ascending order.
   • It uses no more than n(n − 1)/2 comparisons and hence is in O(n²).
2. Compare adjacent elements: iterate through the sorted array, comparing each element to its immediate successor.
   • It uses no more than (n − 1) comparisons and hence is in O(n).
3. Return the result: if at any point two adjacent elements are equal, the array contains equal elements; otherwise it does not.
The efficiency of the entire algorithm is O(max{n², n}) = O(n²).
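The two-part algorithm above can be sketched in C (function names are illustrative); the standard library's qsort supplies the sorting step:

```c
#include <stdlib.h>
#include <stdbool.h>

/* Comparator for qsort: ascending order of ints. */
static int cmp_int(const void *p, const void *q) {
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);
}

/* Part 1: sort; part 2: one linear scan over adjacent pairs. */
bool has_equal_elements(int a[], int n) {
    qsort(a, n, sizeof a[0], cmp_int);
    for (int i = 0; i < n - 1; i++)      /* at most n-1 comparisons */
        if (a[i] == a[i + 1])
            return true;
    return false;
}
```

The total cost is dominated by the sorting part, matching O(max{n², n}) = O(n²) for a quadratic sort (or O(n log n) if qsort's sort is used, as here).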
Using Limits for Comparing Orders of Growth:
• O, Ω, and Θ notations are essential for proving their abstract properties, but are rarely used for comparing the orders of growth of two specific functions.
• Computing the limit of the ratio of the two functions is more convenient:
lim n→∞ t(n)/g(n) = 0 means t(n) has a smaller order of growth than g(n);
lim n→∞ t(n)/g(n) = c > 0 means t(n) has the same order of growth as g(n);
lim n→∞ t(n)/g(n) = ∞ means t(n) has a larger order of growth than g(n).
Here the first two cases mean t(n) ∈ O(g(n)), the last two cases mean t(n) ∈ Ω(g(n)), and the second case means t(n) ∈ Θ(g(n)).
Powerful calculus techniques for computing such limits:
• L'Hôpital's rule (also called Bernoulli's rule): to evaluate indeterminate limits of the ∞/∞ form.
• Stirling's formula: provides an approximate value for the factorial of a number, for large values of n.
• The change-of-base rule for logarithms is also useful when applying the limit-based approach to compare the orders of growth of two functions.
General Plan for Analyzing the Time Efficiency of Non-recursive Algorithms
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation. (As a rule, it is located in the
innermost loop.)
3. Check whether the number of times the basic operation is executed
depends only on the size of an input. If it also depends on some additional
property, the worst-case, average-case, and, if necessary, best-case
efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic
operation is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed-form formula for the count or, at the very least, establish its order of growth.
Sum Manipulation Rules:
Σ c·aᵢ = c·Σ aᵢ
Σ (aᵢ ± bᵢ) = Σ aᵢ ± Σ bᵢ
Two Summation Formulas:
S1: Σᵢ₌ₗᵘ 1 = u − l + 1
S2: Σᵢ₌₁ⁿ i = 1 + 2 + ... + n = n(n + 1)/2 ∈ Θ(n²)
Mathematical Analysis of Non-Recursive Algorithms
1) Problem of finding the value of the largest element in a list of n numbers.
Analysis:
1. Input size: n (e.g. n = 5).
2. Basic operation: the comparison A[i] > max (the assignment max ← A[i] is executed only on some iterations, so the comparison is counted).
3. C(n) = the number of times the basic operation is executed; the for loop runs for i = 1 to n − 1.
4. Apply the summation formula: C(n) = Σᵢ₌₁ⁿ⁻¹ 1 = (n − 1) − 1 + 1 = n − 1 (use the u − l + 1 formula, S1, and simplify).
5. Establish the order of growth: C(n) = n − 1 ∈ Θ(n).
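The max-finding algorithm analyzed above can be sketched in C as follows (names are illustrative); the comparison in the loop is the basic operation:

```c
/* Returns the largest element of a[0..n-1]; assumes n >= 1. */
int max_element(const int a[], int n) {
    int max = a[0];
    for (int i = 1; i < n; i++)     /* loop body executes n-1 times */
        if (a[i] > max)             /* basic operation: n-1 comparisons */
            max = a[i];
    return max;
}
```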
2) Element uniqueness problem: check whether all the elements in a given array of n elements are distinct (the algorithm breaks out, or returns false, as soon as an equal pair is found).
Analysis:
1. Input size: n.
2. Basic operation: the comparison A[i] = A[j].
3. The number of times the basic operation is executed depends on the input; the worst case occurs in two scenarios:
   • arrays with no equal elements at all;
   • arrays in which the only pair of equal elements is the last pair compared.
4. Express the count in summation form: Cworst(n) = Σᵢ₌₀ⁿ⁻² Σⱼ₌ᵢ₊₁ⁿ⁻¹ 1 = n(n − 1)/2.
5. Establish the order of growth: Cworst(n) = n(n − 1)/2 ∈ O(n²).
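A brute-force C sketch of the element-uniqueness check (function names are illustrative):

```c
#include <stdbool.h>

/* Returns true iff all elements of a[0..n-1] are distinct. */
bool all_distinct(const int a[], int n) {
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] == a[j])       /* basic operation: at most n(n-1)/2 times */
                return false;       /* early exit on the first equal pair */
    return true;
}
```

The worst case (no equal elements, or the only equal pair compared last) executes the comparison exactly n(n − 1)/2 times.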
3) Given two n × n matrices A and B, compute their product C = AB.
Analysis:
1. Input size: the matrix order n (the matrices contain n × n elements each).
2. Basic operation: the multiplication in the innermost loop.
M(n) = Σᵢ₌₀ⁿ⁻¹ Σⱼ₌₀ⁿ⁻¹ Σₖ₌₀ⁿ⁻¹ 1 = n³ ∈ Θ(n³).
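The triple loop can be sketched in C for a small fixed matrix size (the 2×2 size and the names are illustrative):

```c
#define N 2

/* C = A * B for n x n matrices (n <= N);
   the innermost multiplication is the basic operation, executed n^3 times. */
void mat_mul(int n, const int A[][N], const int B[][N], int C[][N]) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            C[i][j] = 0;
            for (int k = 0; k < n; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
}
```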
4) Find the number of binary digits in the binary representation of a positive decimal integer.
General Plan for Analyzing the Time Efficiency of Recursive Algorithms
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is executed can
vary on different inputs of the same size; if it can, the worst-case, average-
case, and best-case efficiencies must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the
number of times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution
Mathematical Analysis of Recursive Algorithms
1) Compute the factorial function F(n) = n!
Analysis:
1. Input size: n.
2. Basic operation: the multiplication in F(n) = F(n − 1) · n; let M(n) be the number of multiplications.
We need an initial condition that tells us the value with which the sequence starts. Equations of this form are called recurrence relations, or recurrences:
M(n) = M(n − 1) + 1 for n > 0, with M(0) = 0.
• Recurrence relations can be solved by the method of backward substitutions.
• The method's idea (and the reason for its name) is immediately clear from the way it applies to our particular recurrence:
M(n) = M(n − 1) + 1 = [M(n − 2) + 1] + 1 = ... = M(n − i) + i = ... = M(n − n) + n = n,
so M(n) = n ∈ Θ(n).
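A recursive C sketch of the factorial (the unsigned long long return type is an assumption, chosen to delay overflow):

```c
/* n! computed via the recurrence F(n) = F(n-1) * n, with F(0) = 1.
   One multiplication per recursive call, so M(n) = M(n-1) + 1 = n. */
unsigned long long factorial(unsigned int n) {
    if (n == 0)
        return 1;                   /* initial condition */
    return factorial(n - 1) * n;    /* basic operation: one multiplication */
}
```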
2) Tower of Hanoi puzzle
Algorithm: TOH(n, a, b, c)
// Input: the number of disks n
// Output: the disks moved, in order, from source peg a to destination peg c, using auxiliary peg b
{
  if n = 1
      Move(a, c)
  else
  {
      TOH(n − 1, a, c, b)
      Move(a, c)
      TOH(n − 1, b, a, c)
  }
}
Analysis:
1. Input size: the number of disks n (e.g. n = 3).
2. Basic operation: the movement of one disk, counted by M(n).
   n = 1: M(1) = 1
   n > 1: M(n) = M(n − 1) + 1 + M(n − 1), i.e. M(n) = 2M(n − 1) + 1.
Solving this recurrence by backward substitution gives M(n) = 2ⁿ − 1 ∈ Θ(2ⁿ).
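The recursion can be sketched in C as a function that counts (rather than prints) the moves, which directly verifies M(n) = 2ⁿ − 1 (names are illustrative):

```c
/* Moves n disks from peg 'from' to peg 'to' using peg 'via';
   returns the number of single-disk moves performed. */
unsigned long toh(int n, char from, char via, char to) {
    if (n == 1)
        return 1;                                /* M(1) = 1 */
    unsigned long m = toh(n - 1, from, to, via); /* move n-1 disks out of the way */
    m += 1;                                      /* move the largest disk */
    m += toh(n - 1, via, from, to);              /* move n-1 disks back on top */
    return m;                                    /* M(n) = 2M(n-1) + 1 */
}
```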
3) Find the number of binary digits in the binary representation of a positive decimal integer.
Recurrence: A(n) = A(⌊n/2⌋) + 1 for n > 1, with A(1) = 1.
(The floor function rounds down to the nearest integer.)
Example: n = 8
A(8) = A(⌊8/2⌋) + 1 = A(4) + 1 = 4
A(4) = A(⌊4/2⌋) + 1 = A(2) + 1 = 3
A(2) = A(⌊2/2⌋) + 1 = A(1) + 1 = 2
A(1) = 1
So 8 (binary 1000) has 4 binary digits; in general A(n) = ⌊log₂ n⌋ + 1 ∈ Θ(log n).
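The recurrence translates directly into C (the function name is illustrative); integer division by 2 supplies the floor:

```c
/* Number of binary digits of a positive integer n:
   B(1) = 1, B(n) = B(floor(n/2)) + 1 for n > 1. */
int bin_digits(unsigned int n) {
    if (n == 1)
        return 1;                   /* initial condition */
    return bin_digits(n / 2) + 1;   /* integer division rounds down */
}
```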
Chapter 3
Brute Force
Introduction
• Brute force is a straightforward approach to solving a problem.
• It is directly based on the problem statement and the definitions of the concepts involved.
• "Just do it!" would be another way to describe the brute-force approach.
• Ex: compute aⁿ:
  aⁿ = a ∗ a ∗ ... ∗ a (n times)
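A brute-force C sketch of aⁿ by repeated multiplication (the function name and long long type are illustrative assumptions):

```c
/* Computes a^n by repeated multiplication (brute force); assumes n >= 1.
   The multiplication in the loop is the basic operation, executed n-1 times. */
long long power(long long a, unsigned int n) {
    long long result = a;
    for (unsigned int i = 1; i < n; i++)
        result *= a;
    return result;
}
```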
• Two Algorithms :
1) Selection Sort
2) Sequential Search
Selection Sort:
• Requires n − 1 passes.

void selectionSort(int a[], int n)
{
    int i, j, min;
    for (i = 0; i <= n - 2; i++)
    {
        min = i;                    /* index of the smallest remaining element */
        for (j = i + 1; j <= n - 1; j++)
        {
            if (a[j] < a[min])
            {
                min = j;
            }
        }
        int temp = a[min];          /* swap it into position i */
        a[min] = a[i];
        a[i] = temp;
    }
}

Lab cycle 1 experiment: Selection Sort
Consider the following list of numbers and sort it using selection sort: 45, 20, 40, 5, 15
Thus, selection sort is an O(n²) algorithm on all inputs.
Sequential Search