Mcs 211 Complete Pcti

The document provides an overview of algorithms, their properties, and various problem-solving techniques in computer science. It covers the definition of algorithms, their characteristics, types of problems, and methods of analysis including time complexity. Additionally, it discusses different algorithmic approaches such as divide and conquer, greedy techniques, dynamic programming, and randomized algorithms.


Design and Analysis of Algorithms

Block-1 Unit-1
Basics of an Algorithm and Its Properties
Topics to be Covered
Introduction

Example of an Algorithm

Basic Building Blocks of Algorithms

A Survey of Common Running Time

Analysis & Complexity of Algorithm

Types of Problems

+91 - 9319133134 , [email protected]


Topics to be Covered

Problem Solving Techniques


Deterministic and Stochastic Algorithms
Important Topics
Summary



Introduction

• Studying algorithms is an exciting part of the computer science discipline.


• We come across a large number of interesting problems, and techniques to solve these problems.
• Not every problem can be solved with existing techniques, but the majority of them can be. But first, let us define what an algorithm is.
• The word algorithm is derived from the name of the ninth-century mathematician Abu Abdullah Muhammad ibn Musa al-Khwarizmi.
• The word 'al-Khwarizmi' is 'Algorismus' in Latin, which became 'Algorithm' after his name.



Example of an Algorithm

• An algorithm is not a coding instruction; rather, it is a sequence of tasks written in a common language which, if executed, produces a certain output within a finite time frame.
• An algorithm is completely independent of any programming language.



Example of an Algorithm

• A good algorithm must satisfy the following characteristics:


• Input: There must be a finite number of inputs for the algorithm.
• Output: There must be some output produced as a result of
execution of the algorithm.
• Definiteness: There must be a definite sequence of operations for
transformation of input into output.
• Effectiveness: Every step of the algorithm should be basic and
essential.
• Finiteness: The transformation of input to output must be achieved in finite
steps.
Example of an Algorithm

• Following are desirable characteristics of an algorithm:


• The algorithm should be general and able to solve several cases.
• The algorithm should use resources efficiently, i.e., take less time and memory in producing the result.
• The algorithm should be understandable, so that anyone can understand it and apply it to their own problem.
• The algorithm should have the uniqueness property: each instruction of the algorithm is unambiguous and clear.



Example of an Algorithm

• Let us find the GCD of a = 1071 and b = 462 using Euclid's algorithm:
• Step 1. Divide a = 1071 by b = 462 and store the remainder in r: r = 1071 % 462 (here, % represents the remainder operator), so r = 147.
• Step 2. If r = 0, the algorithm terminates and b is the GCD. Otherwise, go to Step 3. Here r is not zero, so we go to Step 3.
• Step 3. The integer a gets the current value of b, and the new value of b becomes the current value of r. Here, a = 462 and b = 147.
• Go back to Step 1.
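The steps above can be written as a short C function (a minimal sketch; the function name is ours):

```c
#include <assert.h>

/* Euclid's algorithm: repeatedly replace (a, b) with (b, a % b)
   until the remainder becomes 0; the last nonzero b is the GCD. */
int gcd(int a, int b) {
    while (b != 0) {
        int r = a % b;   /* Step 1: remainder of a divided by b */
        a = b;           /* Step 3: a gets the current value of b */
        b = r;           /*         b gets the remainder r        */
    }
    return a;            /* Step 2: remainder 0, so answer found  */
}
```

For a = 1071 and b = 462 the iterations produce the pairs (462, 147), (147, 21), (21, 0), so the GCD is 21.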



Basic Building Blocks of Algorithms
• An algorithm is a procedural way to write the solution of a problem. It is
designed with five basic building blocks, namely

• Sequencing/Step by step actions

• Selection/Decision

• Iteration/Repetition or Loop

• Procedure

• Recursion
A Survey of Common Running Time
• To compare two algorithms for a problem, running time is generally used
which is defined as the time taken by an algorithm in generating the output.
• An algorithm is better if it takes less running time. The “time” here is not
necessarily the clock time. However, this measure should be invariant to any
hardware used.
• The running time of an algorithm can be represented in terms of the number of operations executed for a given input. The more operations, the larger the running time of the algorithm.
• This running time of an algorithm for producing the output is also known as
time complexity.



A Survey of Common Running Time
• Following are the generalized forms of running time for algorithms:

• Constant Time (O(1)): If the running time does not depend on the input size (n), then it is known as constant running time. It can be represented as T(n) = O(1).



A Survey of Common Running Time
• Linear Time (O(n)): If the time complexity is at most a constant factor times the size of the input, then it is known as linear time complexity. It is represented as T(n) <= k·n, where k is a constant, or T(n) = O(n). For example, finding the minimum of n elements:

minimum = a[1]
for i = 2 to n
    if a[i] < minimum
        minimum = a[i]
    end if
end for
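The same scan as runnable C (0-based indexing; the function name is illustrative). One comparison per element gives T(n) = O(n):

```c
#include <assert.h>

/* Single pass over n elements; returns the smallest value. */
int find_minimum(const int a[], int n) {
    int minimum = a[0];
    for (int i = 1; i < n; i++) {
        if (a[i] < minimum)   /* one comparison per element */
            minimum = a[i];
    }
    return minimum;
}
```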



A Survey of Common Running Time
• Logarithmic Time (O(log n)): If the time complexity of an algorithm is proportional to the logarithm of the input size, it is known as logarithmic time complexity and depicted as O(log n) time.



A Survey of Common Running Time
• Quadratic Time (T(n) = O(n2)): The algorithm has a pair of nested loops. The outer loop iterates O(n) times, and for each iteration the inner loop takes O(n) time, so we get O(n2) by multiplying these two factors of n. It is useful for problems with small input sizes or for elementary sorting algorithms.



A Survey of Common Running Time
• Cubic Time (T(n) = O(n3)): It often occurs when the algorithm has three nested loops and each loop has a maximum of n iterations.



A Survey of Common Running Time
• Polynomial Time (O(nk)): This running time is obtained when a search over all subsets of a set of size k is performed.



A Survey of Common Running Time

Exponential Time: Beyond polynomial time complexity, there are two other types of bounds:
• Exponential Time O(2n)
• Factorial Time O(n!)



Analysis & Complexity of Algorithm
• The term "analysis of algorithms" means to understand the complexity of an
algorithm in terms of time complexity and storage requirement.
• System performance is directly dependent on the efficiency of the algorithm, in terms of both time complexity and memory.
• There are 3 cases to consider when finding the complexity function f(n):
• Worst case − the maximum number of steps taken on any instance of size n.
• Best case − the minimum number of steps taken on any instance of size n.
• Average case − the number of steps taken on average over all instances of size n.



Analysis & Complexity of Algorithm
• To understand the Best, Worst and Average cases of an algorithm,
consider a linear array
Algorithm: (Linear search)
/* Input: A linear array A with n elements and a search element x.
Output: Finds the location LOC of x in the array A (by returning an index), or returns LOC = 0 to indicate x is not present in A. */
1. [Initialize] Set K = 1 and LOC = 0.
2. Repeat steps 3 and 4 while (LOC == 0 && K <= n)
3.     If (x == A[K]) then LOC = K
4.     Else K = K + 1
5. If (LOC == 0)
6.     Print ("x is not present in the given array A")
7. Else
8.     Print ("x is present in the given array A at location A[LOC]")
9. Exit [end of algorithm]
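A C rendering of this algorithm (illustrative; it returns the 1-based location LOC, with 0 meaning x is not present):

```c
#include <assert.h>

/* Returns the 1-based location of x in A[0..n-1], or 0 if absent. */
int linear_search(const int A[], int n, int x) {
    int K = 1, LOC = 0;
    while (LOC == 0 && K <= n) {
        if (x == A[K - 1])
            LOC = K;      /* found: record the 1-based position */
        else
            K = K + 1;    /* not found yet: move to next element */
    }
    return LOC;
}
```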
Analysis & Complexity of Algorithm

• Analysis of linear search algorithm


• The complexity of the search algorithm is given by the number C of comparisons between x and the array elements A[K].
• Best case: Clearly the best case occurs when x is the first element in the array A, that is, x = A[1]. In this case, C(n) = 1.
• Worst case: Clearly the worst case occurs when x is the last element in the array A or x is not present in the given array A. In this case, we have C(n) = n.


Analysis & Complexity of Algorithm
• Average case: The searched element x appears in array A, and it is equally likely to occur at any position in the array. The number of comparisons can be any of the numbers 1, 2, 3, …, n, and each occurs with probability p = 1/n. Then

C(n) = 1·(1/n) + 2·(1/n) + … + n·(1/n)
     = (1 + 2 + … + n)·(1/n)
     = [n(n+1)/2]·(1/n)
     = (n+1)/2



Types of Problems

Sorting
Searching
Graph problems
Combinatorial problems
Geometric problems
Numerical problems
Types of Problems

Sorting
• Sorting is the process of arranging a given set of items in a certain order, assuming that the nature of the items allows such an ordering. For example, sorting a set of numbers in increasing or decreasing order, or sorting character strings, like names, in alphabetical order.
• For any sorting algorithm, the following two characteristics are desirable:
• Stability
• In-place
Types of Problems

Searching
• Searching is finding an element, referred to as the search key, in a given set of items. Searching is one of the most important and frequently performed operations on any dataset/database.



Types of Problems

Graph Problems
• It is helpful for researchers to map a computational problem to a graph problem. Many computational problems can be solved using graphs.
• Most problems like visiting all the nodes of a graph, routing in networks, and finding the minimum-cost path, i.e., the shortest path, the path with minimum delay, etc., can be solved efficiently with graph algorithms.
Types of Problems

Combinatorial Problems

• These types of problems have a combination of solutions, i.e., more than one solution is possible. The aim of combinatorial problems is to find permutations, combinations, or subsets satisfying the given conditions.



Types of Problems

Geometric Problems
• These types of problems deal with geometric objects such as points, lines, and polygons.



Types of Problems

• Following are widely known classic problems of computational geometry:


• The closest-pair problem
• The convex-hull problem
• The closest-pair problem is to find the closest pair out of a given set of
points in the plane.
• In the convex-hull problem, the smallest convex polygon is to be
constructed so that it includes all the points of a given set.



Types of Problems

• Numerical Problems
• Problems of a numerical computing nature include simultaneous linear equations, differential equations, definite integration, and statistics. Most numerical problems can only be solved approximately.
• The biggest drawback of numerical algorithms is the accumulation of errors over multiple iterations, due to rounding off the approximated result at each iteration.



Problem Solving Techniques

Divide and Conquer Approach


• This is one of the popular approaches, in which a problem is divided into smaller sub-problems. These sub-problems are further divided into smaller sub-problems until they can no longer be divided.



Problem Solving Techniques

• An algorithm following the divide & conquer technique involves the following steps:

• Step 1. Divide the problem (top level) into a set of sub-problems (lower
level).

• Step 2. Solve every sub-problem individually by recursive approach.


• Step 3. Merge the solution of the sub-problems into a complete solution
of the problem.
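These three steps can be sketched in C with a small illustrative example (ours, not from the slides): recursively finding the maximum element of an array.

```c
#include <assert.h>

/* Divide-and-conquer maximum of a[lo..hi]. */
int range_max(const int a[], int lo, int hi) {
    if (lo == hi)                  /* smallest sub-problem: one element */
        return a[lo];
    int mid = lo + (hi - lo) / 2;  /* Step 1: divide into two halves    */
    int left  = range_max(a, lo, mid);      /* Step 2: solve each half  */
    int right = range_max(a, mid + 1, hi);  /*         recursively      */
    return left > right ? left : right;     /* Step 3: merge solutions  */
}
```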



Problem Solving Techniques

• Greedy Technique
• Using the greedy approach, optimization problems are solved efficiently.
• In an optimization problem, the given set of input values is either to be maximized or minimized, subject to some constraints or conditions.

• A greedy algorithm always picks the best choice (greedy approach) out of many at a particular moment, to optimize a given objective.


Problem Solving Techniques
• The greedy method chooses the local optimum at each step, and this decision may result in an overall non-optimal or optimal solution.

• Following are some of the examples of the greedy approach.


• Kruskal's Minimum Spanning Tree

• Prim's Minimal Spanning Tree

• Dijkstra's shortest path

• Knapsack Problem
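A minimal sketch of the greedy idea (our illustration, not one of the classic problems above): making change with the canonical coin set {25, 10, 5, 1}, for which always taking the largest coin that fits happens to be optimal. For arbitrary coin sets this local choice can fail, which is exactly the greedy caveat.

```c
#include <assert.h>

/* Greedy coin change: at each step take the largest coin that
   still fits. Optimal for {25, 10, 5, 1}, not for all coin sets. */
int min_coins_greedy(int amount) {
    const int coins[] = {25, 10, 5, 1};
    int count = 0;
    for (int i = 0; i < 4; i++) {
        count += amount / coins[i];  /* take as many as possible */
        amount %= coins[i];          /* remainder still to pay    */
    }
    return count;
}
```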
Problem Solving Techniques

Dynamic Programming
• Dynamic Programming is a bottom-up approach that involves finding solutions to all sub-problems, saving these partial results, and then reusing them to solve larger sub-problems until the solution to the original problem is obtained.
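As a minimal illustrative sketch of this bottom-up idea (ours, not from the slides), consider computing Fibonacci numbers by saving and reusing the two most recent partial results instead of recomputing them:

```c
#include <assert.h>

/* Bottom-up dynamic programming: solve F(0), F(1), ..., F(n) in
   order, reusing the two previously saved results at each step. */
long fib(int n) {
    if (n < 2)
        return n;
    long prev = 0, curr = 1;      /* saved partial results */
    for (int i = 2; i <= n; i++) {
        long next = prev + curr;  /* reuse, don't recompute */
        prev = curr;
        curr = next;
    }
    return curr;
}
```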



Problem Solving Techniques

• Branch and Bound


• The branch and bound algorithm efficiently solves discrete and combinatorial optimization problems. In a branch-and-bound algorithm, a rooted tree is formed with the full solution set at the root.

• The algorithm explores the branches of this tree, which represent subsets of the solution set.



Problem Solving Techniques

• Randomized Algorithms
• In a randomized algorithm, a random number is selected at any stage of the solution and is used for computation of the solution; that's why it is called a randomized algorithm.
• In other words, algorithms that make random choices for faster solutions are known as randomized algorithms.
Problem Solving Techniques

• Backtracking Algorithm
• A backtracking algorithm is like creating checkpoints while exploring new solutions. It works analogously to depth-first search, searching all the possible solutions.

• During the exploration of solutions, if a solution doesn't work, the algorithm backtracks to the previous checkpoint and then tries the other alternatives to get the solution. If there are no more choice points, the search fails.



Deterministic & Stochastic Algorithms

Algorithms can be categorized as either deterministic or stochastic in nature. An algorithm is deterministic if the next output can be predicted/determined from the input and the state of the program, whereas stochastic algorithms are random in nature. Problems with unpredictable results cannot be solved using the deterministic approach.



Important Topics

Building Blocks of Algorithms


Sequencing Selection and Iteration
Procedure and Recursion
Common Running Time
Analysis and Complexity of Algorithm
Problem solving Techniques
Deterministic and Stochastic Algorithm



Summary

• In this session, we have seen that an algorithm is independent of any programming language. An algorithm is designed to understand and analyze the solution of a computational problem.
• An algorithm can be analyzed in terms of time complexity and space complexity. To evaluate algorithms for a problem, their time and space complexities are considered; an algorithm taking less time and space to produce the desired output is considered better.

Design and Analysis of Algorithms
Block-1 Unit-2
Asymptotic Bounds
Topics to be Covered
Introduction
Some Useful Mathematical Functions & Notations
Mathematical Expectation
Principle of Mathematical Induction
Efficiency of an Algorithm
Asymptotic Functions & Notations
Important Topics
Summary
Introduction
• In the last session, we discussed algorithms and their basic properties. We also discussed deterministic and stochastic algorithms.
• In this session, we will discuss the process of computing complexities of different algorithms, useful mathematical functions and notations, the principle of mathematical induction, and some well-known asymptotic functions.
• Algorithmic complexity is an important area in computer science. If we know the complexities of different algorithms, then we can easily answer the following questions:
• How long will an algorithm/ program run on an input ?
• How much memory will it require ?
• Is the problem solvable ?
Some Useful Mathematical
Functions And Notations
• Functions & Notations : Just to put the subject matter in proper
context, we recall the following notations and definitions.
• Functions & Notations

• N = {1, 2, 3, …}
• I = {…, −2, −1, 0, 1, 2, …}
• R = set of Real numbers.

• Notation: If a1, a2, …, an are n real variables/numbers, then:



Some Useful Mathematical
Functions And Notations
• Summation: the expression a1 + a2 + … + an is written in shorthand, using the sigma notation, as Σ (i = 1 to n) ai.



Some Useful Mathematical
Functions And Notations
• Product: the expression 1 × 2 × … × n is denoted in shorthand as n! (read as "n factorial").



Some Useful Mathematical
Functions And Notations
• Function

• For two given sets A and B, a rule f which associates with each element of A a unique element of B is called a function from A to B. If f is a function from a set A to a set B, then we denote this fact by f: A -> B. For example, the function f which associates the cube of a real number with a given real number x can be written as f(x) = x3.

• If the value of x is 2, then f maps 2 to 8.



Some Useful Mathematical
Functions And Notations
• Floor Function: Let x be a real number. The floor function maps each real number x to the integer which is the greatest of all integers less than or equal to x. Example: floor(2.5) = 2, floor(−2.5) = −3.

• Ceiling Function: Let x be a real number. The ceiling function maps each real number x to the integer which is the least of all integers greater than or equal to x. Example: ceil(2.5) = 3, ceil(−2.5) = −2.



Some Useful Mathematical
Functions And Notations
• Multiplying by a constant or an expression
• If C is a constant, then C(a1 + a2 + … + an) = C·a1 + C·a2 + … + C·an, i.e., a constant factor can be moved inside or outside a summation.



Some Useful Mathematical
Functions And Notations
• Logarithms
• Logarithms are important mathematical tools which are widely used in the analysis of algorithms. For positive reals a, b, c (with the bases not equal to 1), some important formulas related to logarithms are given below:
• loga(bc) = loga b + loga c
• loga(b^n) = n loga b
• logb a = 1 / (loga b)
• loga(1/b) = −loga b
• a^(logb c) = c^(logb a)



Some Useful Mathematical
Functions And Notations
• Modular Arithmetic/Mod Function
• The modular function, or mod function, returns the remainder after a number (called the dividend) is divided by another number (called the divisor).

• Definition
• b mod n: if n is a given positive integer and b is any integer, then b mod n = r, where 0 <= r < n and b = k * n + r for some integer k.
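A small C sketch of this definition. Note that C's built-in % operator can return a negative remainder when b is negative, so the sketch adjusts the result into the range [0, n):

```c
#include <assert.h>

/* Mathematical mod: returns r with 0 <= r < n and b = k*n + r. */
int mod(int b, int n) {
    int r = b % n;    /* C remainder: may be negative for b < 0 */
    if (r < 0)
        r += n;       /* shift into the range [0, n)            */
    return r;
}
```

For example, mod(−7, 3) = 2, since −7 = (−3)·3 + 2.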
Mathematical Expectation

• In average-case analysis of algorithms, we need the concept


of Mathematical expectation. In order to understand the
concept better, let us first consider an example.
• Suppose, the students of MCA, who completed all the
courses in the year 2005, had the following distribution of
marks.



Mathematical Expectation

• If a student is picked up randomly from the set of students under consideration,


what is the % of marks expected of such a student? After scanning the table given
above, we intuitively expect the student to score around the 40% to 60% class,
because, more than half of the students have scored marks in and around this class.
Mathematical Expectation

• Assuming that marks within a class are uniformly scored by the students in
the class, the above table may be approximated by the following more
concise table:

• As explained earlier, we expect a student picked up randomly, to score


around 50% because more than half of the students have scored marks around
50%.
Mathematical Expectation
• Thus, we assign weight (8/100) to the score 10% (because 8 out of 100 students score, on average, 10% marks); (20/100) to the score 30%; and so on.
• Thus, the expected score is the weighted sum of the scores: E = Σ (score × weight).


The Principle of Induction

Induction plays an important role in many facets of data structures and algorithms.

Mathematical induction is a method of writing a mathematical proof, generally to establish that a given statement is true for all natural numbers.



The Principle of Induction
• It consists of the following three major steps:

• Induction base − In this stage we verify/establish the correctness of the statement for the initial value. It is the proof that the statement is true for n = 1 or some other starting value.

• Induction hypothesis − It is the assumption that the statement is true for some arbitrary value n = k, where k ≥ 1.

• Induction step − In this stage we prove that if the statement is true for n = k, it must also be true for n = k + 1.
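As a standard worked example of the three steps, here is the proof that 1 + 2 + ⋯ + n = n(n+1)/2:

```latex
\textbf{Base: } n = 1: \quad 1 = \tfrac{1(1+1)}{2}. \\
\textbf{Hypothesis: } \text{assume } 1 + 2 + \dots + k = \tfrac{k(k+1)}{2}
  \text{ for some } k \ge 1. \\
\textbf{Step: } 1 + 2 + \dots + k + (k+1)
  = \tfrac{k(k+1)}{2} + (k+1)
  = \tfrac{(k+1)(k+2)}{2},
```

which is the statement for n = k + 1, so the formula holds for all natural numbers.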
Efficiency of an Algorithm

• The size of an instance of the problem under consideration plays a central role in determining the complexity of the solution.

• For example, finding the product of two 2 × 2 matrices will take much less time than the time taken by the same algorithm for multiplying, say, two 100 × 100 matrices.



Efficiency of an Algorithm

• Different measures of the size of an instance are used for different types of problems. Two examples are:

• In sorting and searching problems, the number of elements, which are to be sorted or are considered for searching, is taken as the size of the instance of the problem of sorting/searching.
• In the case of solving polynomial equations, or while dealing with the algebra of polynomials, the degrees of the polynomial instances may be taken as the sizes of the corresponding instances.



Asymptotic Analysis & Notations

• Asymptotic analysis is a more formal method for analyzing algorithmic efficiency. It is a mathematical tool to analyze the time and space complexity of an algorithm as a function of input size.

• For example, consider analyzing the worst-case time complexity of sorting algorithms such as Bubble sort, Insertion sort and Selection sort.



Asymptotic Analysis & Notations

• These algorithms in the worst case take T(n) = O(n2), where n is the size of the list.

• In contrast, Merge sort takes time T(n) = O(n·log2(n)).


Asymptotic Analysis & Notations

• Some common orders of growth seen often in complexity analysis are:

O(1)        constant
O(log n)    logarithmic
O(n)        linear
O(n log n)  n log n
O(n2)       quadratic
O(n3)       cubic
O(n^k)      polynomial
O(2^n)      exponential


Asymptotic Analysis & Notations

• Worst Case and Average Case Analysis

• The worst case for a search algorithm occurs when the element to be searched for is either not in the list or located at the end of the list.
• In this case the algorithm runs for the longest possible time: it searches the entire list.
• If an algorithm runs in time T(n), it means that T(n) is an upper bound on the running time that holds for all inputs of size n. This is called worst-case analysis.
Asymptotic Analysis & Notations

• Average-case analysis provides the average amount of time to solve a problem. In it, we calculate the expected time spent on a randomly chosen input.

• This kind of analysis is generally more difficult than worst-case analysis.



Asymptotic Analysis & Notations

• Asymptotic Notations

• There are mainly three asymptotic notations

• Big-O notation,

• Big-Θ ( Theta) notation

• Big-Ω (Omega) notation



Asymptotic Analysis & Notations

• Big-O notation: Upper Bounds


• Big O is used to represent the upper bound or a worst case of
an
algorithm.
• It bounds the growth of the running time from above for large
value of input sizes. It notifies that a particular procedure
will never go beyond a specific time for every input n.
• One important advantage of big-o notation is that it makes
algorithms much easier to analyze.
Asymptotic Analysis & Notations

• Big-Oh notation can be defined as follows:

f(n) = O(g(n)) if there exist positive constants C and n0 such that
f(n) <= C·g(n) for all n >= n0

• When the running time f(n) is O(g(n)), it means the running time of the function is bounded from above, for input size n >= n0, by C·g(n).


Asymptotic Analysis & Notations

• Verify that the complexity of 3n2 + 4n − 2 is O(n2).

• In this case f(n) = 3n2 + 4n − 2 and g(n) = n2.

• The function is quadratic, since for n >= 1:

3n2 + 4n − 2 <= 3n2 + 4n2 = 7n2

so 3n2 + 4n − 2 = O(n2).
Asymptotic Analysis & Notations

• Now, to find values of c and n0 such that 3n2 + 4n − 2 <= c·n2 for all n >= n0:

• If n0 is 1, then c must satisfy 3 + 4 − 2 = 5 <= c; take c = 6.

So the function can now be written as:
3n2 + 4n − 2 <= 6n2 for all n >= 1
So, we can say:

3n2 + 4n − 2 = O(n2).
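The chosen constants can also be checked mechanically over a range of n (a throwaway helper of ours, not part of the definition):

```c
#include <assert.h>

/* Checks 3n^2 + 4n - 2 <= 6n^2 for n = 1 .. upto. */
int bound_holds(long upto) {
    for (long n = 1; n <= upto; n++) {
        if (3 * n * n + 4 * n - 2 > 6 * n * n)
            return 0;  /* counterexample found */
    }
    return 1;          /* bound held everywhere */
}
```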
Asymptotic Analysis & Notations

• Big-Omega (Ω) Notation

• Big Omega describes the asymptotic lower bound of an algorithm, whereas big-Oh (O) notation represents an upper bound of an algorithm.

• It says that an algorithm takes at least this amount of time, without mentioning the upper bound.

• f(n) = Ω(g(n)) if and only if there exist constants C > 0 and n0 such that f(n) >= C·g(n) for all n >= n0. The following graph illustrates the growth of f(n) = Ω(g(n)).


Asymptotic Analysis & Notations

• If f(n) is Ω(g(n)), it means that the growth of f(n) is asymptotically no slower than that of g(n), no matter what value of n is provided:

f(n) = Ω(g(n))



Asymptotic Analysis & Notations

Θ (Theta) notation: Tight Bounds

• If the running time of an algorithm is Θ(n), it means that once n gets large enough, the running time is at least c1·n and at most c2·n, where c1 and c2 are constants. Θ provides both upper and lower bounds of an algorithm.


Asymptotic Analysis & Notations

• The following figure illustrates the function f(n) = Θ(g(n)), where the value of f(n) lies between c1·g(n) and c2·g(n) for sufficiently large values of n.



Asymptotic Analysis & Notations

• Theorem: For any two functions f(x) and g(x), f(x) = Θ(g(x)) if and only if f(x) = O(g(x)) and f(x) = Ω(g(x)).

• If f(n) is Θ(g(n)), this means that the growth of f(n) is asymptotically at the same rate as g(n); or, we can say, the growth of f(n) is neither asymptotically slower nor faster than g(n), no matter what value of n is provided.



Asymptotic Analysis & Notations
• Some Useful Theorems for O, Ω, Θ

Let f(n) = a_m n^m + a_(m-1) n^(m-1) + … + a_1 n + a_0, and assume |a_m| + |a_(m-1)| + … + |a_1| + |a_0| = c. Then for n >= 1, |f(n)| <= c·n^m, and hence f(n) = O(n^m).





Important Topics

Mathematical Functions & Notations


Summation & Product, Function, Logarithms
Mathematical Expectation
Asymptotic Functions & Notations
Principle of Mathematical Induction
Summary

• In this session we have seen that:

• For solving any problem, an algorithm is designed. An algorithm is a definite, step-by-step procedure for performing some task.

• An algorithm takes some sort of input and produces some sort of output.

• An algorithm must be efficient and easy to understand.

• There are three popular asymptotic notations, namely big O, big Ω, and big Θ.
Design and Analysis of Algorithms
Block -1 Unit-3
Complexity Analysis of Simple Algorithms
Topics to be Covered

• Introduction

• A Brief Review of Asymptotic Notations

• Analysis of Simple Constructs or Constant Time

• Analysis of Simple Algorithms

• Important Topics

• Summary
Introduction

• Computational complexity describes the amount of processing time required by an algorithm to give the desired result.

• The worst-case time complexity (big O notation) is the maximum amount of time required to execute an algorithm for inputs of a given size.

• Average-case complexity, which is the average of the time taken on inputs of a given size, is less commonly used.
Asymptotic Notations
• The purpose of these notations is to focus only on the time required to execute an algorithm and to compare the relative rates of growth of functions.
• Assume T(n) and f(n) are two functions:
• T(n) = O(f(n)) if there are two positive constants C and n0 such that T(n) <= C·f(n) for all n >= n0
• T(n) = Ω(f(n)) if there are two positive constants C and n0 such that T(n) >= C·f(n) for all n >= n0
• T(n) = Θ(f(n)) if and only if T(n) = O(f(n)) and T(n) = Ω(f(n))
Analysis of Simple Constructs or
Constant Time
• O(1): The time complexity of a function (or set of statements) is considered O(1) if (i) the statements are simple statements like assignment, increment or decrement operations, and declaration statements, and (ii) there is no recursion, loop, or call to any other function. Example:

int x;
x = x + 5;
x = x - 5;


Analysis of Simple Constructs or
Constant Time
• O(n): This is the running time of a single looping statement in which the loop variable is incremented or decremented by some constant value. Example:

for (int i = 1; i <= n; i += c) {
    // simple statement(s)
}
for (int i = n; i > 0; i -= c) {
    // simple statement(s)
}
Analysis of Simple Constructs or
Constant Time
• O(n^c): This is the running time of nested loops. The time complexity of nested loops is proportional to the number of times the innermost statement is executed. For example, the following sample loops have O(n2) time complexity:

for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some simple statements
    }
}
Analysis of Simple Constructs or
Constant Time
• O(logn): If the loop index in any code fragment is divided or multiplied
by a constant value, the time complexity of the code fragment is O(logn).
Example -

for (int i = 1; i <= n; i *= c) {
    // some simple statements
}
for (int i = n; i > 0; i /= c) {
    // simple statements
}
Analysis of Simple Constructs or
Constant Time
• Time complexities of consecutive loops: If a code fragment has more than one loop, the time complexity of the fragment is the sum of the time complexities of the individual loops. Example -
for (int i = 1; i <= m; i += c) {
    // simple statements taking θ(1)
}
for (int i = 1; i <= n; i += c) {
    // simple statements taking θ(1)
}
Analysis of Simple Constructs or
Constant Time
• Consecutive if else statements
• The running time of an if-else statement is the time taken to test the condition plus the larger of the running times of statement1 and statement2.
The code fragment of if-else is:
if (condition)
    statement1;
else
    statement2;



Analysis of Simple Algorithms

• A Summation Algorithm
• The following is a simple program to calculate the sum of the cubes of the first n natural numbers:
int sum_of_n_cubes(int n)
{
    int i, temp_result;
    temp_result = 0;
    for (i = 1; i <= n; i++)
        temp_result = temp_result + i * i * i;
    return temp_result;
}
Analysis of Simple Algorithms

• Polynomial Evaluation
• A polynomial is an expression consisting of one or more terms. A term comprises a coefficient and an exponent.
P(x) = 15x^4 + 7x^2 + 9x + 7    P(x) = 14x^4 + 17x^3 − 12x^2 + 13x + 16
• A polynomial may be represented in the form of an array or a structure.



Analysis of Simple Algorithms
• A structure representation of a polynomial contains two parts
• coefficient
• the corresponding exponent.
• The following is the structure definition of a polynomial:
struct polynomial
{
    int coefficient;
    int exponent;
};
Analysis of Simple Algorithms
• Analysis of Brute Force Method
• A brute force approach to evaluate a polynomial is to evaluate all
terms one by one.
• First calculate x^n by repeated multiplication, multiply the value with the related coefficient an, repeat the same steps for the other terms, then return the sum:
p(x) = an∗x∗x∗…∗x∗x + an−1∗x∗x∗…∗x∗x + an−2∗x∗x∗…∗x∗x + ⋯ + a2∗x∗x + a1∗x + a0
• The term of degree i needs about i multiplications, so the brute force method takes O(n^2) multiplications overall.


Analysis of Simple Algorithms
• Analysis of Horner’s Method

In the first term it takes one multiplication, in the second term one multiplication, and so on; in total only n multiplications and n additions are needed:

P(x) = (…(((an∗x + an−1)∗x + an−2)∗x + ... + a2)∗x + a1)∗x + a0



Analysis of Simple Algorithms
• Pseudo code for polynomial evaluation using Horner's method, Horner(a, n, x):
1. Set p = a[n], the coefficient of the n-th term of the polynomial
2. Set i = n − 1
3. Compute p = p * x + a[i]
4. Set i = i − 1
5. If i is greater than or equal to 0, go to step 3
6. The final polynomial value at x is p



Analysis of Simple Algorithms
• Step II
• Algorithm to evaluate polynomial at a given point x using Horner's
rule:
• Input: An array A[0..n] of coefficient of a polynomial of degree n
and a point x.
• Output: The value of polynomial at given point x.



Analysis of Simple Algorithms

Evaluate_Horner(A, n, x)
{
    p = A[n];
    for (i = n-1; i >= 0; i--)
        p = p * x + A[i];
    return p;
}
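The same loop is easy to mirror in Python; the coefficient list is indexed by exponent, matching the array A[0..n] above (this version is an illustration only):

```python
def evaluate_horner(a, x):
    """Evaluate a[0] + a[1]*x + ... + a[n]*x^n using Horner's rule.

    One multiplication and one addition per coefficient, so O(n) total.
    """
    p = a[-1]                              # p = a[n]
    for i in range(len(a) - 2, -1, -1):    # i = n-1 down to 0
        p = p * x + a[i]
    return p

# P(x) = 15x^4 + 7x^2 + 9x + 7 evaluated at x = 2:
print(evaluate_horner([7, 9, 7, 0, 15], 2))  # 240 + 28 + 18 + 7 = 293
```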



Analysis of Simple Algorithms

• Complexity Analysis
• A polynomial of degree n using Horner's rule is evaluated as below:
• Initial assignment, p = a[n]
• After the first iteration, p = x·an + an−1
• After the second iteration, p = x(x·an + an−1) + an−2
• Every subsequent iteration uses the result of the previous iteration, i.e., the next iteration multiplies the previous value of p by x and then adds the next coefficient.
• The loop runs n times with one multiplication and one addition each, so the complexity is O(n).



Matrix (N x N) Multiplication

• A matrix is a very important tool: managing data in matrix form makes it easy to manipulate and obtain more information. One of the basic operations on matrices is multiplication.



Matrix (N X N) Multiplication

• As an example, multiply two square matrices of order n x n and find the time complexity.
• Multiply two matrices A and B of order n x n each and store the result in a matrix C of order n x n. A square matrix of order n x n is an arrangement of a set of elements in n rows and n columns.



Matrix (N X N) Multiplication

• A matrix of order 3 x 3 is represented as

        a11 a12 a13
A  =    a21 a22 a23      (3 x 3 matrix)
        a31 a32 a33



Matrix (N X N) Multiplication
• Step I:
• Pseudo code: for the matrix multiplication problem, we multiply two matrices A and B of order 3x3 each and store the result in a matrix C of order 3x3.
• Multiply the first element of the first row of the first matrix with the first element of the first column of the second matrix.
• Similarly, perform this multiplication for the whole first row of the first matrix and the first column of the second matrix; now take the sum of these products.
• The sum obtained will be the first element of the product matrix C.





Matrix (N X N) Multiplication
• Step II :
• Algorithm for multiplying two square matrix of order n x n
and find the product matrix of order n x n
• Input: Two n x n matrices A and B
• Output: One n x n matrix C = A x B
Matrix_Multiply(A, B, C, n)
{
    for i = 0 to n-1
        for j = 0 to n-1 {
            C[i][j] = 0
            for k = 0 to n-1        // innermost loop
                C[i][j] = C[i][j] + A[i][k] * B[k][j]
        }
}
Matrix (N X N) Multiplication

• Complexity Analysis

• The first step is the outer for loop, which executes n times, i.e., it takes O(n) time.
• The second, nested for loop also runs n times, taking O(n) time, and the initialization inside it takes constant time, i.e., O(1).



Matrix (N X N) Multiplication
• The third for loop, i.e., the innermost nested loop, also runs n times and takes O(n) time. The assignment statement inside the third for loop costs O(1) as it includes one multiplication and one addition.
• The total time complexity of the algorithm is therefore O(n^3) for matrix multiplication of order n x n.
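The three nested loops above can be sketched in Python (an illustration with plain lists; each of the three loops runs n times, giving the O(n^3) bound):

```python
def matrix_multiply(A, B):
    """Multiply two n x n matrices with three nested O(n) loops: O(n^3)."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            C[i][j] = 0
            for k in range(n):                 # innermost loop
                C[i][j] += A[i][k] * B[k][j]   # one multiply, one add: O(1)
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matrix_multiply(A, B))  # [[19, 22], [43, 50]]
```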



Matrix (N X N) Multiplication
• Exponent Evaluation
• Exponent evaluation is an important operation with applications in cryptography and encryption methods. The exponent tells us how many times to multiply the base by itself. Two binary methods are used:
• Left to right binary exponentiation
• Right to left binary exponentiation
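A Python sketch of the right-to-left variant (our illustration): each bit of the exponent is examined once, so computing x^n needs only O(log n) multiplications instead of n − 1.

```python
def power(x, n):
    """Compute x**n by right-to-left binary exponentiation.

    Scans the bits of n from least to most significant, squaring the
    base at each step; O(log n) multiplications in total.
    """
    result = 1
    while n > 0:
        if n & 1:          # current bit of the exponent is 1
            result *= x
        x *= x             # square the base
        n >>= 1            # move to the next bit
    return result

print(power(2, 10))  # 1024
```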



Linear Search

• Linear search is the simplest method of searching: the element is searched sequentially in the list.
• This method can be performed on a sorted or an unsorted list (usually arrays).
• In case of a sorted list, searching starts from the 0th element and continues until the element is found, or until an element whose value is greater than the value being searched (assuming the list is sorted in ascending order) is reached.



Linear Search

• Linear_Search(A[ ], X)
• Step 1: Initialize i to 1
• Step 2: If i exceeds the end of the array, print “element not found” and Exit
• Step 3: If A[i] = X, print “Element X found at index i in the array” and Exit
• Step 4: Increment i and go to Step 2
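The steps above can be sketched in Python (0-based indices here, and a return value of -1 instead of printing “element not found”; both choices are ours):

```python
def linear_search(A, x):
    """Scan the list sequentially; return the index of x, or -1."""
    for i in range(len(A)):
        if A[i] == x:
            return i       # element found at index i
    return -1              # element not found

print(linear_search([4, 2, 9, 7], 9))  # 2
```

In the worst case every element is examined once, so the running time is O(n).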





Sorting
• Sorting is the process of arranging a collection of data into either ascending
or descending order.
• Internal Sort: Internal sorts are sorting algorithms in which the complete data set to be sorted is available in the computer's main memory.
• External Sort: External sorting techniques are used when the complete data collection cannot reside in main memory and must reside in secondary storage, for example on a disk.



Sorting
• Bubble Sort
• It is the simplest sorting algorithm, in which each pair of adjacent elements is compared and exchanged if they are not in order.
• This algorithm is not recommended for large arrays.
• The largest element in the given unsorted array bubbles up towards the last place in every cycle/pass.



Sorting
Bubble Sort

Source: https://www.computersciencebytes.com/sorting-algorithms/bubble-sort/
Sorting
• bubble_sort(A, n)
{
    int i, j;
    for (i = 1; i <= n-1; i++)
        for (j = 0; j <= n-2; j++)
        {
            if (A[j] > A[j+1])
            {
                // swap two adjacent elements of array A
                exchange(A[j], A[j+1]);
            }
        }
}
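A Python transcription of the same passes (our illustration); the two nested loops give the O(n^2) running time:

```python
def bubble_sort(A):
    """Compare and swap adjacent out-of-order pairs; O(n^2) comparisons."""
    n = len(A)
    for i in range(n - 1):          # n-1 passes
        for j in range(n - 1):      # compare each adjacent pair
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]   # exchange
    return A

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

A common refinement shortens the inner loop each pass (the largest element is already in place) or stops early when a pass makes no swaps.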



Important Topics

• Explain asymptotic notations


• Analysis of simple constructs or constant time
• Summation algorithm
• Polynomial evaluation algorithm
• Exponent evaluation
• Sorting algorithm



Summary

• In this session, after a brief review of asymptotic notations, complexity analysis of simple algorithms is illustrated: simple summation, matrix multiplication, polynomial evaluation, searching and sorting.
• Horner's rule is discussed to evaluate a polynomial; its complexity is O(n).
• Basic matrix multiplication is explained for finding the product of two matrices of order n x n, with time complexity in the order of O(n^3). For exponent evaluation both approaches, i.e., left to right binary exponentiation and right to left binary exponentiation, are illustrated. The time complexity of these algorithms to compute x^n is O(log n).
• Different versions of the bubble sort algorithm are presented and their performance analysis is done at the end.
Design and Analysis of Algorithms
Block-1 Unit-4
Solving Recurrences
Topics to be Covered

• Introduction
• Recurrence Relation
• Methods for Solving Recurrence Relation
• Important Topics
• Summary

Introduction

• Complexity analysis of iterative algorithms is much easier than that of recursive algorithms. But once the recurrence relation/equation is defined for a recursive algorithm, which is not a difficult task, it becomes easy to obtain the asymptotic bounds (θ, O) of the recursive solution.
• In this unit we focus on recursive algorithms exclusively. Three techniques for solving recurrence equations are discussed: (i) the Substitution method, (ii) the Recursion Tree method and (iii) the Master method.
Recurrence Relation

• A recurrence relation describes the running time of a recursive algorithm. A recursive algorithm can be defined as an algorithm which makes a recursive call to itself with a smaller data size.
• Many problems are solved recursively, especially those which are solved through the divide and conquer technique: the main problem is divided into smaller sub-problems which are solved recursively.
• Merge Sort, Binary Search and Strassen's matrix multiplication algorithm are formulated as recursive algorithms.

Methods for Solving Recurrence
Relations
• Substitution Method
• Substitution is the opposite of induction: we start at n and move backward. In the substitution method, we guess a bound and then use mathematical induction to prove whether our guess is correct. It comprises two steps:
• Step 1: Guess the asymptotic bound of the solution.
• Step 2: Prove the correctness of the guess using mathematical induction.

Methods for Solving Recurrence
Relations
• Example: A Fibonacci sequence f0, f1, f2, … can be defined by a recurrence relation as follows:

• (Base step) The given recurrence says that if n = 0 then f0 = 0 and if n = 1 then f1 = 1. These two conditions (or values), where the recursion does not call itself, are called initial conditions (or base conditions).
• (Recursive step) This step is used to find the new terms f2, f3, …, from the existing (preceding) terms, by using the formula fn = fn−1 + fn−2 for n ≥ 2.
• This formula says that “by adding the two previous terms we can get the next term”.
• For example, f2 = f1 + f0 = 1 + 0 = 1.
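The base and recursive steps translate directly into code (a plain Python transcription, shown only to make the recurrence concrete):

```python
def fib(n):
    """Fibonacci by its recurrence: f(0)=0, f(1)=1, f(n)=f(n-1)+f(n-2)."""
    if n == 0:          # base condition
        return 0
    if n == 1:          # base condition
        return 1
    return fib(n - 1) + fib(n - 2)   # recursive step

print([fib(i) for i in range(7)])  # [0, 1, 1, 2, 3, 5, 8]
```

The running time of this naive version itself satisfies a recurrence, T(n) = T(n−1) + T(n−2) + O(1), which the methods of this unit are designed to solve.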
Methods for Solving Recurrence
Relations
• Recursion Tree method
• The recursion tree method is especially used to solve a recurrence of the form T(n) = aT(n/b) + f(n).

Source: https://www.gatevidyalay.com/recursion-tree-solving-recurrence-relations/#google_vignette

Methods for Solving Recurrence
Relations
• Method (steps) for solving a recurrence T(n) = aT(n/b) + f(n) using a recursion tree.
• We make a recursion tree for a given recurrence as follows:
• To make a recursion tree of the given recurrence (1), first put the value of f(n) at the root node of the tree and make ‘a’ child nodes of this root value f(n). Now the tree looks as follows:
Methods for Solving Recurrence
Relations
• Now we have to find the value of T(n/b) by putting (n/b) in place of n in the equation. That is:

Methods for Solving Recurrence
Relations
• Master Method
• A function f(n) is asymptotically positive if and only if there exists a real number n0 such that f(x) > 0 for all x > n0.
• The master method provides us a straightforward method for solving recurrences of the form T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function. This recurrence gives us the running time of an algorithm that divides a problem of size n into a sub-problems, each of size n/b. The a sub-problems are solved recursively, each in time T(n/b).

Methods for Solving Recurrence
Relations
• Theorem 1: Master Theorem
• The Master Method requires memorization of the following three cases; then the solution of many recurrences can be determined quite easily, often without using pencil and paper.
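The three cases themselves were lost in the slide scan; as stated in standard references, for T(n) = aT(n/b) + f(n) with a ≥ 1 and b > 1:

```latex
\textbf{Case 1: } f(n) = O\!\left(n^{\log_b a - \epsilon}\right)
  \text{ for some } \epsilon > 0
  \;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\right). \\
\textbf{Case 2: } f(n) = \Theta\!\left(n^{\log_b a}\right)
  \;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\log n\right). \\
\textbf{Case 3: } f(n) = \Omega\!\left(n^{\log_b a + \epsilon}\right)
  \text{ for some } \epsilon > 0,
  \text{ and } a\,f(n/b) \le c\,f(n)
  \text{ for some } c < 1 \text{ and all large } n
  \;\Rightarrow\; T(n) = \Theta\!\left(f(n)\right).
```

For example, Merge-Sort's recurrence T(n) = 2T(n/2) + O(n) falls in Case 2 (log_2 2 = 1), giving T(n) = Θ(n log n).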

Important Topics

• Substitution Method with theorem


• Recursion-tree Method with theorem
• Master Theorem with theorem

Summary

• In this session we have seen that when an algorithm contains a


recursive call to itself, its running time can often be described by a
recurrence equation which describes a function in terms of its
value on smaller inputs.

• There are three basic methods of solving the recurrence relation:

• The Substitution Method


• The Recursion-tree Method
• The Master Theorem

Design and Analysis of Algorithms
Block-2 Unit-1
Greedy Techniques
Topics to be Covered

• Introduction
• Example to Understand Greedy Techniques
• Formalization of Greedy Techniques
• An overview of Local and Global Optima
• Fractional Knapsack Problem
• A Task Scheduling Algorithm
• Huffman Codes
• Important Topics
• Summary
Introduction

• In a greedy algorithm, a set of resources is recursively divided based on the maximum, immediate availability of that resource at any given stage of execution.
• To solve a problem based on the greedy approach, there are two stages: scanning the list of items, and optimization.
• These stages are covered in parallel in the course of division of the array.



Examples to Understand Greedy
Techniques
• Example 1:
• Suppose there is a list of tasks along with the time taken by each task.
However, you are given with limited time only. The problem is which
set of tasks will you be doing so that you can complete maximum
number of tasks in the given amount of time.
• Solution:
• The intuitive approach would be the greedy approach, and the task selection criterion would be to select the task that takes the least amount of time. So, the first task will be the task which takes minimum time. The next task would be the one that takes minimum time among the remaining set of tasks, and so on.



Examples to Understand Greedy
Techniques
• Example 2:
• Consider a bus which can travel up to 40 kilometers (Km) with a full tank. We need to travel from location ‘A’ to location ‘B’, which is at a distance of 95 Km, as depicted in the Figure.

• In between ‘A’ and ‘B’, there are four gas stations, G1, G2, G3, and
G4, which are at distance of 20 KM, 37.5 KM, 55 KM, and 75 KM,
from location ‘A’ respectively. The problem is to determine the
minimum number of refills needed to reach the location ‘B’ from
location ‘A’.
Examples to Understand Greedy
Techniques
• Solution: Suppose, the tank refill decision criteria is considered to refill at
the gas station which is nearest when the tank is about to get empty.
• According to the criteria, if we start from location ‘A’, the tank will be
refilled at the second gas station (G2) as it is 37.5 KM from ‘A’.
• After this, we need to refill at the fourth gas station (G4) which is at a
distance of 37.5 KM from G2.
• From G4, we can easily reach the location ‘B’ as it is 20 KM. Therefore,
the number of refills required is two as represented in the Figure.
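The refill rule can be sketched in Python (the function and its interface are our illustration; the distances are those of the example). The greedy choice is to drive to the farthest station reachable on the current tank, which is exactly the "refill at the last station before the tank runs out" criterion described above:

```python
def min_refills(distance, tank_range, stations):
    """Greedy refueling: always drive to the farthest reachable stop.

    stations: sorted positions of gas stations between 0 and distance.
    Returns the number of refills, or -1 if the trip is impossible.
    """
    stops = [0] + list(stations) + [distance]
    refills, current = 0, 0
    while stops[current] + tank_range < distance:
        # farthest stop reachable on the current tank
        nxt = current
        while nxt + 1 < len(stops) and stops[nxt + 1] - stops[current] <= tank_range:
            nxt += 1
        if nxt == current:          # next stop is out of range
            return -1
        current = nxt
        refills += 1
    return refills

# A to B: 95 Km, 40 Km tank, stations at 20, 37.5, 55 and 75 Km:
print(min_refills(95, 40, [20, 37.5, 55, 75]))  # 2 (refill at G2 and G4)
```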



Formalization of Greedy Technique

• For a given problem with ‘n’ input values, the greedy approach will
decide criteria to select values, termed as selection criteria, and will run
for ‘n’ times. In each run, following steps will be performed:
• It will perform selection based upon the selection criteria. This
returns a value from the considered input values and also removes
the selected value from the input values.
• It will check the feasibility of the selected value.
• If the solution is feasible then the selected value is added to the
solution set else step 1 is repeated.



An Overview of Local And Global
Optima
• Local optimization involves finding the optimal solution for
a specific region of the search space, or the global optima
for problems with no local optima.
• Global optimization involves finding the optimal solution on
problems that contain local optima.



Fractional Knapsack Problem
• This is an optimization problem which we solve through the greedy technique. In this problem, a knapsack (or bag) of some capacity is considered, which is to be filled with objects.
• Example: Given the weights and profits of N items, in the form of {profit, weight}, put these items in a knapsack of capacity W to get the maximum total profit in the knapsack. In the fractional knapsack problem, we can break items to maximize the total value of the knapsack.

Source: https://www.geeksforgeeks.org/fractional-knapsack-problem/
Fractional Knapsack Problem

• The fractional knapsack problem is defined as:


• Given a list of n objects, say {O1, O2, ……, On}, and a knapsack (or a bag).
• The capacity of the knapsack is W.
• Each object Oi has a weight wi and a profit of pi.
• If a fraction xi (where xi ∈ [0, 1]) of an object Oi is placed into the knapsack, then a profit of pi·xi is earned.




Task Scheduling Algorithm
• A task scheduling problem is formulated as an optimization problem in
which we need to determine the set of tasks from the given tasks that can
be accomplished within their deadlines along with their order of
scheduling such that the profit is maximum.



Task Scheduling Algorithm

• Example: Let us consider an example. Suppose there are 5 tasks and each task has an associated profit in rupees. The amount of time required to complete each task is one hour. The deadline of each task is specified in the Table.



Task Scheduling Algorithm

• Solution: To solve this problem, consider the highest deadline and prepare that many slots of unit time in which to schedule the tasks. This helps in scheduling the tasks easily, as there will be no task to be completed after the highest deadline. According to the given problem, the highest deadline is 3, so there will be three slots, each of unit time, as illustrated in the Figure.

• Now, greedy approach has to select the set of three tasks and schedule
them in such a way that the total profit is maximum on their
completion.



Task Scheduling Algorithm

• The schedule of given tasks is presented in Figure

• Therefore, the sequence of selected tasks is as follows:
• {T1, T2, T4}; the total profit is 18 + 22 + 7 = 47.
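The greedy schedule can be sketched in Python. The slide's task table is in a figure, so the data below is a hypothetical set chosen only to reproduce the stated outcome (tasks {T1, T2, T4}, profit 47); the greedy rule considers tasks in decreasing profit order and places each in the latest free slot at or before its deadline:

```python
def schedule_tasks(tasks):
    """Greedy job sequencing with deadlines for unit-time tasks.

    tasks: list of (name, deadline, profit) triples.
    """
    max_deadline = max(d for _, d, _ in tasks)
    slots = [None] * (max_deadline + 1)          # slots[1..max_deadline]
    for name, deadline, profit in sorted(tasks, key=lambda t: -t[2]):
        for s in range(deadline, 0, -1):         # latest free slot first
            if slots[s] is None:
                slots[s] = name
                break
    return [name for name in slots[1:] if name is not None]

# Hypothetical profits/deadlines (the original table is in a figure):
tasks = [("T1", 2, 18), ("T2", 1, 22), ("T3", 1, 10), ("T4", 3, 7), ("T5", 2, 15)]
print(schedule_tasks(tasks))  # ['T2', 'T1', 'T4'] -> profit 22 + 18 + 7 = 47
```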



Huffman Codes

• Huffman coding is a greedy algorithm that is used to compress data. Data can be a sequence of characters, and each character can occur multiple times in the data; this count is called the frequency of that character.

• Data is transmitted through transmission media, which can use digital or analog communication, and is represented in the form of bits.

• The Huffman compression algorithm is applied to represent the data in compressed form, called Huffman codes.



Huffman Codes

Source: https://towardsdatascience.com/huffman-encoding-python-implementation-8448c3654328


Huffman Codes

• Huffman coding works in two steps:


• Build a Huffman tree.
• Find Huffman code for each character



Huffman Codes

• Steps for Building Huffman Tree:


• Input is a set of M characters, and each character m ∈ M has a frequency m.frequency.
• Store all the characters in a min-priority queue using frequencies as their keys (a priority queue is a data structure that allows the operations search-min (or max), insert, and delete-min (or max, respectively)).
• If a heap is used to implement the priority queue, each of these operations takes O(log n) time.



Huffman Codes

• Extract the two minimum nodes x, y from the min-priority queue PQ.
• Replace x, y in the queue with a new node z representing their merger. The frequency of z is computed as the sum of the frequencies of x and y. The node z has x as its left child and y as its right child.
• Repeat steps 3 and 4 until one node is left in the queue, which is the root of the Huffman tree.
• Return the root of the tree.



Huffman Codes

• Steps to find Huffman code for each character


• Traverse the tree starting from the root node.
• An edge connecting to the left child node is labeled 0, and it is labeled 1 if it connects to the right child node.
• Traverse the complete tree through the left and right children based on the assigned values.
• The prefix code / Huffman code for a letter is the sequence of labels on the edges connecting the root to the leaf for that letter.



Huffman Codes

• Time Complexity of Huffman Algorithm


• The priority queue is implemented using a binary min-heap. The build-min-heap procedure takes O(n) time in step 2. Steps 3 and 4 run exactly n-1 times.
• In each step a new node z is added to the heap; when a new node is added, it takes O(log n) time to heapify.
• Thus, the total running time of the Huffman algorithm on a set of n characters is O(n log n).
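The steps above can be sketched with Python's `heapq` min-heap (names are ours; the frequencies follow a common textbook example). Each heap entry carries a tie-breaking counter so that ties in frequency never force a comparison of trees:

```python
import heapq

def huffman_codes(frequencies):
    """Build a Huffman tree with a binary min-heap; return prefix codes.

    frequencies: dict mapping character -> frequency.  A tree is either
    a character or a (left, right) pair.
    """
    heap = [(f, i, ch) for i, (ch, f) in enumerate(frequencies.items())]
    heapq.heapify(heap)                                   # step 2: O(n)
    counter = len(heap)
    while len(heap) > 1:                                  # n-1 merges
        fx, _, x = heapq.heappop(heap)                    # two minimum nodes
        fy, _, y = heapq.heappop(heap)
        heapq.heappush(heap, (fx + fy, counter, (x, y)))  # merged node z
        counter += 1
    codes = {}
    def walk(tree, code):                 # left edge 0, right edge 1
        if isinstance(tree, tuple):
            walk(tree[0], code + "0")
            walk(tree[1], code + "1")
        else:
            codes[tree] = code or "0"
    walk(heap[0][2], "")
    return codes

codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
print(codes["a"])  # "0": the most frequent character gets the shortest code
```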



Important Topics

• Formalization of Greedy Techniques


• Local and Global Optima
• Fractional Knapsack Problem
• Task Scheduling Algorithm
• Huffman Codes



Summary

• In this session we have seen that an optimization problem is a problem


in which the best solution among all the possible solutions of a problem
is needed which maximizes/minimizes the objective function under
some constraints or conditions.

• Local optimal solutions correspond to the set of all the feasible solutions
that are best locally for an optimization problem. Global optimal
solution corresponds to the best solution of the optimization problem.

• In Fractional Knapsack problem, a Knapsack (or bag) of some capacity


is to be filled with objects which can be included in fractions. Each
object is given with a weight and associated profit.
Design and Analysis of Algorithms
Block-2 Unit-2

Divide & Conquer Technique


Topics to be Covered

• Introduction
• Recurrence Relation Formulation in Divide and Conquer Technique
• Binary Search Algorithm
• Sorting Algorithms
• Integer Multiplication
• Matrix Multiplication Algorithm
• Important Topics
• Summary
Introduction

• In Divide and Conquer approach,


the original problem is divided into
two or more sub-problems
recursively.
• A divide and conquer algorithm
is a strategy of solving a large
problem by breaking the problem
into smaller sub-problems solving
the sub-problems, and combining
them to get the desired output.

Recurrence Relations Formulations in
Divide and Conquer Approach
• In the Divide and Conquer approach, the running time is expressed as a recurrence relation which is based upon three steps:
• Divide: Dividing the given problem into sub-problems in such a way that
each sub-problem is equivalent to the original problem but its size is
smaller than the original one. Further sub-division of each sub-problem is
done till either it is directly solvable or it is impossible to perform sub-
division which indicates that there is a direct solution of the sub-problem.
• Conquer: Each sub-problem solves itself by calling itself recursively.
• Combine: Each solution of the sub-problem is combined to obtain the
original solution.
Recurrence Relations Formulations in
Divide and Conquer Approach
• where,
• T(n) = time required to solve a problem of size ‘n’.
• a corresponds to the number of sub-problems into which the problem is partitioned.
• T(n/b) corresponds to the running time of solving each sub-problem of size (n/b).

Recurrence Relations Formulations in
Divide and Conquer Approach
• f(n) = D(n) + C(n) = time required to divide the problem and combine the solutions, respectively. If the problem size is small enough, say n ≤ c for some constant c, we have a base case which can be solved directly in constant time, θ(1); otherwise, we divide a problem of size n into a sub-problems, each of size n/b.

(Figure: the problem is divided into sub-problems; the solutions of the sub-problems are combined into the solution of the original problem.)

Binary Search

• Binary search is a procedure for finding the location of an element in a sorted array. The divide and conquer version of the algorithm proceeds as follows:
• If the key is found at the middle position of the array, the algorithm terminates; otherwise,
• Divide the array into two sub-arrays recursively. If the key is smaller than the middle value, select the left sub-array; otherwise (if it is larger), select the right sub-array. The process continues until the search interval is empty or it cannot be divided further.

Binary Search

• Conquer (solve) the sub-array by determining whether the key is


located in that sub-array.
• Obtain the result.
• Binary search looks for a particular item by comparing it with the middle-most item of the collection. If a match occurs, then the index of the item is returned. If the middle item is greater than the searched item, the item is searched in the sub-array to the left of the middle item; otherwise, in the sub-array to the right.
mid = low + (high - low) / 2

Binary Search

• Analysis of Binary Search:


Method 1: Assume the size of the array is a power of 2, say 2^k. Each time in the while loop, when we examine the middle element, we cut the size of the sub-array in half. So before the 1st iteration the size of the array is 2^k.
• After the 1st iteration, the size of the sub-array of interest is 2^(k-1)
• After the 2nd iteration, the size of the sub-array of interest is 2^(k-2)
• After the k-th iteration, the size of the sub-array of interest is 2^(k-k) = 1
• So we stop after the next iteration. Thus we have at most (k+1) = (log n + 1) iterations.
Binary Search
• Recurrence Relation of Binary Search
• The complexity of divide and conquer approach is defined by
recurrence relation as follows:
• T(n) = a T(n/b) + f(n)
• As binary search follows the divide and conquer approach but searches only one sub-array in each iteration, the computational complexity of the binary search technique can be defined as a recurrence relation of the following form:
• T(n) = T(n/2) + k

Binary Search

• Where a, b, and f(n) are replaced with the values 1, 2, and k (a constant independent of n), respectively; k is the constant cost of the divide-and-compare step. Solving this recurrence relation by the substitution method, the computational complexity of binary search is O(log n).

Sorting Algorithms

• Sorting is the process of rearranging the elements of a list in either ascending or descending order. For example, consider a list of elements <a1, a2, a3, ….., an> as input which needs to be sorted in ascending order; then the output of the sorting algorithm will be a rearrangement of the list such that a1 <= a2 <= a3 <= ……. <= an. There are a number of sorting algorithms, like bubble sort, merge sort, radix sort, quick sort and many more.

• We discuss the Merge-Sort and Quick-Sort sorting algorithms, as they are based on the divide and conquer approach.

Sorting Algorithms

• Merge-Sort
• Merge-Sort algorithm is a divide-and-conquer based sorting
algorithm. It follows a divide and conquer approach to perform
sorting. The algorithm in divide step repeatedly partitions an array
into several sub arrays until each sub array consists of a single
element.

Sorting Algorithms

• The following example illustrates the above steps. Suppose the array has the following numbers:
37 20 23 30 18 15 27 17
• In the divide step, the array is divided into two sub-arrays: 37 20 23 30 and 18 15 27 17
• In the conquer step, we sort each sub-array: 20 23 30 37 and 15 17 18 27
• In the combine (merge) step, all the sub-arrays are merged in sorted order: 15 17 18 20 23 27 30 37
Sorting Algorithms

• Algorithm 2: Merge-Sort(X, m, n)
if (m < n) {
    i = (m+n)/2;
    Merge-Sort(X, m, i);
    Merge-Sort(X, i+1, n);
    Merge(X, m, i, n);
}
else {
    // already sorted (single element)
}
Sorting Algorithms

• Quick-Sort
Quick-Sort is another sorting algorithm that works on the principle of the divide-and-conquer approach. It works by arranging the elements of an array by identifying their correct place (or index).

Sorting Algorithms

• Divide: In this step, an element of the given array is considered as the pivot element, whose correct index ‘q’ in the array is determined by rearranging the array elements. Then, the given array A[p..r] is partitioned into two sub-arrays, A[p..q] and A[q+1..r], in such a way that all the elements in A[p..q] are smaller than or equal to A[q] and all the elements in A[q+1..r] are greater than A[q]. Generally, the procedure which is used to perform this step is termed Partition. The output of this procedure is the index ‘q’ which divides the given array ‘A’.
Sorting Algorithms

• Conquer: The partitioned sub-arrays, A[p..q] and A[q+1..r], are
sorted by recursively applying step (1).
• Combine: As all the elements are already sorted into their respective
places, there is no need for an explicit combine step; the array is
sorted in place.

Sorting Algorithms

• Analysis of Quick-Sort:
• The computational complexity of Quick-Sort depends on the arrangement
of the array elements, since the arrangement affects the partitioning.
If partitioning is unbalanced, the partition procedure is performed
more times than with a balanced partition. Therefore, Quick-Sort
has different complexity in different scenarios:

• Best Case: If every partition splits its sub-array in a balanced way,
the running time is O(n log n).
Sorting Algorithms
• Worst Case: If the given input array is already sorted or almost sorted,
the partitioning of the sub-arrays is unbalanced; in this case the algorithm
runs asymptotically as slowly as Insertion sort (i.e. O(n²)).

• Average Case: Anything between the best and the worst case; the expected
running time is still O(n log n). (The figure shows the recursion
depth of Quick-Sort for the best, worst and average cases.)
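The divide step described above can be sketched in Python using the Lomuto partition scheme (an illustrative choice of partitioning procedure, not prescribed by the slides):

```python
def partition(a, p, r):
    """Place pivot a[r] at its final index q; smaller elements go left."""
    pivot = a[r]
    i = p - 1
    for j in range(p, r):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]   # grow the <= pivot region
    a[i + 1], a[r] = a[r], a[i + 1]   # drop pivot into its slot
    return i + 1

def quick_sort(a, p=0, r=None):
    """In-place Quick-Sort: partition, then recurse on both sides."""
    if r is None:
        r = len(a) - 1
    if p < r:
        q = partition(a, p, r)        # pivot lands at final index q
        quick_sort(a, p, q - 1)
        quick_sort(a, q + 1, r)
    return a
```

Note that choosing the last element as pivot triggers exactly the O(n²) worst case on already-sorted input described above.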

Integer Multiplication

• The brute-force algorithm for multiplying two large integer numbers,
which every one of us uses by hand, takes quadratic time, i.e. O(n²),
because each digit of one number is multiplied by each digit of the
other number.

• Assume X and Y are two n-digit numbers. Divide X and Y into two halves of
approximately n/2 digits each.
• The following examples illustrate the division process:
• 657138 = 657 * 10³ + 138
• 6578381 = 6578 * 10³ + 381

Integer Multiplication
• Let us generalize the number representation. If Z is an n-digit
number, it is divided into two halves: the first half with
Ceiling[n/2] digits and the second half with m = Floor[n/2] digits,
as shown below:
• Z (n-digit number) = XL * 10^m + XR
• Suppose we are given two n-digit numbers:
• Z1 = XL * 10^m + XR
• Z2 = YL * 10^m + YR
• Z1 * Z2 = (XL * 10^m + XR)(YL * 10^m + YR)
•         = XL*YL * 10^2m + (XL*YR + YL*XR) * 10^m + XR*YR
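The expansion above needs four recursive sub-products. A well-known refinement, Karatsuba's algorithm (named here for context; the slides only derive the four-product expansion), obtains the middle term from one extra multiplication. An illustrative Python sketch:

```python
def karatsuba(x, y):
    """Divide-and-conquer integer multiplication with 3 sub-products."""
    if x < 10 or y < 10:                      # base case: a single digit
        return x * y
    m = max(len(str(x)), len(str(y))) // 2    # split position (Floor[n/2])
    xl, xr = divmod(x, 10 ** m)               # x = xl*10^m + xr
    yl, yr = divmod(y, 10 ** m)               # y = yl*10^m + yr
    a = karatsuba(xl, yl)                     # high part
    b = karatsuba(xr, yr)                     # low part
    c = karatsuba(xl + xr, yl + yr) - a - b   # = xl*yr + yl*xr
    return a * 10 ** (2 * m) + c * 10 ** m + b
```

With three sub-problems of half size, the recurrence T(n) = 3T(n/2) + O(n) gives O(n^log2 3) ≈ O(n^1.585), beating the O(n²) brute force.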
Matrix Multiplication
• Matrix multiplication is a binary operation of multiplying two or
more matrices, one by one, that are conformable for multiplication.
For example, two matrices A and B having dimensions 𝑝 × 𝑞 and 𝑠 × 𝑡
respectively are conformable for the product 𝐴 × 𝐵 only if q == s,
and for 𝐵 × 𝐴 only if t == p.
• Matrix multiplication is associative in the sense that if A, B, and
C are three matrices of order 𝑚 × 𝑛, 𝑛 × 𝑝 and 𝑝 × 𝑞, then
(AB)C = A(BC), and the product is an 𝑚 × 𝑞 matrix.
Matrix Multiplication
• Matrix multiplication is not commutative. For example, if two
matrices A and B have dimensions 𝑚 × 𝑛 and 𝑛 × 𝑝, then AB is defined
but BA is not conformable for multiplication (unless p == m), so
AB = BA cannot hold in general.
• For three or more matrices, matrix multiplication is associative,
yet the number of scalar multiplications may vary significantly
depending upon how we pair the matrices and their product
matrices to get the final product.

Matrix Multiplication

• Straightforward method
• Let's suppose we have two matrices A and B of size m×n = 2×2 and
want to multiply A×B, storing the product in C.

Matrix Multiplication

• For multiplying matrices A and B (both of size N × N), a simple
triple-loop algorithm is:
for (int i = 0; i < N; i++) {
    for (int j = 0; j < N; j++) {
        C[i][j] = 0;
        for (int k = 0; k < N; k++) {
            C[i][j] += A[i][k] * B[k][j];   /* row i of A dot column j of B */
        }
    }
}

Matrix Multiplication

• Divide & Conquer Strategy for multiplication
• In the divide and conquer strategy, if the problem is large we
break it into smaller problems called sub-problems, solve those
sub-problems, and combine the solutions of the sub-problems to
get the solution of the main problem.

Matrix Multiplication

Matrix Multiplication
• Strassen’s Matrix Multiplication Algorithm
Strassen’s algorithm makes use of the same divide and conquer approach:
• Divide the input matrices A and B into n/2 x n/2 sub-matrices, which
takes Θ(1) time.
• Now calculate the 7 sub-matrices M1–M7 using the formulas below:
M1 = (A11+A22)(B11+B22)
M2 = (A21+A22) B11
M3 = A11 (B12−B22)
M4 = A22 (B21−B11)
M5 = (A11+A12) B22
M6 = (A21−A11)(B11+B12)
M7 = (A12−A22)(B21+B22)

Matrix Multiplication

• The desired sub-matrices C11, C12, C21, and C22 of the result
matrix C are obtained by adding and subtracting various
combinations of the Mi sub-matrices. These four sub-matrices
are computed in Θ(n²) time:
C11 = M1+M4−M5+M7
C12 = M3+M5
C21 = M2+M4
C22 = M1−M2+M3+M6
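As a sanity check (illustrative, not part of the slides), one level of Strassen's recursion on 2×2 matrices with scalar entries reproduces the standard product using 7 multiplications instead of 8:

```python
def strassen_2x2(A, B):
    """One Strassen recursion level for 2x2 matrices of scalars."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # the seven products M1..M7 from the slide
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # combine into the four quadrants of C
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

For example, `strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]])` gives `[[19, 22], [43, 50]]`, the ordinary matrix product.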

Important Topics

• Recurrence Relation Formulation in Divide and Conquer


Technique
• Binary Search Algorithms
• Types of Sorting Algorithms
• Matrix Multiplication Algorithm

Summary
• In this session we have seen that the Divide and Conquer approach
is recursive: an algorithm keeps calling itself on a reduced
problem size until a base (or boundary) condition of the problem
is reached.
• Divide and Conquer is a top-down approach, which consists of
three steps:
• Divide: the given problem is broken down into smaller parts.
• Conquer: each sub-problem is solved by a recursive call.
• Combine: the sub-solutions are combined to generate the solution
to the original problem.
Design and Analysis of Algorithms
Block-2 Unit-3
Graph Algorithms-I
Topics to be Covered
• Introduction
• Basic definition and Terminologies
• Graph Representation Schemes
• Graph Traversal Schemes
• Directed Acyclic Graph and Topological Ordering
• Strongly Connected Components
• Important Topics
• Summary
Introduction
• Graphs are among the most widely used mathematical structures. They
are widely used in finding shortest path routes, shortest paths
between every pair of vertices, and in computing the maximum flow
problem, which has applications in a large range of problems
related to airline scheduling, maximum bipartite matching
and image segmentation.
• A graph can be used to model a social network which
comprises millions of users, where users or interest groups are
represented as nodes.

Basic Definition and Terminologies

• Graph: A graph G = (V, E) is a data structure comprising two sets of
objects: V = {v1, v2, ……}, called vertices, and another set
E = {e1, e2, ……}, called edges. In the graph shown, the set of vertices
is V = {A, B, C, D, E, F} and the set of edges is
E = {A-B, A-C, B-E, B-D, B-C, C-D, C-F, D-F, D-E}.

• Edge: An edge ek is identified with an
unordered pair of vertices (vi, vj), where vi
and vj are the end vertices of the edge ek.

Basic Definition and Terminologies
• Graph Types:

• A simple graph is a graph in which each edge
connects two different vertices: no vertex has
a self-loop and no two vertices have more than
one edge connecting them.
• A graph that allows multiple edges between the
same pair of vertices is called a multigraph.

Basic Definition and Terminologies

• Graph Types:

• In a multigraph, a pair of vertices
may have more than one edge
connecting them.

Example: Multigraph

Basic Definition and Terminologies

• Undirected Graph:
• A graph in which the edges do not
have any direction; every edge can be
traversed in both directions.
• In an undirected graph, if there is an
edge between u and v then we can move
from node u to node v as well as
from node v to node u.
Undirected Graph

Basic Definition and Terminologies
• Directed Graph: A graph in
which the edges have a direction. It
is also called a digraph.
• The direction is usually indicated with an
arrow on the edge.
• In a directed graph, if there is an
edge from u to v then we can
move only from node u to node v.
Directed graph
Basic Definition and Terminologies
• Subgraph:
• A Subgraph of G is a graph G’ such that V(G’)
⊆ V(G) and E(G’) ⊆ E(G)i.e., a graph whose
vertices and edges are subsets of another graph. It is
not necessary that a Subgraph will have all the edges
of graph or all the nodes. This is a Subgraph of the
graph which has the nodes A, B, C, D. In this there
are C, B, D nodes.
Basic Definition and Terminologies
• Connected Graph:
• An undirected graph is said to be connected if for every pair of two
distinct vertices 𝑣i, 𝑣j, there is a path between these two vertices.
The graph 𝐺1 is connected whereas 𝐺2 is not connected.

Graph Representation

• The purpose of graph representation is to convert a graph to a format that
can be used by algorithms running on a computer.
• In order to have efficient algorithms, the logical representation of the
graph plays a very critical role.
• Two standard computational representations are in common practice:
the Adjacency Matrix and the Adjacency List.

Graph Representation
• Adjacency Matrix:
• The adjacency matrix of a graph with V vertices is a V x V Boolean
matrix with one row and one column for each of the graph’s vertices.
The adjacency matrix representation is typically used for both
directed and undirected graphs.

The Boolean matrix is defined by
𝐴[𝑖, 𝒿] = 1 if there is an edge between vi and vj,
𝐴[𝑖, 𝒿] = 0 otherwise.

Graph Representation
• Adjacency matrix representation of a graph given in
above figure

Graph Representation
• Adjacency List
• The adjacency list representation is typically used for graphs
where the number of edges |E| is much less than |V|². The adjacency
list is represented as an array of |V| linked lists: there is one
linked list for every vertex of the graph, and each node in this
linked list is a reference to another vertex which shares an edge
with the current vertex.
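Both representations can be sketched in a few lines of Python (illustrative helper functions, not from the slides; vertices are assumed to be numbered 0..n-1 and the graph undirected):

```python
def to_adjacency_matrix(n, edges):
    """Build an n x n 0/1 matrix: M[i][j] = 1 iff edge (i, j) exists."""
    M = [[0] * n for _ in range(n)]
    for u, v in edges:
        M[u][v] = M[v][u] = 1     # undirected: set both directions
    return M

def to_adjacency_list(n, edges):
    """Build an array of neighbour lists for the same graph."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj
```

The matrix uses Θ(V²) space regardless of edge count, while the list uses Θ(V + E), which is why the list is preferred for sparse graphs.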

Graph Representation

• An adjacency list of the above graph

Graph Traversal Algorithms

• Traversal algorithms are used to visit all the nodes of a given graph
in a systematic way. These algorithms help us in finding nodes and in
constructing paths that are shortest, feasible or prioritized in nature.
• There are two key graph traversal algorithms:
• Depth First Search (DFS)
• Breadth First Search (BFS)

Graph Traversal Algorithms

• Depth First Search (DFS)
• DFS starts at an arbitrary vertex of a graph and
traverses to the deepest node as far as possible
before backtracking. The algorithm proceeds as
follows: it selects an arbitrary vertex 𝑣 as the
start vertex and then moves to a node 𝑤 adjacent
to 𝑣, repeating the process from 𝑤.

Graph Traversal Algorithms
• Depth First Search(DFS) : As the name says Depth, we will return the elements
of a tree or a graph in depth wise order. Let us understand from an example.

As in the example given above, DFS algorithm traverses from


S to A to D to G to E to B first, then to F and lastly to C. It
employs the following rules.
•Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited.
Display it. Push it in a stack.
•Rule 2 − If no adjacent vertex is found, pop up a vertex from
the stack. (It will pop up all the vertices from the stack, which
do not have adjacent vertices.)
•Rule 3 − Repeat Rule 1 and Rule 2 until the stack is empty.
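The three rules can be sketched with an explicit stack (an illustrative Python version, not from the source page; `adj` maps each vertex to its neighbour list):

```python
def dfs(adj, start):
    """Iterative DFS following the push/pop rules; returns visit order."""
    visited, order, stack = set(), [], [start]
    while stack:
        v = stack[-1]                 # current vertex = top of stack
        if v not in visited:
            visited.add(v)            # mark as visited and display it
            order.append(v)
        # Rule 1: visit the first adjacent unvisited vertex, if any
        nxt = next((w for w in adj[v] if w not in visited), None)
        if nxt is None:
            stack.pop()               # Rule 2: no adjacent vertex -> pop
        else:
            stack.append(nxt)         # push the newly visited vertex
    return order                      # Rule 3 is the while loop itself
```

On a small graph such as `{0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}`, starting at 0 the visit order is `[0, 1, 3, 2]`: the search dives 0→1→3 before backtracking to pick up 2.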
Source: https://www.tutorialspoint.com/data_structures_algorithms/depth_first_traversal.htm


Graph Traversal Algorithms

• Breadth First Search (BFS)
• BFS is a graph searching algorithm which starts from an
arbitrary vertex and visits all its adjacent vertices
at the first level, then moves to the second level to
visit all the unvisited vertices which are adjacent to
the first-level vertices, and so on.

Graph Traversal Algorithms

• Breadth First Search (BFS) : According to the BFS, you must traverse the
graph in a breadthwise direction:
•To begin, move horizontally and visit all the current layer's nodes.
•Continue to the next layer.

Output : A -> B -> C -> D -> E -> F -> G

Source: https://www.simplilearn.com/tutorials/data-structure-tutorial/bfs-algorithm
Graph Traversal Algorithms

• Complexities:
• Time complexity: O(V + E), where O(V) is the total time taken by
queue operations (insertion and deletion of vertices); insertion
or deletion of a single vertex takes O(1) time.
• Since there are V vertices in the graph, the queue operations take
O(V) time in total. The time taken to traverse each adjacency list
exactly once is O(E). Therefore, the total time is O(V + E).
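The level-by-level behaviour and the O(V + E) bound can be seen in a short sketch using a FIFO queue (illustrative Python, not from the slides):

```python
from collections import deque

def bfs(adj, start):
    """Level-order traversal; returns vertices in visiting order."""
    visited = {start}
    order = []
    q = deque([start])
    while q:
        v = q.popleft()              # O(1) dequeue
        order.append(v)
        for w in adj[v]:             # each adjacency list scanned once
            if w not in visited:
                visited.add(w)       # mark before enqueueing: no duplicates
                q.append(w)
    return order
```

Every vertex is enqueued and dequeued once (the O(V) part) and every adjacency list is scanned once (the O(E) part), matching the analysis above.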

Directed Acyclic Graph and Topological
Ordering
• A directed graph without a cycle is called a directed acyclic graph
(DAG for short), which is a frequently used graph structure to
represent precedence or dependence relations in a network. The
following is an example of a directed acyclic task graph.

Directed Acyclic Graph and Topological
Ordering

• In this graph, every vertex except V1 is dependent upon other
vertices.

• Any major task can be broken down into several subtasks. The
successful completion of the task is possible only when all the
subtasks are completed successfully.

Directed Acyclic Graph and Topological
Ordering
• Function topological_sort(G)
Input: 𝐺 = (𝑉, 𝐸)   // 𝐺 is a DAG
{
    search for a node 𝑉 with zero in-degree (no incoming edges)
        and order it first in the topological sorting
    remove V from G
    topological_sort(G - {V})   // recursively compute the topological
                                // sorting and append the ordering
}
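The remove-a-zero-in-degree-node idea above is usually implemented iteratively as Kahn's algorithm (named here for context; an illustrative Python sketch):

```python
from collections import deque

def topological_sort(n, edges):
    """Kahn's algorithm: repeatedly emit a zero in-degree vertex."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    q = deque(v for v in range(n) if indeg[v] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for w in adj[u]:             # "remove" u: lower successors' in-degree
            indeg[w] -= 1
            if indeg[w] == 0:
                q.append(w)
    return order if len(order) == n else None   # None => a cycle exists
```

If some vertices are never emitted, no zero in-degree vertex was left, which means the input graph contained a cycle and was not a DAG.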
Strongly Connected Components (SCC)

• We first define a Strongly Connected Graph and the Strongly
Connected Components of a directed graph, and then apply an
algorithm to find the strongly connected components of a given
directed graph.
• A directed graph 𝐺 = (𝑉, 𝐸) is strongly connected if for every two
vertices ν𝒾 and ν𝒿 there is a path from ν𝒾 to ν𝒿 and a path from
ν𝒿 to ν𝒾. A strongly connected component is a maximal set of
vertices M ⊆ V such that every pair of vertices in M is
mutually reachable.

Strongly Connected Components (SCC)

• Pseudo code for finding Strongly Connected Components:
• Perform DFS (Depth First Search) on the directed graph G and
number the vertices in the order they finish
• Compute the transpose of the original graph G (i.e. GT)
• Perform DFS on GT = (V, ET), starting each traversal at the
highest-numbered unvisited vertex
• Print the final result: each DFS tree of the second pass is one
strongly connected component

Important Topics

• Graph Representation Schemas


• Difference between DFS and BFS
• Directed Acyclic Graph and Topological Ordering
• Strongly Connected Components

Summary

• In this session we have seen that a graph is a very frequently used data structure
for many basic and significant algorithms.
• There are two standard approaches to represent a graph: Adjacency matrix and
adjacency lists which can be used to represent both directed as well as undirected
graph.
• Adjacency list provides a compact way to represent a sparse graph.
• BFS is a graph searching algorithm which starts from an arbitrary
vertex, visits all its adjacent vertices first, and then moves to the
second level to visit all the unvisited vertices which are adjacent to
the first-level vertices, and so on.

Design and Analysis of Algorithms
Block-3 Unit-1
Graph Algorithms - II
Topics to be Covered

Introduction
Minimum Cost Spanning Tree (MCST)
Single Source Shortest Path
Maximum Bipartite Matching
Important Topics
Summary



Introduction

• A graph is a non-linear data structure, just like a tree, that consists
of a set of vertices and edges, and it has a lot of applications in
computer science and in real-world scenarios. Using graph algorithms we
can easily find the shortest path, the cheapest path and predicted
outputs. Real-life examples of graphs:
• Maps: You can think of a map as a graph where intersections of roads are
vertices and the connecting roads are edges.
• Social Networks: Another example of a graph structure, where people
are connected based on friendship or some other relationship.
• Internet: You can think of the internet as a graph structure, with
webpages as nodes, each webpage connected to others through links.



Minimum Cost Spanning Tree
• MST: A spanning tree whose weight is minimum over all spanning
trees is called a minimum spanning tree, or MST. The properties of
an MST are as follows:
• An MST has |V|-1 edges.
• An MST has no cycles.
• It might not be unique.



Minimum Cost Spanning Tree

• To find the minimum spanning tree of a graph G, the following
algorithms have been proposed:
• Generic MST Algorithm
• Kruskal’s Algorithm
• Prim’s Algorithm



Minimum Cost Spanning Tree
• Generic MST Algorithm
• The generic algorithm for finding MSTs maintains a subset A of the
edges E. At each step, an edge (u, v) ∈ E is added to A if it is not
already in A and its addition to the set does not create a cycle in A.
Generic-MST(G, w)
1. A = { }
2. while A is not a spanning tree
3.     do find an edge (u, v) that is safe for set A
4.        A = 𝐴 𝖴 {(𝑢, 𝑣)}
5. return A
Minimum Cost Spanning Tree



Minimum Cost Spanning Tree

• Kruskal’s Algorithm: Kruskal’s algorithm repeatedly finds a
minimum-weight edge (a safe edge) of the graph G and adds it to
the new sub-graph S, provided it does not create a cycle.
Pseudo code of Kruskal’s Algorithm:



Minimum Cost Spanning Tree

• MCST_Kruskal(V, E, w)
• 𝐾 ← { }
• for each vertex 𝑣 ∈ 𝑉
•     MAKE-SET(v)
• sort the edges 𝐸 of 𝐺 into non-decreasing order by weight 𝑤
• for each edge (𝑢, 𝑣) ∈ 𝐸, taken from the sorted list
•     if FIND-SET(𝑢) ≠ FIND-SET(𝑣)
•         then 𝐾 ← 𝐾 𝖴 {(𝑢, 𝑣)}
•              UNION(𝑢, 𝑣)
• return K
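The pseudocode maps directly onto a short Python sketch where MAKE-SET / FIND-SET / UNION are realised with a parent array (illustrative, with path compression added as a common refinement):

```python
def kruskal(n, edges):
    """Kruskal's MST; edges = [(weight, u, v), ...], vertices 0..n-1."""
    parent = list(range(n))           # MAKE-SET for every vertex
    def find(x):                      # FIND-SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst, total = [], 0
    for w, u, v in sorted(edges):     # non-decreasing order by weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # safe edge: joins two components
            parent[ru] = rv           # UNION
            mst.append((u, v))
            total += w
    return mst, total
```

On a 4-vertex graph with edge weights 1, 2, 3, 4, 5 forming a cycle-rich graph, the three cheapest non-cycle edges are kept and the MST weight is 1 + 2 + 3 = 6.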



Minimum Cost Spanning Tree

• PRIM’s Algorithm
• Prim’s algorithm is a greedy algorithm to find the minimum
cost spanning tree; at each step it finds a safe edge.

• Minimum cost={0,5,8,15}
Minimum Cost Spanning Tree

• Working strategy of Prim’s algorithm:
• We begin with some vertex 𝑣 in a given graph 𝑮(𝐕, 𝐄), which defines
the initial set of vertices 𝑲.
• Next, choose a minimum-weight edge (𝐮, 𝐯) ∈ 𝐄 of the graph that has
one end vertex 𝒖 in the set 𝑲 and the other end vertex 𝒗 outside of 𝑲.
• Then vertex 𝒗 is added to the set 𝑲.
• Repeat this process until the spanning tree has |𝑽| − 𝟏 edges.
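The steps above can be sketched with a priority queue holding the candidate edges that leave the current set K (illustrative Python; `adj[u]` lists `(neighbour, weight)` pairs):

```python
import heapq

def prim(adj, start=0):
    """Prim's MST: repeatedly add the cheapest edge leaving the tree."""
    in_tree = {start}                 # the set K from the slide
    total, tree = 0, []
    pq = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(pq)
    while pq and len(in_tree) < len(adj):
        w, u, v = heapq.heappop(pq)   # minimum-weight candidate edge
        if v in in_tree:
            continue                  # edge no longer leaves the tree
        in_tree.add(v)                # add v to K
        total += w
        tree.append((u, v))
        for x, wx in adj[v]:          # new candidate edges out of K
            if x not in in_tree:
                heapq.heappush(pq, (wx, v, x))
    return tree, total
```

Note the similarity to Dijkstra's algorithm: both grow a set greedily, but Prim keys the heap on single edge weights rather than path lengths.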



Minimum Cost Spanning Tree
• Pseudo code of Prim’s Algorithm:



Single Source Shortest Path

• In real life, a graph can be used to represent cities and the
connections between them.
• Vertices represent the cities and edges represent the roads that
connect these vertices.
• The edges can have weights, which may be the miles from one city to
another. Suppose a person wants to drive from city P to city Q.
• They may be interested in the following queries:
• Does any path exist from P to Q?
• If there are multiple paths from P to Q, which is the shortest path?



Single Source Shortest Path

• There are two algorithms to solve the single-source shortest
path problem:

• Dijkstra’s Algorithm
• Bellman-Ford Algorithm



Single Source Shortest Path

• Dijkstra’s Algorithm
• Dijkstra’s algorithm solves the single-source shortest path problem when
all edges have non-negative weights.
• It is a greedy algorithm, similar to Prim’s algorithm, that always
chooses the path that is optimal right now, without regard to future
consequences.
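A minimal sketch of Dijkstra's algorithm with a binary heap (illustrative Python, not from the slides; `adj[u]` lists `(neighbour, weight)` pairs):

```python
import heapq

def dijkstra(adj, s):
    """Single-source shortest paths with non-negative edge weights."""
    dist = {u: float('inf') for u in adj}
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)     # greedily settle the closest vertex
        if d > dist[u]:
            continue                 # stale queue entry, skip it
        for v, w in adj[u]:
            if d + w < dist[v]:      # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```

For example, with edges a→b (4), a→c (1), c→b (2), b→d (1), c→d (5), the computed distances from a are b = 3 (via c) and d = 4 (via c, b), not the direct weights.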



Single Source Shortest Path



Single Source Shortest Path

• Apply Dijkstra’s algorithm on the directed graph from source vertex 𝒂



Single Source Shortest Path

• Bellman-Ford Algorithm
The Bellman-Ford algorithm solves the single-source
shortest-paths problem in the case where edge weights may be
negative; it also detects whether the graph contains a
negative-weight cycle reachable from the source.



Single Source Shortest Path
• Pseudo-code of Bellman-Ford Algorithm
𝐵𝐸𝐿𝐿𝑀𝐴𝑁-𝐹𝑂𝑅𝐷(𝐺, 𝑉, 𝐸, 𝑤, 𝑠)
• initialize dist[s] ← 0
• for each vertex 𝑣 ∈ 𝑉 − {s}
•     dist[𝑣] ← ∞
• for i = 1 to |𝑉| − 1
•     for each edge (𝑢, 𝑣) in E[G]
•         RELAX(𝑢, 𝑣, 𝑤)
• for each edge (𝑢, 𝑣) in E[G]
•     if 𝑑𝑖𝑠𝑡[𝑢] + 𝑤(𝑢, 𝑣) < 𝑑𝑖𝑠𝑡[𝑣]
•         return FALSE
• return TRUE
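The same logic in runnable form (an illustrative Python sketch; RELAX is inlined as the comparison-and-update step):

```python
def bellman_ford(n, edges, s):
    """|V|-1 rounds of relaxation; returns (dist, True) or (None, False)."""
    INF = float('inf')
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n - 1):            # |V| - 1 passes over all edges
        for u, v, w in edges:
            if dist[u] + w < dist[v]: # RELAX(u, v, w)
                dist[v] = dist[u] + w
    # one extra pass: any further improvement means a negative cycle
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None, False
    return dist, True
```

The negative-weight edge 2→1 with weight −2 below is handled correctly (shortest path to 1 is 0→2→1 with cost −1), while a true negative cycle makes the function return False.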
Single Source Shortest Path

• Apply Bellman-Ford Algorithm on a following graph



Single Source Shortest Path

• Time Complexity of Bellman-Ford Algorithm:
• The initialization takes 𝑂(𝑉) time. The main loop performs |V| − 1
rounds of relaxation over all E edges, taking 𝑂(𝑉𝐸) time, and the
final negative-cycle check over all edges takes 𝑂(𝐸) time.
Therefore, Bellman-Ford runs in 𝑶(𝑽𝑬).



Maximum Bipartite Matching

• The maximum bipartite matching problem is the
matching problem restricted to bipartite
graphs. A matching of maximum size
(i.e. maximum number of edges) is known as a
maximum matching; in such a matching the
addition of any edge makes it no longer a
matching.



Important Topics

• Representation of graph
• Minimum cost spanning tree
• Dijkstra’s Algorithm
• Bellman-Ford Algorithm
• Maximum Bipartite Matching



Summary

• In this session we have seen that:

• A graph is a non-linear data structure, just like a tree, that consists
of a set of vertices and edges, and it has a lot of applications in
computer science and in real-world scenarios.
• A connected sub-graph S of a graph 𝐆(𝐕, 𝐄) is a spanning tree if and
only if it contains all the vertices of the graph G; a minimum spanning
tree has the minimum total edge weight among all spanning trees of G.
• Kruskal’s algorithm and Prim’s algorithm are both based on the greedy
approach to find the minimum cost spanning tree.
• Dijkstra’s algorithm solves the single-source shortest path problem when
all edges have non-negative weights.
Design and Analysis of Algorithms
Block-3 Unit-2
Dynamic Programming Technique
Topics to be Covered

• Introduction
• Principle of Optimality
• Matrix Multiplication
• Matrix Chain Multiplication
• Optimal Binary Search Tree
• Binomial Coefficient Computation
• All Pair Shortest Path
• Important Topics
• Summary



Introduction
• Dynamic programming is an optimization technique. Optimization
problems are those which require either a minimum result or a
maximum result.
• Dynamic programming is guaranteed to find the optimal solution of a
problem if a solution exists.
• Dynamic programming is useful in solving problems which can
be divided into similar sub-problems. If each sub-problem is
different in nature, dynamic programming does not help in reducing
the time complexity.
Principle of Optimality
• Dynamic programming follows the principle of optimality. If a
problem has an optimal substructure, then it satisfies the principle
of optimality. A problem has optimal substructure if an optimal
solution can be constructed efficiently from optimal solutions of
its sub-problems.
• The principle of optimality shows that a problem can be solved by
taking a sequence of decisions; in dynamic programming we take a
decision at every stage. The Principle of Optimality states that
components of a globally optimal solution must themselves be optimal.
Principle of Optimality
• The shortest path problem satisfies the Principle of Optimality.
• This is because if 𝑎, 𝑥1, 𝑥2, … , 𝑥n, 𝑏 is a shortest path from node 𝑎 to
node 𝑏 in a graph, then the portion from 𝑥i to 𝑥j on that path is a
shortest path from 𝑥i to 𝑥j.
• The longest path problem, on the other hand, does not satisfy the
Principle of Optimality. For example, consider the undirected graph G
with nodes 𝑎, 𝑏, 𝑐, 𝑑, and 𝑒 and edges (𝑎, 𝑏), (𝑏, 𝑐), (𝑐, 𝑑), (𝑑, 𝑒)
and (𝑒, 𝑎); that is, G is a ring. The longest (non-cyclic) path from a
to d is a, b, c, d. The sub-path from b to c on that path is simply the
edge (b, c). But that is not the longest path from b to c; rather,
b, a, e, d, c is the longest path.



Matrix Multiplication
• Matrix multiplication is a binary operation of multiplying two or more
matrices, one by one, that are conformable for multiplication. For
example, two matrices A and B having dimensions 𝑝 × 𝑞 and 𝑠 × 𝑡
respectively are conformable for the product 𝐴 × 𝐵 only if
q == s, and for 𝐵 × 𝐴 only if t == p.
• Matrix multiplication is associative in the sense that if A, B, and C
are three matrices of order 𝑚 × 𝑛, 𝑛 × 𝑝 and 𝑝 × 𝑞, then
(AB)C = A(BC), and the product is an 𝑚 × 𝑞 matrix.



Matrix Multiplication
• Matrix multiplication is not commutative. For example, if two matrices
A and B have dimensions 𝑚 × 𝑛 and 𝑛 × 𝑝, then AB is defined but BA
is not conformable for multiplication (unless p == m), so AB = BA
cannot hold in general.
• For 3 or more matrices, matrix multiplication is associative, yet the
number of scalar multiplications may vary significantly depending
upon how we pair the matrices and their product matrices to get the
final product.



Matrix Chain Multiplication

• Given a sequence of matrices that are conformable for multiplication
in that sequence, the matrix-chain multiplication problem is the
process of selecting the optimal pair of matrices to multiply at every
step in such a way that the overall cost of multiplication is minimal.
• If there are n matrices in total in the sequence, then the total number
of different ways of selecting matrix pairs for multiplication is
C(2n, n) / (n+1).



Matrix Chain Multiplication
• Step 1: The structure of an optimal parenthesization:
• An optimal parenthesization of A1…An must break the product into two
expressions, each of which is parenthesized or is a single matrix.
• Assume the break occurs at position 𝑘.
• In the optimal solution, the solution to the product A1…Ak must be optimal:
• Otherwise, we could improve A1…An by improving A1…Ak.
• But the solution to A1…An is known to be optimal.
• This is a contradiction.
• Thus the solution to the sub-product A1…Ak must itself be optimal.


Matrix Chain Multiplication

• This problem exhibits the Principle of Optimality:
• The optimal solution to the product A1…An contains the optimal
solutions to the two sub-products.
• Thus we can use Dynamic Programming:
• Consider a recursive solution.
• Then improve its performance with memoization or by
rewriting it bottom-up.



Matrix Chain Multiplication
• Step 2: A recursive solution
• For the matrix-chain multiplication problem, we pick as our sub-
problems the problems of determining the minimum cost of
parenthesizing 𝐴i 𝐴i+1 … 𝐴j for 1 ≤ 𝑖 ≤ 𝑗 ≤ 𝑛.
• Let 𝑚[𝑖, 𝑗] be the minimum number of scalar multiplications
needed to compute the matrix 𝐴i…j; for the full problem, the
lowest-cost way to compute 𝐴1…n would thus be 𝑚[1, 𝑛].
• 𝑚[𝑖, 𝑖] = 0: when i = j the problem is trivial, since the chain
consists of just one matrix.
Matrix Chain Multiplication

• 𝐴i…i = 𝐴i, so no scalar multiplications are necessary to compute
the product.
• The optimal solution of 𝐴i × … × 𝐴j must break at some point k, with
𝑖 ≤ 𝑘 < 𝑗.
• Each matrix 𝐴i is 𝑝i–1 × 𝑝i, and computing the matrix
product 𝐴i…k 𝐴k+1…j takes 𝑝i–1 𝑝k 𝑝j scalar multiplications.
• Thus, 𝑚[𝑖, 𝑗] = min over 𝑖 ≤ 𝑘 < 𝑗 of
{ 𝑚[𝑖, 𝑘] + 𝑚[𝑘 + 1, 𝑗] + 𝑝i–1 𝑝k 𝑝j }      (Equation 1)
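Equation 1 can be evaluated bottom-up over increasing chain lengths (an illustrative Python sketch of MATRIX-CHAIN-ORDER; `p` holds the dimensions, so matrix Ai is p[i-1] × p[i]):

```python
def matrix_chain_order(p):
    """Return (m[1][n], s): min scalar multiplications and split table."""
    n = len(p) - 1                        # number of matrices in the chain
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for L in range(2, n + 1):             # chain length
        for i in range(1, n - L + 2):
            j = i + L - 1
            m[i][j] = float('inf')
            for k in range(i, j):         # try every split point (Eq. 1)
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j] = cost
                    s[i][j] = k           # remember where to split
    return m[1][n], s
```

For dimensions p = [10, 100, 5, 50] the optimal order is (A1 A2) A3 with 10·100·5 + 10·5·50 = 7500 scalar multiplications, recorded by s[1][3] = 2.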



Matrix Chain Multiplication
• Step 3: Computing the Optimal Costs



Matrix Chain Multiplication



Matrix Chain Multiplication

• Step 4: Constructing an Optimal Solution
• MATRIX-CHAIN-ORDER determines the optimal
number of scalar multiplications needed to compute a matrix-
chain product. To obtain the optimal parenthesization of the
matrix multiplication, we call PRINT_OPTIMAL_PARENS(s, 1, n) to
print an optimal parenthesization of 𝑨𝟏𝑨𝟐 … 𝑨𝒏. Each entry
𝑠[𝑖, 𝑗] records a value of k such that an optimal
parenthesization of 𝑨𝒊 … 𝑨𝒋 splits the product between 𝐴k
and 𝐴k+1.
Matrix Chain Multiplication



Optimal Binary Search Tree

• Binary Search Tree: A binary tree is a binary search tree if, for every
node, all the keys smaller than the node's key are in its left subtree
and all the keys greater are in its right subtree.
• The number of comparisons required to search for an element in a binary
search tree depends upon the number of levels in the binary search tree.
Searching for a key that is already present in the binary search tree is
a successful search.



Optimal Binary Search Tree

• Let’s take an example. Keys: 10, 20, 30. How many binary search trees
are possible? T(n) = C(2n, n) / (n+1)
• So for 3 keys the number of possible binary search trees = 5. The five
possible binary search trees are shown in figures (a)–(e) below.



Optimal Binary Search Tree

• Dynamic Programming Approach:
• Optimal Substructure: if an optimal binary search tree T has a subtree
T’ containing keys 𝑘i, . . . , 𝑘j, then this subtree T’ must be optimal
as well for the sub-problem with keys 𝑘i, . . . , 𝑘j and dummy keys
𝑑i–1, . . . , 𝑑j.
• Algorithm for finding an optimal tree for sorted, distinct keys
𝑘i, . . . , 𝑘j:
• For each possible root 𝑘r, for 𝑖 ≤ 𝑟 ≤ 𝑗:
• Build an optimal subtree for 𝑘i, . . . , 𝑘r–1.
• Build an optimal subtree for 𝑘r+1, . . . , 𝑘j.
• Select the root that gives the best total tree.
Optimal Binary Search Tree

• Recursive solution: We pick as our sub-problem domain finding an
optimal binary search tree containing the keys 𝑘i, . . . , 𝑘j, where
𝑖 ≥ 1, 𝑗 ≤ 𝑛, and 𝑗 ≥ 𝑖 − 1. Let us define 𝑒[𝑖, 𝑗] as the expected
cost of searching an optimal binary search tree containing the keys
𝑘i, . . . , 𝑘j. Ultimately, we wish to compute 𝑒[1, 𝑛].



Optimal Binary Search Tree
• Computing the Optimal Cost:



Optimal Binary Search Tree



Binomial Coefficient Computation

• Computing binomial coefficients is a non-optimization problem but
it can be solved using dynamic programming. A binomial coefficient is
a coefficient in the Binomial Theorem, which is an arithmetic
expansion. It is denoted as C(n, k), which is equal to n! / (k! × (n−k)!),
where ! denotes factorial.
• The coefficients follow the recursive relation C(n, k) = C(n−1, k−1) + C(n−1, k),
using which we can compute the binomial coefficient in O(n × k) time using
Dynamic Programming.
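The recurrence can be tabulated bottom-up as in the following sketch (a standard illustration, not the unit's own code):

```python
def binomial(n, k):
    """C(n, k) via Pascal's rule C(n, k) = C(n-1, k-1) + C(n-1, k),
    filled bottom-up in O(n * k) time and space."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:          # base cases: C(i, 0) = C(i, i) = 1
                C[i][j] = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```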



Binomial Coefficient Computation

• What is the Binomial Theorem?

The Binomial Theorem, also called binomial expansion, describes
the powers of a binomial in algebraic equations. It helps us
find the expanded polynomial without multiplying a bunch of
binomials at a time. The expanded polynomial will always
contain one more term than the power you are expanding.



All Pair Shortest Path

• Floyd Warshall Algorithm


• Floyd Warshall Algorithm uses the Dynamic Programming (DP)
methodology. Unlike greedy algorithms, which always look for
local optimization, DP strives for global optimization; that means
DP does not rely on the immediate best result. It works on the
concept of recursion, i.e., dividing a bigger problem into similar
sub-problems and solving them recursively.
• Floyd Warshall Algorithm works on directed weighted graphs,
including those with negative edge weights, although it does not work
on graphs having a negative edge-weight cycle.
All Pair Shortest Path

• Working Strategy for FWA


• The initial distance matrix D⁰ of order n × n, consisting of the direct edge
weights d⁽⁰⁾ᵢⱼ, is taken as the base distance matrix.
• Another distance matrix D¹ of shortest paths d⁽¹⁾ᵢⱼ, including one (1) intermediate
vertex, is calculated, where i, j ∈ V.
• The process of calculating distance matrices D², D³, …, Dⁿ by including further
intermediate vertices continues until all vertices of the graph are taken.
• The last distance matrix Dⁿ, which includes all vertices of the graph as
intermediate vertices, gives the final result.



All Pair Shortest Path

• Pseudo-code for FWA


• n is the number of vertices in graph G(V, E).
• Dᵏ is the distance matrix of order n × n with matrix
elements d⁽ᵏ⁾ᵢⱼ, the weight of the shortest path whose intermediate
vertices are drawn from {1, 2, …, k} ⊆ V.
• M is the initial matrix of direct edge weights. If there is no direct
edge between two vertices, the entry is taken as ∞.
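The strategy above reduces to a triple loop; a compact sketch follows (the 4-vertex sample matrix is an assumed example, not the unit's figure):

```python
INF = float('inf')

def floyd_warshall(M):
    """M: n x n matrix of direct edge weights (INF if no edge, 0 on the
    diagonal). Returns the matrix of all-pairs shortest-path distances."""
    n = len(M)
    D = [row[:] for row in M]          # D^0 = base distance matrix
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```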



All Pair Shortest Path



All Pair Shortest Path

• When does FWA give the best result?

• Should we use the Floyd Warshall Algorithm, which finds all-pair shortest
paths in one go, or run Dijkstra’s or the Bellman-Ford algorithm from each vertex?

• Let us work out the time complexities of running Dijkstra’s and Bellman-Ford
for the all-pair shortest path problem to see which gives the best
result.



All Pair Shortest Path

• Using Dijkstra’s Algorithm – The time complexity of running
Dijkstra’s Algorithm for the single-source shortest path problem on a
graph G(V, E) is O(E + V log V) using a Fibonacci heap.
• For G being a complete graph, running Dijkstra’s Algorithm from each
vertex results in a time complexity of O(V³).
• For sparse graphs it is O(V² log V), less than FWA’s O(V³); but since
Dijkstra’s Algorithm does not work with negative edge weights for the single
source shortest path problem, it will also not work for the all-pair
shortest path problem given negative edge weights.
All Pair Shortest Path

• Using Bellman-Ford Algorithm – The time complexity of running the
Bellman-Ford algorithm for the single-source shortest path problem on a
graph G(V, E) is O(VE). If G is a complete graph then this
complexity turns out to be O(V·V²), i.e., O(V³). Therefore, the time
complexity of running the Bellman-Ford Algorithm from each vertex of
graph G is O(V⁴).



All Pair Shortest Path

• From the two points above it can be concluded that FWA is the best choice
for the all-pair shortest path problem when the graph is dense, whereas
Dijkstra’s Algorithm is suitable when the graph is sparse and no
negative edge weight exists. For a graph having a negative edge-weight
cycle, the only choice among the three is the Bellman-Ford Algorithm,
which can at least detect such a cycle.



Important Topics
• Principal of Optimality
• Matrix Multiplication
• Matrix Chain Multiplication
• Binary Search Tree
• Binomial Coefficient Computation
• Floyd Warshall Algorithm
• Working Strategy for FWA
• Pseudo-code for FWA



Summary

• In this session on Dynamic Programming we have seen that it is a technique for solving
optimization problems using a bottom-up approach.
• The underlying idea of dynamic programming is to avoid calculating the same
thing twice, usually by keeping a table of known results that fills up as subinstances
of the problem under consideration are solved.
• In order for the Dynamic Programming technique to be applicable to an
optimization problem, it is necessary that the principle of optimality holds
in the problem domain.
• The principle of optimality states that for an optimal sequence of decisions or
choices, each subsequence of decisions/choices must also be optimal.



Design and Analysis of Algorithms
Block-3 Unit-3
String Matching Techniques
Topics to be Covered

• Introduction
• Naive or Brute Force Algorithm
• Rabin Karp Algorithm
• Knuth-Morris-Pratt Algorithm
• Important Topics
• Summary



Introduction

• String matching is an important problem in computer science in
which a sub-string (also called a pattern) is searched in a larger
string or text (e.g., a sentence, or a paragraph of a book), and
the index of the starting character of the substring is returned.
• Approximate string matching, also known as fuzzy string searching,
searches for substrings of the input string that approximately match a pattern.
• The purpose of a string matching algorithm is to search for the
location of a smaller text pattern in a paragraph or a book.



The Naïve or Brute Force Algorithm
• The Naïve search algorithm compares the pattern string to the text, one character at a time.
This process continues until there is a mismatch of characters between the pattern string and
the text, in which case the pattern is shifted one position along the text. Pseudo-Code –
do {
    if (pattern string character == text character)
        compare next character of the pattern string to next character of the text;
    else
        shift the pattern string to the next character of the text;
} while (entire pattern string matched or end of the text)
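The pseudo-code above corresponds roughly to the following Python sketch (the sample text and pattern are assumed examples):

```python
def naive_search(text, pattern):
    """Return the starting indices of every occurrence of pattern in text,
    comparing the pattern to the text one shift at a time."""
    n, m = len(text), len(pattern)
    matches = []
    for s in range(n - m + 1):         # each possible shift of the pattern
        if text[s:s + m] == pattern:   # character-by-character comparison
            matches.append(s)
    return matches
```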
The Rabin Karp Algorithm

• The central idea in the Rabin Karp algorithm is the computation of a hash
function to speed up pattern matching. The algorithm
calculates hash values for
• (i) the pattern string of m characters
• (ii) each m-character substring of the text.

• The advantage is that there is only one comparison per text substring.



The Rabin Karp Algorithm
• Pseudo-code of Rabin-Karp Algorithm
m – length of the pattern string // Input
P_hash – hash value of the pattern string // Input
T_hash – hash value of the first m characters of the text // Input
do {
    if (P_hash == T_hash)
        brute force comparison of the pattern string and the current
        m-character text substring;
    else
        T_hash = hash value of the next m-character text substring after a
        one-character shift;
} while (match of the pattern string or end of the text)
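One possible realization of the rolling-hash idea is sketched below; base 256 and prime 101 are assumed illustrative choices, and every hash hit is verified by a direct comparison, so the output is always correct.

```python
def rabin_karp(text, pattern, base=256, prime=101):
    """Return the starting indices where pattern occurs in text."""
    n, m = len(text), len(pattern)
    if m > n:
        return []
    h = pow(base, m - 1, prime)        # weight of the leading character
    p_hash = t_hash = 0
    for i in range(m):                 # initial hash values
        p_hash = (base * p_hash + ord(pattern[i])) % prime
        t_hash = (base * t_hash + ord(text[i])) % prime
    matches = []
    for s in range(n - m + 1):
        if p_hash == t_hash and text[s:s + m] == pattern:  # verify on hit
            matches.append(s)
        if s < n - m:                  # roll the hash one character forward
            t_hash = (base * (t_hash - ord(text[s]) * h)
                      + ord(text[s + m])) % prime
    return matches
```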
The Rabin Karp Algorithm

• Best Case – O(n), where n is the length of the text. If a sufficiently large base number
or a large prime number is used for computing hash values, there will be no
spurious hits and the hashed values will be distinct for the pattern string and
the text substrings. In such a case, the searching takes O(n) time.

• Worst Case – O(mn), where m is the length of the pattern string and n is the length of
the text string. This may happen if there are spurious hits because of the use of a small
base number/prime number in the hash calculation of the pattern and text strings.



Knuth Morris Pratt Algorithm

• This is a linear time string matching algorithm. The complexity is O(m + n), where
m and n are the lengths of the pattern string and the text string respectively.
• This happens because the KMP algorithm avoids the frequent backtracking in the text
string that is done in the naïve algorithm.
• The key idea in the KMP algorithm is to build an LPS (longest proper prefix which is
also a suffix) array to determine from which point in the pattern string to restart
comparing in case there is a character mismatch, without
moving the text pointer backward.
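The LPS construction and the search loop can be sketched as follows (an illustrative implementation; note that the text index i never moves backward):

```python
def build_lps(pattern):
    """lps[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix of it."""
    lps = [0] * len(pattern)
    length, i = 0, 1
    while i < len(pattern):
        if pattern[i] == pattern[length]:
            length += 1
            lps[i] = length
            i += 1
        elif length:
            length = lps[length - 1]   # fall back; do not advance i
        else:
            lps[i] = 0
            i += 1
    return lps

def kmp_search(text, pattern):
    """Return the starting indices of pattern in text in O(m + n) time."""
    lps = build_lps(pattern)
    matches, j = [], 0                 # j = characters of pattern matched
    for i, ch in enumerate(text):      # text pointer i only moves forward
        while j and ch != pattern[j]:
            j = lps[j - 1]             # restart point from the LPS array
        if ch == pattern[j]:
            j += 1
        if j == len(pattern):
            matches.append(i - j + 1)
            j = lps[j - 1]
    return matches
```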



Knuth Morris Pratt Algorithm

https://www.geeksforgeeks.org/kmp-algorithm-for-pattern-searching/
Important Topics

• Naive or Brute Force Algorithm

• Rabin Karp Algorithm

• Knuth-Morris-Pratt Algorithm



Summary

• In this session we have seen that:

• String matching is the problem of searching for a pattern string in a larger text and
returning the location where the pattern occurs in the text.
• The Rabin Karp algorithm is based on computing the hash values of an
m-character pattern string and an m-character text substring.
• KMP is a linear time algorithm. The algorithm forbids moving the text pointer
backward.



Design and Analysis of Algorithm

Block-4 Unit-1
Introduction to Complexity Classes
Topics to be Covered

• Introduction
• Some Preliminaries to P and NP Class of Problems
• Introduction to P and NP, and NP- Complete Problems
• The CNF Satisfiability Problem
• Important Topics
• Summary



Introduction

• We introduce the first NP-complete problem, which is the basis of all other NP-complete problems.
• Problems from graph theory and combinatorics can be
formulated as language recognition problems; for example,
pattern matching problems, which can be solved by building
automata.



Some Preliminaries to P and NP
Complexity Classes
• Tractable Vs. Intractable Problems
• The general view is that problems are hard or intractable if they
can be solved only in exponential time or factorial time. The opposite
view is that problems having polynomial time solutions are
tractable or easy problems.
• Although an exponential time function such as 2ⁿ grows more rapidly
than any polynomial in the input size n, for small values of n an
algorithm with exponential time complexity can be more efficient than one
with polynomial time complexity. But in the asymptotic analysis of algorithm
complexity, we always assume that the size n is very large.
Some Preliminaries to P and NP
Complexity Classes
• Tractable Vs. Intractable Problems

• It is to be kept in mind that intractability is a characteristic of a


problem. It is not related to any problem solving technique. To
explain this concept, let us take an example of a chained matrix
multiplication problem which was examined earlier.

• The brute force approach to the solution of this problem takes
exponential time, but the problem can be solved in polynomial time through
the dynamic programming technique.



Some Preliminaries to P and NP
Complexity Classes
• Optimization Problems Vs. Decision Problems
• An Optimization problem is one in which we are given a
set of input values, which are required to be either
maximized or minimized w. r. t. some constraints or
conditions.

• There is a corresponding decision problem to each


optimization problem. Unlike an optimization problem, a
decision problem outputs a simple “yes” or “no” answer.
Some Preliminaries to P and NP
Complexity Classes
• Deterministic Vs. Nondeterministic Algorithms

• A deterministic algorithm will produce the same output on a given
input, going through the same sequence of steps.
• A non-deterministic algorithm behaves in a completely different way:
• there may be multiple possible next steps after any given step, and the
algorithm is allowed to choose any of them in an arbitrary manner.
• A non-deterministic algorithm may proceed through different
sequences of steps on different runs, and may even produce
different outputs.



Introduction to P, NP,NP Hard
& NP-Complete Problems

• The theory of NP-completeness identifies a large class of
problems which, as far as we know, cannot be solved in polynomial
time. It categorizes problems into three classes, namely P (Polynomial),
NP (Non-deterministic Polynomial), and NP-Complete.



Introduction to P, NP,NP Hard
& NP-Complete Problems
• P Class
• An algorithm solves a problem in polynomial time if its worst-case time
complexity belongs to O(p(n)), where n is the size of the problem and p(n) is a
polynomial of the problem’s input size n.
• Problems can be classified as tractable and intractable. Problems
with polynomial time solutions are called tractable; problems which do not
have polynomial time solutions are called intractable.



Introduction to P, NP,NP Hard
& NP-Complete Problems
• NP Class
• NP is the set of decision problems (with a ‘yes’ or ‘no’ answer) that can be
solved by a nondeterministic algorithm in polynomial time.
• Equivalently, a candidate solution (a “guess”) to an NP problem can be
verified by a deterministic algorithm in polynomial time.



Introduction to P, NP,NP Hard
& NP-Complete Problems
• An NP algorithm consists of two stages:
• Guessing Stage (Nondeterministic stage): Given a problem instance, in this stage
a simple string S is produced, which can be thought of as a guess (candidate
solution) for the problem instance.

• The following table displays the results of some input strings S generated at the
nondeterministic stage for the problem instance graph given in figure 1, with the
total distance d whose value is claimed to give a tour not greater than d (i.e., 18),
passed to the verify function as input.



Introduction to P, NP,NP Hard
& NP-Complete Problems
• Verification Stage (Deterministic
stage): Input to this stage is the
problem instance, the distance d and
the string S. A deterministic algorithm
takes these inputs and outputs yes if S
is a correct guess for the problem
instance and stops running. In case S
is not a correct guess, the
deterministic algorithm outputs no, or
it may go into an infinite loop and
never halt.
Introduction to P, NP,NP Hard
& NP-Complete Problems
• NP Complete Problems
• NP-complete problems are the most difficult problems, also called the
hardest problems, in the NP class.
• The common feature is that no polynomial time solution is known for
any of these problems in the worst case.
• In other words, NP-Complete problems usually take super-polynomial
or exponential time or space in the worst case.



Introduction to P, NP,NP Hard
& NP-Complete Problems
• NP Problems containing both P and NP-Complete Problems



Introduction to P, NP,NP Hard
& NP-Complete Problems
• To show that a problem is NP-Complete, a known
NP-Complete problem must be reduced or mapped to it
in polynomial time. To reduce one problem X to another
problem Y,

• we need a mapping such that a problem instance of X can be
transformed into a problem instance of Y; we then solve Y and
map the result back to the original problem.



CNF – Satisfiability Problem – A
First NP Complete Problem
• The Boolean Satisfiability Problem, also known as SAT, is the problem of
determining whether there exists an interpretation that satisfies a given Boolean
formula.
• If the variables of a given Boolean formula can be consistently replaced by the
values TRUE or FALSE so that the formula evaluates to TRUE, the
formula is called satisfiable. On the other hand, if no such assignment exists,
the function expressed by the formula is FALSE for all possible variable
assignments and the formula is unsatisfiable.
CNF – Satisfiability Problem – A
First NP Complete Problem
• Let us recall that for a new problem to be an NP-Complete problem, it
must be in the NP class and then a known NP-Complete problem must be
polynomially reduced to it.
• In this session we will not show any reduction example, because we are
interested in the general idea.
• One might wonder how the first NP-Complete problem was proven
to be NP-Complete and how the reduction was done.


CNF – Satisfiability Problem – A
First NP Complete Problem
• The satisfiability problem was the first NP-Complete problem ever found.
• The problem can be formulated as follows: given a Boolean expression, find
out whether the expression is satisfiable or not, i.e., whether there is an
assignment to the logical variables that gives the expression the value 1 (True).
• A logical variable, also called a Boolean variable, is a variable having only one of
the two values: 1 (True) or 0 (False). A literal is a logical variable or the negation
of a logical variable. A clause combines literals through logical or (∨)
operator(s). A conjunctive normal form (CNF) combines several clauses
through logical and (∧) operator(s).
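A tiny brute-force satisfiability checker illustrates these definitions; it is exponential in n, consistent with SAT's presumed hardness. The clause encoding (integer k for variable x_k, −k for its negation) is an assumed convention for this sketch.

```python
from itertools import product

def is_satisfiable(cnf, n):
    """cnf: list of clauses; each clause is a list of nonzero ints where
    k means variable x_k and -k means its negation. Tries all 2^n
    assignments and checks whether every clause has a true literal."""
    for assignment in product([False, True], repeat=n):
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in cnf):
            return True
    return False
```

For example, (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x2 ∨ ¬x3) is satisfiable, while x1 ∧ ¬x1 is not.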



Important Topics

• Difference between Optimization vs Decision Problems

• P, NP, and NP-Complete Problems

• The CNF Satisfiability Problem



Summary
• In this session we have seen:
• The relationship between P, NP, and NP-Complete.
• Three classes of problems in terms of their worst-case time
complexities: P, NP, NP-Complete.
• The Circuit Satisfiability Decision Problem asks, for a given logical
expression in CNF, whether some combination of True and False
values for the logical variables makes the output of the expression True.
• Differences between tractable and intractable problems,
optimization and decision problems, and deterministic and
nondeterministic algorithms.
Design and Analysis of Algorithms
Block-4 Unit-2

NP-Completeness And NP-Hard Problems


Topics to be Covered

• Introduction
• P Vs NP-Class of Problems
• Polynomial time reduction
• NP-Hard and NP-Complete problem
• Some well-known NP-Complete Problems-definitions
• Techniques (Steps) for proving NP-Completeness
• Proving NP-completeness (Decision problems)
• Important Topics
• Summary
Introduction

• A class of problems can be
divided into two parts:
solvable and unsolvable
problems. Solvable problems
are those for which an
algorithm exists, and
unsolvable problems are those
for which no algorithm
exists, such as the halting problem
of the Turing machine.



P Vs NP-Class of Problems

• P-Class (Polynomial class):

• If there exists a polynomial time algorithm for L, then
• problem L is said to be in class P (Polynomial Class); that is, in the
worst case L can be solved in O(nᵏ) time, where n is the input size of the
problem and k is a positive constant.



P Vs NP-Class of Problems
• NP-Class (Nondeterministic Polynomial time solvable):

• NP is the set of decision problems (with a ‘yes’ or ‘no’ answer) that can be
solved by a Nondeterministic algorithm (or Turing Machine) in Polynomial
time.



P Vs NP-Class of Problems
• Here is an example of an NDA:
• Algorithm: (Nondeterministic Linear Search)
• /* Input: A linear list A with n elements and a search element x.
• Output: Finds the location LOC of x in the array A (by returning an index) or returns LOC = 0
to indicate x is not present in A. */
• NDLSearch(A, n, x) {
• 1. [Initialize]: Set j = 1 and LOC = 0.
• 2. j = Choice( ); // Nondeterministic statement: guess an index
• 3. if (x == A[j]) {
• 4. LOC = j;
• 5. printf(LOC); }
• 6. if (LOC == 0)
• 7. printf("x is not present in A"); }
(Figure: Relationship between P and NP classes of problems)
Polynomial Time Reduction

• Any new problem may require a new algorithm, but often we
can solve a problem X using a known algorithm for a related
problem Y.

• A given instance x of X is translated to a suitable instance y of Y,
so that we can use the available algorithm for Y. Eventually the
result of this computation on y is translated back, so that we get
the desired result for x.



Polynomial Time Reduction

• Reductions – Formal definition:
• Reduction is a general technique for showing similarity
between problems. To show the similarity between
problems we need one base problem. A procedure which
is used to show the relationship (or similarity) between
problems is called a reduction step, and symbolically it can
be written as A ≤p B.


Polynomial Time Reduction

• The meaning of the above statement is “problem A is
polynomial time reducible to problem B”; if there exists a
polynomial time algorithm for problem B, then problem A
also has a polynomial time algorithm.
• Here problem A is taken as the base problem.



Polynomial Time Reduction

Reduction step of problem A to problem B


NP-Hard and NP-Complete Problems
• NP-Hard or NP-Complete problems are stated in terms of
language recognition problems. This is because the theory of
NP-Completeness grew out of automata and formal language
theory.
• NP-Complete problems are the “hardest” problems to solve
among all NP problems, in that if one of them can be solved in
polynomial time then every problem in NP can be solved in
polynomial time.



NP-Hard and NP-Complete Problems

• Easy → P
• Medium → NP
• Hard → NP-Complete
• Hardest → NP-Hard
• The following figure 6 shows the
relationship between P, NP,
NP-Complete and NP-Hard problems:
Source :https://www.baeldung.com/cs/p-np-np-complete-np-hard



NP-Hard and NP-Complete Problems

• NP-Hard Problem
• A problem L is said to be an NP-Hard problem if there is a
polynomial time reduction from an already known NP-Hard
problem L′ to the given problem L.

• E′ = {(i, j) : i, j ∈ V and i ≠ j}



Some Well-Known Np-complete
Problems Definitions
• SAT Problem
• Input: Given a CNF formula f having m clauses C₁,
C₂, …, C_m in n variables x₁, x₂, …, x_n. There is
no restriction on the number of variables in each clause. The
problem is to find an assignment (of values) to the set of
variables x₁, x₂, …, x_n which satisfies all the m clauses
simultaneously.
• SAT = {f : f is a given Boolean formula in CNF; is this
formula f satisfiable?}
Some Well-Known Np-complete
Problems Definitions
• NP-Complete Problem
• A decision problem L is said to be NP-Complete if the following two
conditions are satisfied:

• L ∈ NP
• L is NP-Hard (that is, every problem L′ in NP is polynomial-time
reducible to L).



Definitions of Some Well-Known
NP- Complete Problems
• Knapsack problem:
• A decision version of the 0/1 knapsack problem is NP-complete. Given a list
of items {i₁, i₂, …, i_n}, a knapsack with capacity M and a desired value V,
where each item has a weight wⱼ and value vⱼ: can a subset of items
X ⊆ {i₁, i₂, …, i_n} be picked whose total weight is at most M and
whose total value is at least V?


Definitions of Some Well-Known
NP- Complete Problems
• Traveling Salesman Problem (TSP)
• Given a set of cities and the distance between every pair
of cities, the problem is to find the shortest possible
route that visits every city exactly once and returns
to the starting point. There is an integer cost C(i, j)
to travel from city i to city j, and the salesman wishes
to make the tour whose total cost is minimum, where
the total cost is the sum of the individual costs along the
edges of the tour.



Definitions of Some Well-Known
NP- Complete Problems
• Sum-of-Subset Problem
• Given a set of positive integers S = {x₁, x₂, …, x_n} and
• a target sum K, the decision problem asks for a subset S′ of
S (i.e., S′ ⊆ S) having a sum equal to K.



Definitions of some well-known NP-
Complete problems
• CLIQUE Problem
• CLIQUE is a complete subgraph problem. A clique in an
undirected graph G = (V, E) is a subset V′ ⊆ V, each pair of which
is connected by an edge in E (a complete subgraph of G).

• CLIQUE = {⟨G, k⟩ : G is a graph containing a clique of size k}



Definitions of Some Well-Known
NP- Complete Problems
• Vertex Cover Problem (VCP):
• A vertex cover of an undirected graph G = (V, E) is
a subset of the vertices V′ ⊆ V such that for any
edge (u, v) ∈ E, either u ∈ V′ or v ∈ V′ or both.



Definitions of Some Well-Known
NP- Complete Problems
• Vertex Cover as a Decision Problem
• The problem vertex cover, stated as a decision problem,
is to determine whether a given graph G(V, E) with |V| = n
has a vertex cover of size k, where k ≤ n.
• VERTEX-COVER = {⟨G, k⟩ : graph G has a vertex cover of size k}



Definitions of Some Well-Known
NP- Complete Problems
• Hamiltonian Cycle problem
• A Hamiltonian Cycle of an undirected graph 𝐺(𝑉, 𝐸) is a simple cycle
that passes through all the vertices of the graph G exactly once. For
example, the following graph as shown in figure, contains a cycle 𝑎 − 𝑏
− 𝑐 − 𝑓 − 𝑑 − 𝑒 − 𝑎 that visits each vertex exactly once.



Definitions of Some Well-Known
NP- Complete Problems
• Graph Coloring Problem:
• The natural graph coloring optimization problem is to color a graph with
the fewest number of colors. Its decision version takes as input a pair (G, k); then:
• The search problem is to find a k-coloring of the graph G if one exists.
• The decision problem is to determine whether or not G has a k-coloring.
Clearly, solving the optimization problem solves the search problem,
which in turn solves the decision problem.



Important Topics

• P Vs NP-Class of Problems
• Polynomial time reduction
• NP-Hard and NP-Complete problem
• The CNF Satisfiability Problem
• Techniques (Steps) for proving NP- Completeness
• NP-completeness (Decision problems)



Summary
• In this session we have seen that:
• A class of problems can be divided into two parts: solvable and
unsolvable problems. Solvable problems are those for which an
algorithm exists, and unsolvable problems are those for which no
algorithm exists, such as the halting problem of the Turing machine.
• If there exists a polynomial time algorithm for L, then problem L is said
to be in class P (Polynomial Class); that is, in the worst case L can be
solved in O(nᵏ) time, where n is the input size of the problem and k is a
positive constant.
• A problem L is said to be an NP-Hard problem if there is a polynomial
time reduction from an already known NP-Hard problem L′ to the given
problem L.



Design and Analysis of Algorithms

Block-4 Unit-13

Handling Intractability
Topics to be Covered

• Introduction
• Intelligent Exhaustive Search
• Approximation Algorithms Basics
• Important Topics
• Summary



Introduction

• Problem solving techniques such as backtracking and branch
and bound perform better in comparison to
exhaustive search.
• Unlike exhaustive search, these techniques construct the
solution step by step and evaluate the partial
solution.
• The focus of this unit is to discuss techniques such as
backtracking, branch and bound, and
approximation algorithms to handle intractable problems.
Backtracking and Branch and Bound
Techniques
• Backtracking
• Backtracking is a technique for the design of algorithms. It is applied
to solve problems in which components are selected in sequence
from a specified set so that the selection satisfies some criteria or objective.
• The backtracking procedure performs a depth-first search of the state
space tree, verifying whether a node can lead to a
solution (called a promising node) or only to dead ends (called a non-promising
node), backtracking to the parent of the node if the
node is not promising, and continuing the search process on
the next child.
Backtracking and Branch and Bound
Techniques
• Backtracking technique to solve two problems

• Hamiltonian Circuit problem


• Subset Sum problem



Backtracking and Branch and Bound
Techniques
• Hamiltonian circuit problem
• Suppose G = (V, E) is a connected graph
with n vertices. The Hamiltonian circuit
problem determines a cycle that visits
every vertex of the graph exactly once, except
the starting vertex. V₁ is the starting vertex of
the cycle, where V₁ ∈ G and V₁, V₂, …, V_{n+1} are
distinct vertices in the cycle except V₁
and V_{n+1}, which are equal.
(Figure: A graph for finding a Hamiltonian cycle)
Backtracking and Branch and Bound
Techniques
• Design of a state space tree
• V₁ is taken as the starting vertex in the state space tree
of the graph given in the figure. Among the three
options available to select a node from V₁,
we pick up V₄.
• From V₄ the search can go to V₃ or V₆ or …
• If we follow the order V₆, V₅, V₃, V₂ and
V₁, we get a correct Hamiltonian cycle.

(Figure: State space tree representing the Hamiltonian circuit of a graph)
Backtracking and Branch and Bound
Techniques
• Subset sum Problem
• Given a positive integer W and a set S of n positive integers, i.e., S
= {s₁, s₂, …, s_n}, the main objective of the Subset Sum problem is to
search for all combinations of subsets of integers whose sum is equal to W.
• For example, let us take S = {1, 4, 6, 9} and W = 10. There are two solution
subsets: {1, 9} and {4, 6}. In some cases, a problem instance may not have
any solution subset.
• We will assume that the elements in the set are in sorted order, i.e., s₁ ≤ s₂
≤ ⋯ ≤ s_n.
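The example S = {1, 4, 6, 9}, W = 10 can be solved with a backtracking sketch over the binary state space tree; the two pruning tests below mark non-promising nodes (sum already too large to extend, or remaining elements too small to reach W).

```python
def subset_sum(S, W):
    """Backtracking: at depth i either include or exclude S[i] (S sorted
    ascending). Collects every subset whose sum equals W."""
    S = sorted(S)
    solutions = []

    def backtrack(i, current, total, rest):
        if total == W:
            solutions.append(current[:])
            return
        # Non-promising node: no element left, the smallest remaining
        # element overshoots W, or even taking everything falls short.
        if i == len(S) or total + S[i] > W or total + rest < W:
            return
        current.append(S[i])                          # left branch: include
        backtrack(i + 1, current, total + S[i], rest - S[i])
        current.pop()                                 # right branch: exclude
        backtrack(i + 1, current, total, rest - S[i])

    backtrack(0, [], 0, sum(S))
    return solutions
```

On the unit's other instance S = {4, 6, 7, 8}, W = 18 this finds the single subset {4, 6, 8}.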



Backtracking and Branch and Bound
Techniques
• Design of state space tree for subset sum problem: S = {4,6,7,8} and W = 18



Backtracking and Branch and Bound
Techniques
• Branch and Bound
• In the branch and bound technique, the search along a path in the state space tree
is stopped at a particular node if either of the following occurs:
• Constraint violation
• The value of the bound of the node is inferior to the value of the best
solution achieved so far.


Backtracking and Branch and Bound
Techniques
• As in backtracking, the left branch of the tree includes the next object
while the right branch excludes it. A node is represented by three
values:
• w – total weight of the items selected at the node
• p – total profit of the items selected at the node
• bound – an upper bound on the total profit of any subset of items
reachable from the node
• One of the simplest ways to calculate the bound is given
below:
bound = p + (W-w)(pi+1/wi+1)
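A minimal sketch of this bound computation for a node of the 0/1 knapsack state space tree is shown below. It assumes, as is standard for this bound, that items are sorted by decreasing profit/weight ratio; the function name and the example items are illustrative, not from the text:

```python
# Bound for a knapsack node: p + (W - w) * (p_{i+1} / w_{i+1}), i.e. current
# profit plus the remaining capacity filled at the next item's profit density.
def node_bound(w, p, i, items, W):
    if w >= W:
        return 0                       # constraint violated: no feasible extension
    if i + 1 >= len(items):
        return p                       # no items left to consider
    p_next, w_next = items[i + 1]      # (profit, weight) of item i+1
    return p + (W - w) * (p_next / w_next)

# Items as (profit, weight), sorted by profit/weight ratio: 20, 6, 5.
items = [(40, 2), (30, 5), (50, 10)]
print(node_bound(w=2, p=40, i=0, items=items, W=16))   # → 124.0
```

A node whose bound falls below the best profit found so far is pruned, exactly as described in the two stopping rules above.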



Approximation Algorithms Basics

• A radically different approach to hard combinatorial
problems is approximation algorithms, which provide a
solution to the hard combinatorial problem reasonably close
to the optimal solution, but in polynomial time.
• In real-life situations we usually work with inaccurate data.
In such cases, a near-optimal solution is accepted as
good enough.



Approximation Algorithms Basics

• It is required to find a bound that provides a measure of how close
a proposed solution is to the optimal solution.

• For example, consider the TSP (Travelling Salesperson
Optimization Problem), which is NP-hard.



Approximation Algorithms Basics

• There is an algorithm whose solution has the following
guarantee: the approximate value is less than twice the optimal value
(i.e., approx value < 2 × opt value), where opt value is the optimal
solution of the problem and approx value is the approximate solution
that the algorithm outputs.
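One well-known algorithm achieving this guarantee for the metric TSP (where distances satisfy the triangle inequality) is the "twice-around-the-tree" method: build a minimum spanning tree, then shortcut a preorder walk of it into a tour. This is a general illustration, not the specific algorithm of the text; the instance below (four corners of a unit square) is hypothetical:

```python
# Twice-around-the-tree 2-approximation for metric TSP: the preorder walk of
# an MST, with repeated vertices shortcut, costs less than twice the optimum.
def tsp_2_approx(dist):
    n = len(dist)
    # Prim's algorithm for the MST, rooted at vertex 0.
    in_tree, parent = {0}, {0: None}
    best = {v: (dist[0][v], 0) for v in range(1, n)}
    while len(in_tree) < n:
        v = min(best, key=lambda u: best[u][0])   # cheapest vertex to attach
        parent[v] = best[v][1]
        in_tree.add(v)
        del best[v]
        for u in best:
            if dist[v][u] < best[u][0]:
                best[u] = (dist[v][u], v)
    children = {v: [] for v in range(n)}
    for v, p in parent.items():
        if p is not None:
            children[p].append(v)
    # Preorder walk of the MST gives the tour order (shortcutting repeats).
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    tour.append(0)                                # close the cycle
    cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, cost

r2 = 2 ** 0.5
# Four corners of a unit square; the optimal tour has length 4.
dist = [[0, 1, r2, 1],
        [1, 0, 1, r2],
        [r2, 1, 0, 1],
        [1, r2, 1, 0]]
tour, cost = tsp_2_approx(dist)
```

The guarantee holds because the tour costs at most twice the MST weight, and the MST weight is at most the optimal tour cost.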



Approximation Algorithms Basics

• Approximation Ratio
• The approximation ratio is defined as the ratio between
the result obtained by the approximation algorithm and the optimal
result of the algorithm. Consider a minimization optimization problem
P in which the main task is to minimize the given objective function.

• For example, the vertex cover problem, the graph colouring problem, etc.,
are combinatorial minimization optimization problems.
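For the vertex cover problem, the classic matching-based algorithm illustrates this ratio concretely: it repeatedly takes both endpoints of an uncovered edge, and its cover is at most twice the optimal size. The star-graph instance below is our own illustration, not from the text:

```python
# Matching-based 2-approximation for vertex cover: any optimal cover must
# contain at least one endpoint of each chosen edge, so taking both
# endpoints at most doubles the cover size.
def vertex_cover_approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))   # take both endpoints of an uncovered edge
    return cover

# Star graph: centre 0 joined to 1..4; the optimal cover is {0}, size 1.
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]
approx = vertex_cover_approx(edges)
ratio = len(approx) / 1            # approximation ratio = approx / optimal
```

On this instance the algorithm returns a cover of size 2 against an optimum of 1, so the approximation ratio is exactly 2, matching the worst-case bound.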



Important Topics

• Intelligent exhaustive search and its types

• Approximation Algorithms Basics



Summary

• In this session we have seen that:

• Problem-solving techniques such as backtracking and branch and
bound perform better in comparison to exhaustive
search. But unlike exhaustive search, these techniques construct
the solution step by step and evaluate the partial
solutions.
• A radically different approach to hard combinatorial problems is
approximation algorithms, which provide a solution to the
hard combinatorial problem reasonably close to the optimal solution,
but in polynomial time.
