
ALGORITHMS

MID TERM EXAMINATION

Course No : CSE 241


Course Title : Algorithms

Name: Umair Hossain


ID: 21225103103  INTAKE: 49  SECTION: 07

*** Algorithms
An algorithm is a finite sequence of well-defined steps to solve a particular problem: it takes some input, processes it, and produces the desired output.
Key characteristics of algorithms include:
Well-defined steps: Each step of the process is precisely defined, unambiguous, and executable; it must be clear what is to be done at every stage.
Input and output: An algorithm takes some input data, processes it, and generates an output, which is ideally the solution to the problem the algorithm aims to solve.
Finiteness: An algorithm must terminate after a finite number of steps, producing the desired output. It should not fall into an infinite loop or continue indefinitely.
Correctness: An algorithm should produce the correct output for all valid inputs and follow its intended logic, accurately solving the problem it was designed for.
Effectiveness: Algorithms should solve their problem efficiently, producing results in a reasonable amount of time and using reasonable resources (such as memory and processing power).

Types of algorithms
1. Brute Force Algorithm: It is the simplest approach to a problem, and usually the first that comes to mind when we see one.
2. Recursive Algorithm: A recursive algorithm is based on recursion. In this case, a problem is broken into
several sub-parts and called the same function again and again.
3. Backtracking Algorithm: A backtracking algorithm builds the solution by searching among all possible candidates. We keep extending the solution while it satisfies the problem's criteria; whenever a partial solution fails, we trace back to the failure point, try the next candidate, and continue until we find a solution or have explored every possibility.
4. Searching Algorithm: Searching algorithms are the ones that are used for searching elements or groups
of elements from a particular data structure. They can be of different types based on their approach or the
data structure in which the element should be found.
5. Sorting Algorithm: Sorting is arranging a group of data in a particular manner according to the
requirement. The algorithms which help in performing this function are called sorting algorithms.
Generally sorting algorithms are used to sort groups of data in an increasing or decreasing manner.
6. Hashing Algorithm: Hashing algorithms work similarly to searching algorithms, but they use an index computed from a key: in hashing, a key is assigned to specific data.
7. Divide and Conquer Algorithm: This algorithm breaks a problem into sub-problems, solves each sub-problem, and merges the solutions together to get the final solution. It consists of the following three steps:
• Divide
• Solve
• Combine
8. Greedy Algorithm: In this type of algorithm the solution is built piece by piece; at each step, the option giving the most immediate benefit is chosen as the next piece of the solution.
9. Dynamic Algorithm: This algorithm reuses already-computed solutions to avoid repeating the calculation of the same part of the problem. It divides the problem into smaller overlapping subproblems and solves each one only once, as in the sketch below.
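As a minimal illustration of the dynamic approach, the following C++ sketch (an added example; the names are illustrative) computes Fibonacci numbers with memoization, so each overlapping subproblem is solved only once:

#include <iostream>
#include <vector>

// Memoized Fibonacci: each subproblem fib(i) is computed once,
// stored in memo, and reused on every later call.
long long fib(int n, std::vector<long long>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];            // reuse a stored solution
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];
}

int main() {
    int n = 40;
    std::vector<long long> memo(n + 1, -1);
    std::cout << fib(n, memo) << '\n';            // prints 102334155
}

Without the memo table, the same recursion would recompute each fib(i) an exponential number of times.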


*** Characteristics of Greedy algorithms


Best-for-Now Choices: Greedy algorithms make the best choices they can at each step without worrying
about the future.
Quick Decisions: They're good for problems where making quick decisions helps find a solution that's okay,
even if not perfect.
No Backtracking: Greedy algorithms don't change their minds. Once a decision is made, they stick with it.
Not Always Perfect: Sometimes, the choices they make might not give the very best solution. But they often
do a pretty good job.
Simple and Fast: They're easy to understand and can be faster than other methods for certain problems.
Use When It Works: Greedy algorithms work well for some problems, like finding the best way to give change (a sketch follows below). But for other problems, they might not give the best result.
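As a sketch of the change-giving example (the coin denominations here are an assumption, not from the notes): with a canonical system such as {25, 10, 5, 1}, repeatedly taking the largest coin that still fits is optimal.

#include <iostream>
#include <vector>

// Greedy change-making: always take the largest coin that still fits.
// Optimal for canonical systems like {25, 10, 5, 1}, not for arbitrary ones.
int greedyChange(int amount, const std::vector<int>& coins /* sorted descending */) {
    int used = 0;
    for (int c : coins) {
        used += amount / c;    // take as many of this coin as possible
        amount %= c;
    }
    return used;
}

int main() {
    std::vector<int> coins = {25, 10, 5, 1};
    std::cout << greedyChange(67, coins) << '\n';  // 25+25+10+5+1+1 = 6 coins
}

The caveat above is real: with denominations {9, 6, 1} and amount 12, this greedy picks 9+1+1+1 (4 coins) while the optimum is 6+6 (2 coins).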

*** Difference between Greedy and Dynamic programming

Feasibility: In a greedy algorithm, we make whatever choice seems best at the moment, in the hope that it will lead to a globally optimal solution. In dynamic programming, we make the decision at each step considering the current problem and the solutions to previously solved subproblems, in order to calculate an optimal solution.

Optimality: In the greedy method there is sometimes no guarantee of getting an optimal solution. Dynamic programming is guaranteed to generate an optimal solution, as it generally considers all possible cases and then chooses the best.

Recursion: A greedy method follows the problem-solving heuristic of making the locally optimal choice at each stage. Dynamic programming is an algorithmic technique usually based on a recurrence that uses previously calculated states.

Memoization: The greedy method is more efficient in terms of memory, as it never looks back or revises previous choices. Dynamic programming requires a table for memoization, which increases its memory complexity.

Time complexity: Greedy methods are generally faster; dynamic programming is generally slower.

Fashion: The greedy method computes its solution by making its choices in a serial, forward fashion, never looking back or revising previous choices. Dynamic programming computes its solution bottom-up or top-down by synthesizing it from smaller optimal sub-solutions.

Example: Fractional knapsack (greedy); 0/1 knapsack (dynamic programming).


*** Difference between Brute Force and Greedy algorithms

Approach: Brute force exhaustively tries all possible solutions; greedy makes locally optimal choices at each step.

Decision making: Brute force considers all options and evaluates each one; greedy makes quick decisions without looking ahead.

Efficiency: Brute force can be inefficient for large problem spaces; greedy tends to be more efficient for certain problems.

Optimality: Brute force guarantees an optimal solution if the search is exhaustive; greedy might not guarantee a globally optimal solution.

Backtracking: Brute force may require backtracking to try different options; greedy doesn't backtrack and sticks with its initial choices.

Simplicity: Brute force is conceptually simple but can be slow; greedy is often simple and fast for some problem types.

Applicability: Brute force is suitable for small problem instances; greedy is useful for problems with the greedy-choice property.

Solution quality: Brute force always finds the best solution, given enough time; greedy might find a good solution, not necessarily the best.

Examples: Brute force: Traveling Salesman Problem (TSP), Subset Sum. Greedy: Coin Change, Minimum Spanning Tree (in some cases).

Time complexity: Brute force is high due to exhaustive search; greedy is generally lower in many cases.

Proof of correctness: Brute force doesn't require complex correctness proofs; greedy sometimes requires a proof to show optimality.

*** Difference between Divide and Conquer and Dynamic programming

Approach: Divide and conquer breaks the problem into non-overlapping subproblems; dynamic programming solves overlapping subproblems and stores their solutions.

Subproblem solving: In divide and conquer, subproblems are solved independently and their solutions are combined; in dynamic programming, subproblems are solved once and their solutions are stored for reuse.

Optimal substructure: Divide and conquer does not always require it; some problems may not exhibit this property. Dynamic programming problems exhibit optimal substructure; solutions can be built from optimal subproblem solutions.

Overlapping subproblems: Not a defining characteristic of divide and conquer; a key characteristic of dynamic programming, where solutions are stored for reuse.

Examples: Divide and conquer: Merge Sort, Quicksort, Fast Fourier Transform. Dynamic programming: Knapsack Problem, Fibonacci Sequence, Shortest Path Problems.

Recursive vs iterative: Divide and conquer is often implemented using recursion; dynamic programming can be implemented using either recursion (top-down) or iteration (bottom-up).

Memory usage: Divide and conquer is generally less memory-intensive; dynamic programming can be more memory-intensive due to the table of stored solutions.

Trade-offs: Divide and conquer may involve redundant work due to the lack of solution storage; dynamic programming efficiently handles overlapping subproblems but requires more memory.


Rate of Growth of a function (Asymptotic notations)


The rate of growth of a function refers to how quickly the output of the function increases relative to the size
of its input as the input becomes larger.

Asymptotic notations are mathematical tools used in computer science and mathematics to describe the
behavior and growth rate of functions as their input sizes approach infinity.

Order of time complexity


1 < log n < √n < n < n log n < n² < n³ < ... < 2ⁿ < 3ⁿ < 4ⁿ < ... < n! < nⁿ

Big O Notation (O) (Worst Case): This notation provides an upper bound on the growth rate of a function. A function f(n) is said to be O(g(n)) if there exist a constant c and an input size n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀.

Example: Let f(n) = 2n + 3. Can we write f(n) = O(g(n))?

Solution: We need a constant c and a starting point n₀ with 2n + 3 ≤ c·g(n) for all n ≥ n₀. Take g(n) = n. Since 3 ≤ 3n for n ≥ 1,
f(n) = 2n + 3 ≤ 2n + 3n = 5n for all n ≥ 1.
So with c = 5 and n₀ = 1, we can write f(n) = O(g(n)) = O(n).

Omega Notation (Ω) (Best Case): This notation provides a lower bound on the growth rate of a function. A function f(n) is Ω(g(n)) if there exist a constant c and an input size n₀ such that f(n) ≥ c·g(n) for all n ≥ n₀.

Example: Let f(n) = 2n + 3. Can we write f(n) = Ω(g(n))?

Solution: We need 2n + 3 ≥ c·g(n) for all n ≥ n₀. Take g(n) = n and c = 1:
f(n) = 2n + 3 ≥ n for all n ≥ 1.
So with c = 1 and n₀ = 1, we can write f(n) = Ω(g(n)) = Ω(n).

Theta Notation (Θ) (Tight/Average Bound): This notation provides a tight bound on the growth rate of a function. A function f(n) is Θ(g(n)) if there exist constants c₁ and c₂ and an input size n₀ such that
c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀,
i.e. f(n) is simultaneously Ω(g(n)) and O(g(n)).


Example: Let f(n) = 2n + 3. Can we write f(n) = Θ(g(n))?

Here, take g(n) = n.
2n + 3 ≥ c₁·g(n) with c₁ = 1: f(n) ≥ n for all n ≥ 1.
Again, 2n + 3 ≤ c₂·g(n) with c₂ = 5: f(n) ≤ 2n + 3n = 5n for all n ≥ 1.
Now c₁·g(n) ≤ f(n) ≤ c₂·g(n), i.e. n ≤ 2n + 3 ≤ 5n, so we can write f(n) = Θ(n).

*** Find the lower bound of the running time of the cubic function f(n) = 2n³ + 4n + 5
Solution:

The lower bound of f(n) = 2n³ + 4n + 5 will be Ω(g(n)). If a function f(n) is Ω(g(n)), there exist a constant c and an input size n₀ beyond which f(n) is always greater than or equal to c·g(n) for all n ≥ n₀:
2n³ + 4n + 5 ≥ c·g(n)

Here g(n) = n³. For the lower-bound analysis we aim to find a constant c and a starting point n₀ beyond which the inequality holds.

Putting c = 1 in the inequality, we have
2n³ + 4n + 5 ≥ n³.

Choosing n₀ = 1, the inequality holds for all n ≥ 1, which satisfies the condition for the lower-bound relationship.

So the lower bound of the running time of the cubic function f(n) = 2n³ + 4n + 5 is Ω(n³).

*** If f(n) = 6·2ⁿ + 2n, then what are wrong upper bounds for this function?
Solution:

The function f(n) = 6·2ⁿ + 2n grows exponentially with n, so any upper bound that grows more slowly than an exponential is wrong.

Let 6·2ⁿ + 2n ≤ c·2ⁿ with g(n) = 2ⁿ. Since 2n ≤ 2ⁿ for n ≥ 1, c = 6 + 1 = 7 works.

This means that f(n) is always less than or equal to 7·2ⁿ for all n ≥ n₀ = 1.

Some wrong upper bounds for f(n) are:

• O(n)
• O(n²)
• O(n³)
• O(log n)
• O(1)

The correct upper bound for f(n) is O(2ⁿ).


Code Analysis

Example
1 algorithm sum(a, n)
2 {
3     sum := 0;
4     for i in 1 ... N loop
5         sum := sum + a[i];
6     end loop;
7     return sum;
8 }
How many times does the statement on line 5 execute?
Solution: N times.

Example
1 algorithm sum(a, n)
2 {
3     sum := 0;
4     for i in 1 ... M loop
5         for j in 1 ... N loop
6             sum := sum + a[j];
7         end loop;
8     end loop;
9     return sum;
10 }
How many times does the statement on line 6 execute?
Solution: M*N times.

***Example
1 for i in 1 ... N loop
2 for j in 1 ... i loop
3 for k in i ... j loop
4 sum:=sum+1
5 end loop;
6 end loop;
7 end loop;
8 for p in 1 ... N*N loop
9 for q in 1 ... p loop
10 sum:=sum-1
11 end loop;
12 end loop;

1) How many times does the statement on line 4 execute?
2) How many times does the statement on line 10 execute?
3) What is the running time of the fragment?


Solution:
1) N times. The loop "for k in i ... j" runs only when j ≥ i; since j ranges from 1 to i, that happens exactly once per value of i (when j = i), and the body then runs once. Over all i, line 4 executes N times.
2) Sum of p for p = 1 to N², i.e. (N² · (N² + 1))/2 times.
3) The first fragment does O(N²) work (the j-loop runs N(N+1)/2 times) and the second does O(N⁴) work, so the running time of the whole fragment is O(N⁴).
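These counts can be checked empirically; a small C++ sketch (assuming inclusive loop bounds, matching the Ada-style fragment above) with N = 10:

#include <iostream>

int main() {
    const long long N = 10;
    long long line4 = 0, line10 = 0;

    // First fragment: the k-loop body runs only when j == i (once per i).
    for (long long i = 1; i <= N; ++i)
        for (long long j = 1; j <= i; ++j)
            for (long long k = i; k <= j; ++k)
                ++line4;

    // Second fragment: the q-loop body runs p times for p = 1..N*N.
    for (long long p = 1; p <= N * N; ++p)
        for (long long q = 1; q <= p; ++q)
            ++line10;

    std::cout << line4 << '\n';                   // 10  (= N)
    std::cout << line10 << '\n';                  // 5050
    std::cout << N * N * (N * N + 1) / 2 << '\n'; // 5050 (= N²(N²+1)/2)
}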

Sorting Algorithms

Working of Insertion Sort


Suppose we need to sort an array. The first element is assumed to be sorted. Take the second element and store it separately in key. Compare key with the first element; if the first element is greater than key, key is placed in front of the first element. Each subsequent element is inserted into its correct position within the sorted part in the same way.

Pseudocode:

InsertionSort(arr)
    for i = 1 to length(arr) - 1
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key
            arr[j + 1] = arr[j]
            j = j - 1
        end while
        arr[j + 1] = key
    end for
end InsertionSort


Working of Selection Sort


Suppose we need to sort an array. Set the first element as minimum and compare minimum with the second element. If the second element is smaller than minimum, assign the second element as minimum. Compare minimum with the third element; again, if the third element is smaller, assign it as minimum, otherwise do nothing. The process goes on until the last element.

After each iteration, minimum is placed at the front of the unsorted part of the list. The same process is then applied to the remaining unsorted part until the whole array is sorted.

Pseudocode:

SelectionSort(arr)
    for i = 0 to length(arr) - 2
        minIndex = i
        for j = i + 1 to length(arr) - 1
            if arr[j] < arr[minIndex]
                minIndex = j
            end if
        end for
        swap(arr[i], arr[minIndex])
    end for
end SelectionSort


Working of Merge Sort


Merge Sort is one of the most popular sorting algorithms that is based on the principle of Divide and Conquer
Algorithm. Here, a problem is divided into multiple sub-problems. Each sub-problem is solved individually.
Finally, sub-problems are combined to form the final solution.

Pseudocode: Merge Sort

MergeSort(arr)
    if length(arr) <= 1
        return arr

    mid = length(arr) / 2
    left = arr[0 to mid - 1]
    right = arr[mid to end]

    left = MergeSort(left)
    right = MergeSort(right)

    return Merge(left, right)
end MergeSort

Merge(left, right)
    result = []
    while length(left) > 0 and length(right) > 0
        if left[0] <= right[0]
            result.append(left[0])
            left = left[1 to end]
        else
            result.append(right[0])
            right = right[1 to end]
        end if
    end while

    result.extend(left)
    result.extend(right)

    return result
end Merge


Working of Quick Sort

1. Select the Pivot Element


There are different variations of quicksort where the pivot
element is selected from different positions. Here, we will be
selecting the rightmost element of the array as the pivot
element.
2. Rearrange the Array
Now the elements of the array are rearranged so that elements
that are smaller than the pivot are put on the left and the
elements greater than the pivot are put on the right.

3. Divide Subarrays
Pivot elements are again chosen for the left and the right sub-
parts separately. And, step 2 is repeated.


Pseudocode: Quick Sort

QuickSort(arr, low, high)
    if low < high
        pivotIndex = Partition(arr, low, high)
        QuickSort(arr, low, pivotIndex - 1)
        QuickSort(arr, pivotIndex + 1, high)
    end if
end QuickSort

Partition(arr, low, high)
    pivot = arr[high]
    i = low - 1

    for j = low to high - 1
        if arr[j] <= pivot
            i = i + 1
            swap(arr[i], arr[j])
        end if
    end for

    swap(arr[i + 1], arr[high])
    return i + 1
end Partition

Working of Radix Sort


1) Find the largest element in the array, i.e. max. Let X be the number of digits in max. X is calculated
because we have to go through all the significant places of all elements.
In this array [121, 432, 564, 23, 1, 45, 788], we have the largest number 788. It has 3 digits. Therefore, the
loop should go up to hundreds place (3 times).

2) Now, go through each significant place one by one.


Use any stable sorting technique to sort the digits at each significant place; counting sort is used here.
First, sort the elements based on the digits at the unit place.

3) Now, sort the elements based on digits at tens place

4) Finally, sort the elements based on the digits at hundreds place.

Pseudocode: Radix Sort

RadixSort(arr)
    maxNum = FindMax(arr)

    exp = 1
    while maxNum / exp > 0
        CountingSortByDigit(arr, exp)
        exp = exp * 10
    end while
end RadixSort

CountingSortByDigit(arr, exp)
    n = length(arr)
    output = new array of size n
    count = new array of size 10 initialized to 0

    // count occurrences of each digit at the current place
    for i = 0 to n - 1
        digit = (arr[i] / exp) % 10
        count[digit] = count[digit] + 1
    end for

    // prefix sums: count[d] becomes one past the last slot for digit d
    for i = 1 to 9
        count[i] = count[i] + count[i - 1]
    end for

    // place elements stably, traversing from the end
    for i = n - 1 downto 0
        digit = (arr[i] / exp) % 10
        output[count[digit] - 1] = arr[i]
        count[digit] = count[digit] - 1
    end for

    copy output to arr
end CountingSortByDigit

FindMax(arr)
    maxNum = arr[0]
    for i = 1 to length(arr) - 1
        if arr[i] > maxNum
            maxNum = arr[i]
        end if
    end for
    return maxNum
end FindMax

Working of Heap Sort


If the index of any element in the array is i, the element at index 2i+1 is its left child and the element at index 2i+2 is its right child. Also, the parent of any element at index i is at index floor((i-1)/2).
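These index relations can be checked with a few lines of C++ (an added sketch):

#include <cstdio>

// Array-as-binary-tree index arithmetic used by binary heaps.
int leftChild(int i)  { return 2 * i + 1; }
int rightChild(int i) { return 2 * i + 2; }
int parent(int i)     { return (i - 1) / 2; }  // integer division = floor here

int main() {
    // Node at index 2: children at indices 5 and 6, parent at index 0.
    std::printf("%d %d %d\n", leftChild(2), rightChild(2), parent(2));
}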

What is Heap Data Structure?


A heap is a special tree-based data structure. A binary tree is said to follow the heap property if:
• It is a complete binary tree.
• Every node is greater than its children, i.e. the largest element is at the root, both its children are smaller than the root, and so on. Such a heap is called a max-heap. If instead all nodes are smaller than their children, it is called a min-heap.

How to "heapify" a tree?


Starting from a complete binary tree, we can modify it to become a Max-Heap by running a function called
heapify on all the non-leaf elements of the heap.

Build max-heap
To build a max-heap from any tree, we can thus start heapifying each sub-tree from the bottom up and end
up with a max-heap after the function is applied to all the elements including the root element.
In the case of a complete tree, the first index of a non-leaf node is given by n/2 - 1. All other nodes after that
are leaf-nodes and thus don't need to be heapified.
So, we can build a maximum heap as

1 // Build heap (rearrange array)


2 for (int i = n / 2 - 1; i >= 0; i--)
3 heapify(arr, n, i);


Working of Heap Sort


1. Since the tree satisfies the max-heap property, the largest item is stored at the root node.
2. Swap: remove the root element and put it at the end of the array (the nth position); put the last item of the tree (heap) in the vacant place.
3. Remove: reduce the size of the heap by 1.
4. Heapify: heapify the root element again so that the highest element is back at the root.
5. The process is repeated until all the items of the list are sorted.


Pseudocode: Heap Sort

HeapSort(arr)
    n = length(arr)

    // Build a max heap
    for i = n / 2 - 1 down to 0
        Heapify(arr, n, i)

    // Extract elements from the heap one by one
    for i = n - 1 down to 1
        swap(arr[0], arr[i])
        Heapify(arr, i, 0)
end HeapSort

Heapify(arr, n, i)
    largest = i
    left = 2 * i + 1
    right = 2 * i + 2

    if left < n and arr[left] > arr[largest]
        largest = left

    if right < n and arr[right] > arr[largest]
        largest = right

    if largest ≠ i
        swap(arr[i], arr[largest])
        Heapify(arr, n, largest)
end Heapify


*** If you have a large number of unsorted elements, which of Merge Sort and Insertion Sort will be more efficient? Justify your answer.

Merge sort will be more efficient than insertion sort for a large number of unsorted elements.

Merge sort is a divide-and-conquer algorithm that works by recursively splitting the array in half, sorting the halves, and then merging the sorted halves back together. Its worst-case time complexity is O(n log n), which is asymptotically better than the worst-case time complexity of insertion sort, O(n²).
Insertion sort is an in-place algorithm that sorts an array by repeatedly inserting each element into its correct position in the sorted part of the array. Its worst case, O(n²), occurs for reverse-sorted input; it is fast only on already sorted or nearly sorted arrays, which cannot be assumed here.
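One way to see the gap concretely is to time both on the same random input. The C++ sketch below compares a simple insertion sort against std::stable_sort (typically implemented as a merge sort); the array size and seed are arbitrary choices:

#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <vector>

// Textbook insertion sort, O(n^2) on random input.
void insertionSort(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; --j; }
        a[j] = key;
    }
}

int main() {
    std::mt19937 rng(42);
    std::vector<int> a(20000);
    for (int& x : a) x = static_cast<int>(rng() % 1000000);
    std::vector<int> b = a;

    auto t0 = std::chrono::steady_clock::now();
    insertionSort(a);
    auto t1 = std::chrono::steady_clock::now();
    std::stable_sort(b.begin(), b.end());   // O(n log n), merge-sort style
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::cout << "insertion sort: " << ms(t1 - t0).count() << " ms\n"
              << "stable_sort:    " << ms(t2 - t1).count() << " ms\n";
}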

Greedy Algorithms

Activity Selection Problem


Select the maximum possible number of activities from the start and end times below.
START: 11  8  5  0  2  3  1
END:   12 10  9  7  5  4  3

Solution:
Our target is to select non-conflicting activities based on start and end times; this can be done in O(N log N) time using a simple greedy approach.
Step 1: Sort the activities by ending time.
Step 2: Select the first activity, then repeatedly compare the finish time of the last selected activity with the next activity's start time, selecting the activity whenever it starts no earlier.
Sorted by end time, the activities are (1,3), (3,4), (2,5), (0,7), (5,9), (8,10), (11,12); the greedy selection is (1,3), (3,4), (5,9), (11,12), i.e. 4 activities.

Code (C++): Activity Selection

// Assumes start[] and finish[] are already sorted by finish time (Step 1).
void ActivitySelection(int start[], int finish[], int n)
{
    cout << "The following activities are selected:\n";
    int j = 0;
    cout << "Start: " << start[0] << "\nEnd: " << finish[0] << endl;
    for (int i = 1; i < n; i++)
    {
        if (start[i] >= finish[j])
        {
            cout << "Start: " << start[i] << "\nEnd: " << finish[i] << endl;
            j = i;
        }
    }
}
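A self-contained variant that performs the sort itself might look like the sketch below (an added example; names are illustrative):

#include <algorithm>
#include <iostream>
#include <utility>
#include <vector>

// Greedy activity selection: sort by finish time, then repeatedly pick
// the first activity that starts no earlier than the last selected finish.
int selectActivities(std::vector<std::pair<int, int>> acts /* {start, end} */) {
    std::sort(acts.begin(), acts.end(),
              [](const auto& a, const auto& b) { return a.second < b.second; });
    int count = 0, lastEnd = -1;
    for (const auto& [s, e] : acts) {
        if (s >= lastEnd) {
            std::cout << "Start: " << s << "  End: " << e << '\n';
            lastEnd = e;
            ++count;
        }
    }
    return count;
}

int main() {
    std::vector<std::pair<int, int>> acts =
        {{11, 12}, {8, 10}, {5, 9}, {0, 7}, {2, 5}, {3, 4}, {1, 3}};
    std::cout << "Selected: " << selectActivities(acts) << '\n';  // 4
}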

Huffman Coding
Suppose the string below is to be sent over a network.
Message: BCCABBDDAECCBBAEDDCC
Encode the message with fixed-length codes and calculate how many bits are required to transfer it.

Solution (Fixed Length):
Given,
Message: BCCABBDDAECCBBAEDDCC
Length = 20, so message size = 20 × 3 = 60 bits

Character   Frequency   Code
A           3           000
B           5           001
C           6           010
D           4           011
E           2           100

Table size = 5 × 8 = 40 bits (characters) + 5 × 3 = 15 bits (codes)
Size = message size + table size = 60 + 40 + 15 = 115 bits

Solution (Variable Length):

Frequencies in increasing order: E(2), A(3), D(4), B(5), C(6); the Huffman tree is built by repeatedly merging the two smallest.

Character   Frequency   Code
A           3           011
B           5           10
C           6           11
D           4           00
E           2           010

Table size = 5 × 8 = 40 bits (characters) + 12 bits (codes)
Message size = 3×3 + 5×2 + 6×2 + 4×2 + 2×3 = 45 bits
Total size = 45 + 40 + 12 = 97 bits
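The construction can be sketched in C++ with a min-priority queue (an added example; the node layout is illustrative, and tie-breaking may yield different but equally optimal codes):

#include <iostream>
#include <queue>
#include <string>
#include <vector>

struct Node {
    int freq;
    char ch;              // '\0' for internal nodes
    Node *left, *right;
};

struct Cmp { bool operator()(Node* a, Node* b) const { return a->freq > b->freq; } };

// Walk the finished tree and print the code assigned to each character.
void printCodes(Node* n, const std::string& code) {
    if (!n) return;
    if (!n->left && !n->right) { std::cout << n->ch << ": " << code << '\n'; return; }
    printCodes(n->left, code + "0");
    printCodes(n->right, code + "1");
}

int main() {
    // Frequencies from the message BCCABBDDAECCBBAEDDCC.
    std::vector<std::pair<char, int>> freq = {{'A',3},{'B',5},{'C',6},{'D',4},{'E',2}};
    std::priority_queue<Node*, std::vector<Node*>, Cmp> pq;
    for (auto [c, f] : freq) pq.push(new Node{f, c, nullptr, nullptr});

    // Standard Huffman construction: merge the two lightest nodes until one remains.
    while (pq.size() > 1) {
        Node* a = pq.top(); pq.pop();
        Node* b = pq.top(); pq.pop();
        pq.push(new Node{a->freq + b->freq, '\0', a, b});  // nodes leak; fine for a sketch
    }
    printCodes(pq.top(), "");
}

Whatever the tie-breaking, the code lengths match the table: 2 bits each for B, C, D and 3 bits each for A and E, giving the same 45-bit encoded message.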

Knapsack Problem

Fractional Knapsack (Greedy Method)

Knapsack capacity W = 15.

Objects     1    2    3    4    5    6     7
Profits(p)  12   5    16   7    9    11    6
Weights(w)  3    1    4    2    9    4     3
p/w         4    5    4    3.5  1    2.75  2

Take objects in decreasing order of p/w:

Objects   Profit         Weight   Remaining Weight
2         5              1        15 - 1 = 14
1         12             3        14 - 3 = 11
3         16             4        11 - 4 = 7
4         7              2        7 - 2 = 5
6         11             4        5 - 4 = 1
7         6 × 1/3 = 2    1        1 - 1 = 0
Total     53             15

Maximum profit = 53.
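A compact C++ sketch of this greedy computation (capacity 15 taken from the table above):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // Profits and weights from the table; capacity W = 15.
    std::vector<double> p = {12, 5, 16, 7, 9, 11, 6};
    std::vector<double> w = {3, 1, 4, 2, 9, 4, 3};
    double capacity = 15, profit = 0;

    // Sort object indices by profit/weight ratio, highest first.
    std::vector<int> idx = {0, 1, 2, 3, 4, 5, 6};
    std::sort(idx.begin(), idx.end(),
              [&](int a, int b) { return p[a] / w[a] > p[b] / w[b]; });

    for (int i : idx) {
        if (capacity <= 0) break;
        double take = std::min(w[i], capacity);   // whole object, or a fraction
        profit += p[i] * (take / w[i]);
        capacity -= take;
    }
    std::cout << "Maximum profit: " << profit << '\n';   // 53
}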

0/1 Knapsack (Dynamic Programming)

Objects     1  2  3  4        Max weight W = 5
Weights(w)  3  2  5  4        N = 4
Profits(p)  4  3  6  5

Formula: A[i, w] = max(A[i-1, w], A[i-1, w - w[i]] + p[i])

         w=0  w=1  w=2  w=3  w=4  w=5
i=0       0    0    0    0    0    0
i=1       0    0    0    4    4    4
i=2       0    0    3    4    4    7
i=3       0    0    3    4    4    7
i=4       0    0    3    4    5    7

Maximum profit = A[4, 5] = 7.
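The same recurrence in runnable form, over the table's data (an added sketch):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // Data from the table: weights, profits, capacity W = 5.
    std::vector<int> w = {3, 2, 5, 4}, p = {4, 3, 6, 5};
    int n = 4, W = 5;

    // A[i][j] = best profit using the first i objects with capacity j.
    std::vector<std::vector<int>> A(n + 1, std::vector<int>(W + 1, 0));
    for (int i = 1; i <= n; ++i)
        for (int j = 0; j <= W; ++j) {
            A[i][j] = A[i - 1][j];                 // skip object i
            if (j >= w[i - 1])                     // or take it, if it fits
                A[i][j] = std::max(A[i][j], A[i - 1][j - w[i - 1]] + p[i - 1]);
        }
    std::cout << "Maximum profit: " << A[n][W] << '\n';   // 7
}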


Longest Common Subsequence (LCS)

X = PRESIDENT
Y = PROVIDENCE

        ""  P  R  O  V  I  D  E  N  C  E
   ""    0  0  0  0  0  0  0  0  0  0  0
   P     0  1  1  1  1  1  1  1  1  1  1
   R     0  1  2  2  2  2  2  2  2  2  2
   E     0  1  2  2  2  2  2  3  3  3  3
   S     0  1  2  2  2  2  2  3  3  3  3
   I     0  1  2  2  2  3  3  3  3  3  3
   D     0  1  2  2  2  3  4  4  4  4  4
   E     0  1  2  2  2  3  4  5  5  5  5
   N     0  1  2  2  2  3  4  5  6  6  6
   T     0  1  2  2  2  3  4  5  6  6  6

The length of the LCS is the bottom-right entry, 6; one such subsequence is PRIDEN.
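The table is filled by the standard LCS recurrence: L[i][j] = L[i-1][j-1] + 1 when the characters match, otherwise max(L[i-1][j], L[i][j-1]). A C++ sketch that reproduces it:

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::string X = "PRESIDENT", Y = "PROVIDENCE";
    int m = X.size(), n = Y.size();

    // L[i][j] = length of the LCS of the first i chars of X and first j of Y.
    std::vector<std::vector<int>> L(m + 1, std::vector<int>(n + 1, 0));
    for (int i = 1; i <= m; ++i)
        for (int j = 1; j <= n; ++j)
            L[i][j] = (X[i - 1] == Y[j - 1])
                          ? L[i - 1][j - 1] + 1
                          : std::max(L[i - 1][j], L[i][j - 1]);

    std::cout << "LCS length: " << L[m][n] << '\n';   // 6 ("PRIDEN")
}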
