CSE 241 Algorithms Midterm
*** Algorithms
An algorithm is a finite sequence of well-defined steps for solving a particular problem. It is a well-defined sequence of
instructions that takes some input, processes it, and produces the desired output.
Key characteristics of algorithms include:
Well-Defined Steps: Algorithms provide clear and unambiguous instructions for each step of the process.
Each step must be precisely defined and executable.
Input and Output: Algorithms take some input data, process it, and generate an output. The output is ideally
the solution to the problem the algorithm aims to solve.
Finiteness: Algorithms must eventually terminate after a finite number of steps, producing the desired output.
They should not result in an infinite loop or continue indefinitely.
Correctness: An algorithm should produce the correct output for all valid inputs and follow the intended
logic. It should accurately solve the problem it was designed for.
Well-defined instructions: Each step or instruction in an algorithm must be precisely defined and
unambiguous. It should be clear what needs to be done at each stage.
Effective: Algorithms are designed to solve specific problems or tasks efficiently. This means they should be
practical and capable of producing results in a reasonable amount of time and using reasonable resources (such
as memory and processing power).
Types of algorithms
1. Brute Force Algorithm: It is the simplest approach to a problem. A brute force algorithm is the first
approach that comes to mind when we see a problem.
2. Recursive Algorithm: A recursive algorithm is based on recursion. The problem is broken into
smaller sub-parts, and the same function is called again and again on those smaller inputs.
3. Backtracking Algorithm: The backtracking algorithm builds the solution by searching among all
possible solutions. We build the solution incrementally according to some criteria; whenever a partial
solution fails, we trace back to the failure point, try the next candidate, and continue this process
until we find a solution or all possibilities have been explored.
4. Searching Algorithm: Searching algorithms are the ones that are used for searching elements or groups
of elements from a particular data structure. They can be of different types based on their approach or the
data structure in which the element should be found.
5. Sorting Algorithm: Sorting is arranging a group of data in a particular manner according to the
requirement. The algorithms which help in performing this function are called sorting algorithms.
Generally sorting algorithms are used to sort groups of data in an increasing or decreasing manner.
6. Hashing Algorithm: Hashing algorithms work similarly to searching algorithms, but they use an
index built from a key: in hashing, a key is assigned to specific data so that it can be located directly.
7. Divide and Conquer Algorithm: This algorithm breaks a problem into sub-problems, solves each
sub-problem, and merges the solutions together to get the final solution (see the binary search sketch
after this list). It consists of the following three steps:
• Divide
• Solve
• Combine
8. Greedy Algorithm: In this type of algorithm the solution is built part by part. At each step, the
choice that gives the most immediate benefit is selected as the solution for the next part, without
reconsidering earlier choices.
9. Dynamic Programming Algorithm: This algorithm reuses already-computed solutions to avoid
repeating the same calculation. It divides the problem into smaller overlapping subproblems and
solves each of them only once.
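For concreteness, here is a minimal sketch of the recursive / divide-and-conquer style described above. It is written in C only for illustration (the notes themselves use language-agnostic pseudocode); the sample array and target value are assumptions for the demo.

#include <stdio.h>

/* Recursive binary search: divide the range in half, solve one half,
   and combine trivially (the answer of the chosen half is the answer). */
int binary_search(const int a[], int lo, int hi, int key) {
    if (lo > hi)                     /* empty range: key not present */
        return -1;
    int mid = lo + (hi - lo) / 2;
    if (a[mid] == key)               /* solved directly */
        return mid;
    if (key < a[mid])                /* conquer the left half */
        return binary_search(a, lo, mid - 1, key);
    return binary_search(a, mid + 1, hi, key);   /* or the right half */
}

int main(void) {
    int a[] = {1, 23, 45, 121, 432, 564, 788};   /* sorted sample data (assumed) */
    int idx = binary_search(a, 0, 6, 121);
    printf("121 found at index %d\n", idx);       /* prints 3 */
    return 0;
}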
Asymptotic notations are mathematical tools used in computer science and mathematics to describe the
behavior and growth rate of functions as their input sizes approach infinity.
Big O Notation (O) (Worst Case): This notation provides an upper bound on the growth rate of a function.
If a function f(n) is said to be O(g(n)), it means that there exist a constant c and an input size n0 beyond which
f(n) never exceeds c·g(n), i.e. f(n) ≤ c·g(n) for all n ≥ n0, and we write f(n) = O(g(n)).
Omega Notation (Ω) (Best Case): This notation provides a lower bound on the growth rate of a function. If
a function f(n) is Ω(g(n)), it means that there exist a constant c and an input size n0 beyond which f(n) is
always greater than or equal to c·g(n), i.e. f(n) ≥ c·g(n) for all n ≥ n0, and we write f(n) = Ω(g(n)).
For example, if f(n) = 2n + 3 and g(n) = n, then f(n) ≥ n for all n ≥ 1, so f(n) = Ω(n).
Theta Notation (Θ) (Average Case): This notation provides a tight bound on the growth rate of a
function. If a function f(n) is Θ(g(n)), it means that there exist constants c1, c2 and an input size n0 such that
c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
Continuing the example with f(n) = 2n + 3 and g(n) = n: taking c1 = 1 and c2 = 5 gives n ≤ 2n + 3 ≤ 5n for all n ≥ 1, so we can write f(n) = Θ(n).
*** Find the lower bound of the running time of the cubic function f(n) = 2n³ + 4n + 5
Solution:
The lower bound of the function f(n) = 2n³ + 4n + 5 is Ω(g(n)). If a function f(n) is Ω(g(n)), it means that
there exist a constant c and an input size n0 beyond which f(n) is always greater than or equal to c·g(n)
for all n ≥ n0.
So we need 2n³ + 4n + 5 ≥ c·g(n).
Here, g(n) = n³. For the lower bound analysis we aim to find a constant c and a starting point n0 beyond which
the inequality holds; for example, with c = 2 and n0 = 1 we get 2n³ + 4n + 5 ≥ 2n³ for all n ≥ 1.
So the lower bound of the running time of the cubic function f(n) = 2n³ + 4n + 5 is Ω(n³).
*** If f(n) = 6·2ⁿ + 2n, then what will be the wrong upper bounds for this function?
Solution:
The function f(n) = 6·2ⁿ + 2n grows exponentially with n, so any upper bound that grows more slowly than an
exponential is wrong. For example, O(n), O(n log n), and O(n²) are all wrong upper bounds, while O(2ⁿ) is a
correct (and tight) upper bound.
Code Analysis
Example
1  algorithm sum(a, N)
2  {
3      sum := 0;
4      for i in 1 ... N loop
5          sum := sum + a[i];
6      end loop;
7      return sum;
8  }
How many times does the statement in line 5 execute?
Solution: N times.
Example
1   algorithm sum(a, M, N)
2   {
3       sum := 0;
4       for i in 1 ... M loop
5           for j in 1 ... N loop
6               sum := sum + a[i];
7           end loop;
8       end loop;
9       return sum;
10  }
How many times does the statement in line 6 execute?
Solution: M × N times.
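These counts can be checked by instrumenting the loops with a counter. The C sketch below mirrors the two examples above; the values of M and N are arbitrary sample sizes, not taken from the notes.

#include <stdio.h>

int main(void) {
    int N = 4, M = 3;                  /* arbitrary sample sizes */
    long count = 0;

    /* single loop: the body runs N times */
    for (int i = 1; i <= N; i++)
        count++;
    printf("single loop body executed %ld times (N = %d)\n", count, N);

    /* nested loops: the body runs M * N times */
    count = 0;
    for (int i = 1; i <= M; i++)
        for (int j = 1; j <= N; j++)
            count++;
    printf("nested loop body executed %ld times (M*N = %d)\n", count, M * N);
    return 0;
}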
*** Example
1   for i in 1 ... N loop
2       for j in 1 ... i loop
3           for k in i ... j loop
4               sum := sum + 1;
5           end loop;
6       end loop;
7   end loop;
8   for p in 1 ... N*N loop
9       for q in 1 ... p loop
10          sum := sum - 1;
11      end loop;
12  end loop;
Solution:
1) 0 times, because j never exceeds i, so the condition k < j is never satisfied and line 4 never executes.
2) (N² · (N² − 1)) / 2 times.
3) N + N(N − 1) + N² + N²(N² − 1) + N·N²
Sorting Algorithms
Insertion Sort: The first element in the array is assumed to be sorted. Take the second element and store it
separately in key. Compare key with the first element; if the first element is greater than key, then key is placed
in front of the first element. Repeat this for each remaining element, inserting it into its correct position within
the sorted part of the array.
Pseudocode:
InsertionSort(arr)
    for i = 1 to length(arr) - 1
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key
            arr[j + 1] = arr[j]
            j = j - 1
        end while
        arr[j + 1] = key
    end for
end InsertionSort
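A minimal C implementation of the insertion sort pseudocode above; the driver in main and its sample array are only for demonstration.

#include <stdio.h>

/* Insertion sort: grow a sorted prefix by inserting each element
   into its correct position among the elements before it. */
void insertion_sort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {   /* shift larger elements right */
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;                  /* drop key into the gap */
    }
}

int main(void) {
    int a[] = {9, 5, 1, 4, 3};             /* sample data (assumed) */
    insertion_sort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);  /* 1 3 4 5 9 */
    printf("\n");
    return 0;
}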
Selection Sort: Set the first element as minimum and compare minimum with the second element. If the second
element is smaller than minimum, assign the second element as minimum. Compare minimum with the third
element; again, if the third element is smaller, assign the third element as minimum, otherwise do nothing. The
process goes on until the last element. After each iteration, minimum is placed at the front of the unsorted list.
Pseudocode:
SelectionSort(arr)
    for i = 0 to length(arr) - 2
        minIndex = i
        for j = i + 1 to length(arr) - 1
            if arr[j] < arr[minIndex]
                minIndex = j
            end if
        end for
        swap(arr[i], arr[minIndex])
    end for
end SelectionSort
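A minimal C implementation of the selection sort pseudocode above; the sample array is chosen arbitrarily for the demo.

#include <stdio.h>

/* Selection sort: on each pass, find the minimum of the unsorted part
   and swap it to the front of that part. */
void selection_sort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int min_index = i;
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[min_index])
                min_index = j;             /* remember the smallest so far */
        int tmp = arr[i];                  /* swap it into position i */
        arr[i] = arr[min_index];
        arr[min_index] = tmp;
    }
}

int main(void) {
    int a[] = {20, 12, 10, 15, 2};         /* sample data (assumed) */
    selection_sort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);  /* 2 10 12 15 20 */
    printf("\n");
    return 0;
}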
Quick Sort
3. Divide Subarrays
Pivot elements are again chosen for the left and the right sub-parts separately, and step 2 is repeated.
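As a sketch of the overall procedure, here is a minimal quicksort in C. It assumes the common last-element pivot and Lomuto-style partitioning, which may differ in details from the partition scheme used in the omitted figures; the sample array is arbitrary.

#include <stdio.h>

/* Partition around the last element as pivot (Lomuto scheme):
   smaller elements end up on the left, larger on the right. */
static int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
        }
    }
    int t = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = t;
    return i + 1;                          /* final position of the pivot */
}

/* Recursively pick a pivot for each sub-part and partition again. */
void quick_sort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quick_sort(arr, low, p - 1);       /* left sub-part */
        quick_sort(arr, p + 1, high);      /* right sub-part */
    }
}

int main(void) {
    int a[] = {8, 7, 2, 1, 0, 9, 6};       /* sample data (assumed) */
    quick_sort(a, 0, 6);
    for (int i = 0; i < 7; i++) printf("%d ", a[i]);  /* 0 1 2 6 7 8 9 */
    printf("\n");
    return 0;
}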
Radix Sort
1) Find the largest element in the array, i.e. max. Let X be the number of digits in max. X is calculated
because we have to go through all the significant places of all elements.
In the array [121, 432, 564, 23, 1, 45, 788], the largest number is 788, which has 3 digits. Therefore, the
loop should go up to the hundreds place (3 passes).
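A small C sketch of this first step, using the same sample array; the later per-digit counting-sort passes are only indicated in a comment.

#include <stdio.h>

int main(void) {
    int a[] = {121, 432, 564, 23, 1, 45, 788};
    int n = 7;

    /* Step 1: find the largest element ... */
    int max = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > max)
            max = a[i];

    /* ... and count its digits; radix sort needs one pass per digit. */
    int digits = 0;
    for (int m = max; m > 0; m /= 10)
        digits++;

    printf("max = %d, digits = %d\n", max, digits);   /* max = 788, digits = 3 */

    /* Each of the 'digits' passes would then stably sort the array by
       one digit (units, tens, hundreds), e.g. with counting sort. */
    return 0;
}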
Build max-heap
To build a max-heap from any tree, we can start heapifying each sub-tree from the bottom up, and we end
up with a max-heap after the function has been applied to all the elements, including the root element.
In the case of a complete tree stored in a 0-indexed array, the last non-leaf node is at index n/2 - 1. All nodes
after that are leaf nodes and thus do not need to be heapified.
So, we can build a max-heap by heapifying the non-leaf nodes from index n/2 - 1 down to the root.
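A minimal C sketch of this bottom-up construction, assuming a 0-indexed array; the sample array is arbitrary.

#include <stdio.h>

/* Sift the value at index i down until the subtree rooted at i
   satisfies the max-heap property (0-indexed array of size n). */
void heapify(int arr[], int n, int i) {
    int largest = i;
    int left = 2 * i + 1;
    int right = 2 * i + 2;
    if (left < n && arr[left] > arr[largest])
        largest = left;
    if (right < n && arr[right] > arr[largest])
        largest = right;
    if (largest != i) {
        int t = arr[i]; arr[i] = arr[largest]; arr[largest] = t;
        heapify(arr, n, largest);          /* continue down the affected subtree */
    }
}

/* Build a max-heap by heapifying every non-leaf node, bottom up. */
void build_max_heap(int arr[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);
}

int main(void) {
    int a[] = {1, 12, 9, 5, 6, 10};        /* sample data (assumed) */
    build_max_heap(a, 6);
    for (int i = 0; i < 6; i++) printf("%d ", a[i]);  /* a[0] is the maximum, 12 */
    printf("\n");
    return 0;
}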
*** If you have a large number of unsorted elements, which sorting algorithm will be more efficient between
Merge Sort and Insertion Sort? Justify your answer.
Merge sort will be more efficient than insertion sort for a large number of unsorted elements.
Merge sort is a divide-and-conquer algorithm that works by recursively splitting the array in half until each
piece contains a single element, and then merging the sorted halves back together. The worst-case time
complexity of merge sort is O(n log n), which is asymptotically better than the worst-case time complexity of
insertion sort, O(n²).
Insertion sort is an in-place algorithm that sorts an array by repeatedly inserting each element into its correct
position in the sorted part of the array. Its worst-case time complexity is O(n²), reached on reverse-sorted
input; it is fast, O(n), only on already sorted or nearly sorted arrays, which does not help in the general
unsorted case.
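For reference, a minimal merge sort sketch in C. It uses a small fixed scratch buffer for simplicity, so it assumes arrays of at most 64 elements; the sample data is arbitrary.

#include <stdio.h>
#include <string.h>

/* Merge two sorted halves arr[l..m] and arr[m+1..r] back together. */
static void merge(int arr[], int l, int m, int r) {
    int tmp[64];                           /* scratch buffer (assumes r - l + 1 <= 64) */
    int i = l, j = m + 1, k = 0;
    while (i <= m && j <= r)
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= m) tmp[k++] = arr[i++];
    while (j <= r) tmp[k++] = arr[j++];
    memcpy(&arr[l], tmp, k * sizeof(int));
}

/* Recursively split in half, sort each half, then merge them. */
void merge_sort(int arr[], int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2;
        merge_sort(arr, l, m);
        merge_sort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}

int main(void) {
    int a[] = {38, 27, 43, 3, 9, 82, 10};  /* sample data (assumed) */
    merge_sort(a, 0, 6);
    for (int i = 0; i < 7; i++) printf("%d ", a[i]);  /* 3 9 10 27 38 43 82 */
    printf("\n");
    return 0;
}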
Greedy Algorithms
Solution:
Our target is to select as many non-conflicting tasks as possible based on their start and end times; this can be
done in O(N log N) time using a simple greedy approach.
Step 1: Sort the tasks by ending time.
Step 2: Select the first task, then compare its ending time with the start time of the next task; keep a task only
if it starts no earlier than the previously selected task ends, and repeat.
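A minimal C sketch of this greedy selection. The task list here is hypothetical sample data, since the actual task list from the question is not reproduced in these notes.

#include <stdio.h>
#include <stdlib.h>

/* A task with a start and end time. */
typedef struct { int start, end; } Task;

static int by_end_time(const void *a, const void *b) {
    return ((const Task *)a)->end - ((const Task *)b)->end;
}

int main(void) {
    Task tasks[] = {{1, 4}, {3, 5}, {0, 6}, {5, 7}, {8, 9}, {5, 9}};  /* assumed sample tasks */
    int n = 6;

    /* Step 1: sort by ending time. */
    qsort(tasks, n, sizeof(Task), by_end_time);

    /* Step 2: greedily take every task that starts after the last accepted one ends. */
    int count = 1, last_end = tasks[0].end;
    printf("selected: [%d, %d]", tasks[0].start, tasks[0].end);
    for (int i = 1; i < n; i++) {
        if (tasks[i].start >= last_end) {
            printf(" [%d, %d]", tasks[i].start, tasks[i].end);
            last_end = tasks[i].end;
            count++;
        }
    }
    printf("\ntotal tasks selected: %d\n", count);   /* 3 tasks for this sample */
    return 0;
}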
Huffman Coding
Suppose the string below is to be sent over a network.
Message: BCCABBDDAECCBBAEDDCC
Now encode the message using a fixed-length code, and calculate how many bits are required to transfer
the message.
Solution (Fixed Length):
Given,
Message: BCCABBDDAECCBBAEDDCC
Message length = 20 characters; with 5 distinct characters, a fixed-length code needs 3 bits per character.
Message size = 20 x 3 = 60 bits

Character   Frequency   Code
A           3           000
B           5           001
C           6           010
D           4           011
E           2           100

Table size = (5 characters x 8 bits) + (5 codes x 3 bits) = 40 + 15 = 55 bits
Total size = message size + table size = 60 + 55 = 115 bits
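A small C sketch that reproduces this calculation by counting the character frequencies and the fixed-length encoding cost; the 3-bit code length is hard-coded because the alphabet here has exactly 5 symbols.

#include <stdio.h>
#include <string.h>

int main(void) {
    const char *msg = "BCCABBDDAECCBBAEDDCC";
    int len = (int)strlen(msg);                /* 20 characters */

    /* Count the frequency of each character A..E. */
    int freq[5] = {0};
    for (int i = 0; i < len; i++)
        freq[msg[i] - 'A']++;
    for (int c = 0; c < 5; c++)
        printf("%c : %d\n", 'A' + c, freq[c]);

    /* Fixed-length coding: 5 distinct symbols need 3 bits each. */
    int bits_per_symbol = 3;
    int message_bits = len * bits_per_symbol;          /* 20 * 3 = 60 */
    int table_bits = 5 * 8 + 5 * bits_per_symbol;      /* 40 + 15 = 55 */
    printf("message = %d bits, table = %d bits, total = %d bits\n",
           message_bits, table_bits, message_bits + table_bits);   /* 115 bits */
    return 0;
}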
Knapsack Problem
Given N = 4 items and knapsack capacity W = 5:

Item          1   2   3   4
Weight (w)    3   2   5   4
Profit (p)    4   3   6   5

DP table (rows: items considered; columns: remaining capacity w = 0 ... 5):

           w=0   1   2   3   4   5
Item 0       0   0   0   0   0   0
Item 1       0   0   0   4   4   4
Item 2       0   0   3   4   4   7
Item 3       0   0   3   4   4   7
Item 4       0   0   3   4   5   7

Maximum profit = 7
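A minimal C sketch of the 0/1 knapsack dynamic program that produces this table.

#include <stdio.h>

#define N 4
#define W 5

static int max(int a, int b) { return a > b ? a : b; }

int main(void) {
    int wt[N + 1] = {0, 3, 2, 5, 4};   /* weights, 1-indexed */
    int p[N + 1]  = {0, 4, 3, 6, 5};   /* profits, 1-indexed */
    int dp[N + 1][W + 1] = {0};        /* dp[i][w] = best profit using items 1..i, capacity w */

    for (int i = 1; i <= N; i++) {
        for (int w = 0; w <= W; w++) {
            dp[i][w] = dp[i - 1][w];                      /* skip item i */
            if (wt[i] <= w)                               /* or take item i if it fits */
                dp[i][w] = max(dp[i][w], p[i] + dp[i - 1][w - wt[i]]);
        }
    }
    printf("Maximum profit = %d\n", dp[N][W]);            /* prints 7 */
    return 0;
}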
Longest Common Subsequence (LCS)
X = PRESIDENT
Y = PROVIDENCE

LCS table (rows: prefixes of X; columns: prefixes of Y):

        ε   P   R   O   V   I   D   E   N   C   E
   ε    0   0   0   0   0   0   0   0   0   0   0
   P    0   1   1   1   1   1   1   1   1   1   1
   R    0   1   2   2   2   2   2   2   2   2   2
   E    0   1   2   2   2   2   2   3   3   3   3
   S    0   1   2   2   2   2   2   3   3   3   3
   I    0   1   2   2   2   3   3   3   3   3   3
   D    0   1   2   2   2   3   4   4   4   4   4
   E    0   1   2   2   2   3   4   5   5   5   5
   N    0   1   2   2   2   3   4   5   6   6   6
   T    0   1   2   2   2   3   4   5   6   6   6

Length of the LCS = 6 (one such subsequence is PRIDEN).
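A minimal C sketch of the LCS dynamic program that fills this table.

#include <stdio.h>
#include <string.h>

static int max(int a, int b) { return a > b ? a : b; }

int main(void) {
    const char *X = "PRESIDENT";
    const char *Y = "PROVIDENCE";
    int m = (int)strlen(X), n = (int)strlen(Y);
    int dp[16][16] = {0};              /* dp[i][j] = LCS length of X[0..i) and Y[0..j) */

    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            if (X[i - 1] == Y[j - 1])
                dp[i][j] = dp[i - 1][j - 1] + 1;             /* characters match: extend the LCS */
            else
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1]);  /* drop one character */
        }
    }
    printf("LCS length = %d\n", dp[m][n]);                   /* prints 6 */
    return 0;
}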