Unit II(Searching and Sorting)

The document provides an overview of searching and sorting algorithms in computer programming, detailing their importance in data organization and manipulation. It categorizes sorting algorithms into comparison-based and non-comparison-based, discussing specific algorithms like Bubble Sort, Selection Sort, Insertion Sort, and Merge Sort, along with their complexities, advantages, and disadvantages. The document aims to equip readers with the knowledge to choose appropriate algorithms for optimizing program performance.

Uploaded by Roopa Sk

Searching and Sorting Algorithms

In computer programming, searching and sorting are essential techniques for organizing and
manipulating data. Searching involves finding a specific element within a data set, while
sorting involves arranging the elements of a data set in a specific order. Both techniques play a
vital role in the performance and efficiency of computer programs.

There are various searching and sorting algorithms available in programming, each with its own strengths and weaknesses depending on the size, type, and complexity of the data. In this article, we will provide a comprehensive guide to the most commonly used searching and sorting techniques in programming. We will discuss the basics of each technique, their time and space complexities, and when to use them to achieve optimal performance. By the end of this article, you will have a better understanding of how to choose the appropriate searching and sorting techniques to optimize the performance of your programs.


Types of Sorting Algorithms

Sorting algorithms can be broadly classified into two categories: comparison-based and non-comparison-based algorithms.

Comparison-based algorithms compare elements in the input to determine their order, and thus their time complexity is lower bounded by the number of comparisons needed to correctly order the input. All the sorting algorithms discussed in this article, including Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort, and Heap Sort, are comparison-based algorithms.

Non-comparison-based algorithms, on the other hand, do not rely on element comparisons to determine their order. These algorithms are often more specialized and are used in specific situations where the input has certain properties. For example, counting sort is a non-comparison-based algorithm that can be used to sort inputs consisting of integers with a small range.

Another way to classify sorting algorithms is based on their stability. A sorting algorithm is said to be stable if it preserves the relative order of equal elements in the input. Merge Sort and Insertion Sort are stable algorithms, while Quick Sort and Heap Sort are unstable algorithms.
In addition to these broad classifications, there are many variations and extensions of sorting algorithms, such as Shell Sort, Radix Sort, and Bucket Sort, each designed to optimize for specific properties of the input data.

1. Bubble Sort
Bubble sort is a simple sorting algorithm that works by repeatedly
swapping adjacent elements if they are in the wrong order. The
algorithm starts at the beginning of the list and compares each pair
of adjacent elements. If they are not in the correct order, they are
swapped, and the algorithm continues until no more swaps are
needed. Bubble sort has a time complexity of O(n²) and is not
suitable for large datasets.

 We sort the array using multiple passes. After the first pass, the maximum element moves to the end (its correct position). Similarly, after the second pass, the second-largest element moves to the second-to-last position, and so on.
 In every pass, we process only those elements that have not yet moved to their correct positions. After k passes, the largest k elements must have been moved to the last k positions.
 In a pass, we consider the remaining elements, compare all adjacent pairs, and swap them if a larger element comes before a smaller one. Repeating this moves the largest of the remaining elements to its correct position.

C program: optimized implementation of Bubble Sort


#include <stdbool.h>
#include <stdio.h>

void swap(int* xp, int* yp) {
    int temp = *xp;
    *xp = *yp;
    *yp = temp;
}

// An optimized version of Bubble Sort
void bubbleSort(int arr[], int n) {
    int i, j;
    bool swapped;
    for (i = 0; i < n - 1; i++) {
        swapped = false;
        for (j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                swap(&arr[j], &arr[j + 1]);
                swapped = true;
            }
        }
        // If no two elements were swapped by the inner loop,
        // the array is already sorted, so stop early
        if (swapped == false)
            break;
    }
}

// Function to print an array
void printArray(int arr[], int size) {
    int i;
    for (i = 0; i < size; i++)
        printf("%d ", arr[i]);
}

int main() {
    int arr[] = { 64, 34, 25, 12, 22, 11, 90 };
    int n = sizeof(arr) / sizeof(arr[0]);
    bubbleSort(arr, n);
    printf("Sorted array: \n");
    printArray(arr, n);
    return 0;
}

Output
Sorted array:
11 12 22 25 34 64 90

Complexity Analysis of Bubble Sort:


Time Complexity: O(n²)
Auxiliary Space: O(1)

Advantages of Bubble Sort:


 Bubble sort is easy to understand and implement.
 It does not require any additional memory space.
 It is a stable sorting algorithm, meaning that elements with the same key value maintain their relative order in the sorted output.
Disadvantages of Bubble Sort:
 Bubble sort has a time complexity of O(n²), which makes it very slow for large data sets.
 Bubble sort has few real-world applications; it is mostly used in academia to teach different approaches to sorting.

2. Selection Sort

Selection sort is another simple sorting algorithm that works by repeatedly finding the smallest
element from the unsorted part of the list and swapping it with the first element. The algorithm
then moves on to the second element and finds the smallest element from the remaining
unsorted elements and swaps it with the second element, and so on until the entire list is
sorted. Selection sort also has a time complexity of O(n²) and is not suitable for large datasets.
1. First, we find the smallest element and swap it with the first element. This way we get the smallest element at its correct position.
2. Then we find the smallest among the remaining elements (i.e., the second smallest) and swap it with the second element.
3. We keep doing this until all elements are moved to their correct positions.

// C program for implementation of selection sort


#include <stdio.h>

void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {

        // Assume the current position holds
        // the minimum element
        int min_idx = i;

        // Iterate through the unsorted portion
        // to find the actual minimum
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[min_idx]) {

                // Update min_idx if a smaller element is found
                min_idx = j;
            }
        }

        // Move minimum element to its
        // correct position
        int temp = arr[i];
        arr[i] = arr[min_idx];
        arr[min_idx] = temp;
    }
}

void printArray(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

int main() {
    int arr[] = {64, 25, 12, 22, 11};
    int n = sizeof(arr) / sizeof(arr[0]);

    printf("Original array: ");
    printArray(arr, n);

    selectionSort(arr, n);

    printf("Sorted array: ");
    printArray(arr, n);

    return 0;
}

Output
Original array: 64 25 12 22 11
Sorted array: 11 12 22 25 64
Complexity Analysis of Selection Sort
Time Complexity: O(n²), as there are two nested loops:
 One loop to select an element of the array one by one = O(n)
 Another loop to compare that element with every other array element = O(n)
 Therefore, the overall complexity = O(n) × O(n) = O(n²)
Auxiliary Space: O(1), as the only extra memory used is for temporary variables.

Advantages of Selection Sort


 Easy to understand and implement, making it ideal for teaching basic sorting concepts.
 Requires only a constant O(1) extra memory space.
 It requires fewer swaps (memory writes) than many other standard algorithms; only Cycle Sort beats it in terms of memory writes. It can therefore be a simple algorithm of choice when memory writes are costly.

Disadvantages of the Selection Sort


 Selection sort has a time complexity of O(n²), which makes it slower than algorithms like Quick Sort or Merge Sort.
 It does not maintain the relative order of equal elements, which means it is not stable.

Applications of Selection Sort


 Perfect for teaching fundamental sorting mechanisms and algorithm design.
 Suitable for small lists where the overhead of more complex algorithms isn't justified, and for cases where memory writes are costly, since it requires fewer memory writes than other standard sorting algorithms.
 The Heap Sort algorithm is based on Selection Sort.

3. Insertion Sort

Insertion sort is a simple sorting algorithm that works by building the final sorted list one
element at a time. The algorithm starts with the second element of the list and compares it with
the first element. If the second element is smaller than the first element, it is swapped with the
first element. The algorithm then moves on to the third element and compares it with the
second and first elements, swapping it with the appropriate element if needed. The algorithm
continues in this manner until the entire list is sorted. Insertion sort has a time complexity of
O(n²) but is more efficient than bubble sort and selection sort for small datasets.
 We start with the second element of the array, as the first element is assumed to be sorted.
 Compare the second element with the first element; if the second element is smaller, swap them.
 Move to the third element, compare it with the first two elements, and put it in its correct position.
 Repeat until the entire array is sorted.

// C program for implementation of Insertion Sort


#include <stdio.h>

/* Function to sort array using insertion sort */
void insertionSort(int arr[], int n)
{
    for (int i = 1; i < n; ++i) {
        int key = arr[i];
        int j = i - 1;

        /* Move elements of arr[0..i-1], that are
           greater than key, to one position ahead
           of their current position */
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}

/* A utility function to print array of size n */
void printArray(int arr[], int n)
{
    for (int i = 0; i < n; ++i)
        printf("%d ", arr[i]);
    printf("\n");
}

// Driver method
int main()
{
    int arr[] = { 12, 11, 13, 5, 6 };
    int n = sizeof(arr) / sizeof(arr[0]);

    insertionSort(arr, n);
    printArray(arr, n);

    return 0;
}

Output
5 6 11 12 13
Complexity Analysis of Insertion Sort
Time Complexity
 Best case: O(n), if the list is already sorted, where n is the number of elements in the list.
 Average case: O(n²), if the list is randomly ordered.
 Worst case: O(n²), if the list is in reverse order.
Space Complexity
 Auxiliary Space: O(1), Insertion sort requires O(1) additional space, making it a
space-efficient sorting algorithm.

Advantages and Disadvantages of Insertion Sort


Advantages
 Simple and easy to implement.
 Stable sorting algorithm.
 Efficient for small lists and nearly sorted lists.
 Space-efficient, as it is an in-place algorithm.
 Adaptive: the number of shifts is directly proportional to the number of inversions. For example, no shifting happens for an already sorted array, and the sort takes only O(n) time.

Disadvantages
 Inefficient for large lists.
 Not as efficient as other sorting algorithms (e.g., merge sort, quick sort) for most cases.

Applications of Insertion Sort


Insertion sort is commonly used in situations where:
 The list is small or nearly sorted.
 Simplicity and stability are important.
 It is needed as a subroutine in Bucket Sort.
 The array is already almost sorted (very few inversions).
Since insertion sort is well suited to small arrays, it is used in hybrid sorting algorithms along with more efficient algorithms like Quick Sort and Merge Sort: when the subarray size becomes small, these recursive algorithms switch to insertion sort. For example, IntroSort and TimSort use insertion sort.

4. Merge Sort
Merge sort is a divide-and-conquer algorithm that works by dividing the list into smaller sub-
lists, sorting them, and then merging them back together. The algorithm recursively divides
the list in half until each sub-list contains only one element. It then merges the sub-lists back
together, comparing the elements in each sub-list and merging them in the correct order.
Merge sort has a time complexity of O(n log n) and is more efficient than the previous
algorithms for large datasets.
How does Merge Sort work?
Here’s a step-by-step explanation of how merge sort works:
1. Divide: Recursively divide the list or array into two halves until it can no longer be divided.
2. Conquer: Each subarray is sorted individually using the merge sort algorithm.
3. Merge: The sorted subarrays are merged back together in sorted order. The process
continues until all elements from both subarrays have been merged.

// C program for Merge Sort


#include <stdio.h>
#include <stdlib.h>

// Merges two subarrays of arr[].
// First subarray is arr[l..m]
// Second subarray is arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
    int i, j, k;
    int n1 = m - l + 1;
    int n2 = r - m;

    // Create temp arrays
    int L[n1], R[n2];

    // Copy data to temp arrays L[] and R[]
    for (i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];

    // Merge the temp arrays back into arr[l..r]
    i = 0;
    j = 0;
    k = l;
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k] = L[i];
            i++;
        }
        else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }

    // Copy the remaining elements of L[],
    // if there are any
    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }

    // Copy the remaining elements of R[],
    // if there are any
    while (j < n2) {
        arr[k] = R[j];
        j++;
        k++;
    }
}

// l is the left index and r is the right index of the
// sub-array of arr to be sorted
void mergeSort(int arr[], int l, int r)
{
    if (l < r) {
        int m = l + (r - l) / 2;

        // Sort first and second halves
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);

        merge(arr, l, m, r);
    }
}

// Function to print an array
void printArray(int A[], int size)
{
    int i;
    for (i = 0; i < size; i++)
        printf("%d ", A[i]);
    printf("\n");
}

// Driver code
int main()
{
    int arr[] = { 12, 11, 13, 5, 6, 7 };
    int arr_size = sizeof(arr) / sizeof(arr[0]);

    printf("Given array is \n");
    printArray(arr, arr_size);

    mergeSort(arr, 0, arr_size - 1);

    printf("\nSorted array is \n");
    printArray(arr, arr_size);
    return 0;
}

Output
Given array is
12 11 13 5 6 7

Sorted array is
5 6 7 11 12 13

Complexity Analysis of Merge Sort


 Time Complexity:
o Best Case: O(n log n), When the array is already sorted or nearly sorted.
o Average Case: O(n log n), When the array is randomly ordered.
o Worst Case: O(n log n), When the array is sorted in reverse order.
 Auxiliary Space: O(n), Additional space is required for the temporary array used
during merging.

Quick Sort, covered in the next section, also works on the principle of divide and conquer, breaking the problem down into smaller sub-problems. There are mainly four steps in that algorithm:
1. Choose a Pivot: Select an element from the array as the pivot. The choice of pivot can vary (e.g., first element, last element, random element, or median).
2. Partition the Array: Rearrange the array around the pivot. After partitioning, all elements smaller than the pivot will be on its left, and all elements greater than the pivot will be on its right. The pivot is then in its correct position, and we obtain the index of the pivot.
3. Recursively Call: Recursively apply the same process to the two partitioned sub-arrays (left and right of the pivot).
4. Base Case: The recursion stops when there is only one element left in the sub-array, as a single element is already sorted.

Advantages of Merge Sort

 Stability: Merge Sort is a stable sorting algorithm, which means it maintains the relative order of equal elements in the input array.
 Guaranteed worst-case performance: Merge Sort has a worst-case time complexity of O(n log n), which means it performs well even on large datasets.
 Simple to implement: The divide-and-conquer approach is straightforward.
 Naturally parallel: Subarrays are merged independently, which makes Merge Sort suitable for parallel processing.

Disadvantages of Merge Sort
 Space complexity: Merge Sort requires additional memory to store the merged sub-arrays during the sorting process.
 Not in-place: Merge Sort is not an in-place sorting algorithm; it requires additional memory to store the sorted data. This can be a disadvantage in applications where memory usage is a concern.
 Merge Sort is generally slower than Quick Sort, as Quick Sort is more cache-friendly because it works in-place.

5. Quick Sort

Quick Sort is another divide-and-conquer algorithm that works by selecting a pivot element from the list and partitioning the other elements into two sub-lists, according to whether they are less than or greater than the pivot element. The algorithm then recursively sorts the sub-lists, using the same pivot selection and partitioning process, until the entire list is sorted. Quick Sort has an average time complexity of O(n log n) and is often faster in practice than Merge Sort, although its worst case is O(n²).
// C Program to illustrate Quick Sort
#include <stdio.h>

void swap(int* a, int* b);

// Partition function
int partition(int arr[], int low, int high) {

    // Choose the pivot
    int pivot = arr[high];

    // Index of smaller element; indicates
    // the right position of pivot found so far
    int i = low - 1;

    // Traverse arr[low..high] and move all smaller
    // elements to the left side. Elements from low to
    // i are smaller after every iteration
    for (int j = low; j <= high - 1; j++) {
        if (arr[j] < pivot) {
            i++;
            swap(&arr[i], &arr[j]);
        }
    }

    // Move pivot after smaller elements and
    // return its position
    swap(&arr[i + 1], &arr[high]);
    return i + 1;
}

// The QuickSort function implementation
void quickSort(int arr[], int low, int high) {
    if (low < high) {

        // pi is the partition return index of pivot
        int pi = partition(arr, low, high);

        // Recursion calls for smaller elements
        // and greater or equal elements
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

void swap(int* a, int* b) {
    int t = *a;
    *a = *b;
    *b = t;
}

int main() {
    int arr[] = {10, 7, 8, 9, 1, 5};
    int n = sizeof(arr) / sizeof(arr[0]);

    quickSort(arr, 0, n - 1);

    printf("Sorted Array\n");
    for (int i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }

    return 0;
}

Output
Sorted Array
1 5 7 8 9 10

Complexity Analysis of Quick Sort


Time Complexity:
 Best Case: Ω(n log n), occurs when the pivot element divides the array into two equal halves.
 Average Case: Θ(n log n); on average, the pivot divides the array into two parts, but not necessarily equal ones.
 Worst Case: O(n²), occurs when the smallest or largest element is always chosen as the pivot (e.g., on already sorted arrays).
Auxiliary Space: O(n) in the worst case, due to the recursive call stack (O(log n) on average).

Advantages of Quick Sort


 It is a divide-and-conquer algorithm that makes it easier to solve problems.
 It is efficient on large data sets.
 It has a low overhead, as it only requires a small amount of memory to function.
 It is Cache Friendly as we work on the same array to sort and do not copy data to any
auxiliary array.
 Fastest general purpose algorithm for large data when stability is not required.
 It is tail recursive, so tail-call optimization can be applied.

Disadvantages of Quick Sort


 It has a worst-case time complexity of O(n²), which occurs when the pivot is chosen poorly.
 It is not a good choice for small data sets.
 It is not a stable sort: if two elements have the same key, their relative order may not be preserved in the sorted output, because elements are swapped according to the pivot's position without considering their original positions.
6. Heap Sort

Heap sort is a comparison-based sorting algorithm that works by first building a binary heap from the list of elements and then repeatedly extracting the maximum element from the heap and adding it to the sorted list. The heap is restructured after each extraction to maintain the heap property. Heap sort has a time complexity of O(n log n) and is more efficient than the simple O(n²) algorithms above for large datasets.

Heap Sort Algorithm


First, convert the array into a max heap using heapify. Note that this happens in-place: the array elements are rearranged to follow the heap property. Then, one by one, delete the root node of the max-heap, replace it with the last node, and heapify. Repeat this process while the size of the heap is greater than 1.
 Rearrange the array elements so that they form a max heap.
 Repeat the following steps until the heap contains only one element:
o Swap the root element of the heap (which is the largest element in the current heap) with the last element of the heap.
o Remove the last element of the heap (which is now in its correct position). We mainly reduce the heap size and do not remove the element from the actual array.
o Heapify the remaining elements of the heap.
 Finally, we get the sorted array.
// C Program to illustrate the usage of Heap sort

#include <stdio.h>

// To heapify a subtree rooted with node i
// which is an index in arr[].
void heapify(int arr[], int n, int i) {

    // Initialize largest as root
    int largest = i;

    // left index = 2*i + 1
    int l = 2 * i + 1;

    // right index = 2*i + 2
    int r = 2 * i + 2;

    // If left child is larger than root
    if (l < n && arr[l] > arr[largest]) {
        largest = l;
    }

    // If right child is larger than largest so far
    if (r < n && arr[r] > arr[largest]) {
        largest = r;
    }

    // If largest is not root
    if (largest != i) {
        int temp = arr[i];
        arr[i] = arr[largest];
        arr[largest] = temp;

        // Recursively heapify the affected sub-tree
        heapify(arr, n, largest);
    }
}

// Main function to do heap sort
void heapSort(int arr[], int n) {

    // Build heap (rearrange array)
    for (int i = n / 2 - 1; i >= 0; i--) {
        heapify(arr, n, i);
    }

    // One by one extract an element from heap
    for (int i = n - 1; i > 0; i--) {

        // Move current root to end
        int temp = arr[0];
        arr[0] = arr[i];
        arr[i] = temp;

        // Call max heapify on the reduced heap
        heapify(arr, i, 0);
    }
}

// A utility function to print array of size n
void printArray(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

// Driver's code
int main() {
    int arr[] = {9, 4, 3, 8, 10, 2, 5};
    int n = sizeof(arr) / sizeof(arr[0]);

    heapSort(arr, n);

    printf("Sorted array is \n");
    printArray(arr, n);
    return 0;
}

Output
Sorted array is
2 3 4 5 8 9 10

Complexity Analysis of Heap Sort


Time Complexity: O(n log n)
Auxiliary Space: O(log n), due to the recursive call stack. However, auxiliary space can be O(1) for an iterative implementation of heapify.

Advantages of Heap Sort


 Efficient time complexity: Heap Sort has a time complexity of O(n log n) in all cases. This makes it efficient for sorting large datasets. The log n factor comes from the height of the binary heap, and it ensures that the algorithm maintains good performance even with a large number of elements.
 Memory usage: Memory usage can be minimal (by writing an iterative heapify() instead of a recursive one). Apart from what is necessary to hold the initial list of items to be sorted, it needs no additional memory space to work.
 Simplicity: It is simpler to understand than some equally efficient sorting algorithms because it does not require concepts such as merging or partitioning.
Disadvantages of Heap Sort
 Costly in practice: even though the time complexity is O(n log n) for both, Heap Sort's constant factors are higher than Merge Sort's.
 Unstable: Heap sort is unstable; it might rearrange the relative order of equal elements.
 Inefficient for real-world use: because of these high constant factors, Heap Sort is rarely the fastest choice even though its asymptotic complexity is optimal.

Comparison

Here is a comparison table for the six sorting algorithms discussed above:

Algorithm        Best         Average      Worst        Auxiliary Space       Stable
Bubble Sort      O(n)         O(n²)        O(n²)        O(1)                  Yes
Selection Sort   O(n²)        O(n²)        O(n²)        O(1)                  No
Insertion Sort   O(n)         O(n²)        O(n²)        O(1)                  Yes
Merge Sort       O(n log n)   O(n log n)   O(n log n)   O(n)                  Yes
Quick Sort       O(n log n)   O(n log n)   O(n²)        O(log n) avg, O(n) worst   No
Heap Sort        O(n log n)   O(n log n)   O(n log n)   O(1) (iterative)      No

From this table, we can see that Merge Sort and Heap Sort are the most efficient algorithms for large datasets, with a time complexity of O(n log n) in the worst case. Quick Sort is also efficient for large datasets, but has a worst-case time complexity of O(n²). Bubble Sort, Selection Sort, and Insertion Sort are less efficient for larger datasets, with a time complexity of O(n²) in the worst case.

Additionally, we can see that Merge Sort is stable, meaning that it preserves the relative order of equal elements in the input, while Quick Sort and Heap Sort are unstable, meaning that they may change the relative order of equal elements.

Finally, we can see that Merge Sort requires additional memory to store the sub-arrays during the recursion, while Heap Sort is the most space-efficient algorithm, using only O(1) extra space (with an iterative heapify).
Searching Techniques
There are several types of searching algorithms in computer programming, each with its own strengths and weaknesses depending on the size, type, and complexity of the data. The most common types of searching algorithms include:

1. Linear search is a simple and basic searching algorithm used to find a target value in a data set. It is also known as sequential search and involves traversing the data set from start to end, comparing each element with the target value until a match is found or the end of the data set is reached. Linear search is suitable for small and unordered data sets but can be inefficient for large and ordered data sets.

The time complexity of linear search is O(n), where n is the size of the data set. In the worst-case scenario, the target value is not found in the data set, and the algorithm has to traverse the entire data set, making n comparisons. In the best-case scenario, the target value is found in the first element, and only one comparison is needed.

Linear search can be implemented using a loop or recursion in most programming languages. Its simplicity and ease of implementation make it a useful algorithm for small and simple data sets. However, for larger and ordered data sets, other searching algorithms such as binary search or interpolation search may be more efficient.

2. Binary search is a searching algorithm used to find a target value in an ordered data set. It works by repeatedly dividing the search space in half until the target value is found or it is determined that the target value does not exist in the data set.

The algorithm starts by comparing the target value with the middle element of the data set. If the middle element is the target value, the search is successful. If the middle element is greater than the target value, the search continues in the lower half of the data set. If the middle element is smaller than the target value, the search continues in the upper half of the data set. The process repeats until the target value is found or it is determined that the target value does not exist in the data set.

Binary search has a time complexity of O(log n), where n is the size of the data set. In the worst-case scenario, the target value is not found in the data set, and the algorithm has to divide the search space log n times, making about log n comparisons. In the best-case scenario, the target value is found at the middle element, and only one comparison is needed.

Binary search is an efficient algorithm for large and ordered data sets. However, it requires the data set to be sorted in ascending or descending order beforehand. It can be implemented using a loop or recursion in most programming languages.
