
Logarithmic vs Double Logarithmic Time Complexity

Last Updated : 12 Apr, 2025

Logarithmic (O(log n)) and double logarithmic (O(log log n)) are two important time complexities in computational complexity. Like other time complexities, they help us understand how an algorithm scales with input size, which is essential for optimizing performance when algorithms must process large amounts of data.

Following are some applications in which logarithmic and double logarithmic complexities are used:

  • Binary search runs in logarithmic (O(log n)) time. It is used in databases (indexing) and search engines.
  • Union-Find with path compression has close to double logarithmic (O(log log n)) amortized time complexity. It is used in network routing and parallel computing.
  • Van Emde Boas trees answer queries in O(log log n) time. They are used in real-time systems and large-scale data structures.
  • Operations on AVL trees and B-trees take O(log n) time. This helps in database indexing and handling large amounts of data.
[Figure: Growth of O(log n) (red) vs. O(log log n) (green)]

Logarithmic Time Complexity (O(log n))

If an algorithm has logarithmic time complexity, its running time is proportional to the logarithm (or log) of the input size. As the input size increases, the time required to execute the algorithm also increases, but very slowly (logarithmically). Logarithmic functions are increasing functions, but their rate of growth is very slow. This makes any algorithm with this complexity highly efficient for large amounts of data.

Mathematical Intuition Behind O(log n)

Let's say we start with an input of size n. If an algorithm halves the input in every iteration or recursive call, then the number of operations k needed satisfies:

2^k = n, which gives k = log2(n)

This means:

  • For n = 8, the algorithm takes log2(8) = 3 steps.
  • For n = 1,000,000, it only takes about log2(1,000,000) ≈ 20 steps.

This is much more efficient than linear (O(n)) or quadratic (O(n^2)) time, especially for large datasets.
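As a quick illustration, here is a minimal Python sketch (the function name halving_steps is just an illustrative choice) that counts how many halvings it takes to shrink an input of size n down to 1:

def halving_steps(n):
    """Count how many times n can be halved before reaching 1.

    Each halving is one "step" of a divide-in-half algorithm,
    so the count is about log2(n).
    """
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(8))          # 3  (log2(8) = 3)
print(halving_steps(1_000_000))  # 19 (log2(1,000,000) is about 19.93)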

In the diagram below, we can see that logarithmic functions increase at a very slow rate.

[Figure: Logarithmic Function]

Example

To understand O(log n), let's take one of the most classic examples of all: the dictionary problem. The task is to find the word "program" in a dictionary. If you open the dictionary and look for the word on every page from 1 through n, that is an example of O(n) time complexity, and clearly not an efficient way to search for a word in a dictionary.

To search efficiently, we instead open the book roughly to the center page and check whether our word, which starts with the letter P, falls before or after the words on the currently selected page. If "program" should come after them, we find the center page between the current page and the last page. We repeat this step again and again until we reach the single page that contains our desired word, i.e., "program".

So essentially, we kept dividing our problem in half at every step until we found the result. This is what we mean by O(log n): the time grows by a constant amount each time N is multiplied by a constant factor. For example, if it takes 5 seconds to process 100 elements, it might take only about 6 seconds to process 1,000 elements.
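The same halving idea, applied to a sorted list instead of a dictionary, is classic binary search. Below is a minimal Python sketch (the word list is a made-up stand-in for the dictionary):

def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent.

    Each iteration halves the remaining search range, so the loop
    runs at most about log2(len(arr)) times: O(log n).
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

words = ["apple", "banana", "cherry", "program", "zebra"]  # a tiny sorted "dictionary"
print(binary_search(words, "program"))  # 3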

Double Logarithmic Time Complexity (O(log log n))

If an algorithm has double logarithmic time complexity, its running time is proportional to the logarithm of the logarithm of the input size. As with logarithmic time complexity, the execution time increases as the input size grows, but the increase is extremely slow, even slower than logarithmic O(log n) time. This makes such algorithms exceptionally efficient.

Mathematical Intuition Behind O(log log n)

To understand O(log log n), consider the nested logarithmic growth:

T(n) = log(log n)

This means the number of operations increases very slowly even with large inputs.

  • For n = 16, log2(log2(16)) = 2 steps
  • For n = 1,048,576 (which is 2^20), log2(log2(1,048,576)) ≈ 4.32 steps

So even for very large input sizes, the number of steps remains tiny.
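To see this numerically, here is a small Python snippet (a throwaway illustration) that prints log2(log2(n)) for a few input sizes:

import math

# n = 2^4, 2^20, and 2^64: even the largest takes only ~6 steps
for exp in (4, 20, 64):
    n = 2 ** exp
    print(f"n = {n}: about {math.log2(math.log2(n)):.2f} steps")

# n = 16: about 2.00 steps
# n = 1048576: about 4.32 steps
# n = 18446744073709551616: about 6.00 steps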

[Figure: Double Logarithmic Function]

Example

Imagine we have to find a number in a huge sorted list of size 2^(2^k). A standard binary search halves the list at each comparison, which takes O(log n) time. But suppose each step can shrink the problem much faster: instead of cutting the size in half, it cuts the size from n down to √n, effectively halving the number of bits needed to index into the list. This is exactly how Van Emde Boas trees answer their queries.

At each step, the exponent of the problem size is halved: after one step the size is n^(1/2), after two steps n^(1/4), and after k steps n^(1/2^k). The size drops to a constant once 2^k ≈ log2(n), i.e., after about log2(log2(n)) steps. This is why we get O(log log n) time complexity.
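A minimal Python sketch of this square-root shrinking (an illustrative toy, not a real search routine) counts the steps:

import math

def sqrt_reduction_steps(n):
    """Count steps of n -> sqrt(n) until the size drops to 2.

    Each step halves the exponent of n, so the count is
    about log2(log2(n)): double logarithmic.
    """
    steps = 0
    while n > 2:
        n = math.sqrt(n)
        steps += 1
    return steps

print(sqrt_reduction_steps(2 ** 16))  # 4 (log2(log2(2^16)) = log2(16) = 4)
print(sqrt_reduction_steps(2 ** 64))  # 6 (log2(64) = 6)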

If an algorithm with double logarithmic complexity takes 4 seconds to search through 100 elements, it might take only about 5 seconds to search through a million: the time creeps up slowly even as the input size grows exponentially.

If logarithmic complexity is like searching for a word in a dictionary by halving the pages, double logarithmic complexity is like having a smart table of contents, where each section points you straight to the subsections that actually contain the relevant words, so every lookup narrows things down far faster.

