# Asymptotic Analysis Explained with Pokémon: A Deep Dive into Complexity Analysis

This is the complexity of the Merge Sort algorithm.

If we fit the recurrence relation for Merge Sort into the Master method, we get: a = 2, b = 2, and f(n) = O(n^1). Hence, c = 1 = log_2(2). This fits the criterion for Case 3 described above.

The amount of work done is the same on all the levels, as can be verified from the figure above.

Thus, the time complexity is the work done at any level multiplied by the total number of levels (the height of the tree): O(N) work per level times log_2(N) levels, i.e. O(N log N).

We have analyzed the time complexity of the Merge Sort algorithm in two different ways, namely the Recursion Tree and the Master Method.

We had to use these different techniques since the merge sort algorithm is a recursive algorithm and the classical asymptotic analysis approaches we had seen earlier for loops were of no use here.

Space Complexity: As for the space complexity, we don’t have to use any complicated techniques and hence, the analysis is much simpler.

The main space-occupying data structure in the Merge Sort algorithm is the temp buffer array that is used during the merge procedure.

This array is initialized once and the size of this array is N.
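A minimal sketch of this idea (assuming the classic top-down variant; the function names here are illustrative, not the article's original listing) shows how a single N-sized buffer can be allocated once and reused by every merge:

```python
def merge_sort(arr):
    """Top-down merge sort that reuses one shared temp buffer of size N."""
    temp = [0] * len(arr)  # the single O(N) auxiliary buffer

    def sort(lo, hi):  # sorts arr[lo:hi] in place
        if hi - lo <= 1:
            return
        mid = (lo + hi) // 2
        sort(lo, mid)
        sort(mid, hi)
        # merge the two sorted halves into temp, then copy back
        i, j, k = lo, mid, lo
        while i < mid and j < hi:
            if arr[i] <= arr[j]:
                temp[k] = arr[i]; i += 1
            else:
                temp[k] = arr[j]; j += 1
            k += 1
        while i < mid:
            temp[k] = arr[i]; i += 1; k += 1
        while j < hi:
            temp[k] = arr[j]; j += 1; k += 1
        arr[lo:hi] = temp[lo:hi]

    sort(0, len(arr))
    return arr
```

Since `temp` is created only once, the buffer contributes a flat N to the space usage no matter how many merge calls happen.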

Another data structure that occupies space is the recursion stack.

Essentially, the maximum depth of the recursion determines the size of the recursion stack.

As we’ve seen in the recursion tree representation, the maximum number of calls on the stack at any one time is essentially the height of the recursion tree.

The height of the recursion tree was log_2(N) and hence, the size of the recursion stack will also be log_2(N) at max.

Hence, the total space complexity for merge sort would be N + log_2(N) = O(N).

## Binary Search

Remember our friend Pikachu and his search for a Pokemon of a specific power?

Poor little Pikachu had 1000 Pokemons at his disposal and he had to find the one Pokemon with a specific power.

Yeah, Pikachu is very choosy about his opponents.

Which one will it be?

His requirements keep on changing day in and day out, and he certainly cannot go and check with each and every Pokemon every time his requirements change, i.e. he cannot perform a Linear Search through the list of Pokemons to find the one he is looking for.

We mentioned earlier the use of a Hash Table to store the Pokemons using their unique power value as the key and the Pokemon itself as the value.

This would bring down the search complexity to O(1), i.e. constant time.

However, this makes use of additional space which raises the space complexity of the search operation to O(N) considering there are N Pokemons available.

N in this case would be 1000.
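A quick sketch of the hash-table idea (the variable names and sample entries here are made up for illustration):

```python
# Build the table once: O(N) extra space, N = 1000 in Pikachu's case.
pokemons = [("Charizard", 780), ("Gyarados", 640), ("Snorlax", 540)]
pokemons_by_power = {power: name for name, power in pokemons}

# Every subsequent search is an average O(1) dictionary lookup.
print(pokemons_by_power.get(640))  # -> Gyarados
print(pokemons_by_power.get(123))  # -> None (no Pokemon with that power)
```

The time win is real, but the table itself is the O(N) space cost the next paragraph is about.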

What if Pikachu didn’t have all this extra space available and he still wanted to speed up the search process?

Can I do that? Yes!

Certainly Pikachu can make use of his profound knowledge about sorting algorithms to come up with a search strategy that would be faster than the slow Linear Search.

Pikachu decided to ask his good friend Deoxys for help.

Deoxys, being the fastest Pokemon out there, helps Pikachu sort the list of Pokemons according to their power.

Instead of relying on the traditional sorting algorithms, Deoxys makes use of the Quick Sort algorithm (of course he does!) for sorting the Pokemons.

In doing so, he doesn’t make use of any significant additional space (just the recursion stack), and the expected time taken for sorting the N Pokemons is the same as that of the Merge Sort algorithm, i.e. O(N log N).

So, Pikachu is happy with his friend helping him out at the time of need.

Let me QUICKly sort the Pokemons for Pikachu!

Pikachu, being extremely smart, comes up with a search strategy that makes use of the sorted nature of the list of Pokemons.

This new strategy/algorithm is known as the Binary Search algorithm.

(Note: Sorting is a precondition for running binary search; once the list is sorted, Pikachu can run binary search as many times as he wants on this sorted list.)

Let’s have a look at the code for this algorithm and then analyze its complexity.
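The original code listing didn’t survive here, so below is a hedged reconstruction of the recursive binary search the surrounding text describes, using the `l`, `r`, and `x` names it refers to (the `powers` list is just sample data):

```python
def binary_search(arr, l, r, x):
    """Search for x in the sorted slice arr[l..r]; return its index or -1."""
    if l > r:
        return -1  # search portion is empty: x is not present
    mid = (l + r) // 2
    if arr[mid] == x:
        return mid
    if arr[mid] < x:
        # x can only be in the right half
        return binary_search(arr, mid + 1, r, x)
    # otherwise x can only be in the left half
    return binary_search(arr, l, mid - 1, x)

powers = [10, 25, 40, 55, 70, 85, 100]
print(binary_search(powers, 0, len(powers) - 1, 55))  # -> 3
```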

Clearly, the algorithm is recursive in nature.

Let’s see if we can use our newly learnt tricks to analyze the time complexity for the binary search algorithm.

The two variables l and r essentially define the portion of the array in which we have to search for the given element, x.

If we look at the algorithm, all it’s really doing is dividing the search portion of the input array into half.

Other than making a recursive call based on a certain condition, it doesn’t really do anything.

So, let’s quickly look at the recurrence relation for the binary search algorithm.

T(n) = T(n / 2) + O(1)

That seems like a pretty simple recurrence relation to analyze.

First let’s try and analyze the recursion tree and draw the complexity from there and then we will look at the Master theorem and see which of the three cases fits this recursion.

Whoa! This binary search algorithm is super fast.

It’s much faster than linear search.

What this implies for our cute little friend Pikachu is that for 1000 Pokemons, he would simply have to go and “ask” 10 of them at max to find the one special Pokemon he is looking for (how? Because log_2(1000) ≈ 10).
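We can sanity-check that claim by counting how many elements an iterative binary search actually inspects across a sorted list of 1000 values (this counting helper is just for illustration, not part of the original article):

```python
import math

def binary_search_comparisons(arr, x):
    """Count how many elements binary search inspects before finishing."""
    l, r, asked = 0, len(arr) - 1, 0
    while l <= r:
        mid = (l + r) // 2
        asked += 1
        if arr[mid] == x:
            return asked
        if arr[mid] < x:
            l = mid + 1
        else:
            r = mid - 1
    return asked

powers = list(range(1000))  # 1000 sorted Pokemon power values
worst = max(binary_search_comparisons(powers, x) for x in powers)
print(worst, math.ceil(math.log2(1000)))  # worst case matches log2(1000) ≈ 10
```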

Now let’s see how the more “formulaic” way of approaching recursive complexity analysis, i.e. the Master method, helps us in this case.

The generic Master method recurrence relation is:

T(n) = a T(n / b) + f(n)

and for our binary search algorithm we have:

T(n) = T(n / 2) + O(1)

Here f(n) = O(n^0), hence c = 0, a = 1, and b = 2.

There are 3 different cases for the Master theorem, and comparing c with log_b(a) decides which of the three cases gets used for a particular analysis.

In our case, c = log_2(1), i.e. 0 = 0.

This implies that our binary search algorithm fits Case 3 of the Master theorem, therefore T(n) = Θ(n^0 log(n)) = Θ(log(n)).

## How to choose the best algorithm?

In this article we introduced the idea of complexity analysis, which is an important part of algorithm design and development.

We saw why analyzing an algorithm’s complexity is important and how it directly affects our scalability decisions.

We even saw some great techniques for analyzing this complexity efficiently and correctly so as to make informed decisions in a timely manner.

The question arises, however: given all that I know about the time and space complexities of two algorithms, how do I choose which one to finally go with? Is there a golden rule?

The answer to this question, unfortunately, is No! There’s no golden rule to help you decide which algorithm to go with.

It totally depends on a lot of external factors.

Let’s try and look at a few of these scenarios that you might find yourself in and also look at the kind of decisions you would want to make.

### No constraint on the space!

Well, if you have two algorithms A and B and you want to decide which one to use, then apart from the time complexity, the space complexity also becomes an important factor.

However, given that space is not an issue that you are concerned with, it’s best to go with the algorithm that has the capability to reduce the time complexity even further given more space.

For example, Counting Sort is a linear time sorting algorithm but it’s heavily dependent upon the amount of space available.

Precisely, the range of numbers that it can deal with depends on the amount of space available.

Given unlimited space, you’re better off using the counting sort algorithm for sorting a huge range of numbers.
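A minimal counting sort sketch illustrates the trade-off: linear time, but O(k) extra space, where k (the size of the value range) is an assumption you must be able to afford:

```python
def counting_sort(arr, max_value):
    """Sort non-negative integers in O(N + k) time using O(k) extra space."""
    counts = [0] * (max_value + 1)  # one slot per possible value: the space cost
    for value in arr:
        counts[value] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)  # emit each value as often as it occurred
    return result

print(counting_sort([4, 2, 2, 8, 3, 3, 1], 8))  # -> [1, 2, 2, 3, 3, 4, 8]
```

Notice that `max_value` directly sizes the `counts` array; a huge range of values means a huge buffer, which is exactly why available space decides whether this algorithm is usable.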

### Sub-second latency requirement and limited space available

If you find yourself in such a scenario, then it becomes really important to deeply understand the performance of the algorithm on a lot of varying inputs, especially the kind of inputs you expect the algorithm to work with in your application.

For example, say you have two sorting algorithms, Bubble Sort and Insertion Sort, and you want to decide which one to use for sorting a list of users based on their age.

You analyzed the kind of input expected and you found the input array to be almost sorted.

In such a scenario, it’s best to use Insertion sort over Bubble sort due to its inherent ability to deal amazingly well with almost sorted inputs.
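Insertion Sort’s friendliness towards almost-sorted input is easy to see in a sketch: each element only shifts left as far as it is out of place, so a nearly sorted array needs almost no work (the `ages` sample is illustrative):

```python
def insertion_sort(arr):
    """In-place insertion sort: close to O(N) on almost sorted input."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # shift larger elements right; the loop stops immediately
        # if key is already in place, which is the common case
        # for almost sorted data
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

ages = [18, 21, 22, 20, 25, 30]  # almost sorted: only 20 is out of place
print(insertion_sort(ages))  # -> [18, 20, 21, 22, 25, 30]
```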

### Wait, why would anyone use Bubble or Insertion Sort in real-world scenarios?

If you think that these algorithms are just for educational purposes and are not used in any real-world scenarios, you’re not alone! However, this couldn’t be further from the truth.

I’m sure you’ve all used the sort() functionality in Python sometime in your career.

Well, if you’ve used it and marveled at its performance, you’ve used a hybrid algorithm based on Insertion Sort and Merge Sort called the Tim Sort algorithm.

To read more about it, head over here: Timsort — the fastest sorting algorithm you’ve never heard of. Timsort: a very fast, O(n log n), stable sorting algorithm built for the real world — not constructed in academia… (skerritt.blog)

It’s true that Insertion Sort might not be useful for very large inputs, as we’ve all seen from its polynomial time complexity.

However, its inherent ability to quickly sort an almost sorted range of numbers is what makes it so special, and that’s precisely the reason it’s used in the Timsort algorithm.

In short, you won’t ever have a clear black and white division between the algorithms you are struggling to choose from.

You have to analyze all the properties of the algorithms, including their time and space complexity.

You have to consider the size of inputs you are expecting your algorithm to work with and any other constraints that might exist.

Considering all these factors, you have to make an informed decision!

If you had a fun time understanding the intricacies of complexity analysis and also playing around with our friend Pikachu, do remember to destroy that like button and spread some love.

❣️ If you want more programming problems with detailed complexity analysis, head over to our kitchen!

Analyzing an algorithm is an important part of any developer’s skill set, and if you feel there are others who might benefit from this article, then do share it as much as possible!