What’s the Big O-deal?

Yet another breakdown of Big O and runtime complexity

Rachel Lum · May 1

There are probably a good hundred other articles just like this one that break down Big O for beginner software engineers.

Read any of them and I am sure you will come away with much the same content.

But what I have learned in my experience is that not everyone learns in the same way, and even just the slightest restructuring of words can help someone understand a concept.

So I am sharing my interpretation of Big O, and what it means for me as a web developer.

In this decomposition of Big O we will cover:

- What Big O represents
- Time complexity
- The significance of Big O to you

What Big O represents

“With big O notation we express the runtime in terms of — brace yourself — how quickly it grows relative to the input, as the input gets arbitrarily large.” — Interview Cake

So if runtime describes how long your code takes to run based on its input, then big O describes how quickly that runtime grows as the input grows.

In high school math or physics, you may have thought of this as a rate of growth, the slope of the runtime curve.

You could think of it as a limit, or the asymptote that a function approaches.

We use big O because we account for the worst-case scenario, as if you were dealing with arbitrarily large data, even if that realistically won’t always be the case.

According to Wikipedia (I know, not the most academic source, but most of the other sources I read also referenced it), Big O is also known as Landau’s symbol, after the German number theorist Edmund Landau.

He chose the letter O to describe the order of the function, which refers to the rate of growth of the function (throwback to Algebra II).

What we do not often hear about as web developers is that big O refers to the asymptotic upper bound (worst-case scenario), Ω (omega) is the asymptotic lower bound (best-case scenario), and Θ (theta) is the asymptotically tight bound, where the upper and lower bounds coincide.

For more detail, check out this article.
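To make those bounds concrete, here is a minimal sketch (my own example, not from the article): a linear search whose best case, Ω(1), finds the target at index 0, and whose worst case, O(n), has to scan every element.

```javascript
// Linear search: best case the target is at index 0 (one comparison),
// worst case it is absent and we compare against every element.
const linearSearch = function (array, target) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === target) return i // best case: found immediately
  }
  return -1 // worst case: scanned all n elements
}

console.log(linearSearch([7, 3, 9], 7)) // 0  (best case)
console.log(linearSearch([7, 3, 9], 5)) // -1 (worst case)
```

The same function is Ω(1), O(n), and therefore has no single Θ bound across all inputs, which is exactly why the three symbols exist.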

Time Complexity

How long it takes to run an algorithm.

Constant time, O(1), means that your code runs in the same amount of time regardless of the input.
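A minimal sketch of constant time (function name mine): a single array lookup takes the same time whether the array has one element or five.

```javascript
// Constant time: one array lookup, no matter how long the array is.
const getFirstItem = function (array) {
  return array[0]
}

console.log(getFirstItem([1]))             // 1
console.log(getFirstItem([1, 2, 3, 4, 5])) // 1
```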

An example of this comes from HackerRank’s video on YouTube featuring Gayle Laakmann McDowell, author of Cracking the Coding Interview.

She retells the story of a company in South Africa that raced a large file transfer over its slow internet connection against a carrier pigeon.

Spoiler alert: the pigeon won (for a large file size).

And that’s the gist of it — the pigeon flies at approximately the same speed independent of the file size, whereas the internet transfer time grows linearly with the file size, or O(n).

Laakmann McDowell refers to Big O as “algorithmic efficiency”.

Linear runtime is time that depends on the size of the input.

For instance, let’s pretend that it takes one second to console.log each item (what is this, the 1980s?!). In the following example, printNumbers([1]) takes 1 second, whereas printNumbers([1, 2, 3, 4, 5]) takes 5 seconds.

```javascript
const printNumbers = function (array) {
  for (let i = 0; i < array.length; i++) {
    console.log(array[i])
  }
}

printNumbers([1])             // 1
printNumbers([1, 2, 3, 4, 5]) // 1 2 3 4 5
```

If you have nested loops, then each additional level of nesting raises the time complexity to a higher power of n.

See the following example:

```javascript
const printPairsUpToN = function (n) {
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      console.log(i, j)
    }
  }
}

printPairsUpToN(3)
// 0 0
// 0 1
// 0 2
// 1 0
// 1 1
// 1 2
// 2 0
// 2 1
// 2 2
```

As you can see, the input is 3, and the output size is 9, or 3².

So this function has runtime O(n²).

But note that if you have two separate loops, the runtime is still linear, since each loop passes through the data only once and the runtime scales directly with the input.

See:

```javascript
const printNumbersUpAndThenDown = function (array) {
  for (let i = 0; i < array.length; i++) {
    console.log(array[i])
  }
  for (let j = array.length - 1; j >= 0; j--) {
    console.log(array[j])
  }
}

printNumbersUpAndThenDown([1, 2, 3, 4, 5])
// 1 2 3 4 5
// 5 4 3 2 1
```

So the actual runtime of this function would be O(n + n), or O(2n), and since coefficients are insignificant, this still has linear runtime of O(n).

You’ll get the hang of it.

A large, detailed table of common time complexities is shown here.

The significance of Big O to you

For us as software engineers, Big O is nothing more than a way to describe the time complexity of our code.

If our code’s runtime is described by a function with several terms, we simply reduce it to the term with the highest exponent, excluding coefficients, lower-order terms, and constants.

… Let’s clear that up a bit.

O(n³ + 50n² + 10000) is simply O(n³).

This is because as n gets arbitrarily large, or as n approaches infinity, the less significant terms do not make as much of an impact on runtime in comparison to the highest-order term.

Same goes for coefficients and constants — insignificant in the grand scheme of things*.

*Interview Cake notes that “Big O ignores constants, but sometimes the constants matter. If we have a script that takes 5 hours to run, an optimization that divides the runtime by 5 might not affect big O, but it still saves you 4 hours of waiting.” So just keep that in mind during an interview.
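To see concretely why the lower-order terms wash out, here is a quick numeric sanity check (my own numbers): plugging n = 1000 into n³ + 50n² + 10000.

```javascript
// For large n, the n³ term dwarfs the rest of n³ + 50n² + 10000.
const n = 1000
const cubic = n ** 3          // 1,000,000,000
const quadratic = 50 * n ** 2 // 50,000,000
const constant = 10000
const total = cubic + quadratic + constant

console.log((cubic / total * 100).toFixed(2) + '%') // prints "95.24%"
```

Already at n = 1000 the cubic term accounts for about 95% of the total, and its share only grows as n does.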

Conclusion

To sum up, here are my key takeaways:

- If your algorithm takes the same amount of time regardless of the size of the input (meaning it probably has nothing to do with the input), it is constant — O(1).
- If you iterate through a series of items once (for, while, map, etc.), or perform an action on each item in an array, it is linear — O(n).
- If you have nested loops, increment the exponent of n, i.e. two loops = O(n²), three loops = O(n³), etc.
- If you are iterating through data structures of different sizes, it’s O(a*b).
- Efficient comparison sorts such as merge sort have runtime O(n log n). (There’s definitely a great explanation out there. Utilize your most powerful tool, Google search.)

A short note on space complexity:

We must keep in mind that when we talk about Big O, we are mainly referring to the time complexity of the algorithm.

It is important to also consider space complexity, as this is another factor in optimizing your code for website performance.

Space complexity refers to the total additional size (relative to the input) of any new variables used in an algorithm.
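As a sketch of that definition (my own examples, not from the article): summing an array needs only one extra variable, O(1) space, while building a doubled copy allocates a new array that grows with the input, O(n) space.

```javascript
// O(1) extra space: only the accumulator is allocated,
// no matter how long the input array is.
const sum = function (array) {
  let total = 0
  for (let i = 0; i < array.length; i++) {
    total += array[i]
  }
  return total
}

// O(n) extra space: the result array grows with the input.
const doubled = function (array) {
  const result = []
  for (let i = 0; i < array.length; i++) {
    result.push(array[i] * 2)
  }
  return result
}

console.log(sum([1, 2, 3]))     // 6
console.log(doubled([1, 2, 3])) // [ 2, 4, 6 ]
```

Both functions have O(n) *time* complexity; it is only their additional memory use that differs.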

Stay tuned for another article on Space Complexity (which I’ll link here when it is published, but for now, enjoy this Wikipedia article).

“Sometimes there’s a tradeoff between saving time and saving space, so you have to decide which one you’re optimizing for.

[…]

“A great engineer (startup or otherwise) knows how to strike the right balance between runtime, space, implementation time, maintainability, and readability.” — Interview Cake

Resources

- Big O — Interview Cake
- Big O — Free Code Camp
- Big O — Towards Data Science
- HackerRank Cracking the Coding Interview YouTube Playlist
- Big O Notation Lecture from MIT
