Big O & What You Need To Know

Daniel C Reyes
Dec 5, 2020


The Basics of Big O Notation

During my journey of becoming a software engineer, I would hear about Big O Notation from time to time and would think, what in the world does this have to do with writing code? For reasons beyond me, I had always associated Big O Notation with algorithms, associated algorithms with math, and associated math with the word YIKES! Seeing the chart below seemed daunting enough, so I decided to stop playing word association and jumped into the scary deep end of Big O Notation.

My first few questions about Big O notation were: what is it? What does the chart above have to do with anything? And why is it so important to know when working as a software engineer? At its most basic level, Big O notation describes how long an algorithm takes to run, which is also referred to as time complexity. Big O represents how the runtime of a given algorithm grows as the data it works on grows larger. Someone new to Big O, like myself, may wonder why it's important to measure the speed of an algorithm, but as programs grow in size, milliseconds start to add up, and boom: an algorithm that used to take little to no time at all starts to slow your entire application or program down. So software engineers need to know the "worst case," or the slowest an algorithm will run, given a growing list of data. My journey into Big O led me to discover there are four main classifications used to represent Big O: O(1), O(N), O(logN), and O(N²).

O(1)

The best time complexity in Big O notation is O(1), as seen on the chart above. This covers algorithms that take roughly the same amount of time to run no matter how long or short the list is. If we were to write a simple block of code that greets my dog Oliver, you might think this is not an algorithm, but you would be wrong (like I was): any block of code that completes a task is an algorithm; this one just has an extremely low time complexity. The line console.log('Hello Oliver!') is what is known, in Big O notation, as O(1), which means very simply that the algorithm takes the same number of steps no matter how much data is given. O(1) is known as constant time.
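As a minimal sketch in JavaScript (the greetOliver function name is just for illustration), a constant-time task looks like this:

function greetOliver() {
  console.log('Hello Oliver!'); // always a single step, no matter how much data exists elsewhere
}

greetOliver();

No matter how large the rest of the program's data grows, this function always performs the same one step.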

O(N)

Let us say we have an array of my furry friends, dogs = ['Oliver', 'Charlie', 'Malo']. Now that we understand O(1), let's think about searching an array for a specific value. Given our dogs array above, say we were looking only for the element equal to the name "Charlie". Simple enough: we would write a simple loop that iterates through our dogs array and greets the element that matches the name "Charlie".

const dogs = ['Oliver', 'Charlie', 'Malo'];

// Loop over every element until we find (and greet) Charlie
function helloPuppy(array) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === 'Charlie') {
      console.log('Hello Charlie!');
    }
  }
}

helloPuppy(dogs);

O(n) gets a little more complicated, the variable "n" being the size of the data you are working on. This usually represents an algorithm that has to step through each item in a list to find the element it is looking for, or one that uses each item and alters or combines them in some way. With this in mind, we can now see that this type of algorithm would most likely not be O(1), since the number of steps it takes to complete depends on the size of the data set. That size, in Big O, is referred to as "N", and if we were to classify this loop in terms of Big O notation, it would be O(N): it takes roughly "N" steps to complete. O(N) is usually known as linear time. According to our chart above, O(N) time complexity is neither the best nor the worst; it depends on the data size.

O(logN)

Let's say I build a furry friends application and the dogs array has grown to a tremendous size. There's another time complexity that is better than O(n), referred to as O(logN). The log here stands for logarithm, which you could think of as the opposite of an exponent. O(log n) is also called logarithmic time.

The way O(log n) works is by using binary search on a sorted array. A binary search splits the list in half and only searches the half that could contain the value you're looking for. It then splits that half in half again and repeats the search until the value is found or there's nothing left to split. This way of searching becomes much more efficient when you're working with large sets of data, because it discards half of the remaining data each time it searches.
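As a minimal sketch, assuming the dogs array has been sorted alphabetically first (the sortedDogs and binarySearch names are just for illustration):

const sortedDogs = ['Charlie', 'Malo', 'Oliver'];

function binarySearch(array, target) {
  let low = 0;
  let high = array.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2); // look at the middle element
    if (array[mid] === target) return mid;    // found it
    if (array[mid] < target) low = mid + 1;   // discard the left half
    else high = mid - 1;                      // discard the right half
  }
  return -1; // not in the array
}

console.log(binarySearch(sortedDogs, 'Malo')); // 1

Because each pass throws away half of what remains, even a huge sorted list only takes a handful of steps to search.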

O(N²)

The last time complexity I will cover here is O(n²). This one is pretty bad: it starts out okay and then very quickly becomes slow. This time complexity usually deals with nested data, and the reason it has a bad runtime is that not only do you have to step through every item in a list, but you also have to step through every item within each of them.

Let's say you had an array with five dogs in it, and each of those dogs was actually another array containing five details about the dog. This algorithm will have to iterate a total of 25 times, which is 5². Deeper nesting will result in O(n³), O(n⁴), and so on.
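As a minimal sketch (the dogDetails data and describeDogs function are hypothetical), nested loops like these produce O(n²) behavior:

const dogDetails = [
  ['Oliver', 'corgi', 3],
  ['Charlie', 'beagle', 5],
  ['Malo', 'husky', 2],
];

function describeDogs(nested) {
  for (let i = 0; i < nested.length; i++) {      // outer loop: every dog
    for (let j = 0; j < nested[i].length; j++) { // inner loop: every detail of that dog
      console.log(nested[i][j]);                 // roughly n * n steps in total
    }
  }
}

describeDogs(dogDetails); // 3 dogs with 3 details each → 9 iterations

Doubling the amount of data roughly quadruples the number of steps, which is why the O(n²) curve climbs so steeply on the chart.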

Big O can get very complex, and it can be hard to conceptualize. However, once you understand the why and how of Big O, the more advanced ideas become much easier to pick up, and the chart above doesn't look so scary anymore!
