Algorithm Concepts – Big O Notation

Big O notation helps us determine how complex an operation is. It describes how the running time (or memory use) of an operation grows as the size of the data increases, so it serves as a measure of an algorithm's complexity.

The O comes from the Order of the function (its growth rate), while n is the size of the data we work with.

Given the following formula:

f(n) = 8n⁵ – 3n⁴ + 6

As n approaches infinity (for very large sets of data), the 8n⁵ term dominates, so all the lesser terms simply drop. The constant factors drop as well, so the formula has a growth rate of O(n⁵).

The following table contains the common O notations, from fastest (best) to slowest (worst).


O(1) – Constant: operations take the same time, independent of the size of the data set.
Example: a null pointer check, or comparing a given value to a constant.

O(log n) – Logarithmic: operations take slightly longer as the size of the data set increases.
Example: finding an item in a sorted data set, using binary search.

O(n) – Linear: the time taken is directly proportional to the size of the data set.
Example: adding all the values within a container.

O(n log n) – Linearithmic: when the data set doubles in size, operations take slightly more than twice as long.
Example: efficient sorting algorithms such as quicksort (on average) or merge sort.

O(n²) – Quadratic: each time the data set doubles in size, operations take four times longer.
Example: two nested loops over the container (a for inside another for, each from 0 to the length of the data).

O(2ⁿ) – Exponential: every new element added to the data set doubles the running time.
Example: brute-force search; calculating Fibonacci numbers with naive recursion.

O(n!) – Factorial: the running time grows with the factorial of the size of the data set.
Example: generating all permutations of a set, as in a brute-force traveling salesman solution.
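To make the logarithmic entry above concrete, the binary search example can be sketched with the standard library (a hedged illustration; the function name `contains` is my own, not from the original article):

```cpp
#include <vector>
#include <algorithm>

// std::binary_search halves the search interval at each step, so a
// sorted vector of n elements needs at most about log2(n) comparisons
// -> O(log n).
bool contains(const std::vector<int>& sorted, int value) {
    return std::binary_search(sorted.begin(), sorted.end(), value);
}
```

Note that the input must already be sorted; on unsorted data the result is undefined.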

Time complexity

The running time describes how many operations an algorithm must perform before it completes.


For the following example, the growth rate is O(1), which means that the lookup is constant, no matter how big the container is.
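The original listing is not reproduced here; a minimal sketch of a constant-time lookup (the function name `lookup` is my own) could be:

```cpp
#include <vector>
#include <cstddef>

// Accessing an element by index touches exactly one memory location,
// regardless of how many elements the vector holds -> O(1).
int lookup(const std::vector<int>& v, std::size_t i) {
    return v[i];
}
```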


For the following example, the growth rate is O(n) because, for a vector with n=1000 (1000 elements), the number of operations will also be 1000.
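A minimal sketch of such a linear pass, assuming the example sums all the values in a container (the function name `sumAll` is my own):

```cpp
#include <vector>

// Every element is visited exactly once, so a vector with 1000
// elements requires 1000 additions -> O(n).
int sumAll(const std::vector<int>& v) {
    int total = 0;
    for (int x : v)
        total += x;
    return total;
}
```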


For the following example, the growth rate is O(n²) because, for a vector with n=10 (10 elements), the number of operations will be 10² = 100.

  • For each number i from 0 to 9
  • For each number j from 0 to 9
    • Display the values on screen ([0][0], [0][1], …[9][9])
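The steps above can be sketched as a pair of nested loops. This version counts the operations instead of printing them, so the quadratic growth is easy to verify (the function name `countPairs` is my own):

```cpp
#include <cstddef>

// Two nested loops over the same range: for n = 10 the inner body
// runs 10 * 10 = 100 times -> O(n^2).
std::size_t countPairs(std::size_t n) {
    std::size_t operations = 0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            ++operations;   // the original example displays [i][j] here
    return operations;
}
```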

Space complexity

The space complexity describes how much space must be allocated to run the algorithm.

For any algorithm, memory may be used for variables (constants, temporary variables), instructions, or execution.

Space Complexity is the sum of Auxiliary space and Input space.

Auxiliary space is the extra/temporary space used by the algorithm during its execution, whereas the input space refers to the data received as input.

To calculate the space complexity, we need to know the memory required for each type of variable, which generally differs between platforms and compilers.


Let’s assume that the space requirement for an int is 4 bytes.

Variables a, b, c, and sum are integers, so they take 4 bytes each => space requirement = 4*4 = 16 bytes. We add 4 more bytes for the return value => total is 20 bytes.
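The function described above might look like the following sketch (the function name `addThree` is my own; the variable names match the article's description):

```cpp
// Only four ints (a, b, c, sum) plus the return value are ever
// allocated, no matter what values the inputs hold -> constant space.
int addThree(int a, int b, int c) {
    int sum = a + b + c;
    return sum;
}
```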

Because the space requirement is fixed, we call it constant.


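The copy described below can be sketched as follows (the function name `copyAll` is my own):

```cpp
#include <vector>

// The new vector grows with the input: n elements of 4 bytes each
// (assuming a 4-byte int) -> the extra space is proportional to n.
std::vector<int> copyAll(const std::vector<int>& v) {
    std::vector<int> result;
    result.reserve(v.size());   // one slot per input element
    for (int x : v)
        result.push_back(x);
    return result;
}
```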
In the above example, we receive a vector and copy each of its elements into a new one.

Therefore, assuming the size of an integer is 4 bytes, we will need 4*n bytes of space for the elements, where n is the size of the original vector.

The memory grows linearly with the size of the vector, so the space complexity is linear => O(n).
