# Algorithm Concepts – Big O Notation

Big O notation helps us measure how complex an operation is. It describes how the running time (or space) of an operation grows as the size of the input data grows.

The O comes from the Order of the function (its growth rate), while n is the size of the data we work with.

Given the following formula:

f(n) = 8n^{5} – 3n^{4} + 6

As n approaches infinity (for very large data sets), the 8n^{5} term is the only one that actually matters, so all the lesser terms simply drop. The constant factors drop as well, so the formula has a growth rate of O(n^{5}).
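To see numerically why the lesser terms drop, here is a small sketch (the helper names are ours, not from the text): the ratio between f(n) and its dominant term approaches 1 as n grows.

```cpp
// Sketch of why lower-order terms drop: for f(n) = 8n^5 - 3n^4 + 6,
// the ratio f(n) / (8n^5) approaches 1 as n grows, so only the dominant
// term determines the growth rate.
double f(double n) { return 8 * n * n * n * n * n - 3 * n * n * n * n + 6; }
double dominantTerm(double n) { return 8 * n * n * n * n * n; }

// f(10)   / dominantTerm(10)   ≈ 0.9625
// f(1000) / dominantTerm(1000) ≈ 0.9996
```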

The following table contains the common O notations, from fastest (best) to slowest (worst).

| Notation | Name | Example |
| --- | --- | --- |
| O(1) | Constant | Operations take the same time regardless of the data set size. Example: a null-pointer check, or comparing a given value to a constant. |
| O(log n) | Logarithmic | Operations take slightly longer as the size of the data set increases. Example: finding an item in a sorted data set using binary search. |
| O(n) | Linear | The time taken is directly proportional to the data set size. Example: adding up all the values within a container. |
| O(n log n) | Linearithmic | When the data set doubles in size, the number of operations slightly more than doubles. Example: efficient comparison sorts such as merge sort or quicksort (on average). |
| O(n^{2}) | Quadratic | Each time the data set doubles in size, the operations take four times longer. Example: two nested loops over the container (a for inside another for, from 0 to the length of the data). |
| O(2^{n}) | Exponential | Every new element added to the data set doubles the number of operations. Example: computing Fibonacci numbers with naive recursion. |
| O(n!) | Factorial | The number of operations grows factorially with the data set size. Example: generating all permutations of a list (brute-force traveling salesman). |
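To make the O(log n) row concrete, here is a minimal binary search sketch (the function name is ours): each comparison discards half of the remaining range, so a sorted container of n elements needs at most about log₂ n steps.

```cpp
#include <vector>

// Minimal binary search sketch: each comparison halves the search range,
// which is what makes the lookup O(log n) rather than O(n).
int binarySearch(const std::vector<int>& sorted, int value)
{
    int low = 0;
    int high = static_cast<int>(sorted.size()) - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (sorted[mid] == value) return mid;     // found: return the index
        if (sorted[mid] < value)  low = mid + 1;  // discard the left half
        else                      high = mid - 1; // discard the right half
    }
    return -1; // value not present
}
```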

## Time complexity

The running time describes how many operations an algorithm must do before it completes.

#### O(1)

For the following example, the growth rate is O(1), which means that the lookup is constant, no matter how big the container is.

```cpp
#include <vector>
using namespace std;

vector<int> items = { 1, 2, 3, 4, 5, 6 };

bool isValueAtIndex(int index, int value)
{
    return items[index] == value;
}
```

#### O(n)

For the following example, the growth rate is O(n) because, for a vector with n=1000 (1000 elements), the number of operations will also be 1000.

```cpp
#include <vector>
using namespace std;

vector<int> items = { 1, 2, 3, 4, 5, 6 };

int calculateSum()
{
    int sum = 0;
    for (size_t i = 0; i < items.size(); ++i) {
        sum += items[i];
    }
    return sum;
}
```

#### O(n^{2})

For the following example, the growth rate is O(n^{2}) because, for a vector with n=10 (10 elements), the number of operations will be 10^{2} = 100.

- The outer loop runs once for each value of i from 0 to 9
- For each i, the inner loop runs once for each value of j from 0 to 9
- Each inner iteration displays one pair on screen ([0][0], [0][1], … [9][9])

```cpp
#include <iostream>
#include <vector>
using namespace std;

vector<int> items = { 1, 2, 3, 4, 5, 6 };

void print(int i, int j)
{
    cout << "[" << i << "][" << j << "]";
}

void printItemsAsMatrix()
{
    for (size_t i = 0; i < items.size(); ++i) {
        for (size_t j = 0; j < items.size(); ++j) {
            print(i, j);
        }
    }
}
```

## Space complexity

The space complexity describes how much space must be allocated to run the algorithm.

For any algorithm, memory may be used for variables (constants, temporary variables), for instructions, or for execution (for example, the call stack).

Space Complexity is the sum of Auxiliary space and Input space.

Auxiliary space is the extra/temporary space used by the algorithm during its execution, whereas the input space refers to the data received as input.

To calculate the space complexity, we need to know the memory required for each variable's type, which can differ between platforms and compilers.
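Since these sizes are platform-dependent, one way to check them is with the sizeof operator (a small sketch of ours, not from the original text):

```cpp
#include <cstdio>

// sizeof reports the implementation-defined size of a type on the current
// platform; the 4-byte int assumed in the examples below is common on
// mainstream platforms but is not guaranteed by the C++ standard.
void printTypeSizes()
{
    printf("int:    %zu bytes\n", sizeof(int));
    printf("double: %zu bytes\n", sizeof(double));
    printf("int*:   %zu bytes\n", sizeof(int*));
}
```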

#### Constant

Let’s assume that the space requirement for an int is 4 bytes.

```cpp
int function()
{
    int a = 10, b = 5, c = 5;
    int sum = a + b + c;
    return sum;
}
```

Variables a, b, c, and sum are integers, so they take 4 bytes each => space requirement = 4 × 4 = 16 bytes. We add 4 more bytes for the return value => the total is 20 bytes.

Because the space requirement is fixed, we call it constant, O(1).

#### Linear

```cpp
#include <vector>
using namespace std;

vector<int> copy(const vector<int>& input)
{
    vector<int> output;
    for (size_t i = 0; i < input.size(); i++) {
        output.push_back(input[i]);
    }
    return output;
}
```

In the above example, we receive a vector and we copy each element of that vector into a new one.

Therefore, assuming the size of an integer is 4 bytes, we will need 4*n bytes of space for the elements, where n is the size of the original vector.

The memory grows linearly with the size of the vector, so the space complexity is linear => O(n).
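For contrast with the copy() example above, here is a sketch of our own where the auxiliary space stays O(1) even though the input space is O(n):

```cpp
#include <vector>

// Summing in place uses a single extra int regardless of the input size,
// so the auxiliary space is O(1), while the input itself still occupies
// O(n) space.
int sumInPlace(const std::vector<int>& input)
{
    int sum = 0;                 // the only auxiliary storage
    for (int value : input) {
        sum += value;
    }
    return sum;
}
```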