# Algorithm Techniques – Dynamic programming

Dynamic programming is a technique based on a recurrent formula and one or more states. A sub-solution of the problem is constructed from previously found ones (that is, we express the value of a function in terms of other values of that function).

The trick behind Dynamic Programming is that we trade space for time: instead of recalculating all the states (which takes a lot of time but no space), we store the results of sub-problems to save time.

The majority of Dynamic Programming problems can be categorized into two types:

- Optimization problems
- Combinatorial problems

Optimization problems expect you to find a solution such that the value of some function is minimized or maximized. Combinatorial problems expect you to figure out the number of ways to do something, or the probability of an event happening.

Dynamic Programming problems can be solved through the following schema:

- Show that the problem can be broken down into optimal sub-problems.
- Recursively define the value of the solution by expressing it in terms of optimal solutions for smaller subproblems.
- Compute the value of the optimal solution.
- Construct an optimal solution from the computed information.

**Bottom up vs. Top Down**

In Top-down, you start building the solution right away by explaining how you build it from smaller solutions.

In Bottom-up, you start with the small solutions, and then build up.

**Analogy:**

**Bottom up –** I’m going to learn programming. Afterwards, I will start practicing. Afterwards, I will participate in contests. Afterwards, I’ll practice further and try to improve. Afterwards, I’ll be a great coder.

**Top Down –** I’ll be a great coder. How? I’ll practice further and try to improve. How? I’ll participate in contests. How? I’ll start practicing. How? I’m going to learn programming.

**Top-down approach: Memoization**

Memoization is an optimization technique used to speed up applications by storing the results of an expensive function call and returning the cached result when the same input occurs again.

Whenever we need a solution to a subproblem, we look into the lookup table. If it’s there, we retrieve it, and otherwise, we calculate the value and put the result in the lookup table to be used later on.

```cpp
#include <iostream>
#include <vector>
using namespace std;

int fibonacci_topDown(int n, vector<int>& lookup_table) {
    if (n == 0) { return 0; }
    if (n == 1) { return 1; }
    if (lookup_table[n] != 0) {
        return lookup_table[n];  // cached result: return it directly
    }
    lookup_table[n] = fibonacci_topDown(n - 1, lookup_table)
                    + fibonacci_topDown(n - 2, lookup_table);
    return lookup_table[n];
}

int main() {
    int value = 10;
    vector<int> lookup_table(value + 1, 0);  // room for results 0..value
    cout << fibonacci_topDown(value, lookup_table);
    return 0;
}
```

In our case, the lookup table is allocated in `main`; we need to make sure it has enough memory to store the results for every value up to `n`.

On each call, after handling the base cases, we check whether the value for `n` is already in the lookup table. If it is there, we retrieve it; otherwise, we calculate it from the two previous results and put it in the lookup table to be used later on.

**Bottom-up approach: Tabulation**

In the bottom-up approach, we calculate the small values, and then build larger values from them.

```cpp
#include <vector>
using namespace std;

int fibonacci_bottomUp(int x) {
    vector<int> results_table;
    results_table.push_back(0);  // fib(0)
    results_table.push_back(1);  // fib(1)
    for (int i = 2; i <= x; i++) {
        results_table.push_back(results_table[i - 1] + results_table[i - 2]);
    }
    return results_table[x];
}
```

The results table starts with the base values for 0 and 1. For each value from 2 up to `x`, we append to the results table the solution built from the two previously calculated values.
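A common follow-up, not covered in the text above, is to note that each Fibonacci value depends only on the two values before it, so the full results table can be replaced by two variables, reducing the space from O(n) to O(1). A minimal sketch:

```cpp
// Space-optimized bottom-up Fibonacci: keep only the last two values.
int fibonacci_optimized(int x) {
    if (x == 0) return 0;
    int prev = 0, curr = 1;  // fib(0) and fib(1)
    for (int i = 2; i <= x; i++) {
        int next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;
}
```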