## Shortcut to seniority


## Chapter warning

This chapter will contain some algorithms that are usually learned in school.

It is great if you're able to follow the problems and the solutions, but if not, don't get discouraged!

Algorithms are great for exposing you to different problem-solving techniques. However, you will not use them in your everyday programming, and if you actually do need one, you can always look them up on the internet.

Therefore, it's a good idea to read this chapter, but if you are stuck or simply get bored, feel free to go directly to Chapter 3 and come back later on.

Traversing a tree means that we visit every node in the tree. Every binary tree is a combination of a node carrying data and two sub-trees.

Depending on the order we visit all nodes, there are two big categories of traversals:

- Depth first traversals, achieved through inorder, preorder, or postorder.
- Breadth first traversals, achieved through level order.

Let’s assume that the definition of a node is as follows:

```
struct Node {
    Node(int data) : left(nullptr), right(nullptr), mData(data) {}
    Node* left;
    Node* right;
    int mData;
};
```

We can see that the type of `left` and `right` is `Node*` - that is, a pointer to an instance of type Node. In other words, we have reserved space for a pointer, so we can assign it afterwards to the memory address of a Node object.

Inorder traversal is achieved by:

- Visiting all the nodes in the left subtree
- Visiting the root node
- Visiting all the nodes in the right subtree

```
void traversal_inorder(Node* node) {
    if (node == nullptr)
        return;
    // go recursively on left side
    traversal_inorder(node->left);
    // consume the data for this node
    cout << node->mData << " ";
    // go recursively on right side
    traversal_inorder(node->right);
}
```

The function is called with the head node = 41.

According to the algorithm, we keep going to the left child until it is null, then we print the data from the current node and go to the right.

- 41 calls 20, which calls 11.
- 11 is done with the left side, so it displays itself {11} and goes to the right (null here, so it returns instead).
- 20 is done with the left side, so it displays itself {11, 20} and goes to the right (29).
- 29 is done with the left side, so it displays itself {11, 20, 29} and goes to the right (32).
- 32 is done with the left side, so it displays itself {11, 20, 29, 32} and goes to the right (null here, so it returns instead).
- 41 is done with the left side, so it displays itself {11, 20, 29, 32, 41} and goes to the right (65), which calls 50.
- 50 is done with the left side, so it displays itself {11, 20, 29, 32, 41, 50} and goes to the right (null here, so it returns instead).
- 65 is done with the left side, so it displays itself {11, 20, 29, 32, 41, 50, 65} and goes to the right (91), which calls 72.
- 72 is done with the left side, so it displays itself {11, 20, 29, 32, 41, 50, 65, 72} and goes to the right (null here, so it returns instead).
- 91 is done with the left side, so it displays itself {11, 20, 29, 32, 41, 50, 65, 72, 91} and goes to the right (99).
- 99 is done with the left side, so it displays itself {11, 20, 29, 32, 41, 50, 65, 72, 91, 99} and goes to the right (null here, so it returns instead).

The stack unwinds until the last function is removed from it.

In-order traversal displays the following data: 11, 20, 29, 32, 41, 50, 65, 72, 91, 99.
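To make the walkthrough concrete, here is a sketch that rebuilds the tree above (its shape is inferred from the visiting order) and collects the in-order sequence into a vector instead of printing it; the helper names are illustrative only:

```
#include <cassert>
#include <vector>

// Same Node as in the chapter, with the pointers initialized to nullptr.
struct Node {
    Node(int data) : left(nullptr), right(nullptr), mData(data) {}
    Node* left;
    Node* right;
    int mData;
};

// In-order traversal that appends to a vector instead of printing.
void inorder(Node* node, std::vector<int>& out) {
    if (node == nullptr)
        return;
    inorder(node->left, out);    // left subtree first
    out.push_back(node->mData);  // then the node itself
    inorder(node->right, out);   // right subtree last
}

// Reconstructs the tree used in the walkthrough (shape inferred from it).
Node* build_walkthrough_tree() {
    Node* root = new Node(41);
    root->left = new Node(20);
    root->right = new Node(65);
    root->left->left = new Node(11);
    root->left->right = new Node(29);
    root->left->right->right = new Node(32);
    root->right->left = new Node(50);
    root->right->right = new Node(91);
    root->right->right->left = new Node(72);
    root->right->right->right = new Node(99);
    return root;
}
```

Running `inorder` on this tree produces exactly the sorted sequence from the walkthrough, which is the defining property of in-order traversal on a binary search tree.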

Preorder traversal is achieved by:

- Visiting the root node
- Visiting all the nodes in the left subtree
- Visiting all the nodes in the right subtree

```
void traversal_preorder(Node* node) {
    if (node == nullptr)
        return;
    // consume the data for this node
    cout << node->mData << " ";
    // go recursively on left side
    traversal_preorder(node->left);
    // go recursively on right side
    traversal_preorder(node->right);
}
```

The function is called with the head node = 41.

According to the algorithm, we print the data, then go recursively to the left side, and then recursively to the right side.

- 41 displays itself {41} and goes to the left side (20).
- 20 displays itself {41, 20} and calls the left side (11).
- 11 displays itself {41, 20, 11} and has no children on the left or right, so it goes back to the caller (20), which now goes to the right side (29).
- 29 displays itself {41, 20, 11, 29} and goes to the left side (nothing there), so it goes to the right side (32).
- 32 displays itself {41, 20, 11, 29, 32} and has nothing on the left or right, so it goes back to the caller (29) -> back to the caller (20) -> back to the caller (41).
- Now that 41 is done with the left side, it goes to the right (65).
- 65 displays itself {41, 20, 11, 29, 32, 65} and goes left (50).
- 50 displays itself {41, 20, 11, 29, 32, 65, 50} and has nothing on the left or right, so it goes back to the caller (65), which now goes to the right side (91).
- 91 displays itself {41, 20, 11, 29, 32, 65, 50, 91} and goes left (72).
- 72 displays itself {41, 20, 11, 29, 32, 65, 50, 91, 72} and has nothing on the left or right, so it goes back to the caller (91), which now goes to the right side (99).
- 99 displays itself {41, 20, 11, 29, 32, 65, 50, 91, 72, 99} and has nothing on the left or right, so it goes back to the caller (91) -> back to the caller (65) -> back to the caller (41) -> pop out of the stack.

Pre-order traversal displays the following data: 41, 20, 11, 29, 32, 65, 50, 91, 72, 99.

Postorder traversal is achieved by:

- Visiting all the nodes in the left subtree
- Visiting all the nodes in the right subtree
- Visiting the root node

```
void traversal_postorder(Node* node) {
    if (node == nullptr)
        return;
    // go recursively on left side
    traversal_postorder(node->left);
    // go recursively on right side
    traversal_postorder(node->right);
    // consume the data for this node
    cout << node->mData << " ";
}
```

The function is called with the head node = 41.

According to the algorithm, we go recursively to the left side, then to the right side, and then we print the data.

- 41 goes to the left side (20).
- 20 goes to the left side (11).
- 11 has nothing on the left and nothing on the right, so it prints itself {11} and goes back to the caller (20).
- 20 goes to the right side (29).
- 29 has nothing on the left, so it goes to the right (32).
- 32 has nothing on the left and nothing on the right, so it prints itself {11, 32} and goes back to the caller (29).
- 29 displays itself {11, 32, 29} and goes back to the caller (20).
- 20 displays itself {11, 32, 29, 20} and goes back to the caller (41).
- 41 goes to the right side (65).
- 65 goes to the left side (50).
- 50 has nothing on the left or right, so it prints itself {11, 32, 29, 20, 50} and goes back to the caller (65).
- 65 goes to the right side (91).
- 91 goes to the left side (72).
- 72 has nothing on the left or right, so it prints itself {11, 32, 29, 20, 50, 72} and goes back to the caller (91).
- 91 goes to the right side (99).
- 99 has nothing on the left or right, so it prints itself {11, 32, 29, 20, 50, 72, 99} and goes back to the caller (91).
- 91 displays itself {11, 32, 29, 20, 50, 72, 99, 91} and goes back to the caller (65).
- 65 displays itself {11, 32, 29, 20, 50, 72, 99, 91, 65} and goes back to the caller (41).
- 41 displays itself {11, 32, 29, 20, 50, 72, 99, 91, 65, 41} and pops out of the stack.

Post-order traversal displays the following data: 11, 32, 29, 20, 50, 72, 99, 91, 65, 41.

Level order traversal is achieved by visiting all nodes at a particular level, before visiting the nodes at a deeper level.
In order to implement such traversal, we will need to use a **queue**.
As we visit a node, we can add all the children in the queue, to visit them later as well.
As long as the queue is not empty, we can take out a node from the front of the queue, visit it, and then enqueue its children.

```
void traversal_levelorder(Node* root) {
    if (root == nullptr)
        return;
    queue<Node*> traversal_queue;
    traversal_queue.push(root);
    while (!traversal_queue.empty()) {
        Node* front = traversal_queue.front();
        // use the current node
        cout << front->mData << " ";
        // add the children of this node in the queue, skipping empty subtrees
        if (front->left != nullptr)
            traversal_queue.push(front->left);
        if (front->right != nullptr)
            traversal_queue.push(front->right);
        // dispose of the node we just visited
        traversal_queue.pop();
    }
}
```

We first create a queue that will store the nodes that we still have to visit, and then we add there the root node (41). The queue is not empty, so we pop the item from the queue, and add its children instead. Therefore, after the first run, we will have the first level traversed (the root node), and we have its left and right children in the queue {20, 65}.

For the second iteration, we pop the first item in the queue, which is 20. Now, the queue contains only the value 65. By adding the left and right children of 20 (the next depth) to the queue, the queue will be {65, 11, 29}.

We can see that we cannot visit the next level until all the items from the current level are visited. Level-order traversal displays the following data: 41, 20, 65, 11, 29, 50, 91, 32, 72, 99.

**Graph traversal** means that we visit every vertex and edge exactly once, in a well-defined order. The order in which the vertices are visited is important and may depend upon the algorithm or problem that you are solving.

During a traversal, it is important to track which vertices have been visited.

BFS starts the traversal from a given vertex and traverses the graph layer by layer.

This is performed in two steps:

- Move horizontally and visit all the vertices of the current layer
- Move to the next layer

If the starting vertex is 0, we will cover the graph as follows: 0 1 2 3 4 5 6 7

This is similar to the **level-order** traversal of a tree, except that, unlike trees, graphs can contain cycles, so we may reach the same node again. The solution is to use a boolean visited array, in which each vertex is marked as visited as we traverse through the vertices.

So, we have a graph, and we want to iterate through all the vertices in that graph. Starting from a vertex s, we begin by looking at all the vertices that can be reached through a path from s => the closest / nearest neighbours.

In other words: using BFS, we find all the vertices that are at a distance x from s before finding any vertex that is at a distance x+1 from s.
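This distance-by-layers property can be sketched in code: a BFS that records, for each vertex, how many edges away from the start it is. The graph shape and the helper name below are assumptions for the demo, not part of the chapter:

```
#include <cassert>
#include <map>
#include <queue>
#include <vector>

// Sketch: BFS naturally computes the distance (in edges) from the start
// vertex, because all vertices at distance x are dequeued before any
// vertex at distance x + 1.
std::map<int, int> bfs_distances(const std::map<int, std::vector<int>>& adj, int start) {
    std::map<int, int> distance;  // doubles as the "visited" marker
    std::queue<int> q;
    distance[start] = 0;
    q.push(start);
    while (!q.empty()) {
        int current = q.front();
        q.pop();
        auto it = adj.find(current);
        if (it == adj.end())
            continue;  // no outgoing edges from this vertex
        for (int next : it->second) {
            if (distance.count(next) == 0) {  // not visited yet
                distance[next] = distance[current] + 1;
                q.push(next);
            }
        }
    }
    return distance;
}
```

Every vertex is assigned its distance the moment it is first enqueued, which is exactly the layer it belongs to.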

The visited array is marked 0 for all the nodes [0-7] and the queue is empty. We start with vertex 0, so we mark it as true in the visited array, and then proceed to add it into the queue.

Queue : 0

Index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7
---|---|---|---|---|---|---|---|---
Visited | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0

Since the queue is not empty, we dequeue the first element and look at the adjacent vertices of that node. The vertices that are not visited yet will be marked as visited and added into the queue.

So, what will happen?

Node 1 has the following adjacent vertices: 0, 2, 4, and 5. 0 and 2 are already visited, so we do nothing about them, whereas 4 and 5 will be marked as visited and added into the queue. Since the queue is not empty, they will be added at the end of the queue, after 2 and 3, which are still on layer 2, meaning that they will be processed after we finish with that layer.

```
class Graph {
public:
    void addEdge(int from, int to) {
        vertices[from].push_back(to);
    }
    void BFS(int initial);
private:
    map<int, vector<int>> vertices;
};


void Graph::BFS(int initial) {
    // The queue we need to use
    list<int> queue;

    map<int, bool> visited_values;

    // Add starting vertex in the queue and set to visited
    visited_values[initial] = true;
    queue.push_back(initial);

    // Iterate through the queue items
    while (!queue.empty()) {
        const int top = queue.front();
        cout << top << " ";
        queue.pop_front();

        // Insert at the end of the queue all the items that are adjacent
        // and not visited already
        for (int item : vertices[top]) {
            if (visited_values.find(item) == visited_values.end()) {
                visited_values[item] = true;
                queue.push_back(item);
            }
        }
    }
}

int main() {
    Graph g;
    g.addEdge(0, 1);
    g.addEdge(0, 2);
    g.addEdge(1, 2);
    g.addEdge(2, 0);
    g.addEdge(2, 3);
    g.addEdge(3, 3);
    g.BFS(2);
    return 0;
}
```

The main() function is initially called, which constructs a Graph and calls BFS function with 2 as parameter. The value 2 implies that we should start our BFS implementation with 2 as the starting point.

The algorithm starts at line 12.

Line 14 introduces a queue of integers, in which we will insert the values we want to further visit.

Line 16 introduces a map called visited_values, which keeps track of the nodes that were already visited.

Initially, we mark our initial node (2) as visited and add it into the queue (Line 19-20).

While the queue is not empty (Line 23):

- We pick the first element from the queue (Line 24), and we remove it from the queue (Line 26).
- For each edge connected to that node (Line 30):
- If the node was not already visited, we set it as visited, and add it into the queue for further processing.

Because of this logic, what will happen?

We start with node 2, and mark it as visited. We iterate through all neighbours of 2, and add them into the queue, while also marking them as visited. We finish with node 2, and we get the next item from the queue, which is one of the vertices directly connected to 2, and we insert all of its neighbours into the queue as well.

Note that we don't introduce into the queue any nodes that were already visited – we do this in order to NOT process the same node multiple times, which would cause an infinite loop (e.g. add 2 to the queue, which adds 5, which adds 2, which adds 5…).

**DFS** is a recursive algorithm that uses backtracking internally. It involves searching all the nodes by going ahead and, when there are no more nodes to visit, moving backwards on the same path to find more nodes to traverse.

All the nodes on the current path will be visited before it goes on the next path. The implementation can be done by using stacks.

- Pick a starting node.
- Push all its adjacent nodes in the stack.
- Pop a node from the stack to select the next node to visit.
- Push all of its adjacent nodes into the stack.
- Repeat until stack is empty.

You need to make sure that visited nodes are marked as such, otherwise you will end up in an infinite loop.

So, we have a graph, and we want to iterate through all the vertices in that graph.

Starting from a vertex S, we add all its adjacent nodes that are not visited yet into the stack.

Then, we pop a node from the stack and add its adjacent nodes into the stack. This way, we will visit all the adjacent nodes reachable from a given starting node, until there are no nodes left unvisited. When that happens, we pop the stack and continue to look for unvisited nodes.

We can deduce that there are 3 states:

- **Unvisited**: The node was not reached yet.
- **Pushed**: The node was pushed into the stack.
- **Visited**: The node was popped, and its adjacent nodes are in the stack.

We repeat these steps until the stack is empty.

If there’s a node that was added twice (let’s say node 3 here, first added by the initial node 0, and a second time after iterating through the adjacent nodes of 4) – we add it into the stack again.

When we pop the nodes to visit them, we check their state – if they were already visited, we skip them.

First of all, we need to set the node we start from - let's go with node 0.

- Node 0 is pushed into the stack

Stack: 0

After visiting node 0:

- We pop value 0 from the stack
- We push values 8, 3, and 1 into the stack
- Next, we visit the top node in the stack (1)

Stack: 8 3 1

Next, we visit one of the unvisited neighbours of node 1

We pick node 7 to further work with.

After visiting node 1:

- We pop value 1 from the stack
- We push value 7 into the stack
- Next, we visit the top node in the stack (7)

Stack: 8 3 7

```
class Graph {
    map<int, vector<int>> vertices;
public:
    void addEdge(int from, int to) {
        vertices[from].push_back(to);
    }
    void DFS(int initial);
};


void Graph::DFS(int initial) {
    // The stack we need to use
    stack<int> myStack;

    map<int, bool> visited_values;

    myStack.push(initial);

    while (!myStack.empty()) {
        const int top = myStack.top();
        // We pop the item before we add others into the stack
        myStack.pop();
        if (!visited_values[top]) {
            std::cout << top << " ";
            visited_values[top] = true;
            for (int item : vertices[top]) {
                myStack.push(item);
            }
        }
    }
}

int main() {
    Graph g;
    g.addEdge(0, 1);
    g.addEdge(0, 2);
    g.addEdge(0, 3);
    g.addEdge(1, 3);
    g.addEdge(2, 3);
    g.addEdge(2, 4);
    g.DFS(0);
    return 0;
}
```

The main() function is initially called, which constructs a Graph and calls DFS function with 0 as parameter. The value 0 implies that we should start our DFS implementation with 0 as the starting point.

The algorithm starts at line 11.

Line 13 introduces a stack of integers, in which we will insert the nodes we want to visit. Line 15 introduces a map called visited_values, which we will use to keep track of nodes that were already visited.

Initially, we push the initial value into the stack (Line 17).

While the stack is not empty (Line 19):

- We retrieve the top node from the stack and remove it (Line 20, 22).
- If the node was not already visited:
- We mark it as visited, iterate through its neighbours, and push them into the stack.

Because of this logic, what will happen?

We initially add node 0 into the stack. Because the stack is not empty, we pop the node from the stack. It is not visited, so we mark it as such and we add its direct neighbours into the stack. Assuming 0 has 3 neighbours, the stack will now have 3 items. Then, we start again, popping the top item from the stack, marking it as visited, and pushing its direct neighbours into the stack.

Assuming this node has 3 neighbours, the stack will then have 5 items (the current node was already popped).

Because stacks follow the LIFO principle, we will fully explore the most recently pushed node (and its neighbours) before coming back to the other nodes waiting in the stack.
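For completeness, since DFS was introduced as a recursive algorithm, here is a sketch of the recursive variant, in which the call stack plays the role of the explicit stack. The adjacency-list shape matches the Graph class used in this chapter, but the function name and the output vector are illustrative:

```
#include <cassert>
#include <map>
#include <vector>

// Recursive DFS sketch: the call stack replaces the explicit stack,
// and visited_values prevents revisiting nodes on cyclic paths.
void dfs_recursive(const std::map<int, std::vector<int>>& vertices,
                   int node,
                   std::map<int, bool>& visited_values,
                   std::vector<int>& order) {
    if (visited_values[node])
        return;                  // already visited: skip it
    visited_values[node] = true;
    order.push_back(node);       // "visit" the node
    auto it = vertices.find(node);
    if (it == vertices.end())
        return;                  // no outgoing edges
    for (int neighbour : it->second)
        dfs_recursive(vertices, neighbour, visited_values, order);
}
```

Unlike the stack-based version, the recursive one visits the neighbours of a node in the order they appear in the adjacency list, because each recursive call is fully finished before the next neighbour is tried.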

We will now talk about the Union-Find algorithm, which detects whether a graph contains cycles.

The Union-Find algorithm can be used only if the graph does not contain any self-loops; otherwise, the algorithm will not work.

A cycle occurs in this case, because by selecting any node as starting point, we can get back to that node after visiting some nodes.

We will use a data structure to keep track of elements that are split into non-overlapping subsets.

The Find function shall return which subset an element is in, in order to compare with the subset of another element. If the function returns the same value for different elements, it means that they are part of the same subset.

The Union function is used to merge two subsets into a single one.

The subsets will be tracked in an array, usually called parent. Initially, the parent array is initialized with -1, which means that there’s only one item in each subset. When we call Find, we follow the parent chain until we reach -1; we then return the index of the node where we stopped.

Example 1: Node 3 has the parent node -1 => We return 3

Example 2: Node 0 has the parent node 2 => Node 2 has the parent -1 => we return 2

Node | Parent
---|---
0 | -1
1 | -1
2 | -1

Now we process all edges one by one.

Let’s start with edge 0-1.

**Find**

Check subsets:

Subset[0] = -1 => we return 0

Subset[1] = -1 => we return 1

They are in different subsets, so we merge them.

**Merge**

For merging the subsets, we make one node the parent of the other. Either way works, so here are the two possible outcomes.

Node | Parent
---|---
0 | 1
1 | -1
2 | -1

Node | Parent
---|---
0 | -1
1 | 0
2 | -1

Another visualization to understand this concept is as follows:

We start with the following sets:

{0}, {1}, {2}

After this step, the sets will be {0,1}, {2}

Let’s go with the first table (node 1 as parent of 0) and settle on this configuration from now on:

Given an edge (x, y), merging will set y as the parent of x.

Let’s continue with edge 1-2.

**Find**

Check subsets:

Subset[1] = -1 => returns 1

Subset[2] = -1 => returns 2

They are in different subsets, so we merge them.

**Merge**

Before merging, the table looks like this

Node | Parent
---|---
0 | 1
1 | -1
2 | -1

Now, we want to merge the subsets.

Since we said that we use the following formula

(Given edge (x,y) – merging will set the parent of x to y)

=> Parent[1] = 2

Node | Parent
---|---
0 | 1
1 | 2
2 | -1

After this step, the sets will be reduced to a single one: {0, 1, 2}.

Let’s continue with edge 0-2.

Let’s discuss one more time how we go from subset[0] to 2:

Following the parent chain in the table below, we go forward line by line:

- Node 0 has parent 1 => we check the node 1 in the table
- Node 1 has parent 2 => we check the node 2 in the table
- Node 2 has parent -1 => we return 2.

Node | Parent
---|---
0 | 1
1 | 2
2 | -1

**Find**

Check subsets:

Subset[0]: parent[0] = 1, parent[1] = 2, parent[2] = -1 => returns 2

Subset[2] = -1 => returns 2

Find() returns the value 2 for both nodes.

This means that this edge forms a cycle.

```
int Find(int parent[], int idx) {
    if (parent[idx] == -1)
        return idx;
    return Find(parent, parent[idx]);
}

void Union(int parent[], int x, int y) {
    int set_x = Find(parent, x);
    int set_y = Find(parent, y);
    parent[set_x] = set_y;
}
```

**Let’s talk about the Find() function (Line 1)** - It receives the array and an index as parameter.

If the value of the array at that index is -1, we return that index. If not, it means that there’s a parent for that node, and we recursively call the function with the value of the parent, to find the root of that subset.

**Let’s talk about the Union() function (Line 7)** - It receives the array and two values as parameters.

We call the Find function for both parameters, to find the subsets they belong to (Line 8-9). Line 10 merges the subsets by setting one to be the parent of the other.
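Putting Find() and Union() together, cycle detection becomes a loop over the edges: if both endpoints of an edge already resolve to the same subset, that edge closes a cycle. Below is a sketch with a hypothetical has_cycle() helper, exercised on the example edges 0-1, 1-2, 0-2 from the tables above:

```
#include <cassert>
#include <utility>
#include <vector>

int Find(int parent[], int idx) {
    if (parent[idx] == -1)
        return idx;
    return Find(parent, parent[idx]);
}

void Union(int parent[], int x, int y) {
    int set_x = Find(parent, x);
    int set_y = Find(parent, y);
    parent[set_x] = set_y;
}

// Returns true when an edge connects two nodes already in the same subset.
bool has_cycle(int node_count, const std::vector<std::pair<int, int>>& edges) {
    std::vector<int> parent(node_count, -1);  // every node starts alone
    for (const auto& edge : edges) {
        int set_x = Find(parent.data(), edge.first);
        int set_y = Find(parent.data(), edge.second);
        if (set_x == set_y)
            return true;                      // same subset: cycle found
        Union(parent.data(), edge.first, edge.second);
    }
    return false;
}
```

Note that the check happens before the merge: the edge that would merge a subset with itself is exactly the edge that closes a cycle.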

**Binary search** is an algorithm that works only on a sorted set of values, and drastically reduces searching time. It is implemented by selecting the middle element of the data set, and choosing on each iteration to keep only half of it, based on whether a condition was successful or not. The algorithm stops when the element was found or when the remaining set is empty.

- Go to the middle of the vector
- Compare the value at that location with the value we search for
- Take the decision of which side to stick with, based on the comparison result

- If the value in the middle is bigger than the value we search for: Stick with the left half
- If the value in the middle is smaller than the value we search for: Stick with the right half

The vector is sorted, so the values are in increasing order. If the value in the middle is bigger than the one we search for, we know the following:

- The left side contains values smaller than the one we compared with (and possibly our value as well)
- The right side contains only values higher than the one we compared with

**Input Example:**

- a sorted vector of 1000 elements
- a value to search for (let's take 242)

Our value: 242

Current vector: [1,1000]

242 < 1000/2, so we stay with the left half of the vector

Our value: 242

Current vector: [1,500]

242 < 500/2, so we stay with the left half of the vector

Our value: 242

Current vector: [1,250]

242 > 250/2, so we stay with the right half of the vector

We can see that after only 3 steps we are left with 125 values and got rid of 875.

If the vector was not sorted, the values would be randomly distributed, and we would have no way of discarding half of the data, since we would have no information about it.

If the value we need to search for is close to the beginning (let’s say the value 3, for example) – linear search would be faster than binary search.

Binary search drastically improves performance when used on a big (sorted) collection of data.
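The chapter describes the idea only, so here is an assumed implementation of the steps above, returning the index of the value in the sorted vector, or -1 when it is absent:

```
#include <cassert>
#include <vector>

// Binary search sketch over a sorted vector: halve the range each step.
int binary_search_sorted(const std::vector<int>& values, int target) {
    int left = 0;
    int right = static_cast<int>(values.size()) - 1;
    while (left <= right) {
        int middle = left + (right - left) / 2;  // avoids overflow of left + right
        if (values[middle] == target)
            return middle;                       // found it
        if (values[middle] > target)
            right = middle - 1;                  // keep the left half
        else
            left = middle + 1;                   // keep the right half
    }
    return -1;                                   // remaining set is empty
}
```

On a sorted vector of 1000 elements, the loop runs at most about 10 times (2^10 = 1024), which is the "drastic" improvement over the 1000 comparisons a linear scan might need.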

Quicksort is a sorting algorithm which is based on the divide-and-conquer technique.

The algorithm picks an element (the pivot) and partitions the array around it. The values smaller than the pivot will go to the left, whereas those bigger will go to the right.

First, we choose a pivot. It can be the value of the middle element, the value of the last element, or any value in range of sorted values (even if it doesn’t exist in the array). Afterwards, we do the partitioning, which means rearranging the elements based on their value, in relation to the pivot value. The last step is to apply quicksort recursively to the left and right parts.

In the code, left and right are indices that initially point to the first and the last element in the array, respectively.

The algorithm moves left forward, until it finds an element which has a greater value than the pivot. The algorithm moves right backward, until it finds an element which has a smaller value than the pivot. If left is smaller than right, they are swapped, and they continue with the search (left is incremented, right is decremented).

The algorithm stops when left becomes greater than right.

After partitioning is done, all values before left are less than or equal to the pivot, and all values after right are greater than or equal to the pivot. The partitioning thus splits the array in two parts: every element of the left part is smaller than or equal to the pivot, and every element of the right part is bigger than or equal to it.

The following inequality is satisfied: Left <= pivot <= Right.

```
int partition(vector<int>& arr, int left, int right) {
    int pivot = arr[(left + right) / 2];
    while (left <= right) {
        while (arr[left] < pivot) left++;
        while (arr[right] > pivot) right--;
        if (left <= right) {
            swap(arr[left], arr[right]); // swaps the values
            left++;
            right--;
        }
    }
    return left;
}
void quickSort(vector<int>& arr, int left, int right) {
    int index = partition(arr, left, right);

    if (left < index - 1)
        quickSort(arr, left, index - 1);
    if (index < right)
        quickSort(arr, index, right);
}

int main() {
    vector<int> arr = { 1, 12, 5, 26, 7, 14, 3, 7, 2 };
    quickSort(arr, 0, arr.size() - 1);
    for (int i = 0; i < arr.size(); ++i)
        cout << arr[i] << " ";
    return 0;
}
```

The main() function is initially called, at line 23.

We fill a vector with some elements (Line 24) and call quickSort() to sort them. The algorithm starts at line 14, where the function is declared.

Let’s have a look at the parameters sent: the vector we want to sort (by reference), 0 as the left value, and the size of the vector - 1 as the right value (the last element).

We pass size - 1 because we want to use the indexes of the elements, and in C++ array indexes start at 0. So if we have 3 items, the indexes will be {0,1,2} => therefore left is set to 0, and right is set to 2. Right after the function is called, a partition is made (Line 15). That partition returns an index.

So how does this partitioning work? (Line 1)

Line 2 declares a pivot, which is the value in the middle of our array. While left has not gone past right (Line 3):

- If what’s on the left side is smaller than pivot, we go further.
- If what’s on the right side is bigger than pivot, we go further.

Line 6 is tricky, because we already checked that condition. So why do we check it again? Lines 4 and 5 are checking the values and incrementing left / decrementing right, so we could be in the situation in which these two have gone past one another.

If that’s not yet the case, it means:

- The value on the left is bigger than the pivot (since we moved past line 4).
- The value on the right is smaller than the pivot (since we moved past line 5).

Because we want to have all items on the left smaller than pivot, at all times, and all items on the right bigger than pivot, at all times, we swap these two (Line 7) and go past those elements (Line 8-9).

We go back to line 3 and keep checking for any elements that are not placed properly.

So the partition selects the middle value of the array and moves all elements smaller than it to the left, and all elements bigger than it to the right.

Therefore, the index returned by the partition() function is the position for which we know for sure that all the items to the left of it are smaller than the pivot, and all items to the right of it are bigger than the pivot.

So, what are we gonna do with this index?

Line 17-18: If the value of left is smaller than index - 1, we call quickSort again, but this time the right will no longer be the end of the vector, but index - 1.

Line 19-20: If the index is smaller than the value of right, we call quickSort again, but this time the left will no longer be 0, but the index itself.

Strange.

But what actually happens is: we get a value which will be our index, and that will split our vector in two parts. Afterwards, we call the function again, each time on a smaller range. There, we will get yet another index, and we will keep splitting our vector in two parts, until the subranges can no longer be split.

**Merge-sort** is a sorting algorithm which is based on the divide-and-conquer technique.

Merge sort is performed through three steps:

- Divide the array into two subarrays (Divide)
- Sort each array (Conquer)
- Merge the arrays into one

Let’s assume we have the following array of numbers:

**{25, 4, 16, 32, 14, 17, 49, 7}**

First step is to divide it into two parts

**{25, 4, 16, 32}, {14, 17, 49, 7}**

Now, Let’s split them again.

**{25, 4}, {16, 32}, {14, 17}, {49, 7}**

And again.

**{25}, {4}, {16}, {32}, {14}, {17}, {49}, {7}**

Now we merge them again (sorted)

**{4, 25}, {16, 32}, {14, 17}, {7, 49}**

And again

**{4, 16, 25, 32}, {7, 14, 17, 49}**

And one more time

**{4, 7, 14, 16, 17, 25, 32, 49}**

But that’s extra work!

No, it’s not, because merging is done in a smart way.

We have the arrays that we want to merge, so we keep indexes to the front of these arrays, and always pick the smaller of the two front elements, moving it to the sorted array and incrementing the corresponding index.

```
void merge(vector<int>& arr, int left, int median, int right) {
    int idx_left_end = median - left + 1;
    int idx_right_end = right - median;

    vector<int> tmp_left, tmp_right;
    for (int i = 0; i < idx_left_end; ++i) tmp_left.push_back(arr[left + i]);
    for (int i = 0; i < idx_right_end; ++i) tmp_right.push_back(arr[median + 1 + i]);

    int idx_left = 0, idx_right = 0;
    int idx_merged = left;

    while (idx_left < idx_left_end && idx_right < idx_right_end) {
        if (tmp_left[idx_left] <= tmp_right[idx_right]) {
            // merge element from left side, is lower than the one from right
            arr[idx_merged] = tmp_left[idx_left];
            idx_left++;
        }
        else {
            // merge element from right side, is lower than the one from left
            arr[idx_merged] = tmp_right[idx_right];
            idx_right++;
        }
        idx_merged++;
    }

    // copy any remaining items, on each side, if one of them reached the limit
    while (idx_left < idx_left_end) {
        arr[idx_merged] = tmp_left[idx_left];
        idx_left++;
        idx_merged++;
    }

    while (idx_right < idx_right_end) {
        arr[idx_merged] = tmp_right[idx_right];
        idx_right++;
        idx_merged++;
    }
}

void mergeSort(vector<int>& arr, int left, int right) {
    if (left < right) {
        int median = (left + right) / 2;

        mergeSort(arr, left, median);
        mergeSort(arr, median + 1, right);

        merge(arr, left, median, right);
    }
}

int main() {
    vector<int> arr = { 25, 4, 16, 32, 14, 17, 49, 7 };
    mergeSort(arr, 0, arr.size() - 1);
    return 0;
}
```

The main() function is initially called, at line 51.

We fill a vector with some elements (Line 52) and call mergeSort() to sort them. The algorithm starts at line 40, where the function is declared.

Let’s have a look at the parameters sent: the vector we want to sort (by reference), 0 as the left value, and the size of the vector - 1 as the right value.

We pass size - 1 because we want to use the indexes of the elements, and in C++ array indexes start at 0. So if we have 3 items, the indexes will be {0,1,2} => therefore left is set to 0, and right is set to 2.

Let’s see what the function does.

mergeSort() receives the vector and the range to sort, and compares left with right. Since left is 0 and right is the last index, the condition is fulfilled and we move on. We pick the middle of the range, and then call mergeSort() twice:

Once for all the items from the start of the range to the middle.

Once for all the items from just after the middle until the end.

This is done recursively, so this logic is applied continuously, splitting the range in two until left is greater than or equal to right.

Therefore, if we start with an array of 8 elements (indexes 0-7), the median is (0 + 7) / 2 = 3, so we recurse on indexes 0-3 and 4-7. The range 0-3 is split in two (0-1, 2-3), which are again split into ({0}, {1}) and ({2}, {3}).

When the recursion no longer applies, we return to the caller and apply merge(), this time passing left, median, and right.

Let’s now look at the merge() implementation.

idx_left_end and idx_right_end hold the sizes of the temporary arrays that will keep the left and right values.

Assuming the vector has size 8, the top-level merge is called with (left = 0, median = 3, right = 7).

idx_left_end will have the value median - left + 1 = 4. This means we have 4 items on the left side.

idx_right_end will have the value right - median = 7 - 3 = 4. This means we have 4 items on the right side.

The two temporary arrays copy the items from the left half into the left array, and from the right half into the right array. Now it’s clearer why we have the sizes we discussed earlier, and why both indexes start from 0: they are indexes into our temporary arrays.

idx_left and idx_right are the current indexes into our temporary arrays.

The first while loop runs until either the left or the right side runs out of elements to iterate through.

If the item on the left is smaller than or equal to the one on the right, we write it into our initial array and increment the index pointing into the temporary left array.

Otherwise (the item on the right is smaller), we write that one instead, and increment the index pointing into the temporary right array.

Each time we write a value into our initial array, we increment idx_merged so we keep appending at the next position.

The last two while loops copy any remaining items into our initial array.

That is needed because, for example, all the items on the left side may have been smaller than those on the right, so we only consumed the left array; we still need to append what remains on the right side.

Kruskal’s algorithm picks the smallest-weight edge that doesn’t form a cycle with the MST built so far.

Kruskal’s algorithm builds the MST through the following steps:

- Sort all the edges in increasing order, based on their weight
- Pick the smallest edge and check if it forms a cycle with our already built spanning tree, and include it if it doesn’t
- Repeat step 2 until there are V-1 edges in the spanning tree

Let's assume we have some edges and weights associated with them.

First step is to sort based on the weight.

Weight | Source | Destination |
---|---|---|
1 | 7 | 6 |
2 | 8 | 2 |
2 | 6 | 5 |
4 | 0 | 1 |
4 | 2 | 5 |
6 | 8 | 6 |
7 | 2 | 3 |
7 | 7 | 8 |
8 | 0 | 7 |
8 | 1 | 2 |
9 | 3 | 4 |
10 | 5 | 4 |
11 | 1 | 7 |
14 | 3 | 5 |

Now that we have the weights sorted in increasing order, we take them one by one and look for cycles within the MST.

Pick edge 7-6, weight 1.

There is no cycle if we include it -> add to MST

Number of edges: 1

Pick edge 8-2, weight 2.

There is no cycle if we include it -> add to MST

Number of edges: 2

Pick edge 6-5, weight 2.

There is no cycle if we include it -> add to MST

Number of edges: 3

Pick edge 0-1, weight 4

There is no cycle if we include it -> add to MST

Number of edges: 4

Pick edge 2-5, weight 4

There is no cycle if we include it -> add to MST

Number of edges: 5

Pick edge 8-6, weight 6

There is a cycle if we include it -> discard

Pick edge 2-3, weight 7

There is no cycle if we include it -> add to MST

Number of edges: 6

Pick edge 0-7, weight 8

There is no cycle if we include it -> add to MST

Number of edges: 7

Pick edge 1-2, weight 8

There is a cycle if we include it -> discard

Pick edge 3-4, weight 9

There is no cycle if we include it -> add to MST

Number of edges: 8

After this last step, we have an MST with 8 edges (V - 1, for our 9 vertices), which is what we wanted, so the algorithm stops here. Its total weight is 1 + 2 + 2 + 4 + 4 + 7 + 8 + 9 = 37.

```
struct Edge {
    int from, to, cost;
};

class Graph {
public:
    using GraphMap = vector<Edge>;
    void addEdge(int from, int to, int cost) {
        edges.push_back({ from, to, cost });
    }
    const GraphMap& getEdges() const { return edges; }
    int kruskalMST();
private:
    GraphMap edges;
};

bool sortingFunction(const Edge& left, const Edge& right) {
    return left.cost < right.cost;
}

int Graph::kruskalMST() {
    int result = 0;
    sort(edges.begin(), edges.end(), sortingFunction);
    UnionFindSets ufs(edges.size());
    for (const Edge& edge : edges) {
        int set_from = ufs.find(edge.from);
        int set_to = ufs.find(edge.to);
        // Check for a cycle (both endpoints already in the same set)
        if (set_from != set_to) {
            result += edge.cost;
            ufs.merge(set_from, set_to);
        }
    }
    return result;
}

int main() {
    Graph graph;
    graph.addEdge(0, 1, 4);
    graph.addEdge(0, 7, 8);
    graph.addEdge(1, 2, 8);
    graph.addEdge(1, 7, 11);
    graph.addEdge(2, 3, 7);
    graph.addEdge(2, 8, 2);
    graph.addEdge(2, 5, 4);
    graph.addEdge(3, 4, 9);
    graph.addEdge(3, 5, 14);
    graph.addEdge(4, 5, 10);
    graph.addEdge(5, 6, 2);
    graph.addEdge(6, 7, 1);
    graph.addEdge(6, 8, 6);
    graph.addEdge(7, 8, 7);
    std::cout << "Weight of MST is " << graph.kruskalMST();
    return 0;
}
```

Execution starts in main(), where a graph is created and edges (connections from one vertex to another) are added to it.

The GraphMap alias makes a GraphMap a vector of edges, so each entry carries the source vertex, the destination vertex, and the cost of the edge.

main() then calls kruskalMST(), whose implementation sits above it.

We initialize the result with 0; it will accumulate the cost of our MST.

sort() orders the edges by a specific criterion (their cost), so after sorting, the vector contains the edges ordered by the cost to traverse them.

Next comes the UnionFindSets structure, whose implementation was explained before, in the Cycle detection chapter.

From there on, we iterate through the edges one by one, compare the sets of the source and destination of each edge, and if they are not part of the same set (in other words, if adding the edge does not form a cycle), we merge the two sets and add the edge’s cost to our final result.

Prim’s algorithm is a greedy algorithm that finds a minimum spanning tree within a graph (a subset of the graph’s edges that includes every vertex, in which the total weight of the edges is minimized).

The algorithm relies on keeping two sets of vertices: one holding the vertices already included in the MST, the other holding the remaining vertices.

At each step, the algorithm considers all the edges that connect the two sets, picks the minimum-weight edge, and adds the edge’s other endpoint to the first set (the MST).

We have 9 vertices and 14 edges, so the MST must have 9 - 1 = 8 edges.

The first step is to pick the vertex with the minimum key value that is not yet included in the MST. After the vertex is selected, we update the key values of its neighbors.

Step 1. Select vertex

Selected vertex 0

Set: 0

Step 2. Update key values

Vertex 1 = value 4

Vertex 7 = value 8

Step 1. Select vertex

Options:

Vertex 1 : value 4

Vertex 7 : value 8

Therefore, we pick vertex 1 and add it into the set

Set: 0, 1

Step 2. Update key values

Vertex 2 = value 8

Step 1. Select vertex

Options:

Vertex 2: value 8

Vertex 7: value 8

We can pick either of the two; let’s go with 7

Set: 0, 1, 7

Step 2. Update key values

Vertex 6 = value 1

Vertex 8 = value 7

Step 1. Select vertex

Options:

Vertex 6: Value 1

Vertex 8: Value 7

We pick vertex 6 and add it into the set

Set: 0, 1, 7, 6

Step 2. Update key values

Vertex 5: value 2

Vertex 8: value 6

We repeat the steps until the set includes all vertices of the graph.

Assuming we have the following classes already defined:

```
struct Edge {
    int to;
    int cost;
    bool operator >(const Edge& other) const {
        return cost > other.cost || (cost == other.cost && to > other.to);
    }
};

class Graph {
public:
    using GraphMap = map<int, vector<Edge>>;
    void addEdge(int from, int to, int cost) {
        vertices[from].push_back({ to, cost });
        vertices[to].push_back({ from, cost });
    }
    const GraphMap& getVertices() const { return vertices; }
    void primMST();
private:
    GraphMap vertices;
};

int main() {
    Graph graph;
    graph.addEdge(0, 1, 4);
    graph.addEdge(0, 7, 8);
    graph.addEdge(1, 2, 8);
    graph.addEdge(1, 7, 11);
    graph.addEdge(2, 3, 7);
    graph.addEdge(2, 8, 2);
    graph.addEdge(2, 5, 4);
    graph.addEdge(3, 4, 9);
    graph.addEdge(3, 5, 14);
    graph.addEdge(4, 5, 10);
    graph.addEdge(5, 6, 2);
    graph.addEdge(6, 7, 1);
    graph.addEdge(6, 8, 6);
    graph.addEdge(7, 8, 7);
    graph.primMST();
    return 0;
}
```

The classes above provide the graph representation; the algorithm itself is implemented below.

```
void Graph::primMST() {
    priority_queue<Edge, vector<Edge>, std::greater<Edge>> pq;
    int src = 0;
    int V_MAX = getVertices().size();
    vector<int> key(V_MAX, numeric_limits<int>::max());
    vector<int> parent(V_MAX, -1);
    vector<bool> visited(V_MAX, false);
    pq.push({ src, 0 });
    key[src] = 0;
    while (!pq.empty()) {
        int node = pq.top().to;
        pq.pop();
        visited[node] = true;
        for (auto& edge : getVertices().at(node)) {
            int vertex = edge.to;
            int weight = edge.cost;
            if (visited[vertex] == false && key[vertex] > weight) {
                pq.push({ vertex, weight });
                parent[vertex] = node;
                key[vertex] = weight;
            }
        }
    }
    cout << "From - To" << std::endl;
    for (int i = 1; i < V_MAX; ++i)
        cout << parent[i] << " - " << i << std::endl;
}
```

In main(), we create a graph and add a few edges to it.

If we check the implementation of addEdge, we see that we append to the adjacency lists of both the source and the destination vertex, so we are working with an undirected graph.

After the edges are added, we call primMST().

The implementation uses a priority queue, which is nothing more than a binary heap; depending on the comparison function, it can act as either a min-heap or a max-heap.

Our priority queue uses a reversed comparison operator (std::greater), so the edges are ordered by increasing cost: the smallest cost sits on top of the queue.

The source is declared as 0, and V_MAX is a constant storing the number of vertices we have.

Afterwards, we create three vectors with the same size as our number of vertices:

- A vector of keys, with the default value set to the maximum int value
- A vector of parents, with default value -1 (no parent)
- A vector of booleans, keeping track of whether each vertex was already visited (default is false)

We start by pushing the source node into the priority queue with a cost of 0 (the cost is not really relevant here: since it’s the only item in the queue, it will be picked anyway).

We then set the key value for the source to 0 (there is no cost from this node to itself).

Until the priority queue runs out of nodes:

- We pop the next item from the queue
- We mark it as visited
- For each of its direct connections:
- We get the node it’s connected to, and the cost required to reach it
- If that node was not visited yet, and the cost is smaller than its current key:
- We push the node into the queue, with its priority based on the cost
- We record the current node as its parent
- We set its key (which stores the minimum cost to reach that node) to the new weight, since the previous value was higher than what we have now

At the end of the algorithm:

- the parent vector describes the minimum spanning tree
- the key vector contains, for each vertex, the weight of the edge that connects it to the tree (not the total distance from the source; that is what Dijkstra’s algorithm computes)

That is because the entire loop works with the vertex variable, which is the destination node of each edge:

- the comparison checks the key of the destination
- the push adds that node along with its cost
- parent[] records the parent of that node
- key[] records the weight of that node

Dijkstra’s algorithm for calculating the shortest path is very similar to Prim’s algorithm.

Similar to Prim’s MST, we generate a shortest path tree (SPT) with given source as root, and we maintain two sets (one for vertices included in SPT, and another for the remaining vertices).

For each step of the algorithm, we find the vertex which is not included and has a minimum distance from the source.

Step 1. Select vertex

Selected vertex 0

Set: 0

Step 2. Update distances

Vertex 1 = distance 4

Vertex 7 = distance 8

Step 1. Select vertex

Options:

Vertex 1: distance 4

Vertex 7: distance 8

We pick vertex 1

Set: 0, 1

Step 2. Update distances of adjacent vertices of 1

Vertex 2 = distance 12 (8+4)

Step 1. Select vertex

Options:

Vertex 2: distance 12

Vertex 7: distance 8

We pick vertex 7

Set: 0, 1, 7

Step 2. Update distances of adjacent vertices of 7

Vertex 8 = distance 15 (8+7)

Vertex 6 = distance 9 (8+1)

Step 1. Select vertex

Options:

Vertex 2: distance 12

Vertex 8: distance 15

Vertex 6: distance 9

We pick vertex 6

Set: 0, 1, 7, 6

Step 2. Update distances of adjacent vertices of 6

Vertex 8 = distance 15 (6+9)

Vertex 5 = distance 11 (2+9)

We repeat the steps until the set includes all vertices of the graph.

This is what the distances from the source will look like:

Vertex | Distance from source |
---|---|
0 | 0 |
1 | 4 |
2 | 12 |
3 | 19 |
4 | 21 |
5 | 11 |
6 | 9 |
7 | 8 |
8 | 14 |

If we are interested only in the shortest distance from the source to a specific target, we can stop the algorithm once we have calculated the minimum distance for that target vertex.

```
void display(const map<int, int>& paths) {
    for (const auto& path : paths)
        cout << path.first << " - " << path.second << std::endl;
}

struct Edge {
    int to;
    int cost;
    bool operator >(const Edge& other) const { return cost > other.cost; }
};

class Graph {
public:
    using GraphMap = map<int, vector<Edge>>;
    void addEdge(int from, int to, int cost) {
        vertices[from].push_back({ to, cost });
        vertices[to].push_back({ from, cost });
    }
    const GraphMap& getVertices() const { return vertices; }
private:
    GraphMap vertices;
};

// implemented below
map<int, int> shortestPath(const Graph::GraphMap& vertices, int src);

int main() {
    Graph graph;
    graph.addEdge(0, 1, 4);
    graph.addEdge(0, 7, 8);
    graph.addEdge(1, 2, 8);
    graph.addEdge(1, 7, 11);
    graph.addEdge(2, 3, 7);
    graph.addEdge(2, 8, 2);
    graph.addEdge(2, 5, 4);
    graph.addEdge(3, 4, 9);
    graph.addEdge(3, 5, 14);
    graph.addEdge(4, 5, 10);
    graph.addEdge(5, 6, 2);
    graph.addEdge(6, 7, 1);
    graph.addEdge(6, 8, 6);
    graph.addEdge(7, 8, 7);
    map<int, int> paths = shortestPath(graph.getVertices(), 0);
    display(paths);
    return 0;
}
```

Execution starts in main(), where we create a graph and add a few edges to it.

If we check the implementation of addEdge, we see that we append to the adjacency lists of both the source and the destination vertex, so we are working with an undirected graph. After the edges are added, we call the algorithm, and its implementation can be seen below.

```
map<int, int> shortestPath(const Graph::GraphMap& vertices, int src) {
    priority_queue<Edge, vector<Edge>, std::greater<Edge>> pq;
    map<int, int> dist;
    pq.push({ src, 0 });
    dist[src] = 0;
    while (!pq.empty()) {
        int u = pq.top().to;
        pq.pop();
        for (const Edge& x : vertices.at(u)) {
            const int dest = x.to;
            const int cost = x.cost;
            if (dist.find(dest) == dist.end() || dist.at(dest) > dist[u] + cost) {
                dist[dest] = dist[u] + cost;
                pq.push({ dest, dist[dest] });
            }
        }
    }
    return dist;
}
```

What we observe right from the start are the inputs and outputs of the function: we receive a map of vertices and a starting point, and we return a map of integers (destination node, distance).

We again make use of a priority_queue with std::greater as the comparison, so the edges are ordered by increasing weight. We then add the source to the queue and set the distance from the source to itself to 0.

Until the queue is empty:

- We get the next node from the queue; initially that is the source, node 0, with weight 0
- For all the nodes directly connected to this node:
- We retrieve the destination node and the cost of the edge
- If we haven’t calculated a distance for it yet, or if the currently stored distance from the source to this node is bigger than the distance to our current node plus the cost of the edge (in other words, if reaching this vertex through the current node is cheaper than anything we have calculated so far):
- We set its distance to be the distance to the current node plus the cost of the edge
- We push the node into the queue to be processed further

The algorithm isn’t very hard, but we need to understand a few things:

- The check for a missing distance is implementation specific. We could have avoided it by initializing a distance vector with the highest possible value, so the comparison with the new distance would always succeed the first time.
- The dist map contains the distance from the initial source node to all the other nodes.

In the end, the dist map is what we actually return to the caller.

Knowing that the source node was 0, dist[0] will be 0, dist[1] will be the smallest cost to reach vertex 1 from vertex 0, and so on.

A thief is robbing a house and can carry a maximum weight W in his knapsack.

There are N available items, each with its own weight and the profit gained by selecting it. In the fractional knapsack problem, we can break items apart to maximize the total value of the knapsack. Which items should the thief take?

An efficient solution is a greedy one. The basic idea is to calculate the value/weight ratio for each item and sort the items by that ratio in decreasing order. Afterwards, we greedily take the items with the highest ratio.

Let’s assume the knapsack weight is 50.

Let’s define the following items (value / weight):

- Item 1 (value = 60, weight = 10)
- Item 2 (value = 100, weight = 20)
- Item 3 (value = 120, weight = 30)

Let’s create the value/weight ratio.

- Item 1 = 60 value, 10 weight => 60/10 = 6 value per weight
- Item 2 = 100 value, 20 weight => 100/20 = 5 value per weight
- Item 3 = 120 value, 30 weight => 120/30 = 4 value per weight

Let’s sort based on ratio -> [Item1, Item2, Item3]

Let’s find out the maximum possible values:

**We take item 1 (biggest ratio).**

Total value = 60

Weight = 50 - 10 = 40

**We take item 2.**

Total value = 60 + 100 = 160

Weight = 40 - 20 = 20

**We don’t have enough weight left to take item 3 whole, but we can take 2/3 of it.**

Total value = 160 + 2/3 * 120 = 160 + 80 = 240

Weight = 20 - 2/3 * 30 = 20 - 20 = 0

**Why is 240 the highest possible value?**

We have only 3 items, of value 60, 100, and 120.

We cannot take all 3 of them, as that would add up to a weight of 60, while the knapsack can hold only 50.

Therefore, we fill the knapsack completely by breaking the last item, the one that would otherwise overflow it.

```
struct Item {
    double value, weight;
    double ratio() const { return value / weight; }
    bool operator <(const Item& other) const { return ratio() > other.ratio(); }
    bool operator ==(const Item& other) const {
        return weight == other.weight && value == other.value;
    }
};

double fractionalKnapsack(int maximumWeight, vector<Item>& items) {
    sort(begin(items), end(items));
    double currentWeight = 0;
    double currentValue = 0.0;
    for (const auto& item : items) {
        // No overflow? Take the item as a whole
        if (currentWeight + item.weight <= maximumWeight) {
            currentWeight += item.weight;
            currentValue += item.value;
        }
        else { // Overflow? Take only as much as still fits in the knapsack
            double remainingWeight = maximumWeight - currentWeight;
            currentValue += item.value * (remainingWeight / item.weight);
            break;
        }
    }
    return currentValue;
}

int main() {
    int weight = 50;
    vector<Item> items = { { 60, 10 }, { 100, 20 }, { 120, 30 } };
    cout << fractionalKnapsack(weight, items);
    return 0;
}
```

Execution starts in main(), where we define the total weight our knapsack can carry and a vector of items. An item is defined as a pair of doubles (value and weight). The ratio is also important, as we use it to assess how valuable an item is.

main() then calls fractionalKnapsack().

The items are first sorted, based on the comparison operator, or to be more specific, based on their ratio. Because the comparison operator orders items by decreasing ratio, the most valuable items come first.

We initialize two variables to keep track of the weight and value we currently possess. Then we iterate through the vector and check whether we can steal the entire item. If so, we add its weight to our weight counter and its value to our value counter.

We repeat this for every item.

At some point, the knapsack will be almost full, so we will no longer be able to add an entire item. Since the items are sorted by how valuable they are, the current item is more valuable than the remaining ones, so we check the remaining capacity of the bag and steal only part of that item. The value counter is therefore increased by only a fraction of the item’s value, proportional to how much of it got stolen (if we steal half of an item of value 10, we increase our value by 5). Because we filled the knapsack with the last portion that could fit, it is now full, and we break out of the loop.

Finally, we return the value counter, which is the maximum value we managed to steal with the current capacity.

Vertex: Node containing data

Edge: The line that connects two vertices

Cycle: Cycles occur when nodes are connected in a way that forms a closed structure.

Self loop: An edge that connects a vertex to itself (source = destination).

Negative weight edge: An edge with a negative cost.