# You Need to Know Edge Relaxation for Shortest Paths Problem

Understanding relaxation with Python

*Yasufumi Taniguchi, Jun 17*

The purpose of the shortest paths problem is to find the shortest path from the starting vertex to the goal vertex.

We use algorithms for the shortest paths problem everywhere, from competitive programming to Google Maps directions search.

By understanding the key notion of "edge relaxation", it becomes much easier to understand the concrete algorithms, say Dijkstra's algorithm or the Bellman-Ford algorithm.

In other words, it might be difficult to make these algorithms your own without understanding edge relaxation.

In this post, I focus on edge relaxation and explain the general structure to solve the shortest paths problem.

Also, we’ll go through the easy algorithm and its implementation for better understanding.

I use Python for the implementation.

This post is structured as follows:

1. What is the shortest paths problem?
2. What is edge relaxation?
3. The order of the relaxation
4. The shortest path on DAG and its implementation

## 1. What is the shortest paths problem?

In this post, I explain the single-source shortest paths problem, in which we need to find the shortest paths from one starting vertex to all other vertices.

I define the shortest path as the path with the smallest total weight from the starting vertex to the goal vertex among all paths in a weighted graph.

Here, you can think of the "weight" of a path as the cost of reaching the goal vertex (or some intermediate vertex).

From here onward, when I simply say "graph", I mean a weighted graph.

In the graph below, let’s think about the shortest paths from the starting vertex S to the other vertices A and B.

The shortest path from the vertex S to the vertex A becomes “S→A”.

In the graph above, there is only one path from the vertex S to the vertex A, so we don't need to care about the weights.

On the other hand, there are two paths from the vertex S to the vertex B, "S→B" and "S→A→B", and the shortest path is "S→A→B".

In "S→B", the weight of the path is 3, but in "S→A→B", the weight of the path is 1+1=2, so it is the shorter one.

We can think of the weight of the shortest path as the shortest distance from the starting vertex to that vertex.
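To make the comparison concrete, here is a tiny sketch in Python. The edge weights are taken from the example above (w(S,A)=1, w(A,B)=1, w(S,B)=3), and `path_weight` is a hypothetical helper of my own:

```python
# Edge weights from the example graph in this section.
weights = {('S', 'A'): 1, ('S', 'B'): 3, ('A', 'B'): 1}

def path_weight(path, weights):
    """Sum the weights of the consecutive edges along a path."""
    return sum(weights[u, v] for u, v in zip(path, path[1:]))

print(path_weight(['S', 'B'], weights))       # 3
print(path_weight(['S', 'A', 'B'], weights))  # 2, the shorter one
```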

## 2. What is edge relaxation?

Here, I'll explain edge relaxation, an important and commonly used notion for solving the shortest paths problem.

Generally, you can solve any shortest paths problem by using edge relaxation.

Edge relaxation is the operation that lowers the cost of reaching a vertex.

More concretely, the operation is:

> For the edge from the vertex u to the vertex v, if d[u]+w(u,v) < d[v] is satisfied, update d[v] to d[u]+w(u,v).

The vertices u and v are neighbors in the graph, and d[u] and d[v] are the costs of reaching the vertices u and v respectively.

Also, w(u,v) is the weight of the edge from the vertex u to the vertex v.

To summarize things up to now, we can make this figure below.

Now we know we can reach the vertex u from the starting vertex S through two vertices and that path costs d[u].

Also, we can reach the vertex v from the starting vertex S through four vertices and that path costs d[v].

Here, edge relaxation updates d[v] to d[u]+w(u,v) when d[u]+w(u,v) is less than d[v].

In other words, it updates the current reaching cost to the vertex v (d[v]) to the lower reaching cost (d[u]+w(u,v)).
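As a minimal sketch (the function name and the dict-based representation are my own, not from any library), the operation takes only a few lines of Python:

```python
def relax(u, v, w, d):
    """Relax the edge u->v: lower d[v] if the path through u is cheaper.

    d maps each vertex to its current reaching cost;
    w maps each edge (u, v) to its weight.
    """
    if d[u] + w[u, v] < d[v]:
        d[v] = d[u] + w[u, v]

d = {'u': 2, 'v': 9}
relax('u', 'v', {('u', 'v'): 3}, d)
print(d['v'])  # 5, lowered from 9
```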

The reason for the update is that the path through the vertex u is shorter: its reaching cost is lower than that of the current path.

Actually, the algorithms for the shortest paths problem solve the problem by repeatedly using the edge relaxation.

I’ll show the example that we can solve the shortest paths problem by repeatedly using the edge relaxation.

Let’s find the shortest paths for the same graph as before by the edge relaxation.

I assume the starting vertex S and apply the edge relaxation to the graph to obtain the shortest paths to the vertices A and B.

To apply the edge relaxation, we need to know the reaching cost, but there is no way to know it before searching, so we initialize the reaching costs for the vertices A and B as infinity (∞).

An infinite cost for a vertex means that we don't yet know any path reaching that vertex.

On the other hand, the cost of reaching the starting vertex from itself is zero, so we set the d value of the vertex S to 0.

We choose and relax the edges out-going from the vertex S first, then the edge out-going from the vertex A.

First of all, we apply the edge relaxation to the edge SA.

The edge SA satisfies d[S]+w(S, A)<d[A], so we set d[A] to d[S]+w(S, A)=1.

Here, Π[A] stands for the vertex visited just before the vertex A on the path with reaching cost d[A].

In this case, Π[A] is S.

Π[A]=S means the path of the reaching cost d[A] always uses the sub-path, A←S.

The details will be described below, but we can reconstruct the path by using Π.

We’re relaxing the edge SB in the same way.

The edge SB satisfies d[S]+w(S, B)<d[B], so we set d[B] as d[S]+w(S, B)=3.

For the d[B] path reconstruction, we set Π[B] as S.

Subsequently, we relax the edge AB out-going from the vertex A.

The edge AB satisfies d[A]+w(A, B)<d[B], so we set d[B] as d[A]+w(A, B)=2.

Once we update the d value of B, we also update Π[B] as A.

From the figure above, you can see that the shortest distances from the vertex S to the vertices A and B equal the reaching costs d.

We cannot lower the d values any further, so we finish the edge relaxation.
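The three relaxations above can be replayed in code. This is only a sketch: the weights come from the example graph (w(S,A)=1, w(S,B)=3, w(A,B)=1), and `relax` is a hypothetical helper that also records Π (here `pi`):

```python
INF = float('inf')
weights = {('S', 'A'): 1, ('S', 'B'): 3, ('A', 'B'): 1}

d = {'S': 0, 'A': INF, 'B': INF}   # reaching costs, initialized
pi = {'S': None}                   # predecessor of each vertex

def relax(u, v):
    if d[u] + weights[u, v] < d[v]:
        d[v] = d[u] + weights[u, v]
        pi[v] = u

relax('S', 'A')  # d[A] = 1, pi[A] = S
relax('S', 'B')  # d[B] = 3, pi[B] = S
relax('A', 'B')  # d[B] = 2, pi[B] = A
print(d)   # {'S': 0, 'A': 1, 'B': 2}
print(pi)  # {'S': None, 'A': 'S', 'B': 'A'}
```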

Here, let’s confirm that we can reconstruct the shortest path by using Π[A]=S and Π[B]=A.

In the graph above, we’re going to reconstruct the shortest path from the vertex S to vertex B as an example.

From Π[B]=A, we know we should visit the vertex A just before reaching the vertex B on the path to B.

From Π[A]=S, we know we should visit the vertex S first and then reach the vertex A.

The vertex S is the starting vertex, so we cannot traverse backward anymore.

By reversing the vertices we obtained up to now, we can get the shortest path from the vertex S to vertex B, “S→A→B”.

Generally, we can reconstruct the shortest path for the vertex v by back-tracking Π[v], Π[Π[v]], Π[Π[Π[v]]], … and reversing the obtained vertices.
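This back-tracking can be written as a short helper; `reconstruct_path` is a name of my own choosing:

```python
def reconstruct_path(pi, v):
    """Follow pi from v back to the start, then reverse."""
    path = []
    while v is not None:
        path.append(v)
        v = pi[v]
    return path[::-1]

pi = {'S': None, 'A': 'S', 'B': 'A'}  # Pi values from the example
print(reconstruct_path(pi, 'B'))  # ['S', 'A', 'B']
```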

## 3. The order of the relaxation

In the previous section, we didn't care about the order in which we relax edges. But how should we decide the order? Or do we really need to care about it? It might seem that we can obtain the shortest paths by relaxing edges in a random order.

This is not correct, though.

Here, I’ll explain the reason why we should care about the order and how to choose the edge to relax.

To tell the truth, a really bad case for computational efficiency exists if you choose and relax edges randomly.

For example, let’s think about the following graph.

I assume the d values in this graph are already initialized.

First, let’s relax the straight line edges from left to right.

Next, we relax the edge EG.

Then, we relax the edge CE.

Here, you might find that we can relax the edge EG again.

Also, when we relax the edge AC, we can relax the edges CE and EG again.

So if we don’t pay attention to the order, we would relax the same edge again and again.

In the example above, we can relax the edges efficiently if we relax these from left to right.

However, some graphs are too dense to visualize like the example above.

So it isn't realistic to find an efficient order by inspection in advance.

This is the reason why we should care about the relaxing order.

Well, how should we choose and relax the edges?

In fact, the shortest paths algorithms like Dijkstra’s algorithm or Bellman-Ford algorithm give us a relaxing order.

This means that every shortest paths algorithm basically repeats the edge relaxation and designs the relaxing order depending on the graph's nature (positive or negative weights, DAG, etc.).

In other words, we should decide how to choose and relax the edges by observing the graph's nature.

In summary, every shortest paths algorithm has the general structure below:

1. Initialize d and Π in the graph
2. Choose an edge somehow (it depends on the algorithm) and relax it
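As a sketch only, a naive instance of this structure keeps relaxing every edge until no d value changes; with a fixed per-pass order this is essentially the Bellman-Ford algorithm, and it terminates on graphs without negative cycles:

```python
INF = float('inf')

def shortest_paths_naive(graph, weights, s):
    # 1. Initialize d and pi in the graph
    d = {v: INF for v in graph}
    d[s] = 0
    pi = {s: None}
    # 2. Choose edges (here: all of them, over and over) and relax
    updated = True
    while updated:
        updated = False
        for u in graph:
            for v in graph[u]:
                if d[u] + weights[u, v] < d[v]:
                    d[v] = d[u] + weights[u, v]
                    pi[v] = u
                    updated = True
    return d, pi

# The small graph from section 1: S -> A -> B and S -> B.
d, pi = shortest_paths_naive({'S': ['A', 'B'], 'A': ['B'], 'B': []},
                             {('S', 'A'): 1, ('S', 'B'): 3, ('A', 'B'): 1},
                             'S')
print(d)  # {'S': 0, 'A': 1, 'B': 2}
```

Concrete algorithms improve on this by choosing a smarter order, so each edge is relaxed as few times as possible.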

## 4. The shortest path on DAG and its implementation

In the previous section, I said that we should choose the way to relax edges by observing the graph's nature.

Here, I’ll explain the simple and easy shortest paths algorithm for DAG (Directed acyclic graph) with Python implementation.

A DAG is a directed graph that has no cycles.

In this section, I'll explain the algorithm assuming you already know about topological order. If you are not familiar with it, check my article: "Understanding the Depth-First Search and the Topological Sort with Python".

In the shortest paths algorithm on a DAG, we can obtain the shortest paths by choosing and relaxing the out-going edges in topological order.

The concrete algorithm is as follows:

1. Initialize the d value of the starting vertex as 0 and the other vertices as ∞
2. Relax the out-going edges in topological order

Let's see how the algorithm works.

The figure below shows the graph with the d values initialized and the vertices topologically sorted.

I assume the starting vertex is B.

Let’s try to solve the shortest paths problem.

The vertices are topologically sorted, so we just relax the out-going edges from left to right.

We cannot relax the out-going edges from the leftmost vertex A, so we don't update d.

Next, we relax the out-going edges from vertex B, which are BC and BD.

Once we relax the edges, we update the Π.

We set Π[C] as B and Π[D] as B.

Then, we relax the out-going edges from the vertex C.

We cannot relax the out-going edge to vertex D, so we only update d[E] and d[F].

Also, we update Π[E] as C, Π[F] as C.

Next, we relax the out-going edges from the vertex D.

We only update d[E], setting Π[E] to D.

Then, we relax the out-going edge from the vertex E.

We update d[F] and set Π[F] to E.

There is no out-going edge from the vertex F.

We finish the edge relaxation.

Finally, we obtain the shortest distances as follows. We don't verify it here, but we can reconstruct the shortest paths from Π.

Subsequently, let’s implement the shortest paths algorithm on DAG in Python for better understanding.

The implementation is below. This code solves the shortest paths problem on the graph used in the explanation above.

It computes d and Π to solve the problem.

I assume we’ve already known the topological order beforehand.

First, let’s look at the way how the graph and its weights are expressed.

In the code above, the graph is implemented as follows:graph = {'A': ['B', 'C'], 'B': ['C', 'D'], 'C': ['D', 'E', 'F'], 'D': ['E', 'F'], 'E': ['F'], 'F': []}weights = {('A', 'B'): 5, ('A', 'C'): 2, ('B', 'C'): 2, ('B', 'D'): 6, ('C', 'D'): 7, ('C', 'E'): 4, ('C', 'F'): 2, ('D', 'E'): -1, ('D', 'F'): 1, ('E', 'F'): -2}The corresponding figure for the graph is below:For example, when we look at the vertex C, graph[‘C’] returns [‘D’, ‘E’, ‘F’] which are the reachable neighbors from the vertex C.

So these vertices construct the out-going edges from the vertex C.

Also, you can see that weights[u, v] corresponds to the weight of the edge uv.

Next, let’s look at each line’s role of dag_shortest_path to obtain the shortest paths.

The lines from 2 to 4 are the initialization: set the d of the starting vertex to 0 and the other vertices to ∞.

Also, these lines initialize the Π used to reconstruct the path.

```python
d = {v: INF for v in graph}
d[s] = 0
pi = {s: None}
```

The lines from 9 to 12 correspond to the edge relaxation.

d_temp < d[v] in the code corresponds to the d[u]+w(u,v)<d[v] in the condition of the edge relaxation.

When this condition is satisfied, it updates the d[v].

```python
d_temp = d[u] + weights[u, v]
if d_temp < d[v]:
    d[v] = d_temp
    pi[v] = u
```

The code repeats this process for the out-going edges from each vertex in the topological order.

This repeating process is worked out by the two for-loops.

The torder holds the vertices in the topological order.

Also, graph[u] returns the vertices that form the out-going edges from the vertex u.

So we obtain each edge uv by taking the vertex from torder as u and each vertex from graph[u] as v.

```python
for u in torder:
    for v in graph[u]:
        # relax(u, v)
```

That's all for the code explanation.

We don’t check the result, but you can execute the code by the following command in your terminal: You’ll find out that you can get the d and Π correctly.

curl -s https://gist.

githubusercontent.

com/yasufumy/e6477c836baa85735f6019bc0b0c1460/raw/ee4885e5d21f009ee490038525887d8fcf80f8d8/dag_shortest_path.

py | python3Finally, let’s think about the time complexity of this algorithm.

In this algorithm, there are two main computation parts.

One is for the topological sorting.

The other is for edge relaxation.

In the code above, we don’t do the topological sort, but actually, we need to do it.

So we should take it into account.

We can do the topological sort by the depth-first search, so the time complexity is O(|V|+|E|).

Only the number of loop iterations affects the time complexity of the edge relaxation, because the work inside the loops runs in constant time.

The loop over torder runs |V| times, and the inner loops over graph[u] run |E| times in total.

So the time complexity of the edge relaxation is O(|V|+|E|).

In summary, the entire time complexity of the algorithm is O(|V|+|E|).

In this post, I focused on edge relaxation and explained the shortest paths problem and its algorithms.

When you understand edge relaxation, you can easily understand Dijkstra's algorithm or the Bellman-Ford algorithm.

Also, you can see the differences between these algorithms.

Thank you for reading my article.

## References

- MIT OpenCourseWare 6.006 Lecture 15: Single-Source Shortest Paths Problem
- MIT OpenCourseWare 6.006 Recitation 15: Shortest Paths