It had been a week or two, and I was thinking it had been too long since I last mentioned Dijkstra's algorithm. Have I pointed out that it's my favorite? It's awesome.

I'm building on my previous work using Dijkstra's algorithm for automated planning. In case you don't want to read that whole thing, I'll summarize: Dijkstra's algorithm is used to find the shortest path in a graph. You can, however, convert that 'shortest distance' problem to 'maximum probability' with minimal modifications. Then, you can make a statistical model of state transitions and use the modified algorithm to find a most-likely set of transitions that lead to the desired state.
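To make that summary concrete, here's a minimal sketch of the conversion: maximizing a product of transition probabilities is the same as minimizing the sum of their negative logarithms, so the shortest-path machinery applies almost unchanged. The graph and probabilities below are invented for illustration.

```python
# Sketch: 'maximum probability' as 'shortest distance' via -log weights.
# Hypothetical graph: {node: [(neighbor, transition probability), ...]}
import heapq
import math

def most_likely_path(graph, start, goal):
    # dist holds -log(probability); smaller distance = higher probability.
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, math.inf):
            continue  # stale queue entry
        for nbr, p in graph.get(node, []):
            nd = d - math.log(p)
            if nd < dist.get(nbr, math.inf):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Walk predecessors back to the start.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], math.exp(-dist[goal])

graph = {'a': [('b', 0.9), ('c', 0.5)],
         'b': [('d', 0.5)],
         'c': [('d', 0.8)]}
path, prob = most_likely_path(graph, 'a', 'd')
```

Because log turns products into sums and is monotonic, the path minimizing the summed −log weights is exactly the path maximizing the probability product (here a→b→d, with probability 0.9 · 0.5 = 0.45).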

I tend to use graph theory as my primary theoretical tool when it comes to algorithms, so I've gotten to a place where I tend to view a graph as a highly flexible concept. In the above, I represent states and transitions between them as a graph, not unlike a flow chart. Something I've been after for a little while, to get some additional mileage out of the graph-as-model abstraction, is integration of time-varying information into the model. It's a little tricky, because you can't just update outgoing links as you progress- you might end up with a segment connected by mis-ordered times. Which, unless you're a time traveler, is problematic.

However, in thinking about this dynamic update type of incorporation procedure, I stumbled across a resolution. While I was driving, actually. I end up solving a lot of conceptual problems while driving. I'm thinking I should just drive around more. In any case, that issue of potentially jumbling times led me to an appropriate constraint- you can only move up from the current time step to the next one (naturally, I'm assuming discretized time steps here- I'll talk about more flexible structures in a bit).

What that means is you should only have links between an individual state at a given time and the states to which it can transition at the next time step (the links will be directed, naturally: t1 -> t2 -> t3). The exciting part was when I realized I could create a set of nodes which corresponded to those in the initial graph, but augmented with an additional time parameter. For a discrete series of times, then, we have a copy of all the initial nodes for that time:

In this graph, we see a collection of nodes. Each one represents a location, or a state, or something like that- whatever would be represented on the original graph. Each state is marked with a color and has a unique descriptor, sn. In this example, let's name them by their color- S = {R,O,Y,G,B,V}. Now, each of these colored nodes represents a specific time-invariant 'state'- a location or abstract configuration, as in my Dijkstra automated planning application.

Now, each static state is copied over through each time step, and becomes augmented by a time variable. So if the original graph has n nodes and we have t time steps, the new graph will have t∙n nodes. The new nodes are specified as an augmented state <si,t>, a concatenation of the time-invariant part of the state and the time stamp.

Now, each node si will have some set of transitions at time t, specified by a function Γ(si,sj,t), where the output of the function is the duration of time required for the transition from si to sj, given a start from si at t. With this in hand, starting at the first timestep, we can establish directed links from each layer to future layers by taking a link between the augmented node <si,t> to <sj,t + Γ(si,sj,t)>, so that each edge represents a transition in both the state space and time.
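As a sketch of this construction (the state set, time horizon, and Γ table here are all hypothetical), the augmented edge set can be built with a simple triple loop over states and time steps:

```python
# Sketch: build the time-expanded graph from a transition-time function
# gamma(si, sj, t). Nodes are (state, t) pairs; an edge runs from
# (si, t) to (sj, t + gamma(si, sj, t)). As in the text, gamma
# returning 0 marks a non-existent transition.
def build_augmented_graph(states, timesteps, gamma):
    edges = {}  # (si, t) -> list of reachable (sj, t + duration)
    for t in range(timesteps):
        for si in states:
            for sj in states:
                dur = gamma(si, sj, t)
                if dur > 0 and t + dur < timesteps:
                    edges.setdefault((si, t), []).append((sj, t + dur))
    return edges

# Hypothetical gamma matching the R example in the text: from R at
# t = 0, Y takes 1 step and G takes 2; nothing else is reachable.
def gamma(si, sj, t):
    table = {('R', 'Y', 0): 1, ('R', 'G', 0): 2}
    return table.get((si, sj, t), 0)

edges = build_augmented_graph(['R', 'O', 'Y', 'G', 'B', 'V'], 4, gamma)
```

With this Γ, the only augmented node with outbound links is <R,0>, which connects to <Y,1> and <G,2>.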

For instance, let's say we're currently populating links for node R at time t = 0. Suppose, running through Γ for S, we find Γ(R,Y,0) = 1∙T and Γ(R,G,0) = 2∙T, and there are no other viable outbound links from R at time 0 (say we define Γ(si,sj,t) = 0 to indicate the non-existence of a path); then we can begin to populate the augmented graph:

Recall, of course, that these edges are directed, as we can only move forward in time (if you have a way to circumvent this, please email me at ckevinr@gmail.com so I can correct my work). Now, we can populate each node in each level by calculating the transition time between states as given by Γ at each time step, progressing through the levels. For instance, the final population for t = 0 might look something like this:

Which would correspond to a certain Γ(t) (denoting the set of all transitions at time t):

Given Γ for the future time steps we could go on to populate all the forward-feeding edges for additional times. Such a graph becomes very visually cluttered, so instead, I'm going to illustrate the temporal aspects by taking another slice of Γ, where the originating node stays constant throughout the slice, producing all the links outbound from a given node over time (in this case, a hypothetical set for R):

In this slice (which I'm going to elect to denote with Γi, the set of transitions from i over all t) we see the possible paths leading out of R changing over time, analogous to how potential routes from a location will vary with traffic throughout the day. This particular example will correlate with the following block of the Γ function:

Where 3T is a terminal time for this situation, so there are no outbound edges, because there is no further time listed.

If we then say we have constructed the entire time-extended graph (I'm going to call it χ, because Γ is getting a little overloaded), we can take note of some of its structural properties, even though drawing it directly is fairly unhelpful. For starters, χ can be represented as an <n⨯n⨯t> transition table, where the n⨯n components refer to transitions from i to j at the time from which the slice is taken. Naturally, it's also a navigable directed graph, though it may not be a connected graph.
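As a sketch of that <n⨯n⨯t> representation (with an invented Γ table, and plain nested lists standing in for a proper array library), the entry χ[t][i][j] is True exactly when a transition from state i to state j departs at time t:

```python
# Sketch: chi as an n x n x t boolean table built from a hypothetical
# gamma table of (si, sj, departure time) -> duration.
states = ['R', 'O', 'Y', 'G', 'B', 'V']
timesteps = 4
gamma = {('R', 'Y', 0): 1, ('R', 'G', 0): 2, ('Y', 'G', 1): 1}

n = len(states)
index = {s: i for i, s in enumerate(states)}  # state name -> row/column
# chi[t][i][j]: can we leave state i for state j at time t?
chi = [[[False] * n for _ in range(n)] for _ in range(timesteps)]
for (si, sj, t), dur in gamma.items():
    if dur > 0:  # dur == 0 would mean no path, as in the text
        chi[t][index[si]][index[sj]] = True

# chi[0] is then the n x n slice of transitions departing at t = 0.
```

Each time slice chi[t] is an ordinary adjacency matrix, which is what makes the "stack of n⨯n slices" picture work.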

Additionally, because all edges feed forward in time, there can (by definition) be no cycles, so the augmented graph is a directed acyclic graph. Projecting the total temporal space down into a net connectivity graph, though, may reintroduce cycles (and I'd imagine in most real cases will)- since you could travel from some augmented node <si,t0> to <sj,t1>, and then later to <si,t2>. In augmented space, this is not a cycle, but in the projection into state space, it is.

If we elect to map out the temporal aspect in this way, say by creating an array which maps sets of augmented states (St) at various times onto their transition matrices (χt1t2, a binary n⨯n array indicating whether a transition from state i to state j is possible in the time frame t1 to t2), then we can see the following pattern among the transitions:

This is maybe the most natural way to represent χ, with the coordinates within the array given by time-ordered augmented states, such that Sn is a vector <(s0,n),(s1,n),...>, and all entries are simple binary connectivity indicators.

Then, we can use Dijkstra's algorithm to find shortest paths, and pick the node with the static state we want and the least time stamp, using the Dijkstra procedure as a replacement for automated deduction, as in the Dijkstra automated planning project mentioned above. This use of path-finding as deduction is a topic I plan to treat in another post, soon.

So then, we can find the most efficient time-branching paths in the augmented graph, and from that shortest-path tree pull the fastest path to each static state for a further reduced-complexity tree (noting that we may be able to reach some state si quickest at time t1, but that the copy of si at t2>t1 may lead to a shorter time to reach state sj- so static states may appear on that reduced tree more than once). This way, we can find a fastest route starting at some point si and ending at sj, which accounts for changes in travel time that occur during the travel- a most efficient path accounting for changing path 'lengths' as time progresses.
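A minimal sketch of that procedure, assuming a hand-built augmented edge set: run Dijkstra over (state, time) pairs, then among all reachable copies of the goal's static state, take the one with the least time stamp.

```python
# Sketch: earliest arrival at a static state in the augmented graph.
# edges: {(state, t): [(state', t'), ...]} - a hypothetical hand-built
# time-expanded graph; edge weight is the time spent, t' - t.
import heapq
import math

def earliest_arrival(edges, start, goal_state):
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, math.inf):
            continue  # stale queue entry
        for nbr in edges.get(node, []):
            nd = d + (nbr[1] - node[1])  # elapsed time along this edge
            if nd < dist.get(nbr, math.inf):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    # All reachable augmented copies of the goal's static state...
    candidates = [node for node in dist if node[0] == goal_state]
    # ...and the one with the smallest time stamp.
    return min(candidates, key=lambda node: node[1], default=None)

edges = {('R', 0): [('Y', 1), ('G', 2)],
         ('Y', 1): [('G', 2), ('B', 3)],
         ('G', 2): [('B', 3)]}
arrival = earliest_arrival(edges, ('R', 0), 'B')
```

Here B is reachable only as the augmented node <B,3>, so that is the earliest arrival returned.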

We can also add in a couple of other possibilities, too. χ can be adjusted to include non-binary values, which represent abstracted 'costs' associated with taking a given edge- the algorithm can then be used to select out minimum-cost paths, which will still include the minimum-time path as each time is a unique augmented state, allowing a post-facto weighting function of cost vs time to be applied to the resultant minimum cost tree.
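A sketch of that cost variant, with invented per-edge costs: because each arrival time is its own augmented node, a later-but-cheaper arrival survives in the results alongside an earlier-but-pricier one, leaving the cost-vs-time trade-off to be made afterward.

```python
# Sketch: chi with abstract non-binary costs on augmented edges.
# edges: {(state, t): [((state', t'), cost), ...]} - hypothetical values.
import heapq
import math

def min_costs(edges, start):
    # Standard Dijkstra, but minimizing summed cost instead of time.
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, math.inf):
            continue  # stale queue entry
        for nbr, cost in edges.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, math.inf):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return dist  # minimum cost to every reachable augmented node

edges = {('R', 0): [(('Y', 1), 5.0), (('G', 2), 1.0)],
         ('Y', 1): [(('B', 3), 1.0)],
         ('G', 2): [(('B', 4), 2.0)]}
dist = min_costs(edges, ('R', 0))
# <B,3> arrives earlier at total cost 6.0; <B,4> arrives later at 3.0.
# Both augmented nodes remain available for a post-facto weighting of
# cost against time.
```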

Alternatively, one could also apply probabilities to these edges, as used in the oft-referenced other project, to compare maximum probability of a path to times- such as a destination that may be achieved at some low time, but with very low probability, versus a later arrival with a high probability of success.

One final alteration would be the production of new augmented states by computing the transition time from a continuous Γ and labeling each newly created node with that time. In this setup, each time you add a new node, you would compute the continuous transition times to all other states (the states themselves are still discrete- I'm still working out the kinks in a fully continuous space) and then mark as permanent only the shortest of the possibilities from the newly updated list, as per the usual implementation of Dijkstra's algorithm. If, at some point during this procedure, you were to create a node for a particular state whose time stamp falls within some threshold duration of another node with the same static state, you could truncate the (slightly) longer of the two to make the graph a little more sparse and gain a small time savings.

To conclude, this modified procedure for adapting a graph allows the analysis of a path-finding problem (in state or physical space) which possesses a time-variant collection of edge weights, by extending the graph into additional dimensions through the augmentation of static states with time stamps. Applying a path-finding algorithm (I persistently refer to Dijkstra's because I adore it) to this new state space, we can select out paths which are most efficient after consideration of the time-varying properties of the system.

Additionally, because the array representation of the modified graph is a connectivity array, we can apply other metrics to the edge weights and include cost-like or probability-like computations in our path planning. The result is a survey of the space that includes extremum-metric paths, allows travel-time comparisons, and permits after-the-fact application of different weighting schemes to pick an ideal balance of speed and cost/likelihood, depending on the context of the problem.
