I've made a couple of postings at this point in which I co-opt path planning algorithms to solve other problems. I've also mentioned that I use graph theory as a core tool for understanding algorithms, and these two things are naturally related. What I've found is that combinatorial problems can be converted with ease to different types of graph problems. For instance, consider the task of sorting a collection of values- you can conceptualize the output list as being constructed by starting from a root node, then traveling outwards along a directed tree, where each set of branches corresponds to the choice of the next element in the list. Then, you can assign a weight to each edge, like a 0 if the succeeding choice is at least as large as the prior element and a 1 otherwise, or the difference between the current element and the next choice. Then, we can perform some sort of analysis on the constructed graph which extracts a path corresponding to the sorted list- for the former, the minimal-weight path (a zero-weight path visits the elements in order), for the latter, the most negative path (because the constructed graph is a tree, we don't need to worry about negative cycles).
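To make the sorting example concrete, here's a minimal sketch of the first weighting scheme: a best-first search over the implicit decision tree, where each node is a partial output list and each edge (appending one remaining element) costs 0 if order is preserved and 1 otherwise. The sorted list is the unique zero-weight root-to-leaf path. This is purely illustrative- the tree has factorially many leaves, so it's no way to actually sort.

```python
import heapq

def sort_via_path(values):
    """Sort by finding the minimum-weight root-to-leaf path in the
    implicit decision tree: each node is a partial output, each edge
    appends one remaining element. Edge weight is 0 if the appended
    element is >= the previous one, 1 otherwise."""
    heap = [(0, (), tuple(values))]  # (cost so far, chosen prefix, remaining)
    while heap:
        cost, prefix, remaining = heapq.heappop(heap)
        if not remaining:
            return list(prefix)  # reached a leaf: a complete ordering
        for i, v in enumerate(remaining):
            w = 0 if (not prefix or v >= prefix[-1]) else 1
            heapq.heappush(heap, (cost + w,
                                  prefix + (v,),
                                  remaining[:i] + remaining[i + 1:]))

print(sort_via_path([3, 1, 2]))  # → [1, 2, 3]
```

Because the search pops nodes in order of accumulated weight, the first complete ordering it returns is the one with the fewest out-of-order adjacent pairs- which is the sorted list.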

You can also take two examples of this sort of thinking from my two prior posts, which use path planning to navigate different state spaces. The unifying thread among all these applications is that we construct graphs which are isomorphic to a specific problem space, such that a path through the graph maps onto a legal series of changes in the original state space. This transformation of state space into graph space is the root idea, and it is an extension of the common application of path planning in which, rather than setting the edge weights to the distances between the destinations the nodes represent, we set them to the travel times between them. In similar fashion, in these examples we set the weights to correspond to pertinent properties of the system we're analyzing. In the Dijkstra Automated Planner, the edges represent probabilities of transitions given action choices from a given state. In the Time Variant Planning example, they correspond to a two dimensional time quantity- a duration dependent on starting time.
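The time-variant weighting can be sketched as a Dijkstra variant where each edge's duration is evaluated at the time you enter it. Everything below is a hypothetical stand-in (the graph, the "rush hour" window), not the actual Time Variant Planning code, and it assumes the usual caveat that waiting at a node is never beneficial:

```python
import heapq

def time_variant_dijkstra(succ, start, goal, t0=0):
    """Earliest-arrival search where edge durations depend on entry
    time: succ(node, t) yields (neighbor, duration) pairs evaluated
    at time t. Assumes waiting at a node never helps."""
    heap = [(t0, start)]
    best = {}  # earliest known arrival time per node
    while heap:
        t, node = heapq.heappop(heap)
        if node == goal:
            return t
        if node in best and best[node] <= t:
            continue
        best[node] = t
        for nxt, dur in succ(node, t):
            heapq.heappush(heap, (t + dur, nxt))
    return None

# Hypothetical example: the A->B edge is slow during a "rush hour"
# window, 2 <= t < 5; otherwise it takes one time unit.
def succ(node, t):
    edges = {'A': [('B', 10 if 2 <= t < 5 else 1)],
             'B': [('C', 1)],
             'C': []}
    return edges[node]

print(time_variant_dijkstra(succ, 'A', 'C'))  # → 2 (departs t=0, beats the rush)
```

The weight is no longer a scalar attached to the edge but a function of the path taken to reach it- the "two dimensional time quantity" mentioned above.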

What all this revolves around is the idea that these problems can be configured so that a solution corresponds to a path. In fact, this is usually how I determine the changes to make: I take an abstract solution, consider it as a path, and try to identify in what space it can exist as a path with a metric extremum that can be traced by a path-finding algorithm. Once in that space, all we have to do is find the path, and since it's an extremum on a graph, path finding can retrieve the actual (non-abstracted) sequence which is a solution. Generally, I tend to refer to the graphs built to match with problems as 'constructed graphs', because they are built from the information known about another, initially non-graphical, problem space.

One way to think about this in a simplified context is to consider a graph with unweighted edges (i.e. between any two nodes, you either can or cannot traverse). The edges then represent, essentially, the legality of a state transition. I'd like to point out here that these constructed graphs are almost always directed, as transitions typically depend on where you come from (in the spirit of a one-way road on a map). If you then have a map of transitions formed as a graph, the most efficient way to get from some initial state to the final state is the minimal path between the corresponding nodes in the constructed graph. In fact, a fair proportion of the Dijkstra based planner is the conversion from a space which contains probabilistic split transitions into a compressed graph like the one described here, except with weights. In this sense, the path planning procedure provides a more efficient replacement for exhaustively searching through the space, while also accommodating additional information, such as the probability or cost of a transition.
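In the unweighted case, "minimal path" reduces to plain breadth-first search over the legality graph. A small sketch, with a made-up three-state machine standing in for the problem space (note the directedness: 'on' can go to 'off', but not vice versa directly):

```python
from collections import deque

def shortest_transition_sequence(legal, start, goal):
    """BFS over an unweighted, directed constructed graph: legal[s]
    lists the states reachable from s in one legal transition.
    Returns the shortest sequence of states from start to goal."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            path = []
            while s is not None:  # walk parent links back to start
                path.append(s)
                s = parent[s]
            return path[::-1]
        for nxt in legal.get(s, ()):
            if nxt not in parent:
                parent[nxt] = s
                queue.append(nxt)
    return None

# Hypothetical machine: edges encode which transitions are legal.
legal = {'off': ['standby'], 'standby': ['on'], 'on': ['off', 'standby']}
print(shortest_transition_sequence(legal, 'off', 'on'))
# → ['off', 'standby', 'on']
```

Each edge is pure legality- no cost, no probability- and the returned path is already the legal transition sequence in the original state space.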

Another facet of this approach is that, rather than operating from a complete table of values, we can work from some sort of function, simulation, or neural network based representation of the costs, times, probabilities, or whatever else marks the transition values. (In the previous two posts, we only briefly discussed the use of compacted functions representing the graph, as they were geared to problems that tend not to admit such representations.) From such a representation, we can generate the minimal map on the fly by evaluating the outgoing weights from the current 'permanent' list. The utility of this feature naturally depends on the nature of the problem and whether such a representation is available, but it is a specific implementation mode that can be employed, potentially eliminating or complementing an experimental phase.
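A sketch of that implementation mode: Dijkstra where neighbors and weights come from callables rather than a stored table, so no part of the graph is materialized until the frontier reaches it. The grid and cost function here are invented stand-ins for whatever function or simulation the problem provides:

```python
import heapq

def dijkstra_implicit(cost_fn, neighbors_fn, start, goal):
    """Dijkstra where edge weights are computed on demand by
    cost_fn(a, b) and successors by neighbors_fn(a), rather than
    read from a precomputed table."""
    heap = [(0, start)]
    permanent = set()  # nodes whose optimal cost is settled
    while heap:
        cost, node = heapq.heappop(heap)
        if node in permanent:
            continue
        if node == goal:
            return cost
        permanent.add(node)
        for nxt in neighbors_fn(node):
            if nxt not in permanent:
                heapq.heappush(heap, (cost + cost_fn(node, nxt), nxt))
    return None

# Hypothetical stand-ins for a simulation: grid moves whose cost is
# computed from the destination, not looked up in a table.
def neighbors(p):
    x, y = p
    return [(x + 1, y), (x, y + 1)]

def step_cost(a, b):
    return 1 + abs(b[0] - b[1])  # cheaper to stay near the diagonal

print(dijkstra_implicit(step_cost, neighbors, (0, 0), (2, 2)))  # → 6
```

Only nodes adjacent to the 'permanent' set ever get their weights evaluated, which is exactly what makes this workable when evaluations are expensive (a simulation run, a network forward pass).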

This is the sense in which the path planning algorithm acts as a deductive engine- whenever you add a node to the list of 'permanent' nodes, ones to which an optimal path is known, you are lopping off a collection of alternate paths which pass through that node differently, or involve that node at different times. This process of excising non-optimal paths by selecting optimal ones eventually removes all inefficient paths, leaving you with a final set of transitions within the mapped space that travel from the originating state (starting node) to the ending state (terminal node).

The essential takeaway here is that we can construct mappings between different problem spaces which allow solutions to be generated in the better-understood (or better behaved) space, similar to the use of Fourier transforms to solve analysis problems. And really, it starts to look awfully practical when we realize that we can turn any state machine into a state transition graph (i.e., a flowchart!), on which this technique can find optimized transition sequences. From there, we can apply probabilities and action mapping as in the original automated planning version, apply time variant expansion as in the latter article, or even synthesize the two as hinted at in the conclusion of that second post. As long as we can find a transformation from problem space to graph space, we can use path planning to find solutions within that space.
