## Experiment 9: Sussman Anomaly

The Sussman Anomaly is a problem first posed by Gerald Sussman in his doctoral dissertation, illustrating a subtle failure mode of automated planning systems that are not 'interleaved'; that is (loosely speaking), planners that cannot interleave work on one subgoal with work on another. The problem bears many similarities to the Tower of Hanoi we looked at above: it involves stacking things on top of one another, boxes in this case. It does not, however, impose the Tower of Hanoi's ordering restriction, which makes it at once simpler and more difficult. It is simpler in that movement of the boxes is unrestricted, so there are no special rules about where any given box may be placed; it is more difficult because that freedom gives the agent far more room to experiment.

The basic problem statement is this: given three boxes A, B, and C, stacked such that C is atop A and B is in its own stack, move boxes so that A is atop B, which is atop C. This is one of those problems that looks simple to a human but can confound weakly designed AIs. The crux is that the agent has two natural subgoals for this task: A atop B, and B atop C. However, attempting a first-order move results in the stack CAB for the first and BCA for the second. On a small scale, this illustrates a difficulty associated with sorting problems: individual elements may be easy to order, but blocks of elements may be non-comparable (not every element of one block is 'less than' every element of the other block).
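
The interference between the two subgoals can be seen in a few lines of simulation. This is a minimal sketch under my own assumptions (the state representation and helper names below are illustrative, not taken from the experiment's code): stacks are tuples of boxes, bottom to top, and a move transfers the top box of one stack onto another.

```python
# Minimal blocks-world sketch. A state is a tuple of stacks;
# each stack is a tuple of boxes, listed bottom to top.

def move(state, src, dst):
    """Move the top box of stack `src` onto stack `dst`."""
    stacks = [list(s) for s in state]
    box = stacks[src].pop()
    stacks[dst].append(box)
    return tuple(tuple(s) for s in stacks)

# Initial state: C is atop A; B is alone; one empty stack.
start = (("A", "C"), ("B",), ())

# Pursue subgoal 1 first (A atop B): clear A, then stack A on B.
s = move(start, 0, 2)   # C goes to the empty stack
s = move(s, 0, 1)       # A goes onto B

# Subgoal 1 now holds, but B is buried under A: achieving
# subgoal 2 (B atop C) would force us to unstack A again,
# undoing the work just done.
a_on_b = any(st[i:i + 2] == ("B", "A") for st in s for i in range(len(st) - 1))
b_clear = any(st and st[-1] == "B" for st in s)
print(a_on_b, b_clear)  # True False
```

Pursuing either subgoal to completion leaves the precondition of the other unsatisfiable without backtracking, which is exactly the trap a non-interleaved planner falls into.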

In this experiment, we teach the agent, essentially, how to sort such stacks. The intent is both to explore sorting as a problem space and to demonstrate the agent's immunity to this planning fault; in particular, that the algorithm remains stable when a multistage plan is necessary. This experiment has one other interesting aspect: we reuse the exact critic from the Tower of Hanoi experiment, as a demonstration of its efficacy in another problem domain.

For this problem, we specify the disposition of the system by recording which box is on top of which stack. We allow for three different stacks, since there are three boxes. This gives a state space of 58 addresses to examine.
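
The 58-address table presumably reflects the experiment's particular encoding, which isn't spelled out here. As a hedged illustration of the general idea, the sketch below enumerates the reachable configurations of three boxes on three stacks by breadth-first search and assigns each one a table address; every name in it is my own, not the experiment's.

```python
# Hypothetical state-indexing sketch: enumerate reachable
# blocks-world states and assign each a unique table address.
from collections import deque

def moves(state):
    """Yield every state reachable by moving one top box to another stack."""
    for src, s in enumerate(state):
        if not s:
            continue
        for dst in range(len(state)):
            if dst == src:
                continue
            stacks = [list(t) for t in state]
            stacks[dst].append(stacks[src].pop())
            yield tuple(tuple(t) for t in stacks)

start = (("A", "C"), ("B",), ())  # C atop A; B alone; one empty stack
index = {start: 0}                # state -> table address
queue = deque([start])
while queue:
    state = queue.popleft()
    for nxt in moves(state):
        if nxt not in index:
            index[nxt] = len(index)
            queue.append(nxt)

goal = ((), ("C", "B", "A"), ())  # A atop B atop C, middle stack
print(goal in index, len(set(index.values())) == len(index))
```

A lookup table keyed this way is all a tabular learner needs; the exact address count depends on how one treats empty stacks and stack identity, which is why the figure above is specific to the experiment's own encoding.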
