One thing planners (humans and machines alike) must contend with is the problem of scheduling a series of tasks efficiently. It's pretty simple if there is only one actor to perform the tasks- in that case, jobs must simply be done in order of priority, with any prerequisites occurring before their dependents, naturally. However, imagine you have a collection of ten jobs and three actors (say, three people working together, or a computer capable of running three processes concurrently). In this situation, the issue becomes how to divide the work among the different agents so as to complete all tasks as quickly as possible.

To this end, we need a selection algorithm that will assign tasks in an order that minimizes the time to completion of all tasks. It's prudent to know what this minimum time is before setting to the task of building the algorithm, though I'll admit I only examined that question after the fact. The simplest analysis is to consider all the tasks as a fungible body of time to be spent. Instead of having tasks A, B and C which take 6, 3, and 4 units of time, respectively, to complete, we have 13 time units of work to divide between the agents. If we have three such agents, then, we can allocate each 4.33 units exactly, and be finished with all work after 4.33 time units. This gives us a lower bound on the time to completion of all work: the total time to complete all tasks, divided equally among all actors. For compactness, if we have n actors and total work Ts, then tcmin = Ts/n.
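As a quick sanity check, this ideal bound is a one-liner. A minimal sketch (the function name is my own), using the three-task example above:

```python
def ideal_lower_bound(task_times, n_agents):
    # Treat all work as fungible: the total work, split evenly across agents.
    return sum(task_times) / n_agents

# Tasks of 6, 3 and 4 units shared among three agents: 13/3 units each.
print(ideal_lower_bound([6, 3, 4], 3))
```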

How do we know for sure that this is the minimum time to completion? Consider each agent i with a total workload of wi units. Let's say the agent with the minimum workload has wm units of tasks and the agent with the maximum has wM units. The time to total completion is the finish time of the agent that finishes last, and is thus wM. If we move work from any agent other than the maximal one, then the total time to completion either remains the same, if the new distribution of work leaves wM at the maximum, or increases, if we add to wM or to another agent who thereby becomes the maximal agent. There is no procedure aside from removing work from the maximal agent that reduces the net time to completion.

Given that, consider moving tasks from the maximal agent- doing so reduces the total time to completion, so long as the agent which receives the moved task does not end up with a greater workload than wM. In the ideal case, in which a continuous amount of work can be shifted at will from any agent, taking from the maximum load and adding to one of the less-taxed agents will reduce the total time to completion. It's fairly straightforward to see that continuing this process will eventually lead to a uniform distribution of workload, and that any deviation from the most uniform distribution will result in sub-optimal performance.

Knowing this, we can look at our group of tasks and start sorting them. Let's start with a basic set with no dependencies (that is to say, no task in the list requires another to be complete before it can be performed- we'll relax this requirement later). In this case, we need to assign tasks in such a way that we get the distribution as close to uniform as possible.

We acknowledge that actual uniformity may not be possible- consider a group of tasks taking 1, 2 and 3 units to complete, and having three agents. No matter how we distribute these tasks, we have a hard limit of 3 units to completion- one agent gets the 3 unit task, period. If one other agent does both the 1 and 2 unit tasks, or if they're split between the remaining two, it still takes 3 units to finish the longest task. This complements our earlier deduction- the minimum time will always be greater than or equal to the single longest task (tM). This is a consequence of the indivisible duration of the tasks, so including it in the analysis sharpens our theoretical minimum to the better limit of: max({tM,Ts/n}).

So, we set to distributing our tasks so that they are divided as close to the minimum as possible. Let's start by imagining the agents' activity queues as a set of n empty ordered lists, plus the list of unallocated tasks. We begin by assigning one of the agents the longest task. We know that this is a valid optimal allocation because the minimum time is either this task's duration or the average, and the task must be allocated to some actor. Next, we assign the second longest task to the next free actor. This assignment follows from similar logic, since the sub-problem of assigning the remaining list to the remaining free agents is an identical problem, and assigning the longest and second longest tasks to the same agent would guarantee a net completion time greater than or equal to their sum.

That is, we essentially pretend the top two tasks are one very long task if we assign them both to the first agent. Doing so alters max({tM,Ts/n}) whenever their combined duration exceeds Ts/n, in which case it raises that limit. Consequently, either adding the second greatest task to the first agent increases the net completion time, or good solutions exist both with it assigned to the first agent and with it assigned to another. Therefore, it is prudent to add it to the next free agent, as this avoids the case in which the theoretical minimum is elevated. This reasoning continues to apply until every agent has at least one allocation, so our first step is to take the n greatest elements in the task list and allocate them to the n agents, one each.

At this point, we have n agents, each holding one of the n greatest duration tasks in the list. It seems natural that the next allocation will follow the same pattern, and that the (n+1)th greatest duration task is the next to assign, but the question of which agent should receive it remains. Considering that we want to cap the maximum time, however, it should be apparent that adding the next task to any agent whose resulting total would exceed that of the current most-loaded agent should be avoided, if possible.

But what of cases in which that isn't possible, or in which many options satisfy that rule? Consider the current time frames as relative: we can subtract the minimum agent's total workload from every agent's total. This recasts the problem as the original one of filling up unallocated agents efficiently. Therefore, the general procedure is to go down the list of tasks, assigning the longest unallocated task to the agent with the least total work currently assigned, until the list is empty.
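This dependency-free procedure fits in a few lines. A minimal sketch using a min-heap of agent loads, so that popping always yields the least-loaded agent (the function name is my own):

```python
import heapq

def allocate(task_times, n_agents):
    # Heap entries are (total assigned work, agent index).
    loads = [(0, i) for i in range(n_agents)]
    heapq.heapify(loads)
    queues = [[] for _ in range(n_agents)]
    # Walk the tasks longest-first, giving each to the least-loaded agent.
    for t in sorted(task_times, reverse=True):
        load, i = heapq.heappop(loads)
        queues[i].append(t)
        heapq.heappush(loads, (load + t, i))
    return queues

# Two agents, tasks of 6, 4 and 3 units.
print(allocate([6, 3, 4], 2))  # [[6], [4, 3]]
```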

Following this procedure, we will distribute the tasks so that deviations from uniformity are as small as possible, because we will, essentially, be consistently raising the level of the minimal element, and thus preemptively shrinking the distance between the maximal and minimal elements. However, consider a case in which there are dependencies between tasks- certain ones require others to be completed before they can be performed. If there is a task A which is very long, but requires a very short task B before it can begin, how should A and B be considered together for effective planning?

To approach this problem, we'll think of the assignment process as a dynamic one, in which the 'subtraction' we mentioned earlier is subsumed into the advance of time itself. At each step, we will assign tasks to inactive agents from the pool of pending tasks, in descending order of duration. We will then check all agents for the time remaining on their current task, advance the time counter to the end of the soonest-completed task, and proceed with allocation from there.

Framing the allocation this way, let's turn our attention to the dependent sequence. First, we say that any dependent task will only be added to the list of pending tasks once all its dependencies are complete. This alone does not fully rectify the ordering, however. Imagine a case in which task A depends on task B. If task A takes 10 units to complete, but task B is only a 1 unit task, and all other tasks are in the 2-9 unit range, then task B will remain pending until the very end of the allocation procedure, by which time the disposition of tasks will be very near the optimal sorting for the list as if A and B were not in it. Then, at the very end, task B is completed, and releases a highly unbalancing 10 unit task into the mix, which has a radical negative effect on the net uniformity of the plan.

The issue here is that task B is being considered independently of the tasks which depend on it. To include this in the consideration, we'll adjust the notion of duration for tasks which are prerequisites for others: the effective 'duration' associated with such a task is its own time plus the 'durations' of all tasks which directly depend on it, where those 'durations' are themselves computed the same way, so that the net time associated with the task is calculated recursively.

For instance, suppose we have a battery of four tasks A, B, C, and D, with individual durations tA, tB, tC, and tD respectively, and we say that task A depends on B and C, and task B depends on D. Tasks C and D are the independent tasks, which are available for allocation immediately. D's effective 'duration' is: dD = tD + dB. Task B, then, will have a duration of dB = tB + dA; task C will have dC = tC + dA; and task A, being the only one in this group on which no other task depends, will simply have dA = tA. This results in the final durations: dD = tD + tB + tA; dC = tC + tA; dB = tB + tA; and dA = tA.

Then, in the allocation, when prioritizing each task, the pending task list will be sorted using these effective durations rather than each task's individual time. This prioritizes dependent blocks by how much time they trap, the end result being the early elimination of blocking tasks even when they are themselves small, thus allowing a more even distribution of work. Let's work this case as an example, assigning the times tA = 3; tB = 2; tC = 4; and tD = 3. And, let's say we have a 2 agent system into which we will be assigning these tasks.
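The effective durations themselves come from a short memoized recursion. A sketch, assuming the graph is given as a map from each task to its direct dependents (function and variable names are my own):

```python
def effective_durations(times, dependents):
    # d(x) = t(x) + sum of d(y) over every task y directly depending on x;
    # the recursion folds in transitive dependents automatically.
    memo = {}
    def d(task):
        if task not in memo:
            memo[task] = times[task] + sum(d(y) for y in dependents.get(task, []))
        return memo[task]
    return {task: d(task) for task in times}

# A depends on B and C, and B depends on D, so A is the direct
# dependent of B and of C, and B is the direct dependent of D.
times = {"A": 3, "B": 2, "C": 4, "D": 3}
dependents = {"B": ["A"], "C": ["A"], "D": ["B"]}
print(effective_durations(times, dependents))  # {'A': 3, 'B': 5, 'C': 7, 'D': 8}
```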

The first step is computing the effective duration of each task. A is simple: having no dependents, its duration is merely its own time, 3. For B, the duration is 5: its own time of 2 added to the duration of 3 for A. C has a duration of 7, from its own 4 plus A's 3, and D has a duration of 8, from its own time of 3 plus B's duration of 5. We can illustrate this setup with the following diagram:

What we're illustrating here is a time chart for the two agents, a1 and a2, the list of currently available tasks, and the list of completed tasks, as well as a small directed graph representing the dependency tree. On the tree, we've labeled the individual task time (in blue) and the effective duration used for comparison (in red). Of the two available tasks, we've got C with a duration of 7 and D with a duration of 8, so we first assign D to agent 1 at time 0:

Next, we look at the earliest time at which an agent can accept a task. Agent 2 has no tasks at all, so it can receive one at time 0. At t = 0, task D has been claimed, and task B has not yet had its requirements met (since D is *in progress*, as opposed to having been completed). Thus, the only remaining choice (A is right out- none of its direct requirements are met yet) is C, so we assign C to agent 2. Now, with both agents working on a task, we can roll forward until t = 3, when agent 1 completes task D, and we need to check for a new activity. Completion of task D means that all of B's prerequisites are met, and thus it is available for allocation. So, we assign it to agent 1, and proceed.

Next, after we assign B, we can move up to t = 4, when agent 2 completes task C. However, at this point in time, agent 1 is still working on task B. This puts us in the situation mentioned above where there are no tasks that any agent can begin, though work remains to be assigned. Agent 2 must sit idle until agent 1 completes task B.

At time 5, when task B is complete and agent 2 has waited one time step, all the prerequisites for task A are met, and so we can now assign task A. Both agents are available, so it does not matter much which one receives it; let's just decide on giving it to agent 2.

During the period in which agent 2 is working on task A, agent 1 has no work available, as the list of pending tasks is now empty. So, at time 8, when agent 2 finishes A, all tasks are done, with a final completion time of 8.

At the end of this procedure, we've allocated the tasks and produced a timetable for their completion, with all idle time accounted for (as opposed to a bare list lacking these idle times, such as {D,B}, {C,A}, which might suggest a final completion time of 7 units).
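The whole walkthrough can be reproduced with an event-driven sketch: assign available tasks by descending effective duration, advance time to the next completion, release newly unblocked tasks, and repeat. This is my own minimal rendering of the procedure described above, with deps mapping each task to its prerequisites:

```python
import heapq

def simulate(times, deps, n_agents):
    # Reverse the prerequisite map to find each task's direct dependents,
    # which the recursive effective duration needs.
    dependents = {t: [] for t in times}
    for task, reqs in deps.items():
        for r in reqs:
            dependents[r].append(task)
    memo = {}
    def dur(t):
        if t not in memo:
            memo[t] = times[t] + sum(dur(y) for y in dependents[t])
        return memo[t]

    done = set()
    pending = [t for t in times if not deps.get(t)]  # no prerequisites
    busy = []                     # heap of (finish time, agent, task)
    free = list(range(n_agents))
    now = 0.0
    schedule = {}                 # task -> (agent, start time)
    while len(done) < len(times):
        # Assign available tasks, greatest effective duration first.
        pending.sort(key=dur, reverse=True)
        while pending and free:
            task, agent = pending.pop(0), free.pop(0)
            schedule[task] = (agent, now)
            heapq.heappush(busy, (now + times[task], agent, task))
        # Advance time to the soonest completion.
        now, agent, task = heapq.heappop(busy)
        done.add(task)
        free.append(agent)
        # Release any task whose prerequisites are all now complete.
        for t in times:
            if (t not in done and t not in pending and t not in schedule
                    and all(r in done for r in deps.get(t, []))):
                pending.append(t)
    return now, schedule

# The worked example: A depends on B and C, B depends on D.
times = {"A": 3, "B": 2, "C": 4, "D": 3}
deps = {"A": ["B", "C"], "B": ["D"]}
finish, schedule = simulate(times, deps, 2)
print(finish)  # 8.0, matching the walkthrough
```

Note that task A lands on the second agent at t = 5, exactly as in the narrative, because the agent freed earliest is reused first.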

Using this procedure, we can order tasks among any number of agents efficiently, as indicated by the pursuit of the maximally even distribution of individual completion times. Additionally, the precise start time of each action can be recorded using the time-step approach, which also gives the actual total run time, even with dependencies, as it steps through simulated completions in the order they would be achieved in practice. For instance, if all tasks are cleared except one on which several depend, the agents not performing any action sit idle while waiting for new pending tasks to open up. This approach not only sorts tasks for efficient completion, but also does not mask the presence of idle time, and minimizes it by adding tasks to the pending list as soon as they become available.
