Source of "New Art System" is

title: HAL'S LEGACY
2001's COMPUTER AS DREAM AND REALITY
EDITED BY DAVID G. STORK
THE MIT PRESS
ISBN 0-262-19378-7

p310-
#Plan generation
#Plan recognition
#Plan revision
#Scheduling
#Reactive control
#Adversarial planning

The defining focus of plan generation is the need to synthesize a plan by
reasoning about the future consequences of actions. Synthesis is hard, as
there can be many solutions, and each one may be arbitrarily long and
complex. Many AI problems, however, need only classification, not synthesis.
For example, to prove the theorem "all integers can be factored into
primes," the theorem prover need only classify the proposition as either
true or false.
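
To make the contrast concrete, here is a minimal sketch of plan synthesis as
forward state-space search, assuming a STRIPS-style representation
(preconditions, add effects, delete effects). The three-action domain and
every name in it are invented for illustration, not taken from the book.

from collections import deque

# Each action: name -> (preconditions, add effects, delete effects),
# with states represented as frozensets of true facts.
ACTIONS = {
    "pick_up":  ({"hand_empty", "on_table"}, {"holding"}, {"hand_empty", "on_table"}),
    "put_down": ({"holding"}, {"hand_empty", "on_table"}, {"holding"}),
    "stack":    ({"holding"}, {"hand_empty", "stacked"}, {"holding"}),
}

def plan(initial, goal):
    """Breadth-first search for an action sequence that reaches the goal."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                        # all goal facts hold
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:                     # preconditions satisfied
                nxt = (state - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                  # no plan exists

print(plan({"hand_empty", "on_table"}, {"stacked"}))  # -> ['pick_up', 'stack']
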
Often systems can be written to fulfill a planning function while avoiding
plan generation or any reasoning about actions. Although such approaches
should be taken when practical to avoid unnecessary complexities, they are
not really planning, as AI generally defines it.

Before a computer can solve a problem, it must be able to represent it.
Precisely defined problems, like chess, are easy to represent; the difficulty
lies in computing the best move. In planning, how to represent actions is a
topic of much current research. Even after actions are represented, a
computational problem far harder than selecting a chess move remains.

The qualification problem also affects action representations. Actions
generally have all sorts of qualifying conditions that are extremely unlikely
to occur yet, strictly speaking, must be ruled out in the preconditions (a
car will start only if, among countless other things, there is no potato in
its tailpipe).
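
As a sketch of what such a representation might look like, the action schema
below lists explicit preconditions and effects; the start_car domain and all
names are illustrative assumptions, and the comments mark where the
qualification problem bites.

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    preconditions: set = field(default_factory=set)
    add_effects: set = field(default_factory=set)
    del_effects: set = field(default_factory=set)

# The qualification problem: this precondition list is never truly complete.
# Starting a car also requires a charged battery, an unobstructed tailpipe,
# and countless other conditions too unlikely to be worth enumerating.
start_car = Action(
    name="start_car",
    preconditions={"have_key", "in_car", "fuel_in_tank"},  # the "obvious" ones
    add_effects={"engine_running"},
)
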

Early planning systems were limited to totally ordered sequences of actions,
which makes it easy to determine what is true at any particular point in the
plan. For a problem with parallel goals, such a system would have to try
different possible orderings of the parallel goals until it finds an adequate
plan.
Most current planning programs allow for partially ordered actions. By
committing to only a limited set of orderings, the planner can avoid having
to try all orderings, adding new ordering constraints only as they are
needed; partial orders can also represent more complex problems with
multiple agents acting in parallel. However, such systems make it
computationally difficult to determine what is true at a given point in the
plan. Suppose a plan begins with n actions in parallel: is some fact true
after these actions are taken? There are n! different execution orders for
these actions, and some may make the fact true while others may make it
false.
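
A small sketch of the difficulty, assuming STRIPS-style add/delete effects
(the three actions and the fact p are invented for illustration): with n
unordered actions there are n! execution orders, and whether p holds
afterward can depend on which order is taken.

from itertools import permutations

actions = {
    "a1": ({"p"}, set()),     # a1 makes p true
    "a2": (set(), {"p"}),     # a2 makes p false
    "a3": ({"q"}, set()),     # a3 is independent of p
}

def holds_after(order, fact, state=frozenset()):
    """Apply the actions in the given order and test whether fact holds."""
    for name in order:
        add, delete = actions[name]
        state = (state - delete) | add
    return fact in state

results = {order: holds_after(order, "p") for order in permutations(actions)}
print(any(results.values()), all(results.values()))   # True False: order-dependent
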

Many AI systems do not deduce situation-dependent effects because of the
complexities they introduce (e.g., deductions need to be recomputed when a
new action is added at the beginning of the plan). Instead, what is
intuitively a single action must be represented as many different actions,
one for each situation in which the action might have a slightly different
effect.
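
For instance, here is a minimal sketch, assuming a hypothetical light-switch
domain, of how one intuitive action ("toggle the switch") splits into two
STRIPS-style actions when the planner cannot reason about conditional
effects.

# With conditional effects, one action would suffice:
#   toggle: if light_on then light_off, else light_on
# Without them, the single intuitive action becomes two, one per situation:
ACTIONS = {
    "toggle_when_on":  ({"light_on"},  {"light_off"}, {"light_on"}),
    "toggle_when_off": ({"light_off"}, {"light_on"},  {"light_off"}),
}

state = frozenset({"light_on"})
for name, (pre, add, delete) in ACTIONS.items():
    if pre <= state:                              # only the matching variant applies
        print(name, "->", sorted((state - delete) | add))
# toggle_when_on -> ['light_off']
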

To expand a plan to a lower level of abstraction, or to contain more detail
at the current abstraction level, the system selects a goal. It then finds
all operators that are relevant to reaching such a goal and chooses one. If
it makes a poor choice, the search will eventually try the other
possibilities. The planner expands the plan by first instantiating the
operator (by binding some of its variables) and then replacing the goal with
the more detailed subplan specified in the operator, as sketched after the
list below.

#When and how resources are allocated
#Which operators are applied to which nodes
#Which goal to solve next
#Which objects to use as instantiations for planning variables
#How to resolve conflicts between actions
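
Here is the sketch referred to above: one expansion step of hierarchical plan
refinement in Python. The delivery domain, the operator table, and the
variable-binding scheme are all assumptions made for illustration.

import random

OPERATORS = {
    # goal pattern -> alternative subplan templates (each a choice point;
    # after a poor choice, backtracking would try the others)
    "deliver(?pkg)": [
        ["pick_up(?pkg)", "travel_to_destination", "drop_off(?pkg)"],
    ],
}

def expand(plan, goal, binding):
    """Replace one goal in the plan with an instantiated subplan."""
    template = random.choice(OPERATORS[goal])       # choose a relevant operator
    subplan = list(template)
    for var, value in binding.items():              # instantiate: bind variables
        subplan = [step.replace(var, value) for step in subplan]
    i = plan.index(goal)
    return plan[:i] + subplan + plan[i + 1:]

print(expand(["deliver(?pkg)"], "deliver(?pkg)", {"?pkg": "parcel7"}))
# -> ['pick_up(parcel7)', 'travel_to_destination', 'drop_off(parcel7)']
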

Planning systems are brittle because they can use only actions for which
they have a wealth of information. AI planners can produce good plans as
long as their knowledge of the world and of the available actions is
perfect. However, change the world or an action slightly, and these planners
often cannot cope.