Artificial intelligence textbook copy scanned copy, Saroj Kaushik VTU, autonomous Institute
CHAPTER 6

Advanced Problem-Solving Paradigm: Planning

6.1 Introduction

One of the key abilities of intelligent systems is planning. Planning refers to the process of computing several steps of problem-solving before executing any of them. Planning increases the autonomy and flexibility of systems by constructing sequences of actions that help them in achieving their goals. Planning is an area of current interest within AI. One of the reasons for this is that it combines two major areas of AI: search and logic. Planning involves the representation of actions and world models, reasoning about the effects of actions, and techniques for efficiently searching the space of possible plans. A planner may therefore be either a program that searches for a solution or one that proves the existence of a solution. Planning helps in controlling combinatorial explosion while solving problems. In order to solve non-trivial problems, it is necessary to combine basic problem-solving strategies and knowledge representation mechanisms. For decomposable problems, planning further integrates the partial solutions into a complete solution at the end; planning also proves to be useful as a problem-solving technique in the case of non-decomposable problems. In general problem-solving systems, elementary techniques are required to perform the following functions (Rich & Knight, 2003).

Choosing the Best Rule (Based on Heuristics) to be Applied  For selecting appropriate rules, the most widely used technique is to first isolate a set of differences between the desired goal state and the current state. Then, those rules are identified that are relevant for reducing this difference. If more than one rule is found, heuristic information is applied to choose an appropriate rule.

Applying the Chosen Rule to Obtain a New Problem State  In simple problem-solving systems, it is easy to apply rules, as each rule specifies the problem state that would result from its application. However, in complex problems, we have to deal with rules that only specify part of the complete problem state.

Detecting When a Solution is Found  By the time the required goal state is reached, a solution of each sub-problem has been found. The solutions so obtained are then combined to form the final solution of the problem.

Detecting Dead Ends so that New Directions may be Explored  When a situation arises where we are not able to proceed further from a particular state which is not the goal state, we can declare that state to be a dead end and proceed further in a new direction. In complex systems, we also have to explore techniques to detect when an almost correct solution has been found.

Planning can also be carried out by proving a theorem in situation calculus. In real problems, the state space is enormous; thus, there is a need to construct implicit representations so that the entire state space need not be enumerated. Logic-based representations have been used for constructing implicit representations; they are convenient for producing a plan that logically explains how the system arrives at some goal. A prominent shortcoming is that such plans may be difficult to generalize. In this chapter, we will discuss various techniques of planning.

6.2 Types of Planning Systems

The representation of planning problems is an important issue and needs to be discussed at the outset. An ideal language is one which is expressive enough to describe a wide variety of states, actions, and goals, and restrictive enough to allow efficient algorithms to operate over it. The different components of a planning problem may be represented in the following manner.

Representation of States  Planners decompose the world into logical conditions and represent the current state as a conjunction of predicate atoms (positive literals).

Representation of Goals  A goal is a partially specified state and is represented as a conjunction of such atoms.

Representation of Actions  An action is specified in terms of preconditions (that must hold before the action can be executed) and effects (that ensue when the action has been executed).
A number of formulations have been used so far that attempt to solve planning problems, such as operator-based planning, case-based planning, logic-based planning, constraint-based planning, distributed planning, etc.

6.2.1 Operator-Based Planning

Here, we attempt to solve a problem with the help of planning. We need to have a start state, a set of actions, a goal state, and a database of logical sentences about the states. Once these are specified, the planner will try to generate a plan, which, when executed by the executor in a state satisfying the description of the start state, will result in a state satisfying the description of the goal state G. Here, actions are represented as operators. This approach, also known as the STRIPS approach (explained later), utilizes various operator schemas. The major design issues and concepts are given as follows and have been explained in the following sections appropriately.

• Operator schema  Add-delete-precondition lists, procedural vs declarative representations, etc.
• Plan representations  Linear plans, non-linear plans, hierarchical plans, partial-order plans, conditional plans, etc.
• Planning algorithms  Planning as search, world-space vs plan-space, partial-order planning, total-order planning, progression, goal-regression, etc.
• Plan generation  Plan reformulation, repair, total-ordering, etc.

6.2.2 Planning Algorithms

Search techniques for planning involve searching through a search space. We now introduce the concept of planning as a search strategy. In the search technique method, there are basically two approaches: searching a world (or state) space or searching a plan space. The concepts of world (or state) space and plan space may be defined as given below.

World-Space  In world space, the search space constitutes a set of states of the world, an action is defined as a transition between states, and a plan is described as a path through the state space. In state space, it is easy to determine which sub-goals are achieved and which actions are applicable; however, it is hard to represent concurrent actions.

Plan-Space  In plan space, the search space is a set of plans (including partial plans). The start state is a null plan and transitions are plan operators. The order of the search is not the same as the plan execution order. A shortcoming of the plan space is that it is hard to determine what is true in a plan. Both the approaches are discussed in detail as follows.

Searching a World Space

Each node in the state search graph denotes a state of the world, while arcs in the graph correspond to the execution of a specific action. The planning problem is to find a path from a given start state to the desired goal state in the search graph. For developing planning algorithms, one of the following two approaches may be used:

i. Progression  This approach refers to the process of finding the goal state by searching through the states generated by actions that can be performed in the given state, starting from the start state. It is also referred to as the forward chaining approach. Here, at a given state, an action (which may be non-deterministic) is chosen whose preconditions are satisfied. The process continues until the goal state is reached.

ii. Regression  In this approach, the search proceeds in the backward direction, that is, it starts with the goal state and moves towards the start state. This is done by finding actions whose effects satisfy one or more of the posted goals. Posting the preconditions of the chosen action as goals is called goal regression. It is also known as the backward chaining approach. Here, we choose an action (which may be non-deterministic) that has an effect matching an unachieved sub-goal. Unachieved preconditions are added to the set of sub-goals, and this process is continued till the set of unachieved sub-goals becomes empty. In regression, the focus is on achieving goals, and it is thus often more efficient.

It is to be noted that algorithms based on both the approaches are sound and complete. An algorithm is said to be sound if the plan generated succeeds in completing the desired job, and it is said to be complete if it guarantees to find a plan, if one exists. However, in most situations, regression is found to be a better strategy.
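To make the progression (forward chaining) approach concrete, the sketch below searches forward from the start state, applying any operator whose preconditions hold, until every goal atom is satisfied. The encoding — states as frozensets of predicate atoms, operators as (name, preconditions, add, delete) tuples — is an assumption of this sketch, anticipating the add/delete lists introduced later in the chapter.

```python
from collections import deque

def progression(start, goal, operators):
    """Forward (progression) search: starting from the initial state,
    apply any operator whose preconditions hold, until the goal holds.
    States are frozensets of atoms; each operator is a (name, pre,
    add, delete) tuple -- an assumed encoding for this sketch."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                    # every goal atom is satisfied
            return plan
        for name, pre, add, dele in operators:
            if pre <= state:                 # preconditions satisfied
                new = (state - dele) | add   # apply the effects
                if new not in visited:
                    visited.add(new)
                    frontier.append((new, plan + [name]))
    return None                              # no plan exists

# A toy two-block instance: b is on a; we want to hold b with a clear.
start = frozenset({"ON(b,a)", "CLEAR(b)", "ARMEMPTY"})
goal = frozenset({"CLEAR(a)", "HOLDING(b)"})
ops = [("US(b,a)",
        frozenset({"ON(b,a)", "CLEAR(b)", "ARMEMPTY"}),   # preconditions
        frozenset({"HOLDING(b)", "CLEAR(a)"}),            # add
        frozenset({"ON(b,a)", "ARMEMPTY"}))]              # delete
print(progression(start, goal, ops))   # -> ['US(b,a)']
```

Regression would run the same loop in the opposite direction, replacing an unachieved goal atom by the preconditions of an operator that adds it.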
Searching a Plan Space

Each node in a plan-space graph represents a partial plan, while arcs denote plan refinements. Instead of searching for a plan with a totally-ordered sequence of actions, we may search for a plan with a partially-ordered set of actions. A partial-order plan has the following three components.

Set of actions  As an example of a set of actions, we can consider some of the activities from our daily routine, such as go-for-morning-walk, wake-up, take-bath, go-to-work, go-to-sleep, and so on.

Set of ordering constraints  In the actions mentioned above, the action wake-up is before go-for-morning-walk; therefore, we can represent it as [wake-up < go-for-morning-walk]. Some of the partial ordering constraints in the set of actions given above may be written as

wake-up < go-for-morning-walk
wake-up < take-bath
wake-up < go-to-sleep
wake-up < go-to-work
go-for-morning-walk < go-to-work
go-for-morning-walk < go-to-sleep
take-bath < go-to-sleep
go-to-work < go-to-sleep

Note that all activities need not necessarily obey the constraints. For example, [take-bath < go-to-work] may not have this strict ordering, as one may go to work without taking a bath, or one may take a bath after returning from work. The ordering [go-for-morning-walk < go-to-work] has to be a constraint as specified, as the go-for-morning-walk activity cannot follow the go-to-work activity.

Set of causal links  We observe that awake is a link from the action wake-up to the action go-for-morning-walk, with an awakening state of the person between them, represented as 'wake-up—awake—go-for-morning-walk'. This denotes a causal link. When the action wake-up is added to the generated plan, the above causal link is also recorded along with the ordering constraint [wake-up < go-for-morning-walk]. This is because the effect of the action wake-up is that the individual is awake; this is a precondition of the action go-for-morning-walk. The second action cannot occur until this precondition is satisfied. Causal links help in detecting inconsistencies whenever a partial plan is refined.

6.2.3 Case-Based Planning

In case-based planning, for a given case (consisting of start and goal states) of a new problem, the library of cases in the case base is searched to find a similar problem, with similar start and goal states. The retrieved solution is then modified or tailored according to the new problem.
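The three components of a partial-order plan can be represented directly as data. The sketch below encodes the daily-routine example as sets of actions, ordering constraints, and causal links, and uses a topological sort to produce one linearization consistent with the partial order; the encoding itself is an illustrative assumption, not notation from the text.

```python
from graphlib import TopologicalSorter

# A partial-order plan as three components: actions, ordering
# constraints (a, b) meaning "a before b", and causal links
# (producer, condition, consumer), as in wake-up --awake--> walk.
actions = {"wake-up", "go-for-morning-walk", "take-bath",
           "go-to-work", "go-to-sleep"}
orderings = {("wake-up", "go-for-morning-walk"),
             ("wake-up", "take-bath"),
             ("wake-up", "go-to-sleep"),
             ("wake-up", "go-to-work"),
             ("go-for-morning-walk", "go-to-work"),
             ("go-for-morning-walk", "go-to-sleep"),
             ("take-bath", "go-to-sleep"),
             ("go-to-work", "go-to-sleep")}
causal_links = {("wake-up", "awake", "go-for-morning-walk")}

# Every causal link also implies an ordering constraint.
orderings |= {(p, c) for p, _, c in causal_links}

# One linearization (total order) consistent with the partial order:
graph = {a: set() for a in actions}
for before, after in orderings:
    graph[after].add(before)          # 'after' depends on 'before'
plan = list(TopologicalSorter(graph).static_order())
print(plan[0], plan[-1])              # wake-up comes first, go-to-sleep last
```

If a refinement added a constraint that contradicts an existing causal link, the topological sort would raise a cycle error — the mechanical counterpart of the inconsistency detection mentioned above.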
Thus, case-based planning helps in utilizing specific knowledge obtained by previous experience. This approach is based on the human methodology of tackling a problem. Humans also solve their problems by learning from their experience (Althoff & Aamodt, 1996) and solve a new problem by finding a similar case handled in the past. So, in case-based planning, a new problem is matched against the cases stored in the case base (past experience) and one or more similar cases are retrieved. The solution suggested by the matched cases is then reused and tested for success. Unless the retrieved case is a close match, the solution will have to be revised, which produces a new case that can then be retained as a part of learning. An initial description of a problem defines a new case. This planning procedure is described as a cyclical process consisting of the following steps.

Retrieve  The most similar cases are retrieved from the case base using various methods.
Reuse  The cases are reused in an attempt to solve the new problem.
Revise  The proposed solution of the retrieved case is revised, if necessary.
Retain  The new solution obtained from the retrieved case is retained in the case base as a new case.

Case-based reasoning (CBR) systems will be discussed in detail in Chapter 11.

6.2.4 State-Space Linear Planning

In the process of linear planning, a simple search strategy that uses a stack of unachieved goals is employed: only one goal is solved at a time. The advantages and disadvantages of state-space linear planning are discussed as follows.

Advantages of Linear Planning
• Since the goals are solved one at a time, the search space is considerably reduced.
• Linear planning is especially advantageous if goals are (mainly) independent.
• Linear planning is sound.

Disadvantages of Linear Planning
• It may produce sub-optimal solutions (based on the number of operators in the plan).
• Linear planning is incomplete.
• In linear planning, the planning efficiency depends on the ordering of goals.
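The retrieve step of the cycle above can be sketched as a nearest-case lookup. The similarity measure used here (overlap between the start-state atoms plus overlap between the goal-state atoms) and the case-base entries are purely illustrative assumptions.

```python
def similarity(case_a, case_b):
    """Crude similarity: size of the overlap between the start-state
    atoms plus the overlap between the goal-state atoms (an assumed
    measure for this sketch)."""
    (s1, g1), (s2, g2) = case_a, case_b
    return len(s1 & s2) + len(g1 & g2)

def retrieve(new_case, case_base):
    """Return the stored (case, plan) entry most similar to new_case."""
    return max(case_base, key=lambda entry: similarity(entry[0], new_case))

# Case base: each entry is ((start, goal), plan) -- hypothetical cases.
case_base = [
    ((frozenset({"ON(c,a)", "CLEAR(c)"}), frozenset({"CLEAR(a)"})),
     ["US(c,a)", "PD(c)"]),
    ((frozenset({"ON(b,a)", "CLEAR(b)"}), frozenset({"HOLDING(b)"})),
     ["US(b,a)"]),
]
new = (frozenset({"ON(b,a)", "CLEAR(b)"}), frozenset({"HOLDING(b)"}))
best_case, plan = retrieve(new, case_base)
print(plan)   # -> ['US(b,a)']
```

In a full CBR system the retrieved plan would then be reused, revised if it fails on the new problem, and retained as a fresh case.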
6.2.5 State-Space Non-Linear Planning

In non-linear planning, the sub-goals need not be solved strictly one after another; work on different sub-goals may be interleaved. The advantages and disadvantages of state-space non-linear planning are as follows.

Advantages of Non-Linear Planning
• Non-linear planning is sound and complete.
• The plan generated may be optimal with respect to plan length (depending on the search strategy employed).

Disadvantages of Non-Linear Planning
• All possible goal orderings may have to be considered in non-linear planning, so a larger search space results.
• Non-linear planning requires a more complex algorithm and a lot of book-keeping.

In the next section, we will discuss a specific example of the block world problem and explain the procedures of generating plans to solve goals using linear and non-linear methods.

6.3 Block World Problem: Description

The block world problem basically consists of handling blocks and generating a new pattern from a given pattern (Rich and Knight, 2003). This problem closely resembles a game of block arrangement and construction usually played by children. Our aim is to explain strategies that may lead to a plan (or plans) which may help a robot to solve such problems. For this, let us consider the following assumptions:

• Blocks are of the same size (square in shape).
• Blocks can be stacked on each other.
• There is a flat surface (table) on which blocks can be placed.
• There is a robot arm that can manipulate the blocks. The arm can hold only one block at a time.

In the block world problem, a state is described by a set of predicates, which represent the facts that are true in that state. For every action, we describe the changes that the action makes to the state description. In addition, some statements regarding the things which remain unchanged by the actions are also to be specified. For example, if a robot picks up a block, the colour of the block does not change.
This is the simplest possible approach; the descriptions of the operators or actions used for this problem are given below.

Actions (Operations) Performed by Robot

To explain the concept of the block world problem, let us use the following convention:

• Capital letters X, Y, Z, ..., are used to denote variables.
• Lowercase letters a, b, c, ..., are used to represent specific blocks.

The description of various operators or actions used in this problem is given in Table 6.1.

Table 6.1 Description of Operators or Actions used in Block World Problem

Operator        Short form   Description
UNSTACK(X, Y)   US(X, Y)     Pick up block X from block Y (current position, that is, on Y). The arm must be empty and top of X must be clear.
STACK(X, Y)     ST(X, Y)     Put block X on the top of block Y. The arm must be holding block X. Top of Y should be clear.
PICKUP(X)       PU(X)        Pick up block X from the table and hold it. Initially the arm must be empty and top of X should be clear.
PUTDOWN(X)      PD(X)        Put down block X on the table. The arm must be holding block X before putting down.

In the block world problem, certain predicates are used to represent the problem states for performing the operations given in Table 6.1. These predicates are described in Table 6.2.

Table 6.2 Predicates Used in Block World Problem

Predicate     Description
ON(X, Y)      Block X is on block Y
ONTABLE(X)    Block X is on the table
CLEAR(X)      Top of X is clear
HOLDING(X)    Robot arm is holding X
ARMEMPTY      Robot arm is empty

In the following sections, we will discuss logic-based planning, linear planning, and non-linear planning.

6.4 Logic-Based Planning

We will use the block world problem explained in the preceding section to explain the concept of logic-based planning. In this approach, we have to explicitly state all possible logical statements that are true in the block world problem.
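Under the convention just described, a state is simply the set of ground predicate atoms (Table 6.2 vocabulary) that hold in it, so checking a predicate is a membership test. A minimal sketch, with the particular state chosen for illustration:

```python
# A block-world state as a set of ground predicate atoms:
# block b sits on a; blocks a, c, d rest on the table.
state = {"ON(b,a)", "ONTABLE(a)", "ONTABLE(c)", "ONTABLE(d)",
         "CLEAR(b)", "CLEAR(c)", "CLEAR(d)", "ARMEMPTY"}

def holds(atom, state):
    """An atom holds in a state exactly when it is a member of the set."""
    return atom in state

print(holds("CLEAR(b)", state), holds("CLEAR(a)", state))  # True False
```

Everything not listed in the state is taken to be false — the closed-world reading that the operator descriptions below rely on.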
Some of these logical statements are described as follows:

• If the robot arm is holding an object, then the arm is not empty.
  (∃X) HOLDING(X) → ¬ARMEMPTY
• If the robot arm is empty, then the arm is not holding anything.
  ARMEMPTY → ¬(∃X) HOLDING(X)
• If X is on a table, then X is not on the top of any block.
  (∀X) ONTABLE(X) → ¬(∃Y) ON(X, Y)
• If X is on the top of a block, then X is not on the table.
  (∀X) (∃Y) ON(X, Y) → ¬ONTABLE(X)
• If there is no block on top of block X, then the top of block X is clear.
  (∀X) ¬((∃Y) ON(Y, X)) → CLEAR(X)
• If the top of block X is clear, then there is no block on the top of X.
  (∀X) CLEAR(X) → ¬(∃Y) ON(Y, X)

In addition, axioms reflecting the effect of the operations mentioned in Table 6.1 on a given state have to be provided. Let us assume that a function named GEN generates a new state as a result of the application of some operator/action on a state S. For example, if the action OP applied on a state S generates a new state S1, then S1 is written as

S1 = GEN(OP, S)

The effect of UNSTACK(X, Y) in state S is described by the following axiom.

[CLEAR(X, S) ∧ ON(X, Y, S) ∧ ARMEMPTY(S)] → [HOLDING(X, S1) ∧ CLEAR(Y, S1)]

Here, S1 is the new state obtained after performing the UNSTACK operation. If we execute UNSTACK(X, Y) in S, then we can prove that HOLDING(X, S1) ∧ CLEAR(Y, S1) holds true. Since the interpretation of the axioms showing the effect of the following operations is self-explanatory, we will omit the explanations from now onwards.

The effect of STACK(X, Y) in state S is described as follows.

[HOLDING(X, S) ∧ CLEAR(Y, S)] → [ON(X, Y, S1) ∧ CLEAR(X, S1) ∧ ARMEMPTY(S1)]

The effect of PU(X) in state S is described by the following axiom.

[CLEAR(X, S) ∧ ONTABLE(X, S) ∧ ARMEMPTY(S)] → [HOLDING(X, S1)]

The effect of PD(X) in state S is described as follows.
[HOLDING(X, S)] → [ONTABLE(X, S1) ∧ CLEAR(X, S1) ∧ ARMEMPTY(S1)]

It may be noted here that after any operation is carried out on a given state, we cannot comment on all other situations in the new state S1. For example, after the operation UNSTACK(X, Y), we cannot make a statement regarding the current position of Y, that is, whether Y is still on the table or on some other block. Similarly, we cannot say that properties such as colour or weight of any block in the new state are the same as in the previous state. Therefore, there might be many such properties which do not change with a change in state. So, we have to provide a set of rules, known as frame axioms, describing the properties that do not get affected in the new state with the application of each operator. Table 6.3 lists a few such situations which will not get affected by using the UNSTACK operator. We can define frame axioms for other operators in a similar manner.

Table 6.3 Frame Axioms for Operator UNSTACK

Current state S0 implies state S1 = GEN(UNSTACK(X, Y), S0)

• If block Z or Y is on a table in state S0, then UNSTACK(X, Y) in state S0 will not move block Z or Y in the state S1.
  ONTABLE(Z, S0) → ONTABLE(Z, S1)
  ONTABLE(Y, S0) → ONTABLE(Y, S1)
• If block Z is on block W, or Y is on any block U, then UNSTACK(X, Y) in state S0 will not move blocks Z and Y in the state S1.
  ON(Z, W, S0) → ON(Z, W, S1)
  ON(Y, U, S0) → ON(Y, U, S1)
• If the colour of block X, Y, or any block Z is C (say, red) in state S0, then it remains the same in S1 after the UNSTACK(X, Y) operation is performed.
  COLOR(X, C, S0) → COLOR(X, C, S1)
  COLOR(Y, C, S0) → COLOR(Y, C, S1)
  COLOR(Z, C, S0) → COLOR(Z, C, S1)
• The ON relation is not affected by the UNSTACK operator if the blocks involved in the ON relation are different from those involved in the UNSTACK operation.
  ON(Z, W, S0) ∧ NE(Z, X) → ON(Z, W, S1)

The advantage of this approach is that only a simple mechanism of resolution needs to be performed for all the operations that are required on the state descriptions.
On the other hand, the disadvantage of this approach is that, in the case of complex problems, the number of axioms becomes very large, as we have to enumerate all those properties which are not affected, separately for each operation. Further, if a new attribute is introduced into the problem, it becomes necessary to add a new axiom for each operator.

For handling complex problem domains, we need a mechanism that does not require a large number of frame axioms to be specified explicitly. Such a mechanism was used in the early robot problem-solving system known as STRIPS (STanford Research Institute Problem Solver), which was developed by Fikes and Nilsson in 1971. Each operator in this approach is described by a list of new predicates that become true and a list of old predicates that become false after the said operator is applied. These are called the ADD and DEL (delete) lists, respectively. There is another list called PRE (preconditions), which is specified for each operator; these preconditions must be true before the operator can be applied. Any predicate which is not included on either the ADD or DEL list of an operator is assumed to remain unaffected by it. Frame axioms are specified implicitly in STRIPS; this greatly reduces the amount of information stored.

STRIPS-Style Operators

We observe that by making the frame axioms implicit, we reduce the amount of information that needs to be provided for each operator: only the changes are listed for each effect, and the unaffected attributes need not be mentioned. Even when a new attribute is added, the operator lists do not grow for the operators that do not affect it. Henceforth, we will use short forms of the predicates as given in Table 6.4 for the sake of convenience.

Table 6.4 Short Forms of Predicates

Predicate     ON(X, Y)   ONTABLE(X)   CLEAR(X)   HOLDING(X)   ARMEMPTY
Short form    O(X, Y)    T(X)         C(X)       H(X)         AE

The lists (PRE, DEL, and ADD) required for each operator are given in Table 6.5.
Table 6.5 Operators with PRE, DEL and ADD Lists

Operator     PRE list               DEL list          ADD list
ST(X, Y)     C(Y) ∧ H(X)            C(Y) ∧ H(X)       AE ∧ O(X, Y)
US(X, Y)     O(X, Y) ∧ C(X) ∧ AE    O(X, Y) ∧ AE      H(X) ∧ C(Y)
PU(X)        T(X) ∧ C(X) ∧ AE       T(X) ∧ AE         H(X)
PD(X)        H(X)                   H(X)              T(X) ∧ AE

The state description is updated after each operation by deleting from the state those predicates that are present in the DEL list and adding to the state those predicates that are present on the ADD list of the operator applied. If an incorrect sequence is explored accidentally, then it is possible to return to the previous state so that a different path may be tried.

Linear Planning Using a Goal Stack

The goal stack method is one of the earliest methods that were used for solving compound goals, which may or may not interact with each other. This approach was used by the STRIPS system. In this method, the problem solver makes use of a single stack containing goals as well as operators that have been proposed to satisfy these goals. In the goal stack method, individual sub-goals are solved first and then, at the final stage, the conjoined sub-goals are solved. Plans generated by this method contain the complete sequence of operations for solving the first goal, followed by the complete sequence of operations for the next one, and so on. The problem solver uses a database that describes the current state and the set of operators with PRE, ADD, and DEL lists.

Simple Planning using a Goal Stack

In this section, we will discuss the method of simple planning using a goal stack. Consider the following goal that consists of sub-goals G1, G2, ..., Gn. Let us explain the computational steps.

GOAL = G1 ∧ G2 ∧ ... ∧ Gn

The sub-goals G1, ..., Gn are stacked (in any order) with the compound goal G1 ∧ ... ∧ Gn at the bottom of the stack. The status of the stack is shown in Fig. 6.1.

Top    →  G1
          G2
          ...
          Gn
Bottom →  G1 ∧ G2 ∧ ... ∧ Gn

Figure 6.1 Status of Stack

The algorithm that is required to solve the goal is given below.
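The update rule just described — delete the DEL list, then insert the ADD list — can be sketched directly. The encoding below (states as Python sets of atoms, operators as (name, PRE, DEL, ADD) tuples following Table 6.5) is an assumption for illustration; whether C(X) persists while X is held varies between presentations, and this sketch keeps it, which does not affect the final goal check. The plan executed is the four-operator sequence derived in the worked example that follows.

```python
def apply(state, op):
    """Apply a STRIPS operator: remove the DEL list, insert the ADD list.
    Returns None if the preconditions (PRE) do not hold in `state`."""
    name, pre, dele, add = op
    if not pre <= state:
        return None
    return (state - dele) | add

# Operators instantiated from Table 6.5 for the blocks involved
# (O = ON, T = ONTABLE, C = CLEAR, H = HOLDING, AE = ARMEMPTY):
US_ba = ("US(b,a)", {"O(b,a)", "C(b)", "AE"}, {"O(b,a)", "AE"}, {"H(b)", "C(a)"})
ST_bd = ("ST(b,d)", {"C(d)", "H(b)"}, {"C(d)", "H(b)"}, {"AE", "O(b,d)"})
PU_a  = ("PU(a)",  {"T(a)", "C(a)", "AE"}, {"T(a)", "AE"}, {"H(a)"})
ST_ac = ("ST(a,c)", {"C(c)", "H(a)"}, {"C(c)", "H(a)"}, {"AE", "O(a,c)"})

state = {"O(b,a)", "T(a)", "T(c)", "T(d)", "C(b)", "C(c)", "C(d)", "AE"}
for op in [US_ba, ST_bd, PU_a, ST_ac]:
    state = apply(state, op)

goal = {"O(a,c)", "O(b,d)", "T(c)", "T(d)", "C(a)", "C(b)", "AE"}
print(goal <= state)   # True: every goal atom holds in the final state
```

Because every operator's preconditions are re-checked before it fires, an incorrectly ordered sequence is caught immediately (`apply` returns None) rather than silently corrupting the state.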
Algorithm 6.1  GOAL Stack
{
  • Push the conjoined sub-goals and the individual sub-goals onto Stack;
  • flag = true;
  • While (Stack ≠ ∅ and flag = true) do
    {
      If the top element of the stack is an operator then
        {
          • POP it and add it to PLAN_QUEUE of operations to be performed in a plan
          • Generate a new state from the current state by using the ADD and DEL lists of the operator
        }
      Else
        {
          If the top of the stack is a sub-goal and is true in the current state then POP it
          Else
            {
              • Identify operator(s) that satisfy the top sub-goal of the stack
              • If (no operator exists) then set flag = false
              Else
                • Choose one operator that satisfies the sub-goal (use some heuristic)
                • POP the sub-goal and PUSH the chosen operator along with its preconditions onto the stack
            }
        }
    }
  • If (flag = false) then the problem solver returns no plan, else return the plan stored in PLAN_QUEUE for the problem;
}

Solving Block World Problem using Goal Stack Method

To illustrate the working of Algorithm 6.1, consider the following example, where the start and goal states of a block world problem are shown in Fig. 6.2. Here a, b, c, and d are specific blocks.

Figure 6.2 Start and Goal States of a Block World Problem

The logical representations of the start and goal states may be written as

Start state:  O(b, a) ∧ T(a) ∧ T(c) ∧ T(d) ∧ C(b) ∧ C(c) ∧ C(d) ∧ AE
Goal state:   O(a, c) ∧ O(b, d) ∧ T(c) ∧ T(d) ∧ C(a) ∧ C(b) ∧ AE

We notice that (T(c) ∧ T(d) ∧ C(b) ∧ AE) is true in both start and goal states. Hence, for the sake of convenience, we can represent it by CSG (conjoined sub-goals present in both the states). We need to solve the sub-goals O(a, c), O(b, d), and C(a), and while solving these sub-goals, we will ensure that CSG remains true. We will first put these sub-goals in some order in the stack. Let the initial status of the stack be as shown in Fig. 6.3.

C(a)
O(a, c)
O(b, d)
C(a) ∧ O(a, c) ∧ O(b, d) ∧ CSG

Figure 6.3 Initial Status of Goal Stack

Now we need to identify an operator that can solve C(a). We notice that the operator US(X, a) can be applied, where X gets bound to the actual block on top of 'a'. Here we pop C(a) and push US(X, a) into the goal stack. The status of the goal stack changes as shown in Fig. 6.4.

US(X, a)
O(a, c)
O(b, d)
C(a) ∧ O(a, c) ∧ O(b, d) ∧ CSG

Figure 6.4 Status of Goal Stack
We notice that the operator US(X, a) can ‘pplied, where X gets bound to the actual block on top of ‘a’. Here pop C(a) and-push 2) in the goal stack. The status of the goal stack changes as shown in Fig. 6.4, USK, 0.0) 00.) fa) A0f@, 6) AO%b, 6) A686 6.14 Atticialinteligence s preconditions are true, therefore, ye — cean be applied only if its precom ; one = ne ae ie tack. The changed status of stack now looks as shown in Big. its preconditions of 0%. a) coo recondition of UNSTACK Ae 01%, 2) ACO) A AE six. a) Operator 018.) ow.) Ca) A fa, ¢) AOID, 6) ACSG Figure 6.5 Status of Stack on Addition of Preconditions The start state of the problem may be written as State 0 State 0 (Start state) Ob, a) A Ta) A Tic) A Td) A C(b) A Cle) A Cid) AAE From State 0, we find that b’ is on top of ‘a’, so the variable X is unified with block *b". Now, al preconditions (O(b, a), C(b), AE} of US(b, a) are satisfied. Therefore, the next step is to pop these preconditions along with its compound sub goal if itis still true. In this case, we find compound sub goal to be true. We now pop the top operator US(b, a) and add it in a PLAN_QUEUE of the sequence of operators. Initially, PLAN_QUEUE was empty but now it contains US(b, a). PLAN_QUEUE = US(b, a) A new state State 1 is produced by using its ADD and DEL lists written as State 1 Ta) A Tie) A Td) AH(b) A Cla) AC(c) A C(d) The transition from State 0 to State 1 is shown in Fig, 6.6. State 0 Figure 6.6 Transition from State 0to State 1 ‘Advanced Problem-Solving Paradigm: Planning 6.15 .y goal stack is shown in Fig. 6,7. 0,0) O%, 4) Cla) O(a, ¢) A.Ofb, ) ACSC. Figure 6.7 New Goal Stack Jet us solve the top sub goal O(a, ¢). For solving thi 5 ig this, we can only apply the operator ). So, we pop O(a, ¢) and push ST(a, c) al os ack ls given in Fig. 3 ) along with its preconditions in the stack. The cre) Hea) Precondition of UNSTACK Co) A Hla) Tac) =— Operator 2b. 
4) Cla) A Ola, c) AOW, A) ACSE Figure 6.8 Changed Goal Stack State 1: {T(a) AT(c) A T(d) A H(b) A Cla) A C() A C(d)}, we notice that C(c) is true, so we ‘Then, we observe that the next sub goal H(a) is unachieved (not true), so we will solve this. waking H(a) to be true, We apply operator PU(a) or UN(a, X). In fact, any of the two operators : applied but let us choose PU) initially. Now pop H(a) and push PU(a) with its precondi- to the stack. The current stack status looks like that shown in Fig, 6.9. AE Ca) Precondition of UNSTACK | Te) Puta) <— Operator cle) A Hla) ‘sT(a,c) ob, 4) Cla) AO(a, ¢) AOID, 4) ACSG Figure 6.9 Modified Goal Stack 6.16 Artificial intelligence Now top sub goal of AE above stack is to be solved. We notice that AE is not true as the arm holding *b’ (as given in State 1), In order to make the arm empty, we need to perform either ST(b, X) or PD(b). Let us choose $79, X). If we look a little ahead, we note that we want ‘b’ on ‘d’. Therefore, we need to unify X wig “d°. So, we replace AE by ST(b, d) along with its preconditions. The goal stack now changes ., that shown in Fig, 6.10, ea) HO) Preconditions of STACK Cd) AHO) ST (bd) =— Operator fa) | Tia) | Pula) fe) A Ha) Tia, ¢) 6.4) Cla) AO(a, ©) AO, 6) ACSG Figure 6.10 New Goal Stack Now, C(d) and H(b) are both true in State 1. So, we pop these predicates along with the compound sub goal (C(d) L H(b)} which is also true, We notice that operator ST(b, d) can be applied as all its preconditions are satisfied. Therefore, we pop ST(b, d) and add ST(b, d) in the queue of sequence PLAN_QUEUE of operators. Thus, we can write PLAN_QUEUE = US (b, a), ST(b, d) Now we produce a new state State 2 using its ADD and DEL lists that can be written as State 2° T/a) A Tc) A T(d) A O(b, d) A Cla) A C(c) AC(b) AE. 
The transition from State 1 to State 2 is shown in Fig. 6.11.

Figure 6.11 Transition from State 1 to State 2

The goal stack obtained as a result of this transition is shown in Fig. 6.12.

    C(a)
    T(a)
    C(a) ∧ T(a) ∧ AE             ← preconditions of PU(a)
    PU(a)                         ← operator
    C(c) ∧ H(a)
    ST(a, c)
    O(b, d)
    C(a) ∧ O(a, c) ∧ O(b, d) ∧ AE

Figure 6.12 New Goal Stack

C(a) and T(a) are true (from State 2); hence, the preconditions of PU(a) are satisfied and the operation PU(a) can be performed. Therefore, we pop it, add it to the queue of the sequence of operators, and generate a new state, State 3. We can now write

PLAN_QUEUE = US(b, a), ST(b, d), PU(a)

State 3 of the problem can be written as

State 3: T(c) ∧ H(a) ∧ T(d) ∧ O(b, d) ∧ C(c) ∧ C(b)

The transition from State 2 to State 3 is shown in Fig. 6.13.

Figure 6.13 Transition from State 2 to State 3

The goal stack thus obtained as a result of this change is shown in Fig. 6.14.

    C(c) ∧ H(a)
    ST(a, c)                      ← operator
    O(b, d)
    C(a) ∧ O(a, c) ∧ O(b, d) ∧ AE

Figure 6.14 Changed Goal Stack

Further, C(c) and H(a) are both true in State 3, so we pop them, and since all preconditions of ST(a, c) are met, we pop the operator ST(a, c) and add it to the solution queue. Thus, we can write

PLAN_QUEUE = US(b, a), ST(b, d), PU(a), ST(a, c)

State 4 of the problem can be written as

State 4: T(c) ∧ O(a, c) ∧ T(d) ∧ O(b, d) ∧ C(a) ∧ C(b) ∧ AE

The transition from State 3 to State 4 is shown in Fig. 6.15.

Figure 6.15 Transition from State 3 to State 4

The goal stack obtained as a result of this change is shown in Fig. 6.16.

    O(b, d)
    C(a) ∧ O(a, c) ∧ O(b, d) ∧ AE

Figure 6.16 New Goal Stack

From the database of State 4, we see that O(b, d) is already solved (true), and the conjoined sub goal is also true, so we pop these sub goals. The goal stack now becomes empty, and the problem is solved.
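The whole derivation can be checked mechanically by replaying PLAN_QUEUE from State 0 and testing the conjoined goal in the final state. The operator table below is a reconstruction from the chapter's trace (preconditions doubling as delete lists, in the Rich & Knight style); the encoding is an illustrative assumption, not the book's code.

```python
# Replay US(b,a), ST(b,d), PU(a), ST(a,c) from State 0 and verify that the
# conjoined goal C(a), O(a,c), O(b,d), AE holds in the resulting state.

OPS = {  # name -> (precondition-and-delete set, add set)
    "US": lambda x, y: ({("O", x, y), ("C", x), ("AE",)},
                        {("H", x), ("C", y)}),
    "ST": lambda x, y: ({("C", y), ("H", x)},
                        {("O", x, y), ("C", x), ("AE",)}),
    "PU": lambda x: ({("C", x), ("T", x), ("AE",)}, {("H", x)}),
    "PD": lambda x: ({("H", x)}, {("T", x), ("C", x), ("AE",)}),
}

def run(state, plan):
    for name, *args in plan:
        pre, add = OPS[name](*args)
        assert pre <= state, f"{name}{tuple(args)} is not applicable"
        state = (state - pre) | add      # STRIPS: delete, then add
    return state

state0 = {("O", "b", "a"), ("T", "a"), ("T", "c"), ("T", "d"),
          ("C", "b"), ("C", "c"), ("C", "d"), ("AE",)}
plan_queue = [("US", "b", "a"), ("ST", "b", "d"), ("PU", "a"), ("ST", "a", "c")]
goal = {("C", "a"), ("O", "a", "c"), ("O", "b", "d"), ("AE",)}
print(goal <= run(state0, plan_queue))   # True: the plan achieves the goal
```

The intermediate states produced by `run` match States 1 through 4 of the text step by step.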
Now, the goal solver will return the plan from PLAN_QUEUE containing the sequence of operators to be applied:

US(b, a), ST(b, d), PU(a), ST(a, c)

The plan is generated offline, before execution, and is then given to the robot to change the start state into the goal state. It should be noted that heuristic information can be applied to guide the search process by choosing one operator over another when more than one is applicable. Information regarding interactions among the sub goals can also help in producing a good overall solution.

Although the goal stack method appears to be straightforward for simple problems, it may not give good solutions for more difficult problems, such as the Sussman anomaly problem, and hence it is not considered to be a very good method. Let us consider the Sussman anomaly problem and solve it using the goal stack method to show the inefficiency of this method. For the sake of simplicity, we will show only the major steps, to illustrate the fact that this method produces redundant steps.

To begin with, the start state and goal state of the Sussman anomaly problem may be written as follows and are shown in Fig. 6.17.

Start state (State 0): O(c, a) ∧ T(a) ∧ T(b) ∧ C(c) ∧ C(b) ∧ AE
Goal state: O(a, b) ∧ O(b, c) ∧ T(c) ∧ C(a) ∧ AE

Figure 6.17 Start and Goal States of Sussman Anomaly Problem

For solving the sub goal O(a, b), we have to apply the following operators, obtained using the goal stack method:

US(c, a)
PD(c)
PU(a)
ST(a, b)

State 1, generated after applying these operations, is represented as

State 1: O(a, b) ∧ T(b) ∧ T(c) ∧ C(c) ∧ C(a) ∧ AE

The transition from State 0 to State 1 is shown in Fig. 6.18.

Figure 6.18 Transitions from State 0 to State 1

Now, our aim is to satisfy the sub goal O(b, c). The sequence of operators US(a, b), PD(a), PU(b), and ST(b, c) is applied and State 2 is generated. This state is represented as

State 2: O(b, c) ∧ T(c) ∧ T(a) ∧ C(b) ∧ C(a) ∧ AE

The transition from State 1 to State 2 due to the application of the sequence of operators mentioned above is shown in Fig. 6.19.
Figure 6.19 Transitions from State 1 to State 2

Now we try to satisfy the conjoined goal O(a, b) ∧ O(b, c) ∧ T(c) ∧ C(a) ∧ AE. We notice that while solving O(b, c), we have undone the already solved sub goal O(a, b). In order to solve O(a, b) again, we apply the operations PU(a) and ST(a, b). The conjoined goal is checked again and is found to be satisfied now. We obtain the goal state, and therefore, can now collect the complete plan. The transition from State 2 to the goal state is shown in Fig. 6.20.

Figure 6.20 The Goal State from State 2

The complete plan thus generated is the sequence of all the sub plans generated above. Therefore, the following operations will be present in the solution sequence:

i. US(c, a)
ii. PD(c)
iii. PU(a)
iv. ST(a, b)
v. US(a, b)
vi. PD(a)
vii. PU(b)
viii. ST(b, c)
ix. PU(a)
x. ST(a, b)

Although this plan eventually achieves the desired goal, it is not considered to be efficient because of the presence of a number of redundant steps, such as stacking and unstacking of the same blocks, one performed immediately after the other. We can get an efficient plan from this one simply by repairing it. Repairing is done by looking at those steps where operations are done and undone immediately, such as ST(X, Y) followed by US(X, Y), or PU(X) followed by PD(X). In the above plan, we notice that stacking and unstacking are done at steps (iv) and (v). By removing these complementary steps, we obtain the new plan as follows:

i. US(c, a)
ii. PD(c)
iii. PU(a)
iv. PD(a)
v. PU(b)
vi. ST(b, c)
vii. PU(a)
viii. ST(a, b)

We notice that in this new revised plan, PU(a) and PD(a) at steps (iii) and (iv) are also complementary operations, so we remove them too. The final plan is as follows:

i. US(c, a)
ii. PD(c)
iii. PU(b)
iv. ST(b, c)
v. PU(a)
vi. ST(a, b)

For the sake of completeness, the sequence of operations applied from the start state to the goal state is shown in Fig. 6.21.
Figure 6.21 Complete Sequence of Operations for Sussman Anomaly
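The repair step described above — deleting adjacent do/undo pairs until none remain — can be sketched as follows. The pairs of complementary operators are taken from the text; the list-of-tuples encoding is an assumption of this sketch.

```python
# Repair a plan by repeatedly removing an adjacent complementary pair:
# ST(X,Y) next to US(X,Y), or PU(X) next to PD(X), with identical arguments.

COMPLEMENT = {("ST", "US"), ("US", "ST"), ("PU", "PD"), ("PD", "PU")}

def repair(plan):
    changed = True
    while changed:
        changed = False
        for i in range(len(plan) - 1):
            (op1, *args1), (op2, *args2) = plan[i], plan[i + 1]
            if (op1, op2) in COMPLEMENT and args1 == args2:
                del plan[i:i + 2]        # drop the do/undo pair
                changed = True
                break                    # rescan from the start
    return plan

# The ten-step Sussman anomaly plan from the text:
raw = [("US", "c", "a"), ("PD", "c"), ("PU", "a"), ("ST", "a", "b"),
       ("US", "a", "b"), ("PD", "a"), ("PU", "b"), ("ST", "b", "c"),
       ("PU", "a"), ("ST", "a", "b")]
final = repair(raw)
# final is the six-step plan US(c,a), PD(c), PU(b), ST(b,c), PU(a), ST(a,b)
```

Note that removing one pair can expose another (here, deleting ST(a, b)/US(a, b) brings PU(a) and PD(a) together), which is why the scan restarts until no pair fires.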
