
The following are detailed explanations for a set of study questions on Artificial Intelligence and logic.
1. Explain Horn Clause with example.
A Horn clause is a special type of logical formula that has at most one positive literal. In
propositional logic, a literal is either a propositional variable (e.g., P) or the negation of a
propositional variable (e.g., \neg P).
There are three types of Horn clauses:
● Fact (Unit Clause): A single positive literal, representing an unconditionally true statement.
○ Example: P (meaning "P is true")
● Rule (Definite Clause): An implication whose conclusion is a single positive literal and whose premise is a conjunction of one or more positive literals.
○ Example: P \land Q \implies R (meaning "If P and Q are true, then R is true"; in disjunctive form, \neg P \lor \neg Q \lor R)
● Goal Clause (Query), also called a headless clause: A clause with no positive literals, only negative literals. Often used to represent a query to a knowledge base, aiming to prove a contradiction.
○ Example: \neg P \lor \neg Q (equivalent to P \land Q \implies \text{false})
Why are Horn Clauses important?
●​ Computational Efficiency: Reasoning with Horn clauses is computationally more
efficient than with arbitrary propositional logic formulas. This is because algorithms like
forward chaining and backward chaining are complete for Horn clauses and can be
implemented efficiently.
●​ Prolog: The programming language Prolog is based on Horn clauses.
Example:
Consider a simple knowledge base:
1.​ Fact: \text{is\_bird(Tweety)} (Tweety is a bird)
2.​ Fact: \text{is\_yellow(Tweety)} (Tweety is yellow)
3.​ Rule: \text{is\_bird}(X) \land \text{is\_yellow}(X) \implies \text{can\_fly}(X) (If X is a bird
and X is yellow, then X can fly)
All these are Horn clauses. We can use forward chaining to infer that \text{can\_fly(Tweety)} is
true.
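
To make this concrete, below is a minimal Python sketch of forward chaining over ground (propositional) Horn clauses. The fact and rule strings mirror the Tweety example; the representation of rules as (premise set, conclusion) pairs is an illustrative assumption, not a standard API, and a full Prolog-style system would add variables and unification.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all known, until no new
    facts can be derived; return the full set of known facts."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)  # the rule fires, adding its head
                changed = True
    return known

facts = {"is_bird(Tweety)", "is_yellow(Tweety)"}
rules = [({"is_bird(Tweety)", "is_yellow(Tweety)"}, "can_fly(Tweety)")]
print("can_fly(Tweety)" in forward_chain(facts, rules))  # True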
2. Explain resolution. Write down the steps involved in resolution.
Resolution is a powerful inference rule used in automated theorem proving. It is refutation-complete for propositional logic and first-order logic, meaning that if a statement is logically entailed by a set of axioms, resolution can prove it. It works by refutation: the negation of the goal is combined with the knowledge base, and resolution shows that this combination leads to a contradiction.
Core Idea: Given two clauses where one contains a literal L and the other contains its negation
\neg L, resolution allows us to infer a new clause (the resolvent) that contains all the literals from
the original clauses except L and \neg L.
Example (Propositional Logic): Given clauses:
●​ C_1: P \lor Q
●​ C_2: \neg Q \lor R
The literal Q in C_1 and \neg Q in C_2 are complementary. We can resolve them to get:
●​ Resolvent: P \lor R
Steps involved in Resolution (for proving a statement \alpha from a knowledge base KB):
1.​ Convert KB and \neg \alpha to Conjunctive Normal Form (CNF):
○​ A formula is in CNF if it is a conjunction of clauses, where each clause is a
disjunction of literals.
○​ All sentences in the knowledge base (KB) and the negation of the statement to be
proved (\neg \alpha) must be converted into CNF. This involves steps like
eliminating implications, moving negations inwards, and distributing disjunctions
over conjunctions.
2.​ Add \neg \alpha to the Knowledge Base: Create a set of clauses S = KB \cup \{\neg
\alpha\}. This is the set of clauses we will attempt to refute.
3.​ Repeatedly Apply the Resolution Rule:
○​ Select two clauses from S that contain a complementary pair of literals (e.g., L and
\neg L).
○​ Derive a new clause (the resolvent) by combining the two selected clauses and
removing the complementary literals.
○​ Add this new resolvent to S.
4.​ Check for Empty Clause:
○​ If the empty clause (\Box, represented as false) is derived, then a contradiction has
been found. This means that the initial assumption (\neg \alpha) must be false, and
therefore \alpha must be true.
5. Termination:
○ If the empty clause is derived, the proof is successful, and \alpha is proven.
○ If the set of clauses becomes saturated (no new resolvents can be added) and the empty clause has not been derived, then \alpha is not entailed by KB. In propositional logic, saturation is guaranteed to occur eventually; in first-order logic, the procedure may never terminate when \alpha is not entailed.
Illustration of Resolution (Propositional Logic):
Let's say we want to prove R from the following KB:
1.​ P
2.​ P \implies Q
3.​ Q \implies R
Steps:
1.​ Convert to CNF:
○​ C_1: P
○​ C_2: \neg P \lor Q (from P \implies Q)
○​ C_3: \neg Q \lor R (from Q \implies R)
2.​ Add negation of goal (\neg R):
○​ C_4: \neg R
3.​ Resolution:
○​ Resolve C_1 (P) and C_2 (\neg P \lor Q):
■​ Resolvent C_5: Q
○​ Resolve C_5 (Q) and C_3 (\neg Q \lor R):
■​ Resolvent C_6: R
○​ Resolve C_6 (R) and C_4 (\neg R):
■​ Resolvent C_7: \Box (Empty Clause)
Since the empty clause is derived, R is proven.
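
This refutation can be reproduced mechanically. Below is a small Python sketch of propositional resolution by refutation; the clause encoding (frozensets of string literals, with "~" marking negation) is an illustrative assumption. The clause set encodes the KB {P, P \implies Q, Q \implies R} together with the negated goal \neg R from the worked example.

from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses (one per complementary pair)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

def resolution_refutes(clauses):
    """True if the empty clause is derivable, i.e. the set is unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:            # empty clause: contradiction found
                    return True
                new.add(frozenset(r))
        if new <= clauses:           # saturation: nothing new, no refutation
            return False
        clauses |= new

kb_plus_negated_goal = [
    frozenset({"P"}),                # C1
    frozenset({"~P", "Q"}),          # C2: P => Q
    frozenset({"~Q", "R"}),          # C3: Q => R
    frozenset({"~R"}),               # C4: negated goal
]
print(resolution_refutes(kb_plus_negated_goal))  # True, so R is proven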
3. Write the steps to convert into clausal form.
Converting a logical formula into clausal form (also known as Conjunctive Normal Form - CNF)
is a crucial preprocessing step for resolution-based theorem proving. The goal is to transform
any well-formed formula into a conjunction of disjunctions of literals.
Here are the standard steps to convert a formula into clausal form:
1.​ Eliminate Implications and Bi-implications:
○​ Replace all occurrences of \alpha \implies \beta with \neg \alpha \lor \beta.
○​ Replace all occurrences of \alpha \iff \beta with (\neg \alpha \lor \beta) \land (\neg
\beta \lor \alpha).
2.​ Move Negations Inward (using De Morgan's Laws and double negation):
○​ Apply De Morgan's Laws:
■​ \neg (\alpha \land \beta) \equiv \neg \alpha \lor \neg \beta
■​ \neg (\alpha \lor \beta) \equiv \neg \alpha \land \neg \beta
○​ Apply Double Negation: \neg (\neg \alpha) \equiv \alpha
○​ For quantifiers (in First-Order Logic):
■​ \neg (\forall x \ P(x)) \equiv \exists x \ \neg P(x)
■​ \neg (\exists x \ P(x)) \equiv \forall x \ \neg P(x)
○​ Continue this process until negations only appear directly before atomic predicates.
3.​ Standardize Variables (if First-Order Logic):
○​ If there are multiple quantifiers, ensure that each quantifier uses a unique variable
name. For example, \forall x \ P(x) \land \forall x \ Q(x) should be standardized to
\forall x \ P(x) \land \forall y \ Q(y). This avoids variable capture issues.
4.​ Skolemization (Eliminate Existential Quantifiers - if First-Order Logic):
○​ Replace existential quantifiers with Skolem constants or Skolem functions.
○​ If an existential quantifier is not within the scope of a universal quantifier, replace
\exists x \ P(x) with P(C), where C is a new unique constant (Skolem constant).
○​ If an existential quantifier is within the scope of one or more universal quantifiers,
replace \exists y \ P(x, y) with P(x, F(x)), where F is a new unique function (Skolem
function) whose arguments are all the universally quantified variables whose scope
contains the existential quantifier.
○​ After Skolemization, remove all existential quantifiers.
5.​ Drop Universal Quantifiers (if First-Order Logic):
○​ Once all existential quantifiers are eliminated, and all variables are implicitly
universally quantified, you can simply drop the universal quantifiers. The remaining
formula is assumed to be universally quantified.
6.​ Convert to Conjunctive Normal Form (Distribute \lor over \land):
○​ Apply the distributive law A \lor (B \land C) \equiv (A \lor B) \land (A \lor C)
repeatedly until the formula is a conjunction of clauses.
○​ Example: (P \land Q) \lor R \equiv (P \lor R) \land (Q \lor R)
7.​ Convert to a Set of Clauses:
○​ Each conjunct in the CNF formula is a clause. Represent the entire formula as a set
of these clauses. Each clause is a disjunction of literals.
Example (Propositional Logic): Convert P \implies (Q \land R) to clausal form.
1.​ Eliminate Implication: \neg P \lor (Q \land R)
2.​ Move Negations Inward: (No negations to move in this case)
3.​ Distribute \lor over \land: (\neg P \lor Q) \land (\neg P \lor R)
4.​ Convert to a Set of Clauses: \{\{\neg P, Q\}, \{\neg P, R\}\} (Often written as \{\neg P \lor
Q, \neg P \lor R\})
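
For propositional formulas, these steps can be checked mechanically. A quick sketch using the third-party sympy library (assuming it is installed), whose to_cnf function applies the same rewriting steps (eliminate implications, push negations inward, distribute \lor over \land):

from sympy import symbols
from sympy.logic.boolalg import Implies, to_cnf

P, Q, R = symbols("P Q R")
print(to_cnf(Implies(P, Q & R)))  # (Q | ~P) & (R | ~P), up to term ordering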
4. Explain Hill climbing with block diagram. Write the disadvantages of Hill climbing.
Hill Climbing is a local search algorithm that iteratively moves in the direction of increasing
value (or decreasing cost, depending on the objective). It is named after the metaphor of
climbing a hill: at each step, you look for the steepest path upwards from your current position
and take a step in that direction.
Concept: It starts with an arbitrary solution to a problem and then tries to find a better solution
by incrementally changing a single element of the solution. If the change produces a better
solution, an incremental change is made to the new solution, and so on, until no further
improvements can be found.
Block Diagram of Hill Climbing:
+---------------------+
|  Current State (S)  |<----------------------------------+
+----------+----------+                                   |
           |                                              |
           | Evaluate                                     |
           V                                              |
+---------------------+                                   |
| Objective Function  |                                   |
|    f(S) (value)     |                                   |
+----------+----------+                                   |
           |                                              |
           | Generate Neighbors                           |
           V                                              |
+---------------------+                                   |
| Neighboring States  |                                   |
|   (S', S'', ...)    |                                   |
+----------+----------+                                   |
           |                                              |
           | Evaluate Neighbors                           |
           V                                              |
+---------------------+                                   |
| Find Best Neighbor  |                                   |
|      (S_best)       |                                   |
+----------+----------+                                   |
           |                                              |
           | Compare f(S_best) with f(S)                  |
           V                                              |
+---------------------+   If Yes   +------------------+   |
| Is S_best Better?   |----------->| Set Current State|---+
+----------+----------+            |    S = S_best    |
           |                       +------------------+
           | If No
           V
+---------------------+
|  Local Maximum /    |   TERMINATE (no better neighbor)
|  Optimum Reached    |
+---------------------+

Explanation of the Block Diagram:


1.​ Current State (S): The algorithm begins at an initial state, which represents a potential
solution to the problem.
2.​ Evaluate Objective Function (f(S)): The current state is evaluated using an objective
function (also called a heuristic function or evaluation function). This function quantifies
how "good" the current solution is.
3.​ Generate Neighbors: The algorithm generates a set of neighboring states. A neighbor is
a state that can be reached from the current state by making a small, predefined change
(e.g., changing one variable's value, swapping two elements).
4.​ Evaluate Neighbors: Each generated neighbor is evaluated using the same objective
function.
5.​ Find Best Neighbor (S_best): From the evaluated neighbors, the algorithm selects the
one that yields the best objective function value (e.g., the highest value for maximization,
or lowest for minimization).
6.​ Compare and Update:
○​ If the best neighbor (S_{best}) is better than the current state (S), then the algorithm
moves to S_{best} (i.e., S \leftarrow S_{best}). The process then repeats from the
"Evaluate Objective Function" step.
○​ If the best neighbor (S_{best}) is not better than the current state (S) (or if there are
no better neighbors), the algorithm terminates. This usually means it has reached a
local optimum.
Disadvantages of Hill Climbing:
Hill climbing is a simple and often fast algorithm, but it suffers from several significant
drawbacks:
1.​ Local Maxima (or Minima): This is the most significant disadvantage. The algorithm can
get stuck in a "local maximum" (for maximization problems) or "local minimum" (for
minimization problems) that is not the global optimum. It will terminate because there are
no better neighbors in its immediate vicinity, even if a much better solution exists further
away.
○​ Analogy: Reaching the top of a small hill when there's a much taller mountain
nearby.
2. Ridges: A ridge is a region of the search space that is higher than its surroundings but slopes along a direction that no single move follows directly. Because each available move cuts across the ridge rather than along it, the algorithm may zig-zag slowly from side to side, or get stuck entirely, even though higher ground lies along the ridge's crest.
3.​ Plateaux: A plateau is a flat area in the search space where all neighboring states have
the same objective function value as the current state. The algorithm cannot determine
which direction to move to find a better solution and may wander aimlessly or terminate
prematurely.
4.​ Lack of Global Optimum Guarantee: Due to getting stuck in local optima, hill climbing
does not guarantee finding the global optimal solution.
5.​ Sensitivity to Initial State: The quality of the solution found by hill climbing can heavily
depend on the initial starting state. Different starting points might lead to different local
optima.
6. Stochastic Variants (when applicable): If random choices are involved in generating neighbors or selecting among them, the algorithm is no longer deterministic, and repeated runs from the same starting state may yield different results. (Pure hill climbing, by contrast, is deterministic.)
7.​ No Backtracking: Hill climbing is a "greedy" algorithm. It never looks back at previously
visited states or considers alternative paths. Once a move is made, it's committed.
To overcome these disadvantages, more sophisticated search algorithms like Simulated
Annealing, Genetic Algorithms, or Tabu Search are often used, which incorporate mechanisms
to escape local optima.
5. Write the algorithm for Simple Hill Climbing and Steepest Ascent Hill Climbing.
Both Simple Hill Climbing and Steepest Ascent Hill Climbing are greedy local search algorithms.
The main difference lies in how they choose the next state.

Algorithm for Simple Hill Climbing


Simple Hill Climbing is the most basic form. It moves to the first neighbor it finds that is better
than the current state, without evaluating all neighbors.
Input:
●​ initial_state: The starting point in the search space.
●​ objective_function: A function that evaluates the "goodness" of a state (e.g., higher value
is better for maximization).
●​ generate_neighbors: A function that generates all possible neighboring states from a
given state.
Output:
●​ current_state: The best state found (a local optimum).
Algorithm Steps:
1.​ Initialize current_state = initial_state.
2. Loop (repeat until a termination condition is met):
   a. current_value = objective_function(current_state).
   b. neighbors = generate_neighbors(current_state).
   c. found_better_neighbor = false.
   d. For each neighbor in neighbors:
      i. neighbor_value = objective_function(neighbor).
      ii. If neighbor_value is better than current_value (e.g., neighbor_value > current_value for maximization):
         1. current_state = neighbor.
         2. found_better_neighbor = true.
         3. Break (exit the inner loop and return to step 2 to re-evaluate from the new current_state).
   e. If found_better_neighbor is false:
      i. Return current_state (no better neighbor found; a local optimum has been reached).
Key Characteristic: It takes the first available improvement, not necessarily the best
improvement among neighbors.

Algorithm for Steepest Ascent Hill Climbing


Steepest Ascent Hill Climbing (also known as Gradient Hill Climbing) is more thorough. It
examines all possible neighbors from the current state and chooses the one that offers the
greatest improvement.
Input:
●​ initial_state: The starting point in the search space.
●​ objective_function: A function that evaluates the "goodness" of a state (e.g., higher value
is better for maximization).
●​ generate_neighbors: A function that generates all possible neighboring states from a
given state.
Output:
●​ current_state: The best state found (a local optimum).
Algorithm Steps:
1.​ Initialize current_state = initial_state.
2. Loop (repeat until a termination condition is met):
   a. current_value = objective_function(current_state).
   b. neighbors = generate_neighbors(current_state).
   c. best_neighbor = None.
   d. best_neighbor_value = current_value (or a very low/high value, depending on maximization/minimization).
   e. For each neighbor in neighbors:
      i. neighbor_value = objective_function(neighbor).
      ii. If neighbor_value is better than best_neighbor_value (e.g., neighbor_value > best_neighbor_value for maximization):
         1. best_neighbor = neighbor.
         2. best_neighbor_value = neighbor_value.
   f. If best_neighbor_value is better than current_value:
      i. current_state = best_neighbor (then repeat from step 2).
   g. Else (best_neighbor_value is not better than current_value):
      i. Return current_state (no better neighbor found; a local optimum has been reached).
Key Characteristic: It always chooses the neighbor that provides the maximum immediate
improvement. This makes it more computationally expensive per step than simple hill climbing,
but it might lead to a local optimum faster or reach a slightly better local optimum if multiple
good paths exist.
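
A matching Python sketch of Steepest Ascent Hill Climbing, using the same illustrative objective function and neighbor generator as the Simple Hill Climbing sketch above. The only change is that every neighbor is scored before a move is made.

def steepest_ascent_hill_climbing(initial_state, objective_function,
                                  generate_neighbors):
    current_state = initial_state
    while True:
        current_value = objective_function(current_state)
        best_neighbor, best_value = None, current_value
        for neighbor in generate_neighbors(current_state):
            value = objective_function(neighbor)
            if value > best_value:                # strictly better than best so far
                best_neighbor, best_value = neighbor, value
        if best_neighbor is None:
            return current_state                  # no improving neighbor: stop
        current_state = best_neighbor             # move to the BEST neighbor

f = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(steepest_ascent_hill_climbing(0, f, neighbors))  # 7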
Note: Both algorithms share the fundamental disadvantage of being susceptible to local optima,
ridges, and plateaux.
6. Write down the steps for A* algorithm.
The A* (A-star) algorithm is one of the most widely used and effective search algorithms in Artificial Intelligence. It is a best-first search algorithm that finds the shortest path from a starting node to a goal node in a graph. It combines features of Dijkstra's algorithm (which guarantees optimality) and greedy best-first search (which uses a heuristic to guide the search).
A* uses a heuristic function h(n) to estimate the cost from node n to the goal, and it maintains a
cost function g(n) which is the actual cost from the start node to node n. The evaluation function
used by A* is:
f(n) = g(n) + h(n)
where:
●​ f(n) is the estimated total cost of the path from the start node through node n to the goal.
●​ g(n) is the actual cost of the path from the start node to node n.
●​ h(n) is the heuristic estimate of the cost from node n to the goal.
A* Algorithm Steps:
1. Initialization:
   a. Create an OPEN list (priority queue) and a CLOSED list (set).
      ○ OPEN stores nodes that have been generated but not yet expanded. It is a priority queue ordered by f(n) values (lowest f(n) first).
      ○ CLOSED stores nodes that have already been expanded.
   b. Set g(\text{start\_node}) = 0.
   c. Set h(\text{start\_node}) using the heuristic function.
   d. Set f(\text{start\_node}) = g(\text{start\_node}) + h(\text{start\_node}).
   e. Add start_node to the OPEN list.
2. Main Loop: While OPEN is not empty:
   a. Select Node: Remove the node n with the lowest f(n) value from the OPEN list.
   b. Add to CLOSED: Add n to the CLOSED list.
   c. Goal Test: If n is the goal_node, a path has been found. Reconstruct the path by tracing back from the goal_node to the start_node using the parent pointers stored during expansion, and return the path.
   d. Expand Node: For each successor (neighbor) s of node n:
      i. Calculate the g-value for the successor: g_{tentative}(s) = g(n) + \text{cost}(n, s), where \text{cost}(n, s) is the cost of moving from n to s.
      ii. Check if the successor is in the CLOSED list:
         ○ If s is in CLOSED and g_{tentative}(s) is greater than or equal to the current g(s) (a worse or equal path to a node already processed), skip this successor.
         ○ If s is in CLOSED and g_{tentative}(s) is less than the current g(s) (a better path to a node already processed), remove s from CLOSED, update its values, and add it to OPEN.
      iii. Check if the successor is in the OPEN list:
         ○ If s is in OPEN and g_{tentative}(s) is greater than or equal to the current g(s) (a worse or equal path to a node already in the queue), skip this successor.
         ○ If s is in OPEN and g_{tentative}(s) is less than the current g(s) (a better path to a node already in the queue), update its g(s) and f(s) values, update its position in the priority queue, and set n as its parent.
      iv. New Node: If s is in neither OPEN nor CLOSED (a newly discovered node):
         1. Set g(s) = g_{tentative}(s).
         2. Set h(s) using the heuristic function.
         3. Set f(s) = g(s) + h(s).
         4. Set n as the parent of s.
         5. Add s to the OPEN list.
3. No Path Found: If the OPEN list becomes empty and the goal node has not been reached, there is no path from the start node to the goal node. Return failure.
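
Below is a compact Python sketch of A* on an explicit graph, using heapq as the OPEN priority queue. It is not a literal transcription of the steps above: instead of updating OPEN entries in place (step 2.d.iii), it pushes duplicates and lazily skips stale entries, which yields the same result when the heuristic is consistent. The graph and heuristic values are made-up illustrative data.

import heapq

def a_star(graph, h, start, goal):
    """graph: {node: [(neighbor, edge_cost), ...]}; h: {node: estimate}.
    Returns an optimal path as a list of nodes, or None."""
    open_heap = [(h[start], start)]        # entries ordered by f = g + h
    g = {start: 0}
    parent = {start: None}
    closed = set()
    while open_heap:
        _, n = heapq.heappop(open_heap)    # node with lowest f(n)
        if n in closed:
            continue                       # stale queue entry; skip it
        if n == goal:                      # goal test: rebuild path via parents
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
        closed.add(n)
        for s, cost in graph.get(n, []):
            g_tentative = g[n] + cost
            if g_tentative < g.get(s, float("inf")):
                g[s] = g_tentative
                parent[s] = n
                heapq.heappush(open_heap, (g_tentative + h[s], s))
    return None                            # OPEN exhausted: no path

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}       # admissible (and consistent) here
print(a_star(graph, h, "S", "G"))          # ['S', 'A', 'B', 'G'], cost 4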
Admissibility and Consistency of Heuristic:
● Admissible Heuristic: A* is guaranteed to find the optimal path if the heuristic function h(n) is admissible. An admissible heuristic never overestimates the actual cost to reach the goal (i.e., h(n) \le h^*(n) for all nodes n, where h^*(n) is the true cost from n to the goal).
● Consistent Heuristic: A stronger condition is consistency (or monotonicity). A heuristic h(n) is consistent if, for every node n and every successor s of n, h(n) \le \text{cost}(n, s) + h(s). Consistency implies admissibility (given h(\text{goal}) = 0). If a heuristic is consistent, A* never needs to re-open nodes from the CLOSED list (step 2.d.ii can be simplified). A quick mechanical check of consistency is sketched below.
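
Consistency is easy to verify directly on an explicit graph, since it only involves edge costs (checking admissibility would require the true costs h^*(n)). A small Python sketch, reusing the illustrative graph and heuristic data from the A* sketch above:

def is_consistent(graph, h):
    """h is consistent iff h(n) <= cost(n, s) + h(s) for every edge (n, s)."""
    return all(h[n] <= cost + h[s]
               for n, edges in graph.items()
               for s, cost in edges)

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}
print(is_consistent(graph, h))  # True, hence also admissible (h(goal) = 0)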
7. Write down the steps for Means-Ends Analysis with example.
Means-Ends Analysis (MEA) is a problem-solving technique used in Artificial Intelligence,
particularly in planning and expert systems. It is a form of goal-driven reasoning that works by
repeatedly identifying the differences between the current state and the goal state, and then
applying operators (actions) that reduce these differences.
Core Idea: MEA involves:
1.​ Identifying a difference between the current state and the goal state.
2.​ Finding an operator that is relevant to reducing that difference.
3.​ If the operator cannot be applied immediately, setting up a sub-goal to make the operator
applicable (i.e., reducing the preconditions of the operator).
4.​ Applying the operator, which changes the current state.
5.​ Repeating the process until the current state matches the goal state.
Steps for Means-Ends Analysis:
1.​ Formulate the Problem: Define the initial state, the goal state, and the set of available
operators (actions). Each operator should have:
○​ Preconditions: What must be true for the operator to be applied.
○​ Effects (or Postconditions): What changes are made to the state after the
operator is applied.
2.​ Compare Current State with Goal State: Identify the most significant (or a selected)
difference between the current_state and the goal_state. If there is no difference, the goal
is achieved.
3.​ Select an Operator: Find an operator that is relevant to reducing the identified difference.
This means an operator whose effects address the difference. If multiple operators are
relevant, a selection strategy is needed (e.g., choose one that resolves the largest
difference, or one with fewer preconditions).
4. Check Operator Applicability (Preconditions):
   a. If the operator's preconditions are satisfied in the current_state:
      i. Apply the operator.
      ii. Update the current_state based on the operator's effects.
      iii. Go back to Step 2.
   b. If the operator's preconditions are NOT satisfied:
      i. Create a sub-goal to satisfy the unmet precondition(s).
      ii. Treat this sub-goal as a new problem and recursively apply Means-Ends Analysis to solve it (i.e., go back to Step 2 with the sub-goal as the new goal_state and the original current_state as the initial_state).
      iii. Once the sub-goal is achieved, re-check the original operator's preconditions and proceed if they are now met.
5.​ Failure Condition: If no operator can be found to reduce the difference, or if a sub-goal
cannot be achieved, then the problem solver may need to backtrack or declare failure.
Example: The Blocks World Problem
Let's say we want to move blocks around.
Initial State: A: OnTable, B: OnTable, C: OnA
Goal State: A: OnB, B: OnC, C: OnTable
Operators:
1.​ MOVE(X, Y, Z): Move block X from Y to Z
○​ Preconditions:
■​ Clear(X) (nothing on top of X)
■​ On(X, Y) (X is on Y)
■​ Clear(Z) (if Z is a block) or Z is Table
○​ Effects:
■​ Not On(X, Y)
■​ On(X, Z)
■​ Clear(Y) (if Y was a block)
■​ Not Clear(Z) (if Z is a block)
2.​ PICKUP(X, Y): Pick up block X from Y
○​ Preconditions: Clear(X), On(X, Y), HandEmpty
○​ Effects: Holding(X), Not On(X, Y), Clear(Y)
3.​ PUTDOWN(X, Y): Put down block X on Y
○​ Preconditions: Holding(X), Clear(Y) (if Y is a block) or Y is Table
○​ Effects: On(X, Y), Not Holding(X), Not Clear(Y) (if Y is a block), HandEmpty
(For simplicity, we'll use a simplified MOVE operator as above for this example)
Applying MEA:
●​ Current State: {OnTable(A), OnTable(B), On(C, A), Clear(B), Clear(C)}
●​ Goal State: {On(A, B), On(B, C), OnTable(C), Clear(A)}
Iteration 1:
1.​ Difference: On(A, B) is desired, but OnTable(A) is current. On(B, C) is desired, but
OnTable(B) is current.
2.​ Select Diff: Let's focus on On(A, B).
3.​ Relevant Operator: MOVE(A, Table, B) is relevant (its effect is On(A, B)).
4.​ Check Preconditions of MOVE(A, Table, B):
○​ Clear(A)? No (C is on A).
○​ On(A, Table)? Yes.
○​ Clear(B)? Yes.
5.​ Sub-goal: To apply MOVE(A, Table, B), we need Clear(A). The current state has On(C,
A). So, the sub-goal is to get Clear(A).
Iteration 2 (Solving Sub-goal: Clear(A)):
●​ Current State: {OnTable(A), OnTable(B), On(C, A), Clear(B), Clear(C)}
●​ Goal State (sub-goal): {Clear(A)}
1.​ Difference: Clear(A) is desired.
2.​ Relevant Operator: MOVE(C, A, Table) (Effect: Clear(A), Precondition: Clear(C))
3.​ Check Preconditions of MOVE(C, A, Table):
○​ Clear(C)? Yes.
○​ On(C, A)? Yes.
○​ Clear(Table)? (Table is always clear) Yes.
4.​ Apply Operator: MOVE(C, A, Table)
○​ Actions: Move C from A to Table.
○​ New Current State: {OnTable(A), OnTable(B), OnTable(C), Clear(A), Clear(B),
Clear(C)}
○​ Sub-goal Clear(A) is now achieved. Return to previous level.
Iteration 3 (Back to Original Problem):
●​ Current State: {OnTable(A), OnTable(B), OnTable(C), Clear(A), Clear(B), Clear(C)}
●​ Goal State: {On(A, B), On(B, C), OnTable(C), Clear(A)}
1.​ Re-check Preconditions of Original Operator MOVE(A, Table, B):
○​ Clear(A)? Yes (achieved in Iteration 2).
○​ On(A, Table)? Yes.
○​ Clear(B)? Yes.
2.​ Apply Operator: MOVE(A, Table, B)
○​ Actions: Move A from Table to B.
○ New Current State: {On(A, B), OnTable(B), OnTable(C), Clear(A), Clear(C)} (Note: Clear(B) is now false, and On(A, B) is true)
Iteration 4 (Continue from new state):
● Current State: {On(A, B), OnTable(B), OnTable(C), Clear(A), Clear(C)}
●​ Goal State: {On(A, B), On(B, C), OnTable(C), Clear(A)}
1.​ Difference: On(B, C) is desired, but OnTable(B) is current.
2.​ Relevant Operator: MOVE(B, Table, C)
3.​ Check Preconditions of MOVE(B, Table, C):
○​ Clear(B)? No (A is on B).
○​ On(B, Table)? Yes.
○​ Clear(C)? Yes.
4.​ Sub-goal: To apply MOVE(B, Table, C), we need Clear(B). The current state has On(A,
B). So, the sub-goal is to get Clear(B).
(The process continues similarly, recursively generating sub-goals to clear A from B,
then moving B onto C, etc., until the final goal state is reached.)
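
To make the difference-reduction loop concrete, here is a simplified Python sketch of MEA for this blocks-world instance, modeling only the simplified MOVE operator. The state encoding (a dict mapping each block to what it sits on, with Clear derived), the helper names, and the goal ordering are all illustrative assumptions; note that tackling OnTable(C) first avoids the extra sub-goaling that the trace above runs into when it attacks On(A, B) first.

def is_clear(state, x):
    """A block is clear when nothing sits on it; the Table is always clear."""
    return x == "Table" or all(support != x for support in state.values())

def move(state, x, z, plan):
    """Apply MOVE(x, current support of x, z); preconditions already hold."""
    plan.append(f"MOVE({x}, {state[x]}, {z})")
    state[x] = z

def achieve_on(state, x, z, plan):
    """Reduce the difference 'x should be on z', sub-goaling recursively."""
    if state[x] == z:
        return                                  # no difference: nothing to do
    if not is_clear(state, x):                  # sub-goal: Clear(x)
        blocker = next(b for b, s in state.items() if s == x)
        achieve_on(state, blocker, "Table", plan)
    if not is_clear(state, z):                  # sub-goal: Clear(z)
        blocker = next(b for b, s in state.items() if s == z)
        achieve_on(state, blocker, "Table", plan)
    move(state, x, z, plan)

state = {"A": "Table", "B": "Table", "C": "A"}    # initial state: C is on A
goals = [("C", "Table"), ("B", "C"), ("A", "B")]  # OnTable(C), On(B,C), On(A,B)
plan = []
for x, z in goals:
    achieve_on(state, x, z, plan)
print(plan)  # ['MOVE(C, A, Table)', 'MOVE(B, Table, C)', 'MOVE(A, Table, B)']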
MEA is effective for problems where a clear set of differences and operators can be defined. It is
a powerful conceptual framework for planning and intelligent agents.
