Module 1
Uploaded by 19-Arjun VM

Artificial Intelligence
Lecture I
Introduction

• Artificial intelligence is a branch of computer science concerned with the study and creation of computer systems

• that exhibit some form of intelligence,

• that learn new concepts and tasks,

• that perform other kinds of feats that require human types of intelligence.
Introduction

• Artificial Intelligence (AI) is a branch of science which deals with helping machines find solutions to complex problems in a more human-like fashion.

• This generally involves borrowing characteristics from human intelligence and applying them as algorithms in a computer-friendly way.

• Artificial intelligence can be viewed from a variety of perspectives.
Introduction
• From the perspective of intelligence, AI is making machines "intelligent" -- acting as we would expect people to act.

• From a business perspective, AI is a set of very powerful tools, and methodologies for using those tools to solve business problems.

• From a programming perspective, AI includes the study of symbolic programming, problem solving, and search.
The AI Problems
• AI includes formal tasks such as game playing and theorem proving.

• Game playing and theorem proving share the property that people who do them well are considered to be displaying intelligence.

• Initially, computers could perform well at those tasks simply by being fast at exploring a large number of solution paths and then selecting the best one.

• This process required very little knowledge and could be programmed easily.
Common sense reasoning

• It includes reasoning about physical objects and their relationships to each other, as well as reasoning about actions and their consequences (e.g. if you let go of something, it will fall to the floor and may break).
Perception

• As AI research progressed and techniques for handling larger amounts of world knowledge were developed, new tasks were attempted.

• These include perception (vision and speech), natural language understanding, and problem solving in special domains such as medical diagnosis and chemical analysis.

• Perception of the world around us is crucial for survival.
• Animals with much less intelligence than people are capable of more sophisticated visual perception than are current machines.

• Perceptual tasks are very difficult because they involve analog rather than digital signals, and the signals are typically very noisy.

• The ability to use language to communicate a wide variety of ideas is perhaps the most important thing that separates humans from other animals.
• The problem of understanding spoken language is a perceptual problem.

• This problem is usually referred to as natural language understanding.
Expert Tasks

• In addition to the above tasks, many people can also perform one or more specialized tasks in which carefully acquired expertise is necessary. Examples include engineering design, scientific discovery, medical diagnosis, etc.

• The problem areas where AI is now flourishing most are primarily the domains that require only specialized expertise.

• Such programs are called expert systems, and they are in day-to-day operation.
Task Domains of AI

knowledge engineering 5-01-2015
Problems, Problem Spaces and Search

• To build a system to solve a particular problem, four steps are to be followed:

1. Define the problem precisely. This definition must include precise specifications of the initial situation and the final situation.

2. Analyze the problem: identify the various possible techniques for solving the problem.

3. Isolate and represent the task knowledge that is necessary to solve the problem.
4. Choose the best problem-solving technique and apply it to the particular problem.
Defining the Problem as a State Space Search
ü Suppose we start with the problem statement "Play Chess".

ü To build a program that could play chess we would:

ü have to specify the starting position of the chess board,

ü the rules that define the legal moves,

ü and the board positions that represent a win for one side or the other.
ü For the problem Play Chess it is easy to provide a formal and complete problem description.

ü The starting position can be described as an 8 x 8 array where each position contains a symbol standing for the appropriate piece in the official chess opening position.

ü The goal can be defined as any board position in which the opponent does not have a legal move and his or her king is under attack.

ü The legal moves provide the way of getting from the initial state to a goal state.
Ø The legal moves can be described as a set of rules having two parts.

Ø The left side serves as a pattern to be matched against the current board position, and the right side describes the change to be made to the board position to reflect the move.
• The problem of playing chess is defined as a problem of moving around in a state space.

• The state space representation forms the basis of most of the AI methods.

• Its structure corresponds to the structure of problem solving in 2 ways:

1. It allows for a formal definition of a problem as the need to convert some given situation into some desired situation using a set of permissible operations.

2. It permits us to define the process of solving a particular problem as a combination of known techniques and search.
• The water jug problem:

• You are given two jugs, a 4-litre one and a 3-litre one.

• Neither has any measuring markers on it.

• There is a pump that can be used to fill the jugs with water.

• How can you get exactly 2 litres of water into the 4-litre jug?
• Let x and y be the amounts of water in the 4-litre and 3-litre jugs respectively.

• Then (x, y) refers to the water available at any time in the 4-litre and 3-litre jugs.

• Also (x, y) -> (x - d, y + dd) means drop some unknown amount d of water from the 4-litre jug and add dd to the 3-litre jug.

• All possible production rules can be written as follows:
1. (x, y) (4, y) fill the 4 gallon jug
if x<4

2. (x, y) ->(x, 3) fill the 3 gallon jug


if y < 3

3. (x, y) ->(x − d, y) Pour some water out of 4 gallon jug


if x > 0

4. (x, y) ->(x, y − d)
if y > 0 Pour some water out of 3 gallon
jug
23
5. (x, y) (0, y) empty the 4 gallon jug
if x>0

6. (x, y) ->(x, 0) empty the 3 gallon jug


if y > 0

7. (x, y) ->(4 , y-(4-x))


if x+y >=4 and y> 0 Pour water from 3 gallon jug to 4
gallon jug until it is full

8. (x, y) ->(x-(3-y), 3)
if x+y >=3 and y> 0 Pour water from 4 gallon jug to 3
gallon jug until it is full
24
9. (x, y) ->((x+y), 0)
if x+y <=4 and y> 0 Pour all water from 3 gallon jug to 3
gallon jug into 4 gallon jug

25
LITRES IN THE 4-LITRE JUG    LITRES IN THE 3-LITRE JUG    RULE APPLIED
0                            0                            2
0                            3                            9
3                            0                            2
3                            3                            7
4                            2                            5
0                            2                            9
2                            0
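The rules and the trace above can be sketched as a small search program. This is only one possible control strategy (breadth-first, discussed later in these slides), not one the slides prescribe here; rules 3 and 4 (pour out an arbitrary amount d) are left out because pouring water away an arbitrary amount never helps reach the goal.

```python
from collections import deque

# Production rules as (applicability test, action) pairs over states (x, y),
# where x and y are the litres in the 4-litre and 3-litre jugs.
RULES = [
    (lambda x, y: x < 4,                lambda x, y: (4, y)),            # 1. fill the 4-litre jug
    (lambda x, y: y < 3,                lambda x, y: (x, 3)),            # 2. fill the 3-litre jug
    (lambda x, y: x > 0,                lambda x, y: (0, y)),            # 5. empty the 4-litre jug
    (lambda x, y: y > 0,                lambda x, y: (x, 0)),            # 6. empty the 3-litre jug
    (lambda x, y: x + y >= 4 and y > 0, lambda x, y: (4, y - (4 - x))),  # 7. pour 3 into 4 until full
    (lambda x, y: x + y >= 3 and x > 0, lambda x, y: (x - (3 - y), 3)),  # 8. pour 4 into 3 until full
    (lambda x, y: x + y <= 4 and y > 0, lambda x, y: (x + y, 0)),        # 9. pour all of 3 into 4
]

def solve(start=(0, 0), goal_x=2):
    """Breadth-first control strategy: returns a shortest path of states."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if x == goal_x:                  # goal: exactly 2 litres in the 4-litre jug
            return path
        for test, action in RULES:
            if test(x, y):
                nxt = action(x, y)
                if nxt not in visited:   # never re-generate an old state
                    visited.add(nxt)
                    frontier.append(path + [nxt])
    return None

print(solve())  # [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]
```

The solution path printed is exactly the trace in the table above.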
• To summarize: to provide a formal description of a problem, we must do the following:

1. Define a state space that contains all possible configurations of the relevant objects.

2. Specify one or more states within that space that describe possible situations from which the problem-solving process may start. These are the initial states.

3. Specify one or more states that would be acceptable as solutions to the problem. These states are called goal states.

4. Specify a set of rules that describe the actions available.
• The problem can then be solved by using the rules in combination with an appropriate control strategy to move through the problem space until a path from an initial state to a goal state is found.
Production Systems
• A production system commonly consists of the following four basic components:

1. A set of rules, each consisting of a left-hand side that determines the applicability of the rule and a right-hand side that describes the operation to be performed if the rule is applied.

2. One or more knowledge databases that contain whatever information is relevant for the given problem.

3. A control strategy that ascertains the order in which the rules must be applied to the available database and a way of resolving conflicts that arise when several rules match at once.

4. A rule applier, which is the computational system that implements the control strategy and applies the rules to reach the goal.
Control Strategies

• We have to decide which rule to apply next during the process of searching for a solution to a problem.

• This is so because most often more than one rule will have its left side match the current state.

• The first requirement of a good control strategy is that it causes motion.

• The second requirement of a good control strategy is that it should be systematic.
Algorithm: Breadth First Search

1. Create a variable called NODE-LIST and set it to the initial state.

2. Until a goal state is found or NODE-LIST is empty:

a) Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty, quit.

b) For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state.
ii. If the new state is a goal state, quit and return this state.
iii. Otherwise, add the new state to the end of NODE-LIST.
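The steps above can be sketched in Python. Here `is_goal` and `successors` stand in for the goal test and the rule matching of step 2b; the `seen` set is an addition not in the slide's version, used only to avoid re-adding states that were already generated.

```python
from collections import deque

def breadth_first_search(initial_state, is_goal, successors):
    """NODE-LIST is a FIFO queue, so states are expanded oldest-first."""
    if is_goal(initial_state):
        return initial_state
    node_list = deque([initial_state])
    seen = {initial_state}
    while node_list:                        # step 2: until NODE-LIST is empty
        e = node_list.popleft()             # step 2a: remove the first element
        for new_state in successors(e):     # step 2b: each rule match
            if is_goal(new_state):          # step 2b-ii: goal found
                return new_state
            if new_state not in seen:       # step 2b-iii: add to the end
                seen.add(new_state)
                node_list.append(new_state)
    return None

# Tiny illustrative state space (a hypothetical graph, not from the slides).
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': []}
print(breadth_first_search('A', lambda s: s == 'G', lambda s: graph.get(s, [])))  # G
```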
Algorithm: Depth First Search

1. If the initial state is a goal state, quit and return success.

2. Otherwise, do the following until success or failure is signaled:

a) Generate a successor E of the initial state. If there are no more successors, signal failure.

b) Call depth first search with E as the initial state.

c) If success is returned, signal success. Otherwise continue in this loop.
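A minimal recursive sketch of the recursion in step 2b, returning the path found rather than a bare success signal. The `visited` set is my addition to keep the sketch from looping on graphs; the slide's version assumes a tree.

```python
def depth_first_search(state, is_goal, successors, visited=None):
    """Recursive DFS: returns a path to a goal state, or None on failure."""
    if visited is None:
        visited = set()
    visited.add(state)
    if is_goal(state):                      # step 1: the state is a goal
        return [state]
    for e in successors(state):             # step 2a: generate a successor E
        if e in visited:
            continue
        path = depth_first_search(e, is_goal, successors, visited)  # step 2b
        if path is not None:                # step 2c: success propagates upward
            return [state] + path
    return None                             # no successor led to a goal: failure

# Same hypothetical graph style as before; DFS commits to one branch at a time.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G']}
print(depth_first_search('A', lambda s: s == 'G', lambda s: graph.get(s, [])))  # ['A', 'C', 'G']
```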
Advantages of Depth First Search

ü Depth first search requires less memory, since only the nodes on the current path are stored. This contrasts with breadth first search, where all of the tree that has so far been generated must be stored.

ü Depth first search may find a solution without examining much of the search space at all. This contrasts with breadth first search, in which all parts of the tree must be examined to level n before any nodes on level n+1 can be examined.
Advantages of Breadth First Search

ü Breadth first search will not get trapped exploring a blind alley. This contrasts with depth first search, which may follow a single unfruitful path for a very long time before the path actually terminates in a state that has no successors.

ü If there is a solution, then breadth first search is guaranteed to find it. Also, if there are multiple solutions, it will find the minimal one, in contrast with depth first search.
Heuristic Search
• A heuristic is a technique that improves the efficiency of a search process, possibly by sacrificing claims of completeness.

• Travelling Salesman Problem:

• A salesman has a list of cities, each of which he must visit exactly once. There are direct roads between each pair of cities on the list.

• Find the route the salesman should follow for the shortest possible round trip that both starts and finishes at any one of the cities.
• One approach is to explore all possible paths in the tree and return the one with the shortest length.

• One example of a general purpose heuristic is the nearest neighbour heuristic.

• Applying this to the travelling salesman problem produces the following procedure:

• Arbitrarily select a city.

• To select the next city, look at all cities not yet visited, select the one closest to the current city, and go to it next.

• Repeat the previous step until all cities have been visited.
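The three-step procedure can be sketched as follows. The coordinate representation and Euclidean distance are assumptions made for the sake of the example; the slides only assume some distance between each pair of cities.

```python
import math

def nearest_neighbour_tour(cities, start=0):
    """Greedy TSP heuristic: always travel to the closest unvisited city.
    `cities` is a list of (x, y) coordinates; returns the visiting order."""
    def dist(a, b):
        return math.hypot(cities[a][0] - cities[b][0],
                          cities[a][1] - cities[b][1])
    unvisited = set(range(len(cities))) - {start}   # step 1: arbitrary start
    tour = [start]
    while unvisited:                                # step 3: repeat until done
        current = tour[-1]
        nxt = min(unvisited, key=lambda c: dist(current, c))  # step 2: closest city
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Four cities on a unit square: the greedy tour walks around the square.
print(nearest_neighbour_tour([(0, 0), (0, 1), (1, 0), (1, 1)]))
```

The heuristic runs in time proportional to the square of the number of cities, but it does not guarantee the shortest tour, which is exactly the completeness trade-off described above.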
• A heuristic function is a function that maps from problem state descriptions to measures of desirability, usually represented as numbers.

• The purpose of the heuristic function is to guide the search process in the most profitable direction by suggesting which path to follow when more than one is available.
Problem Characteristics

• Heuristic search is a general method applicable to a large class of problems.

• It encompasses a wide variety of techniques, each of which is effective for only a small class of problems.

• To choose the most appropriate method for a problem, it is necessary to analyze the problem along several key dimensions:
1. Is the problem decomposable?

ü Identify whether the problem can be decomposed into a set of independent smaller or easier sub-problems.
2. Can solution steps be ignored or at least undone if they prove unwise?

ü Eg: the 8-puzzle.

ü The 8-puzzle is a square tray in which eight numbered square tiles are placed; the remaining ninth square is uncovered, and a tile adjacent to the blank space can slide into it.
3. Is the problem's universe predictable?

ü If we are playing the 8-puzzle, every time we make a move we know exactly what will happen.

ü That is, it is possible to plan an entire sequence of moves and be confident that we know what the resulting state will be.

ü In the case of playing the card game bridge, one of the decisions to be made is which card to play on the first trick.

ü What we would like to do is to plan the entire game before making the first play.

ü But this is not possible, since we don't know exactly where all the cards are or what the other players will do on their next turn.

ü The first example represents a certain-outcome problem and the second an uncertain-outcome problem.
4. Is a good solution to the problem obvious without comparison to all other possible solutions?

ü Consider a set of facts about Marcus (for example, that Marcus was a man and that Marcus died in 79 A.D.). The question is: is Marcus alive?

ü By representing each of these facts in a formal language and then using formal inference methods, we can derive an answer to the question.
5. Is the desired solution a state of the world or a path to a state?

ü Consider the interpretation of the following sentence:

ü The bank president ate a dish of pasta salad with the fork.

ü Here the answer we want is the interpretation itself, a state of the world; but in the case of the water jug problem the solution is a path to a state.
6. Is a large amount of knowledge absolutely required to solve the problem, or is knowledge important only to constrain the search?

ü The problem of playing chess requires only a little knowledge:

ü the rules for determining legal moves.
7. Can a computer that is simply given the problem return the solution, or will the solution of the problem require interaction between the computer and a person?
Production System Characteristics
• Production systems describe the operations that can be performed in a search for a solution to a problem.

• Two questions need to be answered at this point:

1. Can production systems be described by a set of characteristics?

2. What relationships are there between problem types and the types of production systems best suited to solve the problems?
• A monotonic production system is a production system in which the application of a rule never prevents the later application of another rule that could also have been applied at the time the first rule was applied.

• A non-monotonic production system is one in which this is not true.

• A partially commutative production system is a production system with the property that if the application of a particular sequence of rules transforms state x to state y, then any permutation of those rules will also transform state x to state y.

• A commutative production system is a production system that is both monotonic and partially commutative.

• For any solvable problem there exist an infinite number of production systems that describe ways to find solutions.
• Partially commutative, monotonic production systems are useful for solving ignorable problems.

• They are important from an implementation standpoint because they can be implemented without the ability to backtrack to previous states when it is discovered that an incorrect path has been followed.

• Non-monotonic, partially commutative systems are useful for problems in which changes occur but can be reversed and in which the order of operations is not critical.

• Production systems that are not partially commutative are useful for many problems in which irreversible changes occur.
Issues in the Design of Search Programs
• Every search process can be viewed as a traversal of a tree structure in which each node represents a problem state and each arc represents a relationship between the states represented by the nodes.
• The important issues in search are:

1. The direction in which to conduct the search (forward versus backward).

2. How to select applicable rules.

3. How to represent each node of the search process.
Heuristic Search Techniques
• To solve larger problems, domain-specific knowledge must be provided to improve the search efficiency.

• Heuristic:
• Any advice that is often effective but is not always guaranteed to work.

• Heuristic Evaluation Function:
• Estimates the cost of an optimal path between two states.
• Must be inexpensive to calculate.
• There are a number of methods used in heuristic search techniques:

• Depth first search
• Breadth first search
• Hill climbing
• Generate-and-test
• Best-first search
• Problem reduction
• Constraint satisfaction
• Means-ends analysis
Generate and Test
• This is the simplest of all the strategies.

• It consists of the following steps:

1. Generate a possible solution. For some problems this means generating a particular point in the problem space, and for others it means generating a path from a start state.
2. Test to see if this is actually a solution by comparing the chosen point or the endpoint of the chosen path to the set of acceptable goal states.

3. If a solution has been found, quit. Otherwise return to step 1.

• The generate-and-test algorithm is a depth first search procedure, since complete solutions must be generated before they can be tested.

• For simple problems this method is a reasonable technique.
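The three steps reduce to a very small loop. The digit-pair problem below is purely illustrative, invented for the sketch; any enumerable space of candidate solutions works the same way.

```python
import itertools

def generate_and_test(generator, is_solution):
    """Step 1: generate a candidate; step 2: test it; step 3: quit or loop."""
    for candidate in generator:
        if is_solution(candidate):
            return candidate
    return None  # generator exhausted without finding a solution

# Toy use: find digits A, B with A > 0 such that A + A = B.
result = generate_and_test(
    itertools.product(range(10), repeat=2),
    lambda ab: ab[0] > 0 and ab[0] + ab[0] == ab[1],
)
print(result)  # (1, 2): the first qualifying pair in generation order
```

Note that the tester gives no guidance at all to the generator, which is exactly the weakness hill climbing addresses next.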
Hill Climbing
• Hill climbing is a variant of generate-and-test in which feedback from the test procedure is used to help the generator decide which direction to move in the search space.

• In a pure generate-and-test procedure the test function responds with only a yes or no.

• But if the test function is augmented with a heuristic function that provides an estimate of how close a given state is to a goal state, it will be more efficient.

• Hill climbing is used when a good heuristic function is available for evaluating states but when no other useful knowledge is available.
Simple Hill Climbing
• The simplest way to implement hill climbing is as follows:

1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise continue with the initial state as the current state.

2. Loop until a solution is found or until there are no new operators left to be applied in the current state:

a) Select an operator that has not yet been applied to the current state and apply it to produce a new state.

b) Evaluate the new state:

i. If it is a goal state, then return it and quit.

ii. If it is not a goal state but it is better than the current state, then make it the current state.

iii. If it is not better than the current state, then continue in the loop.
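Steps 1 and 2 can be sketched as below; `value` plays the role of the heuristic evaluation function, and the toy demo climbs a one-dimensional function whose peak is at x = 5 (an invented example, not from the slides).

```python
def simple_hill_climbing(initial_state, value, neighbours, is_goal):
    """Adopts the FIRST neighbour that improves on the current state;
    steepest-ascent would instead compare all of them."""
    current = initial_state
    if is_goal(current):                        # step 1
        return current
    while True:                                 # step 2
        moved = False
        for new_state in neighbours(current):   # step 2a: try an operator
            if is_goal(new_state):              # step 2b-i
                return new_state
            if value(new_state) > value(current):   # step 2b-ii: better state
                current = new_state
                moved = True
                break                           # take the first improvement
        if not moved:                           # no operator improved: stop
            return current

# Climb toward the peak of -(x - 5)^2 starting from 0.
print(simple_hill_climbing(0, lambda x: -(x - 5) ** 2,
                           lambda x: [x - 1, x + 1], lambda x: False))  # 5
```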
Steepest-Ascent Hill Climbing

• A variation on simple hill climbing considers all the moves from the current state and selects the best one as the next state.

• This method is called steepest-ascent hill climbing or gradient search.
• Both basic and steepest-ascent hill climbing may fail to find a solution.

• Either algorithm may terminate not by finding a goal state but by getting to a state from which no better states can be generated.

• This happens if the program has reached either a local maximum, a plateau or a ridge.
• A local maximum is a state that is better than all its neighbours but is not better than some other states farther away.

• At a local maximum all moves appear to make things worse.

• Local maxima are particularly frustrating because they often occur almost within sight of a solution.
• A plateau is a flat area of the search space in which a whole set of neighbouring states have the same value.

• On a plateau it is not possible to determine the best direction in which to move by making local comparisons.
• A ridge is a special kind of local maximum.

• It is an area of the search space that is higher than surrounding areas and that itself has a slope.

• But the orientation of the high region, compared to the set of available moves and the directions in which they move, makes it impossible to traverse a ridge by a single move.
• Some ways of dealing with these problems:

1. Backtrack to some earlier node and try going in a different direction. This is useful if at that node there was another direction that looked as promising as the one that was chosen earlier. This method is used to deal with local maxima.

2. Make a big jump in some direction to try to get to a new section of the search space. Used to deal with plateaus.

3. Apply two or more rules before doing the test. Used for dealing with ridges.
Simulated Annealing
• Simulated annealing is a variation of hill climbing in which, at the beginning of the process, some downhill moves may be made.

• The idea is to do enough exploration of the whole space early on that the final solution is relatively insensitive to the starting state.

• This lowers the chances of getting caught at a local maximum, a plateau or a ridge.
• Simulated annealing as a computational process is patterned after the physical process of annealing, in which physical substances such as metals are melted (i.e. raised to high energy levels) and then gradually cooled until some solid state is reached.

• Physical substances usually move from higher energy configurations to lower ones.

• But there is some probability that a transition to a higher energy state will occur. The probability is given by:

p = e^(-∆E/kT)
• where ∆E is the positive change in the energy level, T is the temperature, and k is Boltzmann's constant.

• The rate at which the system is cooled is called the annealing schedule.

• Physical annealing processes are very sensitive to the annealing schedule.

• If cooling occurs too rapidly, stable regions of high energy will be formed.
• If a slower schedule is used, a uniform crystalline structure will be formed.

• If the schedule is too slow, time is wasted.

• The optimal annealing schedule for each particular annealing problem must usually be discovered empirically.

• These properties of physical annealing can be used to define an analogous process of simulated annealing.
• Here ∆E is generalized so that it represents not specifically the change in energy but, more generally, the change in the value of the objective function.

• The analogy for kT is slightly less straightforward.
• In the physical process, temperature is well defined.

• The variable k describes the correspondence between units of temperature and units of energy.

• In the analogous process, the units for both E and T are artificial.
• So we can incorporate k into T, selecting values for T that produce desirable behavior.

• Thus the following revised probability formula can be used:

p' = e^(-∆E/T)

• We need to choose a schedule of values for T.
• The algorithm for simulated annealing is only slightly different from the simple hill climbing procedure.

• The three differences are:

1. The annealing schedule must be maintained.

2. Moves to worse states may be accepted.

3. It is a good idea to maintain, in addition to the current state, the best state found so far.
Algorithm: Simulated Annealing

1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise continue with the initial state as the current state.

2. Initialize BEST-SO-FAR to the current state.

3. Initialize T according to the annealing schedule.
4. Loop until a solution is found or until there are no new operators left to be applied in the current state:

a. Select an operator that has not yet been applied to the current state and apply it to produce a new state.

b. Evaluate the new state. Compute

∆E = (value of current state) – (value of new state)

• If the new state is a goal state, then return it and quit.

• If it is not a goal state but is better than the current state, then make it the current state. Also set BEST-SO-FAR to this new state.

• If it is not better than the current state, then make it the current state with probability p'.

c. Revise T as necessary according to the annealing schedule.

5. Return BEST-SO-FAR as the answer.
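A sketch of the procedure above. The exponential cooling schedule, the step limit, and the random choice of operator are arbitrary choices of mine, not part of the slide's algorithm, and the sketch assumes every state has at least one neighbour.

```python
import math
import random

def simulated_annealing(initial_state, value, neighbours,
                        schedule=lambda step: 10.0 * 0.95 ** step,
                        max_steps=1000):
    """Hill climbing that may accept worse moves with probability
    exp(-dE / T), where T falls according to the annealing schedule."""
    current = initial_state
    best = initial_state                        # step 2: BEST-SO-FAR
    for step in range(max_steps):
        t = schedule(step)                      # steps 3 and 4c: schedule for T
        new_state = random.choice(neighbours(current))      # step 4a
        delta_e = value(current) - value(new_state)         # step 4b
        if delta_e < 0:                         # new state is better: accept it
            current = new_state
            if value(current) > value(best):
                best = current                  # update BEST-SO-FAR
        elif random.random() < math.exp(-delta_e / t):
            current = new_state                 # worse state accepted with p'
    return best                                 # step 5
```

Because BEST-SO-FAR is only ever replaced by strictly better states, the result is never worse than the initial state, even if the random walk wanders downhill late in the run.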
Best First Search (OR Graphs)
• Best first search is a way of combining the advantages of both depth first search and breadth first search.

• DFS is good since it allows a solution to be found without all competing branches having to be expanded.

• BFS is good because it does not get trapped on dead-end paths.

• One way of combining the two is to follow a single path at a time but to switch paths whenever some competing path looks more promising than the current one.
• At each step of best first search we select the most promising of the nodes we have generated so far.

• We then expand the chosen node by using the rules to generate its successors.

• If one of them is a solution, we can quit.

• If not, all those newly generated nodes are added to the set of nodes generated so far.

• Again the most promising node is selected and the process continues.
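The process just described can be sketched with a priority queue for the generated-but-unexamined nodes and a set for the examined ones (the OPEN and CLOSED lists discussed below); the toy call at the end uses integers as states, an invented example.

```python
import heapq

def best_first_search(initial_state, h, successors, is_goal):
    """OPEN is a priority queue ordered by the heuristic value h;
    CLOSED records states that have already been examined."""
    open_list = [(h(initial_state), initial_state)]
    closed = set()
    while open_list:
        _, state = heapq.heappop(open_list)   # most promising node so far
        if is_goal(state):
            return state
        if state in closed:
            continue
        closed.add(state)                     # examined: move to CLOSED
        for nxt in successors(state):
            if nxt not in closed:             # keep the others around for later
                heapq.heappush(open_list, (h(nxt), nxt))
    return None

# States are integers, the goal is 7, and h is the distance to 7.
print(best_first_search(0, lambda x: abs(7 - x),
                        lambda x: [x - 1, x + 1], lambda x: x == 7))  # 7
```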
• This process is similar to the procedure for steepest-ascent hill climbing, with two exceptions.

1. In hill climbing one move is selected and all the others are rejected, never to be reconsidered.

• In best first search one move is selected, but the others are kept around so that they can be revisited later if the selected path becomes less promising.

2. Further, in best first search the best available state is selected even if that state has a value that is lower than the value of the state that was just explored.

• This contrasts with hill climbing, which will stop if there are no successor states with better values than the current state.
• To implement a graph search procedure we need to use 2 lists of nodes:

• OPEN:

• Nodes that have been generated and have had the heuristic function applied to them, but which have not yet been examined.

• It is actually a priority queue in which the elements with the highest priority are those with the most promising value of the heuristic function.

• Standard techniques for manipulating priority queues can be used to manipulate the list.
• CLOSED:

• Nodes that have already been examined.

• We need to keep these nodes in memory if we want to search a graph rather than a tree, since whenever a new node is generated we need to check whether it has been generated before.
The A* Algorithm
• The best first algorithm just shown is a simplification of the A* algorithm.

• This algorithm uses the functions f', g and h', as well as the lists OPEN and CLOSED.

• The function g is a measure of the cost of getting from the initial state to the current node.
• The function h' is an estimate of the additional cost of getting from the current node to a goal state.

• f' = g + h'

• The combined function f' represents an estimate of the cost of getting from the initial state to a goal state along the path that generated the current node.
Algorithm: A*

1. Start with OPEN containing only the initial node. Set that node's g value to 0, its h' value to whatever it is, and its f' value to h' + 0, or h'. Set CLOSED to the empty list.

2. Until a goal node is found, repeat the following procedure:

ü If there are no nodes on OPEN, report failure.
ü Otherwise, pick the node on OPEN with the lowest f' value.

ü Call it BESTNODE; remove it from OPEN and place it on CLOSED.

ü See if BESTNODE is a goal node; if so, exit and report a solution.

ü Otherwise, generate the successors of BESTNODE, but do not set BESTNODE to point to them yet.

ü For each SUCCESSOR, do the following:


a) Set SUCCESSOR to point back to BESTNODE. These backward links make it possible to recover the path once a solution is found.

b) Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR.

c) See if SUCCESSOR is the same as any node on OPEN. If so, call it OLD. Since this node already exists in the graph, throw SUCCESSOR away and add OLD to the list of BESTNODE's successors.

Check whether it is cheaper to get to OLD via its current parent or to SUCCESSOR via BESTNODE by comparing their g values.

If OLD is cheaper, do nothing.

If SUCCESSOR is cheaper, then reset OLD's parent link to point to BESTNODE. Update its g and f' values.

d) If SUCCESSOR was not on OPEN, see if it is on CLOSED.

• If so, call the node on CLOSED OLD and add OLD to the list of BESTNODE's successors.

• Check whether the new path or the old path is better, as above, and set the parent link and g and f' values appropriately.

• If a better path to OLD is found, propagate the improvement to OLD's successors.
e) If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN and add it to the list of BESTNODE's successors.

• Compute f'(SUCCESSOR) = g(SUCCESSOR) + h'(SUCCESSOR).
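The algorithm above can be sketched compactly. This rendering is simplified, not a line-by-line transcription: instead of explicit parent links and the OLD-node bookkeeping of steps c and d, it records the best g found per node and discards stale queue entries, which produces the same paths when h' never overestimates.

```python
import heapq

def a_star(start, is_goal, successors, h):
    """A* with f' = g + h'. `successors(n)` yields (neighbour, arc_cost) pairs."""
    open_list = [(h(start), 0, start, [start])]   # (f', g, node, path from start)
    best_g = {start: 0}                           # cheapest g found so far per node
    while open_list:
        f, g, node, path = heapq.heappop(open_list)   # BESTNODE: lowest f' on OPEN
        if is_goal(node):
            return path
        if g > best_g.get(node, float("inf")):
            continue                              # stale entry: a cheaper route exists
        for succ, cost in successors(node):
            g2 = g + cost                         # g(SUCCESSOR) = g(BESTNODE) + arc cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2                 # better path: update g, f', parent
                heapq.heappush(open_list, (g2 + h(succ), g2, succ, path + [succ]))
    return None

# Toy grid: unit-cost moves between integer points; h' is the Manhattan
# distance, which never overestimates, so the path found is optimal.
def grid_successors(p):
    x, y = p
    return [((x + dx, y + dy), 1) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

path = a_star((0, 0), lambda p: p == (2, 3), grid_successors,
              lambda p: abs(2 - p[0]) + abs(3 - p[1]))
print(len(path) - 1)  # 5: the Manhattan distance from (0, 0) to (2, 3)
```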
Problem Reduction
• AND-OR GRAPHS

• The AND-OR graph is useful for representing the solution of problems that can be solved by decomposing them into a set of smaller problems.

• This decomposition generates arcs that are called AND arcs.

• One AND arc may point to any number of successor nodes, all of which must be solved in order for the arc to point to a solution.
Means-Ends Analysis
• This method centers around the detection of differences between the current state and the goal state.

• Once such a difference is isolated, an operator that can reduce the difference must be found.

• But perhaps that operator cannot be applied to the current state.

• So a subproblem of getting to a state in which the operator can be applied is set up.
• This kind of backward chaining, in which operators are selected and then subgoals are set up to establish the preconditions of the operators, is called operator subgoaling.

• The operator may not produce the exact goal state that we want.

• Then a second subproblem of getting from the state produced by the operator to the goal state is created.
• Means end analysis relies on a set of rules that can transform one
problem state into another.

• These rules are not represented with complete stat descriptions on


each side.

• Instead they are represented as a left side that describes the


conditions that must be met inorder for the rule to be applied.

• And a right hand side that describes the aspects of the problem
state that will be changed on application of the rule.

• A separate table, called the difference table, indexes the rules by the
differences that they can be used to reduce.

• Example

• Assume the General Problem Solver (GPS) is given the following rules.

• R1: (A V B) → (B V A)

• R2: (A & B) → (B & A)

• R3: (A → B) → (¬B → ¬A)

• R4: (A → B) → (¬A V B)

• Suppose GPS is given the initial propositional-logic object and goal

• Li = (R V (¬P → Q))

• Lg = ((P V Q) V R)
• To determine Lg from Li requires a few simple transformations.

• The system first determines the difference between the two


expressions and then systematically reduces this difference.

• Li = ((¬P → Q) V R) using R1

• Li = ((¬¬P V Q) V R) using R4

• Li = ((P V Q) V R) using ¬¬A = A
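The three transformation steps above can be reproduced mechanically. Here is a small sketch, with an assumed encoding of formulas as nested tuples (the rule functions r1, r4 and drop_double_neg are invented names):

```python
OR, IMP, NOT = 'v', '->', '~'

def r1(f):                     # R1: (A v B) -> (B v A)
    op, a, b = f
    assert op == OR
    return (OR, b, a)

def r4(f):                     # R4: (A -> B) -> (~A v B)
    op, a, b = f
    assert op == IMP
    return (OR, (NOT, a), b)

def drop_double_neg(f):        # ~~A -> A
    op, inner = f
    assert op == NOT and inner[0] == NOT
    return inner[1]

Li = (OR, 'R', (IMP, (NOT, 'P'), 'Q'))      # (R v (~P -> Q))
Lg = (OR, (OR, 'P', 'Q'), 'R')              # ((P v Q) v R)

s1 = r1(Li)                                 # ((~P -> Q) v R)  via R1
s2 = (OR, r4(s1[1]), s1[2])                 # ((~~P v Q) v R)  via R4
s3 = (OR, (OR, drop_double_neg(s2[1][1]), s2[1][2]), s2[2])  # ((P v Q) v R)
```

After the three steps, s3 equals the goal object Lg, exactly as in the derivation above.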
AO* algorithm
1. Let G be a graph with only starting node INIT.
2. Repeat the following until INIT is labeled SOLVED or h(INIT) >
FUTILITY
a) Select an unexpanded node from the most promising path
from INIT (call it NODE)
b) Generate successors of NODE. If there are none, set h(NODE)
= FUTILITY (i.e., NODE is unsolvable); otherwise for each
SUCCESSOR that is not an ancestor of NODE do the following:
i. Add SUCCESSOR to G.
ii. If SUCCESSOR is a terminal node, label it SOLVED and set
h(SUCCESSOR) = 0.
iii. If SUCCESSOR is not a terminal node, compute its h
c) Propagate the newly discovered information up the graph by doing the
following: let S be the set of SOLVED nodes, or nodes whose h values have
been changed and need to have values propagated back to their
parents. Initialize S to NODE. Until S is empty, repeat the following:

i. Remove a node from S and call it CURRENT.


ii. Compute the cost of each of the arcs emerging from CURRENT.
Assign minimum cost of its successors as its h.
iii. Mark the best path out of CURRENT by marking the arc that had
the minimum cost in step ii
iv. Mark CURRENT as SOLVED if all of the nodes connected to it
through new labeled arc have been labeled SOLVED
v. If CURRENT has been labeled SOLVED or its cost was just changed,
propagate its new cost back up through the graph. So add all of the
ancestors of CURRENT to S.
An Example

(Sequence of AND–OR graph snapshots, diagrams omitted: starting from A
with h′(A) = 8, A is expanded through an OR arc to D and an AND arc to B
and C; D is then expanded through E down to the terminal node G. After
each expansion the revised h′ values are propagated back up the graph,
until A is finally labeled SOLVED along the path through D, E and G.)
CONSTRAINT SATISFACTION

• It is a search procedure that operates in a
space of constraint sets.

• The initial state contains the constraints that are
originally given in the problem description.

• A goal state is any state that has been
constrained “enough”, where “enough” must be
defined for each problem.
• In order to define a constraint satisfaction problem (CSP), following
should be specified:

• INPUTS: < V, Dv , Cv >

• V: it is the set of variables v1, v2, …… in the CSP

• Dv : it specifies the domain of each variable ( dom(vi)) in the CSP.

• Cv : it indicates all the constraints on values of variables in the CSP.

• OUTPUT: a model in which values are assigned to the
variables so that each value val(Vi) is in dom(Vi)
and all the constraints are satisfied.

• It is a two-step process:

• First all initial constraints are identified and propagated


as far as possible throughout the system.

• Then if there is still no solution, then we need to search.


• We make a guess about something and add it as a
new constraint to the set of constraints.

• Then again we propagate this new constraint to


find the solution.

• The level of constraint can make the search easier
or more difficult.

• A unary constraint involves a single variable, a binary
constraint involves two variables, and so on.
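A toy illustration of the <V, Dv, Cv> formulation. The solver below is a naive generate-and-test sketch (not the propagate-then-search procedure described above), with invented names:

```python
from itertools import product

def solve_csp(variables, domains, constraints):
    """Try every assignment of domain values to variables and return
    the first one that satisfies all constraints (tiny domains only)."""
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if all(check(assignment) for check in constraints):
            return assignment
    return None

# V = {x, y}, dom(x) = dom(y) = {1..5}, Cv = {x < y, x + y = 6}
solution = solve_csp(
    ['x', 'y'],
    {'x': list(range(1, 6)), 'y': list(range(1, 6))},
    [lambda a: a['x'] < a['y'], lambda a: a['x'] + a['y'] == 6],
)  # first satisfying assignment: x = 1, y = 5
```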
• In cryptarithmetic problems, an arithmetic equation is given.

• Instead of digits, alphabets are written in the equation.

• The digits from 0 to 9 can be assigned to all the alphabets.

• Constraints are that no two alphabets can have the same value and
the values assigned should satisfy the given arithmetic equation.

   C4 C3 C2 C1
    L  O  G  I  C
    L  O  G  I  C  +
 ___________________
 P  R  O  L  O  G

Initial state:

Initially, the rules for propagating constraints generate the following
constraints:
1. 2C = G or 2C = G +10
2. G is even
3. C1 + 2I = O or C1 + 2I = O+10
4. C2 + 2G = L or C2 + 2G =L +10
5. C3 +2O =O or C3 +2O = O + 10
6. C4 +2L =R +10 and P=1 or C4 +2L =R and P=0

This list of initial constraints is now complete; some more constraints
based on guesses need to be generated.

We guess a value for O, since it is used in many places.
• There are two possible values for O by constraint 5: either O = 0
(with C3 = 0) or O = 9 (with C3 = 1).
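The guess-and-propagate process above can also be replaced by sheer enumeration. A brute-force sketch for LOGIC + LOGIC = PROLOG (slow but simple, roughly 600,000 candidate assignments):

```python
from itertools import permutations

def solve_logic_prolog():
    """Assign distinct digits to the 7 letters of LOGIC + LOGIC = PROLOG."""
    letters = 'LOGICPR'
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a['L'] == 0 or a['P'] == 0:
            continue                      # leading digits must be nonzero
        logic = int(''.join(str(a[ch]) for ch in 'LOGIC'))
        prolog = int(''.join(str(a[ch]) for ch in 'PROLOG'))
        if 2 * logic == prolog:
            return logic, prolog
    return None

result = solve_logic_prolog()   # (90452, 180904): 2 x 90452 = 180904
```

The enumeration agrees with the constraint analysis: the solution found has O = 0 (and P = 1, as constraint 6 requires).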
GAME PLAYING
• Game playing is a fascinating area for AI researchers.

• In 1950, Claude Shannon described the mechanisms for playing chess in a
research paper.

• In the nineteenth century, Charles Babbage had considered programming
his Analytical Engine to play chess.

• He also considered a machine for playing tic-tac-toe.

• Later, Alan Turing described a chess-playing program.
• Around 1960, Arthur Samuel succeeded in building the first significant
operational game-playing program, for checkers.

• The program could play checkers, learn from its mistakes and
improve its performance.

• Games can be divided into 1 player, 2 player or multiplayer games.

• Single player games include 8-puzzle, 15- puzzle etc.

• Two player games include chess, checkers, tic- tac-toe etc.

• Multiplayer games which involve more than 2players include card games
such as bridge.

• All of these have different goals and hence different AI techniques can be
used to solve these.

• However one thing is common that the game tree needs to be generated
to find the solution for the game.

• The characteristics considered for specifying each game include the
knowledge representation scheme used, the time and space
requirements, and the algorithm used for finding an optimal solution.
• Game playing is different in many aspects from other search problems.

• First, the moves are not all in the player’s hands: it is difficult to
predict the opponent’s move.

• Second, since there are two players, there is always a time limit and a
chance of the opponent winning, so it is advisable to select a good
approximate move instead of waiting for the optimal move.
MINIMAX SEARCH PROCEDURE

• The minimax algorithm is a specialized search
procedure which returns the optimal sequence of
moves for a player in a zero-sum game.

• The initial state is the board position and information
on which player moves first.

• The successor function returns a list of (move, state)
pairs, each of which indicates a legal move and the
resulting state.
• The terminal test determines when the game is over.

• Ending states are called terminal state.

• The utility function assigns a numeric value to the
terminal states.

• For example, if the options are win, lose or draw the


terminal values might be +1, -1 and 0 respectively.

• These states and legal moves form the game’s search
tree.
• Given a minimax search tree, the optimal strategy
can be determined by examining the minimax value
of each node.

• This value indicates the utility of being in a certain
state, assuming both players play optimally.

• Given a choice, MAX chooses to move to a state of
maximum value and MIN chooses a state of
minimum value.
• The steps taken are:

1. Create start node as a MAX node with current board


configuration

2. Generate the search tree from the current position down to
depth d (a d-ply search).

3. Compute the static evaluation function at each of the
leaf nodes.

4. Propagate the values back up to the current position on the
basis of the player to move.
• If it is the turn of MAX player generate the successors
of the current position applying minimax to each of the
successors and return the maximum of the results.

• If it is the turn of MIN player generate the successors


of the current position applying minimax to each of the
successors and return the minimum of the results.

• Pick the operator associated with the child node whose


backed-up value determined the value at the root
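The steps above can be sketched as a short recursive function. The helper names (moves, evaluate, is_terminal) are assumptions for illustration, not standard names:

```python
def minimax(state, depth, is_max, moves, evaluate, is_terminal):
    """Depth-limited minimax: back up static values from the leaves,
    maximizing on MAX's turns and minimizing on MIN's turns."""
    if depth == 0 or is_terminal(state):
        return evaluate(state)
    scores = [minimax(s, depth - 1, not is_max, moves, evaluate, is_terminal)
              for s in moves(state)]
    return max(scores) if is_max else min(scores)

# Toy 2-ply game tree: inner lists are positions, integers are
# static evaluation values at the leaves
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
best = minimax(tree, 2, True,
               moves=lambda s: s,
               evaluate=lambda s: s,
               is_terminal=lambda s: isinstance(s, int))
# MIN backs up 3, 2 and 2 from the sublists; MAX then picks 3
```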
(Figure: a game tree with alternating levels of MAX nodes and MIN nodes;
diagram omitted.)
