
Artificial Intelligence

Wednesday, January 14, 2009


10:03 AM
Turing test
Can a computer fool a human into believing it is human?
Loebner Prize: an annual competition that awards the system that does best on a Turing-test-style evaluation
Agent
The textbook is organized around agents
All AI organized around rational agents that take actions based on inputs/percepts
- An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that
environment through actuators (effectors) .
- A percept sequence (percept history) is a complete history of everything an agent has
ever perceived.
Mathematically (abstractly), an agent's behavior can be described by an agent function that maps any given percept sequence to an action. The notion of a percept sequence and the associated agent function is useful for talking about things like an ideal sequence of actions, etc., but it is not something that is normally available to the agent when it is actually making decisions.
- IMPORTANT - The agent function IS NOT going to be the same as the agent's internal program (the "brains" of the agent), a real program running on the agent's architecture.
Agent-Environment Diagram
- Consider an example of an agent whose only mission is to play chess well or an agent that ...
- We may think that the ideal agent program would be one that knows what to do under all possible circumstances (by table lookup, for example). That's not realistic. Why?
In AI practice, people tend to use internal programs based on four possible types of agents:
simple reflex agents
If-then condition-action rules
model-based reflex agents
Maintain internal state: track what has/hasn't worked in the past and choose accordingly
goal-based agents
e.g., find the blue ball in the room
utility-based agents
Simple Reflex Agents
These agents select actions based solely on the current percept. Simple, but very limited in capability. They are only useful if the correct decision can (usually) be made based on the current percept--the environment is fully observable.
Reflex Agents that maintain state information
These are reflex agents that have the ability to store some state information that is dependent upon the percept history; state information can be simple or sophisticated.
Goal-based Agents
Decisions (moves, etc.) made based on current state and knowledge of a goal. These
programs often involve
search and planning. The choices are not based solely on state but state plus a desired
outcome.
Utility-based Agents
More rational behavior can be generated if the agent not only has knowledge of goals, but
also some measure
of the "utility" or desirability of intermediate states. The task becomes one of moving
from the current state to
a state with a higher utility value. The "goal" state usually has the highest utility of all.
("hill climbing")
 
Nature of environments
 
 
 
Chap 2
Wednesday, January 21, 2009
10:06 AM
 
Search: process of looking for a sequence of decisions/choices leading to a goal
Could be anything; getting out of a maze, making a student schedule, GPS nav…
Environment in which we're searching
Static or dynamic
Unchanging or Changing
Observable: partially observable/fully observable
All relevant information is available/unavailable
Discrete or continuous
Discrete: a countable set of distinct states and actions (step by step)
Continuous: states and actions vary smoothly
Deterministic vs. Stochastic
Stochastic: some element of randomness; repeating the same action won't return the same results
Episodic or sequential
Sequential: the order of decisions matters; decisions impact each other and are not standalone
Single agent or multi
1 agent making decisions to achieve a goal without an opponent
Simplest case
Fully observable, deterministic, episodic, static, discrete, single agent
Real world
Partially observable, stochastic, sequential, dynamic, continuous, multi agent
Problems
An initial state (starting config)
A set of actions available to the agent
A way to see if goal has been reached
A cost function that can calculate the cost of decisions made during a search
Solution to a problem is a path from the initial state to a goal state
Optimal solution: the path with the lowest cost
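As a sketch, these four components map naturally onto a small class skeleton; the method names below are illustrative assumptions, not a required interface.

```python
# A sketch of the four components of a search problem as a class skeleton.

class SearchProblem:
    def __init__(self, initial_state):
        self.initial_state = initial_state      # the starting configuration

    def actions(self, state):
        """The set of actions available to the agent in this state."""
        raise NotImplementedError

    def result(self, state, action):
        """The state reached by applying action in state (successor function)."""
        raise NotImplementedError

    def goal_test(self, state):
        """A way to see if the goal has been reached."""
        raise NotImplementedError

    def step_cost(self, state, action):
        """Cost of one decision; the path cost is the sum along the path."""
        return 1
```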
 
Create an agent to Solve Sudoku
Fully observable, discrete, deterministic, sequential, static
States: board configuration
Initial State: initial board configuration with hard numbers
Successor function: a way to define what is a legal move (can't have the same number in a square or line)
Goal test: some way to determine if we have reached a goal
Number of legal states: ~10^21 ... unreasonable to try all of them
We need a methodology, search type: trees
State space: the set of all possible configurations, including the initial state
Node: ... pointer to the node that preceded this one in the search; the action taken that led to the current state; g(n), the cost of getting to this state; the number of decisions made to get to this state from the initial state, also called the depth
Search tree: a way to organize the order in which states are explored
Search strategy: breadth-first search vs depth-first search
b = 9; d = total spaces (81) minus the number already filled in, roughly = H
Breadth first (memory hog): queue: generate the whole level. Pick one box, generate its children (the fringe, nodes we haven't explored yet) for the possible numbers it could be. Then go to the first child
Space: O(b^d)
Time: O(b^H)
Depth first: stack: generate all children, go to the first
Time: O(b^H)
Space: O(b*H)
Trees
Nodes at level i: b^i
Total = (b^(i+1) - 1)/(b - 1) ... O(b^i)
 
Iterative deepening search
In the case where you don't know the depth at which the goal lies, it's the best algorithm
Like depth-first search in that it has small space requirements
DFS where we continually go deeper and deeper, restarting with a larger depth limit each pass
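A minimal sketch of that idea, assuming hypothetical `children(node)` and `is_goal(node)` helpers supplied by the problem:

```python
def depth_limited_search(node, is_goal, children, limit):
    """Plain DFS that refuses to expand nodes below the given depth limit."""
    if is_goal(node):
        return node
    if limit == 0:
        return None
    for child in children(node):
        found = depth_limited_search(child, is_goal, children, limit - 1)
        if found is not None:
            return found
    return None

def iterative_deepening_search(root, is_goal, children, max_depth=50):
    """Re-run depth-limited DFS with limits 0, 1, 2, ...  Re-exploring the
    shallow levels is cheap because most nodes of a b-ary tree sit at the
    deepest level, so IDS keeps DFS's linear space while still finding the
    shallowest goal, like BFS."""
    for limit in range(max_depth + 1):
        found = depth_limited_search(root, is_goal, children, limit)
        if found is not None:
            return found
    return None
```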
 
Roborealm
Inside worker.cs
Inside doWork
testCreate.(command) (clicking will show available commands)
Variables being passed to Roomba sharp:
COG_x: center of gravity x
COG_y: center of gravity y
COG_area
Width
Height
 
Informed search
Best first search/greedy search
Use some estimate of the desirability of any state
f(state)
Desirability of being in that state
h(state)
Estimate of the cost of getting from a state to the goal
g(state)
actual cost of getting from an initial state to this state
f(n)=g(n)+h(n)
The h is what makes it an informed search; if h = 0, we are back to an uninformed search
 
A* Search
A* search is a special case of best-first (greedy) search
Same f, g, h, but the estimate has to use an admissible heuristic: one that never overestimates the remaining cost
Priority queue (ordered by f)
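A minimal A* sketch follows. The `neighbors(state)` generator (yielding `(next_state, step_cost)` pairs) and the heuristic `h` are assumed inputs; `h` must be admissible for the first goal popped off the priority queue to be optimal.

```python
import heapq
import itertools

def a_star(start, is_goal, neighbors, h):
    tie = itertools.count()        # tie-breaker so states are never compared
    frontier = [(h(start), next(tie), 0, start, [start])]  # (f, tie, g, state, path)
    best_g = {start: 0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)     # pop lowest f = g + h
        if is_goal(state):
            return path, g
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):         # found a cheaper route
                best_g[nxt] = g2
                heapq.heappush(frontier,
                               (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
    return None, float("inf")
```

Setting h = 0 here collapses f to g alone, giving uniform-cost search, which matches the note above that h is what makes the search informed.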
Local search/hill climbing algorithms
Can be used when we just want to find an optimal state
Don’t need to reproduce the decisions made to get to a state
Iterative improvement algorithms
Local Beam search
Keep k states instead of 1, choose top k of all their successors
Not the same as k searches run in parallel
Genetic algorithms/searches
Like chromosome switching/mutating
Based on fitness (like the f(n) calculation), pick encoded states (encoded to be represented as chromosomes)
Swap certain parts (crossover) and occasionally mutate
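For the hill-climbing case, here is a minimal sketch; `neighbors(state)` and `value(state)` are assumed to be supplied by the problem, and the code shows the classic drawback: it stops at any local maximum, which may not be the global one.

```python
def hill_climb(state, neighbors, value):
    """Repeatedly move to the best neighbor until no neighbor improves."""
    while True:
        best = max(neighbors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state          # no neighbor improves: a local maximum
        state = best              # move "uphill" to the better neighbor
```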
 
Chapter 6 L337 Gaming
                      | Deterministic                 | Chance
Perfect information   | Chess, checkers, go, Othello  | Backgammon, monopoly
Imperfect information | Battleship, blind tic tac toe | Bridge, poker, scrabble, nuclear war
Tic tac toe
Leaf nodes are scored +1 for a win, -1 for a loss, 0 for a tie
Min max: values are backed up the tree (MAX nodes take the maximum of their children's values, MIN nodes the minimum), so you pick the move with the highest backed-up value and therefore the best guaranteed outcome
O() same as depth-first search; space and time are the same because it is essentially a depth-first search
With chess at ~35 possible moves every turn and ~100 moves per game, 35^100 would take too long
Pruning (alpha beta)
Parts of game tree that we never have to look at
Generally helps chop off about half of the game tree; how much depends on the order in which moves are examined
Evaluation function: come up with factors that determine who's winning and how to weight (how important) each factor is
For non-deterministic games that have chance
You add a third layer, the chance layer (node values become probability-weighted averages over outcomes)
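A minimal minimax sketch with alpha-beta pruning, for the deterministic case; `is_leaf`, `children`, and `utility` are assumed inputs, with leaves scored +1/-1/0 as in the tic-tac-toe notes above.

```python
def alphabeta(state, is_leaf, children, utility, maximizing=True,
              alpha=float("-inf"), beta=float("inf")):
    if is_leaf(state):
        return utility(state)
    if maximizing:
        value = float("-inf")
        for child in children(state):
            value = max(value, alphabeta(child, is_leaf, children, utility,
                                         False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:     # MIN would never let play reach here: prune
                break
        return value
    value = float("inf")
    for child in children(state):
        value = min(value, alphabeta(child, is_leaf, children, utility,
                                     True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:         # MAX would never let play reach here: prune
            break
    return value
```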
 
Mapping
Start mobile robot at specific known location, create a map
localization
Given a map, but unknown starting location, have robot figure out where it is
Monte Carlo localization
 
Chapter 7
Knowledge based agents
Wumpus world
Agent
Knowledge base
Facts and rules
Inference engine
Wumpus world
Has a knowledge base, can sense stuff
Observable?? No | only local perception
Deterministic?? Yes | outcomes exactly specified
Episodic?? No | sequential at the level of actions
Static?? Yes | Wumpus and Pits do not move
Discrete?? Yes
Single-agent?? Yes |Wumpus is essentially a natural feature
World
Any environment
Model
Effort to formally represent a particular world
We say that M is a model of sentence S to mean that S is true in M
Need inference
Wumpus worlds
Satisfiable
Can be true in some world (sometimes true)
Unsatisfiable
Not true in any world
Tautology/valid
Always true
Modus ponens
If a implies b, and a, then b (by contrast, modus tollens: if a implies b, and ~b, then ~a)
Resolution
Inference rule for CNF conjunctive normal form: complete for propositional logic
Each clause can only have ORs (v) inside it, with ANDs (^) between the clauses
(a v b) ^ (~c)
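For example (a standard illustration, not from the original notes): resolving (a v b) with (~b v c) on the complementary pair b / ~b gives the resolvent (a v c). Repeatedly applying this rule until either the empty clause appears (a contradiction) or no new clauses can be derived is what makes resolution complete for propositional logic.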
7.8
a. Tautology/valid: smoke => smoke
Smoke | Smoke => Smoke
True  | True
False | True
b. Neither unsatisfiable nor valid
Smoke | Fire  | Smoke => Fire
True  | True  | True
True  | False | False
False | True  | True
False | False | True


c. Neither unsatisfiable nor valid
Smoke | Fire  | Smoke=>Fire | ~Smoke=>~Fire | (Smoke=>Fire)=>(~Smoke=>~Fire)
True  | True  | True        | True          | True
True  | False | False       | True          | True
False | True  | True        | False         | False
False | False | True        | True          | True


d. Valid/tautology
Smoke | Fire  | Smoke v Fire v ~Fire
True  | True  | True
True  | False | True
False | True  | True
False | False | True


e. Neither
Smoke | Fire  | Heat  | (Smoke v Heat)=>Fire | Smoke=>Fire | Heat=>Fire | ((Smoke v Heat)=>Fire) <=> ((Smoke=>Fire) v (Heat=>Fire))
True  | True  | True  | True                 | True        | True       | True
True  | True  | False | True                 | True        | True       | True
True  | False | True  | False                | False       | False      | True
True  | False | False | False                | False       | True       | False
False | True  | True  | True                 | True        | True       | True
False | True  | False | True                 | True        | True       | True
False | False | True  | False                | True        | False      | False
False | False | False | True                 | True        | True       | True


f. Valid/tautology
Smoke | Fire  | Heat  | Smoke=>Fire | (Smoke ^ Heat)=>Fire | (Smoke=>Fire) => ((Smoke ^ Heat)=>Fire)
True  | True  | True  | True        | True                 | True
True  | True  | False | True        | True                 | True
True  | False | True  | False       | False                | True
True  | False | False | False       | True                 | True
False | True  | True  | True        | True                 | True
False | True  | False | True        | True                 | True
False | False | True  | True        | True                 | True
False | False | False | True        | True                 | True


g. Valid/tautology
Big   | Dumb  | Big v Dumb v (Big => Dumb)
True  | True  | True
True  | False | True
False | True  | True
False | False | True


h. Neither (satisfiable but not valid)
Big   | Dumb  | (Big ^ Dumb) v ~Dumb
True  | True  | True
True  | False | True
False | True  | False
False | False | True
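These classifications can be checked mechanically by enumerating every model, which is exactly the brute-force procedure behind the tables above. A small sketch; representing each sentence as a Python function of its symbols is an assumption made for brevity, not a general parser.

```python
from itertools import product

def classify(sentence, num_symbols):
    results = [sentence(*values)
               for values in product([True, False], repeat=num_symbols)]
    if all(results):
        return "valid (tautology)"
    if not any(results):
        return "unsatisfiable"
    return "neither (satisfiable but not valid)"

implies = lambda p, q: (not p) or q     # P => Q is equivalent to ~P v Q

print(classify(lambda s: implies(s, s), 1))          # (a): valid (tautology)
print(classify(lambda s, f: implies(s, f), 2))       # (b): neither
print(classify(lambda b, d: (b and d) or not d, 2))  # (h): neither
```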


 
 
Test review
Sunday, March 01, 2009
2:15 AM
 
Environment in which we're searching
Static or dynamic
Unchanging or Changing
Observable: partially observable/fully observable
All relevant information is available/unavailable
Discrete or continuous
Discrete: a countable set of distinct states and actions (step by step)
Continuous: states and actions vary smoothly
Deterministic vs. Stochastic
Stochastic: some element of randomness; repeating the same action won't return the same results
Episodic or sequential
Sequential: the order of decisions matters; decisions impact each other and are not standalone
Single agent or multi
1 agent making decisions to achieve a goal without an opponent
Simplest case
Fully observable, deterministic, episodic, static, discrete, single agent
Real world
Partially observable, stochastic, sequential, dynamic, continuous, multi agent
Types of agents
Simple reflex
Take sensory data, act according to a condition-action rule
Reflex with state
Sensor-> get current state, figure out what an action would do, do action
Goal based
Will action get agent closer to goal?
Utility
How happy an agent will be in such a state
Learning agents
Improves its behavior based on experience
Problem types
Single state problem
Deterministic, fully observable; the agent knows exactly where it is; the solution is a sequence of actions toward the goal
Conformant problem
Non-observable; the agent doesn't know where it is
Contingency problem
Nondeterministic, partially observable; sensors provide new info about the state; the solution is a policy
Exploration problem
Unknown state space (e.g., the internet)
Strategies are evaluated along the following dimensions:
Completeness | does it always find a solution if one exists?
time complexity | number of nodes generated/expanded
space complexity |maximum number of nodes in memory
Optimality | does it always find a least-cost solution?
Time and space complexity are measured in terms of
b | maximum branching factor of the search tree
d| depth of the least-cost solution
m | maximum depth of the state space (may be infinity)
Types of uninformed
Breadth-first search
Expand shallowest unexpanded node
Implementation:
fringe is a FIFO queue, i.e., new successors go at end
 

Costs: space and time are both O(b^(d+1)); very inefficient with space
Expands children (generates them) but doesn't check them until the previous generation has generated all of its children
Uniform-cost search
Equivalent to breadth-first if step costs all equal
Depth-first search
Last In, first out
 

Much better with space, only linear!


Will go deep; if m, the total possible depth (can be infinity), is much larger than d, the depth where the goal is, this can be very inefficient
Depth-limited search
Same as depth-first but will not go all the way down to m; uses a depth limit l, which must be >= d
Iterative deepening search
Covers the tree level by level like breadth-first search, by re-running depth-limited search with an increasing limit; see the sketch below
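A minimal sketch contrasting the two basic blind searches: the code is identical except for which end of the fringe the next node comes from. FIFO (popleft) gives breadth-first, LIFO (pop) gives depth-first. The `children` and `is_goal` helpers are assumed inputs.

```python
from collections import deque

def tree_search(root, is_goal, children, breadth_first=True):
    fringe = deque([root])
    while fringe:
        node = fringe.popleft() if breadth_first else fringe.pop()
        if is_goal(node):
            return node
        fringe.extend(children(node))   # new successors go at the end
    return None
```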
 
Review guide, Chapters 1,2,3,4,6
Chapter 1 – Introduction
Know about the Turing test, what it is, why it is historically important, etc. You may want to
formulate an opinion about its relevance.
•Can a robot/computer fool a human over chat (e.g., AIM)? Historically the first operational test proposed for machine intelligence; run as a competition (the Loebner Prize)
What is a rational agent?
•anything that takes percepts of its surroundings/environment and takes actions based on them
Chapter 2 – Intelligent Agents
What is an agent? Does rational imply perfection?
•No; rational means the best possible/logical decision given the percepts, not perfection
Why is it important to characterize precisely the environment in which an agent will operate? Be
able to give a P.E.A.S. characterization of an environment.
•Performance measure, Environment, Actuators, Sensors: Taxi:safety/streets/steering-gas-
brake/mirrors-gauges
Be familiar with the properties of task environments. Don’t memorize them, these types of
properties make good True-False questions.
•OPSDDCDSESSM
Describe how a Simple Reflex agent operates as opposed to a Model-based reflex agent.
•simple reflex: condition (determined by sensors) -> action rule; model-based: maintains internal state based on the percept history
You may see a question in a form similar to Exercise 2.5.
a. Robot soccer player: P: goals scored vs. goals scored against; E: soccer field; A: kicker/foot, head/body; S: where the ball is, where other players are, where the agent is in relation to the field/goal
b. Internet book shopping: P: was the book ordered correctly?; E: the internet; A: filling in HTML forms, placing the order; S: site, price, ISBN
c. Mars rover: P: information relayed, ability to get around; E: Mars terrain; A: wheels, what else?; S: many: wheel sensors, thermometers, cameras...
d. Math theorem prover: P: was the theorem proved? How close it came to doing so; A: a way to write/convey mathematical language; S: theorem detector (OCR?)
Chapter 3 – Uninformed Search
Know the four components of a problem description (p 62).
•Initial state, possible actions/successor states, goal test, path cost
Know when uninformed (blind) search strategies are applicable (p 73).
•No additional information about states beyond that provided at problem definition
Know how each of the following blind searches operates and have some general ideas about the
circumstances under which each is useful: breadth-first (BFS), uniform cost, depth-first (DFS),
depth-limited, iterative deepening. You don’t need to memorize Figure 3.17 but you should be
able to explain, for example, why DFS has a smaller memory footprint than BFS, or why iterative
deepening does not perform as poorly as our intuition would suggest.
•BFS generates the fringe and keeps all of it in memory until much later; DFS exhausts one route before moving on to the next, generating very little extra fringe
You may see questions similar to Exercises 3.7-3.9.
3.7 four components of a problem: Initial state, Successors, Cost, Goal test
a. Map coloring: I: empty map; S: trying each of the colors in the next spot, checking for validity (no adjacent same colors); C: number of colors needed; Test: map complete with no adjacent same colors and no more than 4 used
b. Monkey: I: monkey, no bananas; S: stacking of crates in different locations; C: how many moves; G: got bananas?
c. I: illegal record somewhere in a set of records; S: test the next record; C: number of tests needed to find the illegal one; G: found the illegal record?
d. Jugs: I: starting state of water in the jugs; S: pour a jug somewhere; C: how much water/how many pours; G: one gallon?
3.8
a. Boom roasted
b. Boom roasted
c. Bidirectional would work nicely: the goal side would start at 15 and divide by 2, or subtract 1 and then divide by 2, while the start side would proceed normally
d. 2
e. Yes: allow any integer values and follow that path
3.9 The classic river-crossing puzzle: get two groups across a river on a 2-person boat
Chapter 4 – Informed Search
Understand what “informed” means in this chapter. It doesn’t mean omniscient.
Best first search/greedy search
Use some estimate of the desirability of any state
f(state)
Desirability of being in that state
h(state)
Estimate of the cost of getting from a state to the goal
g(state)
actual cost of getting from an initial state to this state
f(n)=g(n)+h(n)
The h is what makes it an informed search; if h = 0, we are back to an uninformed search
Understand the basic idea of a greedy search.
•takes most desirable action
Understand what the A* approach adds to the basic greedy strategy. Understand why we need
to use admissible heuristics with A* search to be assured of an optimal search procedure. Know
what an admissible heuristic is.
•guarantees A*'s optimality. Admissible means the heuristic never overestimates the remaining cost; being optimistic is fine, overestimating is not. Implemented with a priority queue
Know when “local search” is useful and what the drawbacks of approaches such as hill-climbing
are. Have a general idea of how genetic algorithm approaches work and what the limitations are
of this approach. I will not ask you to do an exercise where you have to work through a genetic
algorithm approach.
•useful when there is no explicit goal or path cost, only a state quality to optimize; genetic algorithms encode states as GENEs and, like evolution, mix and match them
You may see questions similar to Exercises 4.2, 4.3, 4.11.
4.2 weird question. When w = 1 it is f = g + h, i.e., A*, which is optimal; w = 0 uses the current cost only; w = 2 uses the estimated remaining cost only
4.3 prove
a. There is no preference as to which child is chosen, just FIFO
b. No idea
c. No idea
4.11 boom roasted
Chapter 6 – Adversarial Search
Be able to draw a game tree for an adversarial situation and apply the minimax strategy to the
tree. Be able to describe alpha-beta pruning and illustrate its application.

You may see questions similar to Exercises 6.1, 6.3.
 
 
 
Test 2
Monday, March 16, 2009
10:07 AM
 
Chapter 8
First order logic
As opposed to propositional logic
Propositional logic has propositions: P, Q, R
e.g., Q: today is Tuesday
First-order logic adds:
Objects
Relations between objects
Functions
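For example (standard textbook illustrations, not from these notes): objects are things like John and Richard; a relation holds between objects, e.g. Brother(Richard, John); a function maps objects to objects, e.g. LeftLeg(John), which names an object rather than stating a fact.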
 
 
