
Artificial Intelligence

Dr. Hashem Tamimi


Palestine Polytechnic University
2024
Course Outline
• Introduction to AI
• Intelligent searching and game playing
• Genetic Algorithms
• Probabilistic models
• Handling Uncertainty
• Introduction to machine learning
• Selected Topics
References
• G. F. Luger, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 5th Ed., Addison Wesley, 2005.
• S. Russell & P. Norvig, Artificial Intelligence: A Modern Approach, 2nd Ed., Prentice Hall, 2002.
Prerequisites

• Data Structures
• Computer Algorithms
• Logic
• Probability
What is AI
AI: the branch of computer science concerned with the automation of intelligent (human/animal/living) behavior.

What is intelligence in AI?


What is intelligence?
• Turing Test
Alan Turing's 1950 paper "Computing Machinery and Intelligence"
[Figure: an interrogator converses with a hidden human and a hidden computer and must tell which is which.]
Agents
Agents
• Artificial intelligence is defined as the study of rational agents.

• An agent is anything that can be viewed as:


• perceiving its environment through sensors and
• acting upon that environment through actuators
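The percept-to-action interface above can be sketched as a minimal class; the percept values and the `EchoAgent` subclass are illustrative assumptions, not examples from the slides.

```python
class Agent:
    """Minimal agent skeleton: it perceives its environment through sensors
    (here, a percept passed in) and acts through actuators (here, the
    returned action). Concrete agents override program()."""

    def program(self, percept):
        """Map the current percept to an action."""
        raise NotImplementedError


class EchoAgent(Agent):
    """Hypothetical agent that simply tags each percept with an action."""

    def program(self, percept):
        return f"act-on:{percept}"


agent = EchoAgent()
actions = [agent.program(p) for p in ["light", "dark"]]
```

Each agent type discussed next differs only in how `program` maps percepts to actions.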
Agents
• Examples of agents:
• A robotic agent has cameras and infrared range finders acting as sensors, and various motors acting as actuators.
• A software agent has keystrokes, file contents, and received network packets as sensors, and screen output, files, and sent network packets as actuators.
• A human agent has eyes, ears, and other organs acting as sensors, and hands, legs, mouth, and other body parts acting as actuators.
Agents
• A rational agent is one that does the right thing.
• A performance measure defines the criterion for success.
Agents

• Example: Self driving car


• Performance: Safety, time, comfort
• Environment: Roads, other vehicles, road signs, pedestrian
• Sensors: Camera, GPS, speedometer, accelerometer
• Actuators: Steering, accelerator, brake, signal
Agents
• Example 2:
• The game of sokoban
• Performance: ____________
• Environment: _____________
• Sensors: _________________
• Actuators: ________________
Types of agents
• Simple reflex agent
• Model-based agent
• Goal-based agent
• Utility-based agent
• Learning agent
Types of agents
• Simple reflex agent:
• Operates on a set of condition-action rules, also known as "if-then" rules or production rules.
• Does not consider past percepts.
• Does not consider the consequences of the action.
• Does not maintain any internal state representing the world.
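A simple reflex agent can be sketched as a handful of condition-action rules. The vacuum-world percepts below are a standard textbook illustration assumed here, not an example from the slides; note the agent looks only at the current percept, keeping no history or world model.

```python
def reflex_vacuum_agent(percept):
    """Condition-action ("if-then") rules for a two-location vacuum world.
    percept is (location, status); no past percepts or internal state."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"
```

For example, `reflex_vacuum_agent(("A", "Dirty"))` fires the first rule and returns `"Suck"`.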
Types of agents
• Model-based agent:
• Continuously maintains an internal model (representation) of the environment.
• Uses this model to make decisions by considering different possible actions and their outcomes.
• Selects the action that leads to the best outcome according to its goals or objectives.
Types of agents
• Goal-based agent:
• Builds on top of the model-based agent.
• The objective (goal state) is clearly defined.
• There are different states (alternatives) that can be reached by a set of actions.
• The agent selects actions that it believes will move it closer to achieving its objectives.
Types of agents
• Utility-based agent:
• Considers the best actions to satisfy a set of preferences (including reaching the goal).
• This is usually done using a utility function.
• The utility function represents the agent's preferences or goals, capturing the desirability of different outcomes.
Types of agents
• Learning agent:
• Starts without knowledge of how to act to reach the goal or preference.
• Has a learning capability that allows it to adjust its actions.
• A performance measure judges the agent's performance after each action.
• Feedback from the performance measure allows the agent to update its actions and improve its performance.
State Representation and
search space
Sokoban game
• The elements of the game:
• Player
• Walls
• Boxes
• Storage locations (equal in number to the boxes)
• Conditions:
• Boxes can only be pushed forward (they cannot be pulled)
• The player can push only one box at a time
• The player moves up, down, left, or right
• The player cannot pass through boxes or walls
• Goal: each box is in a storage location.
Sokoban game
• Representation: 2D array
• Empty space: 0
• Wall: 1
• Box: 2
• Storage location: 3
• Box in a storage location: 4
• Person: 5
• Q: how to represent the goal state?


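Using the encoding above, a board and its goal test might be sketched as follows; the tiny level layout is a made-up assumption for illustration.

```python
# Sokoban board as a 2D array with the slides' encoding:
# 0 empty, 1 wall, 2 box, 3 storage location, 4 box on storage, 5 player.
board = [
    [1, 1, 1, 1, 1],
    [1, 5, 2, 3, 1],   # player, loose box, storage location
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def is_goal(state):
    """Goal test: no loose box (2) remains, i.e. every box is a 4."""
    return all(cell != 2 for row in state for cell in row)
```

Here `is_goal(board)` is `False`; after the player pushes the box one cell to the right, cell (1, 3) becomes 4 and the test succeeds.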
Sokoban game
• Representation: 2D array (same encoding as above: 0 empty, 1 wall, 2 box, 3 storage location, 4 box in storage, 5 person)
• Q: which of the four states (S1)-(S4) shown are valid?
Sokoban search space
Sokoban search space

Q: How many states are there ?


Search Space
• Search space
• Start state
• Goal state
• Action
• Solution
The branching factor

• Tic-Tac-Toe: 4
• Chess: 35
• Go: 200
• Arimaa: 17,000
Searching
• A search problem consists of:
• A state space
• A successor function (with actions, costs), e.g. actions "N" and "E" mapping a state to successor states
• A start state and a goal test
• A solution is a sequence of actions (a plan) which transforms the start state into a goal state
State Space Sizes?
• World state:
• Agent positions: 120
• Food count: 30
• Ghost positions: 12
• Agent facing: NSEW
• How many:
• World states? 120 × 2^30 × 12^2 × 4
• States for pathing? 120
• States for eat-all-dots? 120 × 2^30
State Space Graphs
• State space graph: a mathematical representation of a search problem
• Nodes are (abstracted) world configurations
• Arcs represent successors (action results)
• The goal test is a set of goal nodes (maybe only one)
• In a state space graph, each state occurs only once!
• We can rarely build this full graph in memory (it's too big), but it's a useful idea
Depth-First Search
Strategy: expand the deepest node first
Implementation: the fringe is a LIFO stack
[Figure: example graph with start state S and goal G, and the search tree below it showing the expansion order.]
Search Algorithm Properties
• Infinite branch!
Search Algorithm Properties
• Complete: guaranteed to find a solution if one exists?
• Optimal: guaranteed to find the least-cost path?
• Time complexity?
• Space complexity?
• Cartoon of a search tree:
• b is the branching factor
• m is the maximum depth
• solutions at various depths
[Figure: tree with 1 node at the root, then b nodes, b^2 nodes, …, b^m nodes, over m tiers.]
• Number of nodes in the entire tree?
• 1 + b + b^2 + … + b^m = O(b^m)
Depth-First Search (DFS) Properties
• What nodes does DFS expand?
• Some left prefix of the tree; it could process the whole tree!
• If m is finite, takes time O(b^m)
• How much space does the fringe take?
• Only has siblings on the path to the root, so O(bm)
• Is it complete?
• m could be infinite, so only if we prevent cycles (more later)
• Is it optimal?
• No, it finds the "leftmost" solution, regardless of depth or cost
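The LIFO-stack formulation of DFS can be sketched as follows, with the cycle prevention mentioned above built in. The tiny successor graph is a made-up example, not the one in the figure.

```python
def depth_first_search(start, goal, successors):
    """DFS with an explicit LIFO stack as the fringe."""
    stack = [(start, [start])]          # fringe of (state, path) pairs
    visited = set()                     # prevents cycles on infinite branches
    while stack:
        state, path = stack.pop()       # LIFO: deepest node first
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        # reversed() so the leftmost successor is expanded first
        for nxt in reversed(successors.get(state, [])):
            stack.append((nxt, path + [nxt]))
    return None                         # no solution found

graph = {"S": ["d", "e", "p"], "d": ["b", "c"],
         "e": ["h", "r"], "r": ["f"], "f": ["G"]}
```

On this graph, `depth_first_search("S", "G", graph)` explores the leftmost branch under `d` fully before finding the goal via `e`.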
Breadth-First Search
Strategy: expand the shallowest node first
Implementation: the fringe is a FIFO queue
[Figure: the same example graph with start state S and goal G; the search tree shows expansion proceeding in tiers.]
Breadth-First Search (BFS) Properties
• What nodes does BFS expand?
• Processes all nodes above the shallowest solution
• Let the depth of the shallowest solution be s
• Search takes time O(b^s)
• How much space does the fringe take?
• Has roughly the last tier, so O(b^s)
• Is it complete?
• s must be finite if a solution exists, so yes!
• Is it optimal?
• Only if costs are all 1 (more on costs later)
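The FIFO-queue version differs from DFS only in the fringe data structure; a sketch over the same made-up successor graph:

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """BFS with a FIFO queue as the fringe: shallowest nodes first."""
    queue = deque([(start, [start])])   # fringe of (state, path) pairs
    visited = {start}
    while queue:
        state, path = queue.popleft()   # FIFO: shallowest node first
        if state == goal:
            return path
        for nxt in successors.get(state, []):
            if nxt not in visited:      # each state enqueued at most once
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

graph = {"S": ["d", "e", "p"], "d": ["b", "c"],
         "e": ["h", "r"], "r": ["f"], "f": ["G"]}
```

BFS sweeps tier by tier, so the path it returns is the shallowest one (optimal when every action costs 1).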
Quiz: DFS vs BFS
• In the previous example, what are the visited nodes of the tree using BFS?
• Using DFS?
• What is the solution path from root to goal?
Another example
DFS vs BFS
• When will BFS outperform DFS?

• When will DFS outperform BFS?


Quiz

• Depth-first search:
• A → E → i → m → n → o → p → I → H → g

• Breadth-first search:
• A → e → f → b → i → j → c → m → n → g
Blind Search vs Intelligent Search
• Blind search:
• Includes BFS and DFS.
• During the search process, the agent has no clue whether it is far from or near a goal!
• Heuristic search:
• E.g., Best-First Search
• E.g., A* search
• The agent has some hint (a heuristic) of how far it is from the goal
• Heuristic search is intelligent search
Heuristic Search

The agent has some hint (a heuristic) of how far it is from the goal
Heuristic Search
• A heuristic function: a function that estimates how close a state is to a goal
• Designed for a particular search problem
• Examples: Manhattan distance, Euclidean distance for pathing
[Figure: a grid-pathing example with heuristic values 10, 5, and 11.2 at different states.]
Heuristic Search
• A heuristic function h:
• Takes one state s as input
• Produces a numeric value v = h(s) as output
• The value v reflects how far the state s is from the goal state
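The two pathing heuristics named earlier can be sketched as functions of a single state; the fixed goal cell is an assumption for illustration.

```python
import math

GOAL = (4, 3)   # hypothetical goal cell, for illustration only

def manhattan(state):
    """h(s) = |dx| + |dy|: grid distance ignoring obstacles."""
    x, y = state
    return abs(x - GOAL[0]) + abs(y - GOAL[1])

def euclidean(state):
    """h(s) = straight-line distance to the goal."""
    x, y = state
    return math.hypot(x - GOAL[0], y - GOAL[1])
```

From the origin, `manhattan((0, 0))` is 7 while `euclidean((0, 0))` is 5.0; both take a state and return a numeric estimate of its distance from the goal.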
Traveling Salesman Problem
• Complexity of the exhaustive search is (n-1)!
• Time-consuming when n is large.
Traveling Salesman Problem
• Heuristic function: select the nearest city next (greedy)
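The greedy nearest-city heuristic can be sketched as follows; the city coordinates are made up for illustration.

```python
import math

# Hypothetical city coordinates, for illustration only.
cities = {"A": (0, 0), "B": (1, 0), "C": (5, 0), "D": (1, 1)}

def nearest_neighbor_tour(start):
    """Greedy tour: from the current city, always travel to the
    closest unvisited city until every city has been visited."""
    tour = [start]
    unvisited = set(cities) - {start}
    while unvisited:
        cur = cities[tour[-1]]
        nxt = min(unvisited, key=lambda c: math.dist(cur, cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

This runs in O(n^2) instead of (n-1)!, but it is only a heuristic: the tour it returns is not guaranteed to be the shortest one.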
Heuristic Search
• Consider different heuristics for the 8-puzzle problem:
Heuristic Search
• Consider different heuristics for the 8-puzzle problem:
• h1: the number of misplaced tiles. It ignores the distances the tiles must be moved (misplacedTilesHeuristic).
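h1 might be implemented like this; the flat-tuple state format and the particular goal layout are assumed conventions, not specified on the slides.

```python
# 8-puzzle state as a flat tuple read row by row; 0 is the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout

def misplaced_tiles_heuristic(state):
    """h1: count tiles not in their goal position (the blank is not
    counted). Ignores how far each misplaced tile must move."""
    return sum(1 for tile, target in zip(state, GOAL)
               if tile != 0 and tile != target)
```

For the state `(1, 2, 3, 4, 5, 6, 0, 7, 8)`, tiles 7 and 8 are out of place, so h1 = 2, even though each needs only one move.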
Heuristic Search
• Tic-Tac-Toe:
• Complexity of exhaustive search = 9 × 8 × 7 × … = 9!
• With symmetry reduction: 12 × 7!
Heuristic Search
• Tic-Tac-Toe (continued):
• Heuristic: move to the board in which X has the most winning lines.
Heuristic Search
Admissibility
• An admissible heuristic is a heuristic that is guaranteed to find the
shortest path from the current state to the goal state.
• To check if a heuristic function h is admissible or not, we need to
check that:
ℎ 𝑠 ≤ ℎ∗ 𝑠 ∀𝑠 ∈ 𝑆, where 𝑆 is the state space
• ℎ∗ 𝑠 is an optimal heuristic function
Quiz
• Which heuristic is admissible?
Consistent
• Consistent (or monotonic)
• h is consistent if:
1. h(goal) = 0
2. h(s) ≤ c(s, n) + h(n)
for all s ∈ S, where n is any direct neighbor of state s
[Figure: states s, n, and goal g, with edge cost c(s, n).]

Note
If h is consistent ➔ h is admissible
But the opposite is not true
A* search
• h(s) is the heuristic value for state s
• g(s) is the distance from the start to state s
• f(s) = g(s) + h(s)
• h(s) must be admissible for all s
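A* with f(s) = g(s) + h(s) can be sketched as a priority-queue search; the graph, edge costs, and heuristic values below are made up for illustration (the heuristic shown is admissible for them).

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: the frontier is a priority queue ordered by
    f(s) = g(s) + h(s); h must be admissible for optimality."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}                          # cheapest g found per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in neighbors.get(state, []):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier,
                               (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical graph: (neighbor, edge cost) lists, plus heuristic values.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 2)]}
h_values = {"S": 4, "A": 3, "B": 2, "G": 0}
```

Here `a_star("S", "G", graph, h_values.get)` returns the least-cost path S → A → B → G with cost 5, rather than the shorter-looking direct edge A → G of cost 6.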
Quiz
Solve using:
• Breadth-First Search
• Depth-First Search
• Best-First Search
• Best-First Search including the cost on edges
Quiz
• Which heuristic is monotonic?
MiniMax
• Nim-game
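Minimax on the Nim game can be sketched as follows. The exact rule set used here (a single pile, remove 1-3 sticks per turn, taking the last stick wins) is an assumption for illustration, since the slide only names the game.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def minimax(sticks, maximizing):
    """Minimax value of a one-pile Nim position from the maximizer's
    perspective: +1 if the maximizer can force a win, -1 otherwise."""
    if sticks == 0:
        # The previous player took the last stick and won.
        return -1 if maximizing else +1
    values = [minimax(sticks - take, not maximizing)
              for take in (1, 2, 3) if take <= sticks]
    # MAX picks the best value for itself; MIN picks the worst for MAX.
    return max(values) if maximizing else min(values)
```

Under these rules, positions with a multiple of 4 sticks (e.g. `minimax(4, True)`) are losing for the player to move; all others are winning.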
MiniMax
pruning
