
CS 4613

ARTIFICIAL INTELLIGENCE

Professor Edward K Wong


NYU Tandon School of Engineering

Chapter 1: Introduction
Chapter 2: Intelligent Agents

1
Introduction
• What is AI?
– The science of designing computer systems to solve problems
that require human intelligence and expertise.
• AI is a very broad field with many subfields:
– Natural language processing
– Knowledge representation
– Automated reasoning
– Machine learning
– Computer vision
– Robotics
– etc.

2
Introduction

[Figure: relationship among AI, Machine Learning, Deep Learning, Reinforcement Learning, and Generative AI]

3
Example AI Systems
• Deep Blue (IBM) defeated the reigning world chess champion Garry
Kasparov in 1997.
• Watson (IBM) is a question-answering computer system capable of
answering questions posed in natural language. It beat former
human winners on the quiz show Jeopardy! in 2011.
• AlphaGo (Google DeepMind) beat Lee Sedol, one of the world's
top-ranked players, in the board game Go in 2016.
• In Dec 2020, Waymo (https://waymo.com) became the first service
provider to offer driverless taxi rides to the general public, in Phoenix,
AZ.
• ChatGPT (OpenAI), launched in Nov 2022, is an AI-powered large-
language-model chatbot, capable of generating human-like answers
based on context and past conversations.

4
Brief History of AI
• Early days
– Boolean circuit model of brain (McCulloch & Pitts 1943)
– Turing’s 1950 paper on “Computing Machinery and Intelligence”
• Simple AI programs (logic-based)
– Samuel’s checkers program, Newell & Simon’s Logic Theorist,
Gelernter’s Geometry Engine, etc., in the 1950s; Robinson’s algorithm
for logical reasoning (1965).
– At the Dartmouth meeting in 1956, the term “Artificial Intelligence” was adopted.
• Knowledge-based systems
– Development of rule-based systems and their commercialization (1969-1988).

5
Brief History of AI
• Statistical approaches (1990 - )
– Use probability to model uncertainty
– More focus on real world data and applications
• Modern approaches (2012 - )
– Deep learning, reinforcement learning, large language models
– Big data and large models
– Real world applications

6
Introduction

• Four approaches to defining AI or designing AI systems:
– Thinking humanly
– Thinking rationally
– Acting humanly
– Acting rationally

In this course, we will be following the fourth approach: Acting rationally.

7
Acting humanly
• The Turing test approach (Turing, 1950)
– Operational test for intelligent behavior: the Imitation Game
– Turing predicted that by 2000, a machine might have a 30% chance
of fooling a lay person for 5 minutes.

8
Thinking humanly
• Cognitive modeling approach
– Try to determine how humans think inside their brains
– Attempt to develop scientific theories of internal activities
of the brain

9
Thinking rationally
• Laws of thought approach
– Several Greek schools developed various forms of logic:
notation and rules of derivation for thoughts
– Problems:
• Big difference between solving problems in principle
and in practice.
• Not easy to take informal knowledge and state it in
formal logical notation.

10
Acting rationally
• Rational agent approach
– Approach taken in this course
– Rational behavior: act to achieve the best outcome or,
when there is uncertainty, the best expected
outcome.
– Attempt to maximize goal achievement given the
available information.

11
Agents
• An agent is any entity that perceives its environment
through sensors and acts upon its environment through
actuators.
• Examples
– Human agents
• Sensors: eyes, ears, etc.
• Actuators: hands, legs, etc.
– Robotic agents
• Sensors: cameras and laser range finders.
• Actuators: motorized manipulators.

12
Agents

• An agent function f maps a percept sequence to an action:
f : P* → A
• f is implemented by an agent program running on a physical computer architecture
– agent = program + architecture
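
A minimal sketch of this abstraction in Python (the class and method names below are illustrative, not from the textbook):

from typing import Any, Callable, List

Percept = Any
Action = Any

class Agent:
    """Implements f : P* -> A by recording the percept sequence and delegating to a program."""

    def __init__(self, program: Callable[[List[Percept]], Action]):
        self.program = program              # the agent program
        self.percepts: List[Percept] = []   # the percept sequence P* seen so far

    def step(self, percept: Percept) -> Action:
        self.percepts.append(percept)       # record the new percept
        return self.program(self.percepts)  # map the full sequence to an action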

13
Rational agents
• This course is about the design of rational agents, following
the acting rationally approach.
• Rational agents: for each possible percept sequence P*, a
rational agent should select an action A that is expected to
maximize its performance measure, given the evidence
provided by the percept sequence and whatever built-in
knowledge the agent has.
• Performance measure: an objective criterion for the success
of an agent's behavior.
– e.g., performance measure of a vacuum-cleaner agent
could be the amount of dirt cleaned up, amount of time
taken, amount of electricity consumed, amount of noise
generated, etc.
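
For illustration, such a vacuum-cleaner performance measure could combine these criteria into a single score; the weights below are made up purely for illustration:

def vacuum_performance(dirt_cleaned, time_taken, electricity_used, noise_made):
    # Reward the amount of dirt cleaned; penalize time, electricity, and noise.
    # All weights are arbitrary, illustrative choices.
    return 10.0 * dirt_cleaned - 1.0 * time_taken - 0.5 * electricity_used - 0.2 * noise_made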

14
Agent Types
• Table-driven agents
– Build a table that contains percept sequences and actions
– Not a good approach except for the simplest problems.
• More sophisticated agent types
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents

15
The vacuum-cleaner world

• Environment: squares A and B
• Percepts: the current location and its status, e.g., [A, Dirty]
• Actions: Left, Right, Suck

16
A Table-Driven Agent

[Table: partial tabulation of a simple vacuum-cleaner agent function, mapping percept sequences to actions]
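
A minimal sketch of a table-driven agent in Python, keyed by the full percept sequence; the table entries shown are only a few illustrative rows:

# Partial lookup table: percept sequence -> action
TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

percepts = []  # the percept sequence observed so far

def table_driven_agent(percept):
    # Append the new percept and look up the action for the whole sequence.
    percepts.append(percept)
    return TABLE.get(tuple(percepts))

The table grows with every possible percept sequence, which is why this approach only works for the simplest problems.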

17
Simple Reflex Agents
• Select action on the basis
of only the current
percept.
• Implement through
condition-action rules
– e.g. “If dirty then suck”
in the vacuum-cleaner
problem.

18
The vacuum-cleaner world

function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status == Dirty then return Suck
    else if location == A then return Right
    else if location == B then return Left
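
The same agent program as a runnable Python sketch (the string constants stand in for the percept values):

def reflex_vacuum_agent(location, status):
    # A simple reflex agent: the decision uses only the current percept.
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

For example, reflex_vacuum_agent("A", "Dirty") returns "Suck".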

19
Model-Based Reflex Agents

• To tackle partially observable environments:
– Maintain an internal state
– Over time, update the internal state using world knowledge
⇒ Model of the World
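
A minimal sketch of a model-based reflex agent, assuming hypothetical update_state and rules helpers supplied by the designer:

class ModelBasedReflexAgent:
    """Keeps an internal state so it can act in partially observable environments."""

    def __init__(self, update_state, rules):
        self.state = None                 # internal model of the world
        self.last_action = None
        self.update_state = update_state  # hypothetical: (state, last_action, percept) -> new state
        self.rules = rules                # hypothetical: condition-action rules, state -> action

    def step(self, percept):
        # Fold the new percept (and the effect of the last action) into the internal state,
        # then select an action from the condition-action rules.
        self.state = self.update_state(self.state, self.last_action, percept)
        self.last_action = self.rules(self.state)
        return self.last_action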

20
Goal-Based Agents
• A set of goals is specified.
• The agent keeps track of
the world state as well as
the set of goals it is trying
to achieve, and chooses
an action that will
(eventually) lead to the
achievement of the goals.
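
One common way to choose such an action is to search for a plan that reaches a goal state. The sketch below uses breadth-first search and assumes a hypothetical successors(state) helper that returns (action, next_state) pairs over hashable states:

from collections import deque

def goal_based_action(state, goals, successors):
    # Search for a sequence of actions leading to a goal state and return
    # the first action of that plan (None if already at a goal or no plan exists).
    frontier = deque([(state, [])])
    visited = {state}
    while frontier:
        current, plan = frontier.popleft()
        if current in goals:
            return plan[0] if plan else None
        for action, next_state in successors(current):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None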

21
Utility-Based Agents
• A utility-based agent uses a model of the
world along with a utility
function that measures its
preferences among the
different states of the
world, then chooses the
action that leads to the
best expected utility.
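
A minimal sketch of expected-utility action selection, assuming hypothetical transition_model and utility helpers:

def best_action(state, actions, transition_model, utility):
    # transition_model(state, action): hypothetical helper returning (probability, next_state) pairs
    # utility(state): hypothetical helper scoring how desirable a state is
    def expected_utility(action):
        return sum(p * utility(next_state)
                   for p, next_state in transition_model(state, action))
    # Choose the action whose expected utility is highest.
    return max(actions, key=expected_utility)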

22
