COURSE MATERIAL

SUBJECT: ARTIFICIAL INTELLIGENCE (19A05502T)

UNIT: 5

COURSE: B.TECH

DEPARTMENT: CSE

SEMESTER: 3-1

PREPARED BY (Faculty Name/s): Jasmine Sabena, Madhavilatha

Version: V-2

PREPARED / REVISED DATE: 27-09-2022


TABLE OF CONTENTS – UNIT 5
S. NO CONTENTS PAGE NO.
1 COURSE OBJECTIVES 3
2 PREREQUISITES 3
3 SYLLABUS 3
4 COURSE OUTCOMES 4
5 CO - PO/PSO MAPPING 4
6 LESSON PLAN 4
7 ACTIVITY BASED LEARNING 4
8 LECTURE NOTES 5
2.1 Robotics: Introduction 5
2.2 Robot Hardware 10
2.3 Robotic Perception 12
2.4 Planning to move 14
2.5 Planning uncertain movements 15
2.6 Moving, Robotic software architectures 16
2.7 Application domains 20
2.8 Philosophical foundations: Weak AI 23
2.9 Strong AI, Ethics and Risks of AI 25
2.10 Agent Components 28
2.11 Agent Architectures 31
2.12 Are we going in the right direction, What if AI does succeed 34

9 PRACTICE QUIZ 35
10 ASSIGNMENTS 36
11 PART A QUESTIONS & ANSWERS (2 MARKS QUESTIONS) 36
12 PART B QUESTIONS 37
13 SUPPORTIVE ONLINE CERTIFICATION COURSES 37
14 REAL TIME APPLICATIONS 38
15 CONTENTS BEYOND THE SYLLABUS 38
16 PRESCRIBED TEXT BOOKS & REFERENCE BOOKS 38
17 MINI PROJECT SUGGESTION 38

1. Course Objectives
The objectives of this course are to
1. Learn the basics of designing intelligent agents that can solve general-purpose problems
2. Represent and process knowledge, plan and act, and reason under uncertainty
3. Learn from experience.

2. Prerequisites
Students should have knowledge on
1. Database Management Systems
2. Data Warehousing and Mining

3. Syllabus
UNIT V
Robotics: Introduction, Robot Hardware, Robotic Perception, Planning to move,
planning uncertain movements, Moving, Robotic software architectures,
application domains
Philosophical foundations: Weak AI, Strong AI, Ethics and Risks of AI, Agent
Components, Agent Architectures, Are we going in the right direction, What if AI
does succeed.

4. Course outcomes:
1. Student must be able to understand the fundamentals of artificial intelligence (AI)
and its foundations
2. Student must be able to analyze principles of AI in solutions that require
problem solving, inference, perception and learning.
3. Student must be able to design various applications of AI techniques in artificial
neural networks and other machine learning models
4. Student must be able to apply the scientific method to models of machine
learning

5. Co-PO / PSO Mapping


PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
CO1: 3 2 2 3
CO2: 3 2 2 3
CO3: 3 2 3 3 3
CO4: 3 2 3 3 3

6. Lesson Plan
Week 1
  Lecture 1: Robotics: Introduction (T2)
  Lecture 2: Robot Hardware (T2)
  Lecture 3: Robotic Perception (T2)
  Lecture 4: Planning to move (T2)
  Lecture 5: Planning uncertain movements (T2)
  Lecture 6: Moving, Robotic software architectures (T2)
Week 2
  Lecture 7: Application domains (T2)
  Lecture 8: Philosophical foundations: Weak AI (T2)
  Lecture 9: Strong AI, Ethics and Risks of AI (T2)
  Lecture 10: Agent Components (T2)
Week 3
  Lecture 11: Agent Architectures (T2)
  Lecture 12: Are we going in the right direction (T2)
  Lecture 13: What if AI does succeed (T2)

7. Activity Based Learning


1. Implementing propositional and first-order logic models using Python

8. Lecture Notes
Robotics: Introduction

Introduction To Robots
What is the first thing that comes to mind when you think of a robot?

For many people it is a machine that imitates a human—like the androids in Star Wars,
Terminator and Star Trek: The Next Generation. However much these robots capture our
imagination, such robots still only inhabit Science Fiction. People still haven't been able to give
a robot enough 'common sense' to reliably interact with a dynamic world. However, Rodney
Brooks and his team at MIT Artificial Intelligence Lab are working on creating such humanoid
robots.

The type of robots that you will encounter most frequently are robots that do work that is too
dangerous, boring, onerous, or just plain nasty. Most of the robots in the world are of this
type. They can be found in the auto, medical, manufacturing and space industries. In fact, there
are over a million of these types of robots working for us today.

Some robots, like the Mars Rover Sojourner and the upcoming Mars Exploration Rover, or the
underwater robot Caribou, help us learn about places that are too dangerous for us to go.
Other types of robots are just plain fun for kids of all ages. Popular toys such as Teckno,
Polly or AIBO ERS-220 seem to hit the store shelves every year around Christmas time.

And as much fun as robots are to play with, robots are even more fun to build. In Being
Digital, Nicholas Negroponte tells a wonderful story about an eight-year-old, pressed during a
televised premiere of MIT Media Lab's LEGO/Logo work at Hennigan School. A zealous anchor,
looking for a cute sound bite, kept asking the child if he was having fun playing with
LEGO/Logo. Clearly exasperated, but not wishing to offend, the child first tried to put her off.
After her third attempt to get him to talk about fun, the child, sweating under the hot
television lights, plaintively looked into the camera and answered, "Yes it is fun, but it's hard
fun."

But what exactly is a robot?

As strange as it might seem, there really is no standard definition for a robot. However, there
are some essential characteristics that a robot must have and this might help you to decide
what is and what is not a robot. It will also help you to decide what features you will need to
build into a machine before it can count as a robot.

A robot has these essential characteristics:

• Sensing: First of all, your robot would have to be able to sense its surroundings. It
would do this in ways similar to the way that you sense your own surroundings. Giving
your robot sensors: light sensors (eyes), touch and pressure sensors (hands), chemical
sensors (nose), hearing and sonar sensors (ears), and taste sensors (tongue) will give
your robot awareness of its environment.
• Movement: A robot needs to be able to move around its environment. Whether rolling
on wheels, walking on legs, or propelled by thrusters, a robot needs to be able to move.
To count as a robot, either the whole robot moves, like the Sojourner, or just parts of
the robot move, like the Canada Arm.
• Energy: A robot needs to be able to power itself. A robot might be solar powered,
electrically powered, or battery powered. The way your robot gets its energy will depend
on what your robot needs to do.
• Intelligence: A robot needs some kind of "smarts." This is where programming enters
the picture. A programmer is the person who gives the robot its 'smarts.' The robot
will have to have some way to receive the program so that it knows what it is to do.
So what is a robot?

Well, it is a system that contains sensors, control systems, manipulators, power supplies and
software all working together to perform a task. Designing, building, programming and testing
a robot is a combination of physics, mechanical engineering, electrical engineering, structural
engineering, mathematics and computing. In some cases biology, medicine and chemistry might
also be involved. A study of robotics means that students are actively engaged with all of
these disciplines in a deeply problem-posing, problem-solving environment.

ROBOT HARDWARE
So far we have taken the agent architecture—sensors, effectors, and processors—as given, and we have
concentrated on the agent program. The success of real robots depends at least as much on the design of
sensors and effectors that are appropriate for the task.

Sensors
Sensors are the perceptual interface between robot and environment. Passive sensors, such
as cameras, are true observers of the environment: they capture signals that are generated by
other sources in the environment. Active sensors, such as sonar, send energy into the environment.
They rely on the fact that this energy is reflected back to the sensor. Active sensors
tend to provide more information than passive sensors, but at the expense of increased power
consumption and with a danger of interference when multiple active sensors are used at the
same time. Whether active or passive, sensors can be divided into three types, depending on
whether they sense the environment, the robot's location, or the robot's internal configuration.
Range finders are sensors that measure the distance to nearby objects. In the early
days of robotics, robots were commonly equipped with sonar sensors. Sonar sensors emit
directional sound waves, which are reflected by objects, with some of the sound making it

back into the sensor. The time and intensity of the returning signal indicate the distance to nearby objects.
Sonar is the technology of choice for autonomous underwater vehicles. Stereo vision relies on multiple cameras
to image the environment from slightly different viewpoints, analyzing the resulting parallax in these images to
compute the range of surrounding objects. For mobile ground robots, sonar and stereo vision are now rarely used,
because they are not reliably accurate.
Most ground robots are now equipped with optical range finders. Just like sonar sensors, optical range sensors
emit active signals (light) and measure the time until a reflection of this signal arrives back at the sensor. Figure (a)
shows a time-of-flight camera. This camera acquires range images like the one shown in Figure (b) at up to 60
frames per second. Other range sensors use laser beams and special 1-pixel cameras that can be directed using
complex arrangements of mirrors or rotating elements. These sensors are called scanning lidars (short for light
detection and ranging). Scanning lidars tend to provide longer ranges than time-of-flight cameras, and tend to
perform better in bright daylight.
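To make the time-of-flight idea concrete, the following is a minimal Python sketch (not from the course text) of how a measured round-trip time converts to a range, assuming the light pulse travels at speed c and covers the distance to the object twice.

```python
# Minimal sketch: converting a time-of-flight measurement to a range estimate.
# The sensor emits a light pulse and measures the round-trip time; the distance
# is half of the round-trip distance travelled at the speed of light.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def time_of_flight_to_range(round_trip_time_s: float) -> float:
    """Return the distance (in metres) to the reflecting object."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a reflection arriving after 20 nanoseconds corresponds to about 3 metres.
print(time_of_flight_to_range(20e-9))  # ~2.998 m
```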

Effectors
Effectors are the means by which robots move and change the shape of their bodies. To understand the design of
effectors, it helps to talk about motion and shape in the abstract, using the concept of a degree of freedom
(DOF). We count one degree of freedom for each independent direction in which a robot, or one of its effectors, can
move. For example, a rigid mobile robot such as an AUV has six degrees of freedom, three for its (x, y, z) location
in space and three for its angular orientation, known as yaw, roll, and pitch. These six degrees define the
kinematic state or pose of the robot. The dynamic state of a robot includes these six plus an additional six
dimensions for the rate of change of each kinematic dimension, that is, their velocities.
For nonrigid bodies, there are additional degrees of freedom within the robot itself. For example, the elbow
of a human arm possesses two degrees of freedom: it can flex the upper arm towards or away, and can rotate right
or left. The wrist has three degrees of freedom: it can move up and down, side to side, and can also rotate. Robot
joints also have one, two, or three degrees of freedom each. Six degrees of freedom are required to place an
object, such as a hand, at a particular point in a particular orientation. The arm in Figure 25.4(a) has exactly six
degrees of freedom, created by five revolute joints and one prismatic joint.

Figure (a) The Stanford Manipulator, an early robot arm with five revolute joints (R) and one prismatic joint (P),
for a total of six degrees of freedom. (b) Motion of a nonholonomic four-wheeled vehicle with front-wheel
steering.

For mobile robots, the DOFs are not necessarily the same as the number of actuated elements. Consider,
for example, your average car: it can move forward or backward, and it can turn, giving it two DOFs. In
contrast, a car's kinematic configuration is three-dimensional: on an open flat surface, one can easily
maneuver a car to any (x, y) point, in any orientation.

Thus, the car has three effective degrees of freedom but two controllable degrees of freedom. We say
a robot is nonholonomic if it has more effective DOFs than controllable DOFs and holonomic if the two
numbers are the same. Holonomic robots are easier to control—it would be much easier to park a car that
could move sideways as well as forward and backward—but holonomic robots are also mechanically more
complex. Most robot arms are holonomic, and most mobile robots are nonholonomic.

(a) Mobile manipulator plugging its charge cable into a wall outlet. Image courtesy of Willow Garage, © 2009. (b) One
of Marc Raibert's legged robots in motion.

Figure (a) displays a two-armed robot. This robot's arms use springs to compensate for gravity, and they provide minimal
resistance to external forces. Such a design minimizes the physical danger to people who might stumble into such a robot.
This is a key consideration in deploying robots in domestic environments.
Figure (b) shows one of Marc Raibert's legged robots in motion. This robot is dynamically stable, meaning that it can remain
upright while hopping around. A robot that can remain upright without moving its legs is called statically stable. A robot is
statically stable if its center of gravity is above the polygon spanned by its legs.

(a) Four-legged dynamically-stable robot "Big Dog." Image courtesy Boston Dynamics, © 2009. (b) 2009 RoboCup Standard
Platform League competition, showing the winning team, B-Human, from the DFKI center at the University of Bremen.
Throughout the match, B-Human outscored their opponents 64:1. Their success was built on probabilistic state estimation
using particle filters and Kalman filters; on machine-learning models for gait optimization; and on dynamic kicking moves.
Image courtesy DFKI, © 2009.

ROBOTIC PERCEPTION

Perception is the process by which robots map sensor measurements into internal representations of the
environment. Perception is difficult because sensors are noisy, and the environment is partially observable,
unpredictable, and often dynamic. In other words, robots have all the problems of state estimation (or filtering).
As a rule of thumb, good internal representations for robots have three properties: they contain
enough information for the robot to make good decisions, they are structured so that they can be
updated efficiently, and they are natural in the sense that internal variables correspond to natural
state variables in the physical world.
We saw that Kalman filters, HMMs, and dynamic Bayes nets can represent the transition and sensor
models of a partially observable environment, and we described both exact and approximate
algorithms for updating the belief state—the posterior probability distribution over the environment
state variables.

Robot perception can be viewed as temporal inference from sequences of actions and measurements, as illustrated by
this dynamic Bayes network.
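As an illustration of this kind of temporal inference, here is a minimal Python sketch of a discrete Bayes filter update over a finite set of environment states; the transition_model and sensor_model arguments are hypothetical placeholders standing in for the conditional distributions of a dynamic Bayes network, not definitions taken from the text.

```python
# Minimal sketch of a discrete Bayes filter: one prediction + correction step
# that updates a belief state (posterior over environment states) from an
# action and a measurement, as in an HMM or dynamic Bayes network.

def bayes_filter_update(belief, action, measurement, transition_model, sensor_model):
    """belief: dict state -> probability; returns the updated belief."""
    states = list(belief.keys())

    # Prediction: push the belief through the transition model P(s' | s, a).
    predicted = {
        s_next: sum(transition_model(s_next, s, action) * belief[s] for s in states)
        for s_next in states
    }

    # Correction: weight by the sensor model P(z | s') and renormalize.
    # Assumes at least one state is consistent with the measurement.
    unnormalized = {s: sensor_model(measurement, s) * p for s, p in predicted.items()}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}
```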

Localization and mapping


Localization is the problem of finding out where things are—including the robot itself. Knowledge about where things
are is at the core of any successful physical interaction with the environment. For example, robot manipulators must know
the location of objects they seek to manipulate, and navigating robots must know where they are to find their way around.
To keep things simple, let us consider a mobile robot that moves slowly in a flat 2D world. Let us also assume the robot is
given an exact map of the environment.
The pose of such a mobile robot is defined by its two Cartesian coordinates with values x and y and its heading with value
θ, as illustrated in Figure 25.8(a). If we arrange those three values in a vector, then any particular state is given by
X_t = (x_t, y_t, θ_t)^T. So far so good.

(a) A simplified kinematic model of a mobile robot. The robot is shown as a circle with an interior line marking the
forward direction. The state x_t consists of the (x_t, y_t) position (shown implicitly) and the orientation θ_t. The new state
x_{t+1} is obtained by an update in position of v_t Δt and in orientation of ω_t Δt. Also shown is a landmark at (x_i, y_i) observed
at time t. (b) The range-scan sensor model. Two possible robot poses are shown for a given range scan (z_1, z_2, z_3, z_4). It
is much more likely that the pose on the left generated the range scan than the pose on the right.
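The kinematic update described in this caption can be sketched directly in Python; the version below is a simplified, noise-free sketch under the flat 2D world assumption above.

```python
import math

# Sketch of the simplified kinematic model: the new pose is obtained by moving
# v*dt along the current heading and rotating the heading by w*dt.

def motion_update(x, y, theta, v, w, dt):
    """Deterministic pose update for a planar robot (no motion noise)."""
    x_new = x + v * dt * math.cos(theta)
    y_new = y + v * dt * math.sin(theta)
    theta_new = theta + w * dt
    return x_new, y_new, theta_new
```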

Monte Carlo localization
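The course text does not elaborate on this topic here, but a minimal sketch of Monte Carlo localization is a particle filter over poses: each particle is propagated through a noisy version of the motion update above and weighted by the range-scan sensor model. The sensor_likelihood function below is a hypothetical placeholder for that model, and the noise values are illustrative only.

```python
import math
import random

# Minimal sketch of Monte Carlo localization: represent the belief over poses
# by a set of particles, propagate each particle through a noisy motion model,
# weight the particles by the range-scan likelihood, and resample.

def mcl_step(particles, v, w, dt, scan, map_, sensor_likelihood,
             motion_noise=(0.05, 0.05, 0.02)):
    moved = []
    for (x, y, theta) in particles:
        # Sampled (noisy) motion update.
        nv = v + random.gauss(0.0, motion_noise[0])
        nw = w + random.gauss(0.0, motion_noise[1])
        x_new = x + nv * dt * math.cos(theta)
        y_new = y + nv * dt * math.sin(theta)
        theta_new = theta + nw * dt + random.gauss(0.0, motion_noise[2])
        moved.append((x_new, y_new, theta_new))

    # Importance weights from the (placeholder) range-scan sensor model.
    weights = [sensor_likelihood(p, scan, map_) for p in moved]
    total = sum(weights)
    if total == 0.0:
        return moved  # degenerate case: keep the particles unweighted
    weights = [w_i / total for w_i in weights]

    # Resample with replacement in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))
```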

Other types of perception


Not all of robot perception is about localization or mapping. Robots also perceive the temperature, odors, acoustic signals,
and so on. Many of these quantities can be estimated using variants of dynamic Bayes networks. All that is required for such
estimators are conditional probability distributions that characterize the evolution of state variables over time, and sensor
models that describe the relation of measurements to state variables.
It is also possible to program a robot as a reactive agent, without explicitly reasoning about probability
distributions over states.

Machine learning in robot perception


Machine learning plays an important role in robot perception. This is particularly the case when the best internal

representation is not known. One common approach is to map high-dimensional sensor streams into lower-dimensional spaces
using unsupervised machine learning methods. Such an approach is called low-dimensional embedding. Machine learning
makes it possible to learn sensor and motion models from data, while simultaneously discovering suitable internal
representations.
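As a hedged illustration (PCA is one common unsupervised choice, not necessarily the method intended here), the following sketch projects a high-dimensional sensor stream onto a few principal components to obtain a low-dimensional embedding.

```python
import numpy as np

# Sketch of a low-dimensional embedding: project a high-dimensional sensor
# stream onto its first few principal components (PCA via SVD).

def low_dimensional_embedding(sensor_stream: np.ndarray, n_dims: int = 3) -> np.ndarray:
    """sensor_stream: (n_samples, n_features) array of raw sensor readings."""
    centered = sensor_stream - sensor_stream.mean(axis=0)
    # Right singular vectors give the principal directions of variation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_dims].T  # (n_samples, n_dims) embedding

# Example: 1000 frames of a 256-dimensional range scan reduced to 3 dimensions.
embedding = low_dimensional_embedding(np.random.rand(1000, 256), n_dims=3)
```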
Another machine learning technique enables robots to continuously adapt to broad changes in sensor measurements.
Picture yourself walking from a sun-lit space into a dark neon-lit room. Clearly things are darker inside. But the change of
light source also affects all the colors: neon light has a stronger component of green light than sunlight. Yet somehow we
seem not to notice the change. If we walk together with people into a neon-lit room, we don't think that suddenly their
faces turned green. Our perception quickly adapts to the new lighting conditions, and our brain ignores the differences.
Methods that make robots collect their own training data (with labels!) are called self-supervised. In this instance,
the robot uses machine learning to leverage a short-range sensor that works well for terrain classification into a sensor that
can see much farther. That allows the robot to drive faster, slowing down only when the sensor model says there is a change
in the terrain that needs to be examined more carefully by the short-range sensors.
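A minimal sketch of this self-supervised setup might look like the following; all names are illustrative, scikit-learn is assumed to be available, and this is not the specific system described in the text.

```python
# Sketch of self-supervised learning: labels produced automatically by a trusted
# short-range sensor (the terrain class measured under the robot) are used to
# train a classifier on long-range image features, so the robot can "see"
# terrain far ahead without a human trainer.

from sklearn.linear_model import LogisticRegression

def train_long_range_model(long_range_features, short_range_labels):
    """long_range_features: per-patch image features observed from far away;
    short_range_labels: terrain classes measured later, when the robot
    actually drives over each patch."""
    model = LogisticRegression(max_iter=1000)
    model.fit(long_range_features, short_range_labels)
    return model

# At run time, the learned model classifies distant terrain patches so the
# robot slows down only where a terrain change is predicted.
```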

WEAK AI: CAN MACHINES ACT INTELLIGENTLY?


Whether AI is impossible depends on how it is defined. We defined AI as the quest for the best agent program on a
given architecture. With this formulation, AI is by definition possible: for any digital architecture with k bits of program
storage there are exactly 2^k agent programs, and all we have to do to find the best one is enumerate and test them all. This
might not be feasible for large k, but philosophers deal with the theoretical, not the practical.

Our definition of AI works well for the engineering problem of finding a good agent, given an architecture. Therefore,
we're tempted to end this section right now, answering the title question in the affirmative. But philosophers are interested
in the problem of comparing two architectures—human and machine. Furthermore, they have traditionally posed the
question not in terms of maximizing expected utility but rather as, "Can machines think?"

Alan Turing, in his famous paper "Computing Machinery and Intelligence" (1950), suggested that instead of
asking whether machines can think, we should ask whether machines can pass a behavioral intelligence test, which has
come to be called the Turing Test. The test is for a program to have a conversation (via online typed messages) with an
interrogator for five minutes. The interrogator then has to guess if the conversation is with a program or a person; the
program passes the test if it fools the interrogator 30% of the time.
The argument from disability
The "argument from disability" makes the claim that "a machine can never do X." As examples of X, Turing lists the
following:
Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall
in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the
subject of its own thought, have as much diversity of behavior as man, do something really new.

It is clear that computers can do many things as well as or better than humans, including things that people believe
require great human insight and understanding. This does not mean, of course, that computers use insight and
understanding in performing these tasks—those are not part of behavior, and we address such questions elsewhere—
but the point is that one's first guess about the mental processes required to produce a given behavior is often wrong. It
is also true, of course, that there are many tasks at which computers do not yet excel (to put it mildly), including
Turing's task of carrying on an open-ended conversation.


The mathematical objection


It is well known, through the work of Turing (1936) and Gödel (1931), that certain mathematical questions are in
principle unanswerable by particular formal systems. Gödel's incompleteness theorem (see Section 9.5) is the most
famous example of this. Briefly, for any formal axiomatic system F powerful enough to do arithmetic, it is possible to
construct a so-called Gödel sentence G(F) with the following properties:

• G(F) is a sentence of F, but cannot be proved within F.

• If F is consistent, then G(F) is true.
Even if we grant that computers have limitations on what they can prove, there is no evidence that
humans are immune from those limitations. It is all too easy to show rigorously that a formal system cannot
do X, and then claim that humans can do X using their own informal method, without giving any evidence for
this claim. Indeed, it is impossible to prove that humans are not subject to Gödel's incompleteness theorem,
because any rigorous proof would require a formalization of the claimed unformalizable human talent, and hence
refute itself. So we are left with an appeal to intuition that humans can somehow perform superhuman feats of
mathematical insight. This appeal is expressed with arguments such as "we must assume our own consistency, if
thought is to be possible at all" (Lucas, 1976). But if anything, humans are known to be inconsistent. This is certainly
true for everyday reasoning, but it is also true for careful mathematical thought. A famous example is the four-color
map problem. Alfred Kempe published a proof in 1879 that was widely accepted and contributed to his
election as a Fellow of the Royal Society. In 1890, however, Percy Heawood pointed out a flaw and the theorem
remained unproved until 1977.

The argument from informality


One of the most influential and persistent criticisms of AI as an enterprise was raised by Turing as the "argument from
informality of behavior." Essentially, this is the claim that human behavior is far too complex to be captured by any simple
set of rules, and that because computers can do no more than follow a set of rules, they cannot generate behavior as
intelligent as that of humans. The inability to capture everything in a set of logical rules is called the qualification problem
in AI.
1. Good generalization from examples cannot be achieved without background knowledge. They claim no
one has any idea how to incorporate background knowledge into the neural network learning process. In
fact, there are techniques for using prior knowledge in learning algorithms. Those techniques,
however, rely on the availability of knowledge in explicit form, something that Dreyfus and Dreyfus
strenuously deny. In our view, this is a good reason for a serious redesign of current models of neural
processing so that they can take advantage of previously learned knowledge in the way that other
learning algorithms do.
2. Neural network learning is a form of supervised learning, requiring the prior identification of relevant
inputs and correct outputs. Therefore, they claim, it cannot operate autonomously without the help of
a human trainer. In fact, learning without a teacher can be accomplished by unsupervised learning and
reinforcement learning.
3. Learning algorithms do not perform well with many features, and if we pick a subset of features, "there is
no known way of adding new features should the current set prove inadequate to account for the learned
facts." In fact, new methods such as support vector machines handle large feature sets very well. With the
introduction of large Web-based data sets, many applications in areas such as language processing (Sha and
Pereira, 2003) and computer vision (Viola and Jones, 2002a) routinely handle millions of features.
4. The brain is able to direct its sensors to seek relevant information and to process it to extract aspects
relevant to the current situation. But, Dreyfus and Dreyfus claim, "Currently, no details of this mechanism
are understood or even hypothesized in a way that could guide AI research." In fact, the field of active
vision, underpinned by the theory of information value, is concerned with exactly the problem of directing
sensors, and already some robots have incorporated the theoretical results obtained.


STRONG AI: CAN MACHINES REALLY THINK?


Many philosophers have claimed that a machine that passes the Turing Test would still not be actually thinking, but would
be only a simulation of thinking. Again, the objection was foreseen by Turing. He cites a speech by Professor Geoffrey
Jefferson (1949):
Not until a machine could write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the
chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written
it.

Turing calls this the argument from consciousness—the machine has to be aware of its own mental states and actions. While
consciousness is an important subject, Jefferson's key point actually relates to phenomenology, or the study of direct
experience: the machine has to actually feel emotions. Others focus on intentionality—that is, the question of whether the
machine's purported beliefs, desires, and other representations are actually "about" something in the real world.

Turing argues that Jefferson would be willing to extend the polite convention to machines if only he
had experience with ones that act intelligently. He cites the following dialog, which has become such a part of
AI's oral tradition that we simply have to include it:

HUMAN: In the first line of your sonnet which reads "shall I compare thee to a summer's day," would
not a "spring day" do as well or better?

MACHINE: It wouldn't scan.

HUMAN: How about "a winter's day." That would scan all right.

MACHINE: Yes, but nobody wants to be compared to a winter's day.

HUMAN: Would you say Mr. Pickwick reminded you of Christmas?

MACHINE: In a way.

HUMAN: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the
comparison.

MACHINE: I don't think you're serious. By a winter's day one means a typical winter's day, rather
than a special one like Christmas.

Mental states and the brain in a vat


Physicalist philosophers have attempted to explicate what it means to say that a person—and, by extension, a computer—is
in a particular mental state. They have focused in particular on intentional states. These are states, such as believing,
knowing, desiring, fearing, and so on, that refer to some aspect of the external world. For example, the knowledge that one is
eating a hamburger is a belief about the hamburger and what is happening to it.

If physicalism is correct, it must be the case that the proper description of a person's mental state is determined by that
person's brain state. Thus, if I am currently focused on eating a hamburger in a mindful way, my instantaneous brain state is
an instance of the class of mental states "knowing that one is eating a hamburger." Of course, the specific configurations of all
the atoms of my brain are not essential: there are many configurations of my brain, or of other people's brains, that would
belong to the same class of mental states. The key point is that the same brain state could not correspond to a fundamentally
distinct mental state, such as the knowledge that one is eating a banana.
The "wide content" view interprets the content of a mental state from the point of view of an omniscient outside observer
with access to the whole situation, who can distinguish differences in the world. Under this view, the content of mental
states involves both the brain state and the environment history. Narrow content, on the other hand, considers only the brain
state. The narrow content of the brain states of a real hamburger-eater and a brain-in-a-vat "hamburger"-"eater" is the same
in both cases.

Functionalism and the brain replacement experiment


The theory of functionalism says that a mental state is any intermediate causal condition between input and output. Under
functionalist theory, any two systems with isomorphic causal processes would have the same mental states. Therefore, a
computer program could have the same mental states as a person. Of course, we have not yet said what "isomorphic" really
means, but the assumption is that there is some level of abstraction below which the specific implementation does not
matter.
And this explanation must also apply to the real brain, which has the same functional properties. There are three
possible conclusions:
1. The causal mechanisms of consciousness that generate these kinds of outputs in normal brains are still
operating in the electronic version, which is therefore conscious.
2. The conscious mental events in the normal brain have no causal connection to behavior, and are missing from
the electronic brain, which is therefore not conscious.
3. The experiment is impossible, and therefore speculation about it is meaningless.
Biological naturalism and the Chinese Room
A strong challenge to functionalism has been mounted by John Searle's (1980) biological naturalism, according to which
mental states are high-level emergent features that are caused by low-level physical processes in the neurons, and it is the
(unspecified) properties of the neurons that matter. Thus, mental states cannot be duplicated just on the basis of some
program having the same functional structure with the same input–output behavior; we would require that the program be
running on an architecture with the same causal power as neurons. To support his view, Searle describes a hypothetical system
that is clearly running a program and passes the Turing Test, but that equally clearly (according to Searle) does not understand
anything of its inputs and outputs. His conclusion is that running the appropriate program (i.e., having the right outputs) is
not a sufficient condition for being a mind.

So far, so good. But from the outside, we see a system that is taking input in the form of Chinese sentences and generating
answers in Chinese that are as "intelligent" as those in the conversation imagined by Turing. Searle then argues: the
person in the room does not understand Chinese (given). The rule book and the stacks of paper, being just pieces of paper,
do not understand Chinese. Therefore, there is no understanding of Chinese. Hence, according to Searle, running the right
program does not necessarily generate understanding.

The real claim made by Searle rests upon the following four axioms (Searle, 1990):
1. Computer programs are formal (syntactic).
2. Human minds have mental contents (semantics).
3. Syntax by itself is neither constitutive of nor sufficient for semantics.
4. Brains cause minds.
From the first three axioms Searle concludes that programs are not sufficient for minds. In other words, an agent running a
program might be a mind, but it is not necessarily a mind just by virtue of running the program. From the fourth axiom he
concludes “Any other system capable of causing minds would have to have causal powers (at least) equivalent to those
of brains.” From there he infers that any artificial brain would have to duplicate the causal powers of brains, not just run a
particular program, and that human brains do not produce mental phenomena solely by virtue of running a program.

Consciousness, qualia, and the explanatory gap

Running through all the debates about strong AI—the elephant in the debating room, so to speak—is the issue of
consciousness. Consciousness is often broken down into aspects such as understanding and self-awareness. The aspect we
will focus on is that of subjective experience: why it is that it feels like something to have certain brain states (e.g., while
eating a hamburger), whereas it presumably does not feel like anything to have other physical states (e.g., while being a rock).
The technical term for the intrinsic nature of experiences is qualia (from the Latin word meaning, roughly, "such things").
Qualia present a challenge for functionalist accounts of the mind because different qualia could be involved in what
are otherwise isomorphic causal processes. Consider, for example, the inverted spectrum thought experiment, in which
the subjective experience of person X when seeing red objects is the same experience that the rest of us experience
when seeing green objects, and vice versa.

This explanatory gap has led some philosophers to conclude that humans are simply incapable of forming a proper
understanding of their own consciousness. Others, notably Daniel Dennett (1991), avoid the gap by denying the existence
of qualia, attributing them to a philosophical confusion.

THE ETHICS AND RISKS OF DEVELOPING ARTIFICIAL INTELLIGENCE


So far, we have concentrated on whether we can develop AI, but we must also consider whether we should. If the effects of
AI technology are more likely to be negative than positive, then it would be the moral responsibility of workers in the field to
redirect their research. Many new technologies have had unintended negative side effects: nuclear fission brought
Chernobyl and the threat of global destruction; the internal combustion engine brought air pollution, global warming, and
the paving-over of paradise. In a sense, automobiles are robots that have conquered the world by making themselves
indispensable.
AI, however, seems to pose some fresh problems beyond that of, say, building bridges that don’t fall
down:
• People might lose their jobs to automation.
• People might have too much (or too little) leisure time.
• People might lose their sense of being unique.
• AI systems might be used toward undesirable ends.
• The use of AI systems might result in a loss of accountability.
• The success of AI might mean the end of the human race.
We will look at each issue in turn.

People might lose their jobs to automation. The modern industrial economy has become dependent on computers in
general, and select AI programs in particular. For example, much of the economy, especially in the United States,
depends on the availability of consumer credit. Credit card applications, charge approvals, and fraud detection are now
done by AI programs. One could say that thousands of workers have been displaced by these AI programs, but in fact
if you took away the AI programs these jobs would not exist, because human labor would add an unacceptable cost to
the transactions.
People might have too much (or too little) leisure time. Alvin Toffler wrote in Future Shock (1970),
“The work week has been cut by 50 percent since the turn of the century. It is not out of the way to predict
that it will be slashed in half again by 2000.” Arthur C. Clarke (1968b) wrote that people in 2001 might be
“faced with a future of utter boredom, where the main problem in life is deciding which of several hundred
TV channels to select.”

People might lose their sense of being unique. In Computer Power and Human Reason, Weizenbaum (1976), the
author of the ELIZA program, points out some of the potential threats that AI poses to society. One of Weizenbaum's
principal arguments is that AI research makes possible the idea that humans are automata—an idea that results in a loss
of autonomy or even of humanity.

AI systems might be used toward undesirable ends. Advanced technologies have often been used by the powerful to
suppress their rivals. As the number theorist G. H. Hardy wrote (Hardy, 1940), "A science is said to be useful if its
development tends to accentuate the existing inequalities in the distribution of wealth, or more directly promotes the
destruction of human life." This holds for all sciences, AI being no exception. Autonomous AI systems are now
commonplace on the battlefield; the U.S. military deployed over 5,000 autonomous aircraft and 12,000 autonomous
ground vehicles in Iraq (Singer, 2009).

The use of AI systems might result in a loss of accountability. In the litigious atmosphere that prevails in the United
States, legal liability becomes an important issue. When a physician relies on the judgment of a medical expert system
for a diagnosis, who is at fault if the diagnosis is wrong? Fortunately, due in part to the growing influence of decision-
theoretic methods in medicine, it is now accepted that negligence cannot be shown if the physician performs medical
procedures that have high expected utility, even if the actual result is catastrophic for the patient.

The success of AI might mean the end of the human race. Almost any technology has the potential to cause harm in
the wrong hands, but with AI and robotics, we have the new problem that the wrong hands might belong to the
technology itself. Countless science fiction stories have warned about robots or robot–human cyborgs running amok.

If ultraintelligent machines are a possibility, we humans would do well to make sure that we design their predecessors
in such a way that they design themselves to treat us well. Science fiction writer Isaac Asimov (1942) was the first to address
this issue, with his three laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First
Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second
Law.

AGENT COMPONENTS
Interaction with the environment through sensors and actuators: For much of the history of AI, this has been a
glaring weak point. With a few honorable exceptions, AI systems were built in such a way that humans had to
supply the inputs and interpret the outputs,

Figure A model-based, utility-based agent

while robotic systems focused on low-level tasks in which high-level reasoning and planning were largely
absent. This was due in part to the great expense and engineering effort required to get real robots to work
at all. The situation has changed rapidly in recent years with the availability of ready-made programmable
robots. These, in turn, have benefited from small, cheap, high-resolution CCD cameras and compact, reliable
motor drives. MEMS (micro-electromechanical systems) technology has supplied miniaturized
accelerometers, gyroscopes, and actuators for an artificial flying insect (Floreano et al., 2009). It may also
be possible to combine millions of MEMS devices to produce powerful macroscopic actuators.
Keeping track of the state of the world: This is one of the core capabilities required for an intelligent
agent. It requires both perception and updating of internal representations. Earlier chapters showed how to
keep track of atomic state representations, how to do it for factored (propositional) state representations,
and how to extend this to first-order logic; Chapter 15 described filtering algorithms for probabilistic reasoning
in uncertain environments. Current filtering and perception algorithms can be combined to do a reasonable
job of reporting low-level predicates such as "the cup is on the table." Detecting higher-level actions, such
as "Dr. Russell is having a cup of tea with Dr. Norvig while discussing plans for next week," is more
difficult. Currently it can be done only with the help of annotated examples.
Projecting, evaluating, and selecting future courses of action: The basic knowledge-representation
requirements here are the same as for keeping track of the world; the primary difficulty is coping with courses
of action—such as having a conversation or a cup of tea—that consist eventually of thousands or millions
of primitive steps for a real agent. It is only by imposing hierarchical structure on behavior that we humans
cope at all. Earlier chapters showed how to use hierarchical representations to handle problems of this scale;
furthermore, work in hierarchical reinforcement learning has succeeded in combining some of these ideas with
the techniques for decision making under uncertainty. As yet, algorithms for the partially observable case
(POMDPs) are using the same atomic state representation we used for the search algorithms.

It has proven very difficult to decompose preferences over complex states in the same way that Bayes
nets decompose beliefs over complex states. One reason may be that preferences over states are really
compiled from preferences over state histories, which are described by reward functions.

Learning: Chapters 18 to 21 described how learning in an agent can be formulated as inductive
learning (supervised, unsupervised, or reinforcement-based) of the functions that constitute the various
components of the agent. Very powerful logical and statistical techniques have been developed that can
cope with quite large problems, reaching or exceeding human capabilities in many tasks—as long as we are
dealing with a predefined vocabulary of features and concepts.

AGENT ARCHITECTURES
It is natural to ask, "Which of the agent architectures should an agent use?" The answer is, "All of
them!" We have seen that reflex responses are needed for situations in which time is of the essence, whereas
knowledge-based deliberation allows the agent to plan ahead. A complete agent must be able to do both,
using a hybrid architecture. One important property of hybrid architectures is that the boundaries between
different decision components are not fixed. For example, compilation continually converts declarative
information at the deliberative level into more efficient representations, eventually reaching the reflex level.
For example, a taxi-driving agent that sees an accident ahead must decide in a split second either to brake or to take
evasive action. It should also spend that split second thinking about the most important questions, such as whether the
lanes to the left and right are clear and whether there is a large truck close behind, rather than worrying about wear and
tear on the tires or where to pick up the next passenger. These issues are usually studied under the heading of real-time
AI.

Fig: Compilation serves to convert deliberative decision making into more efficient, reflexive
mechanisms.

Clearly, there is a pressing need for general methods of controlling deliberation, rather than specific recipes for what to think
about in each situation. The first useful idea is to employ anytime algorithms: algorithms whose output quality improves
gradually over time, so that a reasonable decision is ready whenever deliberation is interrupted.
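A minimal sketch of the anytime idea, with propose_improvement and quality as hypothetical placeholders for a specific search or planning step, could look like this:

```python
import time

# Sketch of an anytime algorithm: it can be stopped at any moment and always
# returns the best answer found so far, so decision quality improves the
# longer the agent is allowed to deliberate.

def anytime_optimize(initial_solution, propose_improvement, quality, deadline_s):
    best, best_q = initial_solution, quality(initial_solution)
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        candidate = propose_improvement(best)
        q = quality(candidate)
        if q > best_q:
            best, best_q = candidate, q
    return best  # valid whenever the deadline expires
```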

The second technique for controlling deliberation is decision-theoretic metareasoning (Russell and Wefald, 1989, 1991;
Horvitz, 1989; Horvitz and Breese, 1996). This method applies the theory of information value to the selection of individual
computations. The value of a computation depends on both its cost (in terms of delaying action) and its benefits (in terms of
improved decision quality). Metareasoning techniques can be used to design better search algorithms and to guarantee that
the algorithms have the anytime property. Metareasoning is expensive, of course, and compilation methods can be applied
so that the overhead is small compared to the costs of the computations being controlled. Metalevel reinforcement learning
may provide another way to acquire effective policies for controlling deliberation.
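As an illustrative sketch (the numbers and estimates below are placeholders, not a formula prescribed by the text), the value-of-computation test weighs the expected improvement in decision quality against the cost of the delay:

```python
# Sketch of decision-theoretic metareasoning: a computation is worth performing
# only when its expected benefit (improved decision quality) exceeds its cost
# (the utility lost by delaying action while the computation runs).

def should_compute(expected_quality_gain: float,
                   computation_time_s: float,
                   cost_of_delay_per_s: float) -> bool:
    value_of_computation = expected_quality_gain - computation_time_s * cost_of_delay_per_s
    return value_of_computation > 0.0

# Example: a 0.2 s lookahead expected to improve the decision by 0.5 utility
# units is worthwhile if delay costs 2.0 utility units per second.
print(should_compute(0.5, 0.2, 2.0))  # True
```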

Metareasoning is one specific example of a reflective architecture—that is, an architecture that enables deliberation about
the computational entities and actions occurring within the architecture itself. A theoretical foundation for reflective
architectures can be built by defining a joint state space composed from the environment state and the computational state
of the agent itself.

ARE WE GOING IN THE RIGHT DIRECTION?

The preceding section listed many advances and many opportunities for further progress. But where is this all leading?
Dreyfus (1992) gives the analogy of trying to get to the moon by climbing a tree; one can report steady progress, all the
way to the top of the tree. In this section, we consider whether AI’s current path is more like a tree climb or a rocket trip.

Perfect rationality. A perfectly rational agent acts at every instant in such a way as to maximize its expected utility, given
the information it has acquired from the environment. We have seen that the calculations necessary to achieve perfect
rationality in most environments are too time consuming, so perfect rationality is not a realistic goal.
Calculative rationality. This is the notion of rationality that we have used implicitly in designing logical and decision-
theoretic agents, and most of theoretical AI research has focused on this property. A calculatively rational agent eventually
returns what would have been the rational choice at the beginning of its deliberation. This is an interesting property for a
system to exhibit, but in most environments, the right answer at the wrong time is of no value. In practice, AI system
designers are forced to compromise on decision quality to obtain reasonable overall performance; unfortunately, the
theoretical basis of calculative rationality does not provide a well-founded way to make such compromises.
Bounded rationality. Herbert Simon (1957) rejected the notion of perfect (or even approximately perfect)
rationality and replaced it with bounded rationality, a descriptive theory of decision making by real agents.

Bounded optimality (BO). A bounded optimal agent behaves as well as possible, given its computational resources.
That is, the expected utility of the agent program for a bounded optimal agent is at least as high as the expected
utility of any other agent program running on the same machine.

WHAT IF AI DOES SUCCEED?

In David Lodge's Small World (1984), a novel about the academic world of literary criticism, the protagonist
causes consternation by asking a panel of eminent but contradictory literary theorists the following question:
"What if you were right?" None of the theorists seems to have considered this question before, perhaps
because debating unfalsifiable theories is an end in itself. Similar confusion can be evoked by asking AI
researchers, "What if you succeed?"
We can expect that medium-level successes in AI would affect all kinds of people in their daily lives.
So far, computerized communication networks, such as cell phones and the Internet, have had this kind of
pervasive effect on society, but AI has not. AI has been at work behind the scenes—for example, in
automatically approving or denying credit card transactions for every purchase made on the Web—but has
not been visible to the average consumer. We can imagine that truly useful personal assistants for the office
or the home would have a large positive impact on people's lives, although they might cause some economic
dislocation in the short term. Automated assistants for driving could prevent accidents, saving tens of
thousands of lives per year. A technological capability at this level might also be applied to the development
of autonomous weapons, which many view as undesirable. Some of the biggest societal problems we face
today—such as the harnessing of genomic information for treating disease, the efficient management of
energy resources, and the verification of treaties concerning nuclear weapons—are being addressed with the
help of AI technologies.
Finally, it seems likely that a large-scale success in AI—the creation of human-level intelligence and
beyond—would change the lives of a majority of humankind. The very nature of our work and play would be
altered, as would our view of intelligence, consciousness, and the future destiny of the human race. AI
systems at this level of capability could threaten human autonomy, freedom, and even survival. For these
reasons, we cannot divorce AI research from its ethical consequences.
In conclusion, we see that AI has made great progress in its short history, but the final sentence of
Alan Turing's (1950) essay on Computing Machinery and Intelligence is still valid today:

We can see only a short distance ahead, but we can see that
much remains to be done.

9. Practice Quiz
1. What is the name for information sent from robot sensors to robot controllers?
a) temperature
b) pressure
c) feedback
d) signal
Answer: c

2. Which of the following terms refers to the rotational motion of a robot arm?
a) swivel
b) axle
c) retrograde
d) roll

Answer: d

3. What is the name for space inside which a robot unit operates?
a) environment
b) spatial base
c) work envelope
d) exclusion zone

Answer: c

4. Which of the following terms IS NOT one of the five basic parts of a robot?
a) peripheral tools
b) end effectors
c) controller
d) drive

Answer: a

5. Decision support programs are designed to help managers make __________


a) budget projections
b) visual presentations
c) business decisions
d) vacation schedules

Answer: c

6. PROLOG is an AI programming language which solves problems with a form of symbolic logic known as predicate
calculus. It was developed in 1972 at the University of Marseilles by a team of specialists. Can you name the person who
headed this team?
a) Alain Colmerauer
b) Niklaus Wirth
c) Seymour Papert
d) John McCarthy

Answer: a

7. The number of moveable joints in the base, the arm, and the end effectors of the robot determines_________
a) degrees of freedom
b) payload capacity
c) operational limits
d) flexibility

Answer: a

8. Which of the following places would be LEAST likely to include operational robots?
a) warehouse
b) factory
c) hospitals
d) private homes

Answer: d

9. For a robot unit to be considered a functional industrial robot, typically, how many degrees of freedom would the robot
have?
a) three
b) four
c) six
d) eight

Answer: c

10. Which of the basic parts of a robot unit would include the computer circuitry that could be programmed to determine what
the robot would do?
a) sensor
b) controller
c) arm
d) end effector

Answer: b

10. Assignments

S.No Question BL CO
1 Explain briefly about Robotics 2 1
2 Write and Explain a Robot Hardware. 2 1
3 What is Robotic Perception? Explain briefly. 2 1
4 Explain about Robotic Software architecture 2 1

11. Part A- Question & Answers

1. Define FOL. (BL: 1, CO: 1)
FOL is first-order logic. It is a representational language of knowledge that is more powerful than
propositional logic (i.e., Boolean logic). It is an expressive, declarative, compositional language.

2. Define a knowledge base. (BL: 1, CO: 1)
A knowledge base is the central component of a knowledge-based agent and is described as a set of
representations of facts about the world.

3. Define an inference procedure. (BL: 1, CO: 1)
An inference procedure reports whether or not a sentence is entailed by a knowledge base, given a
knowledge base and a sentence. An inference procedure i can be described by the sentences that it can
derive: if i can derive alpha from the knowledge base, we write KB ⊢ alpha, read as "alpha is derived from
KB" or "i derives alpha from KB".

4. Define ontological commitment. (BL: 1, CO: 1)
The difference between propositional and first-order logic is in their ontological commitment, that is,
what each assumes about the nature of reality.

5. Define domain and domain elements. (BL: 1, CO: 1)
The set of objects is called the domain; sometimes these objects are referred to as domain elements.

12. Part B- Questions

S.No Question BL CO
1 Explain and differentiate between weak AI and Strong AI 1 1
2 Explain about Ethics and risk of developing AI? 2 1
3 Explain about agent Components? 2 1
4 Explain about agent Architecture? 2 1
5 What if AI does succeed? 3 1


13. Supportive Online Certification Courses


1. Opportunities in Artificial Intelligence and Enabling Technologies for Internet of
Things (IoT) FDP by AICTE – 2 weeks
2. Artificial Intelligence by Python organized by Brain Vision Solutions Pvt Limited– 1
week.

14. Real Time Applications


S.No Application CO
1 Artificial intelligence in banking. The banking sector is 1
revolutionized with the use of AI.
2 Artificial intelligence in marketing 1

3 AI in agriculture 1

4 Artificial intelligence in healthcare 1

5 AI in finance. 1

15. Contents Beyond the Syllabus:


1. Knowledge representation using other logics – structured representation of knowledge.
2. Rule value approach for Knowledge representation

16. Prescribed Text Books & Reference Books

Text Books:
1. S. Russell and P. Norvig, "Artificial Intelligence – A Modern Approach", Second
Edition, Pearson Education, 2003.
2. David Poole, Alan Mackworth, Randy Goebel, "Computational Intelligence: A
Logical Approach", Oxford University Press, 2004.

17. MINI PROJECT SUGGESTION:


1. Product Review Analysis For Genuine Rating Android Smart.
2. Online AI Shopping With M-Wallet System
