AI Robotics Course Material Overview
SUBJECT: ARTIFICIAL INTELLIGENCE (19A05502T)
UNIT: 5
COURSE: B.TECH
DEPARTMENT: CSE
SEMESTER: 3-1
PREPARED BY (Faculty Name/s): Jasmine Sabena, Madhavilatha
Version: V-2
9. PRACTICE QUIZ
10. ASSIGNMENTS
11. PART A QUESTIONS & ANSWERS (2 MARKS QUESTIONS)
12. PART B QUESTIONS
13. SUPPORTIVE ONLINE CERTIFICATION COURSES
14. REAL TIME APPLICATIONS
15. CONTENTS BEYOND THE SYLLABUS
16. PRESCRIBED TEXT BOOKS & REFERENCE BOOKS
17. MINI PROJECT SUGGESTION
1. Course Objectives
The objectives of this course are to
1. Learn the basics of designing intelligent agents that can solve general-purpose problems,
2. Represent and process knowledge, plan and act, and reason under uncertainty, and
3. Learn from experience.
2. Prerequisites
Students should have knowledge of
1. Database Management Systems
2. Data Warehousing and Mining
3. Syllabus
UNIT V
Robotics: Introduction, Robot Hardware, Robotic Perception, Planning to Move, Planning Uncertain Movements, Moving, Robotic Software Architectures, Application Domains.
Philosophical Foundations: Weak AI, Strong AI, Ethics and Risks of AI, Agent Components, Agent Architectures, Are We Going in the Right Direction, What If AI Does Succeed.
4. Course outcomes:
1. Student must be able to understand the fundamentals of artificial intelligence (AI) and its foundations.
2. Student must be able to analyze principles of AI in solutions that require problem solving, inference, perception and learning.
3. Student must be able to design various applications of AI techniques in artificial neural networks and other machine learning models.
4. Student must be able to demonstrate the scientific method on models of machine learning.
6. Lesson Plan
Lecture No. | Week | Topics to be covered | References
1  | 1 | Robotics: Introduction | T2
2  | 1 | Robot Hardware | T2
3  | 1 | Robotic Perception | T2
4  | 1 | Planning to move | T2
5  | 1 | Planning uncertain movements | T2
6  | 1 | Moving, Robotic software architectures | T2
7  | 2 | Application domains | T2
8  | 2 | Philosophical foundations: Weak AI | T2
9  | 2 | Strong AI, Ethics and Risks of AI | T2
10 | 2 | Agent Components | T2
11 | 3 | Agent Architectures | T2
12 | 3 | Are we going in the right direction | T2
13 | 3 | What if AI does succeed | T2
8. Lecture Notes
Robotics: Introduction
Introduction To Robots
What is the first thing that comes to mind when you think of a robot?
For many people it is a machine that imitates a human—like the androids in Star Wars,
Terminator and Star Trek: The Next Generation. However much these robots capture our
imagination, such robots still only inhabit Science Fiction. People still haven't been able to give
a robot enough 'common sense' to reliably interact with a dynamic world. However, Rodney
Brooks and his team at MIT Artificial Intelligence Lab are working on creating such humanoid
robots.
The type of robots that you will encounter most frequently are robots that do work that is too
dangerous, boring, onerous, or just plain nasty. Most of the robots in the world are of this
type. They can be found in the auto, medical, manufacturing and space industries. In fact, there
are over a million of these types of robots working for us today.
Some robots like the Mars Rover Sojourner and the upcoming Mars Exploration Rover, or the
underwater robot Caribou help us learn about places that are too dangerous for us to go.
Other types of robots are just plain fun for kids of all ages. Popular toys such as Teckno,
Polly or AIBO ERS-220 seem to hit the store shelves every year around Christmas time.
And as much fun as robots are to play with, robots are even more fun to build. In Being
Digital, Nicholas Negroponte tells a wonderful story about an eight-year-old, pressed during a
televised premiere of the MIT Media Lab's LEGO/Logo work at Hennigan School. A zealous anchor,
looking for a cute sound bite, kept asking the child if he was having fun playing with
LEGO/Logo. Clearly exasperated, but not wishing to offend, the child first tried to put her off.
After her third attempt to get him to talk about fun, the child, sweating under the hot
television lights, plaintively looked into the camera and answered, "Yes it is fun, but it's hard
fun."
As strange as it might seem, there really is no standard definition for a robot. However, there
are some essential characteristics that a robot must have and this might help you to decide
what is and what is not a robot. It will also help you to decide what features you will need to
build into a machine before it can count as a robot.
• Sensing First of all your robot would have to be able to sense its surroundings. It
would do this in ways that are not dissimilar to the way that you sense your
surroundings. Giving your robot sensors: light sensors (eyes), touch and pressure
sensors (hands), chemical sensors (nose), hearing and sonar sensors (ears), and taste
sensors (tongue) will give your robot awareness of its environment.
• Movement A robot needs to be able to move around its environment. Whether rolling
on wheels, walking on legs or propelled by thrusters, a robot needs to be able to move.
To count as a robot, either the whole robot moves, like the Sojourner, or just parts of
the robot move, like the Canadarm.
• Energy A robot needs to be able to power itself. A robot might be solar powered,
electrically powered, or battery powered. The way your robot gets its energy will depend
on what your robot needs to do.
• Intelligence A robot needs some kind of "smarts." This is where programming enters
the picture. A programmer is the person who gives the robot its 'smarts.' The robot
will have to have some way to receive the program so that it knows what it is to do.
So what is a robot?
Well, it is a system that contains sensors, control systems, manipulators, power supplies and
software all working together to perform a task. Designing, building, programming and testing
a robot is a combination of physics, mechanical engineering, electrical engineering, structural
engineering, mathematics and computing. In some cases biology, medicine and chemistry might
also be involved. A study of robotics means that students are actively engaged with all of
these disciplines in a deeply problem-posing, problem-solving environment.
ROBOT HARDWARE
So far we have taken the agent architecture—sensors, effectors, and processors—as given, and we have concentrated
on the agent program. The success of real robots depends at least as much on the design of sensors and effectors that
are appropriate for the task.
Sensors
Sensors are the perceptual interface between robot and environment. Passive sensors, such
as cameras, are true observers of the environment: they capture signals that are generated by
other sources in the environment. Active sensors, such as sonar, send energy into the environment.
They rely on the fact that this energy is reflected back to the sensor. Active sensors
tend to provide more information than passive sensors, but at the expense of increased power
consumption and with a danger of interference when multiple active sensors are used at the
same time. Whether active or passive, sensors can be divided into three types, depending on
whether they sense the environment, the robot’s location, or the robot’s internal configuration.
Range finders are sensors that measure the distance to nearby objects. In the early
days of robotics, robots were commonly equipped with sonar sensors. Sonar sensors emit
directional sound waves, which are reflected by objects, with some of the sound making it
back into the sensor. The time and intensity of the returning signal indicate the distance to nearby objects.
Sonar is the technology of choice for autonomous underwater vehicles. Stereo vision relies on multiple cameras
to image the environment from slightly different viewpoints, analyzing the resulting parallax in these images to
compute the range of surrounding objects. For mobile ground robots, sonar and stereo vision are now rarely used,
because they are not reliably accurate.
Most ground robots are now equipped with optical range finders. Just like sonar sensors, optical range sensors
emit active signals (light) and measure the time until a reflection of this signal arrives back at the sensor. Figure (a)
shows a time of flight camera. This camera acquires range images like the one shown in Figure (b) at up to 60
frames per second. Other range sensors use laser beams and special 1-pixel cameras that can be directed using
complex arrangements of mirrors or rotating elements. These sensors are called scanning lidars (short for light
detection and ranging). Scanning lidars tend to provide longer ranges than time of flight cameras, and tend to
perform better in bright daylight.
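As a rough illustration (not from the textbook), the sketch below shows the arithmetic a time-of-flight range finder performs: the emitted signal travels out and back, so the one-way range is the wave speed times the measured round-trip time, divided by two. The function name and numeric values are only examples.

    # Illustrative sketch: turning a measured round-trip time into a range estimate.
    SPEED_OF_LIGHT = 3.0e8   # m/s, for an optical range finder / lidar
    SPEED_OF_SOUND = 343.0   # m/s in air, for a sonar sensor

    def time_of_flight_range(round_trip_time_s: float, wave_speed: float) -> float:
        """One-way distance to the reflecting object: speed * time / 2."""
        return wave_speed * round_trip_time_s / 2.0

    # Example: a lidar return arriving 66.7 nanoseconds after emission
    print(time_of_flight_range(66.7e-9, SPEED_OF_LIGHT))  # about 10 m
    print(time_of_flight_range(0.02, SPEED_OF_SOUND))     # about 3.4 m for sonar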
Effectors
Effectors are the means by which robots move and change the shape of their bodies. To understand the design of
effectors, it will help to talk about motion and shape in the abstract, using the concept of a degree of freedom
(DOF). We count one degree of freedom for each independent direction in which a robot, or one of its effectors, can
move. For example, a rigid mobile robot such as an AUV has six degrees of freedom, three for its (x, y, z) location
in space and three for its angular orientation, known as yaw, roll, and pitch. These six degrees define the
kinematic state or pose of the robot. The dynamic state of a robot includes these six plus an additional six
dimensions for the rate of change of each kinematic dimension, that is, their velocities.
For nonrigid bodies, there are additional degrees of freedom within the robot itself. For example, the elbow
of a human arm possesses two degrees of freedom. It can flex the upper arm towards or away, and can rotate right
or left. The wrist has three degrees of freedom. It can move up and down, side to side, and can also rotate. Robot
joints also have one, two, or three degrees of freedom each. Six degrees of freedom are required to place an
object, such as a hand, at a particular point in a particular orientation. The arm in Figure 25.4(a) has exactly six
degrees of freedom, created by five revolute joints that generate rotational motion and one prismatic joint that
generates sliding motion.
Figure (a) The Stanford Manipulator, an early robot arm with five revolute joints (R) and one prismatic joint (P),
for a total of six degrees of freedom. (b) Motion of a nonholonomic four-wheeled vehicle with front-wheel
steering.
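A minimal sketch (illustrative only; the class and field names are hypothetical) of how the kinematic state and dynamic state defined above could be represented in code:

    # 6-DOF pose of a rigid free-flying robot such as an AUV, plus its dynamic state.
    from dataclasses import dataclass

    @dataclass
    class KinematicState:
        # location in space
        x: float
        y: float
        z: float
        # angular orientation (radians)
        yaw: float
        pitch: float
        roll: float

    @dataclass
    class DynamicState:
        # kinematic state plus the rate of change of each of its six dimensions
        pose: KinematicState
        linear_velocity: tuple      # (vx, vy, vz)
        angular_velocity: tuple     # (yaw_rate, pitch_rate, roll_rate)

    # Example: a stationary AUV at the origin, level, facing along the x-axis
    auv = DynamicState(KinematicState(0, 0, 0, 0, 0, 0), (0, 0, 0), (0, 0, 0))
    print(auv.pose)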
For mobile robots, the DOFs are not necessarily the same as the number of actuated elements. Consider,
for example, your average car: it can move forward or backward, and it can turn, giving it two DOFs. In
contrast, a car's kinematic configuration is three-dimensional: on an open flat surface, one can easily
maneuver a car to any (x, y) point, in any orientation.
Thus, the car has three effective degrees of freedom but two controllable degrees of freedom. We say
a robot is nonholonomic if it has more effective DOFs than controllable DOFs and holonomic if the two numbers
are the same. Holonomic robots are easier to control—it would be much easier to park a car that could move
sideways as well as forward and backward—but holonomic robots are also mechanically more complex. Most
robot arms are holonomic, and most mobile robots are nonholonomic.
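To make the car example concrete, here is an illustrative sketch (not from the textbook) of a simplified kinematic update: the configuration has three variables (x, y, theta) but only two controls, forward speed and turn rate, so the vehicle cannot slide sideways.

    import math

    def drive(x, y, theta, v, omega, dt):
        """One step of a simplified nonholonomic vehicle: three configuration
        variables (x, y, theta) driven by only two controls (v and omega)."""
        x_new = x + v * dt * math.cos(theta)
        y_new = y + v * dt * math.sin(theta)
        theta_new = theta + omega * dt
        return x_new, y_new, theta_new

    # Example: drive forward at 1 m/s while turning at 0.5 rad/s for one second
    print(drive(0.0, 0.0, 0.0, v=1.0, omega=0.5, dt=1.0))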
(a) Mobile manipulator plugging its charge cable into a wall outlet. Image courtesy of Willow Garage, © 2009. (b) One
of Marc Raibert's legged robots in motion.
Figure (a) displays a two-armed robot. This robot's arms use springs to compensate for gravity, and they provide minimal
resistance to external forces. Such a design minimizes the physical danger to people who might stumble into such a robot.
This is a key consideration in deploying robots in domestic environments.
The robot in Figure (b) is dynamically stable, meaning that it can remain upright while hopping around. A robot that can
remain upright without moving its legs is called statically stable. A robot is statically stable if its center of gravity is above the
polygon spanned by its legs.
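The static-stability condition can be checked geometrically. The following is an illustrative sketch only: it tests whether the (x, y) projection of the center of gravity lies inside the convex support polygon spanned by the feet.

    # Illustrative static stability test for a legged robot.
    def inside_convex_polygon(point, polygon):
        """polygon: list of (x, y) foot positions in counter-clockwise order."""
        px, py = point
        for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
            # cross product tells which side of the edge the point lies on
            if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
                return False
        return True

    feet = [(0, 0), (1, 0), (1, 1), (0, 1)]          # four foot positions (CCW)
    print(inside_convex_polygon((0.5, 0.5), feet))   # True  -> statically stable
    print(inside_convex_polygon((1.5, 0.5), feet))   # False -> would tip over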
(a) Four-legged dynamically-stable robot "Big Dog." Image courtesy Boston Dynamics, © 2009. (b) 2009 RoboCup Standard
Platform League competition, showing the winning team, B-Human, from the DFKI center at the University of Bremen.
Throughout the match, B-Human outscored their opponents 64:1. Their success was built on probabilistic state estimation
using particle filters and Kalman filters; on machine-learning models for gait optimization; and on dynamic kicking moves.
Image courtesy DFKI, © 2009.
ROBOTIC PERCEPTION
Perception is the process by which robots map sensor measurements into internal representations of the
environment. Perception is difficult because sensors are noisy, and the environment is partially observable,
unpredictable, and often dynamic. In other words, robots have all the problems of state estimation (or filtering)
discussed earlier.
As a rule of thumb, good internal representations for robots have three properties: they contain
enough information for the robot to make good decisions, they are structured so that they can be
updated efficiently, and they are natural in the sense that internal variables correspond to natural
state variables in the physical world.
We saw that Kalman filters, HMMs, and dynamic Bayes nets can represent the transition and sensor
models of a partially observable environment, and we described both exact and approximate
algorithms for updating the belief state—the posterior probability distribution over the environment
state variables.
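As an illustrative sketch of the belief-state update (not from the textbook), the simplest case is a one-dimensional Kalman filter: the Gaussian belief over a robot's position is first predicted forward using the motion command and then corrected with a range measurement. The parameter values below are arbitrary examples.

    def kalman_step(mu, var, u, motion_var, z, sensor_var):
        # Predict: move by commanded displacement u, uncertainty grows
        mu_pred = mu + u
        var_pred = var + motion_var
        # Update: fuse the measurement z, uncertainty shrinks
        k = var_pred / (var_pred + sensor_var)     # Kalman gain
        mu_new = mu_pred + k * (z - mu_pred)
        var_new = (1 - k) * var_pred
        return mu_new, var_new

    belief = (0.0, 1.0)                            # initial mean and variance
    belief = kalman_step(*belief, u=1.0, motion_var=0.5, z=1.2, sensor_var=0.4)
    print(belief)   # mean pulled toward the measurement, variance reduced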
Robot perception can be viewed as temporal inference from sequences of actions and measurements, as illustrated by
this dynamic Bayes network.
(a) A simplified kinematic model of a mobile robot. The robot is shown as a circle with an interior line marking the
forward direction. The state xt consists of the (xt, yt) position (shown implicitly) and the orientation θt. The new state
xt+1 is obtained by an update in position of vtΔt and in orientation of ωtΔt. Also shown is a landmark at (xi, yi) observed
at time t. (b) The range-scan sensor model. Two possible robot poses are shown for a given range scan (z1, z2, z3, z4). It
is much more likely that the pose on the left generated the range scan than the pose on the right.
Monte Carlo localization
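Monte Carlo localization represents the belief about the robot's pose by a set of samples ("particles"): at each step the particles are moved with the motion model, weighted by the likelihood of the sensor reading, and resampled. The following is an illustrative sketch only; the helper expected_range, which predicts the range a given particle should observe from the map, is assumed to be supplied by the caller, and the noise parameters are arbitrary.

    import math, random

    def mcl_step(particles, v, omega, dt, z_measured, expected_range, sensor_std):
        # 1. Motion update: move every particle with sampled noise
        moved = []
        for (x, y, theta) in particles:
            nv = v + random.gauss(0, 0.1)
            nw = omega + random.gauss(0, 0.05)
            moved.append((x + nv * dt * math.cos(theta),
                          y + nv * dt * math.sin(theta),
                          theta + nw * dt))
        # 2. Sensor update: weight particles by likelihood of the range reading
        weights = []
        for p in moved:
            error = z_measured - expected_range(p)
            weights.append(math.exp(-error ** 2 / (2 * sensor_std ** 2)))
        # 3. Resample particles in proportion to their weights
        return random.choices(moved, weights=weights, k=len(particles))

One such step is run for every control and measurement pair, so the particle cloud tracks the posterior over the robot's pose as it moves.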
Machine learning plays an important role in robot perception, particularly when the best internal
representation is not known. One common approach is to map high-dimensional sensor streams into lower-dimensional spaces
using unsupervised machine learning methods. Such an approach is called low-dimensional embedding. Machine learning
makes it possible to learn sensor and motion models from data, while simultaneously discovering suitable internal
representations.
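As an illustrative sketch of the simplest kind of low-dimensional embedding (a PCA projection, chosen here as an example rather than taken from the text), high-dimensional sensor vectors are projected onto their top-k principal directions:

    import numpy as np

    def pca_embed(sensor_data: np.ndarray, k: int) -> np.ndarray:
        """sensor_data: (n_readings, n_dimensions) array; returns (n_readings, k)."""
        centered = sensor_data - sensor_data.mean(axis=0)
        # Singular value decomposition gives the principal directions in vt
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centered @ vt[:k].T

    readings = np.random.rand(100, 64)      # e.g. 100 scans of a 64-beam sensor
    embedded = pca_embed(readings, k=3)     # each scan summarized by 3 numbers
    print(embedded.shape)                   # (100, 3)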
Another machine learning technique enables robots to continuously adapt to broad changes in sensor measurements.
Picture yourself walking from a sun-lit space into a dark neon-lit room. Clearly things are darker inside. But the change of
light source also affects all the colors: Neon light has a stronger component of green light than sunlight. Yet somehow we
seem not to notice the change. If we walk together with people into a neon-lit room, we don’t think that suddenly their
faces turned green. Our perception quickly adapts to the new lighting conditions, and our brain ignores the differences.
Methods that make robots collect their own training data (with labels!) are called self-supervised. In this instance,
the robot uses machine learning to leverage a short-range sensor that works well for terrain classification into a sensor that
can see much farther. That allows the robot to drive faster, slowing down only when the sensor model says there is a change
in the terrain that needs to be examined more carefully by the short-range sensors.
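A minimal sketch of this self-supervision idea, assuming a very simple nearest-centroid classifier (the classifier choice, feature sizes, and function names are illustrative, not from the text): labels produced automatically by the reliable short-range sensor are used to train a model that then classifies terrain seen only by the long-range camera.

    import numpy as np

    def train_nearest_centroid(features, labels):
        """features: (n, d) camera features; labels: 0/1 from the short-range
        sensor (e.g. 1 = drivable terrain). Returns per-class centroids."""
        return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

    def predict(centroids, feature):
        return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))

    # Nearby patches are labeled automatically by the short-range sensor ...
    near_features = np.random.rand(200, 8)
    near_labels = (near_features[:, 0] > 0.5).astype(int)   # stand-in auto-labels
    model = train_nearest_centroid(near_features, near_labels)
    # ... and the trained model then classifies far-away patches from the camera.
    print(predict(model, np.random.rand(8)))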
PHILOSOPHICAL FOUNDATIONS: WEAK AI
Our definition of AI works well for the engineering problem of finding a good agent, given an architecture. Therefore,
we're tempted to end this section right now, answering the title question in the affirmative. But philosophers are interested
in the problem of comparing two architectures—human and machine. Furthermore, they have traditionally posed the
question not in terms of maximizing expected utility but rather as, "Can machines think?"
Alan Turing, in his famous paper "Computing Machinery and Intelligence" (1950), suggested that instead of
asking whether machines can think, we should ask whether machines can pass a behavioral intelligence test, which has
come to be called the Turing Test. The test is for a program to have a conversation (via online typed messages) with an
interrogator for five minutes. The interrogator then has to guess if the conversation is with a program or a person; the
program passes the test if it fools the interrogator 30% of the time.
The argument from disability
The "argument from disability" makes the claim that "a machine can never do X." As examples of X, Turing lists the
following:
Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall
in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the
subject of its own thought, have as much diversity of behavior as man, do something really new
It is clear that computers can do many things as well as or better than humans, including things that people believe
require great human insight and understanding. This does not mean, of course, that computers use insight and
understanding in performing these tasks—those are not part of behavior, and we address such questions elsewhere—
but the point is that one's first guess about the mental processes required to produce a given behavior is often wrong. It
is also true, of course, that there are many tasks at which computers do not yet excel (to put it mildly), including
Turing's task of carrying on an open-ended conversation.
Turing calls this the argument from consciousness—the machine has to be aware of its own mental states and actions. While
consciousness is an important subject, Jefferson's key point actually relates to phenomenology, or the study of direct
experience: the machine has to actually feel emotions. Others focus on intentionality—that is, the question of whether the
machine's purported beliefs, desires, and other representations are actually "about" something in the real world.
Turing argues that Jefferson would be willing to extend the polite convention to machines if only he
had experience with ones that act intelligently. He cites the following dialog, which has become such a part of
AI's oral tradition that we simply have to include it:
HUMAN: In the first line of your sonnet which reads "shall I compare thee to a summer's day," would
not a "spring day" do as well or better?
MACHINE: It wouldn't scan.
HUMAN: How about "a winter's day." That would scan all right.
MACHINE: Yes, but nobody wants to be compared to a winter's day.
HUMAN: Would you say Mr. Pickwick reminded you of Christmas?
MACHINE: In a way.
HUMAN: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the
comparison.
MACHINE: I don't think you're serious. By a winter's day one means a typical winter's day, rather
than a special one like Christmas.
STRONG AI
If physicalism is correct, it must be the case that the proper description of a person's mental state is determined by that
person's brain state. Thus, if I am currently focused on eating a hamburger in a mindful way, my instantaneous brain state is
an instance of the class of mental states "knowing that one is eating a hamburger." Of course, the specific configurations of all
the atoms of my brain are not essential: there are many configurations of my brain, or of other people's brains, that would
belong to the same class of mental states. The key point is that the same brain state could not correspond to a fundamentally
distinct mental state, such as the knowledge that one is eating a banana.
The "wide content" view interprets the content of a mental state from the point of view of an omniscient outside observer
with access to the whole situation, who can distinguish differences in the world. Under this view, the content of mental states
involves both the brain state and the environment history. Narrow content, on the other hand, considers only the brain state.
The narrow content of the brain states of a real hamburger-eater and a brain-in-a-vat "hamburger"-"eater" is the same in both
cases.
In Searle's Chinese room thought experiment, a person who understands only English sits in a room with a rule book and
stacks of paper; Chinese sentences are passed in, and by following the rules in the book the person produces Chinese
sentences that are passed back out. So far, so good. But from the outside, we see a system that is taking input in the form of
Chinese sentences and generating answers in Chinese that are as "intelligent" as those in the conversation imagined by
Turing. Searle then argues: the person in the room does not understand Chinese (given). The rule book and the stacks of
paper, being just pieces of paper, do not understand Chinese. Therefore, there is no understanding of Chinese. Hence,
according to Searle, running the right program does not necessarily generate understanding.
The real claim made by Searle rests upon the following four axioms (Searle, 1990):
1. Computer programs are formal (syntactic).
2. Human minds have mental contents (semantics).
3. Syntax by itself is neither constitutive of nor sufficient for semantics.
4. Brains cause minds.
From the first three axioms Searle concludes that programs are not sufficient for minds. In other words, an agent running a
program might be a mind, but it is not necessarily a mind just by virtue of running the program. From the fourth axiom he
concludes “Any other system capable of causing minds would have to have causal powers (at least) equivalent to those
of brains.” From there he infers that any artificial brain would have to duplicate the causal powers of brains, not just run a
particular program, and that human brains do not produce mental phenomena solely by virtue of running a program.
Running through all the debates about strong AI—the elephant in the debating room, so to speak—is the issue of
consciousness. Consciousness is often broken down into aspects such as understanding and self-awareness. The aspect we
will focus on is that of subjective experience: why it is that it feels like something to have certain brain states (e.g., while
eating a hamburger), whereas it presumably does not feel like anything to have other physical states (e.g., while being a rock).
The technical term for the intrinsic nature of experiences is qualia (from the Latin word meaning, roughly, "such things").
Qualia present a challenge for functionalist accounts of the mind because different qualia could be involved in what
are otherwise isomorphic causal processes. Consider, for example, the inverted spectrum thought experiment, wherein
the subjective experience of person X when seeing red objects is the same experience that the rest of us experience
when seeing green objects, and vice versa.
This explanatory gap has led some philosophers to conclude that humans are simply incapable of forming a proper
understanding of their own consciousness. Others, notably Daniel Dennett (1991), avoid the gap by denying the existence
of qualia, attributing them to a philosophical confusion.
ETHICS AND RISKS OF AI
People might lose their jobs to automation. The modern industrial economy has become dependent on computers in
general, and select AI programs in particular. For example, much of the economy, especially in the United States,
depends on the availability of consumer credit. Credit card applications, charge approvals, and fraud detection are now
done by AI programs. One could say that thousands of workers have been displaced by these AI programs, but in fact if
you took away the AI programs these jobs would not exist, because human labor would add an unacceptable cost to the
transactions.
People might have too much (or too little) leisure time. Alvin Toffler wrote in Future Shock (1970),
“The work week has been cut by 50 percent since the turn of the century. It is not out of the way to predict
that it will be slashed in half again by 2000.” Arthur C. Clarke (1968b) wrote that people in 2001 might be
“faced with a future of utter boredom, where the main problem in life is deciding which of several hundred
TV channels to select.”
People might lose their sense of being unique. In Computer Power and Human Reason, Weizenbaum (1976), the
author of the ELIZA program, points out some of the potential threats that AI poses to society. One of Weizenbaum's
principal arguments is that AI research makes possible the idea that humans are automata—an idea that results in a loss
of autonomy or even of humanity.
AI systems might be used toward undesirable ends. Advanced technologies have often been used by the powerful to
suppress their rivals. As the number theorist G. H. Hardy wrote (Hardy, 1940), “A science is said to be useful if its
development tends to accentuate the existing inequalities in the distribution of wealth, or more directly promotes the
destruction of human life.” This holds for all sciences, AI being no exception. Autonomous AI systems are now
commonplace on the battlefield; the U.S. military deployed over 5,000 autonomous aircraft and 12,000 autonomous
ground vehicles in Iraq (Singer, 2009).
The use of AI systems might result in a loss of accountability. In the litigious atmosphere that prevails in the United
States, legal liability becomes an important issue. When a physician relies on the judgment of a medical expert system
for a diagnosis, who is at fault if the diagnosis is wrong? Fortunately, due in part to the growing influence of decision-
theoretic methods in medicine, it is now accepted that negligence cannot be shown if the physician performs medical
procedures that have high expected utility, even if the actual result is catastrophic for the patient.
The success of AI might mean the end of the human race. Almost any technology has the potential to cause harm in
the wrong hands, but with AI and robotics, we have the new problem that the wrong hands might belong to the
technology itself. Countless science fiction stories have warned about robots or robot–human cyborgs running amok.
If ultraintelligent machines are a possibility, we humans would do well to make sure that we design their predecessors
in such a way that they design themselves to treat us well. Science fiction writer Isaac Asimov (1942) was the first to address
this issue, with his three laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First
Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or
Second Law.
AGENT COMPONENTS
Interaction with the environment through sensors and actuators: For much of the history of AI, this has been a
glaring weak point. With a few honorable exceptions, AI systems were built in such a way that humans had to
supply the inputs and interpret the outputs, while robotic systems focused on low-level tasks in which high-level
reasoning and planning were largely
absent. This was due in part to the great expense and engineering effort required to get real robots to work
at all. The situation has changed rapidly in recent years with the availability of ready-made programmable
robots. These, in turn, have benefited from small, cheap, high-resolution CCD cameras and compact, reliable
motor drives. MEMS (micro-electromechanical systems) technology has supplied miniaturized
accelerometers, gyroscopes, and actuators for an artificial flying insect (Floreano et al., 2009). It may also
be possible to combine millions of MEMS devices to produce powerful macroscopic actuators.
Keeping track of the state of the world: This is one of the core capabilities required for an intelligent
agent. It requires both perception and updating of internal representations. Earlier chapters showed how to keep
track of atomic state representations and factored (propositional) state representations, and how to extend this to
first-order logic; Chapter 15 described filtering algorithms for probabilistic reasoning in uncertain environments.
Current filtering and perception algorithms can be combined to do a reasonable job of reporting low-level
predicates such as "the cup is on the table." Detecting higher-level actions, such as "Dr. Russell is having a cup of
tea with Dr. Norvig while discussing plans for next week," is more difficult. Currently it can be done only with the
help of annotated examples.
Projecting, evaluating, and selecting future courses of action: The basic knowledge-representation
requirements here are the same as for keeping track of the world; the primary difficulty is coping with courses
of action—such as having a conversation or a cup of tea—that consist eventually of thousands or millions
of primitive steps for a real agent. It is only by imposing hierarchical structure on behavior that we humans
cope at all. We have seen how to use hierarchical representations to handle problems of this scale; furthermore,
work in hierarchical reinforcement learning has succeeded in combining some of these ideas with the techniques
for decision making under uncertainty. As yet, algorithms for the partially observable case (POMDPs) are using
the same atomic state representation we used for the search algorithms.
It has proven very difficult to decompose preferences over complex states in the same way that Bayes
nets decompose beliefs over complex states. One reason may be that preferences over states are really
compiled from preferences over state histories, which are described by reward functions.
AGENT ARCHITECTURES
It is natural to ask, "Which of the agent architectures should an agent use?" The answer is, "All of
them!" We have seen that reflex responses are needed for situations in which time is of the essence, whereas
knowledge-based deliberation allows the agent to plan ahead. A complete agent must be able to do both,
using a hybrid architecture. One important property of hybrid architectures is that the boundaries between
different decision components are not fixed. For example, compilation continually converts declarative
information at the deliberative level into more efficient representations, eventually reaching the reflex level.
For example, a taxi-driving agent that sees an accident ahead must decide in a split second either to brake or to take
evasive action. It should also spend that split second thinking about the most important questions, such as whether the
lanes to the left and right are clear and whether there is a large truck close behind, rather than worrying about wear and
tear on the tires or where to pick up the next passenger. These issues are usually studied under the heading of real-time
AI.
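A minimal sketch of such a hybrid architecture (illustrative only; the percept keys, thresholds, and function names are hypothetical): a fast reflex layer handles the time-critical case, and otherwise a slower deliberative layer plans ahead.

    def reflex_layer(percept):
        """Hard-wired responses for situations where time is of the essence."""
        if percept.get("obstacle_distance_m", float("inf")) < 2.0:
            return "BRAKE"
        return None                      # no reflex applies

    def deliberative_layer(percept, goal):
        """Stand-in for knowledge-based planning (route choice, next passenger, ...)."""
        return "plan route toward " + goal

    def hybrid_agent(percept, goal="airport"):
        action = reflex_layer(percept)              # reflexes get first priority
        return action if action else deliberative_layer(percept, goal)

    print(hybrid_agent({"obstacle_distance_m": 1.2}))   # -> BRAKE
    print(hybrid_agent({"obstacle_distance_m": 50.0}))  # -> plan route toward airport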
Fig: Compilation serves to convert deliberative decision making into more efficient, reflexive
mechanisms.
Clearly, there is a pressing need for general methods of controlling deliberation, rather than specific recipes for what to think
about in each situation. The first useful idea is to employ anytime algorithms: algorithms whose output quality improves
gradually over time, so that a reasonable decision is ready whenever the algorithm is interrupted.
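An illustrative sketch of the anytime idea (not from the text; the optimization problem and time budget are arbitrary examples): keep refining the best answer found so far and return whatever is best when the deadline arrives.

    import random, time

    def anytime_optimize(evaluate, deadline_s, candidate):
        """Hill-climbing refinement that always has a valid answer ready."""
        best, best_score = candidate, evaluate(candidate)
        end = time.monotonic() + deadline_s
        while time.monotonic() < end:
            neighbour = candidate + random.uniform(-1.0, 1.0)   # propose a tweak
            score = evaluate(neighbour)
            if score > best_score:                              # keep improvements
                best, best_score = neighbour, score
                candidate = neighbour
        return best                    # usable answer whatever the time budget

    # Example: maximize a simple utility within a 10 ms thinking budget
    print(anytime_optimize(lambda x: -(x - 3.0) ** 2, 0.01, candidate=0.0))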
The second technique for controlling deliberation is decision-theoretic metareasoning (Russell and Wefald, 1989, 1991;
Horvitz, 1989; Horvitz and Breese, 1996). This method applies the theory of information value to the selection of individual
computations. The value of a computation depends on both its cost (in terms of delaying action) and its benefits (in terms of
improved decision quality). Metareasoning techniques can be used to design better search algorithms and to guarantee that
the algorithms have the anytime property. Metareasoning is expensive, of course, and compilation methods can be applied
so that the overhead is small compared to the costs of the computations being controlled. Metalevel reinforcement learning
may provide another way to acquire effective policies for controlling deliberation.
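The core test in decision-theoretic metareasoning can be illustrated with a small sketch (the numbers and names are hypothetical): perform a further computation only if its expected benefit exceeds the cost of the delay it causes.

    def worth_computing(expected_gain_in_utility, compute_time_s, cost_per_second):
        """Value of a computation = expected improvement in decision quality
        minus the (utility) cost of delaying action while it runs."""
        value = expected_gain_in_utility - compute_time_s * cost_per_second
        return value > 0

    print(worth_computing(5.0, compute_time_s=0.2, cost_per_second=10.0))  # True
    print(worth_computing(0.5, compute_time_s=0.2, cost_per_second=10.0))  # False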
Metareasoning is one specific example of a reflective architecture—that is, an architecture that enables deliberation about
the computational entities and actions occurring within the architecture itself. A theoretical foundation for reflective
architectures can be built by defining a joint state space composed from the environment state and the computational state
of the agent itself.
ARE WE GOING IN THE RIGHT DIRECTION?
The preceding section listed many advances and many opportunities for further progress. But where is this all leading?
Dreyfus (1992) gives the analogy of trying to get to the moon by climbing a tree; one can report steady progress, all the
way to the top of the tree. In this section, we consider whether AI's current path is more like a tree climb or a rocket trip.
Perfect rationality. A perfectly rational agent acts at every instant in such a way as to maximize its expected utility, given
the information it has acquired from the environment. We have seen that the calculations necessary to achieve perfect
rationality in most environments are too time consuming, so perfect rationality is not a realistic goal.
Calculative rationality. This is the notion of rationality that we have used implicitly in designing logical and decision-
theoretic agents, and most of theoretical AI research has focused on this property. A calculatively rational agent eventually
returns what would have been the rational choice at the beginning of its deliberation. This is an interesting property for a
system to exhibit, but in most environments, the right answer at the wrong time is of no value. In practice, AI system
designers are forced to compromise on decision quality to obtain reasonable overall performance; unfortunately, the
theoretical basis of calculative rationality does not provide a well-founded way to make such compromises.
Bounded rationality. Herbert Simon (1957) rejected the notion of perfect (or even approximately perfect)
rationality and replaced it with bounded rationality, a descriptive theory of decision making by real agents.
Bounded optimality (BO). A bounded optimal agent behaves as well as possible, given its computational resources.
That is, the expected utility of the agent program for a bounded optimal agent is at least as high as the expected
utility of any other agent program running on the same machine.
WHAT IF AI DOES SUCCEED?
In David Lodge's Small World (1984), a novel about the academic world of literary criticism, the protagonist
causes consternation by asking a panel of eminent but contradictory literary theorists the following question:
"What if you were right?" None of the theorists seems to have considered this question before, perhaps
because debating unfalsifiable theories is an end in itself. Similar confusion can be evoked by asking AI
researchers, "What if you succeed?"
We can expect that medium-level successes in AI would affect all kinds of people in their daily lives.
So far, computerized communication networks, such as cell phones and the Internet, have had this kind of
pervasive effect on society, but AI has not. AI has been at work behind the scenes—for example, in
automatically approving or denying credit card transactions for every purchase made on the Web—but has
not been visible to the average consumer. We can imagine that truly useful personal assistants for the office
or the home would have a large positive impact on people's lives, although they might cause some economic
dislocation in the short term. Automated assistants for driving could prevent accidents, saving tens of
thousands of lives per year. A technological capability at this level might also be applied to the development
of autonomous weapons, which many view as undesirable. Some of the biggest societal problems we face
today—such as the harnessing of genomic information for treating disease, the efficient management of
energy resources, and the verification of treaties concerning nuclear weapons—are being addressed with the
help of AI technologies.
Finally, it seems likely that a large-scale success in AI—the creation of human-level intelligence and
beyond—would change the lives of a majority of humankind. The very nature of our work and play would be
altered, as would our view of intelligence, consciousness, and the future destiny of the human race. AI
systems at this level of capability could threaten human autonomy, freedom, and even survival. For these
reasons, we cannot divorce AI research from its ethical consequences.
In conclusion, we see that AI has made great progress in its short history, but the final sentence of
Alan Turing’s (1950) essay on Computing Machinery and Intelligence is still valid today:
We can see only a short distance ahead, but we can see that
much remains to be done.
9. Practice Quiz
1. What is the name for information sent from robot sensors to robot controllers?
a) temperature
b) pressure
c) feedback
d) signal
Answer: c
2. Which of the following terms refers to the rotational motion of a robot arm?
a) swivel
b) axle
c) retrograde
d) roll
Answer: d
3. What is the name for the space inside which a robot unit operates?
a) environment
b) spatial base
c) work envelope
d) exclusion zone
Answer: c
4. Which of the following terms IS NOT one of the five basic parts of a robot?
a) peripheral tools
b) end effectors
c) controller
d) drive
Answer: a
6. PROLOG is an AI programming language which solves problems with a form of symbolic logic known as predicate
calculus. It was developed in 1972 at the University of Marseilles by a team of specialists. Can you name the person who
headed this team?
a) Alain Colmerauer
b) Niklaus Wirth
c) Seymour Papert
d) John McCarthy
Answer: a
7. The number of moveable joints in the base, the arm, and the end effectors of the robot determines_________
a) degrees of freedom
b) payload capacity
c) operational limits
d) flexibility
Answer: a
8. Which of the following places would be LEAST likely to include operational robots?
a) warehouse
b) factory
c) hospitals
d) private homes
Answer: d
9. For a robot unit to be considered a functional industrial robot, typically, how many degrees of freedom would the robot
have?
a) three
b) four
c) six
d) eight
Answer: c
10. Which of the basic parts of a robot unit would include the computer circuitry that could be programmed to determine what
the robot would do?
a) sensor
b) controller
c) arm
d) end effector
Answer: b
10. Assignments
S.No Question BL CO
1 Explain briefly about Robotics. 2 1
2 Write and explain about Robot Hardware. 2 1
3 What is Robotic Perception? Explain briefly. 2 1
4 Explain about Robotic Software Architecture. 2 1
S.No Question BL CO
1 Explain and differentiate between Weak AI and Strong AI. 1 1
2 Explain the ethics and risks of developing AI. 2 1
3 Explain about Agent Components. 2 1
4 Explain about Agent Architectures. 2 1
5 What if AI does succeed? 3 1
14. Real Time Applications
3 AI in agriculture 1
5 AI in finance 1