
Chapter 1

1 Introduction
1.1 What is AI?
1.4 The State of the Art
Basic Idea on Artificial Intelligence (AI)
• Simple Definition: AI, or Artificial Intelligence, is when machines or computers are designed to
think, learn, and solve problems like humans do. It's like teaching a computer to be smart and
make decisions on its own!
• A simple example of AI is virtual assistants like Siri or Google Assistant. They can understand what
you say, answer questions, set reminders, and even control smart devices, all by using AI to
process your requests and respond.
• It’s hard to completely escape from AI since it’s embedded in so many aspects of modern life,
from smartphones and apps to online services and smart devices. AI is used in things like:
• Search engines
• Social media algorithms
• Navigation apps (like Google Maps)
• Online shopping recommendations
• Banking and fraud detection
• Virtual Assistants (like Siri, Google Assistant): They help with tasks like setting reminders, sending texts, and answering questions.
• Facial Recognition: AI scans and recognizes your face to unlock your phone securely.
• Voice Recognition: AI recognizes your voice for voice commands, dictation, and unlocking the phone.
• Camera Enhancements: AI improves photos by adjusting settings like lighting, focus, and color based on the scene.
• Autocorrect and Predictive Text: AI suggests words or corrects typing errors while you're texting or typing.
• App Suggestions: AI analyzes your usage patterns to suggest apps you might need based on the time of day or your routine.
• Battery Optimization: AI learns your phone usage and optimizes settings to conserve battery life.
• Smart Replies: AI generates quick, context-based replies in messaging apps.
• Personalized Content: AI curates news, social media feeds, or shopping suggestions based on your preferences.
• Object and Image Recognition: AI identifies objects, landmarks, and people in your photos, making it easier to search and organize them.


The "father of AI" is widely considered to be John McCarthy, an American computer
scientist. He coined the term "Artificial Intelligence" in 1956 and was one of the key
figures in the development of AI as a field of research. McCarthy also developed the
programming language LISP, which became widely used in AI research.
• Here are some of the most interesting breakthroughs in AI in recent years:

• 1. GPT Models (OpenAI's GPT-3 and GPT-4): These large language models can generate human-like text, answer questions, and assist with tasks like coding, writing, and research. GPT-4, in particular, has significantly improved in understanding context, logic, and handling complex tasks compared to its predecessors.
• 2. AlphaFold (by DeepMind): AlphaFold made a groundbreaking advance by solving the protein-folding problem, predicting the 3D structure of proteins based solely on their amino acid sequences. This has vast implications for biology, medicine, and drug discovery.
• 3. DALL·E and Stable Diffusion (Generative AI for Images): These AI models can generate highly detailed and creative images from simple text prompts. This opens new doors in art, design, and visual storytelling, allowing AI to create visually stunning content based on user descriptions.
• 4. Self-Supervised Learning: Traditional AI models rely on large amounts of labeled data, but self-supervised learning models (like Facebook's SEER or DeepMind's BYOL) learn from vast amounts of unlabeled data. This dramatically reduces the need for human-labeled datasets, speeding up AI development.
• 5. Reinforcement Learning Breakthroughs (AlphaGo/AlphaZero): DeepMind's AlphaGo and its successor AlphaZero mastered games like Go, Chess, and Shogi using reinforcement learning. AlphaZero can learn how to play these games from scratch, with no human input beyond the rules.
• 6. AI in Healthcare: AI models, like Google's DeepMind Health, are being used to detect diseases such as diabetic retinopathy, breast cancer, and more by analyzing medical images. AI tools are becoming crucial in early diagnosis and precision medicine.
• 7. Neural Radiance Fields (NeRF): NeRF is an AI technique that can create 3D models of objects from 2D images. It allows for ultra-realistic renderings and reconstructions, with applications in video games, virtual reality, and digital preservation.
• 8. Chatbots and Conversational AI: Conversational AI systems like ChatGPT and Google's Bard have made customer support and online interactions much more fluid and intelligent. These systems can hold meaningful conversations, answer complex questions, and even manage tasks autonomously.
• 9. Generative Adversarial Networks (GANs): GANs are used to create highly realistic images, videos, and even voices. They work by having two neural networks (a generator and a discriminator) compete with each other to improve the quality of generated outputs.
• 10. AI in Robotics (Boston Dynamics): AI combined with advanced robotics has resulted in robots like Boston Dynamics' Spot and Atlas, which can navigate challenging terrains, perform complex movements, and assist in industries like construction, healthcare, and rescue operations.
The relationship between Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) is hierarchical,
with each being a subset of the other.
• Artificial Intelligence (AI):
• AI is the broadest concept. It refers to the development of computer systems that can perform tasks
typically requiring human intelligence, such as decision-making, problem-solving, and understanding
language. AI encompasses all techniques that enable machines to simulate intelligent behavior.
• Example: Virtual assistants like Siri or self-driving cars.
• Machine Learning (ML):
• ML is a subset of AI. It involves training computers to learn from data and improve over time without
being explicitly programmed for each task. In ML, algorithms find patterns or make decisions based on
data.
• Example: A spam filter that learns to identify and block unwanted emails by analyzing incoming emails.
• ML focuses on developing algorithms that allow systems to automatically improve through experience.
• Deep Learning (DL):
• DL is a subset of ML that uses neural networks with many layers (hence "deep") to learn complex
patterns in data. It is inspired by the structure of the human brain. DL models can handle large amounts
of unstructured data, such as images, videos, and text, and are especially useful in tasks like image
recognition, speech processing, and natural language understanding.
• Example: Image recognition systems, like those used by Facebook to tag friends in photos.
• DL models, like convolutional neural networks (CNNs) and recurrent neural networks (RNNs), are at the
core of many advanced AI applications today.
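The ML idea of "learning from data" described above can be sketched with a toy spam filter in Python. The `ToySpamFilter` class and its training messages are made up for illustration; real filters use probabilistic models such as naive Bayes over far richer features.

```python
from collections import Counter

# Toy spam filter: learns word counts from labeled example messages,
# then scores a new message by which class's words dominate. This is
# a deliberately simplified sketch of "learning from data".
class ToySpamFilter:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def train(self, message, is_spam):
        words = message.lower().split()
        (self.spam_words if is_spam else self.ham_words).update(words)

    def is_spam(self, message):
        spam_score = ham_score = 0
        for word in message.lower().split():
            spam_score += self.spam_words[word]   # Counter returns 0 for unseen words
            ham_score += self.ham_words[word]
        return spam_score > ham_score

filt = ToySpamFilter()
filt.train("win a free prize now", is_spam=True)
filt.train("claim your free money", is_spam=True)
filt.train("meeting agenda for monday", is_spam=False)
filt.train("lunch on monday with the team", is_spam=False)

print(filt.is_spam("free prize money"))    # True: spam-like words dominate
print(filt.is_spam("monday team meeting")) # False: ham-like words dominate
```

Note that the filter is never explicitly programmed with a list of "bad" words; it infers them from the labeled examples, which is the defining trait of ML.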
Hierarchical Structure:
•AI is the overarching concept (intelligent
systems),
•ML is a method within AI (learning from
data),
•DL is a specialized type of ML (using
neural networks for complex tasks).
In summary:
•AI = The big goal of creating intelligent
systems.
•ML = A way to achieve AI by enabling
machines to learn from data.
•DL = A more advanced form of ML, using
layered neural networks to solve complex
problems.
Coming to the actual syllabus
What is AI?
• We call ourselves Homo sapiens—man the wise—because our intelligence is so
important to us. For thousands of years, we have tried to understand how we
think; that is, how a mere handful of matter can perceive, understand, predict,
and manipulate a world far larger and more complicated than itself.
• The field of artificial intelligence, or AI, goes further still: it attempts not just to
understand but also to build intelligent entities.
• AI is one of the newest fields in science and engineering. Work started in earnest
soon after World War II, and the name itself was coined in 1956.
• Along with molecular biology, AI is regularly cited as the “field I would most like
to be in” by scientists in other disciplines.
• A student in physics might reasonably feel that all the good ideas have already
been taken by Galileo, Newton, Einstein, and the rest. AI, on the other hand, still
has openings for several full-time Einsteins and Edisons.
• The Turing test: Can a computer pass for a human? - Alex Gendler
(youtube.com)
• What is a Turing Test? A Brief History of the Turing Test and its Impact
(youtube.com)
1.1.1 Acting humanly: The Turing Test approach
• The Turing Test is one of the most well-known and debated concepts
in artificial intelligence (AI)
• It was proposed by the British mathematician and computer scientist
Alan Turing in 1950 in his seminal paper, “Computing Machinery and
Intelligence.” In it, he proposed the test as a way to determine
whether a computer (machine) can think intelligently like a human.
What is the Turing Test?

The Turing Test is a widely recognized benchmark for evaluating a
machine’s ability to demonstrate human-like intelligence. The core idea
is simple: a human judge engages in a text-based conversation with
both a human and a machine. The judge’s task is to determine which
participant is human and which is the machine. If the judge is unable to
distinguish between the human and the machine based solely on the
conversation, the machine is said to have passed the Turing Test.
Criteria for the Turing Test
• The Turing Test does not require the machine to be correct or logical
in its responses but rather to be convincing in simulating human
conversation. The test is fundamentally about deception—the
machine must fool the judge into believing that it is human.
• The computer would need to possess the following capabilities:
• natural language processing to enable it to communicate successfully
in English;
• knowledge representation to store what it knows or hears;
• automated reasoning to use the stored information to answer
questions and to draw new conclusions;
• machine learning to adapt to new circumstances and to detect and
extrapolate patterns.
How the Turing Test Works?
• In a typical Turing Test scenario, three participants are involved: two humans and
one machine.
• The interrogator, a human judge, is isolated from the other two participants. The
judge asks questions to both the human and the machine, aiming to identify
which one is the human. The machine’s goal is to respond in a way that makes it
indistinguishable from the human participant. If the judge cannot reliably identify
the machine, the machine is considered to have passed the Turing Test.
Here’s an example of a conversation between the interrogator and the
machine:
• Judge: Are you a computer?
• Machine: No.
• Judge: Multiply 158745887 by 56755647.
• Machine: (After a long pause) [Provides an incorrect answer].
• Judge: Add 5,478,012 and 4,563,145.
• Machine: (Pauses for 20 seconds and then responds) 10,041,157.
• If the judge cannot distinguish between the responses of the human
and the machine, the machine passes the test. The conversation is
limited to a text-only format, such as a computer keyboard and
screen, to prevent the judge from being influenced by any non-verbal
cues.
1.1.2 Thinking humanly: The cognitive
modeling approach
• Cognitive artificial intelligence (Cognitive AI) refers to systems that
mimic human thought processes and simulate the way humans learn
and interact with information.
• If we are going to say that a given program thinks like a human, we
must have some way of determining how humans think. We need to
get inside the actual workings of human minds
• There are three ways to do this:
• through introspection—trying to catch our own thoughts as they go by;
• through psychological experiments—observing a person in action;
• through brain imaging—observing the brain in action.
• Once we have a sufficiently precise theory of the mind, it becomes
possible to express the theory as a computer program
• If the program’s input–output behavior matches corresponding
human behavior, that is evidence that some of the program’s
mechanisms could also be operating in humans
1.1.3 Thinking rationally: The “laws of
thought” approach
• The Greek philosopher Aristotle was one of the first to attempt to
codify “right thinking,” that is, irrefutable reasoning processes.
• His syllogisms provided patterns for argument structures that always
yielded correct conclusions when given correct premises
• for example, “Socrates is a man; all men are mortal; therefore,
Socrates is mortal.”
• These laws of thought were supposed to govern the operation of the
mind; their study initiated the field called logic
• Logicians in the 19th century developed a precise notation for
statements about all kinds of objects in the world and the relations
among them
• By 1965, programs existed that could, in principle, solve any solvable
problem described in logical notation.
• The so-called logicist tradition within artificial intelligence hopes to
build on such programs to create intelligent systems
• There are two main obstacles to this approach
• First, it is not easy to take informal knowledge and state it in the
formal terms required by logical notation, particularly when the
knowledge is less than 100% certain
• Second, there is a big difference between solving a problem “in
principle” and solving it in practice
• Even problems with just a few hundred facts can exhaust the
computational resources of any computer unless it has some
guidance as to which reasoning steps to try first.
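Aristotle's syllogism ("Socrates is a man; all men are mortal; therefore, Socrates is mortal") can be sketched as a single forward-chaining step in Python. The fact and rule encodings below are toy assumptions for illustration, not a real logic engine:

```python
# Minimal forward-chaining sketch of a syllogism.
facts = {("man", "Socrates")}                  # Socrates is a man
rules = [(("man", "X"), ("mortal", "X"))]      # all men are mortal

def forward_chain(facts, rules):
    """Repeatedly apply rules to known facts until no new fact is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (pred_in, _), (pred_out, _) in rules:
            for pred, subject in list(derived):
                new_fact = (pred_out, subject)
                if pred == pred_in and new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

print(forward_chain(facts, rules))
# result contains ("mortal", "Socrates"): therefore, Socrates is mortal
```

The sketch also hints at the second obstacle above: with many facts and rules, the inner loops blow up combinatorially unless the system is told which reasoning steps to try first.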
1.1.4 Acting rationally: The rational agent approach
• An agent is just something that acts
• Of course, all computer programs do something, but computer agents are
expected to do more: operate autonomously, perceive their environment,
persist over a prolonged time period, adapt to change, and create and pursue
goals
• A rational agent is one that acts so as to achieve the best outcome or, when
there is uncertainty, the best expected outcome.
• Example: Self-driving cars make decisions based on sensor data and optimize
for safety and efficiency
• (In artificial intelligence (AI), an agent refers to any entity or system that
perceives its environment through sensors, processes that information, and
then takes actions to achieve specific goals. The agent interacts with its
environment, trying to maximize a certain performance measure based on its
perception and actions.)
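The perceive-then-act cycle described above can be sketched as a minimal reflex agent in Python. The two-location vacuum-world setting and the function name are illustrative assumptions, not a standard API:

```python
# Minimal reflex agent: map a percept (location, status) directly to an
# action that improves the performance measure (cleanliness).
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "dirty":
        return "suck"                                   # cleaning scores points
    return "right" if location == "A" else "left"       # otherwise, move on

# One step of the perceive-act cycle:
print(reflex_vacuum_agent(("A", "dirty")))   # suck
print(reflex_vacuum_agent(("A", "clean")))   # right
```

This agent acts rationally for its simple environment without any logical inference, illustrating the point below that correct reasoning is only one route to rational action.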
• The "laws of thought" approach in AI focuses on making correct
inferences, as logical reasoning can help an agent achieve its goals.
However, not all rational actions involve inference, as some, like
reflexes, are instinctive and don't require logical deliberation
• The skills needed for the Turing Test help an agent act rationally.
These include reasoning for making good decisions, using natural
language to communicate effectively, and learning to improve
behavior
• The rational-agent approach has two advantages: it is more flexible
than the "laws of thought" approach, as correct reasoning is just one
way to act rationally, and it is easier to develop scientifically than
methods based on human behavior or thought
1.4 THE STATE OF THE ART (What can AI do today?
Applications of AI, examples)
• Robotic vehicles: A driverless robotic car named STANLEY sped through the rough
terrain of the Mojave Desert at 22 mph, finishing the 132-mile course first to win
the 2005 DARPA Grand Challenge. STANLEY is a Volkswagen Touareg outfitted with
cameras, radar, and laser rangefinders to sense the environment and onboard
software to command the steering, braking, and acceleration (Thrun, 2006). The
following year CMU’s BOSS won the Urban Challenge, safely driving in traffic
through the streets of a closed Air Force base, obeying traffic rules and avoiding
pedestrians and other vehicles.
• Speech recognition: A traveler calling United Airlines to book a flight
can have the entire conversation guided by an automated speech
recognition and dialog management system. The AI system recognizes the traveler’s spoken words
(speech recognition) and manages the conversation, guiding the user through the flight booking process without
the need for a human operator. The system can understand spoken inputs, respond accordingly, and assist in
completing the booking.
• Autonomous planning and scheduling: A hundred million miles from
Earth, NASA’s Remote Agent program became the first on-board
autonomous planning program to control the scheduling of
operations for a spacecraft (Jonsson et al., 2000). REMOTE AGENT
generated plans from high-level goals specified from the ground and
monitored the execution of those plans—detecting, diagnosing, and
recovering from problems as they occurred. Successor program
MAPGEN (Al-Chang et al., 2004) plans the daily operations for NASA’s
Mars Exploration Rovers, and MEXAR2 (Cesta et al., 2007) did mission
planning—both logistics and science planning—for the European
Space Agency’s Mars Express mission in 2008.
• Game playing: IBM’s DEEP BLUE became the first computer program to defeat the
world champion in a chess match when it bested Garry Kasparov by a score of 3.5
to 2.5 in an exhibition match (Goodman and Keene, 1997). Kasparov said that he
felt a “new kind of intelligence” across the board from him. Newsweek magazine
described the match as “The brain’s last stand.” The value of IBM’s stock
increased by $18 billion. Human champions studied Kasparov’s loss and were
able to draw a few matches in subsequent years, but the most recent human-
computer matches have been won convincingly by the computer.
• Spam fighting: Each day, learning algorithms classify over a billion messages as
spam, saving the recipient from having to waste time deleting what, for many
users, could comprise 80% or 90% of all messages, if not classified away by
algorithms. Because the spammers are continually updating their tactics, it is
difficult for a static programmed approach to keep up, and learning algorithms
work best (Sahami et al., 1998; Goodman and Heckerman, 2004).
• Logistics planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a
Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do
automated logistics planning and scheduling for transportation. This involved up
to 50,000 vehicles, cargo, and people at a time, and had to account for starting
points, destinations, routes, and conflict resolution among all parameters. The AI
planning techniques generated in hours a plan that would have taken weeks with
older methods. The Defense Advanced Research Project Agency (DARPA) stated
that this single application more than paid back DARPA’s 30-year investment in AI.
• Robotics: The iRobot Corporation has sold over two million Roomba robotic
vacuum cleaners for home use. The company also deploys the more rugged
PackBot to Iraq and Afghanistan, where it is used to handle hazardous materials,
clear explosives, and identify the location of snipers.
• Machine Translation: A computer program automatically translates from Arabic
to English, allowing an English speaker to see the headline “Ardogan Confirms
That Turkey Would Not Accept Any Pressure, Urging Them to Recognize Cyprus.”
The program uses a statistical model built from examples of Arabic-to-English
translations and from examples of English text totaling two trillion words (Brants
et al., 2007). None of the computer scientists on the team speak Arabic, but they
do understand statistics and machine learning algorithms.
