AI_ML

The document outlines various types of Artificial Intelligence (AI) classified by capabilities, functionality, and applications, including Narrow AI, General AI, and Superintelligent AI. It also discusses Machine Learning (ML) as a subset of AI, detailing its types (supervised, unsupervised, and reinforcement learning), key concepts, challenges, and common algorithms. Additionally, it highlights the applications of ML in everyday life and introduces Deep Learning as an advanced form of ML using neural networks.

An outline of the types of Artificial Intelligence (AI):

1. Based on Capabilities: Narrow AI, General AI, Superintelligent AI

2. Based on Functionality: Reactive Machines, Limited Memory, Theory of Mind, Self-Aware AI

3. Based on Application: ANI, AGI, ASI

===================================================
Artificial Intelligence (AI) can be classified into different types based on
its capabilities and the tasks it is designed to perform.

1. Based on Capabilities:
a) Narrow AI (Weak AI)
●​ Definition: Narrow AI refers to AI systems designed to perform
specific tasks.
●​ These systems are highly specialized and excel in one domain, but
they do not possess general cognitive abilities or awareness.
●​ They are capable of performing a single, well-defined task with a
high level of efficiency.
●​ Examples:
○​ Virtual assistants (like Siri, Alexa)
○​ Image recognition software
○​ Recommendation algorithms (e.g., Netflix or Amazon
suggestions)
○​ Self-driving cars (in specific conditions)
●​ Characteristics:
○​ Task-specific
○​ Does not possess awareness, self-consciousness, or reasoning
outside its specific task
○​ Cannot adapt to new tasks outside its predefined domain

UNIT 1 INTRODUCTION OF AI (NMK)


b) General AI (Strong AI)
●​ Definition: General AI refers to hypothetical AI systems that
possess the ability to understand, learn, and apply intelligence
across a wide range of tasks, much like a human. It would have
cognitive abilities comparable to those of humans, enabling it to
reason, learn, and apply knowledge across diverse domains.
●​ Examples: As of now, General AI does not exist. It's a concept in
research and development, often depicted in science fiction.
●​ Characteristics:
○​ Can perform any intellectual task a human can do
○​ Flexible and adaptable across various domains
○​ Can reason, understand abstract concepts, learn from
experiences, and apply knowledge in diverse situations
○​ Requires self-awareness, consciousness, and emotional
intelligence
c) Superintelligent AI
●​ Definition: Superintelligent AI would surpass human intelligence in
every field, including creativity, general wisdom, problem-solving,
and social intelligence. This type of AI could theoretically
outperform the best human minds in all economically valuable
work.
●​ Examples: This is a theoretical concept and has not been realized.
●​ Characteristics:
○​ Extremely advanced and beyond human intelligence
○​ Could outperform human experts in every field
○​ Associated with significant risks (such as loss of control or
ethical dilemmas) due to its intelligence level
○​ Potential for rapid, autonomous development

2. Based on Functionality (Approaches):
a) Reactive Machines
●​ Definition: These AI systems can only react to the current situation
and do not retain memory of past experiences. They are
task-specific and operate based on pre-programmed instructions
or learned patterns.
●​ Examples:
○​ IBM's Deep Blue, which defeated the world chess champion
Garry Kasparov. It could evaluate moves but had no memory
of past games or the ability to learn from them.
●​ Characteristics:
○​ Task-specific and lacks long-term memory
○​ Cannot improve or adapt outside of pre-set rules or learning
○​ No ability to store past experiences to influence current
decisions
b) Limited Memory
●​ Definition: These AI systems can look at past experiences or data
to make decisions and improve their performance. Unlike reactive
machines, limited memory AI can store previous information and
learn from it, but their memory is often limited and task-specific.
●​ Examples:
○​ Self-driving cars that use past data (e.g., road conditions,
previous trips) to inform current driving decisions.
●​ Characteristics:
○​ Retains data and uses it to improve decision-making in similar
future situations
○​ Can learn from past experiences but still has constraints on
memory and scope
○​ Used in more advanced AI systems like autonomous vehicles
and virtual assistants

c) Theory of Mind (Future AI)
●​ Definition: This type of AI would possess the ability to understand
emotions, intentions, and the mental states of others, similar to
human cognitive abilities. It would enable AI systems to interact
with humans in a more empathetic and contextually aware
manner.
●​ Examples: Currently, Theory of Mind AI is a theoretical concept
and does not yet exist in practical applications.
●​ Characteristics:
○​ Would be capable of understanding human emotions, beliefs,
and desires
○​ Could communicate and interact with humans on a deeper,
more intuitive level
○​ Would require advanced reasoning and social skills
d) Self-Aware AI (Ultimate Form)
●​ Definition: This is the most advanced type of AI, where the system
has self-awareness, consciousness, and can understand its own
existence. It would be capable of forming its own desires, goals,
and intentions, possibly making decisions autonomously and
beyond human control.
●​ Examples: Like General AI, self-aware AI is currently a concept and
does not exist.
●​ Characteristics:
○​ Consciousness and self-awareness
○​ Ability to understand and reflect upon its own actions,
motivations, and goals
○​ Could possess emotional intelligence, make independent
decisions, and have moral reasoning

3. Based on Applications and Domains:
a) Artificial Narrow Intelligence (ANI)
●​ Definition: This is another term for Narrow AI, which excels in
specific tasks but lacks general intelligence.
●​ Examples:
○​ Chatbots (e.g., customer service bots)
○​ Predictive text (e.g., autocorrect)
○​ Facial recognition systems
b) Artificial General Intelligence (AGI)
●​ Definition: Also called Strong AI, it refers to AI that can learn and
reason across multiple domains, just like a human. AGI would
understand abstract concepts, learn from diverse experiences, and
apply that knowledge in new, different contexts.
●​ Examples: As of now, AGI does not exist.
c) Artificial Superintelligence (ASI)
●​ Definition: This is a level of intelligence far surpassing human
cognition in all aspects, from creativity to social interaction. ASI
would have the capacity to improve itself, making it potentially
dangerous if not carefully controlled.
●​ Examples: ASI is speculative at this point and is the subject of
much debate among AI researchers.

https://www.tutorialspoint.com/machine_learning/index.htm

Machine Learning (ML):


Machine Learning is a type of Artificial Intelligence where computers
learn from data, rather than being explicitly programmed.
It means that instead of telling a computer exactly how to perform a
task, we give it examples, and the machine figures out the best way to
do the task on its own.
●​ Example: Imagine you're teaching a computer to recognize
pictures of cats. You give it thousands of pictures, some with cats
and some without. Over time, the computer learns to recognize
patterns in the pictures, like the shape of ears or the whiskers, that
help it identify a cat. It doesn’t need you to program every specific
detail—just the examples, and the computer learns from them.

Key Idea: Machine Learning enables systems to learn and improve from
experience without being explicitly programmed for each task.

Machine Learning (ML) Explained in Detail


Machine Learning (ML) is a field of Artificial Intelligence (AI) that
enables computers to learn from data and improve from experience
without being explicitly programmed for specific tasks.
Instead of being manually instructed on how to perform a task, machine
learning systems are given data and use statistical methods to infer
patterns, make predictions, or take actions based on that data.
The core idea behind machine learning is that the system automatically
learns from examples or experiences, and the more data it is provided,
the better its performance typically becomes over time.

Key Concepts of Machine Learning
1. Learning from Data
In traditional software development, a programmer writes explicit
instructions for every task a computer needs to perform.
In machine learning, the focus is on teaching the computer to
automatically discover patterns in data and use those patterns to make
decisions or predictions.
●​ Example: To recognize images of cats, instead of writing code to
explicitly identify every feature of a cat, you provide the machine
with thousands of images of cats (positive examples) and images of
other objects (negative examples). The machine then uses this
data to learn what characteristics (such as shape, color, and size)
are common in cat images.
2. Types of Machine Learning
Machine learning can be broadly categorized into three types based on
the way the model learns from the data.
a) Supervised Learning
Supervised learning is the most common type of machine learning. In
this approach, the machine learns from labeled data. "Labeled data"
means that for each input example, the correct output (label) is already
provided.
●​ How it works:
○​ You provide the model with a set of input-output pairs (the
data is labeled with the correct answer).
○​ The model learns to map the input to the correct output.

○​ After training, the model can make predictions on new,
unseen data.
●​ Example: Email spam detection.
○​ Input: An email message with features such as the subject
line, sender, and content.
○​ Output (Label): "Spam" or "Not Spam"
○​ The machine learns which features in the email (like certain
words or patterns) indicate whether the email is spam or not.
●​ Common Algorithms: Linear Regression, Logistic Regression,
Decision Trees, Support Vector Machines (SVM), K-Nearest
Neighbors (KNN).
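The labeled-data idea can be sketched in a few lines of Python. The example emails, the word list, and the word-overlap scoring rule below are all invented for illustration; a real spam filter would use a proper algorithm such as Logistic Regression or Naive Bayes on far richer features:

```python
# Toy supervised "spam filter": learn word frequencies from labeled
# examples, then score new messages by vocabulary overlap.
from collections import Counter

train = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda for monday", "not spam"),
    ("lunch with the team", "not spam"),
]

# "Training": count how often each word appears under each label.
counts = {"spam": Counter(), "not spam": Counter()}
for text, label in train:
    counts[label].update(text.split())

def predict(text):
    # Score a new message by which label's vocabulary it overlaps more.
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)

print(predict("claim your free prize"))    # "spam": spam-like words dominate
print(predict("team meeting on monday"))   # "not spam"
```

The point is only that the classification rule is derived from the labeled examples, not hand-written by the programmer.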
b) Unsupervised Learning
Unsupervised learning, unlike supervised learning, does not require
labeled data. The goal here is to find hidden patterns or structures in the
input data.
●​ How it works:
○​ The model is given data without any specific output labels.
○​ The algorithm tries to learn the inherent structure of the data
by finding clusters or groupings.
●​ Example: Customer segmentation in marketing.
○​ Input: Customer data, such as age, income, buying habits,
etc.
○​ The model groups customers into different segments based
on similar characteristics without knowing beforehand how
many segments there should be.
●​ Common Algorithms: K-means Clustering, Hierarchical Clustering,
Principal Component Analysis (PCA), Association Rule Learning.
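A minimal one-dimensional version of K-means shows the assign-then-update loop at the heart of clustering; the income figures and the choice of k = 2 are made-up assumptions for illustration:

```python
# Minimal 1-D k-means sketch: group customers by annual income alone.
incomes = [21, 23, 25, 88, 90, 95]
centroids = [incomes[0], incomes[-1]]  # naive initialization

for _ in range(10):  # a few assignment/update rounds
    clusters = [[], []]
    for x in incomes:
        # assign each point to its nearest centroid
        nearest = min(range(2), key=lambda i: abs(x - centroids[i]))
        clusters[nearest].append(x)
    # move each centroid to the mean of its assigned points
    centroids = [sum(c) / len(c) for c in clusters]

print(clusters)    # [[21, 23, 25], [88, 90, 95]]
print(centroids)   # [23.0, 91.0]
```

Note that no labels were provided: the two "segments" emerge purely from the structure of the data.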
c) Reinforcement Learning
In reinforcement learning, an agent learns by interacting with an
environment and receiving feedback in the form of rewards or

UNIT 1 INTRODUCTION OF AI​ ​ ​ ​ ​ ​ ​ NMK


punishments.
●​ How it works:
○​ The agent takes actions in the environment and receives
feedback (rewards or penalties) based on those actions.
○​ The goal is to maximize the cumulative reward over time by
learning which actions lead to better outcomes.
●​ Example: A robot learning to navigate a maze.
○​ Action: The robot moves through the maze.
○​ Reward: The robot receives a positive reward when it moves
closer to the goal and a negative penalty if it hits an obstacle.
○​ Over time, the robot learns the best path to reach the goal.
●​ Common Algorithms: Q-Learning, Deep Q-Networks (DQN),
Proximal Policy Optimization (PPO), Monte Carlo methods.
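The reward-driven update can be sketched with tabular Q-learning on an invented five-state corridor (goal at the right end). For simplicity, the sketch sweeps every state-action pair instead of running epsilon-greedy episodes, which is a deterministic stand-in for the usual exploration:

```python
# Tabular Q-learning sketch: states 0..4, goal at state 4.
# Reaching the goal pays +1; every other step pays 0.
n_states, actions = 5, [-1, +1]   # actions: step left, step right
alpha, gamma = 0.5, 0.9           # learning rate, discount factor
Q = [[0.0, 0.0] for _ in range(n_states)]

def step(s, a):
    s2 = max(0, min(n_states - 1, s + actions[a]))
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward

for _ in range(50):
    for s in range(n_states - 1):       # state 4 is terminal
        for a in range(2):
            s2, r = step(s, a)
            # Q-learning update: move Q[s][a] toward reward + discounted best future value
            target = r if s2 == n_states - 1 else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])

policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(n_states - 1)]
print(policy)  # [1, 1, 1, 1]: always move right, toward the goal
```

The learned policy prefers "right" in every state because rightward actions lead, sooner or later, to the reward.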

3. Training the Model


In machine learning, the process of teaching a model is called training.
This involves feeding the model large amounts of data and adjusting the
model's parameters to minimize errors and improve accuracy.
●​ Training Data: A dataset used to teach the model. It includes
examples (input-output pairs in supervised learning or just input
data in unsupervised learning).
●​ Testing Data: A separate dataset used to evaluate the model's
performance after training. This helps assess how well the model
generalizes to new, unseen data.
●​ Validation Data: A subset of data used to tune hyperparameters
and prevent overfitting.
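The three-way split above can be expressed directly. The 60/20/20 proportions below are a common convention, not a fixed rule, and real code would shuffle the data first:

```python
# Sketch of a train/validation/test split (60/20/20) on placeholder data.
data = list(range(100))          # 100 labeled examples (placeholder)
n = len(data)

train = data[: int(0.6 * n)]              # 60%: fit the model
val   = data[int(0.6 * n): int(0.8 * n)]  # 20%: tune hyperparameters
test  = data[int(0.8 * n):]               # 20%: final, held-out evaluation

print(len(train), len(val), len(test))    # 60 20 20
```

The essential property is that the three subsets are disjoint, so the test score measures generalization rather than memorization.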

4. Overfitting and Underfitting
Two common challenges in machine learning are overfitting and
underfitting:
●​ Overfitting: This occurs when the model learns the training data
too well, including the noise and small fluctuations, making it
overly complex and unable to generalize to new data. It performs
well on training data but poorly on testing data.
○​ Solution: Regularization techniques like L2 regularization,
reducing the model's complexity, or using more training data.
●​ Underfitting: This happens when the model is too simple and fails
to capture important patterns in the data, leading to poor
performance both on training and testing data.
○​ Solution: Use a more complex model or provide more
meaningful features to the model.
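A caricature in pure Python makes both failure modes concrete. The data (roughly y = 2x) and the two extreme "models" below are invented for illustration: a lookup table that memorizes every training point (overfitting taken to its limit) and a constant mean predictor (underfitting):

```python
# Overfitting vs underfitting, reduced to caricature.
train_x = [1, 2, 3, 4]
train_y = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x with noise

memorizer = dict(zip(train_x, train_y))   # "overfit": stores every point exactly
mean_y = sum(train_y) / len(train_y)      # "underfit": one number for every x

def err(pred, xs, ys):
    # mean absolute error of a prediction function over a dataset
    return sum(abs(pred(x) - y) for x, y in zip(xs, ys)) / len(xs)

print(err(lambda x: memorizer[x], train_x, train_y))  # 0.0 on training data
print(err(lambda x: mean_y, train_x, train_y))        # large even on training data

# On a new input (x = 5, true y ≈ 10) the memorizer has no answer at all:
print(memorizer.get(5))  # None: perfect training accuracy did not generalize
```

A fitted line y ≈ 2x, sitting between these two extremes, would do well on both the training points and the new one.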
5. Features and Labels
●​ Features: The input variables or attributes used to make
predictions. For example, in a dataset of houses, features might
include the number of bedrooms, square footage, and location.
●​ Labels: The output or target variable that the model is trying to
predict. In supervised learning, labels are the known values used to
teach the model. For example, in a house price prediction model,
the label would be the price of the house.
6. Evaluation Metrics
To assess the performance of a machine learning model, various
evaluation metrics are used depending on the problem type (e.g.,
classification or regression).
●​ Classification Metrics: Accuracy, Precision, Recall, F1-Score,
Confusion Matrix, AUC-ROC curve.
●​ Regression Metrics: Mean Squared Error (MSE), Mean Absolute
Error (MAE), R-squared.
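The classification metrics reduce to simple counts over the confusion matrix; the labels below are invented (1 = positive class, 0 = negative class):

```python
# Computing accuracy, precision, recall and F1 by hand.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives

accuracy  = (tp + tn) / len(y_true)
precision = tp / (tp + fp)   # of everything predicted positive, how much was right
recall    = tp / (tp + fn)   # of everything actually positive, how much was found
f1        = 2 * precision * recall / (precision + recall)

print(accuracy, precision, recall, f1)
```

Precision and recall pull in opposite directions, which is why the F1-score (their harmonic mean) is often reported alongside accuracy.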

Common Machine Learning Algorithms


Here are some well-known algorithms used in machine learning:
1.​ Linear Regression: A simple algorithm used for predicting
continuous values (e.g., house prices).
2.​ Logistic Regression: Used for binary classification tasks (e.g.,
determining whether an email is spam or not).
3.​ Decision Trees: A tree-like structure that makes decisions by
splitting data based on features.
4.​ K-Nearest Neighbors (KNN): A classification algorithm that
classifies a new point based on the majority label of its neighbors.
5.​ Support Vector Machines (SVM): A powerful classifier that finds
the optimal hyperplane to separate data into classes.
6.​ Neural Networks: Algorithms inspired by the human brain, used
for tasks like image recognition and speech processing.
7.​ Random Forests: An ensemble method using many decision trees
to improve prediction accuracy.
8.​ K-Means Clustering: An unsupervised learning algorithm used to
group data points into clusters based on similarity.
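As a taste of how simple some of these algorithms can be, here is a one-nearest-neighbour classifier (KNN with k = 1) on made-up 2-D points; real KNN would vote over the k closest neighbours:

```python
# 1-NN: classify a new point by the label of its closest training point.
points = [((1, 1), "A"), ((2, 1), "A"), ((8, 8), "B"), ((9, 7), "B")]

def classify(p):
    # squared Euclidean distance is enough for comparing neighbours
    dist = lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(points, key=lambda pair: dist(pair[0]))[1]

print(classify((0, 2)))  # "A": nearest to the (1, 1) cluster
print(classify((7, 9)))  # "B": nearest to the (8, 8) cluster
```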

Applications of Machine Learning
Machine learning is already being used in many aspects of our daily
lives:
●​ Email Filtering: Automatically sorting emails into categories like
inbox, spam, or promotions.
●​ Recommendation Systems: Platforms like Netflix, Amazon, and
YouTube use ML to suggest content based on user preferences.
●​ Speech Recognition: Used in virtual assistants like Siri and Alexa to
recognize and respond to voice commands.
●​ Image Recognition: Identifying objects, faces, or text within images
(e.g., Google Photos).
●​ Healthcare: Diagnosing diseases from medical images, predicting
patient outcomes, and recommending treatments.

Deep Learning
Deep Learning is a subset of Machine Learning (ML) that uses
algorithms inspired by the structure and function of the human brain,
known as neural networks, to model and solve complex tasks such as
image recognition, natural language processing, and autonomous
driving.
While Machine Learning involves learning from data to make predictions
or decisions, Deep Learning takes this further by using deep neural
networks with multiple layers to automatically extract patterns and
features from raw data, making it suitable for more complex tasks that
involve large amounts of unstructured data.

Neural networks are machine learning models that mimic the complex
functions of the human brain. These models consist of interconnected
nodes or neurons that process data, learn patterns, and enable tasks
such as pattern recognition and decision-making.

Core Concepts of Deep Learning
1. Neural Networks: The Foundation of Deep Learning
At the heart of deep learning is the neural network, which is a
computational model inspired by the way biological neurons work in the
human brain. These networks consist of nodes (neurons) and layers that
process data to make predictions or decisions.
●​ Neuron: A basic computational unit in a neural network. It receives
input, processes it, and passes the output to the next layer.
●​ Weights and Biases: Neurons are connected through "weights"
that determine the strength of the connection. Biases help adjust
the output of neurons.
●​ Activation Function: After processing the input, an activation
function determines whether the neuron should be activated or
not, deciding the output of each neuron.
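A single neuron is just a weighted sum, a bias, and an activation function; the input, weight, and bias values below are arbitrary illustration choices, with a sigmoid as the activation:

```python
# One artificial neuron: weighted sum of inputs + bias, through a sigmoid.
import math

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias  # weighted sum
    return 1 / (1 + math.exp(-z))                           # sigmoid activation

out = neuron([1.0, 0.5], weights=[0.4, -0.6], bias=0.1)
print(out)  # a value between 0 and 1
```

A layer is many such neurons applied to the same inputs, and a network is layers stacked so each layer's outputs become the next layer's inputs.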
2. Layers in a Neural Network
Neural networks are organized into layers:
●​ Input Layer: The layer that receives the input data.
●​ Hidden Layers: Intermediate layers where the actual processing
and learning occur. Deep learning networks typically have many
hidden layers, hence the term "deep" in Deep Learning.
●​ Output Layer: The final layer that produces the prediction or
decision based on the learned features from the input data.
3. Deep Neural Networks (DNN)
A Deep Neural Network (DNN) is a neural network with many hidden
layers. The depth (number of layers) allows the network to learn and
represent more complex relationships in the data, especially when
dealing with large and high-dimensional datasets.
●​ Shallow Networks: Networks with only one or two hidden layers
are considered shallow and may not capture complex patterns.
●​ Deep Networks: Networks with many hidden layers can learn
intricate features and hierarchies of patterns, enabling them to
perform tasks like image and speech recognition.

4. How Deep Learning Models Learn


Deep learning models are trained using data through a process called
backpropagation and gradient descent:
●​ Backpropagation: This algorithm adjusts the weights in the
network based on the error or difference between the predicted
output and the actual output.
●​ Gradient Descent: A method used to minimize the error by
adjusting the weights. The network calculates the gradient (slope)
of the error and moves in the direction that reduces the error.
This training process involves feeding data into the model, adjusting
weights, and refining predictions iteratively until the model achieves an
acceptable level of accuracy.
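Gradient descent on a one-parameter toy error function shows the core loop; the function (w - 3)**2 and the learning rate are illustrative choices, and backpropagation is "only" the bookkeeping that computes this gradient for every weight in a deep network:

```python
# Gradient descent on a one-parameter "network": minimize (w - 3)**2,
# whose gradient is 2*(w - 3). The minimum is at w = 3.
w = 0.0               # initial weight
learning_rate = 0.1

for step in range(100):
    grad = 2 * (w - 3)         # slope of the error at the current weight
    w -= learning_rate * grad  # move against the slope

print(round(w, 4))  # ≈ 3.0
```

Each iteration moves the weight a small step downhill on the error surface; too large a learning rate overshoots, too small a one crawls.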
5. Deep Learning Architectures
There are several specialized deep learning architectures designed for
different types of tasks, particularly those involving unstructured data
like images, audio, and text.
●​ Convolutional Neural Networks (CNNs):
○​ CNNs are primarily used for tasks related to image and video
recognition, object detection, and image segmentation.
○​ Convolution Layers: These layers perform a mathematical
operation (convolution) to detect patterns like edges, shapes,
and textures in images.
○​ Pooling Layers: Reduce the spatial dimensions (size) of the
data, making the model more efficient.
○​ CNNs are highly effective because they automatically detect
hierarchical patterns in visual data, from edges to complex
objects.

●​ Recurrent Neural Networks (RNNs):


○​ RNNs are designed for tasks involving sequential data, such
as natural language processing (NLP), speech recognition, and
time-series prediction.
○​ Memory Cells: RNNs have feedback connections that allow
them to "remember" previous inputs, making them useful for
sequences like text or audio.
○​ Long Short-Term Memory (LSTM): A type of RNN that
improves the model's ability to remember long-term
dependencies in data by preventing the vanishing gradient
problem.
●​ Generative Adversarial Networks (GANs):
○​ GANs are used for generative tasks, such as creating new
images, videos, and even music.
○​ Two Networks: GANs consist of two networks, a generator
that creates data and a discriminator that evaluates the data.
They are trained together, with the generator attempting to
create realistic data while the discriminator tries to
distinguish between real and fake data.
○​ GANs have shown remarkable results in generating realistic
images and videos, and they are often used in creative
applications.
●​ Transformer Networks:
○​ Transformer architectures are primarily used in natural
language processing tasks, including translation,
summarization, and question-answering.
○​ Self-Attention Mechanism: This mechanism allows the model
to focus on different parts of the input sequence
simultaneously, making it more efficient and capable of
understanding the context of words or tokens in a sentence.
○​ BERT and GPT: Modern transformer-based models like BERT
(Bidirectional Encoder Representations from Transformers)
and GPT (Generative Pre-trained Transformer) have set new
records in NLP tasks, improving tasks like language
understanding and generation.
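A stripped-down dot-product self-attention step can be written in plain Python. The three token vectors below are hand-picked, and each token serves as its own query, key, and value; real transformers add learned query/key/value projections, multiple heads, and scaling by the square root of the dimension:

```python
# Bare-bones self-attention for a 3-token sequence of 2-dim vectors.
import math

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

def attend(i):
    # scores: how strongly token i matches every token (itself included)
    scores = [sum(a * b for a, b in zip(tokens[i], t)) for t in tokens]
    weights = softmax(scores)
    # output: attention-weighted average of all token vectors
    out = [sum(w * t[d] for w, t in zip(weights, tokens)) for d in range(2)]
    return weights, out

weights, out = attend(0)
print([round(w, 3) for w in weights])  # attention weights sum to 1
```

Token 0 attends most to the tokens it overlaps with, which is the "focus on different parts of the input" idea in miniature.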

6. Training Deep Learning Models


Deep learning models require large amounts of labeled data to train
effectively. The training process involves:
●​ Data Preprocessing: Preparing the data by cleaning, normalizing,
and augmenting it to improve model performance.
●​ Model Architecture: Deciding on the structure of the neural
network (e.g., the number of layers and neurons).
●​ Hyperparameters: Tuning parameters like the learning rate, batch
size, and the number of epochs (iterations of training).
●​ Loss Function: A function that calculates the error between the
predicted output and the actual output. The model tries to
minimize this error during training.
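For a regression-style output, a loss function such as mean squared error reduces to a line of code; the target and predicted values below are invented:

```python
# Two common regression losses computed by hand on toy numbers.
y_true = [3.0, 5.0, 2.0]
y_pred = [2.5, 5.0, 4.0]

# mean squared error: average of squared differences (punishes big misses)
mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
# mean absolute error: average of absolute differences
mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse, mae)
```

Training repeatedly evaluates a loss like this over batches of data and nudges the weights (via gradient descent) to shrink it.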

7. Challenges in Deep Learning

Deep learning, while powerful, faces several challenges:
●​ Data Requirements: Deep learning models require vast amounts of
labeled data to perform well, which may not always be available.
●​ Computational Resources: Deep learning models can be
computationally intensive, requiring powerful hardware like GPUs
(Graphics Processing Units) or TPUs (Tensor Processing Units).
●​ Interpretability: Deep learning models, especially deep neural
networks, are often considered "black boxes" because it is difficult
to understand how they make decisions or predictions.
●​ Overfitting: Deep networks are prone to overfitting if not properly
regularized or trained with enough data, meaning they perform
well on training data but poorly on unseen data.

Applications of Deep Learning
Deep learning has revolutionized many fields and is particularly effective
in solving complex problems that involve unstructured data. Some of the
major applications include:
●​ Computer Vision:
○​ Image recognition: Classifying images (e.g., detecting objects
or faces).
○​ Image segmentation: Dividing an image into segments for
better analysis.
○​ Object detection: Identifying and localizing objects in an
image (e.g., detecting cars or pedestrians in traffic).
○​ Facial recognition: Identifying or verifying individuals based
on facial features.
●​ Natural Language Processing (NLP):
○​ Machine translation: Automatically translating text between
languages (e.g., Google Translate).
○​ Text summarization: Condensing long pieces of text into
concise summaries.

○​ Sentiment analysis: Determining the sentiment (positive,
negative, neutral) of a text or speech.
○​ Chatbots: Engaging in conversations with users, as seen in
virtual assistants like Siri, Alexa, and Google Assistant.
●​ Speech Recognition:
○​ Converting speech into text (e.g., voice-activated assistants
like Siri or Alexa).
○​ Voice-based control systems in applications and devices.
●​ Healthcare:
○​ Medical image analysis: Detecting diseases such as tumors or
abnormalities in medical images.
○​ Drug discovery: Identifying new potential drugs by analyzing
chemical compounds and molecular structures.
●​ Autonomous Vehicles:
○​ Self-driving cars use deep learning to understand their
environment, make decisions, and navigate safely without
human intervention.
●​ Gaming and Entertainment:
○​ Game AI: Deep learning is used to create more intelligent and
adaptive AI for video games.
○​ Content generation: AI-generated artwork, music, and even
video content.

Summary:
●​ Machine Learning: Teaches machines to learn from examples and
make predictions based on data.
●​ Deep Learning: A more complex form of machine learning, which
uses neural networks to process data and make decisions,
especially for difficult tasks like recognizing voices or images.
In short, machine learning is about learning from data, and deep
learning is a more advanced approach that learns from data using
models inspired by the brain.

Summary
Deep Learning is a powerful subset of Machine Learning that uses neural
networks with multiple layers (hence "deep") to automatically learn
features from raw data and perform complex tasks like image
recognition, natural language processing, and more.

Through architectures like CNNs, RNNs, GANs, and Transformers, deep
learning has achieved remarkable success in fields ranging from
healthcare to autonomous driving, but it also comes with challenges,
such as the need for vast amounts of data and significant computational
power.
Despite these challenges, deep learning continues to drive innovation in
many areas of technology.
