AI As Subset

[Diagram: ML and its branches: supervised, unsupervised, reinforcement, deep learning, and NLP]
Supervised Learning Algorithms:
1. Linear Regression
2. Logistic Regression
3. Decision Trees
4. Random Forest
5. Gradient Boosting Machines (GBM)
6. Support Vector Machines (SVM)
7. Naive Bayes
8. k-Nearest Neighbors (k-NN)
9. Neural Networks (Multi-layer Perceptron)
10. Gaussian Processes
11. Ensemble Methods (e.g., AdaBoost)
Unsupervised Learning Algorithms:
1. K-Means Clustering
2. Hierarchical Clustering
3. DBSCAN (Density-Based Spatial Clustering of Applications with Noise)
4. Mean Shift
5. Expectation-Maximization (EM) Algorithm
6. Principal Component Analysis (PCA)
7. Independent Component Analysis (ICA)
8. Autoencoders
9. Generative Adversarial Networks (GANs)
10. Self-Organizing Maps (SOM)
Reinforcement Learning Algorithms:
1. Q-Learning
2. Deep Q-Networks (DQN)
3. Policy Gradient Methods
4. Actor-Critic Methods
5. Monte Carlo Methods
6. Temporal Difference (TD) Learning
Deep Learning Algorithms:
1. Feedforward Neural Networks
2. Recurrent Neural Networks (RNNs)
3. Long Short-Term Memory (LSTM) Networks
4. Convolutional Neural Networks (CNNs)
5. Generative Adversarial Networks (GANs)
6. Transformer Models (BERT, GPT)
DEEP LEARNING
1. Autoencoders:
Variational Autoencoders (VAEs): Train VAEs to learn a probabilistic representation of normal
data and generate reconstructions. Anomalies are identified based on high reconstruction
errors.
Denoising Autoencoders: Train autoencoders to reconstruct clean data from noisy input.
Anomalies result in larger reconstruction errors due to their deviation from normal patterns.
Sparse Autoencoders: Use autoencoders with sparsity constraints to learn compact
representations of normal data. Outliers or anomalies often fail to be reconstructed accurately.
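A minimal sketch of the reconstruction-error idea, using Keras on synthetic data; the layer sizes, training epochs, and the 95th-percentile threshold are illustrative assumptions, not tuned values.

```python
import numpy as np
from tensorflow import keras

# Synthetic "normal" data (stand-in for a real dataset).
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(1000, 8)).astype("float32")
X_test = np.vstack([rng.normal(0.0, 1.0, size=(95, 8)),   # normal
                    rng.normal(6.0, 1.0, size=(5, 8))     # anomalous
                    ]).astype("float32")

# Small dense autoencoder: compress to a bottleneck, then reconstruct.
autoencoder = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(4, activation="relu"),    # bottleneck
    keras.layers.Dense(8, activation="linear"),  # reconstruction
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_train, X_train, epochs=20, batch_size=32, verbose=0)

# Score by per-sample reconstruction error; the cutoff is an assumption.
def recon_error(model, X):
    return np.mean((X - model.predict(X, verbose=0)) ** 2, axis=1)

threshold = np.percentile(recon_error(autoencoder, X_train), 95)
is_anomaly = recon_error(autoencoder, X_test) > threshold
print(f"flagged {is_anomaly.sum()} of {len(X_test)} test points")
```

The same scoring loop applies to the VAE, denoising, and sparse variants; only the model and training objective change.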
2. Recurrent Neural Networks (RNNs):
LSTM (Long Short-Term Memory) Networks: Apply LSTMs to model sequential data and detect
anomalies based on deviations from learned temporal patterns.
GRU (Gated Recurrent Unit) Networks: Use GRUs for sequence modeling and anomaly detection in
time-series or sequential data.
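A rough sketch of the LSTM approach on a synthetic univariate series: predict each step from the previous window and flag large prediction errors. The window length, layer size, and injected anomaly are assumptions for illustration.

```python
import numpy as np
from tensorflow import keras

# Synthetic series: a sine wave with one spike anomaly injected.
t = np.arange(600)
series = np.sin(0.1 * t).astype("float32")
series[500] += 5.0  # the anomaly

# Sliding windows: predict step i from the previous 20 steps.
W = 20
X = np.stack([series[i:i + W] for i in range(len(series) - W)])[..., None]
y = series[W:]

model = keras.Sequential([
    keras.Input(shape=(W, 1)),
    keras.layers.LSTM(16),
    keras.layers.Dense(1),  # next-step prediction
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

# Large prediction errors mark deviations from learned temporal patterns.
errors = np.abs(model.predict(X, verbose=0).ravel() - y)
print("most anomalous step:", W + int(np.argmax(errors)))
```

Swapping `keras.layers.LSTM` for `keras.layers.GRU` gives the GRU variant with no other changes.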
3. Generative Adversarial Networks (GANs):
Anomaly Detection GANs: Train GANs to generate realistic samples of normal data and detect anomalies based on the discriminator's ability to distinguish between real and generated data.
Conditional GANs: Condition GANs on normal data; samples that deviate significantly from the learned distribution indicate anomalies.
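One way to make the discriminator-score idea concrete is sketched below with a small unconditional GAN in TensorFlow; the architectures, training steps, and the use of the raw discriminator output as an anomaly score are all simplifying assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

latent_dim, n_features = 8, 4
rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(2000, n_features)).astype("float32")

G = keras.Sequential([keras.Input(shape=(latent_dim,)),
                      keras.layers.Dense(32, activation="relu"),
                      keras.layers.Dense(n_features)])
D = keras.Sequential([keras.Input(shape=(n_features,)),
                      keras.layers.Dense(32, activation="relu"),
                      keras.layers.Dense(1, activation="sigmoid")])
bce = keras.losses.BinaryCrossentropy()
g_opt, d_opt = keras.optimizers.Adam(1e-3), keras.optimizers.Adam(1e-3)

for step in range(1000):
    real = X_normal[rng.integers(0, len(X_normal), 64)]
    z = rng.normal(size=(64, latent_dim)).astype("float32")
    with tf.GradientTape() as tape:  # discriminator: real vs generated
        d_loss = (bce(tf.ones((64, 1)), D(real, training=True)) +
                  bce(tf.zeros((64, 1)), D(G(z, training=True), training=True)))
    d_opt.apply_gradients(zip(tape.gradient(d_loss, D.trainable_variables),
                              D.trainable_variables))
    z = rng.normal(size=(64, latent_dim)).astype("float32")
    with tf.GradientTape() as tape:  # generator: fool the discriminator
        g_loss = bce(tf.ones((64, 1)), D(G(z, training=True), training=True))
    g_opt.apply_gradients(zip(tape.gradient(g_loss, G.trainable_variables),
                              G.trainable_variables))

# Points the discriminator confidently rejects as "not real" are suspects.
X_test = np.vstack([X_normal[:5], np.full((1, n_features), 8.0)]).astype("float32")
scores = D(X_test, training=False).numpy().ravel()
print("discriminator scores (low = anomalous):", np.round(scores, 3))
```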
4. Deep Neural Networks (DNNs):
Deep Feedforward Networks: Utilize deep feedforward networks (multi-layer perceptrons) to learn complex mappings from input features to anomaly scores.
Deep Convolutional Networks (CNNs): Apply CNNs for image-based anomaly detection by learning hierarchical representations of normal visual patterns.
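A compact sketch of the feedforward case: a multi-layer perceptron trained on labeled examples to emit an anomaly score in [0, 1]. The synthetic data and network shape are assumptions.

```python
import numpy as np
from tensorflow import keras

# Labeled data: 0 = normal, 1 = anomalous (synthetic stand-in).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (950, 8)),
               rng.normal(5, 1, (50, 8))]).astype("float32")
y = np.array([0] * 950 + [1] * 50, dtype="float32")

# MLP mapping input features to an anomaly score in [0, 1].
mlp = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # anomaly score
])
mlp.compile(optimizer="adam", loss="binary_crossentropy")
mlp.fit(X, y, epochs=10, batch_size=32, verbose=0)

scores = mlp.predict(X[:3], verbose=0).ravel()  # per-sample anomaly scores
```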
Supervised learning
1. Logistic Regression:
Approach: Train a logistic regression classifier to learn
a decision boundary that separates normal instances
(labeled as 0) from anomalies (labeled as 1).
Usage: Suitable for binary classification tasks where
anomalies are defined based on labeled data.
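A minimal scikit-learn sketch of this approach; class_weight="balanced" is an added assumption to compensate for anomaly labels usually being rare.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Labeled data: 0 = normal, 1 = anomaly (synthetic stand-in).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (950, 4)), rng.normal(4, 1, (50, 4))])
y = np.array([0] * 950 + [1] * 50)

# class_weight="balanced" upweights the rare anomaly class.
clf = LogisticRegression(class_weight="balanced").fit(X, y)
probs = clf.predict_proba(X)[:, 1]  # probability of being anomalous
```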
2. Support Vector Machines (SVM):
Approach: Train an SVM with a nonlinear kernel to
learn a hyperplane that maximizes the margin
between normal and anomalous instances.
Usage: Effective for separating complex patterns and
handling non-linear decision boundaries.
3. Decision Trees:
Approach: Build decision trees that recursively split the feature space to classify
instances as normal or anomalous based on labeled data.
Usage: Useful for capturing non-linear relationships and interactions between
features.
4. Random Forest:
Approach: Ensemble of decision trees where each tree is trained on a subset of
features and instances. Anomalies can be identified based on outlier scores or
ensemble voting.
Usage: Robust and scalable method for detecting anomalies in high-dimensional
data.
5. Gradient Boosting Machines (GBM):
Approach: Train a gradient boosting model (e.g., XGBoost, LightGBM) to iteratively
improve the classification of normal and anomalous instances based on labeled
data.
Usage: Provides high accuracy and is effective in capturing complex relationships in
the data.
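A sketch using scikit-learn's GradientBoostingClassifier as a stand-in; XGBoost's XGBClassifier and LightGBM's LGBMClassifier expose a near-identical fit/predict interface.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (950, 6)), rng.normal(3, 1, (50, 6))])
y = np.array([0] * 950 + [1] * 50)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Boosting iteratively corrects the errors of earlier trees.
gbm = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
gbm.fit(X_tr, y_tr)
print("anomaly probabilities:", gbm.predict_proba(X_te[:3])[:, 1])
```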
6. Neural Networks (e.g., Multi-layer Perceptron):
Approach: Train a neural network with multiple layers to learn a mapping from
input features to anomaly labels.
Usage: Deep learning models can automatically learn hierarchical representations
and feature transformations for anomaly detection tasks.
7. Naive Bayes:
Approach: Use a Naive Bayes classifier to estimate the probability of an instance
being normal or anomalous based on labeled data and conditional independence
assumptions.
Usage: Simple and efficient method for probabilistic anomaly detection tasks.
8. Ensemble Methods:
Approach: Combine multiple supervised learning models (e.g., SVMs, decision trees)
using ensemble techniques such as bagging or boosting to improve anomaly
detection performance.
Usage: Enhances robustness and generalization by leveraging the diversity of
individual models.
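As one concrete instance of the ensemble idea, the sketch below soft-votes an SVM and a decision tree with scikit-learn; the member models and their settings are assumptions.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (950, 4)), rng.normal(4, 1, (50, 4))])
y = np.array([0] * 950 + [1] * 50)

# Soft voting averages the members' predicted probabilities.
ensemble = VotingClassifier(
    estimators=[("svm", SVC(kernel="rbf", probability=True)),
                ("tree", DecisionTreeClassifier(max_depth=5))],
    voting="soft",
)
ensemble.fit(X, y)
anomaly_prob = ensemble.predict_proba(X)[:, 1]
```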
unsupervised learning
1. Density-Based Methods:
Density Estimation: Use techniques like Gaussian Mixture Models
(GMM) or Kernel Density Estimation (KDE) to model the
probability distribution of normal data.
Anomaly Detection: Identify instances with low probability
densities (outliers) as anomalies.
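A short sketch of density-based scoring with a Gaussian Mixture Model in scikit-learn; the component count and the 5th-percentile cutoff are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, (1000, 2))                       # normal data only
X_test = np.vstack([rng.normal(0, 1, (9, 2)), [[6.0, 6.0]]])

# Fit a density model to normal data; score = log-likelihood per point.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X_train)
threshold = np.percentile(gmm.score_samples(X_train), 5)
is_anomaly = gmm.score_samples(X_test) < threshold  # low density = outlier
```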
2. Clustering Algorithms:
K-Means Clustering: Group data points into clusters and identify instances that do not belong to any cluster, or belong to small clusters, as anomalies.
DBSCAN (Density-Based Spatial Clustering of Applications with Noise): Detect outliers as data points that are not part of any dense region in the feature space.
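With DBSCAN, outliers fall out directly: points labeled -1 belong to no dense region. The eps and min_samples values below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (100, 2)),  # one dense cluster
               [[4.0, 4.0], [-4.0, 3.0]]])    # two isolated points

# Points unreachable from any dense region get the noise label -1.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
outliers = np.where(labels == -1)[0]
print("outlier indices:", outliers)
```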
3. Autoencoders:
Neural Network-based Approach: Train autoencoder models to
reconstruct normal data and measure reconstruction errors.
Anomaly Detection: Identify instances with high reconstruction
errors as anomalies, indicating deviations from learned normal
patterns.
4. Isolation Forest:
Tree-based Ensemble Method: Use isolation forest algorithms to
isolate anomalies efficiently by randomly partitioning the
feature space.
Anomaly Detection: Measure the average path length in isolation
trees to identify instances that require fewer partitions,
indicating anomalies.
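A minimal scikit-learn sketch; contamination (the expected anomaly fraction) is an assumption.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (980, 4)), rng.normal(6, 1, (20, 4))])

# Short average path lengths in the random trees mark easy-to-isolate points.
iso = IsolationForest(contamination=0.02, random_state=0).fit(X)
pred = iso.predict(X)          # -1 = anomaly, 1 = normal
scores = iso.score_samples(X)  # lower = more anomalous
```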
5. One-Class SVM (Support Vector Machines):
Boundary Learning: Train SVM models on normal data to learn a
boundary that encapsulates the normal behavior.
Anomaly Detection: Classify instances lying outside the learned
boundary as anomalies.
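A sketch with scikit-learn's OneClassSVM trained on normal data only; the RBF kernel and nu (an upper bound on the training outlier fraction) are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, (1000, 3))                            # normal only
X_test = np.vstack([rng.normal(0, 1, (9, 3)), [[5.0, 5.0, 5.0]]])

# Learn a boundary around normal behavior; points outside it are anomalies.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05).fit(X_train)
pred = ocsvm.predict(X_test)  # -1 = outside the boundary (anomaly)
```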
6. Statistical Approaches:
Z-Score or Standard Deviation: Calculate statistical measures
(e.g., Z-Score) to identify instances that deviate significantly
from the mean or standard deviation of normal data.
Anomaly Detection: Set thresholds based on statistical
properties to classify outliers as anomalies.
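The z-score rule takes only a few lines of NumPy; the 3-standard-deviation threshold is a common convention, not a fixed rule.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(10, 2, 500), [25.0]])  # one extreme value

# Flag points more than 3 standard deviations from the mean.
z = (x - x.mean()) / x.std()
anomalies = np.where(np.abs(z) > 3)[0]
print("anomalous indices:", anomalies)
```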
