
Machine learning models are broadly categorized by how they learn from data, their objectives, and the kinds of tasks they perform. Here’s a detailed look at the different types of ML models:

1. Supervised Learning

• Overview: Supervised learning models learn from labeled data, where each input has a corresponding output label. The model maps inputs to outputs and adjusts its internal parameters to minimize the difference between its predictions and the actual labels.
• Applications: Used in tasks where the outcome is known, such as email spam detection, house price prediction, and image classification.
• Types of Supervised Learning Models (a minimal code sketch follows this list):
o Classification: Predicts categorical labels. Examples:
▪ Logistic Regression: Used for binary or multiclass classification; outputs class probabilities.
▪ Support Vector Machines (SVM): Find the optimal boundary (hyperplane) between classes.
▪ Decision Trees: Create tree-based rules to classify data points.
▪ K-Nearest Neighbors (KNN): Classifies a point based on the majority label among its neighboring data points.
o Regression: Predicts continuous values. Examples:
▪ Linear Regression: Predicts continuous values by fitting a line that minimizes error.
▪ Ridge and Lasso Regression: Linear regression with regularization to reduce overfitting.
▪ Random Forest Regressor: An ensemble of decision trees that improves accuracy by averaging their results.
▪ Neural Networks (NNs): Can perform both classification and regression, depending on the architecture and output layer.
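A minimal sketch of the supervised workflow above, assuming scikit-learn is installed; the Iris dataset, the choice of logistic regression, and the split ratio are illustrative, not prescriptive:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # Labeled data: each input (flower measurements) has a known class label.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # Fit by adjusting parameters to minimize the gap between
    # predictions and the actual labels.
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    # Evaluate on held-out labeled data.
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))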

2. Unsupervised Learning

• Overview: In unsupervised learning, the model learns from data without labeled outputs. It tries to identify patterns, relationships, or groupings within the data, often by reducing the data’s dimensionality or grouping similar data points.
• Applications: Used for clustering customer segments, anomaly detection, and market basket analysis.
• Types of Unsupervised Learning Models (a code sketch follows this list):
o Clustering: Groups data points into clusters based on similarity. Examples:
▪ K-Means Clustering: Divides data into K clusters based on distance from each cluster centroid.
▪ Hierarchical Clustering: Builds a tree of clusters, allowing for a hierarchy of groupings.
▪ DBSCAN (Density-Based Spatial Clustering): Clusters data based on density; effective for arbitrarily shaped clusters.
o Dimensionality Reduction: Reduces the number of variables in the dataset. Examples:
▪ Principal Component Analysis (PCA): Converts correlated features into uncorrelated principal components.
▪ t-SNE (t-Distributed Stochastic Neighbor Embedding): Reduces dimensions while preserving the structure of the high-dimensional data; used for visualization.
▪ Autoencoders: Neural networks that learn efficient data encodings; useful in anomaly detection.
o Association Rule Learning: Finds rules that describe large portions of the data. Examples:
▪ Apriori Algorithm: Identifies frequently occurring item sets in transactional datasets.
▪ Eclat Algorithm: Another association rule technique, focusing on intersections of item sets.
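A minimal sketch combining clustering and dimensionality reduction, assuming scikit-learn; the synthetic blobs and the choice of 3 clusters are illustrative:

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.decomposition import PCA

    # Unlabeled data: no target labels are used.
    X, _ = make_blobs(n_samples=300, centers=3, n_features=10, random_state=0)

    # Clustering: assign each point to one of 3 clusters by centroid distance.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_[:10])   # cluster assignment per point

    # Dimensionality reduction: project 10 correlated features onto
    # 2 uncorrelated principal components.
    X_2d = PCA(n_components=2).fit_transform(X)
    print(X_2d.shape)            # (300, 2)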

3. Semi-Supervised Learning

• Overview: Semi-supervised learning falls between supervised and unsupervised learning, using a mix of labeled and unlabeled data. It is especially useful when labeling data is expensive or time-consuming.
• Applications: Image recognition with few labeled images and vast unlabeled ones, or medical diagnostics where labeled data is scarce.
• Common Approaches (a code sketch follows this list):
o Self-Training Models: A model is first trained on the labeled data, then used to label the unlabeled data and retrain itself.
o Generative Models: Models like GANs (Generative Adversarial Networks) can generate synthetic labeled data to supplement real labeled data.
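A minimal self-training sketch, assuming scikit-learn’s SelfTrainingClassifier; the Iris data, the 80% label-removal rate, and the 0.9 confidence threshold are illustrative:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.semi_supervised import SelfTrainingClassifier

    X, y = load_iris(return_X_y=True)

    # Pretend most labels are missing: scikit-learn marks
    # unlabeled points with -1.
    rng = np.random.RandomState(0)
    y_partial = np.copy(y)
    y_partial[rng.rand(len(y)) < 0.8] = -1

    # Train on the labeled points, then iteratively pseudo-label
    # confident unlabeled points and retrain.
    model = SelfTrainingClassifier(LogisticRegression(max_iter=1000),
                                   threshold=0.9)
    model.fit(X, y_partial)
    print("pseudo-labeled points:", (model.labeled_iter_ > 0).sum())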

4. Reinforcement Learning (RL)

• Overview: Reinforcement learning models learn by interacting with an environment and receiving feedback in the form of rewards or penalties. The model learns to maximize cumulative reward by taking a series of actions that lead to desired outcomes.
• Applications: Autonomous driving, robotics, game AI, and portfolio management.
• Key Concepts in RL:
o Agent: The entity making decisions (e.g., a robot, a game character).
o Environment: The world the agent interacts with (e.g., a game, a physical space).
o State: The current situation of the agent within the environment.
o Action: The moves or decisions the agent can take.
o Reward: Feedback from the environment for each action taken.
• Types of RL Models (a code sketch follows this list):
o Q-Learning: A value-based method that learns the value of each action in a given state.
o Deep Q-Networks (DQN): A deep learning approach to Q-Learning that approximates values with neural networks.
o Policy Gradient Methods: Directly optimize the policy function, focusing on which actions to take rather than on the value of actions.
o Actor-Critic Methods: Combine value-based and policy-based approaches: the actor chooses actions, and the critic evaluates them.
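A minimal tabular Q-Learning sketch on a hand-rolled toy environment (a 5-state corridor, not a standard library API); the learning rate, discount factor, and exploration rate are illustrative:

    import numpy as np

    # Toy corridor: states 0..4; the agent moves left/right and is
    # rewarded on reaching state 4.
    n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.1, 0.9, 0.1
    rng = np.random.RandomState(0)

    for episode in range(500):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection: mostly exploit, sometimes explore.
            a = rng.randint(n_actions) if rng.rand() < epsilon else int(np.argmax(Q[s]))
            s_next = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Q-learning update: move Q[s, a] toward
            # reward + discounted best future value.
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next

    # Learned policy: nonterminal states should prefer action 1 (right).
    print(np.argmax(Q, axis=1))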

5. Self-Supervised Learning

• Overview: Self-supervised learning generates labels from the data itself, often by solving a pretext task that needs no manual labels. The idea is for the model to learn useful representations from vast amounts of unlabeled data, which can then be fine-tuned for specific supervised tasks.
• Applications: Language modeling (e.g., BERT, GPT), image processing, and video understanding.
• Common Techniques (a code sketch follows this list):
o Contrastive Learning: Maximizes agreement between differently augmented views of the same data point while minimizing similarity to other data points.
o Masked Modeling: Masks parts of the input and trains the model to predict the masked parts (used in NLP models like BERT).
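A minimal sketch of how the masked-modeling pretext task derives labels from the data itself; the token IDs, the 15% mask rate, and the MASK_ID value are illustrative assumptions:

    import numpy as np

    rng = np.random.RandomState(1)
    MASK_ID = 0                               # assumed id reserved for [MASK]
    tokens = np.array([12, 7, 99, 3, 41, 8])  # a toy token sequence

    # Choose ~15% of positions to mask; the hidden originals become the labels.
    mask = rng.rand(len(tokens)) < 0.15
    inputs = np.where(mask, MASK_ID, tokens)
    labels = np.where(mask, tokens, -100)  # -100 = ignore in the loss (BERT convention)

    print("inputs:", inputs)
    print("labels:", labels)
    # A model is then trained to predict `labels` at the masked
    # positions of `inputs` -- no manual annotation required.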

6. Transfer Learning

• Overview: Transfer learning reuses a model pre-trained on a large dataset to solve a similar problem with a smaller dataset, often by fine-tuning it on the new data.
• Applications: Common in NLP and computer vision, where models like GPT and ResNet are trained on extensive datasets and then adapted to more specific tasks such as sentiment analysis or medical image classification.
• Common Techniques (a code sketch follows this list):
o Feature Extraction: Using the pre-trained model’s layers as fixed feature extractors while training only a new output layer.
o Fine-Tuning: Adjusting the parameters of some or all pre-trained layers to adapt the model to the new task.
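A minimal feature-extraction sketch, assuming a recent torchvision is installed (the weights download on first use); the ResNet-18 backbone and the 2-class head are illustrative:

    import torch.nn as nn
    from torchvision import models

    # Load a model pre-trained on ImageNet (a large dataset).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Feature extraction: freeze all pre-trained layers...
    for param in model.parameters():
        param.requires_grad = False

    # ...and replace the final classifier head for the new task
    # (here, an assumed 2-class problem).
    model.fc = nn.Linear(model.fc.in_features, 2)
    # Only model.fc's parameters are updated when training on the new dataset.

For fine-tuning instead, some or all of the frozen layers would be unfrozen (requires_grad = True), typically with a smaller learning rate.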

These ML model types cover a wide range of problems, from straightforward prediction to complex real-world applications where models must learn without supervision or adapt efficiently to new contexts.
