
Generative Learning

• Generative Machine Learning is an area of machine learning, today most often built on deep learning, that focuses on training models to produce new data samples that closely resemble the original training set. These models learn and capture the underlying patterns and distributions present in the training data, enabling them to generate novel data that retains the essential features of the original.
• The primary goal of generative models is to gain a deep understanding of the data’s core
structure, allowing them to create diverse and unique outputs.
• At the heart of generative machine learning lies the concept of probability distributions. A generative model estimates the distribution that produced the training data and then samples from it; applying these probability frameworks correctly is key to producing outputs that share the characteristics of the training data.
• Generative models have wide-ranging applications, from creating realistic images and
videos to enhancing natural language processing systems and simulating complex
systems in fields like healthcare and finance. The power of these models lies in their ability
to not just replicate existing data but to generate creative, unforeseen possibilities.
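To make this concrete, here is a minimal sketch of the core loop using the simplest possible generative model, a one-dimensional Gaussian. The data and all parameter values below are purely illustrative, not from the slides: the model "learns the distribution" by estimating its parameters, then "generates new data" by sampling from the fitted distribution.

```python
import numpy as np

# Hypothetical training data, e.g. measured weights in kg (illustrative)
rng = np.random.default_rng(0)
train = rng.normal(loc=70.0, scale=8.0, size=500)

# Learn the distribution: estimate the Gaussian's parameters from the data
mu_hat = train.mean()
sigma_hat = train.std()

# Generate new data: sample from the fitted distribution
new_samples = rng.normal(loc=mu_hat, scale=sigma_hat, size=5)
print(f"fitted mean = {mu_hat:.2f}, fitted std = {sigma_hat:.2f}")
print("generated samples:", np.round(new_samples, 2))
```

Real generative models (GANs, VAEs) replace this single Gaussian with far richer learned distributions, but the fit-then-sample structure is the same.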
Imagine your task is to identify the language of a piece of speech.

You can do it by either:

1. Learning each language, and then classifying the speech using the knowledge you just gained, or
2. Learning only the differences between the linguistic models, without learning the languages themselves, and then classifying the speech.

The first is the Generative approach and the second is the Discriminative approach.
Different aspects of Generative and Discriminative models.
Goal
• Generative Models: Learn the complete data distribution across all classes.
• Discriminative Models: Focus on learning the decision boundary that separates
different classes.

Produced Outcome
• Generative Models: Capable of generating new data that resembles the original
training data.
• Discriminative Models: Classify input data into predefined categories or labels.

Data Input Type


• Generative Models: Can work with both labeled and unlabeled data, making them
versatile in data-limited scenarios.
• Discriminative Models: Rely on labeled data for training, focusing on specific class
boundaries.

Key Scenarios
• Generative Models: Widely used for tasks like image synthesis and data augmentation.
• Discriminative Models: Ideal for classification tasks such as image classification,
sentiment analysis, and object detection.

Model Complexity
• Generative Models: Typically more complex to train due to the need to model entire
data distributions.
• Discriminative Models: Simpler and easier to train since they only focus on decision
boundaries.

Practical Uses
• Generative Models: Found in natural language generation, image generation, and
deep fake technologies.
• Discriminative Models: Commonly used in predictive tasks such as spam detection,
medical diagnosis, and regression analysis.

Notable Examples
• Generative Models: Examples include Generative Adversarial Networks (GANs) and
Variational Autoencoders (VAEs).
• Discriminative Models: Examples include Support Vector Machines (SVMs), Logistic
Regression, and Random Forests.
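The sketch below illustrates the contrast on the same toy labeled dataset: Gaussian naive Bayes is a classic generative classifier (it models p(x | y) and p(y), then classifies via Bayes' rule), while logistic regression, one of the discriminative examples above, models p(y | x) directly. The dataset and hyperparameters are placeholders chosen for the demo.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic labeled dataset (parameters are illustrative)
X, y = make_classification(n_samples=400, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Generative: model p(x | y) and p(y), classify via Bayes' rule
gen = GaussianNB().fit(X_tr, y_tr)

# Discriminative: model p(y | x) directly, learning only the boundary
disc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("generative accuracy:    ", gen.score(X_te, y_te))
print("discriminative accuracy:", disc.score(X_te, y_te))
```

Both solve the same classification task; the generative model additionally learned a distribution it could sample from, which the discriminative model cannot do.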
Gaussian parameter estimation

• The left-most figure shows a Gaussian with mean μ = 0 and covariance matrix Σ = I. A
Gaussian with zero mean and identity covariance is also called the standard normal
distribution.
• The middle figure shows the density of a Gaussian with zero mean and Σ = 0.6I; and
• The rightmost figure shows one with, Σ = 2I.
• We see that as Σ becomes larger, the Gaussian becomes more “spread-out,” and as it
becomes smaller, the distribution becomes more “compressed.”
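A small sketch of this spread-out versus compressed behavior: evaluate the 2-D Gaussian density for Σ = I, 0.6·I, and 2·I at a fixed point near the mean. The evaluation point is an arbitrary choice for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

x = np.array([0.5, -0.5])  # arbitrary point near the mean
for scale in (1.0, 0.6, 2.0):  # Sigma = I, 0.6*I, 2*I as in the figures
    cov = scale * np.eye(2)
    density = multivariate_normal(mean=np.zeros(2), cov=cov).pdf(x)
    print(f"Sigma = {scale}*I -> density at {x}: {density:.4f}")
```

The smallest covariance (0.6·I) gives the highest density near the mean because the mass is compressed there, while 2·I spreads the same mass out and lowers it.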
Maximum likelihood estimation
• The goal of maximum likelihood estimation is to find the parameter values under which
the observed data is most probable, i.e., the best fit of a distribution to the data.
(Figure: distribution families suited to different types of data, with panels for the Normal, Exponential, and Gamma distributions.)
• With the distribution's mean placed far from the observed weights, the likelihood
(probability) of observing all these weights is low.
• If we shift the distribution toward the data, the likelihood of observing these
weights becomes higher.
• If we keep shifting the distribution, the likelihood keeps changing: it rises and
then falls again.
• We want the location that maximizes the likelihood of observing the weights we
measured.
• That location is the Maximum Likelihood Estimate of the mean.
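The same shifting intuition can be written as a short sketch: scan candidate means, compute each one's log-likelihood for the observed weights, and keep the maximizer. The data, the fixed σ, and the grid below are illustrative assumptions; for a Gaussian mean, the closed-form MLE is simply the sample mean, which the grid search should match.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical measured weights in kg (illustrative)
rng = np.random.default_rng(1)
weights = rng.normal(loc=72.0, scale=6.0, size=200)

# "Shift the distribution": score each candidate mean by the
# log-likelihood of the data (sigma held fixed for simplicity)
candidate_means = np.linspace(50, 90, 401)
log_liks = [norm(loc=m, scale=6.0).logpdf(weights).sum()
            for m in candidate_means]

best = candidate_means[np.argmax(log_liks)]
print(f"grid-search MLE of the mean:   {best:.2f}")
print(f"closed-form MLE (sample mean): {weights.mean():.2f}")
```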
