AAI - Module 1 - Generative Adversarial Network and Probabilistic Models
Generative models are a type of AI that learn to generate new data points similar to a given
set of training data.
● Input: A dataset (e.g., images, text, etc.).
● Output: New, realistic samples similar to the training data.
Common Applications:
1. Gaussian (Normal) Distribution
● Formula:
f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(x - \mu)^2}{2\sigma^2}}
Where \mu is the mean and \sigma^2 is the variance.
2. Bernoulli Distribution
● Applies to binary outcomes (e.g., success or failure).
● Formula:
P(X = x) = p^x (1 - p)^{1 - x}
Where x is either 0 or 1, and p is the probability of success (a Python sketch of both formulas follows this list).
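To make the two formulas above concrete, here is a minimal Python sketch that evaluates each one directly from its definition (the function names and example values are illustrative, not part of the module):

import math

def gaussian_pdf(x, mu, sigma):
    # Gaussian density: f(x) = 1 / sqrt(2*pi*sigma^2) * exp(-(x - mu)^2 / (2*sigma^2))
    coeff = 1.0 / math.sqrt(2 * math.pi * sigma ** 2)
    return coeff * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def bernoulli_pmf(x, p):
    # Bernoulli probability: P(X = x) = p^x * (1 - p)^(1 - x), with x in {0, 1}
    assert x in (0, 1), "x must be 0 or 1"
    return p ** x * (1 - p) ** (1 - x)

# Example: standard normal density at x = 0, and a coin with success probability 0.7
print(gaussian_pdf(0.0, mu=0.0, sigma=1.0))   # approximately 0.3989
print(bernoulli_pmf(1, p=0.7))                # 0.7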
GANs are a type of generative model introduced by Ian Goodfellow in 2014. They consist of
two neural networks:
1. Generator: Creates fake data points.
2. Discriminator: Classifies data as real or fake.
These two networks are trained together in a process called adversarial training:
● The Generator tries to produce data that looks real.
● The Discriminator tries to distinguish between real and fake data.
This process repeats until the generator is good enough that the discriminator can no longer
distinguish real from fake.
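In the original formulation (Goodfellow et al., 2014), this adversarial game is written as a min-max objective over the two networks:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

The Discriminator D maximizes this value by assigning high probability to real samples x and low probability to generated samples G(z), while the Generator G minimizes it by pushing D(G(z)) toward 1.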
In practice, however, training GANs is difficult for several reasons:
1. Training Instability: The two networks compete, which makes balancing their training difficult.
2. Mode Collapse: The generator learns to produce limited variations, causing a lack of
diversity.
3. Convergence Issues: GANs often struggle to reach a stable equilibrium during
training.
Generative models are vital to artificial intelligence because they allow machines to exhibit
creativity and produce original content.
Key Advantages:
This process can produce realistic human faces, landscapes, or even entirely synthetic
datasets.
Hidden Markov Models (HMMs) are used to model sequential or time-series data where the system evolves over time, but certain variables (states) are hidden.
Example:
Speech recognition — the states are phonemes (hidden), and the observations are sound
wave features.
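As a minimal sketch of how hidden states and observations fit together, the forward algorithm below computes the probability of an observation sequence under a toy HMM (the two states, three observation symbols, and all probability values are made up for illustration):

import numpy as np

# Toy HMM with 2 hidden states and 3 observation symbols (all numbers are illustrative)
start = np.array([0.6, 0.4])            # initial state probabilities
trans = np.array([[0.7, 0.3],           # state transition probabilities
                  [0.4, 0.6]])
emit = np.array([[0.5, 0.4, 0.1],       # emission probabilities per hidden state
                 [0.1, 0.3, 0.6]])
obs = [0, 2, 1]                         # an observed sequence of symbol indices

# Forward algorithm: alpha[i] = P(observations so far, current hidden state = i)
alpha = start * emit[:, obs[0]]
for t in range(1, len(obs)):
    alpha = (alpha @ trans) * emit[:, obs[t]]

print("P(observation sequence) =", alpha.sum())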
Gaussian Mixture Models (GMMs) are probabilistic models that represent data as a mixture of multiple Gaussian distributions. They are often used for clustering tasks.
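A minimal clustering sketch using scikit-learn's GaussianMixture (the synthetic one-dimensional data and the choice of two components are assumptions for the example):

import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 1-D data drawn from two different Gaussians (illustrative)
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 0.5, 200),
                       rng.normal(3.0, 1.0, 200)]).reshape(-1, 1)

# Fit a mixture of two Gaussians and assign each point to a component (cluster)
gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
labels = gmm.predict(data)

print("estimated means:", gmm.means_.ravel())
print("mixing weights:", gmm.weights_)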
3. Important Concepts
4. Probabilistic Models in Practice
Applications:
Generative Adversarial Networks (GANs) are deep learning models introduced by Ian
Goodfellow in 2014. GANs are designed to generate realistic data by training two neural
networks—the Generator and the Discriminator—in an adversarial setting.
The generator is a neural network that creates fake data from random noise, aiming to trick
the discriminator into classifying fake data as real.
● Input: A random noise vector z, typically sampled from a probability distribution (e.g., Gaussian or uniform).
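A minimal sketch of such a generator in PyTorch (the layer sizes, the 100-dimensional noise vector, and the 784-dimensional output, e.g. a flattened 28x28 image, are assumptions for illustration):

import torch
import torch.nn as nn

# Generator: maps a random noise vector z to a fake data sample
generator = nn.Sequential(
    nn.Linear(100, 256),    # z is assumed to have 100 dimensions
    nn.ReLU(),
    nn.Linear(256, 784),    # output sized for a flattened 28x28 image (assumed)
    nn.Tanh(),              # outputs in [-1, 1], matching normalized pixel values
)

z = torch.randn(16, 100)    # a batch of 16 noise vectors sampled from a Gaussian
fake = generator(z)         # fake samples with shape (16, 784)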
The discriminator is a binary classifier that tries to distinguish between real data (from the
dataset) and fake data (from the generator).
Output: A probability:
● 1 if the data is real.
● 0 if the data is fake.
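Continuing the sketch from the generator section, a matching discriminator and one adversarial update step might look like the following (all layer sizes, the learning rate, and the stand-in "real" batch are assumptions; real training would loop over a dataset for many epochs):

import torch
import torch.nn as nn

# Discriminator: a binary classifier that outputs the probability that its input is real
discriminator = nn.Sequential(
    nn.Linear(784, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),               # probability in (0, 1): near 1 = real, near 0 = fake
)

# Same toy generator as in the previous sketch (sizes are assumptions)
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh()
)

bce = nn.BCELoss()
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)

real = torch.rand(16, 784)      # stand-in for one batch of real data (assumed)
ones = torch.ones(16, 1)        # labels for "real"
zeros = torch.zeros(16, 1)      # labels for "fake"

# 1) Discriminator step: push D(real) toward 1 and D(G(z)) toward 0
fake = generator(torch.randn(16, 100)).detach()
d_loss = bce(discriminator(real), ones) + bce(discriminator(fake), zeros)
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# 2) Generator step: push D(G(z)) toward 1, i.e. try to fool the discriminator
fake = generator(torch.randn(16, 100))
g_loss = bce(discriminator(fake), ones)
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

These two updates correspond to the max over D and the min over G in the min-max objective shown earlier.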
5. Challenges in GANs
6. Variants of GANs
7. Mathematical Details
8. Applications of GANs
1. Image Translation:
● Converting images from one domain to another (e.g., day-to-night,
sketch-to-photo).
2. Super-Resolution:
● Converting low-resolution images to high-resolution versions.
3. Text-to-Image Synthesis:
● Generating images from textual descriptions.
4. Data Augmentation:
● Generating synthetic samples to expand training datasets.