
Machine and Deep Learning Strategies for Making Intelligent Systems

Mohammad Shorif Uddin, PhD


Professor of Computer Science and Engineering
Jahangirnagar University
Dhaka, Bangladesh
[email protected]
https://www.juniv.edu/teachers/shorifuddin

© MSU 2021
Machine Learning, Computer Vision, and Deep Learning

Source: A. A. Valliani et al., "Deep Learning and Neurology: A Systematic Review," Neurology and Therapy, vol. 8, 2019. https://doi.org/10.1007/s40120-019-00153-8
Machine Learning Styles
The CNN – Prime Model for Deep Learning

Source: https://towardsdatascience.com
CNN Working

Convolution operation (sum of shifted products)

Pooling operation (max pooling)
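To make the two operations above concrete, here is a minimal NumPy sketch (an illustration, not from the slides; the 4x4 input, 2x2 kernel, and 2x2 pooling window are assumptions). It computes a "valid" 2-D convolution as a sum of shifted products and then applies non-overlapping max pooling.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2-D convolution ("valid" mode): slide the flipped kernel over the image
    and take the sum of element-wise products at every position.
    Note: most deep learning libraries skip the flip (cross-correlation)."""
    kernel = np.flipud(np.fliplr(kernel))
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(feature_map, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size  # crop so the map divides evenly
    return feature_map[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy example (values are assumptions for illustration only).
image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])

feature_map = conv2d_valid(image, kernel)   # shape (3, 3)
pooled = max_pool2d(feature_map, size=2)    # shape (1, 1)
print(feature_map)
print(pooled)
```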
Neural Networks and Deep Learning

Supervised:
• Convolutional Neural Network (CNN)
• Recurrent Neural Network (RNN); LSTM is a subclass of RNN
• Deep Neural Network (a network with three or more hidden layers)
• Generative Adversarial Network (GAN)

Unsupervised:
• Deep Belief Network
• Autoencoder
Deep Learning is Everywhere

CNN Evolution (Go Deep to Deeper)

The ImageNet Large Scale Visual Recognition Challenge (ILSVRC)

(Source: https://medium.com/@sidereal/cnns-architectures-lenet-alexnet-vgg-googlenet-resnet-and-more-666091488df5)
CNN Architectural Evolution

Source: https://www.aismartz.com/blog/cnn-architectures/
CNN Loss Functions

The loss is simply the prediction error of the CNN, and the method used to calculate it is called the loss function; the gradients of the loss with respect to the weights are what drive the weight updates during training.

Keras and TensorFlow provide various built-in loss functions (a minimal usage sketch follows the list), such as:

• Mean Squared Error (MSE)
• Binary Cross-Entropy (BCE)
• Categorical Cross-Entropy (CC)
• Sparse Categorical Cross-Entropy (SCC)
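As an illustration only (not part of the slides), the sketch below evaluates these built-in Keras loss functions on tiny dummy tensors; the example values are assumptions.

```python
import tensorflow as tf

# Dummy targets and predictions (values are assumptions for illustration).
y_true_reg = tf.constant([[1.0], [2.0]])
y_pred_reg = tf.constant([[1.2], [1.7]])

y_true_bin = tf.constant([[1.0], [0.0]])
y_pred_bin = tf.constant([[0.9], [0.2]])

y_true_cat = tf.constant([[0.0, 1.0, 0.0]])   # one-hot label for class 1
y_pred_cat = tf.constant([[0.1, 0.8, 0.1]])   # predicted class probabilities

mse = tf.keras.losses.MeanSquaredError()
bce = tf.keras.losses.BinaryCrossentropy()
cce = tf.keras.losses.CategoricalCrossentropy()
scce = tf.keras.losses.SparseCategoricalCrossentropy()

print("MSE:", mse(y_true_reg, y_pred_reg).numpy())
print("BCE:", bce(y_true_bin, y_pred_bin).numpy())
print("CC :", cce(y_true_cat, y_pred_cat).numpy())
# Sparse CC takes integer class indices instead of one-hot vectors.
print("SCC:", scce(tf.constant([1]), y_pred_cat).numpy())
```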

CNN Optimization Algorithms (1)
Various Optimization Algorithms for Training the CNN:
• Gradient Descent
• Stochastic Gradient Descent (SGD)
• Mini-Batch Gradient Descent
• Momentum
• Nesterov Accelerated Gradient (NAG)
• Adagrad
• AdaDelta
• Adam

Among these, Adam is generally the best optimizer for training a CNN in less time and more efficiently.
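For intuition, here is an illustrative sketch (not from the slides) comparing plain gradient descent with Adam on a toy one-dimensional quadratic; the learning rate and Adam's beta/epsilon constants are the commonly used defaults, assumed for the example.

```python
import numpy as np

def grad(w):
    """Gradient of the toy loss L(w) = (w - 3)^2, whose minimum is at w = 3."""
    return 2.0 * (w - 3.0)

# Plain gradient descent: w <- w - lr * grad
w_gd, lr = 0.0, 0.1
for _ in range(100):
    w_gd -= lr * grad(w_gd)

# Adam: keeps running averages of the gradient (m) and squared gradient (v)
# and scales each step by them.
w_adam, m, v = 0.0, 0.0, 0.0
beta1, beta2, eps = 0.9, 0.999, 1e-8
for t in range(1, 101):
    g = grad(w_adam)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)          # bias correction
    v_hat = v / (1 - beta2 ** t)
    w_adam -= lr * m_hat / (np.sqrt(v_hat) + eps)

print("gradient descent:", round(w_gd, 4))   # both converge toward 3.0
print("adam            :", round(w_adam, 4))
```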

Source: https://towardsdatascience.com/optimizers-for-training-neural-network-59450d71caf6
Best Python Libraries for ML & DL (1)

Source: https://towardsdatascience.com
Over-fitting and Under-fitting

[Figure: example model fits showing under-fitting, a balanced fit, and over-fitting]

Source: https://bit.ly/2IoAGzL
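As an illustrative sketch only (not from the slides), the snippet below mimics the three panels of the figure by fitting polynomials of too-low, reasonable, and too-high degree to noisy data: under-fitting gives high error on both sets, while over-fitting gives low training error but high test error. The data, degrees, and train/test split are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Noisy samples from an underlying sine curve (assumed toy data).
x = np.linspace(-1, 1, 20)
y = np.sin(np.pi * x) + rng.normal(scale=0.3, size=x.size)
x_train, y_train = x[::2], y[::2]    # even indices for training
x_test, y_test = x[1::2], y[1::2]    # odd indices for testing

for degree in (1, 4, 9):  # under-fit, balanced, over-fit
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE = {train_err:.3f}, test MSE = {test_err:.3f}")
```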
Generative Adversarial Network
"The GANfather: The man who's given machines the gift of imagination," by Martin Giles, MIT Technology Review, Feb 21, 2018
Discriminator vs Generator

Generative Adversarial Network (GAN)

The GAN was first introduced by Ian Goodfellow et al. in 2014.

How Do GANs Work?

o A GAN has two players: a generator and a discriminator. The generator generates new instances of an object, while the discriminator determines whether each new instance belongs to the actual dataset.

o During the training process, weights and biases are adjusted through backpropagation until the discriminator learns to distinguish real images of objects from fake images.

o The generator gets feedback from the discriminator and uses it to produce images that look more 'real'.

o The discriminator is typically a convolutional neural network, whereas the generator is a de-convolutional (transposed-convolution) neural network, as sketched below.
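To make the two-player setup concrete, here is a minimal TensorFlow/Keras training-step sketch. It is an illustration, not the author's implementation: the layer sizes, 28x28 image shape, latent dimension, and learning rates are all assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 64  # size of the random noise vector fed to the generator (assumed)

# Generator: maps noise to a 28x28 "image" via transposed convolutions.
generator = tf.keras.Sequential([
    layers.Dense(7 * 7 * 64),
    layers.Reshape((7, 7, 64)),
    layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid"),
])

# Discriminator: a small CNN that outputs the probability an image is real.
discriminator = tf.keras.Sequential([
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

def train_step(real_images):
    """One adversarial update: train the discriminator to tell real from fake,
    and the generator to fool the discriminator (feedback via backpropagation)."""
    batch = tf.shape(real_images)[0]
    noise = tf.random.normal((batch, LATENT_DIM))
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_pred = discriminator(real_images, training=True)
        fake_pred = discriminator(fake_images, training=True)
        # Discriminator: label real images as 1 and generated images as 0.
        d_loss = bce(tf.ones_like(real_pred), real_pred) + \
                 bce(tf.zeros_like(fake_pred), fake_pred)
        # Generator: try to make the discriminator label fakes as 1.
        g_loss = bce(tf.ones_like(fake_pred), fake_pred)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss

# Smoke test with a random "real" batch (placeholder data, assumed).
d_loss, g_loss = train_step(tf.random.uniform((8, 28, 28, 1)))
print("d_loss:", float(d_loss), "g_loss:", float(g_loss))
```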

