AMLlab 06

Student name: Phan Huy Quang

Student ID: 104177128

1. Explain the concept of transfer learning. How does it differ from training a model from
scratch?

Answer:
Transfer learning is the practice of using a model pre-trained on a large dataset (typically for a broad classification task) as the starting point for a model on a new, related task. Instead of training a new model from scratch, which requires a large dataset, significant computational resources and a lot of time, transfer learning leverages the representations the pre-trained model has already learned and applies them to a different but related problem. The pre-trained model's feature extractor is typically reused, and new layers are added on top (and, optionally, some of the top layers are unfrozen) for task-specific adjustments.

Differences from training a model from scratch:

From scratch                    Transfer learning
Requires a large dataset        Needs only a smaller dataset related to the target task
The entire model is trained     New layers are added on top of an existing pre-trained model
Computationally expensive       Computationally cheap

2. What is fine-tuning in the context of transfer learning, and why is it useful?


Ans:
Fine-tuning is the technique of unfreezing some of the top layers of the pre-trained model and retraining them, together with the newly added classifier, usually at a low learning rate. These top layers tend to encode task-specific rather than generic features, so retraining them adapts the representations to our specific classification problem.
It is useful because it helps the model adapt better to the specific data and usually improves accuracy.
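
A minimal sketch of what fine-tuning could look like in Keras is shown below. The input size, the cut-off layer index (fine_tune_at) and the learning rate are assumptions for illustration, not values taken from the lab code.

from tensorflow import keras

# Assumed starting point: a MobileNetV2 base used (frozen) for feature extraction
base_model = keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
inputs = keras.Input(shape=(160, 160, 3))
x = keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base_model(x, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

# Fine-tuning: unfreeze the base model, then re-freeze all layers below a cut-off
base_model.trainable = True
fine_tune_at = 100                                # assumed layer index
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

# Recompile with a low learning rate so the pre-trained weights change only slightly
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5),
              loss=keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=["accuracy"])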

3. Why is it important to freeze the convolutional base during feature extraction?


Ans:
Freezing prevents the convolutional base's weights from being updated during training, preserving the general-purpose features learned from the large dataset. This helps reduce overfitting and saves computational resources, because the feature extractor does not need to be retrained.

4. Why use data augmentation?
Ans:
Data augmentation is used when the dataset is small, which can lead to overfitting and poor generalization. It increases the effective number of training samples by applying random transformations such as rotations, flips or zooms to the original images. This increases the diversity of the training data, which reduces overfitting and helps the model learn more robust features by exposing it to more variations of the data.
5. Screenshot: Pre-trained MobileNetV2 model loaded without the top classification
layers

The include_top=False argument specifies that the model is loaded without its top (classification) layers.
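
A minimal sketch of the kind of code such a loading step involves, assuming the Keras applications API, ImageNet weights and an input size of 160x160x3 (the input size is an assumption for illustration):

from tensorflow import keras

IMG_SHAPE = (160, 160, 3)    # assumed input size

# Load MobileNetV2 pre-trained on ImageNet, excluding the top classification layers
base_model = keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                            include_top=False,
                                            weights="imagenet")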

6. Screenshot: Set pre-trained model to be non-trainable for feature extraction
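
Continuing from the base_model loaded in the previous sketch, setting the pre-trained model to be non-trainable could look like this:

# Freeze the convolutional base so its weights are not updated during training
base_model.trainable = False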

7. Screenshot: Data augmentation layers defined in the model
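
A minimal sketch of data augmentation layers in Keras; the specific transformations and their parameters below are assumptions for illustration:

from tensorflow import keras

# Random transformations applied to the input images during training only
data_augmentation = keras.Sequential([
    keras.layers.RandomFlip("horizontal"),
    keras.layers.RandomRotation(0.2),
    keras.layers.RandomZoom(0.2),
])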

8. Screenshot: Addition of new classifier layers on top of the base model
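
A minimal sketch of adding new classifier layers on top of the frozen base, assuming the base_model and data_augmentation objects from the sketches above and a binary classification task (the preprocessing step, dropout rate and single-logit output are assumptions for illustration):

# Assemble the full model: augmentation -> preprocessing -> frozen base -> new head
inputs = keras.Input(shape=(160, 160, 3))
x = data_augmentation(inputs)
x = keras.applications.mobilenet_v2.preprocess_input(x)
x = base_model(x, training=False)              # keep the frozen base in inference mode
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.2)(x)
outputs = keras.layers.Dense(1)(x)             # single logit, assumed binary task
model = keras.Model(inputs, outputs)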
