Chapter 1
Object-oriented programming
Intermediate Deep Learning with PyTorch
Michal Oleszak
Machine Learning Engineer
What we will learn
How to train robust deep learning models:
Loss calculation
Model evaluation
Prerequisite course: Introduction to Deep Learning with PyTorch
PyTorch Models
Data (attributes)
account = BankAccount(100)
print(account.balance)
100
Methods (functions)

account = BankAccount(100)
account.deposit(50)
print(account.balance)

150
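The BankAccount class itself is not shown on the slide; a minimal sketch consistent with the calls above (the implementation is an assumption, only the constructor argument, the balance attribute, and the deposit method come from the example):

```python
class BankAccount:
    """A minimal account object: data (an attribute) plus behavior (a method)."""

    def __init__(self, balance):
        # Data (attribute): the current balance, set when the object is created
        self.balance = balance

    def deposit(self, amount):
        # Method: behavior that updates the object's data
        self.balance += amount


account = BankAccount(100)
account.deposit(50)
print(account.balance)  # 150
```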
from torch.utils.data import DataLoader

dataloader_train = DataLoader(
    dataset_train,
    batch_size=2,
    shuffle=True,
)

Features: tensor([
    [0.4899, 0.4180, 0.6299, 0.3496, 0.4575,
     0.3615, 0.3259, 0.5011, 0.7545],
    [0.7953, 0.6305, 0.4480, 0.6549, 0.7813,
     0.6566, 0.6340, 0.5493, 0.5789]
]),
Labels: tensor([1., 0.])
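The dataset_train object is assumed to exist already; one way to build something shaped like the printed batch (9 features per sample, binary float labels) is a TensorDataset, used here as a stand-in for the course's own Dataset class:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Hypothetical data: 8 samples with 9 features each, binary float labels
features = torch.rand(8, 9)
labels = torch.randint(0, 2, (8,)).float()

dataset_train = TensorDataset(features, labels)

dataloader_train = DataLoader(
    dataset_train,
    batch_size=2,
    shuffle=True,
)

# Each iteration yields one batch of (features, labels)
batch_features, batch_labels = next(iter(dataloader_train))
print(batch_features.shape)  # torch.Size([2, 9])
print(batch_labels.shape)    # torch.Size([2])
```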
net = Net()
Training loop

Define the loss function and optimizer:

import torch.nn as nn
import torch.optim as optim

BCELoss for binary classification
The optimizer adapts the update for each parameter based on the size of its previous gradients
accuracy = acc.compute()
print(f"Accuracy: {accuracy}")
Accuracy: 0.6759443283081055
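Putting these pieces together, a sketch of one training run (the model, the data, and the choice of Adagrad as the adaptive optimizer are assumptions; the slide computes accuracy with a metric object's compute() method, replaced here by a manual calculation to stay self-contained):

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

# Hypothetical stand-ins for the course's Net and dataset
net = nn.Sequential(nn.Linear(9, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
features = torch.rand(8, 9)
labels = torch.randint(0, 2, (8, 1)).float()

criterion = nn.BCELoss()  # binary cross-entropy for binary classification
# Adagrad scales each parameter's update by its accumulated past gradients
optimizer = optim.Adagrad(net.parameters(), lr=0.01)

for epoch in range(5):
    optimizer.zero_grad()          # clear gradients from the previous step
    outputs = net(features)        # forward pass
    loss = criterion(outputs, labels)
    loss.backward()                # backward pass: compute gradients
    optimizer.step()               # update parameters

# Manual accuracy: threshold the sigmoid outputs at 0.5
preds = (net(features) > 0.5).float()
accuracy = (preds == labels).float().mean().item()
print(f"Accuracy: {accuracy}")
```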
Vanishing gradients
Gradients get smaller and smaller during the backward pass
Early layers receive tiny updates, so the model learns slowly or not at all
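The shrinking can be seen directly in a toy stack of sigmoid layers (the depth and layer width here are arbitrary choices for illustration): each sigmoid multiplies the backward signal by at most 0.25, so the first layer's gradients end up orders of magnitude smaller than the last layer's.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Ten Linear+Sigmoid blocks stacked: a worst case for gradient flow
net = nn.Sequential(
    *[nn.Sequential(nn.Linear(10, 10), nn.Sigmoid()) for _ in range(10)]
)

x = torch.rand(1, 10)
net(x).sum().backward()  # backward pass populates .grad on every weight

first = net[0][0].weight.grad.abs().mean().item()   # earliest layer
last = net[-1][0].weight.grad.abs().mean().item()   # latest layer
# first is many orders of magnitude smaller than last
print(f"first layer grad: {first:.2e}, last layer grad: {last:.2e}")
```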
3. Batch normalization
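In PyTorch, batch normalization is a layer placed between a linear layer and its activation; a minimal sketch (the layer sizes are illustrative, not from the slide):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

block = nn.Sequential(
    nn.Linear(9, 16),
    nn.BatchNorm1d(16),  # normalizes each of the 16 features across the batch
    nn.ReLU(),
)

x = torch.rand(4, 9)   # batch of 4 samples, 9 features
out = block(x)
print(out.shape)  # torch.Size([4, 16])
```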
import torch.nn as nn
import torch.nn.init as init

layer = nn.Linear(8, 1)  # 8 inputs, 1 output (inferred from the tensor shape)
print(layer.weight)

Parameter containing:
tensor([[-0.0195,  0.0992,  0.0391,  0.0212,
         -0.3386, -0.1892, -0.3170,  0.2148]])

init.kaiming_uniform_(layer.weight)
print(layer.weight)

Parameter containing:
tensor([[-0.3063, -0.2410,  0.0588,  0.2664,
          0.0502, -0.0136,  0.2274,  0.0901]])
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(9, 16)
        self.fc2 = nn.Linear(16, 8)
        self.fc3 = nn.Linear(8, 1)
        init.kaiming_uniform_(self.fc1.weight)
        init.kaiming_uniform_(self.fc2.weight)
        init.kaiming_uniform_(
            self.fc3.weight,
            nonlinearity="sigmoid",
        )

    def forward(self, x):
        x = nn.functional.relu(self.fc1(x))
        x = nn.functional.relu(self.fc2(x))
        x = nn.functional.sigmoid(self.fc3(x))
        return x
...