Lecture 11 Advanced CNN

The document discusses advanced convolutional neural networks (CNNs) in PyTorch. It begins by reviewing the typical structure of CNNs with convolutional and pooling layers for feature extraction followed by fully connected layers for classification. It then introduces GoogLeNet, which uses inception modules containing parallel convolutional layers of different sizes (1x1, 3x3, 5x5) followed by concatenation. The key benefits of 1x1 convolutions are reducing parameters and computational cost while combining input channels. The implementation of inception modules in PyTorch is then demonstrated.


PyTorch Tutorial

11. Advanced CNN

Revision

The baseline CNN from the previous lecture alternates convolution and subsampling for feature extraction, then classifies with fully connected layers:

Input (1 × 28 × 28)
  → 5 × 5 Convolution  → C1 feature maps (4 × 24 × 24)
  → 2 × 2 Subsampling  → S1 feature maps (4 × 12 × 12)
  → 5 × 5 Convolution  → C2 feature maps (8 × 8 × 8)
  → 2 × 2 Subsampling  → S2 feature maps (8 × 4 × 4)
  → Fully Connected (n1) → Fully Connected (n2) → Output (digits 0–9)

The convolution and subsampling stages perform feature extraction; the fully connected stages perform classification.
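For reference, a minimal PyTorch sketch of this baseline under stated assumptions: the slide does not give the width of the first fully connected layer (n1) or the subsampling type, so a hidden width of 64 and max pooling are assumed here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BaselineNet(nn.Module):
    """Sketch of the revision network; n1 = 64 and max pooling are assumptions."""
    def __init__(self):
        super(BaselineNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 4, kernel_size=5)   # 1x28x28 -> 4x24x24
        self.conv2 = nn.Conv2d(4, 8, kernel_size=5)   # 4x12x12 -> 8x8x8
        self.pool = nn.MaxPool2d(2)                   # 2x2 subsampling
        self.fc1 = nn.Linear(8 * 4 * 4, 64)           # n1 (assumed width)
        self.fc2 = nn.Linear(64, 10)                  # n2 -> 10 digit classes

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))          # -> (batch, 4, 12, 12)
        x = self.pool(F.relu(self.conv2(x)))          # -> (batch, 8, 4, 4)
        x = x.view(x.size(0), -1)                     # -> (batch, 128)
        x = F.relu(self.fc1(x))
        return self.fc2(x)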

GoogLeNet

Inception Module

Inception Module

The inception module applies four parallel branches to the same input and concatenates their outputs along the channel dimension (every branch keeps the spatial size):

Branch 1: 1 × 1 Conv (16)
Branch 2: 1 × 1 Conv (16) → 5 × 5 Conv (24)
Branch 3: 1 × 1 Conv (16) → 3 × 3 Conv (24) → 3 × 3 Conv (24)
Branch 4: Average Pooling → 1 × 1 Conv (24)

But what is a 1 × 1 convolution?
What is a 1 × 1 convolution?

A 1 × 1 kernel holds one weight per input channel. Each input channel is scaled by its weight, and the scaled maps are summed element-wise into a single output channel, so a 1 × 1 convolution mixes information across channels without looking at spatial neighbors. Worked example with a 3-channel 3 × 3 input and channel weights 0.5, 0.3, 0.2:

Channel 1:            Channel 2:            Channel 3:
1 2 3                 1 4 7                 9 8 7
4 5 6   ⨀ 0.5         2 5 8   ⨀ 0.3         6 5 4   ⨀ 0.2
7 8 9                 3 6 9                 3 2 1

= 0.5 1.0 1.5         = 0.3 1.2 2.1         = 1.8 1.6 1.4
  2.0 2.5 3.0           0.6 1.5 2.4           1.2 1.0 0.8
  3.5 4.0 4.5           0.9 1.8 2.7           0.6 0.4 0.2

Summing the three scaled channels gives the single output channel:

2.6 3.8 5.0
3.8 5.0 6.2
5.0 6.2 7.4
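A quick way to reproduce this example in PyTorch (a small sketch; the tensor values and per-channel weights are the ones from the slide above):

import torch
import torch.nn as nn

# The three 3x3 input channels from the worked example, shape (1, 3, 3, 3)
x = torch.tensor([[[[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]],
                   [[1., 4., 7.], [2., 5., 8.], [3., 6., 9.]],
                   [[9., 8., 7.], [6., 5., 4.], [3., 2., 1.]]]])

# One 1x1 kernel with one weight per input channel: 0.5, 0.3, 0.2
conv = nn.Conv2d(in_channels=3, out_channels=1, kernel_size=1, bias=False)
with torch.no_grad():
    conv.weight.copy_(torch.tensor([0.5, 0.3, 0.2]).view(1, 3, 1, 1))

print(conv(x))
# single output channel: [[2.6, 3.8, 5.0], [3.8, 5.0, 6.2], [5.0, 6.2, 7.4]]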
Why use a 1 × 1 convolution?

Going directly from 192 channels at 28 × 28 to 32 channels with a 5 × 5 convolution costs

5² × 28² × 192 × 32 = 120,422,400 multiplications.

Inserting a 1 × 1 bottleneck that first reduces the 192 channels to 16, then applies the 5 × 5 convolution to produce the 32 output channels, costs

1² × 28² × 192 × 16 + 5² × 28² × 16 × 32 = 12,443,648 multiplications,

roughly a tenfold reduction while keeping the same input and output shapes.
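The same counts as a quick arithmetic check (plain Python, just re-deriving the two numbers above):

# Direct 5x5 convolution: 192 -> 32 channels on a 28x28 feature map
direct = 5**2 * 28**2 * 192 * 32
# 1x1 bottleneck (192 -> 16), then 5x5 convolution (16 -> 32)
bottleneck = 1**2 * 28**2 * 192 * 16 + 5**2 * 28**2 * 16 * 32
print(direct, bottleneck, round(direct / bottleneck, 1))   # 120422400 12443648 9.7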
Implementation of Inception Module

The module is built branch by branch. Each snippet below pairs the layer definitions (placed in __init__) with their use (in forward).

Pooling branch: 3 × 3 average pooling with stride 1 and padding 1 (which keeps the spatial size), followed by a 1 × 1 convolution to 24 channels.

self.branch_pool = nn.Conv2d(in_channels, 24, kernel_size=1)

branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
branch_pool = self.branch_pool(branch_pool)

1 × 1 branch: a single 1 × 1 convolution to 16 channels.

self.branch1x1 = nn.Conv2d(in_channels, 16, kernel_size=1)

branch1x1 = self.branch1x1(x)

5 × 5 branch: a 1 × 1 convolution to 16 channels, then a 5 × 5 convolution to 24 channels (padding=2 keeps the spatial size).

self.branch5x5_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
self.branch5x5_2 = nn.Conv2d(16, 24, kernel_size=5, padding=2)

branch5x5 = self.branch5x5_1(x)
branch5x5 = self.branch5x5_2(branch5x5)

3 × 3 branch: a 1 × 1 convolution to 16 channels, then two 3 × 3 convolutions to 24 channels (padding=1 keeps the spatial size).

self.branch3x3_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
self.branch3x3_2 = nn.Conv2d(16, 24, kernel_size=3, padding=1)
self.branch3x3_3 = nn.Conv2d(24, 24, kernel_size=3, padding=1)

branch3x3 = self.branch3x3_1(x)
branch3x3 = self.branch3x3_2(branch3x3)
branch3x3 = self.branch3x3_3(branch3x3)

Finally, the four branch outputs (16 + 24 + 24 + 24 = 88 channels) are concatenated along the channel dimension (dim=1):

outputs = [branch1x1, branch5x5, branch3x3, branch_pool]
return torch.cat(outputs, dim=1)
Putting the branches together, the complete module (imports added for completeness):

import torch
import torch.nn as nn
import torch.nn.functional as F

class InceptionA(nn.Module):
    def __init__(self, in_channels):
        super(InceptionA, self).__init__()
        self.branch1x1 = nn.Conv2d(in_channels, 16, kernel_size=1)

        self.branch5x5_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch5x5_2 = nn.Conv2d(16, 24, kernel_size=5, padding=2)

        self.branch3x3_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch3x3_2 = nn.Conv2d(16, 24, kernel_size=3, padding=1)
        self.branch3x3_3 = nn.Conv2d(24, 24, kernel_size=3, padding=1)

        self.branch_pool = nn.Conv2d(in_channels, 24, kernel_size=1)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)

        branch5x5 = self.branch5x5_1(x)
        branch5x5 = self.branch5x5_2(branch5x5)

        branch3x3 = self.branch3x3_1(x)
        branch3x3 = self.branch3x3_2(branch3x3)
        branch3x3 = self.branch3x3_3(branch3x3)

        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        # Concatenate along the channel dimension: 16 + 24 + 24 + 24 = 88 channels
        outputs = [branch1x1, branch5x5, branch3x3, branch_pool]
        return torch.cat(outputs, dim=1)
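A quick shape check (a sketch; the input size 10 × 12 × 12 matches where the first inception module sits in the network on the next slide):

inception = InceptionA(in_channels=10)
out = inception(torch.randn(1, 10, 12, 12))
print(out.shape)   # torch.Size([1, 88, 12, 12]) -- spatial size preserved, 88 channels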
Using Inception Module

The network interleaves plain convolutions, max pooling and the two inception modules (InceptionA is the class defined above). Note that conv2 takes 88 input channels because each inception module outputs 16 + 24 + 24 + 24 = 88 channels, and the final linear layer takes 1408 = 88 × 4 × 4 features after the second inception module.

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(88, 20, kernel_size=5)

        self.incep1 = InceptionA(in_channels=10)
        self.incep2 = InceptionA(in_channels=20)

        self.mp = nn.MaxPool2d(2)
        self.fc = nn.Linear(1408, 10)

    def forward(self, x):
        in_size = x.size(0)
        x = F.relu(self.mp(self.conv1(x)))
        x = self.incep1(x)
        x = F.relu(self.mp(self.conv2(x)))
        x = self.incep2(x)
        x = x.view(in_size, -1)
        x = self.fc(x)
        return x
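The log on the next slide comes from a standard MNIST training and evaluation loop that is not reproduced in this deck. A minimal sketch, assuming the MNIST DataLoader, normalization constants, cross-entropy loss and momentum SGD used in the earlier lectures of this series (the same loop is reused for the residual network later in the lecture):

import torch
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumed data pipeline and hyperparameters, following earlier lectures
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.1307,), (0.3081,))])
train_loader = DataLoader(datasets.MNIST('./data', train=True, download=True, transform=transform),
                          batch_size=64, shuffle=True)
test_loader = DataLoader(datasets.MNIST('./data', train=False, transform=transform),
                         batch_size=64)

model = Net()
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

for epoch in range(10):
    running_loss = 0.0
    for batch_idx, (inputs, target) in enumerate(train_loader, 0):
        optimizer.zero_grad()
        loss = criterion(model(inputs), target)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:   # report every 300 mini-batches
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0

    # Evaluate on the test set after each epoch
    correct, total = 0, 0
    with torch.no_grad():
        for inputs, target in test_loader:
            pred = model(inputs).argmax(dim=1)
            correct += (pred == target).sum().item()
            total += target.size(0)
    print('Accuracy on test set: %d %% [%d/%d]' % (100 * correct / total, correct, total))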
Results of using Inception Module

Accuracy on test set: 9 % [982/10000]


[1, 300] loss: 0.141
[1, 600] loss: 0.031
[1, 900] loss: 0.020
Accuracy on test set: 95 % [9554/10000]
[2, 300] loss: 0.015
[2, 600] loss: 0.014
[2, 900] loss: 0.012
Accuracy on test set: 97 % [9793/10000]
……
[9, 300] loss: 0.005
[9, 600] loss: 0.005
[9, 900] loss: 0.005
Accuracy on test set: 98 % [9888/10000]
[10, 300] loss: 0.005
[10, 600] loss: 0.005
[10, 900] loss: 0.005
Accuracy on test set: 98 % [9866/10000]

Go Deeper

Can we stack layers to go deeper?

Plain nets: simply stacking 3 × 3 conv layers does not keep improving accuracy. Beyond a certain depth, both training and test error of a plain network become worse than those of a shallower one (the degradation problem), which is what motivates residual learning.

He K, Zhang X, Ren S, et al. Deep Residual Learning for Image Recognition[C]// IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 2016: 770-778.
Deep Residual Learning

Plain net: x → weight layer → relu → weight layer → relu, so the stack must learn the target mapping H(x) directly.

Residual net: x → weight layer → relu → weight layer, then the input x is added back through a skip connection before the final relu, so the stack only learns the residual F(x) and the block outputs

H(x) = F(x) + x.

Because the derivative of H with respect to x is the derivative of F plus the identity, gradients can still flow through the skip connection even when the weight layers contribute little, which is what makes very deep networks trainable.
Residual Network

(Figure from He et al.: a deep plain network shown side by side with the corresponding residual network, which adds a shortcut connection around every pair of convolutional layers.)
Implementation of Simple Residual Network

The network extends the baseline with two residual blocks (the "?" positions in the original diagram). Tensor shapes through the network:

(batch, 1, 28, 28)
  → Conv2d(1, 16, kernel_size=5) + ReLU    → (batch, 16, 24, 24)
  → MaxPool2d(2)                           → (batch, 16, 12, 12)
  → Residual Block (16 channels)           → (batch, 16, 12, 12)
  → Conv2d(16, 32, kernel_size=5) + ReLU   → (batch, 32, 8, 8)
  → MaxPool2d(2)                           → (batch, 32, 4, 4)
  → Residual Block (32 channels)           → (batch, 32, 4, 4)
  → flatten                                → (batch, 512)
  → Linear(512, 10)                        → (batch, 10)
Implementation of Residual Block

Both convolutions use a 3 × 3 kernel with padding=1 and keep the channel count, so x and F(x) have the same shape and can be added element-wise; the second ReLU is applied after the addition, implementing H(x) = F(x) + x.

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.channels = channels
        self.conv1 = nn.Conv2d(channels, channels,
                               kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels,
                               kernel_size=3, padding=1)

    def forward(self, x):
        y = F.relu(self.conv1(x))
        y = self.conv2(y)
        return F.relu(x + y)   # H(x) = F(x) + x
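A quick shape check (a sketch; any channel count works, since the block preserves both the channel count and the spatial size):

block = ResidualBlock(16)
out = block(torch.randn(1, 16, 12, 12))
print(out.shape)   # torch.Size([1, 16, 12, 12]) -- identical to the input shape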
Implementation of Simple Residual Network

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=5)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5)
        self.mp = nn.MaxPool2d(2)

        self.rblock1 = ResidualBlock(16)
        self.rblock2 = ResidualBlock(32)

        self.fc = nn.Linear(512, 10)

    def forward(self, x):
        in_size = x.size(0)
        x = self.mp(F.relu(self.conv1(x)))   # (batch, 16, 12, 12)
        x = self.rblock1(x)
        x = self.mp(F.relu(self.conv2(x)))   # (batch, 32, 4, 4)
        x = self.rblock2(x)
        x = x.view(in_size, -1)              # (batch, 512)
        x = self.fc(x)
        return x
Accuracy on test set: 9 % [916/10000]
[1, 300] loss: 0.074
[1, 600] loss: 0.021
[1, 900] loss: 0.017
Accuracy on test set: 97 % [9736/10000]
[2, 300] loss: 0.013
[2, 600] loss: 0.011
[2, 900] loss: 0.011
Accuracy on test set: 98 % [9831/10000]
……
[9, 300] loss: 0.003
[9, 600] loss: 0.004
[9, 900] loss: 0.004
Accuracy on test set: 99 % [9900/10000]
[10, 300] loss: 0.003
[10, 600] loss: 0.003
[10, 900] loss: 0.004
Accuracy on test set: 99 % [9901/10000]

Exercise 11-1: Reading Paper and Implementing

He K, Zhang X, Ren S, et al. Identity Mappings in Deep Residual Networks[C]// European Conference on Computer Vision. Springer, 2016: 630-645.

Exercise 11-2: Reading and Implementing DenseNet

Huang G, Liu Z, van der Maaten L, et al. Densely Connected Convolutional Networks[C]// IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 2017: 2261-2269.
