Lecture 07: Multiple Dimension Input

This document summarizes a PyTorch tutorial lecture on multi-dimensional inputs: samples that have several features rather than one. Using a diabetes dataset with 8 features and 1 binary label as the running example, it shows how a linear layer followed by a sigmoid activation performs logistic regression on mini-batches, how the linear layer learns a weight matrix that projects the input features down to the desired number of outputs, and how stacking such layers yields a small neural network.


PyTorch Tutorial

07. Multiple Dimension Input

Lecturer: Hongpu Liu, PyTorch Tutorial @ SLAM Research Group
Revision

Linear regression (predict a score):

x (hours)   y (points)
1           2
2           4
3           6
4           ?

Logistic regression (predict pass/fail):

x (hours)   y (pass/fail)
1           0 (fail)
2           0 (fail)
3           1 (pass)
4           ?

In both problems each sample has a single input feature x. This lecture extends the model to samples with multiple input features.
Diabetes Dataset

X1      X2      X3      X4      X5      X6      X7      X8      Y
-0.29    0.49    0.18   -0.29    0.00    0.00   -0.53   -0.03   0
-0.88   -0.15    0.08   -0.41    0.00   -0.21   -0.77   -0.67   1
-0.06    0.84    0.05    0.00    0.00   -0.31   -0.49   -0.63   0
-0.88   -0.11    0.08   -0.54   -0.78   -0.16   -0.92    0.00   1
 0.00    0.38   -0.34   -0.29   -0.60    0.28    0.89   -0.60   0
-0.41    0.17    0.21    0.00    0.00   -0.24   -0.89   -0.70   1
-0.65   -0.22   -0.18   -0.35   -0.79   -0.08   -0.85   -0.83   0
 0.18    0.16    0.00    0.00    0.00    0.05   -0.95   -0.73   1
-0.76    0.98    0.15   -0.09    0.28   -0.09   -0.93    0.07   0
-0.06    0.26    0.57    0.00    0.00    0.00   -0.87    0.10   0

Each row is one sample; the columns X1 through X8 are its eight input features and Y is its binary label.
Multiple Dimension Logistic Regression Model

With a single input feature, the logistic regression model from the previous lecture is

$$\hat{y}^{(i)} = \sigma(x^{(i)} \cdot \omega + b)$$

With eight input features, the weighted sum runs over all of them:

$$\hat{y}^{(i)} = \sigma\left( \sum_{n=1}^{8} x_n^{(i)} \cdot \omega_n + b \right)$$

The sum is an inner product between the sample's feature row vector and the weight column vector:

$$\sum_{n=1}^{8} x_n^{(i)} \cdot \omega_n = \begin{bmatrix} x_1^{(i)} & \cdots & x_8^{(i)} \end{bmatrix} \begin{bmatrix} \omega_1 \\ \vdots \\ \omega_8 \end{bmatrix}$$

so the model can be written as

$$\hat{y}^{(i)} = \sigma\left( \begin{bmatrix} x_1^{(i)} & \cdots & x_8^{(i)} \end{bmatrix} \begin{bmatrix} \omega_1 \\ \vdots \\ \omega_8 \end{bmatrix} + b \right) = \sigma(z^{(i)})$$
Mini-Batch (N samples)

$$\begin{bmatrix} \hat{y}^{(1)} \\ \vdots \\ \hat{y}^{(N)} \end{bmatrix} = \begin{bmatrix} \sigma(z^{(1)}) \\ \vdots \\ \sigma(z^{(N)}) \end{bmatrix} = \sigma\left( \begin{bmatrix} z^{(1)} \\ \vdots \\ z^{(N)} \end{bmatrix} \right)$$

The sigmoid function is applied in an element-wise fashion. Each $z^{(i)}$ is computed from the corresponding sample:

$$z^{(i)} = \begin{bmatrix} x_1^{(i)} & \cdots & x_8^{(i)} \end{bmatrix} \begin{bmatrix} \omega_1 \\ \vdots \\ \omega_8 \end{bmatrix} + b, \qquad i = 1, \ldots, N$$

Stacking all N samples turns these into a single matrix equation:

$$\underbrace{\begin{bmatrix} z^{(1)} \\ \vdots \\ z^{(N)} \end{bmatrix}}_{N \times 1} = \underbrace{\begin{bmatrix} x_1^{(1)} & \cdots & x_8^{(1)} \\ \vdots & \ddots & \vdots \\ x_1^{(N)} & \cdots & x_8^{(N)} \end{bmatrix}}_{N \times 8} \underbrace{\begin{bmatrix} \omega_1 \\ \vdots \\ \omega_8 \end{bmatrix}}_{8 \times 1} + \underbrace{\begin{bmatrix} b \\ \vdots \\ b \end{bmatrix}}_{N \times 1}$$
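A quick check that the stacked matrix form matches the per-sample inner products (a minimal sketch; the batch size and random data are illustrative, not from the lecture):

import torch

N = 4
X = torch.randn(N, 8)      # mini-batch of N samples, 8 features each
w = torch.randn(8, 1)      # weight column vector
b = torch.randn(1)         # scalar bias, broadcast over the batch

# Per-sample inner products, one row at a time
z_loop = torch.stack([X[i] @ w.squeeze() + b for i in range(N)])   # (N, 1)
# Vectorized: a single (N,8) x (8,1) matrix product
z_mat = X @ w + b                                                  # (N, 1)

print(torch.allclose(z_loop, z_mat))   # True
y_hat = torch.sigmoid(z_mat)           # sigmoid applied element-wise, still (N, 1)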
In PyTorch, this entire mini-batch computation is one Linear layer followed by a Sigmoid:

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = torch.nn.Linear(8, 1)   # learns the 8 weights and the bias
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = self.sigmoid(self.linear(x))      # (N, 8) -> (N, 1)
        return x

model = Model()
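Using the Model class above, a shape sanity check (a sketch; the batch of 4 random samples is an illustrative assumption):

import torch

model = Model()
x = torch.randn(4, 8)      # a mini-batch of 4 samples with 8 features each
y_hat = model(x)
print(y_hat.shape)         # torch.Size([4, 1]): one probability per sample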
Linear Layer

self.linear = torch.nn.Linear(8, 1)

The first argument is the size of each input sample, the second the size of each output sample. The layer maps $X \in \mathbb{R}^{N \times 8}$ to $O \in \mathbb{R}^{N \times 1}$ through a weight matrix $W$ of size $8 \times 1$ and a bias $b$, giving $O = \sigma(X \cdot W + b)$ once the sigmoid is applied.
self.linear = torch.nn.Linear(8, 2)

With two outputs per sample, $W$ has size $8 \times 2$ and the output is $O \in \mathbb{R}^{N \times 2}$.
self.linear = torch.nn.Linear(8, 6)

Likewise, with six outputs per sample, $W$ has size $8 \times 6$ and $O \in \mathbb{R}^{N \times 6}$. The in/out sizes of the layer determine the shape of the weight matrix it learns.
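One implementation detail worth knowing (a sketch; not from the lecture): torch.nn.Linear stores its weight as (out_features, in_features), i.e. the transpose of the $W$ written above, and computes $x W^T + b$:

import torch

layer = torch.nn.Linear(8, 2)
print(layer.weight.shape)               # torch.Size([2, 8]): (out_features, in_features)
print(layer.bias.shape)                 # torch.Size([2])
print(layer(torch.randn(5, 8)).shape)   # torch.Size([5, 2])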
Neural Network

Linear layers can be chained into a network: the output of one layer, passed through the sigmoid, becomes the input of the next.

$$O_1 = \sigma(X \cdot W_1 + b_1), \qquad O_2 = \sigma(O_1 \cdot W_2 + b_2), \qquad \ldots$$
Example: Artificial Neural Network

$X \in \mathbb{R}^{N \times 8}$
    Linear Layer 1:  self.linear1 = torch.nn.Linear(8, 6)
$O_1 \in \mathbb{R}^{N \times 6}$
    Linear Layer 2:  self.linear2 = torch.nn.Linear(6, 4)
$O_2 \in \mathbb{R}^{N \times 4}$
    Linear Layer 3:  self.linear3 = torch.nn.Linear(4, 1)
$\hat{Y} \in \mathbb{R}^{N \times 1}$
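Tracing a batch through this stack confirms the intermediate shapes (a sketch; N = 10 and the random input are illustrative assumptions):

import torch

linear1 = torch.nn.Linear(8, 6)
linear2 = torch.nn.Linear(6, 4)
linear3 = torch.nn.Linear(4, 1)
sigmoid = torch.nn.Sigmoid()

X = torch.randn(10, 8)            # N = 10 samples
O1 = sigmoid(linear1(X))          # (10, 6)
O2 = sigmoid(linear2(O1))         # (10, 4)
Y_hat = sigmoid(linear3(O2))      # (10, 1)
print(O1.shape, O2.shape, Y_hat.shape)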
Example: Diabetes Prediction

We now build this network for the diabetes dataset shown above: eight input features X1 through X8 per sample and one binary label Y.
The example follows the usual four steps:

1. Prepare dataset (we shall talk about dataset loading in more detail later)
2. Design model using Class (inherit from nn.Module)
3. Construct loss and optimizer (using the PyTorch API)
4. Training cycle (forward, backward, update)
Example: 1. Prepare Dataset

import numpy as np
import torch

# Load the CSV into a float32 matrix: each row is 8 features followed by the label
xy = np.loadtxt('diabetes.csv.gz', delimiter=',', dtype=np.float32)
x_data = torch.from_numpy(xy[:, :-1])     # all columns but the last: (N, 8)
y_data = torch.from_numpy(xy[:, [-1]])    # last column, kept 2-D: (N, 1)
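The bracketed index [-1] matters: it keeps the label column two-dimensional so it matches the model's (N, 1) output. A tiny sketch with stand-in data (the 5 x 9 array is an illustrative assumption):

import numpy as np

xy = np.arange(45, dtype=np.float32).reshape(5, 9)   # stand-in for 5 rows of 8 features + 1 label
print(xy[:, :-1].shape)     # (5, 8): feature matrix
print(xy[:, [-1]].shape)    # (5, 1): label column kept 2-D
print(xy[:, -1].shape)      # (5,): 1-D, which would not match (N, 1) predictions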
Example: 2. Define Model

import torch

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear1 = torch.nn.Linear(8, 6)
        self.linear2 = torch.nn.Linear(6, 4)
        self.linear3 = torch.nn.Linear(4, 1)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = self.sigmoid(self.linear1(x))   # (N, 8) -> (N, 6)
        x = self.sigmoid(self.linear2(x))   # (N, 6) -> (N, 4)
        x = self.sigmoid(self.linear3(x))   # (N, 4) -> (N, 1)
        return x

model = Model()
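As a quick check (a sketch, not from the lecture), the layer sizes imply (8*6 + 6) + (6*4 + 4) + (4*1 + 1) = 54 + 28 + 5 = 87 learnable parameters:

print(sum(p.numel() for p in model.parameters()))   # 87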
Example: 3. Construct Loss and Optimizer

Mini-batch loss function for binary classification:

$$loss = -\frac{1}{N} \sum_{n=1}^{N} \left[ y_n \log \hat{y}_n + (1 - y_n) \log(1 - \hat{y}_n) \right]$$

Update rule:

$$\omega = \omega - \alpha \frac{\partial\, cost}{\partial \omega}$$

criterion = torch.nn.BCELoss(reduction='mean')   # 'mean' averages over the batch; size_average=True is the deprecated equivalent
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
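Evaluating the loss formula by hand against BCELoss on a tiny batch (a sketch; the numbers are arbitrary):

import torch

y_hat = torch.tensor([[0.9], [0.2], [0.7]])
y = torch.tensor([[1.0], [0.0], [1.0]])

manual = -(y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat)).mean()
criterion = torch.nn.BCELoss(reduction='mean')
print(manual.item(), criterion(y_hat, y).item())   # both approximately 0.2284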
Example: 4. Training Cycle

for epoch in range(100):
    # Forward
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())

    # Backward
    optimizer.zero_grad()
    loss.backward()

    # Update
    optimizer.step()

NOTICE: this program does not use mini-batches for training; every epoch runs the full dataset through the model. We shall talk about DataLoader later.
Exercise: Try different activation functions

Some references on activation functions:

http://rasbt.github.io/mlxtend/user_guide/general_concepts/activation-functions/#activation-functions-for-artificial-neural-networks
https://dashee87.github.io/data%20science/deep%20learning/visualising-activation-functions-in-neural-networks/
https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity
For example, using ReLU as the hidden-layer activation. The output layer keeps sigmoid so that predictions stay in (0, 1) for BCELoss; ReLU there could emit exact zeros and make log(y_hat) diverge:

import torch

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear1 = torch.nn.Linear(8, 6)
        self.linear2 = torch.nn.Linear(6, 4)
        self.linear3 = torch.nn.Linear(4, 1)
        self.activate = torch.nn.ReLU()
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = self.activate(self.linear1(x))
        x = self.activate(self.linear2(x))
        x = self.sigmoid(self.linear3(x))   # keep sigmoid on the output layer
        return x

model = Model()
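A convenient way to run the exercise (a sketch, not from the lecture; FlexModel and its constructor argument are hypothetical): take the activation as a parameter, so swapping in torch.nn.Tanh(), torch.nn.Softplus(), and so on needs no further edits.

import torch

class FlexModel(torch.nn.Module):
    def __init__(self, activate=None):
        super(FlexModel, self).__init__()
        self.linear1 = torch.nn.Linear(8, 6)
        self.linear2 = torch.nn.Linear(6, 4)
        self.linear3 = torch.nn.Linear(4, 1)
        self.activate = activate if activate is not None else torch.nn.ReLU()
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = self.activate(self.linear1(x))
        x = self.activate(self.linear2(x))
        return self.sigmoid(self.linear3(x))   # output stays in (0, 1) for BCELoss

model = FlexModel(torch.nn.Tanh())   # try Tanh in place of ReLU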
