PyTorch Deep Learning Practice (4): Backpropagation

Contents

1. The backpropagation process

2. y = w * x

3. Homework: y = w1 * x^2 + w2 * x + b


A nonlinear activation is applied after each layer's computation. Without it, consecutive linear layers collapse: their weight matrices can be multiplied together into a single equivalent matrix, so a deep stack would be no more expressive than one linear layer.
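This collapse can be checked numerically. The following small sketch (my own illustration, not from the original post) composes two random linear layers without an activation and shows the result equals a single linear layer whose matrix is the product of the two:

```python
import torch

# Two linear layers with no activation in between...
torch.manual_seed(0)
W1 = torch.randn(4, 3)
W2 = torch.randn(2, 4)
x = torch.randn(3)

two_layers = W2 @ (W1 @ x)   # "deep" network: layer 1, then layer 2
one_layer = (W2 @ W1) @ x    # ...are equivalent to one merged layer

print(torch.allclose(two_layers, one_layer, atol=1e-5))
```

Inserting any nonlinearity (e.g. `torch.sigmoid`) between the two matmuls breaks this equivalence, which is exactly what gives depth its power.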

1. The backpropagation process

(1) Forward pass: compute the loss

(2) Backward pass: compute the gradients (which are just the partial derivatives of the loss with respect to the parameters)

(3) Update the weights with gradient descent
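The three steps above can be sketched on a single sample, checking PyTorch's autograd against the hand-derived gradient. For loss = (w*x - y)^2, the derivative is d(loss)/dw = 2*(w*x - y)*x. (This tiny example is my own; the values w=1, x=2, y=4 are chosen for easy mental arithmetic.)

```python
import torch

w = torch.tensor([1.0], requires_grad=True)
x, y = 2.0, 4.0

loss = (w * x - y) ** 2   # (1) forward pass: compute the loss
loss.backward()           # (2) backward pass: fills w.grad

manual_grad = 2 * (1.0 * x - y) * x   # hand-derived: 2*(2-4)*2 = -8.0
print(w.grad.item(), manual_grad)     # autograd agrees: -8.0 -8.0

with torch.no_grad():     # (3) gradient-descent update
    w -= 0.01 * w.grad
print(w.item())           # 1.0 - 0.01*(-8.0) = 1.08
```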

2. y = w * x

Procedure:

(1) Initialize the weight tensor

(2) Forward pass: compute the loss

(3) Backward pass: backpropagate from the loss

(4) Update the weight with gradient descent, using the gradient computed by backpropagation

Note: remember to zero the gradient manually after each update, because backward() accumulates gradients.
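The note above is worth demonstrating. backward() *accumulates* into `.grad` rather than overwriting it, so without `zero_()` a second backward pass adds its gradient onto the first (a small sketch of my own, reusing the loss = (2w - 4)^2 shape whose gradient at w=1 is exactly -8):

```python
import torch

w = torch.tensor([1.0], requires_grad=True)

((w * 2.0 - 4.0) ** 2).sum().backward()
g1 = w.grad.item()   # first gradient: -8.0

((w * 2.0 - 4.0) ** 2).sum().backward()  # no zeroing in between!
g2 = w.grad.item()   # -16.0: the two gradients added up

w.grad.zero_()
((w * 2.0 - 4.0) ** 2).sum().backward()
g3 = w.grad.item()   # -8.0 again after zeroing

print(g1, g2, g3)
```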

```python
import torch
from matplotlib import pyplot as plt

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

# Create the weight tensor
w = torch.Tensor([1.0])
w.requires_grad = True  # gradients must be tracked for w

def forward(x):
    return x * w

def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) ** 2

# Because w is a tensor, x is promoted to a tensor inside the model, so
# forward(4) returns a one-element tensor; .item() extracts the Python scalar.
print("predict(before training)", 4, forward(4).item())

loss_list = []
for epoch in range(50):
    for x, y in zip(x_data, y_data):
        l = loss(x, y)
        loss_list.append(l.item())
        l.backward()  # backward pass: stores dl/dw in w.grad
        if epoch % 5 == 0:
            print("\tw:", x, y, w.item())
        w.data -= 0.01 * w.grad.data  # gradient-descent update

        w.grad.data.zero_()  # zero the gradient before the next sample

    if epoch % 10 == 0:
        print("progress:", epoch, l.item())

print("predict(after training)", 4, forward(4).item())

plt.plot(loss_list)
plt.xlabel("iteration")  # one point per sample, i.e. 3 per epoch
plt.xlim(0, 150)
plt.ylabel("loss")
plt.show()
```

Output:

```
predict(before training) 4 4.0
	w: 1.0 2.0 1.0
	w: 2.0 4.0 1.0199999809265137
	w: 3.0 6.0 1.0983999967575073
progress: 0 7.315943717956543
	w: 1.0 2.0 1.779128909111023
	w: 2.0 4.0 1.7835463285446167
	w: 3.0 6.0 1.8008626699447632
	w: 1.0 2.0 1.9512161016464233
	w: 2.0 4.0 1.9521918296813965
	w: 3.0 6.0 1.9560165405273438
progress: 10 0.017410902306437492
	w: 1.0 2.0 1.989225149154663
	w: 2.0 4.0 1.989440679550171
	w: 3.0 6.0 1.9902853965759277
	w: 1.0 2.0 1.9976202249526978
	w: 2.0 4.0 1.9976677894592285
	w: 3.0 6.0 1.9978543519973755
progress: 20 4.143271507928148e-05
	w: 1.0 2.0 1.999474287033081
	w: 2.0 4.0 1.9994847774505615
	w: 3.0 6.0 1.999526023864746
	w: 1.0 2.0 1.9998838901519775
	w: 2.0 4.0 1.999886155128479
	w: 3.0 6.0 1.9998952150344849
progress: 30 9.874406714516226e-08
	w: 1.0 2.0 1.9999743700027466
	w: 2.0 4.0 1.9999748468399048
	w: 3.0 6.0 1.9999768733978271
	w: 1.0 2.0 1.9999942779541016
	w: 2.0 4.0 1.9999943971633911
	w: 3.0 6.0 1.9999948740005493
progress: 40 2.3283064365386963e-10
	w: 1.0 2.0 1.999998688697815
	w: 2.0 4.0 1.999998688697815
	w: 3.0 6.0 1.9999988079071045
predict(after training) 4 7.999998569488525
```

3. Homework: y = w1 * x^2 + w2 * x + b

Here the training data is generated with w1 = 2, w2 = 1, b = 1.
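A quick sanity check (my own, not in the original post) confirms that the training data used below really satisfies y = 2*x^2 + 1*x + 1 for every sample:

```python
# Ground-truth parameters and the (x, y) pairs used in the homework code
w1, w2, b = 2.0, 1.0, 1.0
x_data = [1.0, 2.0, 3.0]
y_data = [4.0, 11.0, 22.0]

checks = [w1 * x * x + w2 * x + b == y for x, y in zip(x_data, y_data)]
print(checks)  # [True, True, True]
```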

```python
import torch
from matplotlib import pyplot as plt

x_data = [1.0, 2.0, 3.0]
y_data = [4.0, 11.0, 22.0]

# Initialize the weights and the bias
w1 = torch.Tensor([0.0])
w1.requires_grad = True
w2 = torch.Tensor([0.0])
w2.requires_grad = True
b = torch.Tensor([0.0])
b.requires_grad = True

def forward(x):
    return w1 * x * x + w2 * x + b

def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) ** 2

print("predict(before training)", 4, forward(4).item())

loss_list = []
for epoch in range(100):
    for x, y in zip(x_data, y_data):
        l = loss(x, y)
        l.backward()
        loss_list.append(l.item())
        w1.data -= 0.01 * w1.grad.data
        w2.data -= 0.01 * w2.grad.data
        b.data -= 0.01 * b.grad.data
        if epoch % 10 == 0:
            print("\tw:", x, y, w1.item(), w2.item(), b.item())
        w1.grad.zero_()
        w2.grad.zero_()
        b.grad.zero_()
    if epoch % 10 == 0:
        print("progress:", epoch, l.item())

print("predict(after training)", 4, forward(4).item())

plt.plot(loss_list)
plt.xlabel("iteration")  # one point per sample, i.e. 3 per epoch
plt.xlim(0, 300)
plt.ylabel("loss")
plt.show()
```
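As an alternative sketch (my addition, not from the original post), the same quadratic model can be trained full-batch with `torch.optim.SGD`, which replaces the three manual `p.data -= lr * p.grad.data` updates and the per-parameter zeroing:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([4.0, 11.0, 22.0])

w1 = torch.zeros(1, requires_grad=True)
w2 = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.SGD([w1, w2, b], lr=0.01)

for epoch in range(2000):
    optimizer.zero_grad()              # zero all three gradients at once
    y_pred = w1 * x * x + w2 * x + b   # batched forward pass
    loss = ((y_pred - y) ** 2).mean()
    loss.backward()
    optimizer.step()                   # update all three parameters

with torch.no_grad():
    y_pred = w1 * x * x + w2 * x + b
print(y_pred)  # close to the targets [4., 11., 22.]
```

The optimizer makes no mathematical difference here; it just bundles the update and the zeroing, which scales better as the number of parameters grows.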
