TensorFlow 2.0 Linear Regression

TF2 has become quite similar to PyTorch: both use dynamic graphs, which makes the code easier to write. Below is a linear regression example for reference, taken from the GitHub tutorial TensorFlow-2.x-Tutorials. TF2 uses GradientTape for backpropagation and related operations, and it feels even simpler than PyTorch because there is no need to zero the gradients between steps.
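
To see why no zeroing is needed: each with-block creates a fresh tape, so gradients from one step can never leak into the next, unlike PyTorch where the .grad buffers accumulate until optimizer.zero_grad() is called. A minimal sketch (not from the tutorial):

import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2                  # forward pass is recorded on a fresh tape
grad = tape.gradient(y, x)      # dy/dx = 2x
print(grad.numpy())             # 6.0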

import os
import numpy as np
import tensorflow as tf
from tensorflow import keras


class Regressor(keras.layers.Layer):

    def __init__(self):
        super(Regressor, self).__init__()
        # add_weight takes a shape, not a tensor
        # (add_variable is deprecated in TF2, so use add_weight)
        # w: [dim_in, dim_out]
        self.w = self.add_weight('w', shape=[13, 1])
        # b: [dim_out]
        self.b = self.add_weight('b', shape=[1])
        print(self.w.shape, self.b.shape)
        print(type(self.w), tf.is_tensor(self.w), self.w.name)
        print(type(self.b), tf.is_tensor(self.b), self.b.name)

    def call(self, x):
        # [b, 13] @ [13, 1] + [1] -> [b, 1]
        x = tf.matmul(x, self.w) + self.b
        return x

def main():

    tf.random.set_seed(22)
    np.random.seed(22)
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
    assert tf.__version__.startswith('2.')

    (x_train, y_train), (x_val, y_val) = keras.datasets.boston_housing.load_data()
    # cast the features to float32 (the labels stay float64;
    # Keras' MSE loss casts y_true to match the predictions)
    x_train, x_val = x_train.astype(np.float32), x_val.astype(np.float32)
    # (404, 13) (404,) (102, 13) (102,)
    print(x_train.shape, y_train.shape, x_val.shape, y_val.shape)
    # note: from_tensor_slices needs the tuple (x_train, y_train);
    # a list [x_train, y_train] would be converted to a single tensor
    # and fail here because the two arrays have different shapes
    db_train = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(64)
    db_val = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(102)
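    # the entire 102-sample validation set fits in a single batch,
    # so the validation loop below runs exactly once per evaluation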


    model = Regressor()
    criterion = keras.losses.MeanSquaredError()
    optimizer = keras.optimizers.Adam(learning_rate=1e-2)

    for epoch in range(200):
        for step, (x, y) in enumerate(db_train):
            with tf.GradientTape() as tape:
                # [b, 1]
                logits = model(x)
                # [b]
                logits = tf.squeeze(logits, axis=1)
                # [b] vs [b]
                loss = criterion(y, logits)

            grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))
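            # note: there is no optimizer.zero_grad() equivalent here;
            # each GradientTape starts empty, so gradients never accumulate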

        print(epoch, 'loss:', loss.numpy())

        if epoch % 10 == 0:

            for x, y in db_val:
                # [b, 1]
                logits = model(x)
                # [b]
                logits = tf.squeeze(logits, axis=1)
                # [b] vs [b]
                loss = criterion(y, logits)

                print(epoch, 'val loss:', loss.numpy())

if __name__ == '__main__':
    main()
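
For comparison, the same linear model can also be trained with the high-level Keras API, where compile/fit replaces the manual tape loop. The sketch below is not part of the original tutorial, just an equivalent setup: a single Dense(1) layer is exactly the w*x + b model above.

def main_keras():
    (x_train, y_train), (x_val, y_val) = keras.datasets.boston_housing.load_data()
    x_train, x_val = x_train.astype(np.float32), x_val.astype(np.float32)

    # one Dense unit == tf.matmul(x, w) + b with w: [13, 1], b: [1]
    model = keras.Sequential([keras.layers.Dense(1, input_shape=(13,))])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-2),
                  loss='mse')
    model.fit(x_train, y_train, batch_size=64, epochs=200,
              validation_data=(x_val, y_val), verbose=2)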