tf.keras BatchNormalization error: Layer ModuleWrapper has arguments in `__init__` and therefore must override `get_config()`

This post looks at how to handle errors caused by mixing tf and tf.keras when building models with tf.keras. It focuses on adjusting how BatchNormalization is imported and on compatibility issues across TensorFlow versions, and recommends tf.keras.layers.BatchNormalization as the solution for newer TF versions.


When building a model with tf.keras, batch normalization is often added like this:


from keras.layers.normalization import BatchNormalization

But training then fails with the error from the title:

NotImplementedError: Layer ModuleWrapper has arguments in `__init__` and therefore must override `get_config()`

This happens because we mixed tf with standalone keras (note: keras, not tf.keras).

And if we instead switch to the following (usable on older TF versions):

from tensorflow.contrib.layers.python.layers import batch_norm 

mModel.add(tf.keras.layers.batch_norm())

then the error becomes:

ModuleNotFoundError: No module named 'tensorflow.contrib'

This is because our TensorFlow version is too new. The usual advice online is to downgrade, and there are other, more involved workarounds. But all we actually need to do is replace it with:
   mModel.add(tf.keras.layers.BatchNormalization())

and that's it, the error goes away.
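To make the point concrete, here is a minimal sketch (assumes TensorFlow 2.x is installed; the model and layer sizes are made up for illustration) where BatchNormalization is used entirely through the tf.keras namespace, so no tf/keras mixing occurs:

```python
import tensorflow as tf

# Build a tiny model using only the tf.keras namespace throughout
mModel = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.BatchNormalization(),  # normalizes activations per mini-batch
    tf.keras.layers.Dense(1),
])
mModel.compile(optimizer="adam", loss="mse")
print(mModel.output_shape)  # (None, 1)
```

Because every layer comes from the same tf.keras package, Keras can serialize the model without wrapping layers in ModuleWrapper, which is what triggered the get_config() error.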

One more note: for most APIs that exist in older TF versions but fail to import in newer ones, the fix is usually just to update the import path. If you don't know the new path, check the official docs or search online. Failing that, you can explore the module tree level by level yourself with dir(tf.keras.layers).
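For example, a quick sketch of that dir() exploration (assumes TF 2.x; the filter string is just one way to narrow the listing):

```python
import tensorflow as tf

# List layer names under tf.keras.layers that mention "Normalization"
candidates = [name for name in dir(tf.keras.layers) if "Normalization" in name]
print(candidates)  # includes 'BatchNormalization', 'LayerNormalization', ...
```

The same trick works one level at a time: dir(tf) to find keras, dir(tf.keras) to find layers, and so on down to the symbol you need.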

A related case hits the same class of error: the following Transformer-on-MNIST script trains, but fails when calling model.save():

```
import tensorflow as tf
from keras import datasets, layers, models
import matplotlib.pyplot as plt

# Load MNIST: train images/labels, test images/labels
(train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data()

# Normalize pixel values into [0, 1]. (For grayscale images each pixel is
# at most 255 and at least 0, so dividing by 255 completes the normalization.)
train_images, test_images = train_images / 255.0, test_images / 255.0

# Check data shapes
print(train_images.shape, test_images.shape, train_labels.shape, test_labels.shape)

# Reshape the data into the format we need
train_images = train_images.reshape((60000, 28, 28, 1))
test_images = test_images.reshape((10000, 28, 28, 1))
print(train_images.shape, test_images.shape, train_labels.shape, test_labels.shape)

train_images = train_images.astype("float32") / 255.0

def image_to_patches(images, patch_size=4):
    batch_size = tf.shape(images)[0]
    patches = tf.image.extract_patches(
        images=images[:, :, :, tf.newaxis],
        sizes=[1, patch_size, patch_size, 1],
        strides=[1, patch_size, patch_size, 1],
        rates=[1, 1, 1, 1],
        padding="VALID"
    )
    return tf.reshape(patches, [batch_size, -1, patch_size*patch_size*1])

class TransformerBlock(tf.keras.layers.Layer):
    def __init__(self, embed_dim, num_heads):
        super().__init__()
        self.att = tf.keras.layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
        self.ffn = tf.keras.Sequential([
            tf.keras.layers.Dense(embed_dim*4, activation="relu"),
            tf.keras.layers.Dense(embed_dim)
        ])
        self.layernorm1 = tf.keras.layers.LayerNormalization()
        self.layernorm2 = tf.keras.layers.LayerNormalization()

    def call(self, inputs):
        attn_output = self.att(inputs, inputs)
        out1 = self.layernorm1(inputs + attn_output)
        ffn_output = self.ffn(out1)
        return self.layernorm2(out1 + ffn_output)

class PositionEmbedding(tf.keras.layers.Layer):
    def __init__(self, max_len, embed_dim):
        super().__init__()
        self.pos_emb = tf.keras.layers.Embedding(input_dim=max_len, output_dim=embed_dim)

    def call(self, x):
        positions = tf.range(start=0, limit=tf.shape(x)[1], delta=1)
        return x + self.pos_emb(positions)

def build_transformer_model():
    inputs = tf.keras.Input(shape=(49, 16))  # 4x4 patches
    x = tf.keras.layers.Dense(64)(inputs)    # embedding dimension 64
    # Add positional encoding
    x = PositionEmbedding(max_len=49, embed_dim=64)(x)
    # Stack Transformer blocks
    x = TransformerBlock(embed_dim=64, num_heads=4)(x)
    x = TransformerBlock(embed_dim=64, num_heads=4)(x)
    # Classification head
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
    return tf.keras.Model(inputs=inputs, outputs=outputs)

model = build_transformer_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Preprocess the data
train_images_pt = image_to_patches(train_images[..., tf.newaxis])
test_images_pt = image_to_patches(test_images[..., tf.newaxis])

history = model.fit(
    train_images_pt, train_labels,
    validation_data=(test_images_pt, test_labels),
    epochs=10, batch_size=128
)
```

The save call then raises:

```
Exception has occurred: NotImplementedError
Layer PositionEmbedding has arguments ['self', 'max_len', 'embed_dim'] in `__init__`
and therefore must override `get_config()`.

Example:

class CustomLayer(keras.layers.Layer):
    def __init__(self, arg1, arg2):
        super().__init__()
        self.arg1 = arg1
        self.arg2 = arg2

    def get_config(self):
        config = super().get_config()
        config.update({
            "arg1": self.arg1,
            "arg2": self.arg2,
        })
        return config

  File "D:\source\test3\transform.py", line 129, in <module>
    model.save('transform_model.keras')
NotImplementedError: Layer PositionEmbedding has arguments ['self', 'max_len', 'embed_dim'] in `__init__` and therefore must override `get_config()`.
```
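The fix follows the pattern the traceback itself suggests: any custom layer whose `__init__` takes extra arguments must override get_config() so Keras can serialize and later rebuild it. A sketch of the corrected PositionEmbedding (assumes TF 2.x; the same change would be needed for TransformerBlock):

```python
import tensorflow as tf

class PositionEmbedding(tf.keras.layers.Layer):
    def __init__(self, max_len, embed_dim, **kwargs):
        super().__init__(**kwargs)
        # Keep the constructor arguments so get_config() can report them
        self.max_len = max_len
        self.embed_dim = embed_dim
        self.pos_emb = tf.keras.layers.Embedding(input_dim=max_len, output_dim=embed_dim)

    def call(self, x):
        positions = tf.range(start=0, limit=tf.shape(x)[1], delta=1)
        return x + self.pos_emb(positions)

    def get_config(self):
        # Merge our arguments into the base config so from_config() can rebuild the layer
        config = super().get_config()
        config.update({"max_len": self.max_len, "embed_dim": self.embed_dim})
        return config

layer = PositionEmbedding(max_len=49, embed_dim=64)
cfg = layer.get_config()
print(cfg["max_len"], cfg["embed_dim"])  # 49 64
```

With get_config() in place on every custom layer, model.save('transform_model.keras') can serialize the model; loading it back still requires passing the custom classes via the custom_objects argument (or registering them for serialization).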