# Building Autoencoders with TensorFlow and Keras
## 1. Introduction
Autoencoders are unsupervised learning models that can be used for tasks such as data compression, feature extraction, and denoising. This article shows how to build several types of autoencoders with TensorFlow and Keras, including stacked autoencoders, denoising autoencoders, and variational autoencoders, using the MNIST dataset for the experiments.
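The snippets below assume the MNIST data has already been loaded into `mnist`, with `train_images`, `test_images`, and `X_train` holding the flattened, normalized pixel arrays. A minimal setup sketch using the TensorFlow 1.x tutorial loader (the alias assignments are assumptions made to match the later code, not part of the original text):
```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load MNIST as flattened 784-dimensional vectors scaled to [0, 1]
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# Aliases used by the snippets below (assumed names)
train_images = mnist.train.images  # shape (55000, 784)
test_images = mnist.test.images    # shape (10000, 784)
X_train = mnist.train.images       # the Keras section trains on the same images
```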
## 2. Stacked Autoencoders
### 2.1 TensorFlow Implementation
The steps to build a stacked autoencoder in TensorFlow are as follows:
1. **Define the hyperparameters**:
```python
learning_rate = 0.001
n_epochs = 20
batch_size = 100
n_batches = int(mnist.train.num_examples/batch_size)
n_inputs = 784
n_outputs = n_inputs
```
2. **Define the input and output placeholders**:
```python
x = tf.placeholder(dtype=tf.float32, name="x", shape=[None, n_inputs])
y = tf.placeholder(dtype=tf.float32, name="y", shape=[None, n_outputs])
```
3. **Define the number of neurons in the encoder and decoder layers**:
```python
n_layers = 2
n_neurons = [512, 256]
# Mirror the encoder sizes so the decoder is symmetric: [512, 256, 256, 512]
n_neurons.extend(list(reversed(n_neurons)))
n_layers = n_layers * 2
```
4. **Define the weight and bias parameters**:
```python
w = []
b = []
for i in range(n_layers):
    # Hidden-layer weights: the first layer takes the 784-dimensional input
    w.append(tf.Variable(tf.random_normal(
        [n_inputs if i == 0 else n_neurons[i - 1], n_neurons[i]]),
        name="w_{0:04d}".format(i)))
    b.append(tf.Variable(tf.zeros([n_neurons[i]]),
                         name="b_{0:04d}".format(i)))
# Output layer maps the last hidden layer back to the input dimension
w.append(tf.Variable(tf.random_normal(
    [n_neurons[n_layers - 1] if n_layers > 0 else n_inputs, n_outputs]),
    name="w_out"))
b.append(tf.Variable(tf.zeros([n_outputs]), name="b_out"))
```
5. **Build the network with sigmoid activations**:
```python
layer = x
# Pass the input through the encoder and decoder hidden layers
for i in range(n_layers):
    layer = tf.nn.sigmoid(tf.matmul(layer, w[i]) + b[i])
# Output layer reconstructs the 784-dimensional input
layer = tf.nn.sigmoid(tf.matmul(layer, w[n_layers]) + b[n_layers])
model = layer
```
6. **Define the loss function and optimizer**:
```python
mse = tf.losses.mean_squared_error
loss = mse(predictions=model, labels=y)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
optimizer = optimizer.minimize(loss)
```
7. **Train the model and reconstruct the images**:
```python
with tf.Session() as tfs:
    tf.global_variables_initializer().run()
    for epoch in range(n_epochs):
        epoch_loss = 0.0
        for batch in range(n_batches):
            X_batch, _ = mnist.train.next_batch(batch_size)
            feed_dict = {x: X_batch, y: X_batch}
            _, batch_loss = tfs.run([optimizer, loss], feed_dict)
            epoch_loss += batch_loss
        if (epoch % 10 == 9) or (epoch == 0):
            average_loss = epoch_loss / n_batches
            print('epoch: {0:04d} loss = {1:0.6f}'.format(epoch, average_loss))
    Y_train_pred = tfs.run(model, feed_dict={x: train_images})
    Y_test_pred = tfs.run(model, feed_dict={x: test_images})
```
After 20 epochs of training, the loss drops significantly:
| Epoch | Loss |
| ---- | ---- |
| 0000 | 0.156696 |
| 0009 | 0.091367 |
| 0019 | 0.078550 |
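Because the targets are the inputs themselves, a visual comparison of originals and reconstructions is often more informative than the raw loss. A minimal matplotlib sketch, assuming the `Y_test_pred` and `test_images` arrays from the training step above:
```python
import matplotlib.pyplot as plt

# Show a few test digits next to their reconstructions
n_images = 5
fig, axes = plt.subplots(2, n_images, figsize=(10, 4))
for i in range(n_images):
    axes[0, i].imshow(test_images[i].reshape(28, 28), cmap='gray')
    axes[0, i].set_title('original')
    axes[0, i].axis('off')
    axes[1, i].imshow(Y_test_pred[i].reshape(28, 28), cmap='gray')
    axes[1, i].set_title('reconstructed')
    axes[1, i].axis('off')
plt.show()
```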
### 2.2 Keras Implementation
The steps to build a stacked autoencoder in Keras are as follows:
1. **Import the libraries and define the hyperparameters and layer sizes**:
```python
import keras
from keras.layers import Dense
from keras.models import Sequential
learning_rate = 0.001
n_epochs = 20
batch_size = 100
n_batches = int(mnist.train.num_examples/batch_size)
n_inputs = 784
n_outputs = n_inputs
n_layers = 2
n_neurons = [512,256]
n_neurons.extend(list(reversed(n_neurons)))
n_layers = n_layers * 2
```
2. **Build a sequential model and add the dense layers**:
```python
model = Sequential()
model.add(Dense(units=n_neurons[0], activation='relu', input_shape=(n_inputs,)))
for i in range(1, n_layers):
    model.add(Dense(units=n_neurons[i], activation='relu'))
model.add(Dense(units=n_outputs, activation='linear'))
```
3. **Display the model summary**:
```python
model.summary()
```
The model has 1,132,816 parameters in total:
| Layer | Output Shape | Param # |
| ---- | ---- | ---- |
| dense_1 | (None, 512) | 401920 |
| dense_2 | (None, 256) | 131328 |
| dense_3 | (None, 256) | 65792 |
| dense_4 | (None, 512) | 131584 |
| dense_5 | (None, 784) | 402192 |
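Each count in the table follows from the dense-layer formula inputs × units + units (weights plus biases). A quick check that reproduces the table and the 1,132,816 total:
```python
# Dense layer parameters = inputs * units + units (weights + biases)
sizes = [784, 512, 256, 256, 512, 784]
params = [sizes[i] * sizes[i + 1] + sizes[i + 1] for i in range(len(sizes) - 1)]
print(params)       # [401920, 131328, 65792, 131584, 402192]
print(sum(params))  # 1132816
```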
4. **Compile and train the model**:
```python
model.compile(loss='mse',
              optimizer=keras.optimizers.Adam(lr=learning_rate),
              metrics=['accuracy'])
model.fit(X_train, X_train, batch_size=batch_size, epochs=n_epochs)
```
After 20 epochs, the loss drops to 0.0046:
```plaintext
Epoch 1/20
55000/55000 [==========================] - 18s - loss: 0.0193 - acc: 0.0117
Epoch 2/20
55000/55000 [==========================] - 18s - loss: 0.0087 - acc: 0.0139
...
Epoch 20/20
55000/55000 [==========================] - 16s - loss: 0.0046 - acc: 0.0171
```
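As with the TensorFlow version, the trained model can now reconstruct unseen images, here via `model.predict`. A minimal sketch, assuming the `test_images` array from the setup at the start:
```python
# Reconstruct the test set with the trained Keras autoencoder
Y_test_pred = model.predict(test_images)
print(Y_test_pred.shape)  # (10000, 784), same layout as the input images
```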
### 2.3 Flowchart
```mermaid
graph TD;
    A[Define hyperparameters] --> B[Define input and output placeholders];
    B --> C[Define neuron counts for encoder and decoder layers];
    C --> D[Define weight and bias parameters];
    D --> E[Build the network with activation functions];
    E --> F[Define loss function and optimizer];
    F --> G[Train the model and reconstruct images];
```