[Deep Learning Frameworks] MNIST Classification with a CNN in Keras

This post shows how to build a convolutional neural network (CNN) with the Keras deep learning framework to classify the MNIST handwritten digit dataset. The MNIST data is first loaded and preprocessed; a CNN consisting of two convolutional layers, two max-pooling layers, and fully connected layers is then assembled with the Sequential API and trained with the Adam optimizer. Finally, the model is evaluated on the test set.


# coding: utf-8
import numpy as np
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Activation, Convolution2D, MaxPooling2D, Flatten
from keras.optimizers import Adam
np.random.seed(1337)

# download the mnist
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()

# data pre-processing
X_train = X_train.reshape(-1, 1, 28, 28) / 255.0   # scale pixel values to [0, 1]
X_test = X_test.reshape(-1, 1, 28, 28) / 255.0
Y_train = np_utils.to_categorical(Y_train, num_classes=10)
Y_test = np_utils.to_categorical(Y_test, num_classes=10)
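# optional sanity check: X_train is now (60000, 1, 28, 28) and Y_train is
# one-hot encoded with shape (60000, 10)
print(X_train.shape, Y_train.shape)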

# build CNN
model = Sequential()

# conv layer 1 output shape(32, 28, 28)
model.add(Convolution2D(filters=32,
                       kernel_size=5,
                       strides=1,
                       padding='same',
                       batch_input_shape=(None, 1, 28, 28),
                       data_format='channels_first'))
model.add(Activation('relu'))

# pooling layer1 (max pooling) output shape(32, 14, 14)
model.add(MaxPooling2D(pool_size=2, 
                       strides=2, 
                       padding='same', 
                       data_format='channels_first'))

# conv layer 2 output shape (64, 14, 14)
model.add(Convolution2D(filters=64,
                        kernel_size=5,
                        strides=1,
                        padding='same',
                        data_format='channels_first'))
model.add(Activation('relu'))

# pooling layer 2 (max pooling) output shape (64, 7, 7)
model.add(MaxPooling2D(pool_size=2,
                       strides=2,
                       padding='same',
                       data_format='channels_first'))

# full connected layer 1 input shape (64*7*7=3136), output shape (1024)
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation('relu'))

# full connected layer 2 to shape (10) for 10 classes
model.add(Dense(10))
model.add(Activation('softmax'))
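
# optional: print the layer-by-layer architecture and parameter counts
model.summary()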

# define optimizer
adam = Adam(lr=1e-4)
model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy'])

# training
print('Training')
model.fit(X_train, Y_train, epochs=1, batch_size=64)

# testing
print('Testing')
loss, accuracy = model.evaluate(X_test, Y_test)
print('loss:', loss, 'accuracy:', accuracy)
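
To inspect individual predictions after training, here is a minimal sketch using model.predict; the slice of the first five test images and the variable names are just illustrative:

# predict class probabilities for a handful of test images
probs = model.predict(X_test[:5])
# convert the softmax outputs and the one-hot labels back to digit labels
predicted_digits = np.argmax(probs, axis=1)
true_digits = np.argmax(Y_test[:5], axis=1)
print('predicted:', predicted_digits)
print('true:     ', true_digits)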

