Introduction to Deep Learning Concepts, TensorFlow, and Convolutional Neural Networks


Deep Learning in Practice: From Theory to Application
#### 1. Linear Regression and House Price Prediction
When using TensorFlow for data analysis, linear regression offers a simple way to predict house prices. The following code plots the actual house prices against the model's predictions as a scatter plot:
```python
#-------------------------------------------------------------------------------------------
# Plot the Predicted House Prices vs the Actual House Prices
#-------------------------------------------------------------------------------------------
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter(Y_input, pred_)   # Y_input: actual prices, pred_: model predictions
ax.set_xlabel('Actual House price')
ax.set_ylabel('Predicted House price')
plt.show()
```
The scatter plot gives a direct view of how far the predictions deviate from the actual values: the closer the points lie to the diagonal, the better the model fits.
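The arrays `Y_input` and `pred_` come from an earlier listing that is not reproduced here. As a self-contained illustration only, the following is a minimal sketch of how such predictions could be produced with a TF 1.x linear regression; the choice of scikit-learn's Boston housing data (shipped in older scikit-learn versions) and every name other than `Y_input` and `pred_` are assumptions, not the article's original code:

```python
# Minimal sketch (illustrative, not the article's original listing) of a
# TF 1.x linear regression producing the Y_input and pred_ arrays plotted above.
import numpy as np
import tensorflow as tf
from sklearn.datasets import load_boston        # removed in scikit-learn >= 1.2
from sklearn.preprocessing import scale

boston = load_boston()
X_input = scale(boston.data).astype(np.float32)            # standardized features
Y_input = boston.target.reshape(-1, 1).astype(np.float32)  # actual house prices

n_dim = X_input.shape[1]
X = tf.placeholder(tf.float32, [None, n_dim])
Y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.random_normal([n_dim, 1], stddev=0.01))
b = tf.Variable(tf.zeros([1]))

pred = tf.matmul(X, w) + b                      # linear model
cost = tf.reduce_mean(tf.square(pred - Y))      # mean squared error
train = tf.train.GradientDescentOptimizer(0.01).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        sess.run(train, feed_dict={X: X_input, Y: Y_input})
    pred_ = sess.run(pred, feed_dict={X: X_input})   # predictions to plot
```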
#### 2. Multiclass Classification with Full-Batch Gradient Descent (Using the SoftMax Function)
For multiclass classification problems, we can combine full-batch gradient descent with the SoftMax function. The example here uses the MNIST dataset, which has 10 output classes corresponding to the digits 0 through 9; the standard definitions and the full listing follow.
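For reference, the SoftMax probabilities and the cross-entropy cost that the listing implements are the standard definitions, where $z = Xw + b$ are the logits for one example, $m$ is the number of training examples, and $y^{(i)}$ is the one-hot label of example $i$:

$$p_k = \frac{e^{z_k}}{\sum_{j=1}^{10} e^{z_j}}, \qquad C = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{10} y_k^{(i)} \log p_k^{(i)}$$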
```python
#-------------------------------------------------------------------------------------------
# Import the required libraries
#-------------------------------------------------------------------------------------------
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

#-------------------------------------------------------------------------------------------
# Function to read the MNIST dataset along with the labels
#-------------------------------------------------------------------------------------------
def read_infile():
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
    train_X, train_Y, test_X, test_Y = mnist.train.images, mnist.train.labels, \
                                       mnist.test.images, mnist.test.labels
    return train_X, train_Y, test_X, test_Y

#-------------------------------------------------------------------------------------------
# Define the placeholders, weights, and biases for the neural network
#-------------------------------------------------------------------------------------------
def weights_biases_placeholder(n_dim, n_classes):
    X = tf.placeholder(tf.float32, [None, n_dim])
    Y = tf.placeholder(tf.float32, [None, n_classes])
    w = tf.Variable(tf.random_normal([n_dim, n_classes], stddev=0.01), name='weights')
    b = tf.Variable(tf.random_normal([n_classes]), name='biases')
    return X, Y, w, b

#-------------------------------------------------------------------------------------------
# Define the forward pass
#-------------------------------------------------------------------------------------------
def forward_pass(w, b, X):
    out = tf.matmul(X, w) + b
    return out

#-------------------------------------------------------------------------------------------
# Define the cost function for the SoftMax unit
#-------------------------------------------------------------------------------------------
def multiclass_cost(out, Y):
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=out, labels=Y))
    return cost

#-------------------------------------------------------------------------------------------
# Define the initialization op
#-------------------------------------------------------------------------------------------
def init():
    return tf.global_variables_initializer()

#-------------------------------------------------------------------------------------------
# Define the training op
#-------------------------------------------------------------------------------------------
def train_op(learning_rate, cost):
    op_train = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
    return op_train

train_X, train_Y, test_X, test_Y = read_infile()
X, Y, w, b = weights_biases_placeholder(train_X.shape[1], train_Y.shape[1])
out = forward_pass(w, b, X)
cost = multiclass_cost(out, Y)
learning_rate, epochs = 0.01, 1000
op_train = train_op(learning_rate, cost)
init_op = init()
loss_trace = []
accuracy_trace = []

#-------------------------------------------------------------------------------------------
# Activate the TensorFlow session and execute full-batch gradient descent
#-------------------------------------------------------------------------------------------
with tf.Session() as sess:
    sess.run(init_op)
    for i in range(epochs):
        sess.run(op_train, feed_dict={X: train_X, Y: train_Y})
        loss_ = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
        accuracy_ = np.mean(np.argmax(sess.run(out, feed_dict={X: train_X, Y: train_Y}), axis=1)
                            == np.argmax(train_Y, axis=1))
        loss_trace.append(loss_)
        accuracy_trace.append(accuracy_)
        if (i + 1) % 100 == 0:
            print('Epoch:', i + 1, 'loss:', loss_, 'accuracy:', accuracy_)
```
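Note that `test_X` and `test_Y` are loaded but never used in the listing above, which reports training accuracy only. A minimal sketch of a held-out evaluation, assuming it is placed inside the same `tf.Session` block immediately after the training loop:

```python
    # Sketch: evaluate on the held-out MNIST test set (inside the same session)
    test_pred = sess.run(out, feed_dict={X: test_X})
    test_accuracy = np.mean(np.argmax(test_pred, axis=1) == np.argmax(test_Y, axis=1))
    print('Test set accuracy:', test_accuracy)
```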