MXNet Basics (Computer Vision Algorithm Practice -- Wei Kaifeng)


In MXNet you should understand at least the three pillars: NDArray, Symbol, and Module. If training a model is like building a house, NDArray is the bricks and mortar, Symbol is each floor of the house, and Module is the overall frame.

1. NDArray

NDArray is the basic structure through which data flows in MXNet. It follows imperative programming: each statement executes immediately and returns a concrete result.

See the official documentation for a detailed tutorial with demos for each topic. The following demo, run in an anaconda virtual environment named mxnet, is straightforward and covers a small set of basics.
Code:

In [1]: import mxnet as mx

In [2]: import numpy as np

In [3]: a = mx.nd.array([[1, 2], [3, 4]])

In [4]: print(a)

[[1. 2.]
 [3. 4.]]
<NDArray 2x2 @cpu(0)>

In [5]: b = np.array([[1, 2], [3, 4]])

In [6]: print(b)
[[1 2]
 [3 4]]

In [7]: print(a.shape)
(2, 2)

In [8]: print(b.shape)
(2, 2)

In [10]: print(a.dtype)
<class 'numpy.float32'>

In [11]: print(b.dtype)
int64
In [13]: c = mx.nd.array([[1, 2], [3, 4]], dtype=np.int8)

In [14]: print(c.dtype)
<class 'numpy.int8'>

In [15]: d = np.array([[1, 2], [3, 4]], dtype=np.int8)

In [16]: print(d)
[[1 2]
 [3 4]]

In [17]: d = np.array([[1, 2], [3, 4]], dtype=np.int8)

In [18]: print(d.dtype)
int8

In [19]: c = mx.nd.array([[1,2,3,4],[5,6,7,8]])

In [20]: print(c[0,1:3])

[2. 3.]
<NDArray 2 @cpu(0)>

In [21]: print(c[1,1:3])

[6. 7.]
<NDArray 2 @cpu(0)>

In [22]: d =np.array([[1,2,3,4],[5,6,7,8]])

In [23]: print(d[0,1:3])
[2 3]

In [24]: print(c)

[[1. 2. 3. 4.]
 [5. 6. 7. 8.]]
<NDArray 2x4 @cpu(0)>

In [25]: f =c.copy()

In [26]: print(f)

[[1. 2. 3. 4.]
 [5. 6. 7. 8.]]
<NDArray 2x4 @cpu(0)>

In [27]: f[0, 0] = -1

In [28]: print(f)

[[-1.  2.  3.  4.]
 [ 5.  6.  7.  8.]]
<NDArray 2x4 @cpu(0)>

In [29]: print(c)

[[1. 2. 3. 4.]
 [5. 6. 7. 8.]]
<NDArray 2x4 @cpu(0)>

In [30]: e = c

In [31]: e[0, 0] = -1

In [32]: print(e)

[[-1.  2.  3.  4.]
 [ 5.  6.  7.  8.]]
<NDArray 2x4 @cpu(0)>

In [33]: print(c)

[[-1.  2.  3.  4.]
 [ 5.  6.  7.  8.]]
<NDArray 2x4 @cpu(0)>
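
Note the difference between the two: copy() allocates new memory, so modifying f leaves c untouched, while plain assignment (e = c) only creates another reference to the same underlying NDArray, so modifying e also changes c.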

In [34]: g = e.asnumpy()

In [35]: print(g)
[[-1.  2.  3.  4.]
 [ 5.  6.  7.  8.]]

In [36]: print(g.dtype)
float32

In [37]: print(e.dtype)
<class 'numpy.float32'>

In [38]: print(mx.nd.array(g))

[[-1.  2.  3.  4.]
 [ 5.  6.  7.  8.]]
<NDArray 2x4 @cpu(0)>

In [39]: print(e.context)
cpu(0)

In [40]: e = e.as_in_context(mx.gpu(0))

In [41]: print(e.context)
gpu(0)

In [42]: f = mx.nd.array([[2,3,4,5],[6,7,8,9]])
In [43]: print(e + f)
---------------------------------------------------------------------------
MXNetError                                Traceback (most recent call last)
<ipython-input-43-f71ad0fb8d23> in <module>()
----> 1 print(e + f)
MXNetError: [10:14:36] src/imperative/./imperative_utils.h:70: Check failed: inputs[i]->ctx().dev_mask() == ctx.dev_mask() (1 vs. 2) Operator broadcast_add require all inputs live on the same context. But the first argument is on gpu(0) while the 2-th argument is on cpu(0)

In [45]: f  = f.as_in_context(mx.gpu(0))

In [46]: print(e + f)

[[ 1.  5.  7.  9.]
 [11. 13. 15. 17.]]
<NDArray 2x4 @gpu(0)>
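
The session above touches only a few basics. Here are a few more commonly used NDArray creation and arithmetic calls, sketched against the standard mx.nd API (the shapes and values are just illustrative):

import mxnet as mx

a = mx.nd.zeros((2, 3))                        # 2x3 array of zeros, float32 by default
b = mx.nd.ones((2, 3))                         # 2x3 array of ones
c = mx.nd.random.uniform(0, 1, shape=(2, 3))   # uniform random values in [0, 1)

d = a + b * 2                                  # element-wise arithmetic, executed immediately
e = mx.nd.dot(b, c.T)                          # matrix product: (2,3) x (3,2) -> (2,2)
print(d.sum())                                 # reduce to a 1-element NDArray
print(e.shape)                                 # (2, 2)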

2. Symbol

Symbol is MXNet's module for building network layers. It follows symbolic programming: the computation graph is declared first and executed later, which lets MXNet plan memory reuse ahead of time, improving training efficiency and saving GPU memory.
Symbol official documentation

  • mxnet.symbol.Variable() defines input data; it acts like a placeholder.
  • mxnet.symbol.Convolution() defines a convolution layer's kernel size, number of filters, and other parameters. Convolution layers are the main feature-extraction layers in deep learning.
  • mxnet.symbol.BatchNorm() defines a batch normalization (BN) layer, which helps the algorithm converge.
  • mxnet.symbol.Activation() defines an activation layer, here ReLU, which adds non-linearity to the network.
  • mxnet.symbol.Pooling() defines a pooling layer, here max pooling, which shrinks the spatial dimensions to suppress feature-map noise and reduce later computation.
  • mxnet.symbol.FullyConnected() defines a fully connected layer, usually placed in the last few layers of a network; its num_hidden parameter gives the number of output classes here.
  • mxnet.symbol.SoftmaxOutput() defines a loss layer whose loss function is the cross-entropy loss. Its input is first passed through the softmax function, a transformation that maps a vector to another vector of the same length with values in [0, 1]; hence the layer's name, mxnet.symbol.SoftmaxOutput().
Code:
In [1]: import mxnet as mx

In [2]: data = mx.sym.Variable('data')

In [3]: conv = mx.sym.Convolution(data=data, num_filter=128, kernel=(3,3), pad=(1,1), name='conv1')

In [4]: bn = mx.sym.BatchNorm(data=conv, name='bn1')

In [5]: relu = mx.sym.Activation(data=bn, act_type='relu', name='relu1')

In [6]: pool  = mx.sym.Pooling(data=relu, kernel=(2,2),stride=(2,2),pool_type='max', name='pool1')

In [7]: fc = mx.sym.FullyConnected(data=pool, num_hidden=2, name='fc1')

In [8]: sym = mx.sym.Softmax(data=fc, name='softmax')
[12:53:47] src/operator/./softmax_output-inl.h:404: Softmax symbol is renamed to SoftmaxOutput. This API will be deprecated in Dec, 2015
# The list_arguments() method lists a Symbol object's arguments
In [9]: print(sym.list_arguments())
['data', 'conv1_weight', 'conv1_bias', 'bn1_gamma', 'bn1_beta', 'fc1_weight', 'fc1_bias', 'softmax_label']
# infer_shape() returns a Symbol's parameter shapes, output shapes, and auxiliary-parameter shapes for a given input shape
In [12]: arg_shape, out_shape, aux_shape = sym.infer_shape(data=(1,3,10,10))
# Where (2, 3200) comes from: the FC layer has 2 output nodes; 3200 = 128*5*5, where 5 = (10 + 2*1 - 3 + 1)/2 (the pad-1 3x3 conv keeps the 10x10 size, then 2x2 max pooling with stride 2 halves it to 5x5)
In [13]: print(arg_shape)
[(1, 3, 10, 10), (128, 3, 3, 3), (128,), (128,), (128,), (2, 3200), (2,), (1,)]
# Batch size is 1; the FC layer outputs 2 classes
In [14]: print(out_shape)
[(1, 2)]
# Auxiliary parameters: in this example, the shapes of the BN layer's moving mean and variance
In [15]:  print(aux_shape)
[(128,), (128,)]
# sym.get_internals() returns all of the symbol's internal layers
In [16]: sym_mini = sym.get_internals()

In [17]: print(sym_mini)
<Symbol group [data, conv1_weight, conv1_bias, conv1, bn1_gamma, bn1_beta, bn1_moving_mean, bn1_moving_var, bn1, relu1, pool1, fc1_weight, fc1_bias, fc1, softmax_label, softmax]>
# Pick where to truncate the network, e.g., keep everything up to the pooling layer
In [18]: sym_mini = sym.get_internals()['pool1_output']

In [19]: print(sym_mini)
<Symbol pool1>

In [20]: print(sym_mini.list_arguments())
['data', 'conv1_weight', 'conv1_bias', 'bn1_gamma', 'bn1_beta']

In [21]: fc_new = mx.sym.FullyConnected(data=sym_mini, num_hidden=5, name='fa_new')

In [22]: sym_new = mx.sym.SoftmaxOutput(data=fc_new, name='softmax_label')

In [23]: print(sym_new.list_arguments())
['data', 'conv1_weight', 'conv1_bias', 'bn1_gamma', 'bn1_beta', 'fa_new_weight', 'fa_new_bias', 'softmax_label_label']

- The bind() method

In [1]: import mxnet as mx

In [2]: data_a = mx.sym.Variable('data_a')

In [3]: data_b = mx.sym.Variable('data_b')

In [4]: data_c = mx.sym.Variable('data_c')

In [5]: s = data_c *(data_a + data_b)

In [6]: print(type(s))
<class 'mxnet.symbol.symbol.Symbol'>
# s.bind() binds concrete inputs to the defined operations, producing an executor; its first argument specifies whether execution happens on CPU or GPU
In [8]: e = s.bind(mx.cpu(), {'data_a':mx.nd.array([1,2,3]), 'data_b':mx.nd.array([4,5,6]), 'data_c':mx.nd.array([2,3,4])})

In [9]: print(e)
<mxnet.executor.Executor object at 0x7f73a8d37320>

In [10]: output = e.forward()

In [11]: print(output[0])

[10. 21. 36.]
<NDArray 3 @cpu(0)>

For comparison, the same computation with NDArray:

In [13]: import mxnet as mx

In [14]: data_a = mx.nd.array([1,2,3])

In [15]: data_b = mx.nd.array([4,5,6])

In [17]: data_c = mx.nd.array([2,3,4])

In [18]: result = data_c * (data_a + data_b)

In [19]: print(result)

[10. 21. 36.]
<NDArray 3 @cpu(0)>

Both give the same result. But once the Symbol interface has defined the computation graph, much of the GPU memory can be reused or shared; Symbol is efficient and is mainly used to define computation graphs, while NDArray is intuitive and is commonly used for concrete low-level computation.
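
One practical consequence of the graph-first style: the executor produced by bind() can be reused. A minimal sketch, using the executor's arg_dict to overwrite the bound inputs in place and run the same graph again without rebuilding it:

import mxnet as mx

data_a = mx.sym.Variable('data_a')
data_b = mx.sym.Variable('data_b')
data_c = mx.sym.Variable('data_c')
s = data_c * (data_a + data_b)

# Bind once; MXNet plans memory for the whole graph up front.
e = s.bind(mx.cpu(), {'data_a': mx.nd.array([1, 2, 3]),
                      'data_b': mx.nd.array([4, 5, 6]),
                      'data_c': mx.nd.array([2, 3, 4])})
print(e.forward()[0])                 # [10. 21. 36.]

# Overwrite a bound input in place and run the same executor again.
e.arg_dict['data_a'][:] = mx.nd.array([10, 20, 30])
print(e.forward()[0])                 # [28. 75. 144.]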
Next example: the input data is initialized with mx.nd.arange(). A 4-D data array is defined here because the data flowing through a model is almost always 4-D (NCHW). The weight is initialized as a 4-D variable of all ones and the bias as a 1-D variable of all zeros.

In [20]: data = mx.nd.arange(0,28).reshape((1,1,4,7))

In [21]: print(data)

[[[[ 0.  1.  2.  3.  4.  5.  6.]
   [ 7.  8.  9. 10. 11. 12. 13.]
   [14. 15. 16. 17. 18. 19. 20.]
   [21. 22. 23. 24. 25. 26. 27.]]]]
<NDArray 1x1x4x7 @cpu(0)>


In [23]: conv1 = mx.nd.Convolution(data=data, weight=mx.nd.ones((10,1,3,3)), bias=mx.nd.zeros((10)),num_filter=10,kernel=(3,3),name='conv1')

In [24]: print(conv1)

[[[[ 72.  81.  90.  99. 108.]
   [135. 144. 153. 162. 171.]]

  [[ 72.  81.  90.  99. 108.]
   [135. 144. 153. 162. 171.]]

  [[ 72.  81.  90.  99. 108.]
   [135. 144. 153. 162. 171.]]

  [[ 72.  81.  90.  99. 108.]
   [135. 144. 153. 162. 171.]]

  [[ 72.  81.  90.  99. 108.]
   [135. 144. 153. 162. 171.]]

  [[ 72.  81.  90.  99. 108.]
   [135. 144. 153. 162. 171.]]

  [[ 72.  81.  90.  99. 108.]
   [135. 144. 153. 162. 171.]]

  [[ 72.  81.  90.  99. 108.]
   [135. 144. 153. 162. 171.]]

  [[ 72.  81.  90.  99. 108.]
   [135. 144. 153. 162. 171.]]

  [[ 72.  81.  90.  99. 108.]
   [135. 144. 153. 162. 171.]]]]
<NDArray 1x10x2x5 @cpu(0)>
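
The output shape 1x10x2x5 follows directly from the convolution arithmetic: with a 3x3 kernel, no padding, and stride 1, the output height is 4 - 3 + 1 = 2 and the output width is 7 - 3 + 1 = 5, and the 10 filters give 10 output channels.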

3. Module

Module official documentation
The Module interface provides many convenient methods for training: you only need to pass the prepared data, hyperparameters, and so on to the corresponding methods to start training.


# Construct the Module with a network defined earlier (the sym above) and a GPU context; running watch nvidia-smi before and after bind shows the change in GPU memory usage
In [10]: mod = mx.mod.Module(symbol=sym, context=mx.gpu(0))
# In the bind call, for_training defaults to True; it is set to False here because we only run inference
In [11]: mod.bind(data_shapes=[('data',(8,3,28,28))],
    ...:          label_shapes=[('softmax_label',(8,))],
    ...:          for_training=False)
    ...:          
# Call init_params() to initialize the network parameters
In [12]: mod.init_params()

In [13]: data = mx.nd.random.uniform(0,1,shape=(8,3,28,28))

In [14]: mod.forward(mx.io.DataBatch([data]))

In [15]: print(mod.get_outputs()[0])

[[0.49787188 0.5021281 ]
 [0.49567536 0.5043246 ]
 [0.49425143 0.50574857]
 [0.49420723 0.5057928 ]
 [0.4971569  0.50284314]
 [0.49470276 0.50529724]
 [0.49467736 0.50532264]
 [0.4921246  0.5078754 ]]
<NDArray 8x2 @gpu(0)>
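
With randomly initialized parameters and random input, the predicted probabilities for both classes hover around 0.5, as expected from an untrained binary classifier.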


  • demo1
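demo1 drives training by hand: it builds the same network as before, wraps random data and labels in an NDArrayIter, then loops over epochs calling forward, backward, and update for each batch while tracking accuracy with an eval metric.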
import mxnet as mx
import logging

data = mx.sym.Variable('data')
conv = mx.sym.Convolution(data=data, num_filter=128, kernel=(3,3), pad=(1,1),
                          name='conv1')
bn = mx.sym.BatchNorm(data=conv, name='bn1')
relu = mx.sym.Activation(data=bn, act_type='relu', name='relu1')
pool = mx.sym.Pooling(data=relu, kernel=(2,2), stride=(2,2), pool_type='max',
                      name='pool1')
fc = mx.sym.FullyConnected(data=pool, num_hidden=2, name='fc1')
sym = mx.sym.SoftmaxOutput(data=fc, name='softmax')

data = mx.nd.random.uniform(0,1,shape=(1000,3,224,224))
label = mx.nd.round(mx.nd.random.uniform(0,1,shape=(1000)))
train_data = mx.io.NDArrayIter(data={'data':data},
                               label={'softmax_label':label},
                               batch_size=8,
                               shuffle=True)

print(train_data.provide_data)
print(train_data.provide_label)
mod = mx.mod.Module(symbol=sym,context=mx.gpu(0))
mod.bind(data_shapes=train_data.provide_data,
         label_shapes=train_data.provide_label)
mod.init_params()
mod.init_optimizer()
eval_metric = mx.metric.create('acc')
for epoch in range(5):
    end_of_batch = False
    eval_metric.reset()
    data_iter = iter(train_data)
    next_data_batch = next(data_iter)
    while not end_of_batch:
        data_batch = next_data_batch
        mod.forward(data_batch)
        mod.backward()
        mod.update()
        mod.update_metric(eval_metric, labels=data_batch.label)
        try:
            next_data_batch = next(data_iter)
            mod.prepare(next_data_batch)
        except StopIteration:
            end_of_batch = True
    eval_name_vals = eval_metric.get_name_value()
    print("Epoch:{} Train_Acc:{:.4f}".format(epoch, eval_name_vals[0][1]))
    arg_params, aux_params = mod.get_params()
    mod.set_params(arg_params, aux_params)
    train_data.reset()

Output:

(mxnet) yuyang@oceanshadow:~/下载/MXNet-Deep-Learning-in-Action-master/chapter3-seKnowledge-of-MXNet$ python Module_code3-1.py 
[DataDesc[data,(8, 3, 224, 224),<class 'numpy.float32'>,NCHW]]
[DataDesc[softmax_label,(8,),<class 'numpy.float32'>,NCHW]]
[17:16:36] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
Epoch:0 Train_Acc:0.4890
Epoch:1 Train_Acc:0.7620
Epoch:2 Train_Acc:0.9320
Epoch:3 Train_Acc:0.9920
Epoch:4 Train_Acc:1.0000

  • demo2
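demo2 trains the same network with a single call to mod.fit(), which internally takes care of bind, parameter and optimizer initialization, and the epoch loop; the logging setup makes the per-epoch accuracy visible.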
import mxnet as mx
import logging

data = mx.sym.Variable('data')
conv = mx.sym.Convolution(data=data, num_filter=128, kernel=(3,3), pad=(1,1), 
                         name='conv1')
bn = mx.sym.BatchNorm(data=conv, name='bn1')
relu = mx.sym.Activation(data=bn, act_type='relu', name='relu1')
pool = mx.sym.Pooling(data=relu, kernel=(2,2), stride=(2,2), pool_type='max', 
                     name='pool1')
fc = mx.sym.FullyConnected(data=pool, num_hidden=2, name='fc1')
sym = mx.sym.SoftmaxOutput(data=fc, name='softmax')

data = mx.nd.random.uniform(0,1,shape=(1000,3,224,224))
label = mx.nd.round(mx.nd.random.uniform(0,1,shape=(1000)))
train_data = mx.io.NDArrayIter(data={'data':data},
                              label={'softmax_label':label},
                              batch_size=8,
                              shuffle=True)

print(train_data.provide_data)
print(train_data.provide_label)
mod = mx.mod.Module(symbol=sym,context=mx.gpu(0))

logger = logging.getLogger()
logger.setLevel(logging.INFO)
mod.fit(train_data=train_data, num_epoch=5)

Output:

seKnowledge-of-MXNet$ python Module_code3-2.py 
[DataDesc[data,(8, 3, 224, 224),<class 'numpy.float32'>,NCHW]]
[DataDesc[softmax_label,(8,),<class 'numpy.float32'>,NCHW]]
[17:23:41] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
INFO:root:Epoch[0] Train-accuracy=0.518000
INFO:root:Epoch[0] Time cost=3.704
INFO:root:Epoch[1] Train-accuracy=0.752000
INFO:root:Epoch[1] Time cost=3.470
INFO:root:Epoch[2] Train-accuracy=0.938000
INFO:root:Epoch[2] Time cost=3.493
INFO:root:Epoch[3] Train-accuracy=0.990000
INFO:root:Epoch[3] Time cost=3.490
INFO:root:Epoch[4] Train-accuracy=1.000000
INFO:root:Epoch[4] Time cost=3.491
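
fit() also accepts the training hyperparameters as keyword arguments. A sketch with some common ones spelled out (the argument names come from the Module.fit signature; the values here are only illustrative):

mod.fit(train_data=train_data,
        num_epoch=5,
        optimizer='sgd',                                    # optimizer type
        optimizer_params={'learning_rate': 0.01},           # hyperparameters for the optimizer
        eval_metric='acc',                                  # metric reported each epoch
        batch_end_callback=mx.callback.Speedometer(8, 50))  # log throughput every 50 batches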