ultralytics/yolov5 Network Model Code Walkthrough

This article takes a close look at the YOLOv5 network structure. By analyzing the `yolov5s.yaml` configuration file and the model instantiation code in `train.py`, it explains in detail how the model is built. It first introduces the parameter settings in `yolov5s.yaml`, then walks through the instantiation of the `Model` class in `train.py` and the `parse_model` function, showing how the network layers are generated from the configuration file. Finally, the code is run and the output is shown to confirm that the model builds correctly.


First, the YOLOv5 code repository: ultralytics/yolov5

There are already plenty of YOLOv5 code analyses online, so why write another one?

Because I have only ever half understood the model code, and if I don't write it down now I may never really get through it. (Note: some of the comments in the code below were added while I was first learning the YOLOv5 code from other bloggers' analyses. Links so you can learn from the experts: "yolov5深度剖析+源码debug级讲解系列(二)backbone构建" and "YOLOV5训练代码train.py注释与解析".)

The code that builds the YOLOv5 network model mainly consists of common.py in the models folder, the yolov5(s, m, l, x).yaml configuration files, and yolo.py, also in the models folder.

common.py defines all of the building-block modules used in the YOLOv5 network. That code is short and very readable, so it is not discussed in detail here.
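To give a flavor of these modules, here is a paraphrased sketch of the standard Conv block from common.py (Conv2d followed by BatchNorm2d and a SiLU activation); details differ slightly between YOLOv5 versions, so treat this as an illustration rather than a verbatim copy:

import torch.nn as nn


def autopad(k, p=None):  # kernel, padding
    # pad to keep the output the same spatial size when no padding is given
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]
    return p


class Conv(nn.Module):
    # Standard convolution block: Conv2d -> BatchNorm2d -> SiLU
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))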

YOLOv5 comes in four model sizes: small (s), medium (m), large (l), and extra-large (x). The yolov5s.yaml configuration file is used here as the running example; see below:

# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  # from: -1 means the previous layer's output is this layer's input (Focus is layer 0, and indices increase from there)
  # number: the module is repeated max(round(depth_multiple * number), 1) times
  # module: the module class defined in common.py to instantiate
  # args: the arguments passed to the module (analyzed in detail in yolo.py below)

  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, C3, [1024, False]],  # 9
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
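Two of these parameters, depth_multiple and width_multiple, perform the model scaling that distinguishes s/m/l/x. A minimal sketch of the arithmetic (my own illustration, mirroring make_divisible from utils/general.py and the depth-gain formula that appears later in parse_model):

import math

def make_divisible(x, divisor):
    # round a channel count up to the nearest multiple of divisor
    return math.ceil(x / divisor) * divisor

gd, gw = 0.33, 0.50  # depth_multiple, width_multiple for yolov5s

# width: the nominal 64 output channels of the Focus layer become 32
print(make_divisible(64 * gw, 8))    # 32
print(make_divisible(1024 * gw, 8))  # 512

# depth: a C3 block listed with number=9 is repeated max(round(9 * gd), 1) = 3 times
print(max(round(9 * gd), 1))         # 3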

With the yolov5s.yaml configuration fresh in mind, turn to the model instantiation code in train.py, shown below:

model = Model(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create

Stepping into the Model class, the cfg argument receives the yolov5s.yaml configuration file, ch is the number of input image channels (3), and nc is the number of classes in the detection task. Continuing down in __init__(), the yolov5s model parsing is found in the line below:

self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch])  # model, savelist

parse_model is given the yolov5s.yaml configuration (as a dict), and ch=[ch] means ch=[3] is passed in; this list stores the image channel count and is used later. The function returns self.model. Step into parse_model, shown below:

def parse_model(d, ch):  # model_dict, [input_channels(3)]--->[3]
    LOGGER.info('\n%3s%18s%3s%10s  %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments'))
    anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
    # anchors is the anchor list from yolov5s.yaml
    # read the parameters out of the config dict; na is the number of anchors per detection layer
    na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors  # number of anchors
    # no is the number of output channels of the final layer
    # no = number of anchors * (number of classes + objectness + the 4 box coordinates xywh)
    # e.g. for the COCO dataset: 255 = 3 * (80 + 5)
    no = na * (nc + 5)  # number of outputs = anchors * (classes + 5)
    layers, save, c2 = [], [], ch[-1]  # layers, savelist, ch out
    # iterate over the backbone and head configs; f, n, m, args are the input layer index, the default repeat count, the module type and the module arguments
    for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']):  # from, number, module, args
        m = eval(m) if isinstance(m, str) else m  # eval strings
        for j, a in enumerate(args):
            try:
                args[j] = eval(a) if isinstance(a, str) else a  # eval strings
            except:
                pass
        # the network uses n * gd to scale module depth; for yolov5s, gd = 0.33, i.e. the default depth is scaled to 1/3
        # depth here means the repeat count of CSP-style modules, while width usually refers to the feature-map channels
        n = n_ = max(round(n * gd), 1) if n > 1 else n  # depth gain
        # for these module types, ch stores the output channels of every previous module; ch[-1] is the previous module's output channels and args[0] is the default output channels
        if m in [Focus, Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
                 BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]:
            # args[0] is the module's output channel count
            c1, c2 = ch[f], args[0]
            if c2 != no:  # if not output
                # gw = 0.5 is width_multiple read from yolov5s.yaml and scales the feature-map channels
                # round the intermediate feature-map output channels to a multiple of 8
                c2 = make_divisible(c2 * gw, 8)
            # rewrite each layer's args list from the backbone/head of yolov5s.yaml as [in_channels, out_channels, *rest], where the remaining args are e.g. kernel_size and stride
            args = [c1, c2, *args[1:]]
            # only BottleneckCSP, C3, C3TR and C3Ghost have their internal repeat count adjusted by the depth parameter n
            if m in [BottleneckCSP, C3, C3TR, C3Ghost]:
                args.insert(2, n)  # number of repeats
                n = 1
        # the remaining module types are handled below:
        # nn.BatchNorm2d keeps the channel count unchanged
        # for Concat, f is the list of indices of the layers to concatenate, so the output channels c2 is the sum over those layers
        # Detect is the detection head; it is covered in more detail later
        # Contract and Expand are not currently used in the model
        elif m is nn.BatchNorm2d:
            args = [ch[f]]
        elif m is Concat:
            c2 = sum([ch[x] for x in f])
        elif m is Detect:
            args.append([ch[x] for x in f])
            if isinstance(args[1], int):  # number of anchors
                args[1] = [list(range(args[1] * 2))] * len(f)
        elif m is Contract:
            c2 = ch[f] * args[0] ** 2
        elif m is Expand:
            c2 = ch[f] // args[0] ** 2
        else:
            c2 = ch[f]
        # the *[m(*args) for _ in range(n)] expression is the key step:
        # it turns each layer's configuration from yolov5s.yaml into an actual network layer
        # every layer is width-scaled, and C3-style modules are additionally depth-scaled
        m_ = nn.Sequential(*[m(*args) for _ in range(n)]) if n > 1 else m(*args)  # module
        t = str(m)[8:-2].replace('__main__.', '')  # module type
        np = sum([x.numel() for x in m_.parameters()])  # number params
        m_.i, m_.f, m_.type, m_.np = i, f, t, np  # attach index, 'from' index, type, number params
        #                  from  n    params  module                                  arguments
        # 0                -1  1      3520  models.common.Focus                     [3, 32, 3]
        LOGGER.info('%3s%18s%3s%10.0f  %-40s%-30s' % (i, f, n_, np, t, args))  # print
        save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1)  # append to savelist
        layers.append(m_)
        if i == 0:
            ch = []
        # after the module is built (and wrapped in nn.Sequential() when n > 1), its output channel count is appended to ch, so the next module can read c1 = ch[f] (usually ch[-1]) and use the previous module's output channels as its own input channels
        ch.append(c2)
    return nn.Sequential(*layers), sorted(save)
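To make the channel bookkeeping concrete, here is a hand-traced run through the first backbone layers of yolov5s, plus the Detect layer at the end (my own illustration using the formulas above, not code from the repository):

# Hand trace of parse_model for yolov5s (gd = 0.33, gw = 0.50); illustration only.
ch = [3]           # input channels, passed in as ch=[3]

# layer 0: [-1, 1, Focus, [64, 3]]
#   c1 = ch[-1] = 3, c2 = make_divisible(64 * 0.50, 8) = 32, args -> [3, 32, 3], i.e. Focus(3, 32, 3)
ch = [32]          # ch is reset to [] at i == 0, then c2 is appended

# layer 1: [-1, 1, Conv, [128, 3, 2]]
#   c1 = 32, c2 = make_divisible(128 * 0.50, 8) = 64, args -> [32, 64, 3, 2]
ch = [32, 64]

# layer 2: [-1, 3, C3, [128]]
#   n = max(round(3 * 0.33), 1) = 1, c2 = 64, args -> [64, 64, 1] (repeat count inserted at index 2)
ch = [32, 64, 64]

# ... and so on; for the final layer [[17, 20, 23], 1, Detect, [nc, anchors]],
# parse_model appends [ch[17], ch[20], ch[23]] = [128, 256, 512] to args,
# so Detect receives the P3/P4/P5 output channel counts.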

That covers the code that builds the YOLOv5 network model. Next, test it from yolo.py. I tested on a CPU, so the input is a tensor of size torch.rand(1, 3, 640, 640).

Run in a terminal: python yolo.py --profile

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--profile', action='store_true', help='profile model speed')
    opt = parser.parse_args()
    opt.cfg = check_file(opt.cfg)  # check file
    set_logging()
    device = select_device(opt.device)

    # Create model
    model = Model(opt.cfg).to(device)
    model.train()

    # Profile
    if opt.profile:
        img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device)
        print(img.size())
        y = model(img)
        print(len(y), y[0].size(), y[1].size(), y[2].size())

The test result is attached below. (The output sizes may differ from a stock model because I made changes on top of the YOLOv5 model; don't worry about that, getting output at all shows that the network runs.)

torch.Size([1, 3, 640, 640])
3 torch.Size([1, 3, 80, 80, 65]) torch.Size([1, 3, 40, 40, 65]) torch.Size([1, 3, 20, 20, 65])
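In training mode, Detect returns one tensor per detection layer with shape [batch, na, grid_y, grid_x, nc + 5]. The grid sizes come from strides 8/16/32 on the 640x640 input, and the last dimension is nc + 5 (65 here because of my modified class count; a stock COCO model would show 85). A quick check of that arithmetic (illustration only; nc = 60 is inferred from the 65):

na, nc = 3, 60              # anchors per layer; nc inferred from 65 = nc + 5 (stock COCO uses nc = 80)
img_size, strides = 640, (8, 16, 32)

for s in strides:
    print((1, na, img_size // s, img_size // s, nc + 5))
# (1, 3, 80, 80, 65)
# (1, 3, 40, 40, 65)
# (1, 3, 20, 20, 65)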
