YOLOv8-Based Pet Recognition System (Graduation Project)
(This write-up includes two YOLOv8 modifications: adding an SE attention mechanism and adding a new convolution module.)
(Other recognition targets such as faces, traffic lights, crop leaves, fish, etc. can be handled simply by swapping in a different dataset.)
I. Environment Setup
1. PyCharm
PyCharm must be the Professional edition; I used PyCharm 2024.2.1.
2. Cloud server (AutoDL)
(1) Renting an AutoDL instance
AutoDL website: https://www.autodl.com
1. Create an instance
After registering and logging in (student verification is available), enter the console; on the left choose Container Instance → Rent New Instance.
On the rental page: choose a billing method (pay-as-you-go is recommended if usage is light), choose a suitable host, choose the number of GPUs for the instance (more GPUs can be added after creation), and choose an image. The built-in images come with different deep learning frameworks preinstalled, so a preconfigured YOLOv8 image can be selected directly. (A newer YOLO11 image could also be used, but I have not checked whether the steps for adding the improvements and fine-tuning the network are the same as for v8.) Finally, click create.
3. Connecting PyCharm to the AutoDL cloud server
Software: Xshell, Xftp, PyCharm
(1) SSH login
Start the instance in no-GPU (card-free) mode and log in.
Get the SSH login port and password.
Switch back to PyCharm and go to Project → Settings → Python Interpreter → Add Interpreter.
Click On SSH.
Login command: ssh -p 30558 root@connect.666.seetacloud.com
Host: connect.666.seetacloud.com
Port: 30558
Key: the instance password.
(2) Connecting to the cloud server via SFTP in PyCharm
1. SFTP configuration
In the menu bar, go to Tools → Deployment → Configuration.
Click the plus sign in the upper-left corner and choose SFTP (connection settings).
My YOLO code on the cloud server is stored in /root/ultralytics, so Root Path is set to /root/ultralytics.
Switch to the Mappings tab. Its purpose is to map the local project folder to the project folder on the server, so enter the project folder here. Note that the path must be relative to the Root Path, not an absolute path: for example, if Root Path is /root/ultralytics and the project's absolute path is also /root/ultralytics, then the Deployment Path under Mappings is simply /. If the folder does not exist on the server yet, create it first.
2. Downloading the server project to the local machine
In the menu bar, go to Tools → Deployment → Browse Remote Host.
Click the drop-down menu next to None (shown in the figure above) and select the connection we just named, AutoDL_server. The following panel appears:
In the menu bar, go to Tools → Deployment → Download from AutoDL_server; the project on the server is then downloaded locally and appears in the project tree on the left.
Now the files can be edited. In the menu bar, enable Tools → Deployment → Automatic Upload, so that any change to a file is automatically uploaded to the server.
Then double-click the file you want to debug and run or debug it directly; the program can of course also be run from the command line.
3. Project pull failures
If pulling the project fails, check:
① whether the cloud server is powered on;
② that "Use Rsync for download/upload/sync" is NOT selected;
③ whether the cloud server has the rsync package installed. Run
rsync --version
to check whether rsync is installed.
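If rsync is missing, it can usually be installed directly on the instance; a minimal sketch, assuming an Ubuntu/Debian-based AutoDL image:

apt update && apt install -y rsync
rsync --version   # confirm the installation succeeded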
II. Dataset Construction
1. Data selection
Open dataset platforms worth considering: Hugging Face, Aliyun Tianchi, Baidu PaddlePaddle (AI Studio), Roboflow (YOLO-oriented). Limited by my hardware, only The Oxford-IIIT Pet Dataset is used here; if spare time allows later, multiple datasets may be mixed in to enlarge the data volume.
2. Data format preparation
Taking The Oxford-IIIT Pet Dataset that I use as an example, its layout is:
- images: the original images.
- annotations/: image segmentation masks and other annotation information.
- trimaps: segmentation masks classifying background, pet, and other regions.
- groundtruth: the ground-truth label for each image.
- labels.txt: details and the label index for each pet breed.
The format YOLO needs does not match these files, so the .xml annotations have to be converted into the .txt format YOLO expects.
① Compute the labels: create a new voc_label.py
# TODO: convert the Oxford-IIIT Pet Dataset .xml annotations into the YOLO .txt format
#       and split the data into training and validation sets
import os
import random
import shutil

from lxml import etree

images_path = r"D:\Last Dance\The Oxford-IIIT Pet Dataset_datasets\images\images"
annotations_path = r"D:\Last Dance\The Oxford-IIIT Pet Dataset_datasets\Annotations\demo_xmls"
trainval_file = r"D:\Last Dance\The Oxford-IIIT Pet Dataset_datasets\Annotations\trainval.txt"
train_image_path = r"D:\Last Dance\pet_demo/train/images"
train_txt_path = r"D:\Last Dance\pet_demo/train/labels"
val_image_path = r"D:\Last Dance\pet_demo/val/images"
val_txt_path = r"D:\Last Dance\pet_demo/val/labels"

os.makedirs(train_image_path, exist_ok=True)
os.makedirs(train_txt_path, exist_ok=True)
os.makedirs(val_image_path, exist_ok=True)
os.makedirs(val_txt_path, exist_ok=True)


def get_classes(xml_folder):
    """Collect every class name appearing in the .xml annotations."""
    class_set = set()
    for xml_file in os.listdir(xml_folder):
        if xml_file.endswith('.xml'):
            tree = etree.parse(os.path.join(xml_folder, xml_file))
            for obj in tree.xpath('//object'):
                class_set.add(obj.find('name').text)
    return list(class_set)


classes = get_classes(annotations_path)
class_dict = {name: i for i, name in enumerate(classes)}


# convert XML to TXT
def convert_xml_to_txt(xml_file, txt_folder, class_dict, class_name):
    tree = etree.parse(xml_file)
    root = tree.getroot()
    size = root.find('size')
    width = int(size.find('width').text)
    height = int(size.find('height').text)
    txt_file = os.path.join(txt_folder, os.path.basename(xml_file).replace('.xml', '.txt'))
    with open(txt_file, 'w') as f:
        for obj in root.findall('object'):
            # class_name = obj.find('name').text
            # class_id = class_dict[class_name]
            bbox = obj.find('bndbox')
            xmin = int(bbox.find('xmin').text)
            ymin = int(bbox.find('ymin').text)
            xmax = int(bbox.find('xmax').text)
            ymax = int(bbox.find('ymax').text)
            # YOLO format: normalized center x/y and box width/height
            x_center = (xmin + xmax) / 2 / width
            y_center = (ymin + ymax) / 2 / height
            bbox_width = (xmax - xmin) / width
            bbox_height = (ymax - ymin) / height
            f.write(f"{class_name} {x_center} {y_center} {bbox_width} {bbox_height}\n")


with open(trainval_file, 'r') as f:
    lines = f.readlines()

for line in lines:
    parts = line.strip().split()
    image_name = parts[0] + '.jpg'
    xml_name = parts[0] + '.xml'
    # split = int(parts[2])
    split = random.uniform(0, 1)  # split training and validation sets with a random number
    class_name = int(parts[1])
    image_path = os.path.join(images_path, image_name)
    xml_path = os.path.join(annotations_path, xml_name)
    # some images downloaded from the official site have no matching annotation; skip them
    if not os.path.exists(image_path) or not os.path.exists(xml_path):
        print(f"skip {image_name} or {xml_name}")
        continue
    # save training and validation sets separately, train : val = 8 : 2
    if split >= 0.2:
        shutil.copy(image_path, os.path.join(train_image_path, image_name))
        convert_xml_to_txt(xml_path, train_txt_path, class_dict, class_name)
    else:
        shutil.copy(image_path, os.path.join(val_image_path, image_name))
        convert_xml_to_txt(xml_path, val_txt_path, class_dict, class_name)

print("It's ok!")
After running, we obtain the converted pet_demo/train and pet_demo/val folders, each containing images and the matching YOLO label files.
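As a quick sanity check (a small sketch assuming the output folders created by voc_label.py above), count the copied images and generated label files and make sure the numbers match per split:

import os

for split in ("train", "val"):
    img_dir = rf"D:\Last Dance\pet_demo\{split}\images"
    lbl_dir = rf"D:\Last Dance\pet_demo\{split}\labels"
    n_img = len([f for f in os.listdir(img_dir) if f.endswith(".jpg")])
    n_lbl = len([f for f in os.listdir(lbl_dir) if f.endswith(".txt")])
    print(f"{split}: {n_img} images, {n_lbl} labels")  # the two counts should be equal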
② Upload the dataset to the server
Upload it to /root/datasets via Xftp.
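If you prefer the command line to Xftp, the same upload can be done with scp over the SSH port from earlier (the local path below is an assumption; adjust it to wherever your pet_demo folder lives):

scp -P 30558 -r "D:\Last Dance\pet_demo" root@connect.666.seetacloud.com:/root/datasets/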
③ Modify the dataset YAML file
Create a new pet.yaml file in /root/ultralytics/ultralytics/cfg/datasets:
# Ultralytics YOLO 🚀, AGPL-3.0 license
# The Oxford-IIIT Pet Dataset, adapted from the coco.yaml template
# Example usage: yolo train data=pet.yaml
# parent
# ├── ultralytics
# └── datasets
#     └── pet_demo

# Train/val/test sets
path: ../datasets/pet_demo  # dataset root directory
train: /root/datasets/pet_demo/train/images  # training set path
val: /root/datasets/pet_demo/val/images  # validation set path
test:  # test set path (optional)

# Class definitions
nc: 38  # must match the number of entries in names
names:
  - placeholder  # index 0 is unused: the class IDs written by voc_label.py start at 1
  - Abyssinian
  - American Bulldog
  - American Pit Bull Terrier
  - Basset Hound
  - Beagle
  - Bengal
  - Birman
  - Bombay
  - Boxer
  - British Shorthair
  - Chihuahua
  - Egyptian Mau
  - English Cocker Spaniel
  - English Setter
  - German Shorthaired
  - Great Pyrenees
  - Havanese
  - Japanese Chin
  - Keeshond
  - Leonberger
  - Maine Coon
  - Miniature Pinscher
  - Newfoundland
  - Persian
  - Pomeranian
  - Pug
  - RagDoll
  - Russian Blue
  - Saint Bernard
  - Samoyed
  - Scottish Terrier
  - Shiba Inu
  - Siamese
  - Sphynx
  - Staffordshire Bull Terrier
  - Wheaten Terrier
  - Yorkshire Terrier
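An optional quick check (a sketch assuming PyYAML, which ships with the ultralytics environment) that the yaml parses and that the number of names matches nc:

import yaml

with open("/root/ultralytics/ultralytics/cfg/datasets/pet.yaml") as f:
    cfg = yaml.safe_load(f)

print(len(cfg["names"]), cfg["nc"])  # both should print 38
assert len(cfg["names"]) == cfg["nc"], "nc must match the number of class names"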
Upload the changed files to the AutoDL server directly from PyCharm.
III. Running the Project
1. Replace the original dataset
- Prediction input image: /root/ultralytics/ultralytics/assets/bus.jpg
- Prediction result image: ultralytics/runs/detect/predict/1.png
- Training images: /root/datasets/Pet_data/images
- Labels: /root/datasets/Pet_data/labels
- Dataset config file: /root/ultralytics/ultralytics/cfg/datasets/pet.yaml
- Trained model weights: /root/ultralytics/runs/detect/train/weights/best.pt
Adjust these paths to match your own dataset. In train_v8.py, change the parameter data='coco128.yaml' to your own dataset's yaml file, as shown in the sketch below.
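train_v8.py itself is not listed in full in this write-up; the following is a minimal sketch of what such a script can look like with the ultralytics API (the argument names follow the snippets shown later, but treat this as an illustrative assumption rather than the exact script):

import argparse

from ultralytics import YOLO


def parse_opt(known=False):
    parser = argparse.ArgumentParser()
    parser.add_argument('--cfg', type=str, default='ultralytics/cfg/models/v8/yolov8.yaml',
                        help='model yaml path')
    parser.add_argument('--weights', type=str, default='', help='initial weights path')
    return parser.parse_known_args()[0] if known else parser.parse_args()


if __name__ == '__main__':
    opt = parse_opt()
    model = YOLO(opt.cfg)                # build the model from the yaml
    if opt.weights:
        model = model.load(opt.weights)  # optionally start from pretrained weights
    results = model.train(
        data='/root/ultralytics/ultralytics/cfg/datasets/pet.yaml',  # your dataset yaml
        epochs=100,
        imgsz=640,
    )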
2. Open an Xshell console and on the command line run:
cd ultralytics
3. Training: on the command line run
python train_v8.py --cfg ultralytics/cfg/models/v8/yolov8.yaml
To stop training, press Ctrl+C.
To monitor GPU memory usage, run: watch -n 1 nvidia-smi
To speed up access to academic resources (mirror acceleration), run: source /etc/network_turbo
The content written to the runs directory is the training output.
4. Inference / prediction code
python predict.py
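predict.py is not shown in this write-up either; a minimal sketch of what it can look like with the ultralytics API (the paths are assumptions based on the locations listed above):

from ultralytics import YOLO

# load the weights produced by training
model = YOLO('/root/ultralytics/runs/detect/train/weights/best.pt')

# run inference on an image and save the rendered result under runs/detect/predict/
results = model.predict(source='/root/ultralytics/ultralytics/assets/bus.jpg', save=True)
print(results[0].boxes)  # detected boxes, classes and confidences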
5. End of training
The training results are saved in /root/ultralytics/runs/detect/train7.
Conclusion (with batch=40):
Overall performance:
- mAP50 is 0.713, i.e. the average precision at an IoU threshold of 0.5 is fairly high, so the model detects targets reasonably accurately.
- mAP50-95 is 0.569, i.e. the average precision over the stricter IoU thresholds is lower, so the model is weaker on small objects and on cases that demand tight bounding boxes.
Per-class performance:
- Some classes perform well, e.g. Bombay and Siamese, whose mAP50 is close to or above 0.9.
- Some classes perform poorly, e.g. Staffordshire Bull Terrier, whose mAP50 is only 0.305.
IV. Model Optimization
1. Optimization background
Modify the training call in train_v8.py to:
results = model.train(
    data='/root/ultralytics/ultralytics/cfg/datasets/pet.yaml',  # all training parameters can be re-tuned here
    epochs=200,
    imgsz=640,
    workers=12,
    batch=-1,                # -1 lets ultralytics pick the batch size automatically
    optimizer='AdamW',
    lr0=2e-3,
    cos_lr=True,
    label_smoothing=0.1,
    patience=20,
)
With these settings, the retrained model reaches:
- mAP50: 0.914, i.e. the average precision at an IoU threshold of 0.5 is very high; the model detects targets very accurately.
- mAP50-95: 0.795, i.e. the average precision across IoU thresholds is also high; the model holds up well on small objects and tight bounding-box requirements.
2. Adding the SE attention mechanism
Paste the following code block into ultralytics/nn/modules/conv.py:
class SE(nn.Module):
    """Squeeze-and-Excitation channel attention block."""

    def __init__(self, channel=512, reduction=16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel, bias=False),
            nn.Sigmoid()
        )

    def init_weights(self):
        # note: this helper needs `from torch.nn import init` at the top of conv.py
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                init.kaiming_normal_(m.weight, mode='fan_out')
                if m.bias is not None:
                    init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                init.constant_(m.weight, 1)
                init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                init.normal_(m.weight, std=0.001)
                if m.bias is not None:
                    init.constant_(m.bias, 0)

    def forward(self, x):
        # squeeze: global average pool to (b, c); excite: two FC layers + sigmoid, then rescale channels
        b, c, _, _ = x.size()
        y = self.avg_pool(x).view(b, c)
        y = self.fc(y).view(b, c, 1, 1)
        return x * y.expand_as(x)


def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p
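A quick standalone check of the SE block (a sketch; run it wherever the SE class above is importable) showing that it keeps the tensor shape and only rescales channels:

import torch

se = SE(channel=256, reduction=16)
x = torch.randn(1, 256, 20, 20)
y = se(x)
print(y.shape)  # torch.Size([1, 256, 20, 20]): same shape, channel-wise rescaled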
In ultralytics/nn/tasks.py, search for `elif m in` and add:

        elif m in {SE}:
            # prepend the input channels; the remaining yaml args are passed through,
            # so `SE, [1024]` in the yaml calls SE(channel=ch[f], reduction=1024)
            args = [ch[f], *args]
Create ultralytics/cfg/models/v8/yolov8-SE.yaml:
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024] # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512] # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512] # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone (SE inserted after the last C2f, so every later layer index shifts by one)
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 3, SE, [1024]] # 9 SE attention
  - [-1, 1, SPPF, [1024, 5]] # 10

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 13
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 16 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 19 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5 (SPPF output)
  - [-1, 3, C2f, [1024]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)
Then modify train_v8.py so that --cfg points at the new yolov8-SE.yaml.
3. Adding the AKConv convolution layer
AKConv is a deformable-style convolution layer; the underlying principle can be found in the corresponding paper and blog posts and is not expanded on here.
https://arxiv.org/pdf/2311.11587.pdf
Create ultralytics/cfg/models/v8/yolov8-SE-AKConv.yaml:
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024] # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512] # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512] # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone (downsampling Convs replaced with AKConv, SE kept after the last C2f)
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, AKConv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, AKConv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, AKConv, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, AKConv, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 3, SE, [1024]] # 9 SE attention
  - [-1, 1, SPPF, [1024, 5]] # 10

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 13
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 16 (P3/8-small)
  - [-1, 1, AKConv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 19 (P4/16-medium)
  - [-1, 1, AKConv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5 (SPPF output)
  - [-1, 3, C2f, [1024]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)
Paste the following into ultralytics/nn/modules/conv.py:
import math

import numpy as np
import torch
import torch.nn as nn
from einops import rearrange
from torch.nn import init

__all__ = (
    "Conv",
    "Conv2",
    "LightConv",
    "DWConv",
    "DWConvTranspose2d",
    "ConvTranspose",
    "Focus",
    "GhostConv",
    "ChannelAttention",
    "SpatialAttention",
    "CBAM",
    "Concat",
    "RepConv",
    "AKConv",
)


class AKConv(nn.Module):
    def __init__(self, inc, outc, num_param, stride=1, bias=None):
        super(AKConv, self).__init__()
        self.num_param = num_param
        self.stride = stride
        # the conv adds the BN and SiLU to compare original Conv in YOLOv5.
        self.conv = nn.Sequential(
            nn.Conv2d(inc, outc, kernel_size=(num_param, 1), stride=(num_param, 1), bias=bias),
            nn.BatchNorm2d(outc),
            nn.SiLU())
        self.p_conv = nn.Conv2d(inc, 2 * num_param, kernel_size=3, padding=1, stride=stride)
        nn.init.constant_(self.p_conv.weight, 0)
        self.p_conv.register_full_backward_hook(self._set_lr)

    @staticmethod
    def _set_lr(module, grad_input, grad_output):
        grad_input = (grad_input[i] * 0.1 for i in range(len(grad_input)))
        grad_output = (grad_output[i] * 0.1 for i in range(len(grad_output)))

    def forward(self, x):
        # N is num_param.
        offset = self.p_conv(x)
        dtype = offset.data.type()
        N = offset.size(1) // 2
        # (b, 2N, h, w)
        p = self._get_p(offset, dtype)

        # (b, h, w, 2N)
        p = p.contiguous().permute(0, 2, 3, 1)
        q_lt = p.detach().floor()
        q_rb = q_lt + 1

        q_lt = torch.cat([torch.clamp(q_lt[..., :N], 0, x.size(2) - 1),
                          torch.clamp(q_lt[..., N:], 0, x.size(3) - 1)], dim=-1).long()
        q_rb = torch.cat([torch.clamp(q_rb[..., :N], 0, x.size(2) - 1),
                          torch.clamp(q_rb[..., N:], 0, x.size(3) - 1)], dim=-1).long()
        q_lb = torch.cat([q_lt[..., :N], q_rb[..., N:]], dim=-1)
        q_rt = torch.cat([q_rb[..., :N], q_lt[..., N:]], dim=-1)

        # clip p
        p = torch.cat([torch.clamp(p[..., :N], 0, x.size(2) - 1),
                       torch.clamp(p[..., N:], 0, x.size(3) - 1)], dim=-1)

        # bilinear kernel (b, h, w, N)
        g_lt = (1 + (q_lt[..., :N].type_as(p) - p[..., :N])) * (1 + (q_lt[..., N:].type_as(p) - p[..., N:]))
        g_rb = (1 - (q_rb[..., :N].type_as(p) - p[..., :N])) * (1 - (q_rb[..., N:].type_as(p) - p[..., N:]))
        g_lb = (1 + (q_lb[..., :N].type_as(p) - p[..., :N])) * (1 - (q_lb[..., N:].type_as(p) - p[..., N:]))
        g_rt = (1 - (q_rt[..., :N].type_as(p) - p[..., :N])) * (1 + (q_rt[..., N:].type_as(p) - p[..., N:]))

        # resampling the features based on the modified coordinates.
        x_q_lt = self._get_x_q(x, q_lt, N)
        x_q_rb = self._get_x_q(x, q_rb, N)
        x_q_lb = self._get_x_q(x, q_lb, N)
        x_q_rt = self._get_x_q(x, q_rt, N)

        # bilinear
        x_offset = g_lt.unsqueeze(dim=1) * x_q_lt + \
                   g_rb.unsqueeze(dim=1) * x_q_rb + \
                   g_lb.unsqueeze(dim=1) * x_q_lb + \
                   g_rt.unsqueeze(dim=1) * x_q_rt

        x_offset = self._reshape_x_offset(x_offset, self.num_param)
        out = self.conv(x_offset)
        return out

    # generating the inital sampled shapes for the AKConv with different sizes.
    def _get_p_n(self, N, dtype):
        base_int = round(math.sqrt(self.num_param))
        row_number = self.num_param // base_int
        mod_number = self.num_param % base_int
        p_n_x, p_n_y = torch.meshgrid(
            torch.arange(0, row_number),
            torch.arange(0, base_int), indexing='xy')
        p_n_x = torch.flatten(p_n_x)
        p_n_y = torch.flatten(p_n_y)
        if mod_number > 0:
            mod_p_n_x, mod_p_n_y = torch.meshgrid(
                torch.arange(row_number, row_number + 1),
                torch.arange(0, mod_number), indexing='xy')
            mod_p_n_x = torch.flatten(mod_p_n_x)
            mod_p_n_y = torch.flatten(mod_p_n_y)
            p_n_x, p_n_y = torch.cat((p_n_x, mod_p_n_x)), torch.cat((p_n_y, mod_p_n_y))
        p_n = torch.cat([p_n_x, p_n_y], 0)
        p_n = p_n.view(1, 2 * N, 1, 1).type(dtype)
        return p_n
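The block above ends at `_get_p_n`, but `forward` also calls `_get_p`, `_get_x_q`, and `_reshape_x_offset`, which are not included in the paste. The sketch below reconstructs those remaining class methods from the public AKConv reference implementation; double-check it against the official repository before relying on it:

    def _get_p_0(self, h, w, N, dtype):
        # base grid coordinates of every output position (scaled by the stride)
        p_0_x, p_0_y = torch.meshgrid(
            torch.arange(0, h * self.stride, self.stride),
            torch.arange(0, w * self.stride, self.stride), indexing='xy')
        p_0_x = torch.flatten(p_0_x).view(1, 1, h, w).repeat(1, N, 1, 1)
        p_0_y = torch.flatten(p_0_y).view(1, 1, h, w).repeat(1, N, 1, 1)
        p_0 = torch.cat([p_0_x, p_0_y], 1).type(dtype)
        return p_0

    def _get_p(self, offset, dtype):
        # final sampling positions = base grid + initial kernel shape + learned offsets
        N, h, w = offset.size(1) // 2, offset.size(2), offset.size(3)
        p_n = self._get_p_n(N, dtype)        # (1, 2N, 1, 1)
        p_0 = self._get_p_0(h, w, N, dtype)  # (1, 2N, h, w)
        p = p_0 + p_n + offset
        return p

    def _get_x_q(self, x, q, N):
        # gather the feature values at the integer coordinates q
        b, h, w, _ = q.size()
        padded_w = x.size(3)
        c = x.size(1)
        x = x.contiguous().view(b, c, -1)           # (b, c, h*w)
        index = q[..., :N] * padded_w + q[..., N:]  # flatten (x, y) to linear indices
        index = index.contiguous().unsqueeze(dim=1).expand(-1, c, -1, -1, -1).contiguous().view(b, c, -1)
        x_offset = x.gather(dim=-1, index=index).contiguous().view(b, c, h, w, N)
        return x_offset

    @staticmethod
    def _reshape_x_offset(x_offset, num_param):
        # stack the N sampled values along the height axis so the (num_param, 1)
        # convolution in self.conv can consume them
        b, c, h, w, n = x_offset.size()
        x_offset = rearrange(x_offset, 'b c h w n -> b c (h n) w')
        return x_offset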
Also register the new modules in ultralytics/nn/modules/__init__.py.
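What exactly to add there depends on your ultralytics version; roughly, the new modules must be imported from .conv and listed in __all__ so that the `from ultralytics.nn.modules import ...` in tasks.py can see them. A sketch (only the two added names matter; keep the rest of the import list as it already is in your file):

# ultralytics/nn/modules/__init__.py (excerpt, assumed structure)
from .conv import (
    CBAM,
    ChannelAttention,
    Concat,
    Conv,
    # ... keep the existing imports unchanged ...
    SE,       # newly added SE attention block
    AKConv,   # newly added AKConv layer
)

# also append "SE" and "AKConv" to the __all__ tuple in this file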
Then in ultralytics/nn/tasks.py, extend the module import:

from ultralytics.nn.modules import (
    SE,
    AIFI,
    C1,
    C2,
    C3,
    C3TR,
    OBB,
    SPP,
    SPPF,
    Bottleneck,
    BottleneckCSP,
    C2f,
    C3Ghost,
    C3x,
    Classify,
    Concat,
    Conv,
    Conv2,
    ConvTranspose,
    Detect,
    DWConv,
    DWConvTranspose2d,
    Focus,
    GhostBottleneck,
    GhostConv,
    HGBlock,
    HGStem,
    Pose,
    RepC3,
    RepConv,
    ResNetLayer,
    RTDETRDecoder,
    Segment,
    AKConv,
)
Search for `elif m in` and insert:

        elif m is AKConv:
            # AKConv(inc, outc, num_param, stride): prepend the input channels, keep the yaml args
            c2 = args[0]
            c1 = ch[f]
            args = [c1, c2, *args[1:]]
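A quick shape check of the AKConv layer (a sketch; it assumes the complete AKConv class, including the helper methods reconstructed earlier, is in place):

import torch

ak = AKConv(inc=64, outc=128, num_param=3, stride=2)
x = torch.randn(1, 64, 64, 64)
print(ak(x).shape)  # expected: torch.Size([1, 128, 32, 32]); channels 64 -> 128, spatial size halved by stride 2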
Modify train_v8.py:

def parse_opt(known=False):
    parser = argparse.ArgumentParser()
    parser.add_argument('--cfg', type=str, default='ultralytics/cfg/models/v8/yolov8-SE-AKConv.yaml',
                        help='model yaml path')
    parser.add_argument('--weights', type=str, default='', help='initial weights path')
Of course, the exact position where AKConv is inserted has to be tuned by yourself to find the optimum; your results may come out better than mine, or a full day of training may end up noticeably worse than the baseline. You have to try the placements by hand and find the combination that works best.
The training command is now:
python train_v8.py --cfg ultralytics/cfg/models/v8/yolov8-SE-AKConv.yaml
V. AutoDL Remote Desktop
To start a TurboVNC remote desktop on the instance (serving VNC on port 6006):
USER=root /opt/TurboVNC/bin/vncserver :1 -desktop X -auth /root/.Xauthority -geometry 1920x1080 -depth 24 -rfbwait 120000 -rfbauth /root/.vnc/passwd -fp /usr/share/fonts/X11/misc/,/usr/share/fonts -rfbport 6006
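One common way to reach the VNC server from your own machine is an SSH tunnel to port 6006 (using the host and port from earlier) and then pointing a VNC client at localhost:6006; treat this workflow as an assumption and check AutoDL's own documentation for the recommended approach:

ssh -p 30558 -L 6006:127.0.0.1:6006 root@connect.666.seetacloud.com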
VI. Front-end Page
Search GitHub for "pyqt5", "vue", and similar front-end frameworks and adapt one of them to your needs.
VII. Thesis Matters
Mind the anonymization and formatting requirements. For paraphrasing to lower the duplication rate I used Kimi; DeepSeek reads too academic and easily pushes the AIGC-detection score up.
VIII. Defense Questions
1. Why use YOLOv8 rather than the newest YOLO framework?
2. Where does the dataset come from?
3. Demonstrate the interface.
4. How does the convolution operation work?
5. How does the SE attention mechanism work?
6. How does AKConv implement deformable convolution? Why is training slower than the baseline?
7. Why still use the SE attention mechanism proposed back in 2017? Where are your novelty and your workload?
8. What techniques were used for data augmentation?
9. Questions asked while reading through the code.
I was the first to defend and got grilled by the committee for a solid 20 minutes.
Warnings:
① Fixing "Corrupt JPEG data: premature end of data segment"
For details, see the CSDN blog post "解决Corrupt JPEG data: premature end of data segment".
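The warning usually means some JPEGs in the dataset are truncated. Besides the blog post above, one common workaround (a sketch assuming Pillow is installed and the server-side dataset path from earlier) is to re-encode every image once, which rewrites the damaged files:

import os

from PIL import Image, ImageFile

ImageFile.LOAD_TRUNCATED_IMAGES = True  # let Pillow open truncated JPEGs

root = "/root/datasets/pet_demo"
for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        if name.lower().endswith((".jpg", ".jpeg")):
            path = os.path.join(dirpath, name)
            try:
                Image.open(path).convert("RGB").save(path, "JPEG")  # re-encode in place
            except OSError as e:
                print(f"could not repair {path}: {e}")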
②、FileNotFoundError: /root/ultralytics/ultralytics/assets/bus.jpg does not exist
The file does not exist. This usually happens during model validation or the AMP (Automatic Mixed Precision) check. Generally the cause is that the demo prediction image from YOLO's original COCO128 setup was never replaced with an image from the new dataset; search for "bus" in PyCharm and replace each occurrence with the path of your own prediction image.
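If you would rather locate the references on the server instead of in PyCharm, a simple grep over the repository works too:

grep -rn "bus.jpg" /root/ultralytics --include="*.py" --include="*.yaml"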
③、ModuleNotFoundError: No module named 'einops'
Install einops via pip:
pip install einops
Check the installed dependencies with:
pip list
Postscript: I am open-sourcing this process documentation because I hit far too many baffling pitfalls while finishing this graduation project; it is shared here to save everyone time and to give back for the issues I read along the way.