```
Traceback (most recent call last):
  File "D:\桌面\ultralytics-yolov8-main\test1\blueberry.py", line 4, in <module>
    a1 = YOLO('yolov8n.pt')  # official base model for testing and training
         ^^^^^^^^^^^^^^^^^^
  File "D:\桌面\ultralytics-yolov8-main\ultralytics\yolo\engine\model.py", line 106, in __init__
    self._load(model, task)
  File "D:\桌面\ultralytics-yolov8-main\ultralytics\yolo\engine\model.py", line 145, in _load
    self.model, self.ckpt = attempt_load_one_weight(weights)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\桌面\ultralytics-yolov8-main\ultralytics\nn\tasks.py", line 396, in attempt_load_one_weight
    ckpt, weight = torch_safe_load(weight)  # load ckpt
                   ^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\桌面\ultralytics-yolov8-main\ultralytics\nn\tasks.py", line 336, in torch_safe_load
    return torch.load(file, map_location='cpu'), file  # load
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\李佳伟\AppData\Roaming\Python\Python312\site-packages\torch\serialization.py", line 1524, in load
    raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
    (1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
    (2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
    WeightsUnpickler error: Unsupported global: GLOBAL ultralytics.nn.tasks.DetectionModel was not an allowed global by default. Please use `torch.serialization.add_safe_globals([ultralytics.nn.tasks.DetectionModel])` or the `torch.serialization.safe_globals([ultralytics.nn.tasks.DetectionModel])` context manager to allowlist this global if you trust this class/function.
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html
```

Explain this error.
### Error Cause Analysis
`UnpicklingError` is raised by Python's `pickle` module and usually occurs during deserialization. `torch.load()` deserializes checkpoints via `pickle` under the hood, so corrupted data, a mismatched file format, or the refusal to restore potentially malicious objects can all surface as this error. In the traceback above, the trigger is PyTorch 2.6's new `weights_only=True` default, which refuses to unpickle `ultralytics.nn.tasks.DetectionModel` because that class is not an allowlisted global.
Based on the information provided and the referenced content [^1], the core of the problem most likely lies in the following areas:
1. **Security risk**: with the old default `weights_only=False`, loading does not stop at the weights; the entire pickled state (optimizer, custom classes, and so on) is reconstructed. This relies on `pickle`, which is not safe against untrusted input.
2. **Version differences**: the save and load mechanisms can differ subtly between versions, and problems surface most often after a cross-platform migration or a framework upgrade (here, the upgrade to PyTorch 2.6, which flipped the `weights_only` default to `True`).
3. **File corruption**: a storage fault or any other external factor that alters the binary stream on disk can also trigger a similar error.
---
### Solutions
Each of the possibilities above has a corresponding remedy:
#### Method 1: Enable the stricter mode with `weights_only=True`
To avoid the security risk of unrestricted `pickle` use and to cut unnecessary complexity, pass `weights_only=True` explicitly so that only the weight data is extracted. This is the behaviour PyTorch has been standardizing on, and it became the default in 2.6 [^1].
The adjusted call looks like this:
```python
state_dict = torch.load(model_path, map_location='cpu', weights_only=True)
```
> Note: not every checkpoint can be reduced to a bare weights payload. When the application has to rebuild full instances together with extra metadata (as the YOLOv8 checkpoint in the traceback does), `weights_only=True` by itself is not enough: the classes involved must be allowlisted, or the file must be loaded with `weights_only=False` from a trusted source, as sketched below.
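For the traceback at the top of this page, keep in mind that `weights_only=True` is already the default in PyTorch 2.6 and is exactly what rejects `ultralytics.nn.tasks.DetectionModel`. The error message itself lists two recovery paths for trusted files; the sketch below shows both, assuming `yolov8n.pt` comes from the official Ultralytics release and is trusted:
```python
import torch
from ultralytics.nn.tasks import DetectionModel

# Option 1: for a trusted file only, fall back to the old, permissive behaviour.
# This re-enables full pickle execution, so never do it for unknown files.
ckpt = torch.load('yolov8n.pt', map_location='cpu', weights_only=False)

# Option 2: keep weights_only=True but allowlist the class the unpickler
# rejected, as the error message recommends. If further "Unsupported global"
# errors appear, those classes have to be added to the list as well.
with torch.serialization.safe_globals([DetectionModel]):
    ckpt = torch.load('yolov8n.pt', map_location='cpu', weights_only=True)
```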
#### Method 2: Validate the input file
Make sure the supplied path actually points to a valid `.pt`/`.pth` file rather than an arbitrary blob. Simple pre-checks with the standard library catch the most common mistakes, such as a path that does not exist or that points at a directory.
A sample snippet:
```python
import os

if not os.path.exists(model_path):
    raise FileNotFoundError(f"The provided model path does not exist: {model_path}")
if not os.path.isfile(model_path):
    raise IsADirectoryError(f"Expected a regular file but got a directory: {model_path}")
# Safe to hand the path to torch.load(...) now.
```
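Beyond path checks, a cheap integrity probe is possible because checkpoints written by modern `torch.save` (1.6 and later) use a zip-based container. The helper below is a hedged sketch: legacy-format files may still be valid even when the zip check fails, so treat it as a hint rather than a hard gate.
```python
import os
import zipfile


def looks_like_torch_checkpoint(model_path: str) -> bool:
    """Cheap sanity checks before handing the file to torch.load."""
    if not os.path.isfile(model_path):
        return False
    if os.path.getsize(model_path) == 0:
        return False
    # torch.save >= 1.6 writes a zip archive; older legacy-format files
    # will fail this check even though torch.load could still read them.
    return zipfile.is_zipfile(model_path)
```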
#### Method 3: Fall back to a lower-level manual approach
If neither of the previous options works, you can drop down a level and drive the decoding yourself: read the raw bytes through ordinary file I/O first, then hand them to whatever component knows how to turn them into the object you expect.
The pseudocode below sketches the idea:
```python
with open(model_path, 'rb') as fobj:
raw_data = fobj.read()
# Assuming we know exactly what kind of object it represents here...
loaded_obj = SomeCustomClass.from_bytes(raw_data)
```
Of course, this assumes that a suitable helper class with a matching factory method has already been defined to carry out the conversion.
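A more concrete variant of the same idea, useful when you want to checksum or inspect the raw bytes before deserializing: `torch.load` also accepts a file-like object, so the bytes can be read once, hashed for integrity, and then replayed from memory. The path and the choice of `weights_only=False` below are assumptions, appropriate only for trusted files:
```python
import hashlib
import io

import torch

model_path = 'yolov8n.pt'  # placeholder; point this at your checkpoint

with open(model_path, 'rb') as fobj:
    raw_data = fobj.read()

# Record a checksum so corrupted downloads can be detected on later runs.
print("sha256:", hashlib.sha256(raw_data).hexdigest())

# torch.load accepts any file-like object, so the raw bytes can be wrapped
# in a BytesIO buffer. weights_only=False is only safe for trusted files.
buffer = io.BytesIO(raw_data)
loaded_obj = torch.load(buffer, map_location='cpu', weights_only=False)
```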
#### Method 4: Downgrade or retrain the model
The last resort is to go back to square one: either track down an older, compatible version of the packages involved or rebuild the whole solution from scratch. Since every situation differs, treat this as an extreme measure and only reach for it when no better alternative is available.
---
### Comprehensive Fix Example
Combining the points above, we can build a reasonably robust, reusable loading helper:
```python
import inspect
import logging
import os
import sys
from typing import Union

import torch


def safe_load_model(filepath: str,
                    device: Union[str, torch.device] = 'cpu',
                    strict_mode: bool = True):
    """Safely attempt to deserialize a serialized PyTorch checkpoint.

    When ``strict_mode`` is True and the installed PyTorch supports it,
    ``weights_only=True`` is passed so that only plain weight data is restored.
    """
    if not isinstance(device, (str, torch.device)):
        raise ValueError("`device` must be a string such as 'cuda'/'cpu' or a "
                         f"`torch.device` instance, got {type(device)}")
    filepath = os.path.abspath(filepath)
    if not os.path.isfile(filepath):
        raise FileNotFoundError(f"Invalid location specified: {filepath}. No such file exists.")

    try:
        kwargs = {'map_location': device if isinstance(device, torch.device) else torch.device(device)}
        # Older PyTorch releases do not accept `weights_only`, so only pass it
        # when the installed version actually exposes the parameter.
        if strict_mode and 'weights_only' in inspect.signature(torch.load).parameters:
            kwargs['weights_only'] = True
        return torch.load(filepath, **kwargs)
    except EOFError as eof_exc:
        msg = "Premature end-of-file detected while trying to unpickle data."
        logging.error(msg, exc_info=eof_exc)
        raise IOError(msg) from eof_exc
    except AttributeError as attr_err:
        detailed_msg = ("Missing critical attributes required for successful deserialization. "
                        "A likely cause is mismatched library versions between the "
                        "serialization and deserialization environments.")
        logging.critical(detailed_msg, exc_info=attr_err)
        raise ImportError(detailed_msg) from attr_err
    except Exception as generic_exception:
        log_entry = (f"Encountered unexpected exception type "
                     f"{type(generic_exception).__name__} with message:\n{generic_exception}")
        logging.warning(log_entry)
        raise RuntimeError(log_entry) from generic_exception


if __name__ == '__main__':
    abs_model_path = '/full/path/to/my_trained_weights.pt'
    target_device = 'cuda' if torch.cuda.is_available() else 'cpu'
    try:
        recovered_states = safe_load_model(abs_model_path, target_device, True)
        print("Successfully retrieved all necessary components!")
    except Exception as exc:
        sys.stderr.write(str(exc) + '\n')
        sys.exit(1)
```
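Applied back to the original `blueberry.py` script: the traceback reports that `ultralytics.nn.tasks.DetectionModel` is not an allowlisted global, so one workaround, assuming the official `yolov8n.pt` weights are trusted, is to allowlist that class before constructing the model. Note that the unpickler only reports the first rejected global; if the checkpoint references further classes they would need to be allowlisted too, and loading with `weights_only=False` from a trusted source (or updating the ultralytics package) may end up being the simpler route.
```python
import torch
from ultralytics import YOLO
from ultralytics.nn.tasks import DetectionModel

# Allowlist the class rejected by the WeightsUnpickler, as the error message
# itself suggests. Only do this for checkpoints whose origin you trust.
torch.serialization.add_safe_globals([DetectionModel])

a1 = YOLO('yolov8n.pt')  # the load inside torch_safe_load should now proceed
```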