PyCharm error: The "freeze_support()" line can be omitted if the program is not going to be frozen

When writing Python code on Windows, any code that creates processes with the multiprocessing module must be placed under an `if __name__ == '__main__':` guard. The guard stops spawned child processes from recursively re-executing the main module and lets them start correctly. The problem does not occur on systems that fork child processes (such as Linux), and the guard also prevents the main program from running when the file is imported as a module.


When writing Python code that starts processes with the multiprocessing package, make sure the code that creates and starts the processes sits under the following guard (threading is not affected: threads run inside the current process, so no new interpreter is launched):

```python
import multiprocessing

if __name__ == '__main__':  # put the code you actually want to execute under this guard
    sing_process = multiprocessing.Process(target=sing)  # child process; `sing` is a function defined earlier
    sing_process.start()  # start the child process
```

The reason is how Windows starts child processes: multiprocessing uses the spawn start method, which re-imports the main module in every child. Without the guard, each child would re-execute the process-creating code and try to spawn yet more processes, so Python aborts with the RuntimeError quoted in the title. On Linux, where the default start method is fork, the child is cloned from the parent instead of re-importing the script, so the problem does not appear (recent Python versions on macOS also default to spawn); you can check the active start method as in the sketch below.
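If you want to verify this difference on your own machine, the standard library exposes the active start method directly; a small check:

```python
import multiprocessing

if __name__ == '__main__':
    # 'spawn' on Windows (and on recent macOS versions): children re-import the main module.
    # 'fork' on Linux: children are copies of the parent, so the main module is not re-run.
    print(multiprocessing.get_start_method())
```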
Using this entry-point guard also prevents those two lines from running when someone imports your file as a module.
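For completeness, here is a runnable version of the example above. It is a minimal sketch: the body of `sing` is purely illustrative, and the `freeze_support()` call mentioned in the error message is harmless here and only actually matters if the script is later frozen into an executable (e.g. with PyInstaller):

```python
import multiprocessing

def sing():
    print("la la la")  # illustrative work done by the child process

if __name__ == '__main__':
    multiprocessing.freeze_support()  # no-op unless the script is frozen into an .exe
    sing_process = multiprocessing.Process(target=sing)  # child process
    sing_process.start()   # start the child process
    sing_process.join()    # wait for the child to finish
```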

A typical occurrence of this error, while training an Ultralytics YOLO11 model on Windows:

```text
E:\Anaconda\envs\yolov8\python.exe E:\pycharm学习文件\yolov8\垃圾分类.py
Ultralytics 8.3.130 Python-3.11.0 torch-2.5.1 CUDA:0 (NVIDIA GeForce RTX 3060 Laptop GPU, 6144MiB)
[... trainer arguments, YOLO11n model summary and dataset scan are printed twice,
     once for runs\detect\train and once for runs\detect\train2 ...]
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "E:\Anaconda\envs\yolov8\Lib\multiprocessing\spawn.py", line 120, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "E:\Anaconda\envs\yolov8\Lib\multiprocessing\spawn.py", line 129, in _main
    prepare(preparation_data)
  ...
  File "E:\pycharm学习文件\yolov8\垃圾分类.py", line 3, in <module>
    a1.train(
  File "E:\Anaconda\envs\yolov8\Lib\site-packages\ultralytics\engine\model.py", line 793, in train
    self.trainer.train()
  ...
  File "E:\Anaconda\envs\yolov8\Lib\site-packages\torch\utils\data\dataloader.py", line 1138, in __init__
    w.start()
  ...
  File "E:\Anaconda\envs\yolov8\Lib\multiprocessing\spawn.py", line 138, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
```

What is the cause?
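The cause is exactly the issue described above: `workers=8` makes the PyTorch DataLoader spawn worker processes, and on Windows each spawned child re-imports 垃圾分类.py, whose module-level code calls `train()` again, so Python stops with the bootstrapping error. Moving the training call under the guard fixes it; a minimal sketch (weights, data file and hyperparameters below are placeholders, not the asker's actual values):

```python
import multiprocessing
from ultralytics import YOLO

def main():
    model = YOLO("yolo11n.pt")
    # placeholder settings; substitute your own data .yaml and training arguments
    model.train(data="data.yaml", epochs=200, batch=32, imgsz=640, device=0, workers=8)

if __name__ == '__main__':
    multiprocessing.freeze_support()  # only matters if the script is frozen into an .exe
    main()
```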