
```
ERROR: Exception:
Traceback (most recent call last):
  File "C:\Users\Administrator\miniconda3\envs\yolov5\Lib\site-packages\pip\_internal\cli\base_command.py", line 106, in _run_wrapper
    status = _inner_run()
  File "C:\Users\Administrator\miniconda3\envs\yolov5\Lib\site-packages\pip\_internal\cli\base_command.py", line 97, in _inner_run
    return self.run(options, args)
  File "C:\Users\Administrator\miniconda3\envs\yolov5\Lib\site-packages\pip\_internal\cli\req_command.py", line 67, in wrapper
    return func(self, options, args)
  File "C:\Users\Administrator\miniconda3\envs\yolov5\Lib\site-packages\pip\_internal\commands\install.py", line 386, in run
    requirement_set = resolver.resolve(
        reqs, check_supported_wheels=not options.target_dir
    )
  File "C:\Users\Administrator\miniconda3\envs\yolov5\Lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 95, in resolve
    result = self._result = resolver.resolve(
        collected.requirements, max_rounds=limit_how_complex_resolution_can_be
    )
  File "C:\Users\Administrator\miniconda3\envs\yolov5\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 546, in resolve
    state = resolution.resolve(requirements, max_rounds=max_rounds)
  File "C:\Users\Administrator\miniconda3\envs\yolov5\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 397, in resolve
    self._add_to_criteria(self.state.criteria, r, parent=None)
  File "C:\Users\Administrator\miniconda3\envs\yolov5\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 173, in _add_to_criteria
    if not criterion.candidates:
  File "C:\Users\Administrator\miniconda3\envs\yolov5\Lib\site-packages\pip\_vendor\resolvelib\structs.py", line 156, in __boo
```

Posted: 2025-05-13 08:47:54 · Views: 59
### Solutions for pip installation exceptions (YOLOv5 / Miniconda3 environment)

Based on the common problems mentioned in references [2], [3] and [4], combined with YOLOv5's installation requirements, the solution is laid out step by step:

1. **Create an isolated conda environment** (recommended as the first option)
```bash
conda create -n yolov5_env python=3.8  # create a Python 3.8 environment
conda activate yolov5_env
```

2. **Install system dependencies** (extends the solution from reference [4])
```bash
# Ubuntu/Debian systems
sudo apt-get update && sudo apt-get install -y \
    gcc \
    g++ \
    make \
    python3-dev \
    libgl1-mesa-glx  # graphics dependency needed by OpenCV
# For CentOS, refer to the yum-based installation in reference [4]
```

3. **Set a pip mirror** (addresses the timeout problem from reference [3])
```bash
pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
pip config set install.timeout 600  # extend the timeout
```

4. **Install the YOLOv5 dependencies step by step**
```bash
# Install the PyTorch base packages first (choose the build that matches your CUDA version)
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
# Then install YOLOv5's core dependencies (quoted so the shell does not treat ">=" as redirection)
pip install "numpy>=1.18.5" "opencv-python>=4.1.2"
pip install "tensorboard>=2.4.1" "seaborn>=0.11.0"
# Finally install the full requirements file
pip install -r https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt
```

5. **Handling common errors**
- **Compilation errors**: confirm the gcc version is >= 5.0 (`gcc --version`)
- **Permission errors**: add the `--user` flag or work inside a virtual environment
- **CUDA-related errors**: confirm that the NVIDIA driver version matches the CUDA version
- **SSL verification errors**: temporarily use `--trusted-host mirrors.aliyun.com`

6. **Verify the installation**
```python
import torch
print(torch.cuda.is_available())  # should return True in a GPU environment

from yolov5 import detect  # requires the official repository to be downloaded
```
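The traceback at the top of this post is cut off before it reaches the actual cause, so it helps to rerun the install with `pip install -v` to capture the resolver's full complaint, and to compare what is already installed against requirements.txt. A minimal diagnostic sketch for that comparison, assuming the environment created above (the package list below is illustrative, not exhaustive):

```python
# Print the installed version of each key YOLOv5 dependency, or flag it as missing,
# so the versions can be checked by eye against the pins in requirements.txt.
from importlib.metadata import PackageNotFoundError, version

packages = ["torch", "torchvision", "numpy", "opencv-python", "Pillow", "tensorboard", "seaborn"]

for name in packages:
    try:
        print(f"{name:15s} {version(name)}")
    except PackageNotFoundError:
        print(f"{name:15s} NOT INSTALLED")
```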

Related recommendations

Traceback (most recent call last): File "C:\Users\Administrator\miniconda3\envs\yolov5\lib\runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Users\Administrator\miniconda3\envs\yolov5\lib\runpy.py", line 87, in _run_code exec(code, run_globals) File "C:\Users\Administrator\miniconda3\envs\yolov5\Scripts\labelme.exe\__main__.py", line 4, in <module> File "C:\Users\Administrator\miniconda3\envs\yolov5\lib\site-packages\labelme\__main__.py", line 17, in <module> from labelme.app import MainWindow File "C:\Users\Administrator\miniconda3\envs\yolov5\lib\site-packages\labelme\app.py", line 21, in <module> from labelme._automation import bbox_from_text File "C:\Users\Administrator\miniconda3\envs\yolov5\lib\site-packages\labelme\_automation\bbox_from_text.py", line 6, in <module> import osam File "C:\Users\Administrator\miniconda3\envs\yolov5\lib\site-packages\osam\__init__.py", line 5, in <module> from . import apis # noqa: F401 File "C:\Users\Administrator\miniconda3\envs\yolov5\lib\site-packages\osam\apis.py", line 5, in <module> import onnxruntime File "C:\Users\Administrator\miniconda3\envs\yolov5\lib\site-packages\onnxruntime\__init__.py", line 57, in <module> raise import_capi_exception File "C:\Users\Administrator\miniconda3\envs\yolov5\lib\site-packages\onnxruntime\__init__.py", line 23, in <module> from onnxruntime.capi._pybind_state import ExecutionMode # noqa: F401 File "C:\Users\Administrator\miniconda3\envs\yolov5\lib\site-packages\onnxruntime\capi\_pybind_state.py", line 32, in <module> from .onnxruntime_pybind11_state import * # noqa ImportError: DLL load failed while importing onnxruntime_pybind11_state: 找不到指定的模块。

```
Traceback (most recent call last):
  File "D:\Miniconda3\envs\test\lib\site-packages\win32com\client\dynamic.py", line 86, in _GetGoodDispatch
    IDispatch = pythoncom.connect(IDispatch)
pywintypes.com_error: (-2147221005, '无效的类字符串', None, None)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Administrator\Desktop\automation_hec.py", line 57, in <module>
    discharge_results = run_hechms_simulation(hec_hms_project_path, daily_precipitation)
  File "C:\Users\Administrator\Desktop\automation_hec.py", line 17, in run_hechms_simulation
    hms = win32.Dispatch("HECHMS.Application")  # start the HEC-HMS COM component
  File "D:\Miniconda3\envs\test\lib\site-packages\win32com\client\__init__.py", line 117, in Dispatch
    dispatch, userName = dynamic._GetGoodDispatchAndUserName(dispatch, userName, clsctx)
  File "D:\Miniconda3\envs\test\lib\site-packages\win32com\client\dynamic.py", line 106, in _GetGoodDispatchAndUserName
    return (_GetGoodDispatch(IDispatch, clsctx), userName)
  File "D:\Miniconda3\envs\test\lib\site-packages\win32com\client\dynamic.py", line 88, in _GetGoodDispatch
    IDispatch = pythoncom.CoCreateInstance(
pywintypes.com_error: (-2147221005, '无效的类字符串', None, None)
[Finished in 1.2s with exit code 1]
[shell_cmd: CALL conda.bat activate test && python -u "C:\Users\Administrator\Desktop\automation_hec.py"]
[dir: C:\Users\Administrator\Desktop]
[path: C:\ProgramData\Oracle\Java\javapath;C:\Program Files\Microsoft MPI\Bin\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\texlive\2023\bin\windows;D:\Miniconda3;D:\Miniconda3\condabin;D:\Miniconda3\Scripts;C:\Users\Administrator\AppData\Local\Android\Sdk;D:\Python\Python38\Scripts\;D:\Python\Python38\]
```

(The COM error message '无效的类字符串' means "Invalid class string".) Is `hms = win32.Dispatch("HECHMS.Application")` where the path to the Application is supposed to go? If so, where is that path usually found?
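For reference (not part of the original question): `win32.Dispatch()` expects a registered COM ProgID rather than a path to the application, and "Invalid class string" means no ProgID with that name is registered on the machine. A minimal sketch, assuming a Windows host, that scans the registry for ProgIDs containing "HEC" so you can see what HEC-HMS (if anything) has registered:

```python
# List registered COM ProgIDs whose name contains a keyword (here "HEC").
# Illustrative only; an empty result means no ProgID containing "HEC" is registered.
import winreg

def find_progids(keyword):
    matches = []
    i = 0
    while True:
        try:
            name = winreg.EnumKey(winreg.HKEY_CLASSES_ROOT, i)
        except OSError:  # no more subkeys under HKEY_CLASSES_ROOT
            break
        if keyword.lower() in name.lower():
            matches.append(name)
        i += 1
    return matches

print(find_progids("HEC"))
```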

Traceback (most recent call last): File "C:\Users\Administrator\anaconda3\envs\py38-v5\lib\threading.py", line 932, in _bootstrap_inner self.run() File "C:\Users\Administrator\anaconda3\envs\py38-v5\lib\threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "D:\chens\3.model\Circuit_breaker\yolov5\utils\plots.py", line 224, in plot_images annotator.text((x + 5, y + 5 + h), text=Path(paths[i]).name[:40], txt_color=(220, 220, 220)) # filenames File "D:\chens\3.model\Circuit_breaker\yolov5\utils\plots.py", line 120, in text w, h = self.font.getsize(text) # text width, height AttributeError: 'FreeTypeFont' object has no attribute 'getsize' 0/299 21.6G 0.1261 0.0221 0.01815 371 640: 29%|██▊ | 2/7 [00:04<00:10, 2.12s/it] Exception in thread Thread-12: Traceback (most recent call last): File "C:\Users\Administrator\anaconda3\envs\py38-v5\lib\threading.py", line 932, in _bootstrap_inner self.run() File "C:\Users\Administrator\anaconda3\envs\py38-v5\lib\threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "D:\chens\3.model\Circuit_breaker\yolov5\utils\plots.py", line 224, in plot_images annotator.text((x + 5, y + 5 + h), text=Path(paths[i]).name[:40], txt_color=(220, 220, 220)) # filenames File "D:\chens\3.model\Circuit_breaker\yolov5\utils\plots.py", line 120, in text w, h = self.font.getsize(text) # text width, height AttributeError: 'FreeTypeFont' object has no attribute 'getsize' 0/299 21.6G 0.1256 0.02202 0.01816 371 640: 43%|████▎ | 3/7 [00:05<00:05, 1.43s/it] Exception in thread Thread-13: Traceback (most recent call last): File "C:\Users\Administrator\anaconda3\envs\py38-v5\lib\threading.py", line 932, in _bootstrap_inner self.run() File "C:\Users\Administrator\anaconda3\envs\py38-v5\lib\threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "D:\chens\3.model\Circuit_breaker\yolov5\utils\plots.py", line 224, in plot_images annotator.text((x + 5, y + 5 + h), text=Path(paths[i]).name[:40], txt_color=(220, 220, 220)) # filenames File "D:\chens\3.model\Circuit_breaker\yolov5\utils\plots.py", line 120, in text w, h = self.font.getsize(text) # text width, height AttributeError: 'FreeTypeFont' object has no attribute 'getsize'
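The `AttributeError: 'FreeTypeFont' object has no attribute 'getsize'` in the excerpt above comes from Pillow 10, which removed `ImageFont.FreeTypeFont.getsize()`. Two common workarounds, sketched here rather than taken from the YOLOv5 repository's own patch: pin Pillow below 10 (`pip install "Pillow<10"`), or measure text with `getbbox()` instead:

```python
# Sketch of replacing the removed getsize() call with getbbox().
from PIL import ImageFont

font = ImageFont.load_default()  # stand-in for the font used in utils/plots.py
text = "example.jpg"

left, top, right, bottom = font.getbbox(text)
w, h = right - left, bottom - top  # equivalent of the old: w, h = font.getsize(text)
print(w, h)
```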

File "E:\Users\Administrator\anaconda3\envs\myi2l\lib\site-packages\OpenGL\platform\egl.py", line 70, in EGL mode=ctypes.RTLD_GLOBAL File "E:\Users\Administrator\anaconda3\envs\myi2l\lib\site-packages\OpenGL\platform\ctypesloader.py", line 45, in loadLibrary return dllType( name, mode ) File "E:\Users\Administrator\anaconda3\envs\myi2l\lib\ctypes\__init__.py", line 364, in __init__ self._handle = _dlopen(self._name, mode) OSError: [WinError 126] 找不到指定的模块。 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "newdemo.py", line 29, in <module> from common.utils.vis import vis_mesh, save_obj, render_mesh File "E:\Program Files\I2L-MeshNet_RELEASE\common\utils\vis.py", line 8, in <module> import pyrender File "E:\Users\Administrator\anaconda3\envs\myi2l\lib\site-packages\pyrender\__init__.py", line 3, in <module> from .light import Light, PointLight, DirectionalLight, SpotLight File "E:\Users\Administrator\anaconda3\envs\myi2l\lib\site-packages\pyrender\light.py", line 10, in <module> from OpenGL.GL import * File "E:\Users\Administrator\anaconda3\envs\myi2l\lib\site-packages\OpenGL\GL\__init__.py", line 3, in <module> from OpenGL import error as _error File "E:\Users\Administrator\anaconda3\envs\myi2l\lib\site-packages\OpenGL\error.py", line 12, in <module> from OpenGL import platform, _configflags File "E:\Users\Administrator\anaconda3\envs\myi2l\lib\site-packages\OpenGL\platform\__init__.py", line 35, in <module> _load() File "E:\Users\Administrator\anaconda3\envs\myi2l\lib\site-packages\OpenGL\platform\__init__.py", line 32, in _load plugin.install(globals()) File "E:\Users\Administrator\anaconda3\envs\myi2l\lib\site-packages\OpenGL\platform\baseplatform.py", line 92, in install namespace[ name ] = getattr(self,name,None) File "

Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> [47 lines of output] Compiling src/gevent/resolver/cares.pyx because it changed. [1/1] Cythonizing src/gevent/resolver/cares.pyx performance hint: src\gevent\libev\corecext.pyx:1339:0: Exception check on '_syserr_cb' will always require the GIL to be acquired. Possible solutions: 1. Declare '_syserr_cb' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions. 2. Use an 'int' return type on '_syserr_cb' to allow an error code to be returned. Error compiling Cython file: ------------------------------------------------------------ ... cdef tuple integer_types if sys.version_info[0] >= 3: integer_types = int, else: integer_types = (int, long) ^ ------------------------------------------------------------ src\gevent\libev\corecext.pyx:60:26: undeclared name not builtin: long Compiling src/gevent/libev/corecext.pyx because it changed. [1/1] Cythonizing src/gevent/libev/corecext.pyx Traceback (most recent call last): File "D:\conda\envs\fay\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 389, in <module> main() File "D:\conda\envs\fay\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 373, in main json_out["return_val"] = hook(**hook_input["kwargs"]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\conda\envs\fay\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 143, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Administrator\AppData\Local\Temp\p

[07/16 09:53:29] ppdet.utils.checkpoint INFO: Skipping import of the encryption module. loading annotations into memory... Done (t=0.00s) creating index... index created! [07/16 09:53:29] ppdet.data.source.coco INFO: Load [217 samples valid, 0 samples invalid] in file C:\Users\Administrator\Desktop\PaddleX\dataset\min_pai\annotations/instance_train.json. use cuda ms_deformable_attn op! [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [23] in pretrained weight transformer.dec_score_head.0.bias is unmatched with the shape [11] in model transformer.dec_score_head.0.bias. And the weight transformer.dec_score_head.0.bias will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [256, 23] in pretrained weight transformer.dec_score_head.0.weight is unmatched with the shape [256, 11] in model transformer.dec_score_head.0.weight. And the weight transformer.dec_score_head.0.weight will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [23] in pretrained weight transformer.dec_score_head.1.bias is unmatched with the shape [11] in model transformer.dec_score_head.1.bias. And the weight transformer.dec_score_head.1.bias will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [256, 23] in pretrained weight transformer.dec_score_head.1.weight is unmatched with the shape [256, 11] in model transformer.dec_score_head.1.weight. And the weight transformer.dec_score_head.1.weight will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [23] in pretrained weight transformer.dec_score_head.2.bias is unmatched with the shape [11] in model transformer.dec_score_head.2.bias. And the weight transformer.dec_score_head.2.bias will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [256, 23] in pretrained weight transformer.dec_score_head.2.weight is unmatched with the shape [256, 11] in model transformer.dec_score_head.2.weight. And the weight transformer.dec_score_head.2.weight will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [23] in pretrained weight transformer.dec_score_head.3.bias is unmatched with the shape [11] in model transformer.dec_score_head.3.bias. And the weight transformer.dec_score_head.3.bias will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [256, 23] in pretrained weight transformer.dec_score_head.3.weight is unmatched with the shape [256, 11] in model transformer.dec_score_head.3.weight. And the weight transformer.dec_score_head.3.weight will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [23] in pretrained weight transformer.dec_score_head.4.bias is unmatched with the shape [11] in model transformer.dec_score_head.4.bias. And the weight transformer.dec_score_head.4.bias will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [256, 23] in pretrained weight transformer.dec_score_head.4.weight is unmatched with the shape [256, 11] in model transformer.dec_score_head.4.weight. And the weight transformer.dec_score_head.4.weight will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [23] in pretrained weight transformer.dec_score_head.5.bias is unmatched with the shape [11] in model transformer.dec_score_head.5.bias. And the weight transformer.dec_score_head.5.bias will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [256, 23] in pretrained weight transformer.dec_score_head.5.weight is unmatched with the shape [256, 11] in model transformer.dec_score_head.5.weight. 
And the weight transformer.dec_score_head.5.weight will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [23, 256] in pretrained weight transformer.denoising_class_embed.weight is unmatched with the shape [11, 256] in model transformer.denoising_class_embed.weight. And the weight transformer.denoising_class_embed.weight will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [23] in pretrained weight transformer.enc_score_head.bias is unmatched with the shape [11] in model transformer.enc_score_head.bias. And the weight transformer.enc_score_head.bias will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: The shape [256, 23] in pretrained weight transformer.enc_score_head.weight is unmatched with the shape [256, 11] in model transformer.enc_score_head.weight. And the weight transformer.enc_score_head.weight will not be loaded [07/16 09:53:33] ppdet.utils.checkpoint INFO: Finish loading model weights: C:\Users\Administrator/.cache/paddle/weights\PP-DocLayout-L_pretrained.pdparams Exception in thread Thread-2 (_thread_loop): Traceback (most recent call last): File "C:\ProgramData\miniconda3\envs\Paddlex\lib\threading.py", line 1016, in _bootstrap_inner self.run() File "C:\ProgramData\miniconda3\envs\Paddlex\lib\threading.py", line 953, in run self._target(*self._args, **self._kwargs) File "C:\ProgramData\miniconda3\envs\Paddlex\lib\site-packages\paddle\io\dataloader\dataloader_iter.py", line 282, in _thread_loop raise e File "C:\ProgramData\miniconda3\envs\Paddlex\lib\site-packages\paddle\io\dataloader\dataloader_iter.py", line 267, in _thread_loop tmp.set(slot, core.CPUPlace()) ValueError: (InvalidArgument) Input object type error or incompatible array data type. tensor.set() supports array with bool, float16, float32, float64, int8, int16, int32, int64, uint8 or uint16, please check your input or input array data type. (at ..\paddle/fluid/pybind/tensor_py.h:537) Traceback (most recent call last): File "C:\Users\Administrator\Desktop\PaddleX\paddlex\repo_manager\repos\PaddleDetection\tools\train.py", line 212, in <module> main() File "C:\Users\Administrator\Desktop\PaddleX\paddlex\repo_manager\repos\PaddleDetection\tools\train.py", line 208, in main run(FLAGS, cfg) File "C:\Users\Administrator\Desktop\PaddleX\paddlex\repo_manager\repos\PaddleDetection\tools\train.py", line 161, in run trainer.train(FLAGS.eval) File "C:\Users\Administrator\Desktop\PaddleX\paddlex\repo_manager\repos\PaddleDetection\ppdet\engine\trainer.py", line 567, in train for step_id, data in enumerate(self.loader): File "C:\ProgramData\miniconda3\envs\Paddlex\lib\site-packages\paddle\io\dataloader\dataloader_iter.py", line 298, in __next__ self._reader.read_next_list()[0] SystemError: (Fatal) Blocking queue is killed because the data reader raises an exception. [Hint: Expected killed_ != true, but received killed_:1 == true:1.] (at ..\paddle/fluid/operators/reader/blocking_queue.h:174)

(oqc) PS C:\Users\Administrator\Downloads\PyNET-PyTorch-master\PyNET-PyTorch-master> python train_model.py level=5 The following parameters will be applied for CNN training: Training level: 5 Batch size: 50 Learning rate: 5e-05 Training epochs: 8 Restore epoch: None Path to the dataset: raw_images/ CUDA visible devices: 1 CUDA Device Name: NVIDIA GeForce RTX 2080 SUPER D:\anaconda\envs\oqc\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead. warnings.warn( D:\anaconda\envs\oqc\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=VGG19_Weights.IMAGENET1K_V1. You can also use weights=VGG19_Weights.DEFAULT to get the most up-to-date weights. warnings.warn(msg) Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to C:\Users\Administrator/.cache\torch\hub\checkpoints\vgg19-dcbb9e9d.pth 100.0% The following parameters will be applied for CNN training: Training level: 5 Batch size: 50 Learning rate: 5e-05 Training epochs: 8 Restore epoch: None Path to the dataset: raw_images/ Traceback (most recent call last): File "C:\Users\Administrator\Downloads\PyNET-PyTorch-master\PyNET-PyTorch-master\train_model.py", line 197, in <module> train_model() File "C:\Users\Administrator\Downloads\PyNET-PyTorch-master\PyNET-PyTorch-master\train_model.py", line 86, in train_model x, y = next(train_iter) File "D:\anaconda\envs\oqc\lib\site-packages\torch\utils\data\dataloader.py", line 733, in __next__ data = self._next_data() File "D:\anaconda\envs\oqc\lib\site-packages\torch\utils\data\dataloader.py", line 1515, in _next_data return self._process_data(data, worker_id) File "D:\anaconda\envs\oqc\lib\site-packages\torch\utils\data\dat

ninja: build stopped: subcommand failed. Traceback (most recent call last): File "G:\soft\Anaconda\Lib\site-packages\torch\utils\cpp_extension.py", line 2506, in _run_ninja_build subprocess.run( File "G:\soft\Anaconda\Lib\subprocess.py", line 571, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<string>", line 2, in <module> File "", line 34, in <module> File "C:\Users\Administrator\pytorch3d\setup.py", line 144, in <module> setup( File "G:\soft\Anaconda\Lib\site-packages\setuptools\__init__.py", line 117, in setup return distutils.core.setup(**attrs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "G:\soft\Anaconda\Lib\site-packages\setuptools\_distutils\core.py", line 183, in setup return run_commands(dist) ^^^^^^^^^^^^^^^^^^ File "G:\soft\Anaconda\Lib\site-packages\setuptools\_distutils\core.py", line 199, in run_commands dist.run_commands() File "G:\soft\Anaconda\Lib\site-packages\setuptools\_distutils\dist.py", line 954, in run_commands self.run_command(cmd) File "G:\soft\Anaconda\Lib\site-packages\setuptools\dist.py", line 950, in run_command super().run_command(command) File "G:\soft\Anaconda\Lib\site-packages\setuptools\_distutils\dist.py", line 973, in run_command cmd_obj.run() File "G:\soft\Anaconda\Lib\site-packages\setuptools\command\develop.py", line 35, in run self.install_for_development() File "G:\soft\Anaconda\Lib\site-packages\setuptools\command\develop.py", line 112, in install_for_development self.run_command('build_ext') File "G:\soft\Anaconda\Lib\site-packages\setuptools\_distutils\cmd.py", line 316, in run_command self.distribution.run_command(command) File "G:\soft\Anaconda\Lib\site-packages\setuptools\dist.py", line 950, in run_command super().run_command(command) File "G:\soft\Anaconda\Lib\site-packages\setuptools\_distutils\dist.py", line 973, in run_command cmd_obj.run() File "G:\soft\Anaconda\Lib\site-packages\setuptools\command\build_ext.py", line 98, in run _build_ext.run(self) File "G:\soft\Anaconda\Lib\site-packages\setuptools\_distutils\command\build_ext.py", line 359, in run self.build_extensions() File "G:\soft\Anaconda\Lib\site-packages\torch\utils\cpp_extension.py", line 1010, in build_extensions build_ext.build_extensions(self) File "G:\soft\Anaconda\Lib\site-packages\setuptools\_distutils\command\build_ext.py", line 476, in build_extensions self._build_extensions_serial() File "G:\soft\Anaconda\Lib\site-packages\setuptools\_distutils\command\build_ext.py", line 502, in _build_extensions_serial self.build_extension(ext) File "G:\soft\Anaconda\Lib\site-packages\setuptools\command\build_ext.py", line 263, in build_extension _build_ext.build_extension(self, ext) File "G:\soft\Anaconda\Lib\site-packages\setuptools\_distutils\command\build_ext.py", line 557, in build_extension objects = self.compiler.compile( ^^^^^^^^^^^^^^^^^^^^^^ File "G:\soft\Anaconda\Lib\site-packages\torch\utils\cpp_extension.py", line 978, in win_wrap_ninja_compile _write_ninja_file_and_compile_objects( File "G:\soft\Anaconda\Lib\site-packages\torch\utils\cpp_extension.py", line 2159, in _write_ninja_file_and_compile_objects _run_ninja_build( File "G:\soft\Anaconda\Lib\site-packages\torch\utils\cpp_extension.py", line 2522, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error compiling objects for extension [end of output] note: This error originates from a subprocess, 
and is likely not a problem with pip.

# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<< Traceback (most recent call last): File "D:\Work\BaiDuPan1\Lib\site-packages\conda\exception_handler.py", line 18, in __call__ return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "D:\Work\BaiDuPan1\Lib\site-packages\conda\cli\main.py", line 61, in main_subshell exit_code = do_call(args, parser) ^^^^^^^^^^^^^^^^^^^^^ File "D:\Work\BaiDuPan1\Lib\site-packages\conda\cli\conda_argparse.py", line 205, in do_call result = getattr(module, func_name)(args, parser) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Work\BaiDuPan1\Lib\site-packages\conda\notices\core.py", line 132, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "D:\Work\BaiDuPan1\Lib\site-packages\conda\cli\main_update.py", line 103, in execute install(args, parser, "update") File "D:\Work\BaiDuPan1\Lib\site-packages\conda\cli\install.py", line 409, in install unlink_link_transaction = solver.solve_for_transaction( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Work\BaiDuPan1\Lib\site-packages\conda\core\solve.py", line 166, in solve_for_transaction context.plugin_manager.invoke_post_solves( File "D:\Work\BaiDuPan1\Lib\site-packages\conda\plugins\manager.py", line 455, in invoke_post_solves for hook in self.get_hook_results("post_solves"): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Work\BaiDuPan1\Lib\site-packages\conda\plugins\manager.py", line 236, in get_hook_results plugins = [item for items in hook(**kwargs) for item in items] ^^^^^ File "D:\Work\BaiDuPan1\Lib\site-packages\conda\plugins\post_solves\signature_verification.py", line 10, in condas from ...trust.signature_verification import signature_verification File "D:\Work\BaiDuPan1\Lib\site-packages\conda\trust\signature_verification.py", line 16, in <module> from conda_content_trust.authentication import verify_delegation, verify_root File "D:\Work\BaiDuPan1\Lib\site-packages\conda_content_trust\authentication.py", line 20, in <module> from .common import ( File "D:\Work\BaiDuPan1\Lib\site-packages\conda_content_trust\common.py", line 55, in <module> from cryptography.hazmat.primitives import serialization File "D:\Work\BaiDuPan1\Lib\site-packages\cryptography\hazmat\primitives\serialization\__init__.py", line 25, in > from cryptography.hazmat.primitives.serialization.ssh import ( File "D:\Work\BaiDuPan1\Lib\site-packages\cryptography\hazmat\primitives\serialization\ssh.py", line 27, in <modu> from cryptography.hazmat.primitives.ciphers import ( File "D:\Work\BaiDuPan1\Lib\site-packages\cryptography\hazmat\primitives\ciphers\__init__.py", line 11, in <modul> from cryptography.hazmat.primitives.ciphers.base import ( File "D:\Work\BaiDuPan1\Lib\site-packages\cryptography\hazmat\primitives\ciphers\base.py", line 143, in <module> CipherContext.register(rust_openssl.ciphers.CipherContext) ^^^^^^^^^^^^^^^^^^^^ AttributeError: module 'openssl' has no attribute 'ciphers' $ D:\Work\BaiDuPan1\Scripts\conda-script.py update openssl cryptography -y environment variables: CIO_TEST=<not set> CLASSPATH=D:\LINGO64_18\Lingo18.jar CONDA_ALLOW_SOFTLINKS=false CONDA_BAT=D:\Work\BaiDuPan1\condabin\conda.bat CONDA_DEFAULT_ENV=base CONDA_EXE=D:\Work\BaiDuPan1\Scripts\conda.exe CONDA_EXES="D:\Work\BaiDuPan1\Scripts\conda.exe" CONDA_PREFIX=D:\Work\BaiDuPan1 CONDA_PROMPT_MODIFIER=(base) CONDA_PYTHON_EXE=D:\Work\BaiDuPan1\python.exe CONDA_ROOT=D:\Work\BaiDuPan1 CONDA_SHLVL=1 CURL_CA_BUNDLE=<not set> HOMEPATH=\Users\30855 LD_PRELOAD=<not set> PATH=D:\Work\BaiDuPan1;D:\Work\BaiDuPan1\Library\mingw- 
w64\bin;D:\Work\BaiDuPan1\Library\usr\bin;D:\Work\BaiDuPan1\Library\bi n;D:\Work\BaiDuPan1\Scripts;D:\Work\BaiDuPan1\bin;D:\Work\BaiDuPan1\co ndabin;D:\MPI\Bin;D:\VMware\bin;D:\浣犳兂瑕佺殑\Scripts;C:\;indows;C:\Window s\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Windows\ System32\OpenSSH;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR;D:\runtime\win64;D:\bin;D:\LINGO64_18;C:\Program Files\dotnet;D:\Work\Pycharm\Git\cmd;D:\RedPanda- Cpp\MinGW64\bin;D:\Work\Cygwin64\bin;D:\OpenSSL- Win64\bin;C:\Windows\System32;C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit;C:\Work\Pycharm\Git\cmd;D:\Work\Ba iDuPan1;D:\Work\BaiDuPan1\Library\mingw- w64\bin;D:\Work\BaiDuPan1\Library\usr\bin;D:\Work\BaiDuPan1\Library\bi n;D:\Work\BaiDuPan1\Scripts;C:\Users\30855\AppData\Local\Microsoft\Win dowsApps;D:\Work\Pycharm\JetBrains\PyCharm Community Edition 2024.2.1\bin;.;;D:\RedPanda- Cpp\MinGW64\bin;C:\Users\30855\AppData\Local\Programs\Microsoft VS Cod e\bin;D:\Quartus18.1\Quartus18.1\modelsim_ase\win32aloem;C:\Windows\Sy stem32;. PSMODULEPATH=C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\Windows PowerShell\v1.0\Modules REQUESTS_CA_BUNDLE=<not set> SSL_CERT_FILE=D:\Work\BaiDuPan1\Library\ssl\cacert.pem active environment : base active env location : D:\Work\BaiDuPan1 shell level : 1 user config file : C:\Users\30855\.condarc populated config files : D:\Work\BaiDuPan1\.condarc C:\Users\30855\.condarc conda version : 24.11.3 conda-build version : 24.9.0 python version : 3.12.7.final.0 solver : libmamba (default) virtual packages : __archspec=1=skylake __conda=24.11.3=0 __cuda=12.0=0 __win=0=0 base environment : D:\Work\BaiDuPan1 (writable) conda av data dir : D:\Work\BaiDuPan1\etc\conda conda av metadata url : None channel URLs : https://repo.anaconda.com/pkgs/main/win-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/win-64 https://repo.anaconda.com/pkgs/r/noarch https://repo.anaconda.com/pkgs/msys2/win-64 https://repo.anaconda.com/pkgs/msys2/noarch package cache : D:\Work\BaiDuPan1\pkgs C:\Users\30855\.conda\pkgs C:\Users\30855\AppData\Local\conda\conda\pkgs envs directories : D:\Work\BaiDuPan1\envs C:\Users\30855\.conda\envs C:\Users\30855\AppData\Local\conda\conda\envs platform : win-64 user-agent : conda/24.11.3 requests/2.32.3 CPython/3.12.7 Windows/11 Windows/10.0.22631 solver/libmamba co8 administrator : False netrc file : None offline mode : False An unexpected error has occurred. Conda has prepared the above report. If you suspect this error is being caused by a malfunctioning plugin, consider using the --no-plugins option to turn off plugins. Example: conda --no-plugins install Alternatively, you can set the CONDA_NO_PLUGINS environment variable on the command line to run the command without plugins enabled. Example: CONDA_NO_PLUGINS=true conda install Upload did not complete.

Traceback (most recent call last): File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\utils\cpp_extension.py", line 2107, in _run_ninja_build subprocess.run( File "D:\anaconda\envs\pytorch2.3.1\lib\subprocess.py", line 528, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\TrainCNO.py", line 160, in <module> output_pred_batch = model(input_batch) File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\CNOModule.py", line 476, in forward x = self.lift(x) File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\CNOModule.py", line 151, in forward x = self.inter_CNOBlock(x) File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\CNOModule.py", line 104, in forward return self.activation(x) File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\training\filtered_networks.py", line 392, in forward x = filtered_lrelu.filtered_lrelu(x=x, fu=self.up_filter, fd=self.down_filter, b=self.bias.to(x.dtype), File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\torch_utils\ops\filtered_lrelu.py", line 114, in filtered_lrelu if impl == 'cuda' and x.device.type == 'cuda' and _init(): File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\torch_utils\ops\filtered_lrelu.py", line 26, in _init _plugin = custom_ops.get_plugin( File "D:\pycharm\pythonProject\ConvolutionalNeuralOperator-main\torch_utils\custom_ops.py", line 136, in get_plugin torch.utils.cpp_extension.load(name=module_name, build_directory=cached_build_dir, File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\utils\cpp_extension.py", line 1309, in load return _jit_compile( File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\utils\cpp_extension.py", line 1719, in _jit_compile _write_ninja_file_and_build_library( File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\utils\cpp_extension.py", line 1832, in _write_ninja_file_and_build_library _run_ninja_build( File "D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\utils\cpp_extension.py", line 2123, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error 
building extension 'filtered_lrelu_plugin': [1/4] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc --generate-dependencies-with-compile --dependency-output filtered_lrelu_ns.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4068 -Xcompiler /wd4067 -Xcompiler /wd4624 -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=filtered_lrelu_plugin -DTORCH_API_INCLUDE_EXTENSION_H -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\torch\csrc\api\include -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\TH -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -ID:\anaconda\envs\pytorch2.3.1\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_89,code=compute_89 -gencode=arch=compute_89,code=sm_89 -std=c++17 --use_fast_math --allow-unsupported-compiler -c C:\Users\Administrator\AppData\Local\torch_extensions\torch_extensions\Cache\py39_cu118\filtered_lrelu_plugin\2e9606d7cf844ec44b9f500eaacd35c0-nvidia-geforce-rtx-4060-ti\filtered_lrelu_ns.cu -o filtered_lrelu_ns.cuda.o [2/4] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc --generate-dependencies-with-compile --dependency-output filtered_lrelu_rd.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4068 -Xcompiler /wd4067 -Xcompiler /wd4624 -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=filtered_lrelu_plugin -DTORCH_API_INCLUDE_EXTENSION_H -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\torch\csrc\api\include -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\TH -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -ID:\anaconda\envs\pytorch2.3.1\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_89,code=compute_89 -gencode=arch=compute_89,code=sm_89 -std=c++17 --use_fast_math --allow-unsupported-compiler -c C:\Users\Administrator\AppData\Local\torch_extensions\torch_extensions\Cache\py39_cu118\filtered_lrelu_plugin\2e9606d7cf844ec44b9f500eaacd35c0-nvidia-geforce-rtx-4060-ti\filtered_lrelu_rd.cu -o filtered_lrelu_rd.cuda.o [3/4] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc --generate-dependencies-with-compile --dependency-output filtered_lrelu_wr.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe 
--diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4068 -Xcompiler /wd4067 -Xcompiler /wd4624 -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=filtered_lrelu_plugin -DTORCH_API_INCLUDE_EXTENSION_H -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\torch\csrc\api\include -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\TH -ID:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -ID:\anaconda\envs\pytorch2.3.1\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_89,code=compute_89 -gencode=arch=compute_89,code=sm_89 -std=c++17 --use_fast_math --allow-unsupported-compiler -c C:\Users\Administrator\AppData\Local\torch_extensions\torch_extensions\Cache\py39_cu118\filtered_lrelu_plugin\2e9606d7cf844ec44b9f500eaacd35c0-nvidia-geforce-rtx-4060-ti\filtered_lrelu_wr.cu -o filtered_lrelu_wr.cuda.o [4/4] "D:\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64/link.exe" filtered_lrelu.o filtered_lrelu_wr.cuda.o filtered_lrelu_rd.cuda.o filtered_lrelu_ns.cuda.o /nologo /DLL c10.lib c10_cuda.lib torch_cpu.lib torch_cuda.lib -INCLUDE:?warp_size@cuda@at@@YAHXZ torch.lib /LIBPATH:D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\lib torch_python.lib /LIBPATH:D:\anaconda\envs\pytorch2.3.1\libs "/LIBPATH:C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\lib\x64" cudart.lib /out:filtered_lrelu_plugin.pyd FAILED: filtered_lrelu_plugin.pyd "D:\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64/link.exe" filtered_lrelu.o filtered_lrelu_wr.cuda.o filtered_lrelu_rd.cuda.o filtered_lrelu_ns.cuda.o /nologo /DLL c10.lib c10_cuda.lib torch_cpu.lib torch_cuda.lib -INCLUDE:?warp_size@cuda@at@@YAHXZ torch.lib /LIBPATH:D:\anaconda\envs\pytorch2.3.1\lib\site-packages\torch\lib torch_python.lib /LIBPATH:D:\anaconda\envs\pytorch2.3.1\libs "/LIBPATH:C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\lib\x64" cudart.lib /out:filtered_lrelu_plugin.pyd Creating library filtered_lrelu_plugin.lib and object filtered_lrelu_plugin.exp filtered_lrelu.o : error LNK2019: unresolved external symbol __std_find_trivial_1 referenced in function "char const * __cdecl std::_Find_vectorized<char const ,char>(char const * const,char const * const,char)" (??$_Find_vectorized@$$CBDD@std@@YAPEBDQEBD0D@Z) ninja: build stopped: subcommand failed. What is going on here?

Latest recommendations


Mitsubishi FX3U three-axis servo motors with a Weintek touch screen: the combined program explained — axis jogging, homing, positioning control, and a full-process walkthrough

How to write and apply programs for a Mitsubishi FX3U three-axis servo system with a Weintek touch screen. The content covers the servo main control program, the touch-screen program, the axis jogging, homing and positioning routines, the communication module program, and an analysis of the Weintek HMI program. By working through each module in depth, it helps the reader understand what every part does and how it is implemented, so that motion control remains accurate, efficient, and stable. It also discusses problems that may come up while writing the programs and how to solve them.

Audience: engineers and technicians in industrial automation, especially professionals who work hands-on with the Mitsubishi FX3U three-axis servo system and Weintek touch screens.

Usage scenarios and goals: industrial automation projects; the aim is to deepen understanding of the FX3U servo system and the Weintek HMI, master modular programming techniques, and solve programming problems in real projects.

Other notes: besides the implementation details of each module, the article stresses program safety and reliability, providing solid support for a successful project.

职业介绍与人才招聘综合管理系统-基于宏达数据库信息管理开发平台的专业人力资源服务软件-包含基本信息设置-用人单位管理-求职人员登记-数据查询-统计分析-报表生成-打印输出-权限控制.zip


基于Spark2x分布式计算框架的实时新闻大数据分析可视化系统-实现用户浏览日志采集与实时处理-新闻话题热度排名统计-时段流量峰值分析-新闻曝光量监控-数据可视化展示-采用Kaf.zip


基于springboot小型哺乳类宠物诊所管理系统-4339s0c8【附万字论文+PPT+包部署+录制讲解视频】.zip


Simulink-based simulation model of a grid-connected wind-power permanent-magnet synchronous machine (PMSM) system, and a study of its SVPWM control mechanism

A Simulink/MATLAB simulation model of a grid-connected wind-power PMSM system. The model covers SVPWM control, an MPPT wind-energy tracking algorithm, and a Crowbar circuit for low-voltage ride-through. The article first explains how the machine-side converter works and how its core, the MPPT algorithm, is implemented: maximum power point tracking is done with a golden-section search (see the illustrative sketch after this description), and the corresponding MATLAB function code is provided. It then discusses grid-voltage-oriented control of the grid-side converter and the use of the SVPWM module, stressing the importance of the carrier-frequency setting and dead-time compensation. For the Crowbar circuit, it focuses on the trigger logic and protection mechanism that keep the system running stably during sudden grid-voltage dips. It also shares simulation tips, such as choosing a suitable solver and tuning parameters.

Audience: researchers working on wind-power systems, university students and teachers in related majors, and engineers interested in power-electronic control systems.

Usage scenarios and goals: (1) provide reliable model support for wind-power grid-connection simulation; (2) gain a deep understanding of how the SVPWM control, the MPPT algorithm, and the Crowbar circuit work; (3) master the design and optimization of the key components of a wind-power system.

Other notes: the article provides detailed theoretical analysis and technical details along with concrete code snippets, so readers can reproduce and verify the results themselves.
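To make the golden-section idea above concrete, here is a minimal sketch (in Python rather than the MATLAB function shipped with the model, and with a made-up power curve) of how the search narrows in on the maximum-power operating point:

```python
# Illustrative golden-section MPPT sketch (not the model's MATLAB code).
# power(w) stands in for turbine power at rotor speed w; here a hypothetical curve.
def power(w):
    return -(w - 3.2) ** 2 + 10.0  # made-up P(w) with its maximum at w = 3.2

def golden_section_mppt(lo, hi, tol=1e-3):
    """Narrow [lo, hi] around the rotor speed that maximizes power(w)."""
    phi = (5 ** 0.5 - 1) / 2          # golden-ratio factor, about 0.618
    a, b = lo, hi
    x1 = b - phi * (b - a)
    x2 = a + phi * (b - a)
    while b - a > tol:
        if power(x1) < power(x2):     # maximum lies in [x1, b]
            a, x1 = x1, x2
            x2 = a + phi * (b - a)
        else:                         # maximum lies in [a, x2]
            b, x2 = x2, x1
            x1 = b - phi * (b - a)
    return (a + b) / 2

print(golden_section_mppt(0.0, 6.0))  # approximately 3.2 for the hypothetical curve
```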

The Pansophica open-source project: exploring an intelligent web search agent

The Pansophica open-source project is a relatively new and innovative intelligent web search agent that goes beyond traditional search engines and offers a new way of interacting with results. An intelligent web search agent is a program or service that automatically runs web searches for a user's query and tries to personalize the results based on the user's interests, search history, or other input.

Pansophica does more than list search results: it presents them in a dynamic, interactive virtual-reality environment. Existing engines such as Google, Bing, and Baidu mostly show results as static text and lists of links, whereas Pansophica lets the searcher rotate the viewpoint, "fly" through the result space, and "bounce" between pieces of content. This multi-dimensional interaction makes browsing information faster and more intuitive, and could change how users interact with online information.

As open-source software, Pansophica's source code is publicly available, and any individual or organization may freely use, study, modify, and redistribute it. Open-source software is usually developed and maintained by a community, a model that encourages collaborative innovation and reduces duplicated work, since developers anywhere can contribute. Because Pansophica is open source, other developers can access its code, improve and extend it, or even build new applications and services on top of it.

The file name "Pansophica-src-1.3" identifies the specific release of the source code under discussion; "1.3" is most likely the version number, indicating the project's 1.3 public release. Version numbers matter to developers because they make it possible to track update progress, compare differences between releases, and pick the version best suited to a project.

In summary, Pansophica introduces an innovative interaction model to the search-agent space and, through its open-source nature, enables technology sharing and collaboration. Open-source projects are an important driving force in IT: they energize communities and spread knowledge and technical progress. By building an open-source community, Pansophica can attract developers and researchers from around the world to explore combining search technology with interactive virtual reality, bringing users a richer and more effective retrieval experience, and community contributions and feedback should allow the platform to keep improving in features and performance as technology evolves.

Worry-free cross-platform content extraction: coze workflows meet the challenges of social media

# 1. Why social-media content extraction matters, and its challenges

## 1.1 The need for content extraction

In the digital age, social-media content is a main artery of information and data flow, and companies, individuals, and researchers all look to extract value from it. Whether for market-trend analysis, brand monitoring, consumer insight, or public-opinion monitoring, effective social-media content extraction has become critical. It not only helps businesses make data-driven decisions, it also helps individuals better understand

Gratuitous ARP sent by the VRRP master device

VRRP (Virtual Router Redundancy Protocol) is mainly used to provide a redundant backup for the default gateway on a LAN and keep the network highly available: the master device forwards traffic, while the backup devices take over when the master fails. The question here concerns why the master sends gratuitous ARP and how that works. According to reference [2], gratuitous ARP is used to notify downstream devices of the virtual MAC address binding: once a master has been elected, it must let the other devices on the LAN know that the virtual IP now maps to its own (virtual) MAC address, so that traffic is routed to the master correctly. The role of the gratuitous ARP is therefore to refresh those devices' ARP caches and the switches' MAC address tables.
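A minimal sketch (illustrative, not from the excerpt above) of what such a gratuitous ARP announcement looks like when built with scapy, using an assumed virtual IP, VRID, and interface name:

```python
# Build a gratuitous ARP for a VRRP virtual address (assumed VIP, VRID, and interface).
# Sending raw frames requires scapy and administrator/root privileges.
from scapy.all import ARP, Ether, sendp

VIP = "192.168.1.254"        # assumed virtual IP owned by the VRRP group
VMAC = "00:00:5e:00:01:01"   # IPv4 VRRP virtual MAC: 00-00-5E-00-01-{VRID}, here VRID 1

garp = (
    Ether(src=VMAC, dst="ff:ff:ff:ff:ff:ff") /
    ARP(op=2,                                 # ARP reply ("is-at") announcing the binding
        hwsrc=VMAC, psrc=VIP,                 # sender = virtual MAC / virtual IP
        hwdst="ff:ff:ff:ff:ff:ff", pdst=VIP)  # target IP equals sender IP -> gratuitous
)

print(garp.summary())
# sendp(garp, iface="eth0")  # uncomment to actually send on the LAN-facing interface
```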

A usage guide for the Meteor package built for the Ghost blogging platform

From the given file information, we can extract the following IT knowledge points:

### Title: the meteor-ghost package

1. **Purpose**: meteor-ghost is a Meteor application/package designed specifically for the Ghost blogging platform. Meteor is an open-source full-stack JavaScript platform for building high-performance, easy-to-write web applications; Ghost is an open-source blogging platform that provides a simple, professional writing environment.
2. **What the package does**: it lets users integrate a Ghost blog into the Meteor platform with little effort, combining Meteor's real-time features and easy development and deployment with Ghost's convenience and polish.

### Description: how to use the package

1. **Installation**: add the `mrt:ghost` package with Meteor's command-line tooling; `mrt` is the Meteor command-line tool used to add, manage, and configure packages.
2. **Initializing the Ghost server**: the description shows the basic code for running Ghost at server start-up. It uses a JavaScript Promise: `ghost().then(function (ghostServer) {...})` runs the callback with a Ghost server instance once initialization has finished.
3. **Configuring the blog**: inside `then`, the Ghost server's `config` object is available for customization, for example changing the theme or other settings.
4. **Starting the server**: after configuration, calling `ghostServer.start()` starts the Ghost service so it can handle blog requests.
5. **Browsing**: once the Meteor server is up and running, the Ghost blog can be reached from a web browser.

### Tag: JavaScript

1. **Development language**: the package is written in JavaScript, the scripting language widely used in browsers and the foundational language of the Meteor platform.
2. **Shared language with Ghost**: Ghost is also written in JavaScript, which is why the integration is seamless — the underlying technology stack is the same.

### Archive file name: meteor-ghost-master

1. **Version control**: the name suggests the package is hosted on a version-control service such as GitHub, with `master` usually referring to the main branch or mainline version.
2. **Package layout**: it implies a standard Meteor package structure containing the modules and files needed for initialization, configuration, and running.
3. **Maintenance status**: since the name contains no explicit version number, the latest release cannot be determined directly; maintainers usually keep the most recent code on the `master` branch.

### Summary

meteor-ghost gives Meteor developers an effective way to add Ghost blogging to their applications: install it via the Meteor command line, then configure and start the Ghost service from JavaScript. Through this package, developers get Meteor's real-time features together with the convenience of the Ghost blogging system. The package's naming and structure also hint at how it is versioned and maintained, which helps developers understand how to use and maintain it.

Automating Douyin title generation: unleashing creativity with coze workflows

# 1. Why automating Douyin title generation matters

With the rise of social-media platforms, how attractive a piece of content is depends largely on how creative and precise its title is. On Douyin, a short-video platform with daily active users on the order of a hundred million, a high-quality title can noticeably raise a video's click-through rate and help the content spread. Writing titles by hand, however, is time-consuming and labor-intensive, and it is hard to respond quickly to trending topics, so automated title-generation tools have emerged. The importance of the coze workflow, as one tool for automatically generating Douyin titles, is self-evident: it can draw on big-data analysis and machine-learning techniques to improve a title's appeal