
Testing a CNN on the Kaggle Flowers Dataset with the PyTorch C++ API

ZIP archive | 1.45 MB | Updated 2025-08-14
Key topics:

1. PyTorch C++ API (LibTorch): PyTorch is a popular open-source machine learning library aimed primarily at Python, but its core is written in C++. The PyTorch C++ API, also known as LibTorch, is the C++ distribution of PyTorch and lets developers use PyTorch's full functionality entirely from C++. Although the Python interface is far more widely used, LibTorch is valuable where high performance is required or the overhead of the Python interpreter must be avoided, such as embedded systems or production environments.

2. Kaggle Flowers dataset: Kaggle is a well-known data science competition platform that provides many datasets for training and competitions. The Kaggle Flowers dataset contains images of several flower species and is typically used for image classification tasks. It is a good starting point for beginners: the dataset is small and the task is a basic visual recognition problem, so training a deep learning model on it helps learners understand the fundamentals of image processing and classification.

3. Convolutional neural networks (CNNs): A CNN is a deep learning model particularly well suited to images and other two-dimensional data. It usually combines convolutional layers, pooling (downsampling) layers, and fully connected layers, and this stack learns hierarchical image features automatically, without hand-crafted feature extraction. In this project, a small CNN implemented with the PyTorch C++ API extracts features from the Kaggle Flowers images and classifies them (a minimal LibTorch sketch of such a network follows this list).

4. Installing LibTorch on Ubuntu: LibTorch can be installed on Ubuntu in several ways, including a direct installation or inside an Anaconda environment. A direct installation usually involves downloading the prebuilt LibTorch package, setting environment variables, and configuring system paths. Anaconda offers a more convenient environment-management scheme that lets users create isolated environments and avoid package version conflicts. During installation, the environment variables must be adjusted to the local configuration so that the system can locate the LibTorch library files.

5. Code management: This project uses Git to obtain and manage the 5_classes_PyTorch_test code. A working directory is created first, and git clone copies the remote repository to the local machine; the code can then be viewed and edited in an editor such as Visual Studio Code. This part covers the basic version control and code management workflow, which is essential for development work.

6. Configuring CMakeLists.txt: CMake is a cross-platform build system widely used to drive the compilation process and generate build files (such as Makefiles). In this project, CMakeLists.txt must be edited so that its configuration matches the concrete paths of the local development environment, which usually means specifying library paths, include directories, and other build parameters. For a large dependency like LibTorch, getting the CMake configuration right is the key step for compiling and running the program (a sample CMakeLists.txt sketch also appears below).

7. C++ programming and deep learning: This project highlights the role of C++ in deep learning. Although Python is the mainstream language for deep learning research and development, C++ still has advantages where performance matters. Knowing how to build and deploy deep learning models with the PyTorch C++ API gives developers finer control over performance and resource usage when needed, and it also helps in understanding how deep learning libraries and their algorithms work internally, deepening one's grasp of how machine learning models are built.
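To make points 1, 3, and 7 concrete, here is a minimal LibTorch sketch of a small CNN of the kind described above. It is not the code contained in the archive: the layer sizes, the 64x64 RGB input resolution, the 5-class output, and the dummy random batch are assumptions made purely for illustration.

```cpp
#include <torch/torch.h>
#include <iostream>

// Small CNN for 5-class flower classification (sizes assume 3x64x64 inputs; illustrative only).
struct SmallCNN : torch::nn::Module {
  SmallCNN()
      : conv1(torch::nn::Conv2dOptions(3, 16, /*kernel_size=*/3).padding(1)),
        conv2(torch::nn::Conv2dOptions(16, 32, /*kernel_size=*/3).padding(1)),
        fc1(32 * 16 * 16, 128),
        fc2(128, 5) {
    register_module("conv1", conv1);
    register_module("conv2", conv2);
    register_module("fc1", fc1);
    register_module("fc2", fc2);
  }

  torch::Tensor forward(torch::Tensor x) {
    x = torch::max_pool2d(torch::relu(conv1->forward(x)), 2);  // 3x64x64 -> 16x32x32
    x = torch::max_pool2d(torch::relu(conv2->forward(x)), 2);  // 16x32x32 -> 32x16x16
    x = x.view({x.size(0), -1});                               // flatten
    x = torch::relu(fc1->forward(x));
    return fc2->forward(x);                                    // raw logits for 5 classes
  }

  torch::nn::Conv2d conv1, conv2;
  torch::nn::Linear fc1, fc2;
};

int main() {
  auto net = std::make_shared<SmallCNN>();
  torch::optim::SGD optimizer(net->parameters(), /*lr=*/0.01);

  // Dummy batch standing in for preprocessed flower images and labels.
  auto images = torch::rand({8, 3, 64, 64});
  auto labels = torch::randint(0, 5, {8}, torch::kLong);

  for (int epoch = 0; epoch < 3; ++epoch) {
    optimizer.zero_grad();
    auto logits = net->forward(images);
    auto loss = torch::nll_loss(torch::log_softmax(logits, /*dim=*/1), labels);
    loss.backward();
    optimizer.step();
    std::cout << "epoch " << epoch << " loss " << loss.item<float>() << std::endl;
  }
  return 0;
}
```

In a real run, the random batch would be replaced by tensors loaded from the Kaggle Flowers images; the training loop itself (forward pass, loss, backward pass, optimizer step) is the same.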
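For point 6, the following is a minimal CMakeLists.txt sketch in the style of the official LibTorch example. The target name flowers_cnn_test and the source file main.cpp are placeholders, not the actual contents of the archive, and CMAKE_PREFIX_PATH must be adjusted to the locally unpacked LibTorch directory.

```cmake
# Minimal CMakeLists.txt sketch for a LibTorch project (names and paths are placeholders).
cmake_minimum_required(VERSION 3.18)
project(flowers_cnn_test)

# Configure with the path to the unpacked LibTorch distribution, e.g.:
#   cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..
find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")

add_executable(flowers_cnn_test main.cpp)
target_link_libraries(flowers_cnn_test "${TORCH_LIBRARIES}")
set_property(TARGET flowers_cnn_test PROPERTY CXX_STANDARD 17)
```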

验证CUDA编译 (rtx5070_env) PS E:\PyTorch_Build\pytorch\build> .\compile_cuda_test.ps1 .\compile_cuda_test.ps1: The term '.\compile_cuda_test.ps1' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (rtx5070_env) PS E:\PyTorch_Build\pytorch\build> (rtx5070_env) PS E:\PyTorch_Build\pytorch\build> # 7. 验证PyTorch安装 (rtx5070_env) PS E:\PyTorch_Build\pytorch\build> python -c "import torch; print(f'PyTorch版本: {torch.__version__}'); print(f'CUDA可用: {torch.cuda.is_available()}')" Traceback (most recent call last): File "<string>", line 1, in <module> ModuleNotFoundError: No module named 'torch' (rtx5070_env) PS E:\PyTorch_Build\pytorch\build> if ($?) { >> python -c @" >> import torch >> print(f'CUDA设备数量: {torch.cuda.device_count()}') >> if torch.cuda.is_available(): >> print(f'设备0名称: {torch.cuda.get_device_name(0)}') >> "@ >> } (rtx5070_env) PS E:\PyTorch_Build\pytorch\build>
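The chain of failures above comes down to two things: the helper scripts were written to the repository root, but `full_build.ps1` changes into `build\` before calling them, so `.\cmake_config.ps1` cannot be found and no build is ever configured; and the freshly created `rtx5070_env` has no PyTorch installed, hence `ModuleNotFoundError: No module named 'torch'`. Before re-running the verification step, a minimal Python sketch like the following (an illustrative check, not part of the original scripts) can confirm whether a torch distribution is actually installed in the active environment:

```python
# Sanity check: is a torch wheel actually installed into the active
# environment, as opposed to merely being importable from the source checkout?
from importlib import metadata

try:
    print("installed torch distribution:", metadata.version("torch"))
except metadata.PackageNotFoundError:
    print("no torch distribution is installed in this environment yet - "
          "the build/install step has not completed.")
```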

filetype

import numpy as np import matplotlib matplotlib.use('Agg') # 非交互模式 import matplotlib.pyplot as plt from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.metrics import classification_report, confusion_matrix import seaborn as sns # 生成4分类数据集 X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_classes=4, n_clusters_per_class=2, random_state=42) X = StandardScaler().fit_transform(X) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) class LogisticRegression: def __init__(self, lr=0.01, n_iters=100): self.lr = lr self.n_iters = n_iters self.weights = None self.bias = None self.losses = [] self.accuracies = [] def _sigmoid(self, z): return 1 / (1 + np.exp(-z)) def fit(self, X, y): n_samples, n_features = X.shape self.weights = np.zeros(n_features) self.bias = 0 for _ in range(self.n_iters): linear = np.dot(X, self.weights) + self.bias y_pred = self._sigmoid(linear) # 计算梯度 dw = (1 / n_samples) * np.dot(X.T, (y_pred - y)) db = (1 / n_samples) * np.sum(y_pred - y) # 更新参数 self.weights -= self.lr * dw self.bias -= self.lr * db # 记录损失和准确率 loss = -np.mean(y * np.log(y_pred) + (1 - y) * np.log(1 - y_pred)) self.losses.append(loss) acc = np.mean((y_pred >= 0.5).astype(int) == y) self.accuracies.append(acc) def predict(self, X): linear = np.dot(X, self.weights) + self.bias return (self._sigmoid(linear) >= 0.5).astype(int) class OvR: def __init__(self, n_classes, lr=0.01, n_iters=100): self.n_classes = n_classes self.classifiers = [LogisticRegression(lr, n_iters) for _ in range(n_classes)] self.loss_history = [] self.acc_history = [] def fit(self, X, y): for i in range(self.n_classes): # 创建二分类标签 y_binary = (y == i).astype(int) self.classifiers[i].fit(X, y_binary) # 收集训练指标 self.loss_history.append(self.classifiers[i].losses) self.acc_history.append(self.classifiers[i].accuracies) def predict(self, X): probas = np.array([clf._sigmoid(np.dot(X, clf.weights) + clf.bias) for clf in self.classifiers]).T return np.argmax(probas, axis=1) class OvO: def __init__(self, n_classes, lr=0.01, n_iters=100): self.n_classes = n_classes self.classifiers = {} self.pairs = [] self.loss_history = [] self.acc_history = [] def fit(self, X, y): # 生成所有类别组合 for i in range(self.n_classes): for j in range(i + 1, self.n_classes): mask = np.logical_or(y == i, y == j) X_pair = X[mask] y_pair = (y[mask] == i).astype(int) lr = LogisticRegression(lr=0.01, n_iters=100) lr.fit(X_pair, y_pair) self.classifiers[(i, j)] = lr self.pairs.append((i, j)) self.loss_history.append(lr.losses) self.acc_history.append(lr.accuracies) def predict(self, X): votes = np.zeros((X.shape[0], self.n_classes)) for (i, j), clf in self.classifiers.items(): pred = clf.predict(X) votes[pred == 0, j] += 1 votes[pred == 1, i] += 1 return np.argmax(votes, axis=1) # 训练模型 ovr = OvR(n_classes=4, lr=0.1, n_iters=200) ovr.fit(X_train, y_train) ovo = OvO(n_classes=4, lr=0.1, n_iters=200) ovo.fit(X_train, y_train) # 绘制训练曲线 plt.figure(figsize=(12, 6)) for i in range(4): plt.plot(ovr.loss_history[i], label=f'Class {i} Loss') plt.title('OvR Training Loss') plt.xlabel('Epoch') plt.ylabel('Loss') plt.legend() plt.savefig('ovr_loss.png') plt.close() plt.figure(figsize=(12, 6)) for i in range(4): plt.plot(ovr.acc_history[i], label=f'Class {i} Acc') plt.title('OvR Training Accuracy') plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.legend() plt.savefig('ovr_acc.png') plt.close() # 模型评估 def evaluate_model(model, 
X_test, y_test, method_name): y_pred = model.predict(X_test) print(f'{method_name}分类报告:\n', classification_report(y_test, y_pred)) plt.figure(figsize=(10, 7)) sns.heatmap(confusion_matrix(y_test, y_pred), annot=True, fmt='d', cmap='Blues') plt.title(f'{method_name} Confusion Matrix') plt.savefig(f'{method_name}_cm.png') plt.close() evaluate_model(ovr, X_test, y_test, 'OvR') evaluate_model(ovo, X_test, y_test, 'OvO')为什么ovo只有一个可视化输出,增加损失曲线和准确率曲线
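To answer the question at the end of this snippet: the OvO model shows up only as a confusion matrix because the curve-plotting section iterates solely over `ovr.loss_history` and `ovr.acc_history`; the per-pair histories collected in `OvO.fit` are never drawn. A minimal sketch that follows the same matplotlib conventions as the code above and adds OvO loss and accuracy curves (it assumes the trained `ovo` object and `plt` are still in scope):

```python
# Plot per-pair OvO training curves; ovo.loss_history / ovo.acc_history were
# filled in OvO.fit, one entry per class pair recorded in ovo.pairs.
plt.figure(figsize=(12, 6))
for (i, j), losses in zip(ovo.pairs, ovo.loss_history):
    plt.plot(losses, label=f'Pair {i} vs {j} Loss')
plt.title('OvO Training Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.savefig('ovo_loss.png')
plt.close()

plt.figure(figsize=(12, 6))
for (i, j), accs in zip(ovo.pairs, ovo.acc_history):
    plt.plot(accs, label=f'Pair {i} vs {j} Acc')
plt.title('OvO Training Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.savefig('ovo_acc.png')
plt.close()
```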

filetype

# 假设df_virus和labels已经定义 features = df_virus.iloc[:, :-2].values labels = df_virus['label'].values # Load your model (this is assuming you have saved your model in PyTorch format) validation_size = 0.15 performance_results = [] test_results = [] device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print(device) for train_percentage in [0.05, 0.025, 0.015, 0.01,0.004]: print(f"\nTraining percentage: {train_percentage * 100}%") for repeat in range(10): print(f"\nRepetition: {repeat + 1}") X_temp, X_val, y_temp, y_val = train_test_split( features, labels, test_size=0.15, stratify=labels, random_state=42 + repeat ) X_train, X_test, y_train, y_test = train_test_split( X_temp, y_temp, train_size=train_percentage / 0.85, stratify=y_temp, random_state=42 + repeat ) print(f"Training set size: {len(X_train)}, Validation set size: {len(X_val)}, Test set size: {len(X_test)}") scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train) X_val_scaled = scaler.transform(X_val) X_test_scaled = scaler.transform(X_test) label_encoder = LabelEncoder() y_train_encoded = label_encoder.fit_transform(y_train) y_val_encoded = label_encoder.transform(y_val) y_test_encoded = label_encoder.transform(y_test) # Convert to PyTorch tensors X_train_tensor = torch.tensor(X_train_scaled, dtype=torch.float32).unsqueeze(1).to(device) X_val_tensor = torch.tensor(X_val_scaled, dtype=torch.float32).unsqueeze(1).to(device) X_test_tensor = torch.tensor(X_test_scaled, dtype=torch.float32).unsqueeze(1).to(device) y_train_tensor = torch.tensor(y_train_encoded, dtype=torch.long).to(device) y_val_tensor = torch.tensor(y_val_encoded, dtype=torch.long).to(device) y_test_tensor = torch.tensor(y_test_encoded, dtype=torch.long).to(device) train_dataset = TensorDataset(X_train_tensor, y_train_tensor) val_dataset = TensorDataset(X_val_tensor, y_val_tensor) test_dataset = TensorDataset(X_test_tensor, y_test_tensor) train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True) val_loader = DataLoader(val_dataset, batch_size=64) test_loader = DataLoader(test_dataset, batch_size=64) model = CNN1DModel() print("加载前 fc1 权重:", model.fc1.weight[:2, :2]) model.load_state_dict(torch.load(r"G:\VSCode\tl\pytorch_model_spet16.pth")) print("加载后 fc1 权重:", model.fc1.weight[:2, :2]) num_classes = len(np.unique(labels)) last_layer_name = 'fc3' num_ftrs = model._modules[last_layer_name].in_features model._modules[last_layer_name] = nn.Linear(num_ftrs, 6) # Freeze layers: Similar to TensorFlow, we'll freeze the CNN layers for param in model.parameters(): param.requires_grad = False for param in model.conv1.parameters(): param.requires_grad = True for param in model.bn1.parameters(): param.requires_grad = True for param in model.conv2.parameters(): param.requires_grad = True for param in model.bn2.parameters(): param.requires_grad = True for param in model.conv3.parameters(): param.requires_grad = True for param in model.bn3.parameters(): param.requires_grad = True for param in model.fc1.parameters(): param.requires_grad = True for param in model.fc2.parameters(): param.requires_grad = True for param in model.fc3.parameters(): param.requires_grad = True model = model.to(device) optimizer = optim.Adam(model.parameters(), lr=3e-5) criterion = nn.CrossEntropyLoss() epochs = 600 # Early Stopping parameters patience = 7 # 连续多少个epoch验证集损失没有下降就停止训练 best_val_loss = float('inf') counter = 0 best_model_state = None # 用于保存最佳模型的参数 for epoch in tqdm(range(epochs)): model.train() running_loss = 0.0 correct = 0 total = 0 for X_batch, y_batch 
in train_loader: optimizer.zero_grad() outputs = model(X_batch) loss = criterion(outputs, y_batch) loss.backward() optimizer.step() running_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total += y_batch.size(0) correct += (predicted == y_batch).sum().item() train_accuracy = correct / total train_loss = running_loss / len(train_loader) # Validation model.eval() val_loss = 0.0 correct_val = 0 total_val = 0 with torch.no_grad(): for X_val_batch, y_val_batch in val_loader: outputs = model(X_val_batch) loss = criterion(outputs, y_val_batch) val_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total_val += y_val_batch.size(0) correct_val += (predicted == y_val_batch).sum().item() val_accuracy = correct_val / total_val val_loss = val_loss / len(val_loader) performance_results.append({ 'train_percentage': train_percentage, 'repeat': repeat + 1, 'epoch': epoch + 1, 'train_accuracy': train_accuracy, 'val_accuracy': val_accuracy, 'train_loss': train_loss, 'val_loss': val_loss }) # Early Stopping check if val_loss < best_val_loss: best_val_loss = val_loss counter = 0 best_model_state = model.state_dict() # 保存最佳模型的参数 else: counter += 1 if counter >= patience: print(f"Early stopping at epoch {epoch + 1}") break # 加载最佳模型的参数 if best_model_state is not None: model.load_state_dict(best_model_state) # Test set evaluation model.eval() test_loss = 0.0 correct_test = 0 total_test = 0 all_preds = [] with torch.no_grad(): for X_test_batch, y_test_batch in test_loader: outputs = model(X_test_batch) loss = criterion(outputs, y_test_batch) test_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total_test += y_test_batch.size(0) correct_test += (predicted == y_test_batch).sum().item() all_preds.extend(predicted.cpu().numpy()) test_accuracy = correct_test / total_test cm = confusion_matrix(y_test_encoded, all_preds) f1 = f1_score(y_test_encoded, all_preds, average='weighted') precision = precision_score(y_test_encoded, all_preds, average='weighted') recall = recall_score(y_test_encoded, all_preds, average='weighted') test_results.append({ 'train_percentage': train_percentage, 'repeat': repeat + 1, 'test_loss': test_loss / len(test_loader), 'test_accuracy': test_accuracy, 'f1_score': f1, 'precision': precision, 'recall': recall, 'confusion_matrix': cm.tolist() }) # Convert to DataFrames performance_df_TL = pd.DataFrame(performance_results) test_results_df_TL = pd.DataFrame(test_results) 如何修改
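One concrete problem in the snippet above is that the "freeze" section first disables gradients for every parameter and then re-enables them for every convolutional, batch-norm, and fully-connected layer, so nothing actually stays frozen; the optimizer is also built over all parameters regardless. A minimal sketch of a stricter scheme that keeps only the fully-connected head trainable, assuming the same `CNN1DModel` attribute names (`fc1`, `fc2`, `fc3`) used above:

```python
import itertools
import torch.optim as optim

# Freeze the convolutional backbone and train only the fully-connected head.
for param in model.parameters():
    param.requires_grad = False
for param in itertools.chain(model.fc1.parameters(),
                             model.fc2.parameters(),
                             model.fc3.parameters()):
    param.requires_grad = True

# Hand the optimizer only the parameters that are actually trainable.
optimizer = optim.Adam((p for p in model.parameters() if p.requires_grad), lr=3e-5)
```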

filetype

完整代码为: import numpy as np import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import Dataset, DataLoader from sklearn.base import BaseEstimator, ClassifierMixin # 关键导入 from sklearn.utils.validation import check_is_fitted from sklearn.ensemble import VotingClassifier from lightgbm import LGBMClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.naive_bayes import GaussianNB from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.tree import DecisionTreeClassifier # 补充导入 # --------------------------- 自定义Dataset类(用于PyTorch数据加载) --------------------------- class CustomDataset(Dataset): def __init__(self, X, y): self.X = torch.tensor(X, dtype=torch.float32) self.y = torch.tensor(y, dtype=torch.long) def __len__(self): return len(self.X) def __getitem__(self, idx): return self.X[idx], self.y[idx] # --------------------------- PyTorch神经网络包装类(适配sklearn接口) --------------------------- class PyTorchNNClassifier(BaseEstimator, ClassifierMixin): _estimator_type = "classifier" # 显式声明分类器类型(关键) def __init__(self, model_class, input_dim, num_classes, num_epochs=10, batch_size=32, lr=0.1): # 所有参数需在__init__中显式定义(scikit-learn要求) self.model_class = model_class self.input_dim = input_dim self.num_classes = num_classes self.num_epochs = num_epochs self.batch_size = batch_size self.lr = lr self.model = None self.classes_ = None def fit(self, X, y): X = np.array(X, dtype=np.float32) # 显式转换数据类型(避免PyTorch类型错误) y = np.array(y, dtype=np.int64) self.classes_ = np.sort(np.unique(y)) # 存储类别标签(升序,符合sklearn规范) # 初始化PyTorch模型(确保输入输出维度匹配) self.model = self.model_class(self.input_dim, self.num_classes) # 定义损失函数和优化器 criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(self.model.parameters(), lr=self.lr) # 创建数据加载器(使用自定义Dataset) dataset = CustomDataset(X, y) dataloader = DataLoader(dataset, batch_size=self.batch_size, shuffle=True) # 训练循环 self.model.train() for epoch in range(self.num_epochs): epoch_loss = 0.0 for inputs, labels in dataloader: optimizer.zero_grad() outputs = self.model(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() epoch_loss += loss.item() * inputs.size(0) epoch_loss /= len(dataset) print(f'Epoch {epoch+1}/{self.num_epochs}, Loss: {epoch_loss:.4f}') return self # 必须返回self def predict_proba(self, X): check_is_fitted(self, 'model') # 检查模型是否已训练 X_tensor = torch.tensor(X, dtype=torch.float32) with torch.no_grad(): logits = self.model(X_tensor) probs = torch.softmax(logits, dim=1) # 输出概率分布(shape=[n_samples, n_classes]) return probs.numpy() def predict(self, X): probs = self.predict_proba(X) pred_indices = np.argmax(probs, axis=1) return self.classes_[pred_indices] # 映射到真实标签 # --------------------------- 神经网络模型定义(输入input_dim,输出num_classes) --------------------------- class NN3(nn.Module): def __init__(self, input_dim, num_classes): super().__init__() self.layers = nn.Sequential( nn.Linear(input_dim, 64), nn.ReLU(), nn.Linear(64, num_classes) ) def forward(self, x): return self.layers(x) class NN4(nn.Module): def __init__(self, input_dim, num_classes): super().__init__() self.layers = nn.Sequential( nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, num_classes) ) def forward(self, x): return self.layers(x) class NN5(nn.Module): def __init__(self, input_dim, num_classes): super().__init__() self.layers = nn.Sequential( nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, 64), nn.ReLU(), 
nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, num_classes) ) def forward(self, x): return self.layers(x) # --------------------------- 数据准备(示例数据) --------------------------- # 假设输入特征维度为127,10分类(与用户需求一致) input_dim = 127 num_classes = 10 # 示例训练数据(1000样本,127特征,标签0-9) x_train_std_selected = np.random.randn(1000, input_dim) y_train = np.random.randint(0, num_classes, size=1000) # --------------------------- 定义新增模型 --------------------------- # 1. CART模型(决策树分类器) cart_model = DecisionTreeClassifier( max_depth=5, min_samples_split=10, random_state=42 ) # 2. 3/4/5层PyTorch神经网络(输入input_dim,输出num_classes) nn3_model = PyTorchNNClassifier( model_class=NN3, input_dim=input_dim, num_classes=num_classes, num_epochs=3, batch_size=32, lr=0.001 ) nn4_model = PyTorchNNClassifier( model_class=NN4, input_dim=input_dim, num_classes=num_classes, num_epochs=3, batch_size=32, lr=0.001 ) nn5_model = PyTorchNNClassifier( model_class=NN5, input_dim=input_dim, num_classes=num_classes, num_epochs=3, batch_size=32, lr=0.001 ) # --------------------------- 定义基础模型(子分类器) --------------------------- base_models = [ ('bayes', GaussianNB()), ('rf', RandomForestClassifier( n_estimators=200, max_depth=8, min_samples_split=5, random_state=42 )), ('knn', KNeighborsClassifier( n_neighbors=5, weights='distance', algorithm='auto' )), ('svm', SVC( C=1.0, kernel='rbf', probability=True, # 启用概率输出(软投票需要) gamma='scale', random_state=42 )), ('lda', LinearDiscriminantAnalysis( solver='svd', store_covariance=True )), ('cart', cart_model), ('nn3', nn3_model), ('nn4', nn4_model), ('nn5', nn5_model) ] # --------------------------- 创建多数投票集成模型 --------------------------- voting_model = VotingClassifier( estimators=base_models, voting='soft', # 软投票(基于概率) weights=None, # 可自定义模型权重(如[1,2,1,...]) n_jobs=-1 # 使用所有CPU核心 ) # --------------------------- 训练模型 --------------------------- voting_model.fit(x_train_std_selected, y_train)
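To complete the picture, a short usage sketch follows. The held-out matrix below is a made-up stand-in with the same 127-feature shape as the synthetic training data above; with `voting='soft'`, the ensemble averages `predict_proba` across all base models, including the PyTorch wrappers.

```python
import numpy as np

# Hypothetical held-out batch with the same feature dimensionality as the training data.
x_new = np.random.randn(20, input_dim)

proba = voting_model.predict_proba(x_new)  # shape (20, num_classes), averaged over base models
pred = voting_model.predict(x_new)         # hard labels derived from the averaged probabilities
print(proba.shape, pred[:5])
```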

filetype

import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import TensorDataset, DataLoader from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler, LabelEncoder from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score import pandas as pd import numpy as np from tqdm import tqdm import copy # 假设df_virus和labels已经定义 features = df_virus.iloc[:, :-2].values labels = df_virus['label'].values # 加载模型 (假设已保存为PyTorch格式) validation_size = 0.15 performance_results = [] test_results = [] device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print(f"使用设备: {device}") # 定义一个函数进行随机欠采样,减少多数类样本数量 # 修改后的undersample函数,接受随机种子参数 def undersample(X, y, seed=None): if seed is not None: np.random.seed(seed) # 固定随机种子 unique, counts = np.unique(y, return_counts=True) min_count = np.min(counts) new_X, new_y = [], [] for label in unique: label_indices = np.where(y == label)[0] sampled_indices = np.random.choice(label_indices, min_count, replace=False) new_X.extend(X[sampled_indices]) new_y.extend(y[sampled_indices]) return np.array(new_X), np.array(new_y) for train_percentage in [0.5,0.25,0.1,0.05, 0.025,0.01]: print(f"\n训练比例: {train_percentage * 100}%") for repeat in range(10): print(f"\n重复次数: {repeat + 1}") # 划分训练集、验证集和测试集 X_temp, X_val, y_temp, y_val = train_test_split( features, labels, test_size=0.15, stratify=labels, random_state=42 + repeat ) X_train, X_test, y_train, y_test = train_test_split( X_temp, y_temp, train_size=train_percentage / 0.85, stratify=y_temp, random_state=42 + repeat ) # 对训练集进行随机欠采样,制造数据不平衡 X_train, y_train = undersample(X_train, y_train, seed=42 + repeat) print(f"训练集大小: {len(X_train)}, 验证集大小: {len(X_val)}, 测试集大小: {len(X_test)}") # 特征标准化 scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train) X_val_scaled = scaler.transform(X_val) X_test_scaled = scaler.transform(X_test) # 标签编码 label_encoder = LabelEncoder() y_train_encoded = label_encoder.fit_transform(y_train) y_val_encoded = label_encoder.transform(y_val) y_test_encoded = label_encoder.transform(y_test) # 转换为PyTorch张量 X_train_tensor = torch.tensor(X_train_scaled, dtype=torch.float32).unsqueeze(1).to(device) X_val_tensor = torch.tensor(X_val_scaled, dtype=torch.float32).unsqueeze(1).to(device) X_test_tensor = torch.tensor(X_test_scaled, dtype=torch.float32).unsqueeze(1).to(device) y_train_tensor = torch.tensor(y_train_encoded, dtype=torch.long).to(device) y_val_tensor = torch.tensor(y_val_encoded, dtype=torch.long).to(device) y_test_tensor = torch.tensor(y_test_encoded, dtype=torch.long).to(device) # 创建数据集和数据加载器 train_dataset = TensorDataset(X_train_tensor, y_train_tensor) val_dataset = TensorDataset(X_val_tensor, y_val_tensor) test_dataset = TensorDataset(X_test_tensor, y_test_tensor) train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True) val_loader = DataLoader(val_dataset, batch_size=64) test_loader = DataLoader(test_dataset, batch_size=64) # 模型初始化 - 按照迁移学习正确顺序 model = CNN1DModel() # 1. 先修改分类头以适应新任务 num_classes = len(np.unique(labels)) last_layer_name = 'fc3' num_ftrs = model._modules[last_layer_name].in_features model._modules[last_layer_name] = nn.Linear(num_ftrs, num_classes) # 2. 
选择性加载预训练权重 pretrained_dict = torch.load(r"G:\VSCode\tl\pytorch_model_spet18.pth") model_dict = model.state_dict() # 过滤不匹配的权重 loadable_weights = {} skipped_layers = [] for k, v in pretrained_dict.items(): if k in model_dict: if v.shape == model_dict[k].shape: loadable_weights[k] = v else: skipped_layers.append(f"{k} (预训练形状: {v.shape}, 当前形状: {model_dict[k].shape})") else: skipped_layers.append(f"{k} (层不存在于当前模型)") # 更新模型权重字典并加载 model_dict.update(loadable_weights) model.load_state_dict(model_dict) # 打印加载信息 print(f"成功加载 {len(loadable_weights)}/{len(pretrained_dict)} 层权重") if skipped_layers: print("跳过的层:") for layer in skipped_layers: print(f" - {layer}") # Freeze layers: Similar to TensorFlow, we'll freeze the CNN layers for param in model.parameters(): param.requires_grad = False for param in model.fc1.parameters(): param.requires_grad = True for param in model.fc2.parameters(): param.requires_grad = True for param in model.fc3.parameters(): param.requires_grad = True # 检查分类头状态 print(f"分类头权重是否可训练: {model.fc3.weight.requires_grad}") print(f"分类头权重示例: {model.fc3.weight[:1, :5]}") # 将模型移至设备 model = model.to(device) # 设置优化器,只优化需要梯度的参数 optimizer = optim.Adam(model.parameters(), lr=7e-6) criterion = nn.CrossEntropyLoss() # 训练参数 epochs = 700 patience = 5# 早停耐心值 best_val_loss = float('inf') counter = 0 best_model_state = None # 用于保存最佳模型的参数 # 训练循环 for epoch in tqdm(range(epochs)): model.train() running_loss = 0.0 correct = 0 total = 0 for X_batch, y_batch in train_loader: optimizer.zero_grad() outputs = model(X_batch) loss = criterion(outputs, y_batch) loss.backward() optimizer.step() running_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total += y_batch.size(0) correct += (predicted == y_batch).sum().item() # 计算训练指标 train_accuracy = correct / total train_loss = running_loss / len(train_loader) # 验证 model.eval() val_loss = 0.0 correct_val = 0 total_val = 0 with torch.no_grad(): for X_val_batch, y_val_batch in val_loader: outputs = model(X_val_batch) loss = criterion(outputs, y_val_batch) val_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total_val += y_val_batch.size(0) correct_val += (predicted == y_val_batch).sum().item() val_accuracy = correct_val / total_val val_loss = val_loss / len(val_loader) # 保存性能结果 performance_results.append({ 'train_percentage': train_percentage, 'repeat': repeat + 1, 'epoch': epoch + 1, 'train_accuracy': train_accuracy, 'val_accuracy': val_accuracy, 'train_loss': train_loss, 'val_loss': val_loss }) # 早停检查 if val_loss < best_val_loss: best_val_loss = val_loss counter = 0 best_model_state = copy.deepcopy(model.state_dict()) # 深拷贝最佳模型参数 else: counter += 1 if counter >= patience: print(f"第 {epoch + 1} 轮早停") break # 加载最佳模型参数 if best_model_state is not None: model.load_state_dict(best_model_state) print("加载早停保存的最佳模型") # 测试集评估 model.eval() test_loss = 0.0 correct_test = 0 total_test = 0 all_preds = [] with torch.no_grad(): for X_test_batch, y_test_batch in test_loader: outputs = model(X_test_batch) loss = criterion(outputs, y_test_batch) test_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total_test += y_test_batch.size(0) correct_test += (predicted == y_test_batch).sum().item() all_preds.extend(predicted.cpu().numpy()) # 计算测试指标 test_accuracy = correct_test / total_test cm = confusion_matrix(y_test_encoded, all_preds) f1 = f1_score(y_test_encoded, all_preds, average='weighted') precision = precision_score(y_test_encoded, all_preds, average='weighted') recall = recall_score(y_test_encoded, all_preds, average='weighted') # 保存测试结果 
test_results.append({ 'train_percentage': train_percentage, 'repeat': repeat + 1, 'test_loss': test_loss / len(test_loader), 'test_accuracy': test_accuracy, 'f1_score': f1, 'precision': precision, 'recall': recall, 'confusion_matrix': cm.tolist() }) # 转换为DataFrame performance_df_TL = pd.DataFrame(performance_results) test_results_df_TL = pd.DataFrame(test_results) 我这段代码有什么问题吗
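Two small robustness points in the code above, offered as suggestions rather than required fixes: `torch.load` is called without `map_location`, so a checkpoint saved on GPU will not load on a CPU-only machine, and the optimizer is constructed over all parameters even though only `fc1`, `fc2`, and `fc3` are left trainable. A minimal sketch of both adjustments, keeping the same checkpoint path and layer names used above:

```python
import torch
import torch.optim as optim

# Load the checkpoint onto whatever device is in use, so a GPU-saved
# state_dict still loads on a CPU-only machine.
pretrained_dict = torch.load(r"G:\VSCode\tl\pytorch_model_spet18.pth", map_location=device)

# Build the optimizer only over the parameters left trainable (fc1-fc3 here);
# frozen parameters carry no gradients, so excluding them is cheaper and clearer.
trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = optim.Adam(trainable_params, lr=7e-6)
```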

filetype

PowerShell 7 环境已加载 (版本: 7.5.2) PS C:\Users\Administrator\Desktop> cd E:\PyTorch_Build\pytorch PS E:\PyTorch_Build\pytorch> python -m venv rtx5070_env PS E:\PyTorch_Build\pytorch> .\rtx5070_env\Scripts\activate (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 修改 cmake_config.ps1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> @" >> # cmake_config.ps1 >> # 修复源目录路径 - 指向PyTorch源码目录 >> `$sourceDir = (Get-Item -Path "..").FullName # 关键修改:使用上级目录 >> >> `$cmakeArgs = @( >> "-G", "Ninja", >> "-DCMAKE_BUILD_TYPE=Release", >> "-DCMAKE_C_COMPILER=cl.exe", >> "-DCMAKE_CXX_COMPILER=cl.exe", >> "-DCMAKE_CUDA_COMPILER=E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe", >> "-DCMAKE_CUDA_HOST_COMPILER=cl.exe", >> "-DCMAKE_SYSTEM_VERSION=10.0.22621.0", >> "-DCUDA_NVCC_FLAGS=-Xcompiler /wd4819 -gencode arch=compute_89,code=sm_89", >> "-DTORCH_CUDA_ARCH_LIST=8.9", >> "-DUSE_CUDA=ON", >> "-DUSE_NCCL=OFF", >> "-DUSE_DISTRIBUTED=OFF", >> "-DBUILD_TESTING=OFF", >> "-DBLAS=OpenBLAS", >> "-DCUDA_TOOLKIT_ROOT_DIR=E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0", >> "-DCUDNN_ROOT_DIR=E:/Program Files/NVIDIA/CUDNN/v9.12", >> "-DPYTHON_EXECUTABLE=`$((Get-Command python).Source)" >> ) >> >> Write-Host "运行 CMake: cmake `$sourceDir @cmakeArgs" >> cmake `$sourceDir @cmakeArgs # 指向正确的源码目录 >> "@ | Set-Content cmake_config.ps1 -Force (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 修改 compile_cuda_test.ps1 (rtx5070_env) PS E:\PyTorch_Build\pytorch> @" >> # compile_cuda_test.ps1 >> `$cudaTest = @' >> #include <cuda_runtime.h> >> #include <iostream> >> #include <cuda.h> // 添加必要头文件 >> >> __global__ void addKernel(int *c, const int *a, const int *b) { >> int i = threadIdx.x; >> c[i] = a[i] + b[i]; >> } >> >> int main() { >> const int arraySize = 5; >> const int a[arraySize] = {1, 2, 3, 4, 5}; >> const int b[arraySize] = {10, 20, 30, 40, 50}; >> int c[arraySize] = {0}; >> >> int *dev_a, *dev_b, *dev_c; >> cudaMalloc((void**)&dev_a, arraySize * sizeof(int)); // 修复指针转换 >> cudaMalloc((void**)&dev_b, arraySize * sizeof(int)); >> cudaMalloc((void**)&dev_c, arraySize * sizeof(int)); >> >> cudaMemcpy(dev_a, a, arraySize * sizeof(int), cudaMemcpyHostToDevice); >> cudaMemcpy(dev_b, b, arraySize * sizeof(int), cudaMemcpyHostToDevice); >> >> // 修复内核调用语法 >> addKernel<<<1, arraySize>>>(dev_c, dev_a, dev_b); >> >> cudaMemcpy(c, dev_c, arraySize * sizeof(int), cudaMemcpyDeviceToHost); >> >> std::cout << "CUDA测试结果: "; >> for (int i = 0; i < arraySize; i++) { >> std::cout << c[i] << " "; >> } >> return 0; >> } >> '@ >> >> `$cudaTest | Set-Content cuda_test.cu # 改为.cu扩展名 >> >> `$ccbinPath = "`$env:VCToolsInstallDir\bin\Hostx64\x64" >> >> # 添加显式包含路径 >> nvcc -ccbin "`$ccbinPath" -I "E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\include" cuda_test.cu -o cuda_test >> >> if (Test-Path cuda_test.exe) { >> .\cuda_test.exe >> } else { >> Write-Host "CUDA编译失败,请检查环境" >> } >> "@ | Set-Content compile_cuda_test.ps1 -Force (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 1. 更新问题脚本 (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 执行上面的两个脚本修改命令 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 2. 清理构建目录并重新构建 (rtx5070_env) PS E:\PyTorch_Build\pytorch> Remove-Item build -Recurse -Force -ErrorAction SilentlyContinue (rtx5070_env) PS E:\PyTorch_Build\pytorch> .\full_build.ps1 === 开始完整构建 === 环境设置完成! 
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: numpy in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2.2.6) Requirement already satisfied: ninja in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (1.13.0) Requirement already satisfied: pyyaml in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (6.0.2) Requirement already satisfied: mkl in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.0) Requirement already satisfied: mkl-include in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2025.2.0) Requirement already satisfied: setuptools in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (65.5.0) Requirement already satisfied: cmake in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (4.1.0) Requirement already satisfied: cffi in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (1.17.1) Requirement already satisfied: typing_extensions in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (4.15.0) Requirement already satisfied: future in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (1.0.0) Requirement already satisfied: six in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (1.17.0) Requirement already satisfied: requests in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (2.32.5) Requirement already satisfied: dataclasses in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (0.6) Requirement already satisfied: intel-openmp<2026,>=2024 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from mkl) (2025.2.1) Requirement already satisfied: tbb==2022.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from mkl) (2022.2.0) Requirement already satisfied: tcmlib==1.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from tbb==2022.*->mkl) (1.4.0) Requirement already satisfied: pycparser in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from cffi) (2.22) Requirement already satisfied: idna<4,>=2.5 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from requests) (3.10) Requirement already satisfied: certifi>=2017.4.17 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from requests) (2025.8.3) Requirement already satisfied: urllib3<3,>=1.21.1 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from requests) (2.5.0) Requirement already satisfied: charset_normalizer<4,>=2 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from requests) (3.4.3) Requirement already satisfied: intel-cmplr-lib-ur==2025.2.1 in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from intel-openmp<2026,>=2024->mkl) (2025.2.1) Requirement already satisfied: umf==0.11.* in e:\pytorch_build\pytorch\rtx5070_env\lib\site-packages (from intel-cmplr-lib-ur==2025.2.1->intel-openmp<2026,>=2024->mkl) (0.11.0) [notice] A new release of pip available: 22.3.1 -> 25.2 [notice] To update, run: python.exe -m pip install --upgrade pip 运行 CMake: cmake E:\PyTorch_Build\pytorch @cmakeArgs CMake Warning at CMakeLists.txt:418 (message): TensorPipe cannot be used on Windows. Set it to OFF -- Current compiler supports avx2 extension. Will build perfkernels. -- Current compiler supports avx512f extension. Will build fbgemm. 
-- The CUDA compiler identification is NVIDIA 13.0.48 with host compiler MSVC 19.44.35215.0 -- Detecting CUDA compiler ABI info -- Detecting CUDA compiler ABI info - failed -- Check for working CUDA compiler: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- Check for working CUDA compiler: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe - broken CMake Error at E:/PyTorch_Build/pytorch/rtx5070_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeTestCUDACompiler.cmake:59 (message): The CUDA compiler "E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe" is not able to compile a simple test program. It fails with the following output: Change Dir: 'E:/PyTorch_Build/pytorch/CMakeFiles/CMakeScratch/TryCompile-9hsd61' Run Build Command(s): E:/PyTorch_Build/pytorch/rtx5070_env/Scripts/ninja.exe -v cmTC_917ed [1/2] "E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin\nvcc.exe" -forward-unknown-to-host-compiler -ccbin=C:\PROGRA~1\MICROS~3\2022\COMMUN~1\VC\Tools\MSVC\1444~1.352\bin\Hostx64\x64\cl.exe -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -Xfatbin -compress-all -Xcompiler=" -Zi -Ob0 -Od /RTC1" "--generate-code=arch=compute_75,code=[compute_75,sm_75]" -Xcompiler=-MDd -MD -MT CMakeFiles\cmTC_917ed.dir\main.cu.obj -MF CMakeFiles\cmTC_917ed.dir\main.cu.obj.d -x cu -c E:\PyTorch_Build\pytorch\CMakeFiles\CMakeScratch\TryCompile-9hsd61\main.cu -o CMakeFiles\cmTC_917ed.dir\main.cu.obj -Xcompiler=-FdCMakeFiles\cmTC_917ed.dir\,-FS main.cu tmpxft_00004ff4_00000000-10_main.cudafe1.cpp [2/2] C:\WINDOWS\system32\cmd.exe /C "cd . && E:\PyTorch_Build\pytorch\rtx5070_env\Lib\site-packages\cmake\data\bin\cmake.exe -E vs_link_exe --intdir=CMakeFiles\cmTC_917ed.dir --rc=C:\ProgramData\mingw64\mingw64\bin\windres.exe --mt="" --manifests -- C:\ProgramData\mingw64\mingw64\bin\ld.exe /nologo CMakeFiles\cmTC_917ed.dir\main.cu.obj /out:cmTC_917ed.exe /implib:cmTC_917ed.lib /pdb:cmTC_917ed.pdb /version:0.0 /debug /INCREMENTAL cudadevrt.lib cudart.lib kernel32.lib user32.lib gdi32.lib winspool.lib shell32.lib ole32.lib oleaut32.lib uuid.lib comdlg32.lib advapi32.lib -LIBPATH:"E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64" && cd ." FAILED: [code=4294967295] cmTC_917ed.exe C:\WINDOWS\system32\cmd.exe /C "cd . && E:\PyTorch_Build\pytorch\rtx5070_env\Lib\site-packages\cmake\data\bin\cmake.exe -E vs_link_exe --intdir=CMakeFiles\cmTC_917ed.dir --rc=C:\ProgramData\mingw64\mingw64\bin\windres.exe --mt="" --manifests -- C:\ProgramData\mingw64\mingw64\bin\ld.exe /nologo CMakeFiles\cmTC_917ed.dir\main.cu.obj /out:cmTC_917ed.exe /implib:cmTC_917ed.lib /pdb:cmTC_917ed.pdb /version:0.0 /debug /INCREMENTAL cudadevrt.lib cudart.lib kernel32.lib user32.lib gdi32.lib winspool.lib shell32.lib ole32.lib oleaut32.lib uuid.lib comdlg32.lib advapi32.lib -LIBPATH:"E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64" && cd ." 
RC Pass 1: command "C:\ProgramData\mingw64\mingw64\bin\windres.exe /fo CMakeFiles\cmTC_917ed.dir/manifest.res CMakeFiles\cmTC_917ed.dir/manifest.rc" failed (exit code 1) with the following output: Usage: C:\ProgramData\mingw64\mingw64\bin\windres.exe [option(s)] [input-file] [output-file] The options are: -i --input=<file> Name input file -o --output=<file> Name output file -J --input-format=<format> Specify input format -O --output-format=<format> Specify output format -F --target=<target> Specify COFF target --preprocessor=<program> Program to use to preprocess rc file --preprocessor-arg=<arg> Additional preprocessor argument -I --include-dir=<dir> Include directory when preprocessing rc file -D --define <sym>[=<val>] Define SYM when preprocessing rc file -U --undefine <sym> Undefine SYM when preprocessing rc file -v --verbose Verbose - tells you what it's doing -c --codepage=<codepage> Specify default codepage -l --language=<val> Set language when reading rc file --use-temp-file Use a temporary file instead of popen to read the preprocessor output --no-use-temp-file Use popen (default) -r Ignored for compatibility with rc @<file> Read options from <file> -h --help Print this help message -V --version Print version information FORMAT is one of rc, res, or coff, and is deduced from the file name extension if not specified. A single file name is an input file. No input-file is stdin, default rc. No output-file is stdout, default rc. C:\ProgramData\mingw64\mingw64\bin\windres.exe: supported targets: pe-x86-64 pei-x86-64 pe-bigobj-x86-64 elf64-x86-64 pe-i386 pei-i386 elf32-i386 elf32-iamcu elf64-little elf64-big elf32-little elf32-big srec symbolsrec verilog tekhex binary ihex plugin ninja: build stopped: subcommand failed. CMake will not be able to correctly generate this project. Call Stack (most recent call first): cmake/public/cuda.cmake:47 (enable_language) cmake/Dependencies.cmake:43 (include) CMakeLists.txt:853 (include) -- Configuring incomplete, errors occurred! You have changed variables that require your cache to be deleted. Configure will be re-run and you may have to reset some variables. The following variables have changed: CMAKE_CXX_COMPILER= cl.exe CMAKE_C_COMPILER= cl.exe -- Generating done (0.0s) CMake Warning: Manually-specified variables were not used by the project: BLAS BUILD_TESTING CUDNN_ROOT_DIR PYTHON_EXECUTABLE TORCH_CUDA_ARCH_LIST CMake Generate step failed. Build files cannot be regenerated correctly. Traceback (most recent call last): File "<string>", line 1, in <module> AttributeError: module 'torch' has no attribute '__version__' (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 3. 验证CUDA编译 (rtx5070_env) PS E:\PyTorch_Build\pytorch> .\compile_cuda_test.ps1 cuda_test.cu cuda_test.cu(1): warning C4819: 该文件包含不能在当前代码页(936)中表示的字符。请将该文件保存为 Unicode 格式以防止数据丢失 E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include\cuda.h(23283): warning C4819: 该文件包含不能在当前代码页(936)中表示的字符。请将该文件保存为 Unicode 格式以防止数据丢失 cuda_test.cu(1): warning C4819: 该文件包含不能在当前代码页(936)中表示的字符。请将该文件保存为 Unicode 格式以防止数据丢失 E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include\cuda.h(23283): warning C4819: 该文件包含不能在当前代码页(936)中表示的字符。请将该文件保存为 Unicode 格式以防止数据丢失 tmpxft_00004e0c_00000000-10_cuda_test.cudafe1.cpp CUDA娴嬭瘯缁撴灉: 11 22 33 44 55 (rtx5070_env) PS E:\PyTorch_Build\pytorch> (rtx5070_env) PS E:\PyTorch_Build\pytorch> # 4. 
最终验证 (rtx5070_env) PS E:\PyTorch_Build\pytorch> python -c @" >> import torch >> print(f'PyTorch版本: {torch.__version__}') >> print(f'CUDA可用: {torch.cuda.is_available()}') >> if torch.cuda.is_available(): >> print(f'CUDA设备数量: {torch.cuda.device_count()}') >> print(f'设备0名称: {torch.cuda.get_device_name(0)}') >> "@ Traceback (most recent call last): File "<string>", line 2, in <module> AttributeError: module 'torch' has no attribute '__version__' (rtx5070_env) PS E:\PyTorch_Build\pytorch>
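The closing `AttributeError: module 'torch' has no attribute '__version__'` is the classic symptom of importing the un-built source checkout: the command is run from `E:\PyTorch_Build\pytorch`, so the local `torch\` package (whose `version.py` is only generated by a successful build) shadows any installed wheel. A small sketch to confirm which `torch` the interpreter would pick up; running it from the source directory and again from another directory makes the shadowing visible:

```python
# If the reported origin points into the source checkout
# (e.g. E:\PyTorch_Build\pytorch\torch\__init__.py), the local package is
# shadowing any installed wheel and attributes like __version__ will be missing.
import importlib.util

spec = importlib.util.find_spec("torch")
print("torch resolves to:", spec.origin if spec else "not importable")
```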

filetype

PowerShell 7 环境已加载 (版本: 7.5.2) PS C:\Users\Administrator\Desktop> cd E:\PyTorch_Build\pytorch PS E:\PyTorch_Build\pytorch> .\pytorch_env\Scripts\activate (pytorch_env) PS E:\PyTorch_Build\pytorch> # 使用正确的构建命令格式 (pytorch_env) PS E:\PyTorch_Build\pytorch> python setup.py install ` >> --cmake ` >> --cmake-only ` >> --cmake-generator="Ninja" ` >> --verbose ` >> -DCMAKE_CUDA_COMPILER="${env:CUDA_PATH}\bin\nvcc.exe" ` >> -DCUDNN_INCLUDE_DIR="${env:CUDNN_INCLUDE_DIR}" ` >> -DCUDNN_LIBRARY="${env:CUDNN_LIBRARY}" ` >> -DTORCH_CUDA_ARCH_LIST="8.9;9.0" Traceback (most recent call last): File "E:\PyTorch_Build\pytorch\setup.py", line 288, in <module> import setuptools.command.bdist_wheel ModuleNotFoundError: No module named 'setuptools.command.bdist_wheel' (pytorch_env) PS E:\PyTorch_Build\pytorch> # 创建新环境 (pytorch_env) PS E:\PyTorch_Build\pytorch> conda create -n torch_cu121 python=3.10 -y 3 channel Terms of Service accepted Channels: - defaults - conda-forge - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main Platform: win-64 Collecting package metadata (repodata.json): done Solving environment: done ## Package Plan ## environment location: C:\Miniconda3\envs\torch_cu121 added / updated specs: - python=3.10 done # # To activate this environment, use # # $ conda activate torch_cu121 # # To deactivate an active environment, use # # $ conda deactivate (pytorch_env) PS E:\PyTorch_Build\pytorch> conda activate torch_cu121 (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 安装 PyTorch 支持 SM_89 (pytorch_env) PS E:\PyTorch_Build\pytorch> conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch-nightly -c nvidia 3 channel Terms of Service accepted Channels: - pytorch-nightly - nvidia - defaults - conda-forge - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main Platform: win-64 Collecting package metadata (repodata.json): done Solving environment: / 好像卡住了 我该怎么办 等等么?
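As for the stalled `Solving environment` step: conda's solver can legitimately take a long time with this many channels configured, but it is not the only route. A hedged fallback sketch is shown below; it reuses the cu121 nightly index already used earlier in this log to install the wheels with pip inside the active environment, then checks GPU visibility. Treat the package set and index URL as adjustable assumptions rather than a prescribed configuration.

```python
# Fallback: install the cu121 nightly wheels with pip in the current
# environment instead of waiting on the conda solve, then verify the GPU.
import subprocess
import sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install", "--pre",
    "torch", "torchvision", "torchaudio",
    "--index-url", "https://download.pytorch.org/whl/nightly/cu121",
])

import torch
print(torch.__version__, torch.cuda.is_available())
```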
