
dpy-peper: a simple helper for discord.py's on_voice_state_update event

dpy-peper is a Python library that aims to simplify handling voice state updates when building Discord bots with the discord.py framework. It provides a helper (wrapper) around discord.py's on_voice_state_update event, so developers can more easily write the logic that should run when a user joins, leaves, or moves between voice channels.

A few basics before the usage details: Discord is an instant-messaging platform that offers text chat, voice calls, and video calls. The Discord API lets developers create bots to automate tasks or interact with users, and discord.py is a popular Python library that exposes this API so bots can be written in Python.

Handling the on_voice_state_update event directly can be fiddly: it fires when users join, leave, or move between voice channels, and a handler typically needs several branches while remaining stable and performant (a sketch of what such a raw handler looks like follows the steps below). dpy-peper exists to take that boilerplate off your hands.

Using dpy-peper:

1. Install: install the library with pip:

```
python3 -m pip install dpy-peper
```

2. Use it as an extension: after installation, load dpy-peper through the bot instance's load_extension method. In your bot script:

```python
from discord.ext import commands

bot = commands.Bot(command_prefix='/')
bot.load_extension('dpy_peper')
# or, alternatively:
# bot.load_extension('discord.ext.peper')
bot.run("Th1sIsN0tT0k3n.B3cause.1fiShowB0tWillG3tH4cked")
```

Note that the extension path may need adjusting to match your project structure and file names. This example follows the discord.py 1.x style; in discord.py 2.x, load_extension is a coroutine and commands.Bot requires an intents argument.

3. Use it as a standalone tool: dpy-peper also provides a VoiceWrapper class that can be imported directly from the module, so its functionality is available without loading the whole extension:

```python
from dpy_peper import VoiceWrapper

# use VoiceWrapper to handle voice state updates...
```

To run any of this you need a valid Discord bot token, obtained after registering your bot in the Discord Developer Portal, and you must follow Discord's Terms of Service and developer policy.

These steps show how to integrate dpy-peper into a discord.py project and how to choose the usage mode that fits your needs; either way, the result is more concise code for reacting to users' voice activity, which improves the bot's overall functionality and user experience.
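To make concrete what dpy-peper abstracts away, here is a minimal sketch of handling the raw event with plain discord.py (1.x-style setup, matching the example above). The on_voice_state_update(member, before, after) signature and the VoiceState.channel attribute are discord.py's documented API; the join/leave/move branching and the print messages are only illustrative, and this is not dpy-peper's own interface.

```python
import discord
from discord.ext import commands

bot = commands.Bot(command_prefix='/')

@bot.event
async def on_voice_state_update(member, before, after):
    # discord.py passes the member plus the VoiceState before and after the change;
    # comparing the .channel attributes tells join, leave, and move apart.
    if before.channel is None and after.channel is not None:
        print(f"{member} joined {after.channel.name}")
    elif before.channel is not None and after.channel is None:
        print(f"{member} left {before.channel.name}")
    elif before.channel != after.channel:
        print(f"{member} moved from {before.channel.name} to {after.channel.name}")
    # Other voice state changes (mute, deafen, streaming, etc.) also trigger this
    # event and simply fall through here.

# bot.run("YOUR_BOT_TOKEN")  # supply your own token; never commit it to source control
```

dpy-peper's VoiceWrapper is intended to replace this kind of manual branching; consult the library's own documentation for its exact callback interface.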

Resource reviews

蔓誅裟華
2025.07.02
Provides a convenient way to handle voice events when developing Discord bots.

咖啡碎冰冰
2025.05.12
dpy-peper: a voice-event helper for discord.py developers.

文润观书
2025.03.31
Extending the bot is as simple as loading a module, which speeds up development.

ali-12
2025.03.23
Makes it easy to integrate discord.py's voice state update handling.