Docker Signal Handling Test: Verifying `docker run` Correctness

Docker is an essential containerization technology: it lets developers package an application and its dependencies into a portable container that runs on any operating system with Docker support, which greatly simplifies development, distribution, and deployment. This article describes Docker's signal handling mechanism and shows how to test whether `docker run` delivers signals to the container correctly.

### Docker's signal handling mechanism

On Linux, a process receives interrupt requests from the system and from users through signals. For example, `SIGKILL` (signal number 9) forcibly terminates a process, while `SIGTERM` (signal number 15) is a termination request that gives the process a chance to clean up before exiting. When a Docker container runs, Docker starts a main process inside it; stopping the container normally means sending a termination signal to that main process.

Signal handling in Docker containers differs slightly from ordinary Linux processes. In early Docker versions, if a container was started interactively (that is, with `-t` to allocate a pseudo-terminal), signals sent through `docker kill` were delivered directly to the main process. If the container was started non-interactively (without `-t`), Docker did not proxy the signal to the container's main process, so the container might not handle signals as expected.

### Testing signal handling of `docker run`

The following experiment tests whether `docker run` handles signals correctly.

#### Set up the Docker image

First, create a Docker image containing a script that listens for signals and prints the one it receives:

```bash
$ docker build -t docker-signal-test .
```

This command executes the instructions in the Dockerfile and builds an image named `docker-signal-test`. The Dockerfile might contain:

```Dockerfile
FROM ubuntu:latest
COPY signal_printer.sh /signal_printer.sh
CMD ["/signal_printer.sh"]
```

The `signal_printer.sh` script might look like the following. Note that Bash does not pass the signal name to a trap handler as `$1`, so one trap is installed per signal to report which one arrived:

```bash
#!/bin/bash
echo "Waiting for a signal..."

# Install a separate trap for each signal so the handler can name it.
for sig in SIGHUP SIGINT SIGTERM; do
    trap "echo \"Received signal $sig\"; exit 0" "$sig"
done

# Bash runs a trap handler only after the current foreground command
# returns, so the short sleep keeps signal response under one second.
while true; do
    sleep 1
done
```

This script runs inside the container and waits; when it receives `SIGHUP`, `SIGINT`, or `SIGTERM`, it prints the signal name and exits, which stops the container.

#### Test 1: `docker run` (no `-t`) with kill

The goal is to verify that a container running in non-interactive mode handles signals correctly. The steps are:

1. Start the container:

```bash
$ docker run --rm=true docker-signal-test ./signal_printer.sh
Waiting for a signal...
```

2. In another terminal, run `docker ps` to confirm the container is running:

```bash
$ docker ps
```

3. Send `SIGTERM` to the container. Note that `docker kill` sends `SIGKILL` by default, which cannot be trapped, so the signal must be specified explicitly:

```bash
$ docker kill --signal=SIGTERM 9650d1f6535d
```

4. Watch the first terminal: the script should print the received signal and exit.

### Notes on the experiment

- Make sure the `docker-signal-test` image works, i.e. that `signal_printer.sh` runs and actually traps signals.
- The `--rm=true` flag removes the container automatically when it exits, avoiding leftover container instances.
- The `kill` step needs the correct container ID, which can be obtained from `docker ps`.

### Summary

The above covers how Docker behaves when handling signals, including its signal proxying under different startup modes, and shows how a simple script and a few commands can verify that Docker delivers signals correctly. This knowledge is essential both for understanding Docker's inner workings and for managing containers properly in production.
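#### Appendix: scripting the test

The manual walkthrough above can also be run end to end from a single script. The sketch below is not from the original write-up; it is a minimal automation of Test 1 that assumes the `docker-signal-test` image and `signal_printer.sh` from earlier sit in the current directory, and it uses only standard Docker CLI commands (`docker run -d`, `docker kill --signal`, `docker wait`, `docker logs`, `docker rm`):

```bash
#!/bin/bash
# Minimal sketch: automate "Test 1" for the docker-signal-test image.
# Assumes the Dockerfile and signal_printer.sh shown above are in the
# current directory.
set -euo pipefail

docker build -t docker-signal-test .

# Start detached and without -t, matching the non-interactive case.
cid=$(docker run -d docker-signal-test)

# Give the script a moment to install its signal traps.
sleep 2

# Send SIGTERM explicitly; a bare `docker kill` would send SIGKILL,
# which the script cannot trap.
docker kill --signal=SIGTERM "$cid" >/dev/null

# docker wait blocks until the container exits and prints its exit code.
exit_code=$(docker wait "$cid")
logs=$(docker logs "$cid" 2>&1)
echo "$logs"

if echo "$logs" | grep -q "Received signal SIGTERM"; then
    echo "PASS: container handled SIGTERM (exit code ${exit_code})"
else
    echo "FAIL: container did not report SIGTERM (exit code ${exit_code})"
fi

# Clean up the stopped container (no --rm was used, so logs survived).
docker rm "$cid" >/dev/null
```

A `PASS` line together with exit code 0 indicates that the signal was proxied to the container's main process and that the trap handler ran its cleanup path before exiting.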

相关推荐

filetype

2025-08-04 11:38:25 Unable to load any of {libcudnn_ops.so.9.1.0, libcudnn_ops.so.9.1, libcudnn_ops.so.9, libcudnn_ops.so} 2025-08-04 11:38:25 Invalid handle. Cannot load symbol cudnnCreateTensorDescriptor 2025-08-04 11:38:25 [fc0737d948e1:1 :0:166] Caught signal 11 (Segmentation fault: Sent by the kernel at address (nil)) 2025-08-04 11:38:25 ==== backtrace (tid: 166) ==== 2025-08-04 11:38:25 0 0x0000000000042520 __sigaction() ???:0 2025-08-04 11:38:25 1 0x0000000000028898 abort() ???:0 2025-08-04 11:38:25 2 0x00000000000060a5 cudnnCreateTensorDescriptor() ???:0 2025-08-04 11:38:25 3 0x0000000000415243 ctranslate2::ops::Conv1D::compute<(ctranslate2::Device)1, half_float::half>() ???:0 2025-08-04 11:38:25 4 0x0000000000289989 ctranslate2::layers::WhisperEncoder::operator()() ???:0 2025-08-04 11:38:25 5 0x00000000002e7309 ctranslate2::models::WhisperReplica::encode() ???:0 2025-08-04 11:38:25 6 0x00000000002e8c6c ctranslate2::ReplicaPool<ctranslate2::models::WhisperReplica>::BatchJob<ctranslate2::StorageView, ctranslate2::ReplicaPool<ctranslate2::models::WhisperReplica>::post_batch<ctranslate2::StorageView, ctranslate2::ReplicaPool<ctranslate2::models::WhisperReplica>::post<ctranslate2::StorageView, ctranslate2::models::Whisper::encode(ctranslate2::StorageView const&, bool)::{lambda(ctranslate2::models::WhisperReplica&)#1}>(ctranslate2::models::Whisper::encode(ctranslate2::StorageView const&, bool)::{lambda(ctranslate2::models::WhisperReplica&)#1})::{lambda(ctranslate2::models::WhisperReplica&)#1}>(ctranslate2::ReplicaPool<ctranslate2::models::WhisperReplica>::post<ctranslate2::StorageView, ctranslate2::models::Whisper::encode(ctranslate2::StorageView const&, bool)::{lambda(ctranslate2::models::WhisperReplica&)#1}>(ctranslate2::models::Whisper::encode(ctranslate2::StorageView const&, bool)::{lambda(ctranslate2::models::WhisperReplica&)#1})::{lambda(ctranslate2::models::WhisperReplica&)#1}, std::vector<std::promise<ctranslate2::StorageView>, std::allocator<std::promise<ctranslate2::StorageView> > >)::{lambda()#1}>::run() whisper.cc:0 2025-08-04 11:38:25 7 0x000000000033e384 ctranslate2::Worker::run() ???:0 2025-08-04 11:38:25 8 0x0000000002f45680 execute_native_thread_routine() thread48.o:0 2025-08-04 11:38:25 9 0x0000000000094ac3 pthread_condattr_setpshared() ???:0 2025-08-04 11:38:25 10 0x0000000000126a40 __xmknodat() ???:0 2025-08-04 11:38:25 =================================

filetype

(dla_new) root@bbbc6f9276e3:~/dla-master# python run_all_experiments.py [Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers. Running on Linux WARNING:tensorflow:From /root/dla-master/deep_rl_for_swarms/common/tf_util.py:55: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead. WARNING:tensorflow:From /root/dla-master/deep_rl_for_swarms/common/tf_util.py:64: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead. 2025-06-24 09:11:30.746599: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA 2025-06-24 09:11:30.761544: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2496000000 Hz 2025-06-24 09:11:30.762083: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x59df037ca4d0 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2025-06-24 09:11:30.762111: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2025-06-24 09:11:30.767869: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1 2025-06-24 09:11:30.824139: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected 2025-06-24 09:11:30.824195: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (bbbc6f9276e3): /proc/driver/nvidia/version does not exist free(): invalid pointer [bbbc6f9276e3:13138] *** Process received signal *** [bbbc6f9276e3:13138] Signal: Aborted (6) [bbbc6f9276e3:13138] Signal code: (-6) [bbbc6f9276e3:13138] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x12890)[0x7d7a6338a890] [bbbc6f9276e3:13138] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xc7)[0x7d7a62618e97] [bbbc6f9276e3:13138] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x141)[0x7d7a6261a801] [bbbc6f9276e3:13138] [ 3] /lib/x86_64-linux-gnu/libc.so.6(+0x89897)[0x7d7a62663897] [bbbc6f9276e3:13138] [ 4] /lib/x86_64-linux-gnu/libc.so.6(+0x9090a)[0x7d7a6266a90a] [bbbc6f9276e3:13138] [ 5] /lib/x86_64-linux-gnu/libc.so.6(cfree+0x4cc)[0x7d7a62671e1c] [bbbc6f9276e3:13138] [ 6] /root/miniconda3/envs/dla_new/lib/python3.6/site-packages/tensorflow_core/python/../libtensorflow_framework.so.1(_ZN10tensorflow15DeviceNameUtils30GetLocalNamesForDeviceMappingsERKNS0_10ParsedNameE+0x1b0)[0x7d798af42c70] [bbbc6f9276e3:13138] [ 7] /root/miniconda3/envs/dla_new/lib/python3.6/site-packages/tensorflow_core/python/../libtensorflow_framework.so.1(_ZN10tensorflow9DeviceMgrC1ESt6vectorISt10unique_ptrINS_6DeviceESt14default_deleteIS3_EESaIS6_EE+0x1f9)[0x7d798b0ca769] [bbbc6f9276e3:13138] [ 8] /root/miniconda3/envs/dla_new/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so(_ZN10tensorflow20DirectSessionFactory10NewSessionERKNS_14SessionOptionsEPPNS_7SessionE+0x3b0)[0x7d79913b40b0] [bbbc6f9276e3:13138] [ 9] /root/miniconda3/envs/dla_new/lib/python3.6/site-packages/tensorflow_core/python/../libtensorflow_framework.so.1(_ZN10tensorflow10NewSessionERKNS_14SessionOptionsEPPNS_7SessionE+0xb0)[0x7d798b149300] [bbbc6f9276e3:13138] [10] /root/miniconda3/envs/dla_new/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so(TF_NewSession+0x24)[0x7d798e821204] [bbbc6f9276e3:13138] [11] 
/root/miniconda3/envs/dla_new/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so(_ZN10tensorflow16TF_NewSessionRefEP8TF_GraphPK17TF_SessionOptionsP9TF_Status+0x12)[0x7d798e17fc22] [bbbc6f9276e3:13138] [12] /root/miniconda3/envs/dla_new/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so(+0x22a9af1)[0x7d798e114af1] [bbbc6f9276e3:13138] [13] python3(+0x156434)[0x59dec6c58434] [bbbc6f9276e3:13138] [14] python3(_PyEval_EvalFrameDefault+0x424)[0x59dec6ca6214] [bbbc6f9276e3:13138] [15] python3(+0xe9d15)[0x59dec6bebd15] [bbbc6f9276e3:13138] [16] python3(+0x1553a8)[0x59dec6c573a8] [bbbc6f9276e3:13138] [17] python3(+0x1564df)[0x59dec6c584df] [bbbc6f9276e3:13138] [18] python3(_PyEval_EvalFrameDefault+0x131a)[0x59dec6ca710a] [bbbc6f9276e3:13138] [19] python3(+0xe9d15)[0x59dec6bebd15] [bbbc6f9276e3:13138] [20] python3(_PyFunction_FastCallDict+0x784)[0x59dec6bed944] [bbbc6f9276e3:13138] [21] python3(+0x144e83)[0x59dec6c46e83] [bbbc6f9276e3:13138] [22] python3(+0x168628)[0x59dec6c6a628] [bbbc6f9276e3:13138] [23] python3(+0xfa127)[0x59dec6bfc127] [bbbc6f9276e3:13138] [24] python3(_PyObject_FastCallKeywords+0x193)[0x59dec6c57ce3] [bbbc6f9276e3:13138] [25] python3(+0x156715)[0x59dec6c58715] [bbbc6f9276e3:13138] [26] python3(_PyEval_EvalFrameDefault+0x131a)[0x59dec6ca710a] [bbbc6f9276e3:13138] [27] python3(+0xe9d15)[0x59dec6bebd15] [bbbc6f9276e3:13138] [28] python3(+0x1553a8)[0x59dec6c573a8] [bbbc6f9276e3:13138] [29] python3(+0x1564df)[0x59dec6c584df] [bbbc6f9276e3:13138] *** End of error message *** Aborted (core dumped) linux下运行python报错

filetype

0 verbose cli /usr/local/bin/node /usr/local/bin/npm 1 info using [email protected] 2 info using [email protected] 3 silly config load:file:/usr/local/lib/node_modules/npm/npmrc 4 silly config load:file:/usr/src/app/.npmrc 5 silly config load:file:/root/.npmrc 6 silly config load:file:/usr/local/etc/npmrc 7 verbose title npm start 8 verbose argv "start" 9 verbose logfile logs-max:10 dir:/root/.npm/_logs/2025-03-12T02_11_41_912Z- 10 verbose logfile /root/.npm/_logs/2025-03-12T02_11_41_912Z-debug-0.log 11 silly logfile done cleaning log files 12 verbose stack Error: command failed 12 verbose stack at promiseSpawn (/usr/local/lib/node_modules/npm/node_modules/@npmcli/promise-spawn/lib/index.js:22:22) 12 verbose stack at spawnWithShell (/usr/local/lib/node_modules/npm/node_modules/@npmcli/promise-spawn/lib/index.js:124:10) 12 verbose stack at promiseSpawn (/usr/local/lib/node_modules/npm/node_modules/@npmcli/promise-spawn/lib/index.js:12:12) 12 verbose stack at runScriptPkg (/usr/local/lib/node_modules/npm/node_modules/@npmcli/run-script/lib/run-script-pkg.js:77:13) 12 verbose stack at runScript (/usr/local/lib/node_modules/npm/node_modules/@npmcli/run-script/lib/run-script.js:9:12) 12 verbose stack at #run (/usr/local/lib/node_modules/npm/lib/commands/run-script.js:130:13) 12 verbose stack at async RunScript.exec (/usr/local/lib/node_modules/npm/lib/commands/run-script.js:39:7) 12 verbose stack at async Npm.exec (/usr/local/lib/node_modules/npm/lib/npm.js:207:9) 12 verbose stack at async module.exports (/usr/local/lib/node_modules/npm/lib/cli/entry.js:74:5) 13 verbose pkgid [email protected] 14 error path /usr/src/app 15 error command failed 16 error signal SIGTERM 17 error command sh -c node dist/api.js 18 verbose cwd /usr/src/app 19 verbose os Linux 5.15.167.4-microsoft-standard-WSL2 20 verbose node v18.20.7 21 verbose npm v10.8.2 22 verbose exit 1 23 verbose code 1 24 error A complete log of this run can be found in: /root/.npm/_logs/2025-03-12T02_11_41_912Z-debug-0.log

filetype

D:\STM32CUBECLT\CMake\bin\cmake.exe -DCMAKE_BUILD_TYPE=Debug -DCMAKE_MAKE_PROGRAM=D:/STM32CUBECLT/Ninja/bin/ninja.exe -DCMAKE_C_COMPILER=D:/STM32CUBECLT/GNU-tools-for-STM32/bin/arm-none-eabi-gcc.exe -DCMAKE_CXX_COMPILER=D:/STM32CUBECLT/GNU-tools-for-STM32/bin/arm-none-eabi-c++.exe -G Ninja -S D:\el\1231123213\MDK-ARM\stm32\xuexi -B D:\el\1231123213\MDK-ARM\stm32\xuexi\cmake-build-debug -- The C compiler identification is GNU 13.3.1 -- The CXX compiler identification is GNU 13.3.1 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - failed -- Check for working C compiler: D:/STM32CUBECLT/GNU-tools-for-STM32/bin/arm-none-eabi-gcc.exe -- Check for working C compiler: D:/STM32CUBECLT/GNU-tools-for-STM32/bin/arm-none-eabi-gcc.exe - broken CMake Error at D:/STM32CUBECLT/CMake/share/cmake-3.28/Modules/CMakeTestCCompiler.cmake:67 (message): The C compiler "D:/STM32CUBECLT/GNU-tools-for-STM32/bin/arm-none-eabi-gcc.exe" is not able to compile a simple test program. It fails with the following output: Change Dir: 'D:/el/1231123213/MDK-ARM/stm32/xuexi/cmake-build-debug/CMakeFiles/CMakeScratch/TryCompile-lpeiv3' Run Build Command(s): D:/STM32CUBECLT/Ninja/bin/ninja.exe -v cmTC_2c52d [1/2] D:\STM32CUBECLT\GNU-tools-for-STM32\bin\arm-none-eabi-gcc.exe -std=gnu11 -fdiagnostics-color=always -o CMakeFiles/cmTC_2c52d.dir/testCCompiler.c.obj -c D:/el/1231123213/MDK-ARM/stm32/xuexi/cmake-build-debug/CMakeFiles/CMakeScratch/TryCompile-lpeiv3/testCCompiler.c [2/2] C:\WINDOWS\system32\cmd.exe /C "cd . && D:\STM32CUBECLT\GNU-tools-for-STM32\bin\arm-none-eabi-gcc.exe CMakeFiles/cmTC_2c52d.dir/testCCompiler.c.obj -o cmTC_2c52d.exe -Wl,--out-implib,libcmTC_2c52d.dll.a -Wl,--major-image-version,0,--minor-image-version,0 -lkernel32 -luser32 -lgdi32 -lwinspool -lshell32 -lole32 -loleaut32 -luuid -lcomdlg32 -ladvapi32 && cd ." FAILED: cmTC_2c52d.exe C:\WINDOWS\system32\cmd.exe /C "cd . && D:\STM32CUBECLT\GNU-tools-for-STM32\bin\arm-none-eabi-gcc.exe CMakeFiles/cmTC_2c52d.dir/testCCompiler.c.obj -o cmTC_2c52d.exe -Wl,--out-implib,libcmTC_2c52d.dll.a -Wl,--major-image-version,0,--minor-image-version,0 -lkernel32 -luser32 -lgdi32 -lwinspool -lshell32 -lole32 -loleaut32 -luuid -lcomdlg32 -ladvapi32 && cd ." Cannot create temporary file in C:\Users\张宇鹏\AppData\Local\Temp\: No such file or directory arm-none-eabi-gcc.exe: internal compiler error: Aborted signal terminated program collect2 Please submit a full bug report, with preprocessed source (by using -freport-bug). See <https://gcc.gnu.org/bugs/> for instructions. ninja: build stopped: subcommand failed. CMake will not be able to correctly generate this project. Call Stack (most recent call first): CMakeLists.txt:28 (project) -- Configuring incomplete, errors occurred! [已完成]

filetype

""" mlperf inference benchmarking tool """ from __future__ import division from __future__ import print_function from __future__ import unicode_literals # from memory_profiler import profile import argparse import array import collections import json import logging import os import sys import threading import time from multiprocessing import JoinableQueue import sklearn import star_loadgen as lg import numpy as np from mindspore import context from mindspore.train.model import Model from mindspore.train.serialization import load_checkpoint, load_param_into_net from src.model_utils.device_adapter import get_device_num, get_rank_id,get_device_id import os from mindspore import Model, context from mindspore.train.serialization import load_checkpoint, load_param_into_net,\ build_searched_strategy, merge_sliced_parameter from src.wide_and_deep import PredictWithSigmoid, TrainStepWrap, NetWithLossClass, WideDeepModel from src.callbacks import LossCallBack from src.datasets import create_dataset, DataType from src.metrics import AUCMetric from src.model_utils.moxing_adapter import moxing_wrapper from src.metrics import AUCMetric from src.callbacks import EvalCallBack import src.wide_and_deep as wide_deep import src.datasets as datasets from src.model_utils.config import config # from pygcbs_client.task import Task # task = Task() # config = task.config context.set_context(mode=context.GRAPH_MODE, device_target=config.device_target , device_id = 1) print(config.device_id) batch_size = config.batch_size def add_write(file_path, print_str): with open(file_path, 'a+', encoding='utf-8') as file_out: file_out.write(print_str + '\n') def get_WideDeep_net(config): """ Get network of wide&deep model. """ WideDeep_net = WideDeepModel(config) loss_net = NetWithLossClass(WideDeep_net, config) train_net = TrainStepWrap(loss_net) eval_net = PredictWithSigmoid(WideDeep_net) return train_net, eval_net class ModelBuilder(): """ Wide and deep model builder """ def __init__(self): pass def get_hook(self): pass def get_train_hook(self): hooks = [] callback = LossCallBack() hooks.append(callback) if int(os.getenv('DEVICE_ID')) == 0: pass return hooks def get_net(self, config): return get_WideDeep_net(config) logging.basicConfig(level=logging.INFO) log = logging.getLogger("main") NANO_SEC = 1e9 MILLI_SEC = 1000 # pylint: disable=missing-docstring # the datasets we support SUPPORTED_DATASETS = { "debug": (datasets.Dataset, wide_deep.pre_process_criteo_wide_deep, wide_deep.WideDeepPostProcess(), {"randomize": 'total', "memory_map": True}), "multihot-criteo-sample": (wide_deep.WideDeepModel, wide_deep.pre_process_criteo_wide_deep, wide_deep.WideDeepPostProcess(), {"randomize": 'total', "memory_map": True}), "kaggle-criteo": (wide_deep.WideDeepModel, wide_deep.pre_process_criteo_wide_deep, wide_deep.WideDeepPostProcess(), {"randomize": 'total', "memory_map": True}), } # pre-defined command line options so simplify things. 
They are used as defaults and can be # overwritten from command line SUPPORTED_PROFILES = { "defaults": { "dataset": "multihot-criteo", "inputs": "continuous and categorical features", "outputs": "probability", "backend": "mindspore-native", "model": "wide_deep", "max-batchsize": 2048, }, "dlrm-debug-mindspore": { "dataset": "debug", "inputs": "continuous and categorical features", "outputs": "probability", "backend": "pytorch-native", "model": "dlrm", "max-batchsize": 128, }, "dlrm-multihot-sample-mindspore": { "dataset": "multihot-criteo-sample", "inputs": "continuous and categorical features", "outputs": "probability", "backend": "pytorch-native", "model": "dlrm", "max-batchsize": 2048, }, "dlrm-multihot-mindspore": { "dataset": "multihot-criteo", "inputs": "continuous and categorical features", "outputs": "probability", "backend": "pytorch-native", "model": "dlrm", "max-batchsize": 2048, } } SCENARIO_MAP = { "SingleStream": lg.TestScenario.SingleStream, "MultiStream": lg.TestScenario.MultiStream, "Server": lg.TestScenario.Server, "Offline": lg.TestScenario.Offline, } last_timeing = [] import copy class Item: """An item that we queue for processing by the thread pool.""" def __init__(self, query_id, content_id, features, batch_T=None, idx_offsets=None): self.query_id = query_id self.content_id = content_id self.data = features self.batch_T = batch_T self.idx_offsets = idx_offsets self.start = time.time() import mindspore.dataset as mds lock = threading.Lock() class RunnerBase: def __init__(self, model, ds, threads, post_proc=None): self.take_accuracy = False self.ds = ds self.model = model self.post_process = post_proc self.threads = threads self.result_timing = [] def handle_tasks(self, tasks_queue): pass def start_run(self, result_dict, take_accuracy): self.result_dict = result_dict self.result_timing = [] self.take_accuracy = take_accuracy self.post_process.start() def run_one_item(self, qitem): # run the prediction processed_results = [] try: lock.acquire() data = datasets._get_mindrecord_dataset(*qitem.data) t1 = time.time() results = self.model.eval(data) self.result_timing.append(time.time() - t1) print('##################',time.time()) lock.release() processed_results = self.post_process(results, qitem.batch_T, self.result_dict) # self.post_process.add_results(, ) # g_lables.extend(self.model.auc_metric.true_labels) # g_predicts.extend(self.model.auc_metric.pred_probs) g_lables.extend(self.model.auc_metric.true_labels) g_predicts.extend(self.model.auc_metric.pred_probs) except Exception as ex: # pylint: disable=broad-except log.error("thread: failed, %s", ex) # since post_process will not run, fake empty responses processed_results = [[]] * len(qitem.query_id) finally: response_array_refs = [] response = [] for idx, query_id in enumerate(qitem.query_id): # NOTE: processed_results returned by DlrmPostProcess store both # result = processed_results[idx][0] and target = processed_results[idx][1] # also each idx might be a query of samples, rather than a single sample # depending on the --samples-to-aggregate* arguments. 
# debug prints # print("s,e:",s_idx,e_idx, len(processed_results)) response_array = array.array("B", np.array(processed_results[0:1], np.float32).tobytes()) response_array_refs.append(response_array) bi = response_array.buffer_info() response.append(lg.QuerySampleResponse(query_id, bi[0], bi[1])) lg.QuerySamplesComplete(response) def enqueue(self, query_samples): idx = [q.index for q in query_samples] query_id = [q.id for q in query_samples] # print(idx) query_len = len(query_samples) # if query_len < self.max_batchsize: # samples = self.ds.get_samples(idx) # # batch_T = [self.ds.get_labels(sample) for sample in samples] # self.run_one_item(Item(query_id, idx, samples)) # else: bs = 1 for i in range(0, query_len, bs): ie = min(i + bs, query_len) samples = self.ds.get_samples(idx[i:ie]) # batch_T = [self.ds.get_labels(sample) for sample in samples] self.run_one_item(Item(query_id[i:ie], idx[i:ie], samples)) def finish(self): pass import threading class MyQueue: def __init__(self, *args, **kwargs): self.lock = threading.Lock() self._data = [] self.status = True def put(self, value): # self.lock.acquire() self._data.append(value) # self.lock.release() def get(self): if self.status and self._data: return self._data.pop(0) if self.status: while self._data: time.sleep(0.1) return self._data.pop(0) return None # self.lock.acquire() # return self._data.pop(0) # self.lock.release() def task_done(self, *args, **kwargs): self.status = False class QueueRunner(RunnerBase): def __init__(self, model, ds, threads, post_proc=None): super().__init__(model, ds, threads, post_proc) queue_size_multiplier = 4 # (args.samples_per_query_offline + max_batchsize - 1) // max_batchsize) self.tasks = JoinableQueue(maxsize=threads * queue_size_multiplier) self.workers = [] self.result_dict = {} for _ in range(self.threads): worker = threading.Thread(target=self.handle_tasks, args=(self.tasks,)) worker.daemon = True self.workers.append(worker) worker.start() def handle_tasks(self, tasks_queue): """Worker thread.""" while True: qitem = tasks_queue.get() if qitem is None: # None in the queue indicates the parent want us to exit tasks_queue.task_done() break self.run_one_item(qitem) tasks_queue.task_done() def enqueue(self, query_samples): idx = [q.index for q in query_samples] query_id = [q.id for q in query_samples] query_len = len(query_samples) # print(idx) # if query_len < self.max_batchsize: # samples = self.ds.get_samples(idx) # # batch_T = [self.ds.get_labels(sample) for sample in samples] # data = Item(query_id, idx, samples) # self.tasks.put(data) # else: bs = 1 for i in range(0, query_len, bs): ie = min(i + bs, query_len) samples = self.ds.get_samples(idx) # batch_T = [self.ds.get_labels(sample) for sample in samples] self.tasks.put(Item(query_id[i:ie], idx[i:ie], samples)) def finish(self): # exit all threads for _ in self.workers: self.tasks.put(None) for worker in self.workers: worker.join() def add_results(final_results, name, result_dict, result_list, took, show_accuracy=False): percentiles = [50., 80., 90., 95., 99., 99.9] buckets = np.percentile(result_list, percentiles).tolist() buckets_str = ",".join(["{}:{:.4f}".format(p, b) for p, b in zip(percentiles, buckets)]) if result_dict["total"] == 0: result_dict["total"] = len(result_list) # this is what we record for each run result = { "took": took, "mean": np.mean(result_list), "percentiles": {str(k): v for k, v in zip(percentiles, buckets)}, "qps": len(result_list) / took, "count": len(result_list), "good_items": result_dict["good"], "total_items": 
result_dict["total"], } acc_str = "" if show_accuracy: result["accuracy"] = 100. * result_dict["good"] / result_dict["total"] acc_str = ", acc={:.3f}%".format(result["accuracy"]) if "roc_auc" in result_dict: result["roc_auc"] = 100. * result_dict["roc_auc"] acc_str += ", auc={:.3f}%".format(result["roc_auc"]) # add the result to the result dict final_results[name] = result # to stdout print("{} qps={:.2f}, mean={:.4f}, time={:.3f}{}, queries={}, tiles={}".format( name, result["qps"], result["mean"], took, acc_str, len(result_list), buckets_str)) lock = threading.Lock() def append_file(file_path, data): with lock: my_string = ','.join(str(f) for f in data) with open(file_path, 'a+') as file: file.write(my_string + '\n') def read_file(file_path): lines_as_lists = [] with open(file_path, 'r') as file: for line in file: # 去除行尾的换行符,并将行分割成列表 lines_as_lists.extend([float(num) for num in line.strip().split(',')]) return lines_as_lists def get_score(model, quality: float, performance: float): print(model, quality,performance) try: score =0 if model["scenario"] == 'SingleStream'or model["scenario"]== 'MultiStream': if quality >= model["accuracy"]: score = model["baseline_performance"] / (performance + 1e-9) * model["base_score"] else: score ==0 elif model["scenario"]== 'Server' or model["scenario"] == 'Offline': if quality >= model["accuracy"]: score = performance * model["base_score"] / model["baseline_performance"] print(model["baseline_performance"]) else: score ==0 except Exception as e: score ==0 finally: return score def main(): global last_timeing # 初始化时清空文件 global g_lables global g_predicts # args = get_args() g_lables=[] g_predicts=[] # # dataset to use wanted_dataset, pre_proc, post_proc, kwargs = SUPPORTED_DATASETS["debug"] # # --count-samples can be used to limit the number of samples used for testing ds = wanted_dataset(directory=config.dataset_path, train_mode=False, epochs=15, line_per_sample=1000, batch_size=config.test_batch_size, data_type=DataType.MINDRECORD,total_size = config.total_size) # # load model to backend # model = backend.load(args.model_path, inputs=args.inputs, outputs=args.outputs) net_builder = ModelBuilder() train_net, eval_net = net_builder.get_net(config) # ckpt_path = config.ckpt_path param_dict = load_checkpoint(config.ckpt_path) load_param_into_net(eval_net, param_dict) train_net.set_train() eval_net.set_train(False) # acc_metric = AccMetric() # model = Model(train_net, eval_network=eval_net, metrics={"acc": acc_metric}) auc_metric1 = AUCMetric() model = Model(train_net, eval_network=eval_net, metrics={"auc": auc_metric1}) model.auc_metric = auc_metric1 # res = model.eval(ds_eval) final_results = { "runtime": "wide_deep_mindspore", "version": "v2", "time": int(time.time()), "cmdline": str(config), } mlperf_conf = os.path.abspath(config.mlperf_conf) if not os.path.exists(mlperf_conf): log.error("{} not found".format(mlperf_conf)) sys.exit(1) user_conf = os.path.abspath(config.user_conf) if not os.path.exists(user_conf): log.error("{} not found".format(user_conf)) sys.exit(1) if config.output: output_dir = os.path.abspath(config.output) os.makedirs(output_dir, exist_ok=True) os.chdir(output_dir) # # make one pass over the dataset to validate accuracy # count = ds.get_item_count() count = 5 base_score = config.config.base_score #task. accuracy = config.config.accuracy #task. baseline_performance = config.baseline_performance scenario_str = config["scenario"] #task. 
scenario = SCENARIO_MAP[scenario_str] runner_map = { lg.TestScenario.SingleStream: RunnerBase, lg.TestScenario.MultiStream: QueueRunner, lg.TestScenario.Server: QueueRunner, lg.TestScenario.Offline: QueueRunner } runner = runner_map[scenario](model, ds, config.threads_count, post_proc=post_proc) def issue_queries(query_samples): runner.enqueue(query_samples) def flush_queries(): pass settings = lg.TestSettings() settings.FromConfig(mlperf_conf, config.model_path, config.scenario) settings.FromConfig(user_conf, config.model_path, config.scenario) settings.scenario = scenario settings.mode = lg.TestMode.AccuracyOnly sut = lg.ConstructSUT(issue_queries, flush_queries) qsl = lg.ConstructQSL(count, config.performance_count, ds.load_query_samples, ds.unload_query_samples) log.info("starting {}".format(scenario)) result_dict = {"good": 0, "total": 0, "roc_auc": 0, "scenario": str(scenario)} runner.start_run(result_dict, config.accuracy) lg.StartTest(sut, qsl, settings) result_dict["good"] = runner.post_process.good result_dict["total"] = runner.post_process.total last_timeing = runner.result_timing post_proc.finalize(result_dict) add_results(final_results, "{}".format(scenario), result_dict, last_timeing, time.time() - ds.last_loaded, config.accuracy) runner.finish() lg.DestroyQSL(qsl) lg.DestroySUT(sut) # If multiple subprocesses are running the model send a signal to stop them if (int(os.environ.get("WORLD_SIZE", 1)) > 1): model.eval(None) from sklearn.metrics import roc_auc_score # labels = read_file(labels_path) # predicts = read_file(predicts_path) final_results['auc'] = sklearn.metrics.roc_auc_score(g_lables, g_predicts) print("auc+++++", final_results['auc']) NormMetric= { 'scenario': scenario_str, 'accuracy': accuracy, 'baseline_performance': baseline_performance, 'performance_unit': 's', 'base_score': base_score } # 打开文件 reprot_array=[] test_suit_array=[] test_suit_obj={} test_cases_array=[] test_cases_obj={} test_cases_obj["Name"] = config["task_name"] test_cases_obj["Performance Unit"] = config["performance_unit"] test_cases_obj["Total Duration"] = time.time() - ds.last_loaded test_cases_obj["Train Duration"] = None test_cases_obj["Training Info"] = { "Real Quality" :None, "Learning Rate" :None, "Base Quality" :None, "Epochs" :None, "Optimizer" :None } test_cases_obj["Software Versions"] = { "Python" :3.8, "Framework" :"Mindspore 2.2.14" } percentiles = [50., 80., 90., 95., 99., 99.9] buckets = np.percentile(last_timeing, percentiles).tolist() took = time.time() - ds.last_loaded qps = len(last_timeing) / took print(buckets) if scenario_str=="SingleStream": test_cases_obj["Performance Metric"] = buckets[2] if scenario_str=="MultiStream": test_cases_obj["Performance Metric"] = buckets[4] if scenario_str=="Server": test_cases_obj["Performance Metric"] = qps if scenario_str=="Offline": test_cases_obj["Performance Metric"] = qps score = get_score(NormMetric, final_results["auc"], test_cases_obj["Performance Metric"]) test_cases_obj["Score"] = score test_cases_array.append(test_cases_obj) test_suit_obj["Test Cases"] = test_cases_array test_suit_obj["Name"] = "Inference Suite" test_suit_array.append(test_suit_obj) test_obj = {"Test Suites": test_suit_array} reprot_array.append(test_obj) test_suit_result_obj = {"Name":"Inference Suite","Description":"inference model","Score":score } test_suit_result_array = [] test_suit_result_array.append(test_suit_result_obj) test_suit_result_obj1 = {"Test Suites Results":test_suit_result_array} reprot_array.append(test_suit_result_obj1) epoch_obj={ 
"epoch":None, "epoch_time":None, "train_loss":None, "metric":None, "metric_name":None } reprot_array.append(epoch_obj) result_final = {"Report Info": reprot_array } print("result_final", result_final) # task.save(result_final) if __name__ == "__main__": # try: # if task.config["is_run_infer"] == True: # main() # task.close() # else: # task.close() # raise ValueError # except Exception as e: # task.logger.error(e) # task.close(e) # task.logger.info("Finish ") # if task.config["is_run_infer"] == True: # print(config) main() # task.close() 这段代码的目的是什么?

filetype

import os import cv2 import numpy as np import psutil import time import argparse import json from datetime import datetime import logging import signal import sys import traceback import threading import GPUtil import subprocess import gc import shutil import queue import concurrent.futures import tracemalloc import platform import requests import zipfile class VideoProcessor: def __init__(self, config): self.config = config self.canceled = False self.start_time = time.time() self.frame_counter = 0 self.progress = 0 self.status = "就绪" self.fps = 0.0 self.mem_usage = 0.0 self.cpu_percent = 0.0 self.system_mem_percent = 0.0 self.system_mem_used = 0.0 self.system_mem_available = 0.0 self.gpu_load = 0.0 self.gpu_memory_used = 0.0 self.gpu_memory_total = 0.0 self.logger = logging.getLogger("VideoProcessor") self.resources = [] # 跟踪需要释放的资源 self.monitor_active = False self.monitor_thread = None # 多线程队列 self.frame_queue = queue.Queue(maxsize=self.config.get('queue_size', 30)) self.processed_queue = queue.Queue(maxsize=self.config.get('queue_size', 30)) # CUDA流管理 self.cuda_streams = [] self.cuda_ctx = None # 检测移动环境 self.is_mobile = self.detect_mobile_environment() if self.is_mobile: self.logger.info("检测到移动环境,启用移动端优化配置") # 内存跟踪 if self.config.get('enable_memory_monitor', False): tracemalloc.start() self.logger.info("内存跟踪已启用") # 注册信号处理 signal.signal(signal.SIGINT, self.signal_handler) signal.signal(signal.SIGTERM, self.signal_handler) def detect_mobile_environment(self): """检测是否在移动环境中运行""" try: system = platform.system().lower() uname = os.uname() # Android检测 if 'linux' in system and 'android' in uname.version.lower(): self.logger.info("检测到Android环境") return True # iOS检测 if system == 'darwin' and 'ios' in uname.machine.lower(): self.logger.info("检测到iOS环境") return True return False except Exception as e: self.logger.warning(f"移动环境检测失败: {str(e)}") return False def signal_handler(self, signum, frame): """处理中断信号""" self.logger.warning(f"接收到中断信号: {signum}, 正在优雅地停止...") self.cancel() sys.exit(1) def start_resource_monitor(self, interval=1): """启动资源监控线程""" self.monitor_active = True self.monitor_thread = threading.Thread( target=self.monitor_resources, args=(interval,), daemon=True ) self.monitor_thread.start() self.logger.info("资源监控线程已启动") def stop_resource_monitor(self): """停止资源监控线程""" if self.monitor_thread and self.monitor_thread.is_alive(): self.monitor_active = False self.monitor_thread.join(timeout=2.0) self.logger.info("资源监控线程已停止") def monitor_resources(self, interval=1): """资源监控线程函数""" self.logger.info("资源监控开始") print("\n资源监控 | 时间戳 | CPU使用率 | 内存使用 | GPU使用率 | GPU显存") print("-" * 70) while self.monitor_active: try: # CPU监控 cpu_percent = psutil.cpu_percent(interval=None) # 内存监控 mem = psutil.virtual_memory() mem_usage = f"{mem.used / (1024**3):.1f}GB/{mem.total / (1024**3):.1f}GB" # GPU监控 gpu_info = "" try: gpus = GPUtil.getGPUs() if gpus: gpu = gpus[0] gpu_info = f"{gpu.load*100:.1f}% | {gpu.memoryUsed:.1f}MB/{gpu.memoryTotal:.0f}MB" # 更新GPU状态 self.gpu_load = gpu.load * 100 self.gpu_memory_used = gpu.memoryUsed self.gpu_memory_total = gpu.memoryTotal else: gpu_info = "No GPU" except Exception as e: gpu_info = f"Error: {str(e)}" timestamp = time.strftime('%H:%M:%S') print(f"{timestamp} | {cpu_percent:6.1f}% | {mem_usage:^15} | {gpu_info}") self.logger.info(f"资源监控 | {timestamp} | CPU: {cpu_percent}% | 内存: {mem_usage} | GPU: {gpu_info}") # 内存泄漏检测 if self.config.get('enable_memory_monitor', False): snapshot = tracemalloc.take_snapshot() top_stats = snapshot.statistics('lineno') self.logger.info("内存分配Top 
10:") for stat in top_stats[:10]: self.logger.info(str(stat)) time.sleep(interval) except Exception as e: self.logger.error(f"资源监控出错: {str(e)}") time.sleep(5) # 出错后等待5秒再重试 def init_cuda(self): """初始化CUDA环境""" if not self.config.get('use_gpu_processing', False) or self.is_mobile: return try: device_id = self.config.get('gpu_device_index', 0) if cv2.cuda.getCudaEnabledDeviceCount() > device_id: # 设置CUDA设备 cv2.cuda.setDevice(device_id) device = cv2.cuda.DeviceInfo(device_id) self.logger.info(f"使用GPU设备: {device.name()}") # 创建CUDA流 num_streams = self.config.get('cuda_streams', 4) self.cuda_streams = [cv2.cuda_Stream() for _ in range(num_streams)] self.logger.info(f"已创建 {num_streams} 个CUDA流") # 创建CUDA上下文 self.cuda_ctx = cv2.cuda.Device(device_id).createContext() self.logger.info("CUDA上下文已创建") else: self.logger.warning("请求的GPU设备不可用,将使用CPU处理") self.config['use_gpu_processing'] = False except Exception as e: self.logger.error(f"CUDA初始化失败: {str(e)}") self.config['use_gpu_processing'] = False def open_video_with_acceleration(self, file_path): """使用硬件加速打开视频""" # 移动端使用专用API if self.is_mobile: self.logger.info("移动端: 使用Android专用API") try: # Android专用API cap = cv2.VideoCapture(file_path, cv2.CAP_ANDROID) if cap.isOpened(): self.logger.info("Android专用API打开成功") self.resources.append(cap) return cap else: self.logger.warning("Android专用API打开失败,尝试默认方式") except: self.logger.warning("Android专用API不可用,使用默认方式") # 桌面端或移动端备选方案 if self.config.get('hardware_acceleration', 'disable') == 'disable': cap = cv2.VideoCapture(file_path) self.resources.append(cap) return cap cap = cv2.VideoCapture() self.resources.append(cap) acceleration = { 'auto': cv2.VIDEO_ACCELERATION_ANY, 'any': cv2.VIDEO_ACCELERATION_ANY, 'nvidia': cv2.VIDEO_ACCELERATION_NVIDIA, 'intel': cv2.VIDEO_ACCELERATION_INTEL, 'vaapi': cv2.VIDEO_ACCELERATION_VAAPI }.get(self.config.get('hardware_acceleration', 'auto'), cv2.VIDEO_ACCELERATION_ANY) params = [ cv2.CAP_PROP_HW_ACCELERATION, acceleration, cv2.CAP_PROP_HW_DEVICE, self.config.get('gpu_device_index', 0) ] # 降低延迟的优化参数 if self.config.get('reduce_latency', True): params.extend([ cv2.CAP_PROP_BUFFERSIZE, self.config.get('buffer_size', 3), cv2.CAP_PROP_FPS, self.config.get('target_fps', 30) ]) # MJPEG压缩 if self.config.get('use_mjpeg', True): params.extend([ cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M','J','P','G') ]) # 设置解码线程数 decoding_threads = self.config.get('decoding_threads', 0) if decoding_threads > 0: params.extend([cv2.CAP_PROP_FFMPEG_THREADS, decoding_threads]) try: cap.open(file_path, apiPreference=cv2.CAP_FFMPEG, params=params) # Intel专用加速 if self.config.get('hardware_acceleration', '') == 'intel' and cap.isOpened(): cap.set(cv2.CAP_PROP_INTEL_VIDEO_SRC_HW_ACCEL, 1) except Exception as e: self.logger.error(f"硬件加速打开失败: {str(e)}, 使用默认方式") cap = cv2.VideoCapture(file_path) return cap def update_system_stats(self): """更新系统资源统计""" self.cpu_percent = psutil.cpu_percent(interval=0.1) mem = psutil.virtual_memory() self.system_mem_percent = mem.percent self.system_mem_used = mem.used / (1024 ** 3) # GB self.system_mem_available = mem.available / (1024 ** 3) # GB def print_progress(self): """美观的进度显示""" elapsed = time.time() - self.start_time eta = (100 - self.progress) * elapsed / max(1, self.progress) if self.progress > 0 else 0 # 进度条 bar_length = 30 filled_length = int(bar_length * self.progress / 100) bar = '█' * filled_length + '-' * (bar_length - filled_length) # 队列状态 queue_status = f"Q: {self.frame_queue.qsize()}/{self.processed_queue.qsize()}" progress_str = ( f"进度: |{bar}| {self.progress}% " f"| 
速度: {self.fps:.1f}fps " f"| 用时: {elapsed:.1f}s " f"| 剩余: {eta:.1f}s " f"| CPU: {self.cpu_percent:.0f}% " f"| 内存: {self.mem_usage:.1f}MB " f"| GPU: {self.gpu_load:.1f}% " f"| {queue_status}" ) print(f"\r{progress_str}", end="") self.logger.info(progress_str) def capture_thread(self, cap, total_frames): """视频捕获线程 (生产者)""" frame_idx = 0 while cap.isOpened() and not self.canceled and frame_idx < total_frames: ret, frame = cap.read() if not ret: break # 放入队列,非阻塞方式防止死锁 try: self.frame_queue.put((frame_idx, frame), timeout=1.0) frame_idx += 1 except queue.Full: if self.canceled: break time.sleep(0.01) # 发送结束信号 self.frame_queue.put((None, None)) self.logger.info(f"捕获线程完成,共捕获 {frame_idx} 帧") def processing_thread(self, output_resolution): """视频处理线程 (消费者)""" output_width, output_height = output_resolution while not self.canceled: try: # 获取帧,带超时防止死锁 frame_idx, frame = self.frame_queue.get(timeout=2.0) # 结束信号 if frame_idx is None: self.processed_queue.put((None, None)) self.frame_queue.task_done() break # 处理帧 try: # 移动端使用轻量级算法 if self.is_mobile: # 移动端优化:使用Canny边缘检测替代复杂特征检测 gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) edges = cv2.Canny(gray, 100, 200) # 将边缘检测结果与原帧合并 frame[:, :, 0] = np.minimum(frame[:, :, 0] + edges, 255) frame[:, :, 1] = np.minimum(frame[:, :, 1] + edges, 255) frame[:, :, 2] = np.minimum(frame[:, :, 2] + edges, 255) # 移动端使用快速插值方法 processed_frame = cv2.resize(frame, output_resolution, interpolation=cv2.INTER_LINEAR) else: # 桌面端使用完整算法 if self.config.get('use_gpu_processing', False) and self.cuda_streams: # 选择CUDA流 (轮询) stream_idx = frame_idx % len(self.cuda_streams) stream = self.cuda_streams[stream_idx] # 上传到GPU gpu_frame = cv2.cuda_GpuMat() gpu_frame.upload(frame, stream=stream) # GPU处理 if output_resolution: gpu_frame = cv2.cuda.resize(gpu_frame, output_resolution, stream=stream) # 下载回CPU processed_frame = gpu_frame.download(stream=stream) else: # CPU处理 if output_resolution: processed_frame = cv2.resize(frame, output_resolution) else: processed_frame = frame # 放入已处理队列 self.processed_queue.put((frame_idx, processed_frame), timeout=1.0) except cv2.error as e: if 'CUDA' in str(e): self.logger.error(f"GPU处理失败: {str(e)},切换到CPU模式") self.config['use_gpu_processing'] = False # 使用CPU重试 processed_frame = cv2.resize(frame, output_resolution) if output_resolution else frame self.processed_queue.put((frame_idx, processed_frame), timeout=1.0) else: self.logger.error(f"处理帧 {frame_idx} 失败: {str(e)}") except Exception as e: self.logger.error(f"处理帧 {frame_idx} 时出错: {str(e)}") self.frame_queue.task_done() except queue.Empty: if self.canceled: break except Exception as e: self.logger.error(f"处理线程出错: {str(e)}") self.logger.info("处理线程退出") def writer_thread(self, out, total_frames): """写入线程""" frame_idx = 0 last_log_time = time.time() while not self.canceled and frame_idx < total_frames: try: # 获取已处理帧 idx, processed_frame = self.processed_queue.get(timeout=2.0) # 结束信号 if idx is None: break # 写入输出 if processed_frame is not None: out.write(processed_frame) # 更新计数 self.frame_counter += 1 frame_idx += 1 # 计算帧率 elapsed = time.time() - self.start_time self.fps = self.frame_counter / elapsed if elapsed > 0 else 0 # 更新内存使用 process = psutil.Process(os.getpid()) self.mem_usage = process.memory_info().rss / (1024 ** 2) # MB # 更新系统状态 self.update_system_stats() # 更新进度 self.progress = int(frame_idx * 100 / total_frames) # 定期打印进度 current_time = time.time() if current_time - last_log_time > 1.0 or frame_idx % 50 == 0: self.print_progress() last_log_time = current_time # 内存管理 if frame_idx % 100 == 0: gc.collect() # 
检查内存使用情况 if self.system_mem_percent > 90: self.logger.warning(f"系统内存使用超过90%! (当前: {self.system_mem_percent}%)") print(f"\n警告: 系统内存使用过高 ({self.system_mem_percent}%)") self.processed_queue.task_done() except queue.Empty: if self.canceled: break except Exception as e: self.logger.error(f"写入线程出错: {str(e)}") self.logger.info(f"写入线程完成,共写入 {frame_idx} 帧") def run(self): try: self.status = "处理中..." self.logger.info("视频处理开始") self.logger.info(f"主视频: {self.config['main_video']}") self.logger.info(f"副视频: {self.config['sub_video']}") self.logger.info(f"输出文件: {self.config['output_path']}") self.start_time = time.time() # 初始化CUDA self.init_cuda() # 启动资源监控 self.start_resource_monitor() # 打开主视频 self.logger.info("正在打开主视频...") main_cap = self.open_video_with_acceleration(self.config['main_video']) if not main_cap.isOpened(): self.status = "无法打开主视频文件!" self.logger.error(f"无法打开主视频文件: {self.config['main_video']}") return False # 获取主视频信息 main_fps = main_cap.get(cv2.CAP_PROP_FPS) main_width = int(main_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) main_height = int(main_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) main_total_frames = int(main_cap.get(cv2.CAP_PROP_FRAME_COUNT)) self.logger.info(f"主视频信息: {main_width}x{main_height}@{main_fps:.1f}fps, 总帧数: {main_total_frames}") # 打开副视频 self.logger.info("正在打开副视频...") sub_cap = self.open_video_with_acceleration(self.config['sub_video']) if not sub_cap.isOpened(): self.status = "无法打开副视频文件!" self.logger.error(f"无法打开副视频文件: {self.config['sub_video']}") return False # 获取副视频信息 sub_fps = sub_cap.get(cv2.CAP_PROP_FPS) sub_width = int(sub_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) sub_height = int(sub_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) sub_total_frames = int(sub_cap.get(cv2.CAP_PROP_FRAME_COUNT)) self.logger.info(f"副视频信息: {sub_width}x{sub_height}@{sub_fps:.1f}fps, 总帧数: {sub_total_frames}") # 创建输出目录 output_dir = os.path.dirname(self.config['output_path']) if output_dir and not os.path.exists(output_dir): try: os.makedirs(output_dir) self.logger.info(f"已创建输出目录: {output_dir}") except Exception as e: self.status = f"无法创建输出目录: {output_dir}" self.logger.error(f"创建输出目录失败: {str(e)}") return False # 创建输出视频 output_width, output_height = self.config['output_resolution'] fourcc = cv2.VideoWriter_fourcc(*'mp4v') out = cv2.VideoWriter(self.config['output_path'], fourcc, main_fps, (output_width, output_height)) self.resources.append(out) if not out.isOpened(): self.status = "无法创建输出视频文件!请检查分辨率设置。" self.logger.error(f"无法创建输出视频: {self.config['output_path']}, 分辨率: {output_width}x{output_height}") return False # 计算主视频分段参数 if self.config['main_segment_type'] == '秒': segment_length_main = int(float(self.config['segment_a']) * main_fps) else: segment_length_main = int(self.config['segment_a']) b1 = int(self.config['b1']) b2 = int(self.config['b2']) replace_frame_count = b2 - b1 + 1 # 计算副视频分段参数 if self.config['sub_segment_type'] == '秒': segment_length_sub = int(float(self.config['segment_c']) * sub_fps) else: segment_length_sub = int(self.config['segment_c']) d = int(self.config['d']) # 计算主视频段数 segments_main = (main_total_frames + segment_length_main - 1) // segment_length_main # 计算副视频段数 segments_sub = (sub_total_frames + segment_length_sub - 1) // segment_length_sub # 检查段数是否匹配 if segments_main > segments_sub: if self.config['sub_option'] == '循环使用': self.logger.warning(f"副视频段数不足({segments_sub}),将循环使用以满足主视频段数({segments_main})") else: self.status = "副视频段数不足,无法完成替换!" 
self.logger.error(f"副视频段数不足: {segments_sub} < {segments_main}") return False # 初始化性能监控 process = psutil.Process(os.getpid()) self.logger.info("="*50) self.logger.info("开始视频处理") self.logger.info(f"主视频: {self.config['main_video']} ({main_total_frames}帧, {main_fps:.1f}fps)") self.logger.info(f"副视频: {self.config['sub_video']} ({sub_total_frames}帧, {sub_fps:.1f}fps)") self.logger.info(f"输出文件: {self.config['output_path']}") self.logger.info(f"分辨率: {output_width}x{output_height}") self.logger.info(f"主视频分段数: {segments_main}, 每段{segment_length_main}帧") self.logger.info(f"替换帧范围: {b1}-{b2} (每段替换{replace_frame_count}帧)") self.logger.info(f"副视频分段数: {segments_sub}, 每段{segment_length_sub}帧") self.logger.info(f"副视频起始帧: {d}") self.logger.info(f"使用GPU处理: {self.config.get('use_gpu_processing', False)}") self.logger.info(f"CUDA流数量: {len(self.cuda_streams)}") self.logger.info(f"移动环境: {self.is_mobile}") self.logger.info("="*50) print("\n" + "="*50) print("开始视频处理") print(f"主视频: {self.config['main_video']} ({main_total_frames}帧, {main_fps:.1f}fps)") print(f"副视频: {self.config['sub_video']} ({sub_total_frames}帧, {sub_fps:.1f}fps)") print(f"输出文件: {self.config['output_path']}") print(f"分辨率: {output_width}x{output_height}") print(f"主视频分段数: {segments_main}, 每段{segment_length_main}帧") print(f"替换帧范围: {b1}-{b2} (每段替换{replace_frame_count}帧)") print(f"副视频分段数: {segments_sub}, 每段{segment_length_sub}帧") print(f"副视频起始帧: {d}") print(f"使用GPU处理: {self.config.get('use_gpu_processing', False)}") print(f"CUDA流数量: {len(self.cuda_streams)}") print(f"移动环境: {self.is_mobile}") print("="*50 + "\n") # 启动多线程处理 with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor: # 启动捕获线程 capture_future = executor.submit( self.capture_thread, main_cap, main_total_frames ) # 启动处理线程 processing_future = executor.submit( self.processing_thread, (output_width, output_height) ) # 启动写入线程 writer_future = executor.submit( self.writer_thread, out, main_total_frames ) # 等待所有线程完成 concurrent.futures.wait( [capture_future, processing_future, writer_future], return_when=concurrent.futures.ALL_COMPLETED ) if not self.canceled: self.status = "处理完成" self.progress = 100 self.print_progress() print(f"\n\n处理完成!输出文件: {self.config['output_path']}") self.logger.info(f"处理完成! 总帧数: {self.frame_counter}, 耗时: {time.time() - self.start_time:.1f}秒") self.logger.info(f"输出文件: {self.config['output_path']}") return True return False except Exception as e: self.status = f"处理过程中发生错误: {str(e)}" error_trace = traceback.format_exc() self.logger.error(f"处理过程中发生错误: {str(e)}") self.logger.error(f"错误详情:\n{error_trace}") print(f"\n\n错误: {str(e)}") return False finally: self.stop_resource_monitor() self.release_resources() if self.config.get('enable_memory_monitor', False): tracemalloc.stop() def release_resources(self): """释放所有资源""" self.logger.info("正在释放资源...") for resource in self.resources: try: if hasattr(resource, 'release'): resource.release() elif hasattr(resource, 'close'): resource.close() except Exception as e: self.logger.warning(f"释放资源时出错: {str(e)}") # 释放CUDA资源 if self.cuda_ctx: try: self.cuda_ctx.destroy() self.logger.info("CUDA上下文已释放") except Exception as e: self.logger.warning(f"释放CUDA上下文时出错: {str(e)}") self.resources = [] self.logger.info("资源已释放") def cancel(self): """取消处理""" self.canceled = True self.status = "正在取消..." 
self.logger.warning("用户请求取消处理") print("\n正在取消处理...") # 清空队列 while not self.frame_queue.empty(): try: self.frame_queue.get_nowait() self.frame_queue.task_done() except queue.Empty: break while not self.processed_queue.empty(): try: self.processed_queue.get_nowait() self.processed_queue.task_done() except queue.Empty: break self.stop_resource_monitor() self.release_resources() def get_video_info(file_path): """获取视频文件信息""" cap = None try: cap = cv2.VideoCapture(file_path) if cap.isOpened(): width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) fps = cap.get(cv2.CAP_PROP_FPS) frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) duration = frame_count / fps if fps > 0 else 0 return { "width": width, "height": height, "fps": fps, "frame_count": frame_count, "duration": duration } return None except Exception as e: print(f"获取视频信息时出错: {str(e)}") return None finally: if cap and cap.isOpened(): cap.release() def validate_config(config): """验证配置参数""" # 检查文件存在 if not os.path.exists(config['main_video']): print(f"错误: 主视频文件不存在 - {config['main_video']}") return False if not os.path.exists(config['sub_video']): print(f"错误: 副视频文件不存在 - {config['sub_video']}") return False # 检查输出目录 output_dir = os.path.dirname(config['output_path']) if output_dir and not os.path.exists(output_dir): try: os.makedirs(output_dir) print(f"已创建输出目录: {output_dir}") except: print(f"错误: 无法创建输出目录 - {output_dir}") return False # 检查参数有效性 try: # 主视频参数 segment_a = float(config['segment_a']) if segment_a <= 0: print("错误: 分段长度必须大于0!") return False b1 = int(config['b1']) b2 = int(config['b2']) if b1 < 0 or b2 < 0: print("错误: 帧索引不能为负数!") return False if b1 > b2: print("错误: 替换开始帧(b1)必须小于或等于替换结束帧(b2)!") return False # 副视频参数 segment_c = float(config['segment_c']) if segment_c <= 0: print("错误: 分段长度必须大于0!") return False d = int(config['d']) if d < 0: print("错误: 帧索引不能为负数!") return False # 分辨率 width = int(config['output_resolution'][0]) height = int(config['output_resolution'][1]) if width <= 0 or height <= 0: print("错误: 分辨率必须大于0!") return False return True except ValueError: print("错误: 请输入有效的数字参数!") return False def save_config(config, file_path): """保存配置到文件""" try: with open(file_path, 'w') as f: json.dump(config, f, indent=2) print(f"配置已保存到: {file_path}") except Exception as e: print(f"保存配置时出错: {str(e)}") def load_config(file_path): """从文件加载配置""" try: with open(file_path, 'r') as f: config = json.load(f) # 确保配置中包含所有必要字段 required_keys = [ 'main_video', 'sub_video', 'output_path', 'main_segment_type', 'segment_a', 'b1', 'b2', 'sub_segment_type', 'segment_c', 'd', 'sub_option', 'output_resolution' ] for key in required_keys: if key not in config: print(f"警告: 配置文件中缺少 '{key}' 参数") return config except FileNotFoundError: print(f"错误: 配置文件不存在 - {file_path}") except Exception as e: print(f"加载配置时出错: {str(e)}") return None def create_default_config(): """创建默认配置""" return { "main_video": "main_video.mp4", "sub_video": "sub_video.mp4", "output_path": "output/output_video.mp4", "main_segment_type": "秒", # 默认按秒分段 "segment_a": "1", # 默认1秒 "b1": "1", # 默认替换开始帧 "b2": "1", # 默认替换结束帧 "sub_segment_type": "帧", # 默认按帧分段 "segment_c": "1", # 默认1帧 "d": "1", # 默认起始帧 "sub_option": "循环使用", "output_resolution": [1280, 720], "hardware_acceleration": "auto", "gpu_device_index": 0, "reduce_latency": True, "decoding_threads": 4, "use_gpu_processing": True, "cuda_streams": 4, "queue_size": 30, "buffer_size": 3, "target_fps": 30, "use_mjpeg": True, "enable_memory_monitor": False, "mobile_optimized": True # 新增移动端优化标志 } def 
detect_hardware_acceleration(): """更全面的硬件加速支持检测""" print("\n=== 硬件加速支持检测 ===") print(f"OpenCV版本: {cv2.__version__}") # 检测CUDA支持 if cv2.cuda.getCudaEnabledDeviceCount() > 0: print("CUDA支持: 可用") for i in range(cv2.cuda.getCudaEnabledDeviceCount()): try: device = cv2.cuda.getDevice(i) print(f" 设备 {i}: {device.name()}, 计算能力: {device.majorVersion()}.{device.minorVersion()}") except: print(f" 设备 {i}: 信息获取失败") else: print("CUDA支持: 不可用") # 检测OpenCL支持 print(f"OpenCL支持: {'可用' if cv2.ocl.haveOpenCL() else '不可用'}") # 获取FFMPEG信息 try: result = subprocess.run(['ffmpeg', '-version'], capture_output=True, text=True) ffmpeg_version = result.stdout.split('\n')[0] print(f"FFMPEG版本: {ffmpeg_version}") except: print("FFMPEG版本: 未找到") # 检测可用加速类型 acceleration_types = { 'NVIDIA': cv2.VIDEO_ACCELERATION_NVIDIA, 'Intel': cv2.VIDEO_ACCELERATION_INTEL, 'VAAPI': cv2.VIDEO_ACCELERATION_VAAPI, 'ANY': cv2.VIDEO_ACCELERATION_ANY } print("\n支持的硬件加速类型:") available_accelerations = [] for name, accel_type in acceleration_types.items(): cap = cv2.VideoCapture() try: params = [cv2.CAP_PROP_HW_ACCELERATION, accel_type] test_result = cap.open("", apiPreference=cv2.CAP_FFMPEG, params=params) status = "可用" if test_result else "不可用" print(f"- {name}: {status}") if test_result: available_accelerations.append(name.lower()) except: print(f"- {name}: 检测失败") finally: if cap.isOpened(): cap.release() # 如果没有可用的硬件加速,提供备选方案 if not available_accelerations: print("\n警告: 未检测到任何硬件加速支持!") print("建议:") print("1. 使用软件解码 (设置 hardware_acceleration: 'disable')") print("2. 安装以下备选库:") print(" - NVIDIA GPU 用户: 安装 CUDA Toolkit 和 cuDNN") print(" - Intel GPU 用户: 安装 Intel Media SDK") print(" - AMD/其他 GPU 用户: 安装 VAAPI") print("3. 重新编译OpenCV以支持硬件加速") print("4. 使用支持硬件加速的FFmpeg版本") else: print("\n检测到以下可用的硬件加速类型:") print(", ".join(available_accelerations)) print("在配置文件中设置 'hardware_acceleration' 参数使用") def preview_frame(config, frame_index, is_main=True): """预览指定视频的指定帧""" video_path = config['main_video'] if is_main else config['sub_video'] cap = cv2.VideoCapture(video_path) if not cap.isOpened(): print(f"无法打开视频文件: {video_path}") return total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) if frame_index >= total_frames: print(f"帧索引超出范围 (最大: {total_frames-1})") cap.release() return cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index) ret, frame = cap.read() if ret: # 创建预览窗口 window_name = f"预览: {'主视频' if is_main else '副视频'} - 帧 {frame_index}" cv2.namedWindow(window_name, cv2.WINDOW_NORMAL) # 调整窗口大小 height, width = frame.shape[:2] max_height = 800 if height > max_height: scale = max_height / height frame = cv2.resize(frame, (int(width * scale), max_height)) cv2.imshow(window_name, frame) cv2.waitKey(0) cv2.destroyAllWindows() else: print(f"无法读取帧 {frame_index}") cap.release() def batch_process(config_file, output_dir): """批量处理多个配置""" try: with open(config_file) as f: batch_configs = json.load(f) except Exception as e: print(f"加载批量配置文件失败: {str(e)}") return total_tasks = len(batch_configs) print(f"\n开始批量处理 {total_tasks} 个任务") for i, config in enumerate(batch_configs): print(f"\n处理任务 {i+1}/{total_tasks}") print(f"主视频: {config['main_video']}") print(f"副视频: {config['sub_video']}") # 添加时间戳到输出文件名 timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") base_name = os.path.basename(config['output_path']) config['output_path'] = os.path.join( output_dir, f"{timestamp}_{base_name}" ) # 验证配置 if not validate_config(config): print(f"任务 {i+1} 配置验证失败,跳过") continue # 创建处理器 processor = VideoProcessor(config) success = processor.run() if success: print(f"任务 {i+1} 完成: {config['output_path']}") 
else: print(f"任务 {i+1} 失败") # 任务间延迟,让系统冷却 if i < total_tasks - 1: print("\n等待5秒,准备下一个任务...") time.sleep(5) def setup_logging(): """配置日志系统""" log_dir = "logs" if not os.path.exists(log_dir): os.makedirs(log_dir) timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") log_file = os.path.join(log_dir, f"video_processor_{timestamp}.log") logging.basicConfig( level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', handlers=[ logging.FileHandler(log_file), logging.StreamHandler() ] ) logger = logging.getLogger() logger.info(f"日志系统初始化完成, 日志文件: {log_file}") return logger, log_file def install_termux_dependencies(): """安装Termux所需的依赖""" print("正在安装Termux依赖...") commands = [ "pkg update && pkg upgrade -y", "pkg install python libjpeg-turbo libvulkan vulkan-loader-android ffmpeg -y", "pkg install vulkan-tools vulkan-validation-layers -y", "pkg install ocl-icd opencl-headers -y" ] for cmd in commands: print(f"执行: {cmd}") result = subprocess.run(cmd, shell=True) if result.returncode != 0: print(f"命令执行失败: {cmd}") return False print("Termux依赖安装完成") return True def verify_gpu_support(): """验证GPU支持情况""" print("\n验证GPU支持:") # 验证MediaCodec支持 print("\n1. MediaCodec支持:") result = subprocess.run(["ffmpeg", "-hwaccels"], capture_output=True, text=True) if "mediacodec" in result.stdout: print(" ✓ 支持MediaCodec硬件加速") else: print(" ✗ 不支持MediaCodec硬件加速") # 验证Vulkan支持 print("\n2. Vulkan支持:") try: result = subprocess.run(["vulkaninfo"], capture_output=True, text=True) if "deviceName" in result.stdout: print(" ✓ 支持Vulkan API") else: print(" ✗ 不支持Vulkan API") except FileNotFoundError: print(" ✗ vulkaninfo未安装,无法验证Vulkan支持") # 验证OpenCL支持 print("\n3. OpenCL支持:") try: result = subprocess.run(["clinfo"], capture_output=True, text=True) if "Platform Name" in result.stdout: print(" ✓ 支持OpenCL") else: print(" ✗ 不支持OpenCL") except FileNotFoundError: print(" ✗ clinfo未安装,无法验证OpenCL支持") print("\n验证完成") def setup_termux_gpu_acceleration(): """设置Termux GPU加速环境""" print("="*50) print("Termux GPU加速视频处理设置") print("="*50) # 安装基础依赖 if not install_termux_dependencies(): print("依赖安装失败,无法继续设置") return # 验证GPU支持 verify_gpu_support() # 下载并编译CLBlast print("\n编译安装CLBlast...") commands = [ "pkg install git cmake make -y", "git clone https://github.com/CNugteren/CLBlast", "cd CLBlast && mkdir build && cd build", "cmake .. -DCMAKE_INSTALL_PREFIX=$PREFIX", "make install" ] for cmd in commands: print(f"执行: {cmd}") result = subprocess.run(cmd, shell=True) if result.returncode != 0: print(f"命令执行失败: {cmd}") return print("\nGPU加速环境设置完成!") print("现在可以使用以下命令进行硬件加速视频处理:") print("ffmpeg -hwaccel mediacodec -i input.mp4 -c:v h264_mediacodec output.mp4") # 创建示例批处理脚本 with open("gpu_batch_process.sh", "w") as f: f.write("""#!/bin/bash # GPU加速批处理脚本 for f in *.mp4; do echo "处理: $f" ffmpeg -hwaccel mediacodec -i "$f" -c:v h264_mediacodec "gpu_$f" done echo "所有视频处理完成!" 
""") print("\n已创建批处理脚本: gpu_batch_process.sh") print("使用命令运行: bash gpu_batch_process.sh") def main(): # 设置日志 logger, log_file = setup_logging() # 创建参数解析器 parser = argparse.ArgumentParser(description="专业视频帧替换工具", formatter_class=argparse.RawTextHelpFormatter) parser.add_argument("--config", help="配置文件路径", default="") parser.add_argument("--save-config", help="保存默认配置到文件", action="store_true") parser.add_argument("--background", help="后台运行模式", action="store_true") parser.add_argument("--batch", help="批量处理模式,指定批量配置文件", default="") parser.add_argument("--preview-main", type=int, help="预览主视频指定帧", default=-1) parser.add_argument("--preview-sub", type=int, help="预览副视频指定帧", default=-1) parser.add_argument("--output-dir", help="批量处理输出目录", default="batch_output") parser.add_argument("--enable-gpu", help="启用GPU加速处理", action="store_true") parser.add_argument("--enable-mem-monitor", help="启用内存监控", action="store_true") parser.add_argument("--setup-termux", help="设置Termux GPU加速环境", action="store_true") args = parser.parse_args() # Termux GPU加速设置 if args.setup_termux: setup_termux_gpu_acceleration() return # 保存默认配置 if args.save_config: config_file = args.config if args.config else "video_config.json" default_config = create_default_config() save_config(default_config, config_file) print(f"默认配置已保存到: {config_file}") return # 批量处理模式 if args.batch: if not os.path.exists(args.output_dir): os.makedirs(args.output_dir) batch_process(args.batch, args.output_dir) return # 加载配置 config = None if args.config: config = load_config(args.config) # 如果没有提供配置或加载失败,使用默认配置 if not config: print("使用默认配置") config = create_default_config() # 命令行参数覆盖配置 if args.enable_gpu: config['use_gpu_processing'] = True if args.enable_mem_monitor: config['enable_memory_monitor'] = True # 预览功能 if args.preview_main >= 0: preview_frame(config, args.preview_main, is_main=True) return if args.preview_sub >= 0: preview_frame(config, args.preview_sub, is_main=False) return # 后台模式处理 if args.background: print("后台模式运行中...") logger.info("后台模式启动") # 重定向标准输出到日志 sys.stdout = open(log_file, 'a') sys.stderr = sys.stdout # 显示硬件加速信息 detect_hardware_acceleration() # 显示配置 logger.info("\n当前配置:") logger.info(f"主视频: {config['main_video']}") logger.info(f"副视频: {config['sub_video']}") logger.info(f"输出文件: {config['output_path']}") logger.info(f"主视频分段方式: {config['main_segment_type']}, 长度: {config['segment_a']}") logger.info(f"替换帧范围: b1={config['b1']}, b2={config['b2']}") logger.info(f"副视频分段方式: {config['sub_segment_type']}, 长度: {config['segment_c']}") logger.info(f"副视频起始帧: d={config['d']}") logger.info(f"副视频不足时: {config['sub_option']}") logger.info(f"输出分辨率: {config['output_resolution'][0]}x{config['output_resolution'][1]}") logger.info(f"硬件加速: {config.get('hardware_acceleration', 'auto')}") logger.info(f"解码线程数: {config.get('decoding_threads', 0)}") logger.info(f"使用GPU处理: {config.get('use_gpu_processing', False)}") logger.info(f"CUDA流数量: {config.get('cuda_streams', 0)}") logger.info(f"队列大小: {config.get('queue_size', 30)}") logger.info(f"启用内存监控: {config.get('enable_memory_monitor', False)}") logger.info(f"移动端优化: {config.get('mobile_optimized', True)}") print("\n当前配置:") print(f"主视频: {config['main_video']}") print(f"副视频: {config['sub_video']}") print(f"输出文件: {config['output_path']}") print(f"主视频分段方式: {config['main_segment_type']}, 长度: {config['segment_a']}") print(f"替换帧范围: b1={config['b1']}, b2={config['b2']}") print(f"副视频分段方式: {config['sub_segment_type']}, 长度: {config['segment_c']}") print(f"副视频起始帧: d={config['d']}") print(f"副视频不足时: {config['sub_option']}") print(f"输出分辨率: 
### Dependencies required by this code

- Standard library: `argparse`, `json`, `logging`, `os`, `subprocess`, `sys`, `time`, `datetime`.
- Third-party Python package: OpenCV (`cv2`). The `cv2.cuda` calls and the `VIDEO_ACCELERATION_*` probes only do useful work when OpenCV was built with CUDA and FFmpeg support; the stock `opencv-python` wheel reports zero CUDA devices.
- External executables invoked through `subprocess`: `ffmpeg` (version query, `-hwaccels`, transcoding), and on Termux additionally `pkg`, `git`, `cmake`, `make`, `vulkaninfo`, and `clinfo`.
- Internal dependencies: `VideoProcessor`, `validate_config`, `load_config`, `save_config`, `create_default_config`, and `get_video_info` are used but not defined in this excerpt, so the surrounding script must supply them.
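A minimal environment sketch for a desktop Linux host follows, assuming the usual PyPI and apt package names; a CUDA-enabled OpenCV still has to be compiled separately:

```bash
# Python dependency (CPU build; cv2.cuda will report 0 devices)
pip install opencv-python

# System tool the script shells out to
sudo apt install ffmpeg
ffmpeg -version   # sanity check that it is on PATH
```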
