
cmd \\.\%COM_DEBUG_PRO%

### Debugging Serial Communication (COM) Ports

In Windows, a path of the form `\\.\%COM_DEBUG_PRO%` accessed from the Command Prompt (CMD) addresses a serial communication (COM) port through the Win32 device namespace; here the environment variable `%COM_DEBUG_PRO%` is expected to expand to a port name such as `COM3`. (The `\\.\` prefix is mandatory for ports numbered COM10 and above, and harmless for lower numbers.) This kind of access lets developers and system administrators talk directly to serial devices attached to the computer, such as embedded boards, sensors, and modems.

#### Uses

1. **Device debugging**: exchange data with an external device over the COM port for debugging and testing, e.g. send a specific command and check the device's response to verify that it works.
2. **Logging**: capture a device's log output over the serial port to analyze its behavior and troubleshoot problems.
3. **Firmware updates**: some devices accept firmware updates over their serial port, so they can be kept on the latest software version.
4. **Automated testing**: in an automated test environment, serial communication allows a device to be controlled and exercised without manual intervention.

#### Debugging from CMD

Use the `mode` command to configure and inspect a serial port's settings. For example:

```cmd
mode COM1: baud=9600 parity=N data=8 stop=1
```

This sets COM1 to 9600 baud, no parity, 8 data bits, and 1 stop bit.

To send data over the port, `copy` a file to the device path:

```cmd
copy filename.txt \\.\COM1
```

This transmits the contents of `filename.txt` to COM1.

#### Example code

In an MFC application, if you need to print log messages to a console during debugging, you can allocate a console window and redirect standard output to it. Note that the classic trick of assigning `(*stdout) = (*fp);` breaks on modern CRTs (Visual Studio 2015 and later, where `FILE` is an opaque type); reopening `stdout` on the console device works everywhere:

```cpp
#include <windows.h>
#include <cstdio>

// Allocate a console for a GUI (MFC) application and redirect stdout to it.
void InitConsoleWindow()
{
    AllocConsole();
    FILE* fp = nullptr;
    freopen_s(&fp, "CONOUT$", "w", stdout);  // reopen stdout on the new console
    printf("Debug start\r\n");
}

// Close the redirected stream and release the console.
void DeInitConsoleWindow()
{
    fclose(stdout);
    FreeConsole();
}
```

Call `InitConsoleWindow` from the application's `InitInstance`, and `DeInitConsoleWindow` at shutdown (for example from `ExitInstance`), to enable and disable the console window as needed.

#### Notes

- Make sure the port is not already held open by another program before accessing it.
- Configure the baud rate, data bits, stop bits, and parity to match the device, or communication will fail.
- Run `mode COMx` with no settings to display the port's current configuration and confirm it is correct.[^4]
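#### Opening the port programmatically

The same `\\.\COMx` device path can also be opened from code through the Win32 API, which is how most serial debugging tools work. The sketch below is a minimal example, not part of the original article, and it makes two assumptions: that `COM_DEBUG_PRO` expands to a bare port name such as `COM3` (with `\\.\COM3` used as a fallback), and that the device answers a hypothetical `AT\r\n` probe command. It applies the same settings as the `mode` example above.

```cpp
#include <windows.h>
#include <cstdio>

int main()
{
    // Build the device path from the environment, mirroring \\.\%COM_DEBUG_PRO%.
    char port[64] = "\\\\.\\COM3";  // assumed fallback if the variable is unset
    char value[16];
    if (GetEnvironmentVariableA("COM_DEBUG_PRO", value, sizeof(value)) > 0)
        snprintf(port, sizeof(port), "\\\\.\\%s", value);

    HANDLE h = CreateFileA(port, GENERIC_READ | GENERIC_WRITE,
                           0, nullptr, OPEN_EXISTING, 0, nullptr);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "Cannot open %s (error %lu)\n", port, GetLastError());
        return 1;
    }

    // Equivalent of `mode COM1: baud=9600 parity=N data=8 stop=1`.
    DCB dcb = { sizeof(DCB) };
    GetCommState(h, &dcb);
    dcb.BaudRate = CBR_9600;
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;
    SetCommState(h, &dcb);

    // Bound ReadFile so it cannot block forever if the device stays silent.
    COMMTIMEOUTS to = { 0 };
    to.ReadIntervalTimeout        = 50;
    to.ReadTotalTimeoutConstant   = 500;
    to.ReadTotalTimeoutMultiplier = 10;
    SetCommTimeouts(h, &to);

    // Send a probe command and print whatever the device returns.
    const char cmd[] = "AT\r\n";  // hypothetical command; depends on the device
    DWORD n = 0;
    WriteFile(h, cmd, sizeof(cmd) - 1, &n, nullptr);

    char buf[256];
    if (ReadFile(h, buf, sizeof(buf) - 1, &n, nullptr) && n > 0) {
        buf[n] = '\0';
        printf("Device replied: %s\n", buf);
    }

    CloseHandle(h);
    return 0;
}
```

The `COMMTIMEOUTS` step is the part most often forgotten: without it, `ReadFile` on a serial handle can block indefinitely when the device sends nothing.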

save_and_upload(self, img, label): try: # 设置上传状态,阻止UART发送 self.state = SystemState.UPLOADING logger.info(f"Starting upload for label {label} (UART blocked)") # 生成文件名 timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") filename = f"{SAVE_DIR}{label}_{timestamp}.jpg" # 保存图片 if img.save(filename, quality=90): logger.info(f"Image saved: {filename}") # 同步上传 with open(filename, 'rb') as file: files = { 'file': ('image.jpg', file, 'image/jpeg') } params = { 'biz': 'plant_picture', } headers = { "token": self.token } logger.info(f"Uploading {filename} with label {label}, Token: {self.token[:10]}...") response = requests.post( self.photo_url, files=files, headers=headers, params=params ) if response.json().get('code') == 0: logger.info(f"Upload success: {filename}, Response: {response.text}") return response.json().get('data') else: logger.warning(f"Upload failed: {response.text}") else: logger.error("Image save failed") except Exception as e: logger.error(f"Capture failed: {str(e)}") finally: # 恢复状态,允许UART发送 self.state = SystemState.NORMAL logger.info(f"Upload completed for label {label} (UART unblocked)") return None def save_data(self, data): try: # 设置上传状态,阻止UART发送 self.state = SystemState.UPLOADING logger.info("Starting data save (UART blocked)") params = [{ "deviceName": 1, "plantId": 1, "growthStage": "flowering", "healthStage": "healthy", "height": "5", "crownWidth": "5", "humidity": '', "ph": '', "dan": '', "lin": '', "jia": '', "photoUrl": data, "notes": "" }] headers = { "token": self.token } response = requests.post( self.data_url, headers=headers, json=params ) logger.info(f"Response: {data}") if response.json().get('code') == 0: logger.info(f"Data save success: {response.text}") else: logger.warning(f"Data save failed: {response.text}") except Exception as e: logger.error(f"Data upload error: {str(e)}") finally: # 恢复状态,允许UART发送 self.state = SystemState.NORMAL logger.info("Data save completed (UART unblocked)") def get_ocr_text(self, obj): """安全获取OCR文本内容""" try: # 尝试获取文本内容 text = obj.char_str # 如果char_str是方法则调用它 if callable(text): text = text() # 确保是字符串类型 return str(text).strip() except Exception as e: logger.error(f"获取OCR文本失败: {str(e)}") return "" def handle_detection(self, objs, img): with self.lock: current_time = time.time() # 状态机逻辑 if self.state == SystemState.NORMAL: for obj in objs: # 使用安全方法获取文本 text = self.get_ocr_text(obj) logger.info(f"OCR detected text: {text}") # 处理01-07的情况 if text in ["01", "02", "03", "04", "05", "06", "07"]: num = int(text) # 转换为整数 logger.info(f"Label {num} detected via OCR") self.state = SystemState.OBJECT_DETECTED self.send_uart("stop") # 发送停止命令 (0x00) # 1秒后保存并上传 def delayed_save(): data = self.save_and_upload(img, num) if data: self.save_data(data) self.add_timer(1.0, delayed_save) # 2秒后发送前进命令 def delayed_forward(): self.send_uart("right") # 发送前进命令 (0x02) self.state = SystemState.NORMAL self.add_timer(2.0, delayed_forward) break # 处理一个有效结果后退出循环 # 处理08的情况 elif text == "08": logger.info("Special label 08 detected") self.state = SystemState.SPECIAL_HANDLING self.send_uart("stop") # 发送停止命令 (0x00) # 1秒后保存并上传 def delayed_save(): data = self.save_and_upload(img, 8) if data: self.save_data(data) self.send_uart("left") # 发送左转命令 (0x01) # 进入等待标签1状态 self.state = SystemState.WAIT_FOR_LABEL1 self.add_timer(1.0, delayed_save) break # 处理一个有效结果后退出循环 elif self.state == SystemState.SPECIAL_HANDLING: # 等待上传完成 pass elif self.state == SystemState.WAIT_FOR_LABEL1: for obj in objs: text = self.get_ocr_text(obj) if text == "01": logger.info("Label1 after special handling") 
self.send_uart("stop") # 发送停止命令 (0x00) break def add_timer(self, delay, callback): timer = threading.Timer(delay, callback) timer.start() self.timers.append(timer) def cleanup(self): for timer in self.timers: timer.cancel() logger.info("System cleanup completed") # 主控制实例 controller = OperationController() # 创建颜色对象 red_color = image.Color(255, 0, 0) # 红色 - 用于检测框 green_color = image.Color(0, 255, 0) # 绿色 - 用于ROI框 blue_color = image.Color(0, 0, 255) # 蓝色 - 用于文本 yellow_color = image.Color(255, 255, 0) # 黄色 - 用于警告信息 # 主循环 try: # 帧率计算变量 frame_count = 0 last_log_time = time.time() while not app.need_exit(): try: # 读取图像 img = cam.read() frame_count += 1 except Exception as e: logger.error(f"摄像头读取失败: {str(e)}") continue # 绘制ROI区域边框 - 使用新的矩形参数 (138, 139, 44, 85) img.draw_rect(ROI_X, ROI_Y, ROI_W, ROI_H, green_color, thickness=2) # 添加ROI区域标签 img.draw_string(ROI_X, ROI_Y - 20, f"ROI: {ROI_X},{ROI_Y},{ROI_W},{ROI_H}", scale=0.7, color=blue_color) # 裁剪ROI区域 try: # 使用crop方法裁剪ROI区域 roi_img = img.crop(ROI_X, ROI_Y, ROI_W, ROI_H) except Exception as e: logger.error(f"ROI裁剪失败: {str(e)}") disp.show(img) continue # 执行OCR识别(仅在ROI区域) try: objs = ocr.detect(roi_img) except Exception as e: logger.error(f"OCR识别失败: {str(e)}") disp.show(img) continue # 调整检测框坐标(从ROI坐标转换到原始图像坐标) adjusted_objs = [] for obj in objs: # 直接修改原始对象坐标 obj.box.x1 += ROI_X obj.box.y1 += ROI_Y obj.box.x2 += ROI_X obj.box.y2 += ROI_Y obj.box.x3 += ROI_X obj.box.y3 += ROI_Y obj.box.x4 += ROI_X obj.box.y4 += ROI_Y adjusted_objs.append(obj) # 处理结果 if len(adjusted_objs) > 0: controller.handle_detection(adjusted_objs, img) # 显示OCR结果 for obj in adjusted_objs: # 绘制检测框(四个点) points = obj.box.to_list() img.draw_keypoints( points, red_color, # 颜色 4, # 点大小 -1, # 连接所有点 1 # 线宽 ) # 安全获取文本内容 try: text = controller.get_ocr_text(obj) # 绘制识别文本 img.draw_string( obj.box.x4, obj.box.y4, text, scale=0.5, color=red_color ) except Exception as e: logger.error(f"绘制OCR文本失败: {str(e)}") img.draw_string( obj.box.x4, obj.box.y4, "ERROR", scale=0.5, color=yellow_color ) # 显示状态信息 state_text = f"State: {controller.state}" img.draw_string(5, 5, state_text, scale=0.8, color=blue_color) # 显示检测结果数量 count_text = f"Detected: {len(adjusted_objs)}" img.draw_string(5, 25, count_text, scale=0.8, color=blue_color) # 显示当前时间 time_text = datetime.now().strftime("%H:%M:%S") img.draw_string(img.width() - 100, 5, time_text, scale=0.8, color=blue_color) # 显示帧率 if time.time() - last_log_time > 1.0: fps = frame_count img.draw_string(5, 45, f"FPS: {fps}", scale=0.8, color=blue_color) frame_count = 0 last_log_time = time.time() # 显示图像 disp.show(img) except KeyboardInterrupt: logger.info("用户中断") except Exception as e: logger.critical(f"致命错误: {str(e)}") finally: controller.cleanup() logger.info("系统关闭")
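针对开头的修改要求——保存完图片后先发送前进命令 (0x02),再进入 3 秒的 UPLOADING 状态,保证同一张照片不会被多次保存——可以把 `handle_detection` 中 01-07 分支里的 `delayed_save` 改成下面的样子。这只是一个草图,只用到上面代码已有的 `send_uart`、`add_timer` 和 `SystemState`;注意 `send_uart` 在 `UPLOADING` 状态下会拦截发送,所以必须先发 0x02 再切换状态:

```python
# 处理01-07的情况(改法草图):保存上传 -> 先发前进命令 -> 进入3秒UPLOADING锁定
def delayed_save():
    data = self.save_and_upload(img, num)   # 返回时其 finally 已把状态置回 NORMAL
    if data:
        self.save_data(data)
    self.send_uart("right")                 # 先发送前进命令 (0x02)
    self.state = SystemState.UPLOADING      # 再进入锁定状态,期间不再触发新的保存

    def unlock():
        self.state = SystemState.NORMAL     # 3 秒后恢复正常检测

    self.add_timer(3.0, unlock)

self.add_timer(1.0, delayed_save)
# 原来 2 秒后发 0x02 并复位状态的 delayed_forward 定时器可以随之删掉
```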

static int ufshcd_wait_for_register(struct ufs_hba *hba, u32 reg, u32 mask, u32 val, unsigned long timeout_ms) { int err = 0; unsigned long start = get_timer(0); /* ignore bits that we don't intend to wait on */ val = val & mask; while ((ufshcd_readl(hba, reg) & mask) != val) { if (get_timer(start) > timeout_ms) { if ((ufshcd_readl(hba, reg) & mask) != val) err = -ETIMEDOUT; break; } } return err; } /** * ufshcd_init_pwr_info - setting the POR (power on reset) * values in hba power info */ static void ufshcd_init_pwr_info(struct ufs_hba *hba) { hba->pwr_info.gear_rx = UFS_PWM_G1; hba->pwr_info.gear_tx = UFS_PWM_G1; hba->pwr_info.lane_rx = 1; hba->pwr_info.lane_tx = 1; hba->pwr_info.pwr_rx = SLOWAUTO_MODE; hba->pwr_info.pwr_tx = SLOWAUTO_MODE; hba->pwr_info.hs_rate = 0; } /** * ufshcd_print_pwr_info - print power params as saved in hba * power info */ static void ufshcd_print_pwr_info(struct ufs_hba *hba) { static const char * const names[] = { "INVALID MODE", "FAST MODE", "SLOW_MODE", "INVALID MODE", "FASTAUTO_MODE", "SLOWAUTO_MODE", "INVALID MODE", }; dev_err(hba->dev, "[RX, TX]: gear=[%d, %d], lane[%d, %d], pwr[%s, %s], rate = %d\n", hba->pwr_info.gear_rx, hba->pwr_info.gear_tx, hba->pwr_info.lane_rx, hba->pwr_info.lane_tx, names[hba->pwr_info.pwr_rx], names[hba->pwr_info.pwr_tx], hba->pwr_info.hs_rate); } static void ufshcd_device_reset(struct ufs_hba *hba) { ufshcd_vops_device_reset(hba); } /** * ufshcd_ready_for_uic_cmd - Check if controller is ready * to accept UIC commands */ static inline bool ufshcd_ready_for_uic_cmd(struct ufs_hba *hba) { if (ufshcd_readl(hba, REG_CONTROLLER_STATUS) & UIC_COMMAND_READY) return true; else return false; } /** * ufshcd_get_uic_cmd_result - Get the UIC command result */ static inline int ufshcd_get_uic_cmd_result(struct ufs_hba *hba) { return ufshcd_readl(hba, REG_UIC_COMMAND_ARG_2) & MASK_UIC_COMMAND_RESULT; } /** * ufshcd_get_dme_attr_val - Get the value of attribute returned by UIC command */ static inline u32 ufshcd_get_dme_attr_val(struct ufs_hba *hba) { return ufshcd_readl(hba, REG_UIC_COMMAND_ARG_3); } /** * ufshcd_is_device_present - Check if any device connected to * the host controller */ static inline bool ufshcd_is_device_present(struct ufs_hba *hba) { return (ufshcd_readl(hba, REG_CONTROLLER_STATUS) & DEVICE_PRESENT) ? 
true : false; } /** * ufshcd_send_uic_cmd - UFS Interconnect layer command API * */ static int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd) { unsigned long start = 0; u32 intr_status; u32 enabled_intr_status; if (!ufshcd_ready_for_uic_cmd(hba)) { dev_err(hba->dev, "Controller not ready to accept UIC commands\n"); return -EIO; } debug("sending uic command:%d\n", uic_cmd->command); /* Write Args */ ufshcd_writel(hba, uic_cmd->argument1, REG_UIC_COMMAND_ARG_1); ufshcd_writel(hba, uic_cmd->argument2, REG_UIC_COMMAND_ARG_2); ufshcd_writel(hba, uic_cmd->argument3, REG_UIC_COMMAND_ARG_3); /* Write UIC Cmd */ ufshcd_writel(hba, uic_cmd->command & COMMAND_OPCODE_MASK, REG_UIC_COMMAND); start = get_timer(0); do { intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS); enabled_intr_status = intr_status & hba->intr_mask; ufshcd_writel(hba, intr_status, REG_INTERRUPT_STATUS); if (get_timer(start) > UFS_UIC_CMD_TIMEOUT) { dev_err(hba->dev, "Timedout waiting for UIC response\n"); return -ETIMEDOUT; } if (enabled_intr_status & UFSHCD_ERROR_MASK) { dev_err(hba->dev, "Error in status:%08x\n", enabled_intr_status); return -1; } } while (!(enabled_intr_status & UFSHCD_UIC_MASK)); uic_cmd->argument2 = ufshcd_get_uic_cmd_result(hba); uic_cmd->argument3 = ufshcd_get_dme_attr_val(hba); debug("Sent successfully\n"); return 0; } /** * ufshcd_dme_set_attr - UIC command for DME_SET, DME_PEER_SET * */ int ufshcd_dme_set_attr(struct ufs_hba *hba, u32 attr_sel, u8 attr_set, u32 mib_val, u8 peer) { struct uic_command uic_cmd = {0}; static const char *const action[] = { "dme-set", "dme-peer-set" }; const char *set = action[!!peer]; int ret; int retries = UFS_UIC_COMMAND_RETRIES; uic_cmd.command = peer ? UIC_CMD_DME_PEER_SET : UIC_CMD_DME_SET; uic_cmd.argument1 = attr_sel; uic_cmd.argument2 = UIC_ARG_ATTR_TYPE(attr_set); uic_cmd.argument3 = mib_val; do { /* for peer attributes we retry upon failure */ ret = ufshcd_send_uic_cmd(hba, &uic_cmd); if (ret) dev_dbg(hba->dev, "%s: attr-id 0x%x val 0x%x error code %d\n", set, UIC_GET_ATTR_ID(attr_sel), mib_val, ret); } while (ret && peer && --retries); if (ret) dev_err(hba->dev, "%s: attr-id 0x%x val 0x%x failed %d retries\n", set, UIC_GET_ATTR_ID(attr_sel), mib_val, UFS_UIC_COMMAND_RETRIES - retries); return ret; } /** * ufshcd_dme_get_attr - UIC command for DME_GET, DME_PEER_GET * */ int ufshcd_dme_get_attr(struct ufs_hba *hba, u32 attr_sel, u32 *mib_val, u8 peer) { struct uic_command uic_cmd = {0}; static const char *const action[] = { "dme-get", "dme-peer-get" }; const char *get = action[!!peer]; int ret; int retries = UFS_UIC_COMMAND_RETRIES; uic_cmd.command = peer ? 
UIC_CMD_DME_PEER_GET : UIC_CMD_DME_GET; uic_cmd.argument1 = attr_sel; do { /* for peer attributes we retry upon failure */ ret = ufshcd_send_uic_cmd(hba, &uic_cmd); if (ret) dev_dbg(hba->dev, "%s: attr-id 0x%x error code %d\n", get, UIC_GET_ATTR_ID(attr_sel), ret); } while (ret && peer && --retries); if (ret) dev_err(hba->dev, "%s: attr-id 0x%x failed %d retries\n", get, UIC_GET_ATTR_ID(attr_sel), UFS_UIC_COMMAND_RETRIES - retries); if (mib_val && !ret) *mib_val = uic_cmd.argument3; return ret; } static int ufshcd_disable_tx_lcc(struct ufs_hba *hba, bool peer) { u32 tx_lanes, i, err = 0; if (!peer) ufshcd_dme_get(hba, UIC_ARG_MIB(PA_CONNECTEDTXDATALANES), &tx_lanes); else ufshcd_dme_peer_get(hba, UIC_ARG_MIB(PA_CONNECTEDTXDATALANES), &tx_lanes); for (i = 0; i < tx_lanes; i++) { unsigned int val = UIC_ARG_MIB_SEL(TX_LCC_ENABLE, UIC_ARG_MPHY_TX_GEN_SEL_INDEX(i)); if (!peer) err = ufshcd_dme_set(hba, val, 0); else err = ufshcd_dme_peer_set(hba, val, 0); if (err) { dev_err(hba->dev, "%s: TX LCC Disable failed, peer = %d, lane = %d, err = %d\n", __func__, peer, i, err); break; } } return err; } static inline int ufshcd_disable_device_tx_lcc(struct ufs_hba *hba) { return ufshcd_disable_tx_lcc(hba, true); } /** * ufshcd_dme_link_startup - Notify Unipro to perform link startup * */ static int ufshcd_dme_link_startup(struct ufs_hba *hba) { struct uic_command uic_cmd = {0}; int ret; uic_cmd.command = UIC_CMD_DME_LINK_STARTUP; ret = ufshcd_send_uic_cmd(hba, &uic_cmd); if (ret) dev_dbg(hba->dev, "dme-link-startup: error code %d\n", ret); return ret; } /** * ufshcd_disable_intr_aggr - Disables interrupt aggregation. * */ static inline void ufshcd_disable_intr_aggr(struct ufs_hba *hba) { ufshcd_writel(hba, 0, REG_UTP_TRANSFER_REQ_INT_AGG_CONTROL); } /** * ufshcd_get_lists_status - Check UCRDY, UTRLRDY and UTMRLRDY */ static inline int ufshcd_get_lists_status(u32 reg) { return !((reg & UFSHCD_STATUS_READY) == UFSHCD_STATUS_READY); } /** * ufshcd_enable_run_stop_reg - Enable run-stop registers, * When run-stop registers are set to 1, it indicates the * host controller that it can process the requests */ static void ufshcd_enable_run_stop_reg(struct ufs_hba *hba) { ufshcd_writel(hba, UTP_TASK_REQ_LIST_RUN_STOP_BIT, REG_UTP_TASK_REQ_LIST_RUN_STOP); ufshcd_writel(hba, UTP_TRANSFER_REQ_LIST_RUN_STOP_BIT, REG_UTP_TRANSFER_REQ_LIST_RUN_STOP); } /** * ufshcd_enable_intr - enable interrupts */ static void ufshcd_enable_intr(struct ufs_hba *hba, u32 intrs) { u32 set = ufshcd_readl(hba, REG_INTERRUPT_ENABLE); u32 rw; if (hba->version == UFSHCI_VERSION_10) { rw = set & INTERRUPT_MASK_RW_VER_10; set = rw | ((set ^ intrs) & intrs); } else { set |= intrs; } ufshcd_writel(hba, set, REG_INTERRUPT_ENABLE); hba->intr_mask = set; } /** * ufshcd_make_hba_operational - Make UFS controller operational * * To bring UFS host controller to operational state, * 1. Enable required interrupts * 2. Configure interrupt aggregation * 3. Program UTRL and UTMRL base address * 4. 
Configure run-stop-registers * */ static int ufshcd_make_hba_operational(struct ufs_hba *hba) { int err = 0; u32 reg; /* Enable required interrupts */ ufshcd_enable_intr(hba, UFSHCD_ENABLE_INTRS); /* Disable interrupt aggregation */ ufshcd_disable_intr_aggr(hba); /* Configure UTRL and UTMRL base address registers */ ufshcd_writel(hba, lower_32_bits((dma_addr_t)hba->utrdl), REG_UTP_TRANSFER_REQ_LIST_BASE_L); ufshcd_writel(hba, upper_32_bits((dma_addr_t)hba->utrdl), REG_UTP_TRANSFER_REQ_LIST_BASE_H); ufshcd_writel(hba, lower_32_bits((dma_addr_t)hba->utmrdl), REG_UTP_TASK_REQ_LIST_BASE_L); ufshcd_writel(hba, upper_32_bits((dma_addr_t)hba->utmrdl), REG_UTP_TASK_REQ_LIST_BASE_H); /* * Make sure base address and interrupt setup are updated before * enabling the run/stop registers below. */ wmb(); /* * UCRDY, UTMRLDY and UTRLRDY bits must be 1 */ reg = ufshcd_readl(hba, REG_CONTROLLER_STATUS); if (!(ufshcd_get_lists_status(reg))) { ufshcd_enable_run_stop_reg(hba); } else { dev_err(hba->dev, "Host controller not ready to process requests\n"); err = -EIO; goto out; } out: return err; } /** * ufshcd_link_startup - Initialize unipro link startup */ static int ufshcd_link_startup(struct ufs_hba *hba) { int ret; int retries = DME_LINKSTARTUP_RETRIES; do { ufshcd_ops_link_startup_notify(hba, PRE_CHANGE); ret = ufshcd_dme_link_startup(hba); /* check if device is detected by inter-connect layer */ if (!ret && !ufshcd_is_device_present(hba)) { dev_err(hba->dev, "%s: Device not present\n", __func__); ret = -ENXIO; goto out; } /* * DME link lost indication is only received when link is up, * but we can't be sure if the link is up until link startup * succeeds. So reset the local Uni-Pro and try again. */ if (ret && ufshcd_hba_enable(hba)) goto out; } while (ret && retries--); if (ret) /* failed to get the link up... retire */ goto out; /* Mark that link is up in PWM-G1, 1-lane, SLOW-AUTO mode */ ufshcd_init_pwr_info(hba); if (hba->quirks & UFSHCD_QUIRK_BROKEN_LCC) { ret = ufshcd_disable_device_tx_lcc(hba); if (ret) goto out; } /* Include any host controller configuration via UIC commands */ ret = ufshcd_ops_link_startup_notify(hba, POST_CHANGE); if (ret) goto out; /* Clear UECPA once due to LINERESET has happened during LINK_STARTUP */ ufshcd_readl(hba, REG_UIC_ERROR_CODE_PHY_ADAPTER_LAYER); ret = ufshcd_make_hba_operational(hba); out: if (ret) dev_err(hba->dev, "link startup failed %d\n", ret); return ret; }
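补充一点:`ufshcd_wait_for_register` 这类轮询辅助函数在初始化路径上很常用,典型用法是在置位 HCE(Host Controller Enable)后等待其生效,上面 `ufshcd_link_startup` 重试时调用的 `ufshcd_hba_enable` 做的大致就是这类事情。下面是一个示意(`REG_CONTROLLER_ENABLE`、`CONTROLLER_ENABLE` 以实际 ufshci 头文件中的定义为准,100ms 超时为示例值):

```c
/* 示意:置位 HCE 后用 ufshcd_wait_for_register 轮询等待其变为 1 */
static int example_hba_enable(struct ufs_hba *hba)
{
	ufshcd_writel(hba, CONTROLLER_ENABLE, REG_CONTROLLER_ENABLE);

	return ufshcd_wait_for_register(hba, REG_CONTROLLER_ENABLE,
					CONTROLLER_ENABLE, CONTROLLER_ENABLE,
					100 /* ms */);
}
```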

MultiValueDictKeyError at /auth/users/
'password'

Request Method: POST
Request URL: http://127.0.0.1:8000/auth/users/
Django Version: 4.2.21
Exception Type: MultiValueDictKeyError
Exception Value: 'password'
Exception Location: C:\Users\高琨\AppData\Local\Programs\Python\Python39\lib\site-packages\django\utils\datastructures.py, line 86, in __getitem__
Raised during: account.views.UserView
Python Executable: C:\Users\高琨\AppData\Local\Programs\Python\Python39\python.exe
Python Version: 3.9.11
Server time: Tue, 24 Jun 2025 10:29:49 +0800

Traceback:
django\utils\datastructures.py, line 84, in __getitem__
    list_ = super().__getitem__(key)
During handling of the above exception ('password'), another exception occurred:
django\core\handlers\exception.py, line 55, in inner
    response = get_response(request)
django\core\handlers\base.py, line 197, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
django\views\decorators\csrf.py, line 56, in wrapper_view
    return view_func(*args, **kwargs)
rest_framework\viewsets.py, line 124, in view
    return self.dispatch(request, *args, **kwargs)
rest_framework\views.py, line 509, in dispatch
    response = self.handle_exception(exc)
rest_framework\views.py, line 469, in handle_exception
    self.raise_uncaught_exception(exc)
rest_framework\views.py, line 480, in raise_uncaught_exception
    raise exc
rest_framework\views.py, line 506, in dispatch
    response = handler(request, *args, **kwargs)
C:\Users\高琨\Desktop\backend\account\views.py, line 31, in login
    password = request.data['password']
django\utils\datastructures.py, line 86, in __getitem__
    raise MultiValueDictKeyError(key)

Request information:
USER: AnonymousUser
GET: 无数据
POST:
  csrfmiddlewaretoken = 'DWg42eYqodHAyGvU8iq7SW01Agw6SWsrGqn2RNRtENkZgyesArpAfiWcxFAkniwR'
  last_login = ''
  first_name = '琨'
  last_name = '高'
  date_joined = ''
  email = '[email protected]'
  desc = '啊大苏打'
  mobile = '13698551543'
  gender = ''
FILES:
  avatar = <InMemoryUploadedFile: 屏幕截图 2025-05-16 181140.png (image/png)>
COOKIES:
  csrftoken = '********************'
(调试页其余部分是 META 环境变量与全部 Settings 的转储,与本错误无关)
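这个 500 的直接原因在上面的请求数据里就能看出来:POST 表单里根本没有 password 字段,而 `account/views.py` 第 31 行用 `request.data['password']` 下标取值,`MultiValueDict` 在键不存在时就抛出 `MultiValueDictKeyError`。下面是一个改法草图(`login` 视图的其余逻辑均为假设):

```python
# account/views.py(草图):用 .get() 代替下标访问,字段缺失时返回 400 而不是 500
from rest_framework import status
from rest_framework.response import Response

def login(self, request):
    password = request.data.get('password')  # 键不存在时得到 None,不抛异常
    if not password:
        return Response({'detail': '缺少 password 字段'},
                        status=status.HTTP_400_BAD_REQUEST)
    # ……后续的用户校验 / 发放 token 逻辑
```

更规范的做法是定义一个 DRF Serializer,把 password 声明为必填字段,由框架统一做校验;同时检查前端提交的表单为什么没有带上 password。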

import cv2 import numpy as np import imutils import pytesseract import os import easyocr # Tesseract 路径配置 pytesseract.pytesseract.tesseract_cmd = r'C:\Users\26281\tesseract\tesseract.exe' os.environ['TESSDATA_PREFIX'] = r'C:\Users\26281\tesseract\tessdata' # 初始化 EasyOCR 识别器 reader = easyocr.Reader(['ch_sim', 'en'], gpu=False) # 1. 读取图像 img_path = "D:/pro/pythonProject4/4.23.jpg" img = cv2.imread(img_path, cv2.IMREAD_COLOR) if img is None: print("无法识别图像!") exit() # 2. 预处理 img = cv2.resize(img, (600, 400)) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) edged = cv2.Canny(gray, 30, 200) # 3. 找轮廓并筛选车牌轮廓 contours = cv2.findContours(edged.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) contours = imutils.grab_contours(contours) contours = sorted(contours, key=cv2.contourArea, reverse=True)[:10] screenCnt = None for c in contours: peri = cv2.arcLength(c, True) approx = cv2.approxPolyDP(c, 0.018 * peri, True) if len(approx) == 4: screenCnt = approx break if screenCnt is None: print("未能获取到有效车牌轮廓") exit() # 4. 提取车牌区域 cv2.drawContours(img, [screenCnt], -1, (0, 0, 255), 3) mask = np.zeros(gray.shape, np.uint8) cv2.fillPoly(mask, [screenCnt], 255) (x, y) = np.where(mask == 255) if len(x) == 0 or len(y) == 0: print("无效的车牌区域!") exit() (topx, topy) = (np.min(x), np.min(y)) (bottomx, bottomy) = (np.max(x), np.max(y)) cropped_gray = gray[topx:bottomx + 1, topy:bottomy + 1] cropped_color = img[topx:bottomx + 1, topy:bottomy + 1] # 保存彩色版本用于 EasyOCR # 5. 单独提取省份区域进行识别 h, w = cropped_gray.shape province_width = int(w * 0.25) # 取车牌宽度的1/4作为省份简称区域 province_region_gray = cropped_gray[:, :province_width] province_region_color = cropped_color[:, :province_width] # 6. 使用 EasyOCR 识别省份 province_results = reader.readtext(province_region_color) province = "" for (bbox, text, prob) in province_results: if text in '粤苏浙京沪津渝冀晋蒙辽吉黑皖闽赣鲁豫鄂湘粤桂琼川贵云藏陕甘青宁新' and prob > 0.5: province = text break # 7. 使用 EasyOCR 识别车牌其他部分 char_region_color = cropped_color[:, province_width:] results = reader.readtext(char_region_color) easy_text = "" for (bbox, text, prob) in results: if len(text) >= 5 and prob > 0.5: # 至少5个字符且置信度>0.5 easy_text = text.strip().replace(" ", "") break # 8. 处理识别结果 if province and easy_text: print("车牌省份:", province) print("车牌号码:", easy_text) else: print("EasyOCR 识别不完整,尝试 Tesseract") # 9. 使用 Tesseract 进行识别 # 配置 Tesseract 参数 tess_config = ( '-c tessedit_char_whitelist=粤苏浙京沪津渝冀晋蒙辽吉黑皖闽赣鲁豫鄂湘粤桂琼川贵云藏陕甘青宁新0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ ' '--oem 3 --psm 6' ) # 对车牌进行二值化处理 _, binary = cv2.threshold(cropped_gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU) # 使用 Tesseract 识别 text = pytesseract.image_to_string(binary, config=tess_config, lang='chi_sim+eng') text = text.strip().replace(" ", "").replace("\n", "") if text: # 尝试分离省份和车牌其他部分 province = text[0] if len(text) > 0 and text[ 0] in '粤苏浙京沪津渝冀晋蒙辽吉黑皖闽赣鲁豫鄂湘粤桂琼川贵云藏陕甘青宁新' else "" plate_number = text[1:] if len(text) > 1 else "" print("车牌省份:", province) print("车牌号码:", plate_number) else: print("Tesseract 识别失败") # 10. 图像显示(在有UI环境下) if os.environ.get('DISPLAY') or cv2.getWindowProperty('dummy', cv2.WND_PROP_VISIBLE) >= 0: img = cv2.resize(img, (500, 300)) cv2.imshow('Detected Plate', img) cv2.imshow('Cropped Plate', cropped_gray) cv2.imshow('Province Region', province_region_color) cv2.imshow('Char Region', char_region_color) cv2.waitKey(0) cv2.destroyAllWindows() 输出车牌正确结果应是粤F0A943,但是输出结果是D:\pro\pythonProject\venv\Scripts\python.exe D:\pro\pythonProject\测试.py Using CPU. Note: This module is much faster with a GPU. 
车牌省份: 粤
车牌号码: 0A943
[ WARN:[email protected]] global window.cpp:302 cvGetWindowProperty No UI backends available. Use OPENCV_LOG_LEVEL=DEBUG for investigation

为什么车牌识别结果不完整(粤F0A943 中的字母 F 丢失了)?
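就上面的问题:字母 F 丢得这么"整齐",多半不是 EasyOCR 认不出,而是第 5 步按固定 25% 宽度切分省份区时,把"F"正好切在了分界线上——左半边落进省份区(不在省份简称白名单里,被过滤掉),右半边落进字符区(字形残缺、置信度低,被 prob > 0.5 的阈值丢弃)。另外第 7 步只取第一条 len(text) >= 5 的结果,也容易丢掉被切成两段的字符。一个更稳的做法是对整块车牌做一次识别,再用车牌格式的正则校验拼接结果,示意如下(正则只覆盖常见民用车牌格式,阈值 0.3 也是示例假设):

```python
import re

# 常见民用车牌:省份简称 + 发牌机关字母 + 5 位字母/数字(示例正则,未覆盖新能源等格式)
PLATE_RE = re.compile(r'[粤苏浙京沪津渝冀晋蒙辽吉黑皖闽赣鲁豫鄂湘桂琼川贵云藏陕甘青宁新]'
                      r'[A-Z][A-Z0-9]{5}')

# 对整块彩色车牌区域做一次识别,避免固定比例切分切坏字符
results = reader.readtext(cropped_color)
full_text = ''.join(t.replace(' ', '') for _, t, p in results if p > 0.3)

m = PLATE_RE.search(full_text)
if m:
    print("车牌号码:", m.group())
else:
    print("待人工确认的识别结果:", full_text)
```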

import sys import os import tkinter as tk from tkinter import ttk, filedialog, messagebox import cv2 import numpy as np import random from datetime import datetime, timedelta import threading import subprocess import shutil # 设置DPI感知,确保在高分辨率屏幕上显示正常 try: from ctypes import windll windll.shcore.SetProcessDpiAwareness(1) except: pass # 尝试导入可选依赖 MOVIEPY_AVAILABLE = False PSUTIL_AVAILABLE = False # 处理打包后的导入问题 if getattr(sys, 'frozen', False): # 运行在打包环境中 base_path = sys._MEIPASS # 添加可能的包路径 for package in ['moviepy', 'imageio', 'imageio_ffmpeg', 'psutil']: package_path = os.path.join(base_path, package) if os.path.exists(package_path): sys.path.insert(0, package_path) else: # 正常运行时 base_path = os.path.dirname(__file__) try: from moviepy.editor import VideoFileClip, AudioFileClip MOVIEPY_AVAILABLE = True except ImportError as e: print(f"MoviePy import error: {e}") try: import psutil PSUTIL_AVAILABLE = True except ImportError: pass # 全局变量 stop_processing = False # 定义核心处理函数 def add_invisible_overlay(frame, strength): """核心功能:添加全透明扰动层(对抗哈希检测)""" # 将强度从0-100映射到更合理的扰动范围 (1-5) overlay_strength = strength / 100.0 * 4 + 1 # 1 to 5 # 1. 创建一个和帧大小一样的随机噪声图像 noise = np.random.randn(*frame.shape).astype(np.float32) * overlay_strength # 2. 将噪声加到原帧上 new_frame = frame.astype(np.float32) + noise # 3. 确保像素值在0-255之间 new_frame = np.clip(new_frame, 0, 255).astype(np.uint8) return new_frame def resize_with_padding(frame, target_width=720, target_height=1560): """将帧调整为目标分辨率,保持宽高比,不足部分用黑色填充""" # 获取原始尺寸 h, w = frame.shape[:2] # 计算缩放比例 scale = target_width / w new_h = int(h * scale) # 如果缩放后的高度超过目标高度,则按高度缩放 if new_h > target_height: scale = target_height / h new_w = int(w * scale) resized = cv2.resize(frame, (new_w, target_height)) else: resized = cv2.resize(frame, (target_width, new_h)) # 创建目标画布(黑色) canvas = np.zeros((target_height, target_width, 3), dtype=np.uint8) # 计算放置位置(居中) y_offset = (target_height - resized.shape[0]) // 2 x_offset = (target_width - resized.shape[1]) // 2 # 将缩放后的图像放到画布上 canvas[y_offset:y_offset+resized.shape[0], x_offset:x_offset+resized.shape[1]] = resized # 在黑色区域添加不可见的随机噪声(亮度值0-5) black_areas = np.where(canvas == 0) if len(black_areas[0]) > 0: # 只对黑色区域添加噪声 noise = np.random.randint(0, 6, size=black_areas[0].shape, dtype=np.uint8) for i in range(3): # 对RGB三个通道 canvas[black_areas[0], black_areas[1], i] = noise return canvas def generate_random_metadata(): """生成随机的元数据""" # 随机设备型号列表 devices = [ "iPhone15,3", "iPhone15,2", "iPhone14,2", "iPhone14,1", "SM-G998B", "SM-G996B", "SM-G781B", "Mi 11 Ultra", "Mi 10", "Redmi Note 10 Pro" ] # 随机应用程序列表 apps = [ "Wxmm_9020230808", "Wxmm_9020230701", "Wxmm_9020230605", "LemonCamera_5.2.1", "CapCut_9.5.0", "VivaVideo_9.15.5" ] # 随机生成创建时间(最近30天内) now = datetime.now() random_days = random.randint(0, 30) random_hours = random.randint(0, 23) random_minutes = random.randint(0, 59) random_seconds = random.randint(0, 59) creation_time = now - timedelta(days=random_days, hours=random_hours, minutes=random_minutes, seconds=random_seconds) return { "device_model": random.choice(devices), "writing_application": random.choice(apps), "creation_time": creation_time.strftime("%Y-%m-%dT%H:%M:%S"), "title": f"Video_{random.randint(10000, 99999)}", "artist": "Mobile User", "compatible_brands": "isom,iso2,avc1,mp41", "major_brand": "isom" } def corrupt_metadata(input_path, output_path, custom_metadata=None, gpu_type="cpu"): """使用FFmpeg深度修改元数据""" if custom_metadata is None: custom_metadata = generate_random_metadata() # 根据GPU类型设置编码器 if gpu_type == "nvidia": video_encoder = "h264_nvenc" 
elif gpu_type == "amd": video_encoder = "h264_amf" elif gpu_type == "intel": video_encoder = "h264_qsv" else: video_encoder = "libx264" # 构造FFmpeg命令 command = [ 'ffmpeg', '-i', input_path, '-map_metadata', '-1', # 丢弃所有元数据 '-metadata', f'title={custom_metadata["title"]}', '-metadata', f'artist={custom_metadata["artist"]}', '-metadata', f'creation_time={custom_metadata["creation_time"]}', '-metadata', f'compatible_brands={custom_metadata["compatible_brands"]}', '-metadata', f'major_brand={custom_metadata["major_brand"]}', '-metadata', f'handler_name={custom_metadata["writing_application"]}', '-movflags', 'use_metadata_tags', '-c:v', video_encoder, '-preset', 'medium', '-crf', str(random.randint(18, 23)), # 随机CRF值 '-profile:v', 'high', '-level', '4.0', '-pix_fmt', 'yuv420p', '-c:a', 'aac', '-b:a', '96k', '-ar', '44100', '-y', output_path ] # 添加设备特定元数据 if 'iPhone' in custom_metadata["device_model"]: command.extend([ '-metadata', f'com.apple.quicktime.model={custom_metadata["device_model"]}', '-metadata', f'com.apple.quicktime.software=16.0' ]) try: subprocess.run(command, check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) return True except subprocess.CalledProcessError as e: print(f"FFmpeg error: {e}") return False except FileNotFoundError: messagebox.showerror('致命错误', '错误:未找到FFmpeg!\n请确保ffmpeg.exe在程序同一目录下。') return False def create_background_video(output_path, duration, width=720, height=1560, fps=30): """创建带有扰动的黑色背景视频""" # 使用FFmpeg创建带有随机噪声的黑色背景视频 cmd = [ 'ffmpeg', '-f', 'lavfi', '-i', f'nullsrc=s={width}x{height}:d={duration}:r={fps}', '-vf', 'noise=alls=20:allf=t', '-c:v', 'libx264', '-pix_fmt', 'yuv420p', '-y', output_path ] try: subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) return True except subprocess.CalledProcessError as e: print(f"创建背景视频失败: {e}") return False def detect_gpu(): """检测可用的GPU类型""" try: # 尝试使用nvidia-smi检测NVIDIA GPU try: result = subprocess.run(['nvidia-smi'], capture_output=True, text=True, check=True) if result.returncode == 0: return "nvidia" except (subprocess.CalledProcessError, FileNotFoundError): pass # 尝试使用Windows Management Instrumentation检测AMD和Intel GPU try: import wmi w = wmi.WMI() for gpu in w.Win32_VideoController(): name = gpu.Name.lower() if "amd" in name or "radeon" in name: return "amd" elif "intel" in name: return "intel" except ImportError: pass # 尝试使用dxdiag检测GPU try: result = subprocess.run(['dxdiag', '/t', 'dxdiag.txt'], capture_output=True, text=True, check=True) if result.returncode == 0: with open('dxdiag.txt', 'r', encoding='utf-16') as f: content = f.read().lower() if "nvidia" in content: return "nvidia" elif "amd" in content or "radeon" in content: return "amd" elif "intel" in content: return "intel" except (subprocess.CalledProcessError, FileNotFoundError): pass except Exception as e: print(f"GPU检测失败: {e}") return "cpu" def add_audio_watermark(audio_clip, strength): """给音频添加水印(简化版本)""" # 在实际应用中,这里应该实现音频扰动算法 # 这里只是一个示例,返回原始音频 return audio_clip def process_video(): """主处理流程控制器""" global stop_processing if not MOVIEPY_AVAILABLE: messagebox.showerror("错误", "MoviePy库未安装!请运行: pip install moviepy") return False input_path = input_entry.get() output_path = output_entry.get() if not input_path or not output_path: messagebox.showerror('错误', '请先选择输入和输出文件!') return False # 解析用户选择的强度和功能 strength = strength_scale.get() use_video_perturb = video_var.get() use_audio_perturb = audio_var.get() use_metadata_corrupt = metadata_var.get() use_gan = gan_var.get() use_resize = resize_var.get() use_pip = 
pip_var.get() pip_opacity = pip_opacity_scale.get() if use_pip else 2 num_pip_videos = int(pip_num_combo.get()) if use_pip else 0 gpu_type = gpu_combo.get() # 临时文件路径 temp_video_path = "temp_processed.mp4" temp_audio_path = "temp_audio.aac" pip_temp_path = "temp_pip.mp4" if use_pip else None background_path = "temp_background.mp4" final_output_path = output_path # 获取原始视频时长和帧率 try: original_clip = VideoFileClip(input_path) original_duration = original_clip.duration original_fps = original_clip.fps original_clip.close() except Exception as e: messagebox.showerror('错误', f'无法打开视频文件: {str(e)}') return False try: # 第一步:创建背景视频 if use_resize: if not create_background_video(background_path, original_duration, 720, 1560, original_fps): messagebox.showerror('错误', '创建背景视频失败!') return False # 第二步:处理视频和音频 if use_video_perturb or use_resize: # 使用OpenCV打开视频 cap = cv2.VideoCapture(input_path) # 获取视频属性 fps = int(cap.get(cv2.CAP_PROP_FPS)) width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) # 设置目标分辨率 target_width, target_height = 720, 1560 # 创建VideoWriter来写入处理后的视频 fourcc = cv2.VideoWriter_fourcc(*'mp4v') out = cv2.VideoWriter(temp_video_path, fourcc, fps, (target_width, target_height)) processed_frames = 0 # 主循环:逐帧处理 while True: ret, frame = cap.read() if not ret: break # 读到结尾就退出 # 如果勾选了"调整分辨率",先调整分辨率 if use_resize: frame = resize_with_padding(frame, target_width, target_height) # 如果勾选了"视频扰动",就对当前帧进行处理 if use_video_perturb: frame = add_invisible_overlay(frame, strength) # 写入处理后的帧 out.write(frame) processed_frames += 1 # 更新进度条 progress_var.set(processed_frames / total_frames * 100) root.update_idletasks() # 检查是否取消 if stop_processing: break # 释放资源 cap.release() out.release() if stop_processing: messagebox.showinfo('信息', '处理已取消!') return False # 第三步:处理音频 if use_audio_perturb: # 从原视频提取音频 original_video = VideoFileClip(input_path) original_audio = original_video.audio if original_audio is not None: # 给音频添加水印 processed_audio = add_audio_watermark(original_audio, strength) # 保存处理后的音频到临时文件 processed_audio.write_audiofile(temp_audio_path, logger=None) processed_audio.close() original_video.close() else: # 如果没有勾选音频处理,直接提取原音频 original_video = VideoFileClip(input_path) original_audio = original_video.audio if original_audio is not None: original_audio.write_audiofile(temp_audio_path, logger=None) original_video.close() # 第四步:合并视频和音频 # 如果处理了视频或调整了分辨率,使用处理后的视频,否则使用原视频 video_source = temp_video_path if (use_video_perturb or use_resize) else input_path # 如果有音频文件,合并音频 if os.path.exists(temp_audio_path): # 使用FFmpeg合并音视频 merge_cmd = [ 'ffmpeg', '-i', video_source, '-i', temp_audio_path, '-c:v', 'copy', '-c:a', 'aac', '-map', '0:v:0', '-map', '1:a:0', '-shortest', '-y', final_output_path ] subprocess.run(merge_cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) else: # 如果没有音频,直接复制视频 shutil.copy2(video_source, final_output_path) # 第五步:处理元数据(无论是否处理视频音频,只要勾选了就执行) if use_metadata_corrupt: custom_meta = generate_random_metadata() temp_final_path = final_output_path + "_temp.mp4" success = corrupt_metadata(final_output_path, temp_final_path, custom_meta, gpu_type) if success: # 用处理完元数据的文件替换最终文件 if os.path.exists(final_output_path): os.remove(final_output_path) os.rename(temp_final_path, final_output_path) else: return False # 第六步:GAN处理(预留功能) if use_gan: messagebox.showinfo('信息', 'GAN功能是预留选项,在当前版本中未实际生效。') messagebox.showinfo('完成', f'处理完成!\n输出文件已保存至: {final_output_path}') return True except Exception as e: messagebox.showerror('错误', 
f'处理过程中出现错误: {str(e)}') return False finally: # 清理可能的临时文件 for temp_file in [temp_video_path, temp_audio_path, pip_temp_path, background_path]: if temp_file and os.path.exists(temp_file): try: os.remove(temp_file) except: pass # 重置进度条 progress_var.set(0) # 启用开始按钮 start_button.config(state=tk.NORMAL) stop_processing = False def start_processing(): """开始处理视频""" global stop_processing stop_processing = False start_button.config(state=tk.DISABLED) # 在新线程中处理视频 thread = threading.Thread(target=process_video, daemon=True) thread.start() def stop_processing_func(): """停止处理""" global stop_processing stop_processing = True def browse_input(): """浏览输入文件""" filename = filedialog.askopenfilename( filetypes=[("Video Files", "*.mp4 *.mov *.avi *.mkv"), ("All Files", "*.*")] ) if filename: input_entry.delete(0, tk.END) input_entry.insert(0, filename) def browse_output(): """浏览输出文件""" filename = filedialog.asksaveasfilename( defaultextension=".mp4", filetypes=[("MP4 Files", "*.mp4"), ("All Files", "*.*")] ) if filename: output_entry.delete(0, tk.END) output_entry.insert(0, filename) def toggle_pip_widgets(): """切换画中画相关控件的状态""" state = tk.NORMAL if pip_var.get() else tk.DISABLED pip_num_combo.config(state=state) pip_opacity_scale.config(state=state) def show_gan_info(): """显示GAN功能信息""" if gan_var.get(): messagebox.showinfo('功能说明', '请注意:GAN功能是高级预留功能。\n在当前版本中,它会被一个高级扰动算法模拟,但并非真正的GAN。\n效果依然强大。') # 检测可用GPU detected_gpu = detect_gpu() gpu_options = ["自动检测", "cpu", "nvidia", "amd", "intel"] default_gpu = detected_gpu if detected_gpu != "cpu" else "自动检测" # 创建主窗口 root = tk.Tk() root.title("视频号专版防检测处理工具 v3.0") root.geometry("800x600") # 创建变量 video_var = tk.BooleanVar(value=True) audio_var = tk.BooleanVar(value=True) resize_var = tk.BooleanVar(value=True) metadata_var = tk.BooleanVar(value=True) pip_var = tk.BooleanVar(value=False) gan_var = tk.BooleanVar(value=False) progress_var = tk.DoubleVar(value=0) # 创建界面组件 main_frame = ttk.Frame(root, padding="10") main_frame.grid(row=0, column=0, sticky=(tk.W, tk.E, tk.N, tk.S)) # 输入文件选择 ttk.Label(main_frame, text="输入视频文件:").grid(row=0, column=0, sticky=tk.W, pady=5) input_entry = ttk.Entry(main_frame, width=50) input_entry.grid(row=0, column=1, padx=5, pady=5) ttk.Button(main_frame, text="浏览", command=browse_input).grid(row=0, column=2, padx=5, pady=5) # 输出文件选择 ttk.Label(main_frame, text="输出视频文件:").grid(row=1, column=0, sticky=tk.W, pady=5) output_entry = ttk.Entry(main_frame, width=50) output_entry.grid(row=1, column=1, padx=5, pady=5) ttk.Button(main_frame, text="浏览", command=browse_output).grid(row=1, column=2, padx=5, pady=5) # 分隔线 ttk.Separator(main_frame, orient=tk.HORIZONTAL).grid(row=2, column=0, columnspan=3, sticky=(tk.W, tk.E), pady=10) # 处理强度 ttk.Label(main_frame, text="处理强度:").grid(row=3, column=0, sticky=tk.W, pady=5) strength_scale = tk.Scale(main_frame, from_=1, to=100, orient=tk.HORIZONTAL, length=400) strength_scale.set(50) strength_scale.grid(row=3, column=1, columnspan=2, sticky=(tk.W, tk.E), padx=5, pady=5) # 分隔线 ttk.Separator(main_frame, orient=tk.HORIZONTAL).grid(row=4, column=0, columnspan=3, sticky=(tk.W, tk.E), pady=10) # 处理选项 ttk.Checkbutton(main_frame, text="时空域微扰动 (抗视频指纹 - 核心推荐)", variable=video_var).grid(row=5, column=0, columnspan=3, sticky=tk.W, pady=2) ttk.Checkbutton(main_frame, text="音频指纹污染 (抗音频指纹 - 核心推荐)", variable=audio_var).grid(row=6, column=0, columnspan=3, sticky=tk.W, pady=2) ttk.Checkbutton(main_frame, text="标准化分辨率 (720x1560) + 黑边扰动", variable=resize_var).grid(row=7, column=0, columnspan=3, sticky=tk.W, pady=2) 
ttk.Checkbutton(main_frame, text="元数据彻底清理与伪造", variable=metadata_var).grid(row=8, column=0, columnspan=3, sticky=tk.W, pady=2) # 画中画选项 pip_frame = ttk.Frame(main_frame) pip_frame.grid(row=9, column=0, columnspan=3, sticky=tk.W, pady=2) ttk.Checkbutton(pip_frame, text="画中画干扰 (从P文件夹随机选择视频)", variable=pip_var, command=toggle_pip_widgets).grid(row=0, column=0, sticky=tk.W) pip_options_frame = ttk.Frame(main_frame) pip_options_frame.grid(row=10, column=0, columnspan=3, sticky=tk.W, pady=2) ttk.Label(pip_options_frame, text="画中画数量:").grid(row=0, column=0, sticky=tk.W, padx=5) pip_num_combo = ttk.Combobox(pip_options_frame, values=[1, 2, 3, 4, 5], state="readonly", width=5) pip_num_combo.set(3) pip_num_combo.grid(row=0, column=1, padx=5) ttk.Label(pip_options_frame, text="透明度 (1-100):").grid(row=0, column=2, sticky=tk.W, padx=5) pip_opacity_scale = tk.Scale(pip_options_frame, from_=1, to=100, orient=tk.HORIZONTAL, length=150) pip_opacity_scale.set(2) pip_opacity_scale.grid(row=0, column=3, padx=5) # 禁用画中画选项 pip_num_combo.config(state=tk.DISABLED) pip_opacity_scale.config(state=tk.DISABLED) # GAN选项 ttk.Checkbutton(main_frame, text="动态GAN对抗性扰动 (预留功能)", variable=gan_var, command=show_gan_info).grid(row=11, column=0, columnspan=3, sticky=tk.W, pady=2) # GPU加速选项 gpu_frame = ttk.Frame(main_frame) gpu_frame.grid(row=12, column=0, columnspan=3, sticky=tk.W, pady=5) ttk.Label(gpu_frame, text="GPU加速:").grid(row=0, column=0, sticky=tk.W) gpu_combo = ttk.Combobox(gpu_frame, values=gpu_options, state="readonly", width=10) gpu_combo.set(default_gpu) gpu_combo.grid(row=0, column=1, padx=5) # 分隔线 ttk.Separator(main_frame, orient=tk.HORIZONTAL).grid(row=13, column=0, columnspan=3, sticky=(tk.W, tk.E), pady=10) # 进度条 ttk.Label(main_frame, text="进度:").grid(row=14, column=0, sticky=tk.W, pady=5) progress_bar = ttk.Progressbar(main_frame, variable=progress_var, maximum=100, length=400) progress_bar.grid(row=14, column=1, columnspan=2, sticky=(tk.W, tk.E), padx=5, pady=5) # 按钮 button_frame = ttk.Frame(main_frame) button_frame.grid(row=15, column=0, columnspan=3, pady=10) start_button = ttk.Button(button_frame, text="开始处理", command=start_processing) start_button.pack(side=tk.LEFT, padx=5) ttk.Button(button_frame, text="停止", command=stop_processing_func).pack(side=tk.LEFT, padx=5) ttk.Button(button_frame, text="退出", command=root.quit).pack(side=tk.LEFT, padx=5) # 配置网格权重 root.columnconfigure(0, weight=1) root.rowconfigure(0, weight=1) main_frame.columnconfigure(1, weight=1) # 运行主循环 root.mainloop() 以上代码在打包时出现问题,请使用Tkinter替代PySimpleGUI代码,通过指定路径解决 MoviePy 导入问题,C:\Users\Administrator\AppData\Local\Programs\Python\Python312\Lib\site-packages\moviepy_init_.py 这是moviepy路径,给我一个完整的打包操作方案,详细列出所需的所有操作
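针对最后的打包问题:这份代码本身已经全部用 Tkinter 实现(没有残留 PySimpleGUI 调用),顶部也写好了针对 sys._MEIPASS 的路径兜底,所以剩下的关键是让 PyInstaller 把 moviepy 及其依赖完整收集进包里。下面是一个示意性的打包流程(假设脚本名为 main.py;--collect-all 等选项以所装 PyInstaller 版本的文档为准):

```cmd
rem 1. 在打包用的 Python 3.12 环境里装齐依赖
pip install pyinstaller moviepy imageio imageio-ffmpeg psutil opencv-python numpy

rem 2. 打包:--collect-all 会递归收集 moviepy/imageio 的代码和数据文件
pyinstaller --noconfirm --onedir --windowed ^
    --collect-all moviepy ^
    --collect-all imageio ^
    --collect-all imageio_ffmpeg ^
    --hidden-import psutil ^
    main.py

rem 3. 把 ffmpeg.exe 放到可执行文件同目录(corrupt_metadata 直接以 'ffmpeg' 调用)
copy ffmpeg.exe dist\main\
```

如果仍提示找不到 moviepy,可以再加 --paths "C:\Users\Administrator\AppData\Local\Programs\Python\Python312\Lib\site-packages" 显式指定题目给出的 site-packages 搜索路径;打包成功后,代码开头那段把 sys._MEIPASS 子目录塞进 sys.path 的修补通常就不再需要了。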
