# ComfyUI Error Report

## Error Details
- **Node ID:** 80
- **Node Type:** GroundingDinoModelLoader (segment anything)
- **Exception Type:** NameError
- **Exception Message:** name 'groundingdino' is not defined

Posted: 2025-08-27 19:12:11 · AIGC · Views: 38
For the `NameError: 'groundingdino' is not defined` error raised by ComfyUI's GroundingDinoModelLoader node, here is a step-by-step solution:

### Cause Analysis
1. **Module not installed correctly**: the GroundingDINO dependency is missing or an incompatible version is installed
2. **Path resolution problem**: ComfyUI cannot locate the groundingdino module
3. **Environment conflict**: the Python environment ComfyUI runs in is not the one the dependency was installed into
4. **Code compatibility issue**: part of the code trips a CUDA-acceleration-related conditional [^3]

### Solution

#### 1. Install the dedicated dependency version
```bash
# Officially recommended install command (requires a CUDA environment)
pip install git+https://github.com/IDEA-Research/GroundingDINO.git
# Or pin a specific version
pip install groundingdino-py==0.4.0
```

#### 2. Add the module to the system path
In ComfyUI's `__init__.py` or the node's startup script, add:
```python
import sys
sys.path.append("/absolute/path/to/GroundingDINO")
from groundingdino.util import get_grounding_output
```

#### 3. Create a symbolic link (Linux/Mac)
```bash
# Run from the ComfyUI root directory
ln -s /path/to/GroundingDINO/groundingdino ./custom_nodes/
```

#### 4. Verify dependency completeness
```bash
pip install -r requirements.txt
# Make sure it includes:
# torch>=1.9.0
# transformers>=4.25.1
# pycocotools>=2.0.6
```

#### 5. Patch the conditional logic (see [^3])
Locate `ms_deform_attn.py` and change:
```python
if torch.cuda.is_available() and value.is_cuda and 0:
```
to:
```python
if False:  # force-disable the _C acceleration path
```

### Verification
1. Restart the ComfyUI service
2. Test in an interactive Python session:
```python
>>> import groundingdino
>>> print(groundingdino.__version__)
'0.4.0'  # expected output
```
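If the import still fails after these steps, a small diagnostic helps tell a missing package apart from a path problem. The helper below is illustrative (not part of GroundingDINO); pass the directory you cloned the repository into as `extra_path`:

```python
import importlib
import sys

def diagnose_import(module_name, extra_path=None):
    """Try to import a module, optionally after appending an extra directory
    to sys.path, and report the outcome instead of raising."""
    if extra_path and extra_path not in sys.path:
        sys.path.append(extra_path)
    try:
        mod = importlib.import_module(module_name)
        return "ok: found at " + getattr(mod, "__file__", "<built-in>")
    except ImportError as e:
        return "failed: " + str(e)

print(diagnose_import("json"))           # stdlib module, always importable
print(diagnose_import("groundingdino"))  # fails unless the package is installed
```

If the second call only succeeds with `extra_path` set, the package is present on disk but not on `sys.path`, which points to the symlink or `sys.path.append` fixes above rather than reinstallation.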

Related posts

### Complete implementation: an AI-driven game performance optimization system

Below, the features discussed earlier are combined into one Python program with hardware monitoring, AI prediction, dynamic adjustment, and system optimization modules:

```python
"""
AI-Driven Game Performance Optimizer v1.0
Target:   Windows 10 Enterprise + NVIDIA GTX 950M + i5-6300HQ
Features: real-time hardware monitoring + AI-driven tuning + thermal control
"""
import psutil
import pynvml
import lightgbm as lgb
import time
import subprocess
import os

# ======================
# Configuration
# ======================
SAFE_TEMP = 75                    # GPU safe temperature threshold (°C)
MAX_TEMP = 90                     # GPU critical temperature (°C)
VRAM_LIMIT = 1.8                  # VRAM warning threshold (GB)
MODEL_PATH = 'gpu_optimizer.txt'  # AI model path

# ======================
# Hardware monitoring
# ======================
def get_hardware_status():
    """Read the current hardware status."""
    status = {}
    # CPU status (psutil.sensors_temperatures is Linux-only; guard against
    # a missing attribute or sensor instead of crashing)
    temps = getattr(psutil, "sensors_temperatures", lambda: {})()
    core = temps.get('coretemp')
    status['cpu_temp'] = core[0].current if core else None
    status['cpu_usage'] = psutil.cpu_percent()
    # GPU status
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        status['gpu_temp'] = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        mem_info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        status['vram_usage'] = mem_info.used / (1024 ** 3)  # convert to GB
        status['gpu_usage'] = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu
    finally:
        pynvml.nvmlShutdown()
    return status

# ======================
# AI prediction engine
# ======================
class AIPredictor:
    def __init__(self, model_path):
        self.model = lgb.Booster(model_file=model_path)

    def predict_optimal_settings(self, hardware_status, current_fps):
        """
        Predict optimal settings.
        Input:  [CPU temp, GPU temp, VRAM usage, current FPS]
        Output: [target FPS, max power (W), texture quality (0-1)]
        """
        input_data = [
            hardware_status['cpu_temp'],
            hardware_status['gpu_temp'],
            hardware_status['vram_usage'],
            current_fps
        ]
        return self.model.predict([input_data])[0]

# ======================
# Dynamic adjustment
# ======================
def set_power_limit(power_watt):
    """Set the GPU power cap (stub)."""
    # A real implementation would call nvidia-smi:
    # subprocess.run(f"nvidia-smi -pl {power_watt}", shell=True)
    print(f"[Tuning] GPU power cap set to {power_watt} W")

def set_texture_quality(quality):
    """Adjust texture quality (stub)."""
    # A real implementation would go through the game's API or config files
    quality_level = "high" if quality > 0.7 else "medium" if quality > 0.4 else "low"
    print(f"[Tuning] Texture quality set to {quality_level}")

def apply_vram_compression(ratio):
    """Apply VRAM compression (stub)."""
    print(f"[VRAM] Applying compression ratio: {ratio:.1%}")

# ======================
# Thermal control
# ======================
def calculate_max_power(base_power, current_temp):
    """
    Compute the safe maximum power:
    $$ P_{\\text{max}} = P_{\\text{base}} \\times \\left(1 - \\frac{T - T_{\\text{safe}}}{T_{\\text{max}} - T_{\\text{safe}}}\\right) $$
    """
    if current_temp <= SAFE_TEMP:
        return base_power
    return base_power * (1 - (current_temp - SAFE_TEMP) / (MAX_TEMP - SAFE_TEMP))

# ======================
# Core optimization loop
# ======================
def run_optimization_loop():
    """Main optimization loop."""
    predictor = AIPredictor(MODEL_PATH)
    while True:
        # Current state
        status = get_hardware_status()
        current_fps = get_current_fps()  # FPS readout must be implemented per game
        # AI-predicted optimal settings
        target_fps, base_power, texture_quality = predictor.predict_optimal_settings(status, current_fps)
        # Temperature-adaptive power cap
        safe_power = calculate_max_power(base_power, status['gpu_temp'])
        # Apply the settings
        set_power_limit(safe_power)
        set_texture_quality(texture_quality)
        # VRAM management
        if status['vram_usage'] > VRAM_LIMIT:
            compress_ratio = 1 - (VRAM_LIMIT / status['vram_usage'])
            apply_vram_compression(compress_ratio)
        # Load balancing
        balance_cpu_gpu_load(status)
        time.sleep(2)  # re-optimize every 2 seconds

# ======================
# Helpers
# ======================
def get_current_fps():
    """Get the current in-game FPS (game-specific implementation required)."""
    return 45  # placeholder value

def balance_cpu_gpu_load(status):
    """CPU-GPU load balancing."""
    if status['cpu_usage'] > 90 and status['gpu_usage'] < 60:
        print("[Balancing] Moving physics computation to the GPU")
    elif status['gpu_usage'] > 90 and status['cpu_usage'] < 70:
        print("[Balancing] Moving part of the shader work to the CPU")

def enable_gaming_mode():
    """Enable the gaming optimization mode."""
    # Switch to the high-performance power plan
    subprocess.run("powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c", shell=True)
    # Lower background process priority
    subprocess.run('wmic process where name="explorer.exe" set priority "below normal"', shell=True)
    # Raise the current process priority
    os.system(f"wmic process where processid={os.getpid()} set priority high")

# ======================
# Entry point
# ======================
if __name__ == "__main__":
    print("=== Starting the AI game optimizer ===")
    enable_gaming_mode()
    try:
        run_optimization_loop()
    except KeyboardInterrupt:
        print("\n=== Optimization service stopped ===")
```

### Companion batch script: start_optimizer.bat

```batch
@echo off
echo Enabling high-performance mode...
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
timeout /t 1
echo Starting the AI optimization engine...
start /high python game_optimizer.py
echo Lowering background process priority...
wmic process where name="explorer.exe" set priority "below normal"
```
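The linear derating formula in `calculate_max_power` is easy to sanity-check by hand. The sketch below restates just that function, with the same thresholds as the configuration section, and evaluates it at a few sample temperatures:

```python
SAFE_TEMP = 75  # °C, same thresholds as in the configuration section
MAX_TEMP = 90

def calculate_max_power(base_power, current_temp):
    # Full power at or below SAFE_TEMP, derated linearly to zero at MAX_TEMP
    if current_temp <= SAFE_TEMP:
        return base_power
    return base_power * (1 - (current_temp - SAFE_TEMP) / (MAX_TEMP - SAFE_TEMP))

print(calculate_max_power(75, 70))    # below the threshold: full 75 W
print(calculate_max_power(75, 82.5))  # halfway to critical: 37.5 W
print(calculate_max_power(75, 90))    # at the critical temperature: 0.0 W
```

At 82.5 °C the GPU sits halfway between the safe and critical temperatures, so the cap drops to exactly half of the base power; this is the behavior the loop relies on when it passes the AI-predicted `base_power` through the thermal check.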

My requirements: my Python version is 3.9. Task: given an input image and a prompt, return a generated image that satisfies the prompt. Starting from one product photo (e.g. a yellow plastic stool), generate views of that product from other angles (such as a 45° side view), where:

- the product's structure, shape, and details stay exactly the same (it must not turn into a different product);
- the output matches the prompt (e.g. "white background, product-photography style");
- in essence this is img2img + angle change + structure lock.

Technical constraints: I downloaded sd_xl_base_1.0.safetensors, model_index.json, controlnet-canny-sdxl-1.0.safetensors, and controlnet-canny-sdxl-1.0. The code must:

- use the local models only;
- fit a low-VRAM GPU (e.g. 4 GB; enable memory optimizations such as model_cpu_offload);
- avoid version conflicts (pin compatible versions of diffusers, modelscope, etc.).

Expected output: the new-angle image must be the "same product" as the original, with only the viewpoint changed; clean background, sharp details, product-photography style. It must not error out and must meet the requirements. Pay attention to version compatibility and pin exact version numbers. Don't keep making mistakes. Don't give me sample code — I want complete, correct code plus dependency versions. I have the two models downloaded; how do I implement img2img?

The code the AI gave me:

```python
import os

# Force offline mode to avoid network requests
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

import torch
import numpy as np
import cv2
from PIL import Image
from diffusers import (
    StableDiffusionXLImg2ImgPipeline,
    ControlNetModel,
    EulerAncestralDiscreteScheduler
)
from diffusers.utils import load_image
from controlnet_aux import CannyDetector


class ProductViewGenerator:
    def __init__(self,
                 sd_model_path="D:\\PythonCode\\model\\sd_xl_base_1.0.safetensors",
                 controlnet_path="D:\\PythonCode\\model\\controlnet-canny-sdxl-1.0.safetensors",
                 device="cuda" if torch.cuda.is_available() else "cpu"):
        """Initialize the generator: load the models and configure low-VRAM mode."""
        self.device = device
        # Use float16 precision on low-VRAM GPUs
        self.dtype = torch.float16 if device == "cuda" else torch.float32

        # Check that the model files exist
        if not os.path.exists(sd_model_path):
            raise FileNotFoundError(f"SD model file not found: {sd_model_path}; check the path")
        if not os.path.exists(controlnet_path):
            raise FileNotFoundError(f"ControlNet model file not found: {controlnet_path}; check the path")

        try:
            # Load the ControlNet model
            self.controlnet = ControlNetModel.from_single_file(
                controlnet_path,
                torch_dtype=self.dtype
            )
            # Load the base model
            self.pipeline = StableDiffusionXLImg2ImgPipeline.from_single_file(
                sd_model_path,
                controlnet=self.controlnet,
                torch_dtype=self.dtype,
                use_safetensors=True
            )
            # Configure the scheduler
            self.pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(
                self.pipeline.scheduler.config
            )
            # Initialize the Canny edge detector
            self.canny_detector = CannyDetector()
            # Low-VRAM configuration
            self.configure_low_vram()
        except Exception as e:
            raise RuntimeError(f"Model loading failed: {str(e)}")

    def configure_low_vram(self):
        """Configure low-VRAM mode, tuned for a 4 GB GPU."""
        if self.device == "cuda":
            # Model CPU offload — the key low-VRAM optimization
            self.pipeline.enable_model_cpu_offload()
            # Attention slicing to reduce memory use
            self.pipeline.enable_attention_slicing(1)  # explicit slice size
            # Enable xFormers acceleration if it is installed
            try:
                self.pipeline.enable_xformers_memory_efficient_attention()
                print("xFormers enabled; memory use will be further reduced")
            except ImportError:
                print("xFormers not installed; falling back to default attention")
            except Exception as e:
                print(f"Enabling xFormers failed: {str(e)}; falling back to default attention")
            # Sliced VAE decoding
            self.pipeline.enable_vae_slicing()
            # Disable the safety checker to save memory
            self.pipeline.safety_checker = None

    def preprocess_image(self, image, resolution=512):  # 512 recommended for 4 GB GPUs
        """Preprocess the input image: resize and normalize."""
        if isinstance(image, str):
            if not os.path.exists(image):
                raise FileNotFoundError(f"Input image not found: {image}")
            image = load_image(image)
        # Resize while preserving the aspect ratio
        image.thumbnail((resolution, resolution))
        # Convert to RGB
        if image.mode != "RGB":
            image = image.convert("RGB")
        return image

    def generate_canny_image(self, image, low_threshold=100, high_threshold=200):
        """Produce a Canny edge map for ControlNet structure guidance."""
        # Convert to OpenCV format
        img_np = np.array(image)
        img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR)
        # Canny edges
        canny_image = self.canny_detector(
            img_np,
            low_threshold=low_threshold,
            high_threshold=high_threshold
        )
        # Back to a PIL Image
        canny_image = Image.fromarray(canny_image)
        return canny_image

    def generate_view(self, image, prompt,
                      negative_prompt="blurry, low quality, distorted, extra objects, wrong proportions, mutation, text, watermark",
                      angle_description="45-degree side view",
                      strength=0.65,  # lower strength keeps the structure closer to the original
                      guidance_scale=7.5,
                      controlnet_conditioning_scale=0.9,  # stronger structural control
                      num_inference_steps=25,  # fewer steps to save VRAM
                      output_path=None):
        """Generate a new-angle view of the product."""
        # Preprocess the image
        processed_image = self.preprocess_image(image)
        # Canny edge map
        canny_image = self.generate_canny_image(processed_image)
        # Build the full prompt
        full_prompt = f"{prompt}, {angle_description}, white background, product photography, high detail, professional lighting, crisp shadows, no reflections"
        # Generate
        with torch.no_grad():  # disable gradient computation to save memory
            generated_image = self.pipeline(
                prompt=full_prompt,
                negative_prompt=negative_prompt,
                image=processed_image,
                control_image=canny_image,
                strength=strength,
                guidance_scale=guidance_scale,
                controlnet_conditioning_scale=controlnet_conditioning_scale,
                num_inference_steps=num_inference_steps,
                width=processed_image.width,
                height=processed_image.height,
                torch_dtype=self.dtype
            ).images[0]
        # Save if a path was given
        if output_path:
            os.makedirs(os.path.dirname(output_path), exist_ok=True)
            generated_image.save(output_path)
            print(f"Generated image saved to: {output_path}")
        return generated_image

    def generate_multiple_views(self, image, prompt, angles, output_dir="generated_views"):
        """Generate several viewing angles."""
        os.makedirs(output_dir, exist_ok=True)
        generated_images = []
        for i, angle in enumerate(angles):
            output_path = os.path.join(output_dir, f"view_{i + 1}_{angle.replace(' ', '_')}.png")
            img = self.generate_view(
                image=image,
                prompt=prompt,
                angle_description=angle,
                output_path=output_path
            )
            generated_images.append((angle, img))
        return generated_images


# Usage example
if __name__ == "__main__":
    try:
        # Adjust these two paths to your setup
        generator = ProductViewGenerator(
            sd_model_path="D:\\PythonCode\\model\\sd_xl_base_1.0.safetensors",
            controlnet_path="D:\\PythonCode\\model\\controlnet-canny-sdxl-1.0.safetensors"
        )
        print("Models loaded; ready to generate...")

        # Example: different angles of a yellow plastic stool
        input_image_path = "input_stool.jpg"  # replace with your input image path
        product_prompt = "yellow plastic stool, plastic material, four legs, round seat, no armrests"

        # Angles to generate
        angles = [
            "front view",
            "45-degree side view",
            "90-degree side view",
            "back view"
        ]
        # On a 4 GB GPU, start by generating fewer angles

        generated_views = generator.generate_multiple_views(
            image=input_image_path,
            prompt=product_prompt,
            angles=angles
        )
        print(f"Generated {len(generated_views)} views")
    except Exception as e:
        print(f"Error: {str(e)}")
```

Error output:

```
D:\PythonCode\test2_39\Scripts\python.exe D:\PythonCode\test\product_view_generator.py
D:\PythonCode\test2_39\lib\site-packages\controlnet_aux\mediapipe_face\mediapipe_face_common.py:7: UserWarning: The module 'mediapipe' is not installed. The package will have limited functionality. Please install it using the command: pip install 'mediapipe'
  warnings.warn(
D:\PythonCode\test2_39\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
D:\PythonCode\test2_39\lib\site-packages\timm\models\registry.py:4: FutureWarning: Importing from timm.models.registry is deprecated, please import via timm.models
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.models", FutureWarning)
D:\PythonCode\test2_39\lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_5m_224 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_5m_224. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
  return register_model(fn_wrapper)
(the same "Overwriting ... in registry" warning repeats for tiny_vit_11m_224, tiny_vit_21m_224, tiny_vit_21m_384, and tiny_vit_21m_512)
Error: Model loading failed: ('Connection aborted.', TimeoutError(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or the established connection failed because the connected host has failed to respond.', None, 10060, None))
```

Does this look right to you? Is it a code problem or a model problem? Do you have a better solution?
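Two observations, independent of the network timeout itself. First, the offline environment variables only help when every required file is already local: in some diffusers versions, `from_single_file` may still try to fetch pipeline/config JSONs from the Hugging Face Hub, and on a machine without internet access that produces exactly this `TimeoutError(10060)` — an environment/model-layout problem rather than a logic bug. Second, SD/SDXL pipelines expect width and height to be multiples of 8, which `Image.thumbnail` does not guarantee. A small helper illustrating the size-snapping idea (the function name is hypothetical, not a diffusers API):

```python
def sdxl_safe_size(w, h, max_side=512, multiple=8):
    """Scale (w, h) down so the longer side is at most max_side, then round
    each side down to a multiple of 8 as the SD VAE/UNet require."""
    scale = min(max_side / max(w, h), 1.0)  # never upscale
    nw, nh = int(w * scale), int(h * scale)
    return (max(nw - nw % multiple, multiple),
            max(nh - nh % multiple, multiple))

print(sdxl_safe_size(1024, 768))  # (512, 384): already divisible by 8
print(sdxl_safe_size(500, 300))   # (496, 296): snapped down to multiples of 8
```

Passing dimensions computed this way as `width`/`height` avoids shape errors in the VAE, regardless of how the offline-loading issue is resolved.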

```
dmesg | grep -e i915 -e snd_hda -e drm -e amd
[    6.559265] systemd[1]: Starting Load Kernel Module drm...
[    6.704955] systemd[1]: [email protected]: Deactivated successfully.
[    6.705017] systemd[1]: Finished Load Kernel Module drm.
[    7.965413] i915 0000:00:02.0: enabling device (0000 -> 0003)
[    7.968541] i915 0000:00:02.0: Direct firmware load for i915/kbl_dmc_ver1_04.bin failed with error -2
[    7.968544] i915 0000:00:02.0: [drm] Failed to load DMC firmware i915/kbl_dmc_ver1_04.bin. Disabling runtime power management.
[    7.968545] i915 0000:00:02.0: [drm] DMC firmware homepage: https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware
[    8.045478] [drm] radeon kernel modesetting enabled.
[    8.045630] [drm] initializing kernel modesetting (OLAND 0x1002:0x6611 0x1642:0x1869 0x87).
[    8.045741] [drm] Changing default dispclk from 480Mhz to 600Mhz
[    8.045751] [drm] Detected VRAM RAM=2048M, BAR=256M
[    8.045752] [drm] RAM width 64bits DDR
[    8.045756] [drm] radeon: 2048M of VRAM memory ready
[    8.045757] [drm] radeon: 2048M of GTT memory ready.
[    8.045761] [drm] Loading oland Microcode
[    8.045772] [drm:si_init.cold [radeon]] *ERROR* Failed to load firmware!
[    8.045810] [drm] radeon: finishing device.
[    8.053014] [drm] radeon: ttm finalized
[    8.253678] snd_hda_intel 0000:00:1f.3: enabling device (0000 -> 0002)
[    8.253801] snd_hda_intel 0000:01:00.1: enabling device (0000 -> 0002)
[    8.253840] snd_hda_intel 0000:01:00.1: Force to non-snoop mode
[    8.460880] i915 0000:00:02.0: [drm] failed to retrieve link info, disabling eDP
[    8.461092] i915 0000:00:02.0: [drm] [ENCODER:94:DDI B/PHY B] is disabled/in DSI mode with an ungated DDI clock, gate it
[    8.461094] i915 0000:00:02.0: [drm] [ENCODER:105:DDI C/PHY C] is disabled/in DSI mode with an ungated DDI clock, gate it
[    8.461095] i915 0000:00:02.0: [drm] [ENCODER:111:DDI D/PHY D] is disabled/in DSI mode with an ungated DDI clock, gate it
[    8.493388] [drm] Initialized i915 1.6.0 20201103 for 0000:00:02.0 on minor 0
[    8.507890] snd_hda_intel 0000:00:1f.3: bound 0000:00:02.0 (ops i915_audio_component_bind_ops [i915])
[    8.527147] [drm] amdgpu kernel modesetting enabled.
[    8.527241] amdgpu: CRAT table disabled by module option
[    8.527242] amdgpu: Virtual CRAT table created for CPU
[    8.527267] amdgpu: Topology: Add CPU node
[    8.527307] amdgpu 0000:01:00.0: amdgpu: SI support provided by radeon.
[    8.527308] amdgpu 0000:01:00.0: amdgpu: Use radeon.si_support=0 amdgpu.si_support=1 to override.
[   10.950939] i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
[   13.714849] i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
[   16.506338] i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
[   16.560082] snd_hda_codec_realtek hdaudioC0D0: autoconfig for ALC662 rev3: line_outs=1 (0x14/0x0/0x0/0x0/0x0) type:line
[   16.560092] snd_hda_codec_realtek hdaudioC0D0: speaker_outs=0 (0x0/0x0/0x0/0x0/0x0)
[   16.560096] snd_hda_codec_realtek hdaudioC0D0: hp_outs=1 (0x1b/0x0/0x0/0x0/0x0)
[   16.560100] snd_hda_codec_realtek hdaudioC0D0: mono: mono_out=0x0
[   16.560102] snd_hda_codec_realtek hdaudioC0D0: inputs:
[   16.560105] snd_hda_codec_realtek hdaudioC0D0:   Front Mic=0x19
[   16.560108] snd_hda_codec_realtek hdaudioC0D0:   Rear Mic=0x18
[   16.560110] snd_hda_codec_realtek hdaudioC0D0:   Line=0x1a
```

Here is one more piece of information — please answer again with this in mind:

```
sudo dmesg | grep -iE "drm|amdgpu|radeon"
[    3.349202] ACPI: bus type drm_connector registered
[    3.546218] [drm] Initialized simpledrm 1.0.0 20200625 for simple-framebuffer.0 on minor 0
[    3.559403] simple-framebuffer simple-framebuffer.0: [drm] fb0: simpledrmdrmfb frame buffer device
[    7.045601] systemd[1]: Starting Load Kernel Module drm...
[    7.234081] systemd[1]: [email protected]: Deactivated successfully.
[    7.242032] systemd[1]: Finished Load Kernel Module drm.
[    9.785305] [drm] radeon kernel modesetting enabled.
[   10.117072] radeon 0000:01:00.0: vgaarb: deactivate vga console
[   10.123474] radeon 0000:01:00.0: enabling device (0006 -> 0007)
[   10.130276] [drm] initializing kernel modesetting (OLAND 0x1002:0x6611 0x1642:0x1869 0x87).
[   10.141740] [drm] Changing default dispclk from 480Mhz to 600Mhz
[   10.147771] radeon 0000:01:00.0: VRAM: 2048M 0x0000000000000000 - 0x000000007FFFFFFF (2048M used)
[   10.178146] radeon 0000:01:00.0: GTT: 2048M 0x0000000080000000 - 0x00000000FFFFFFFF
[   10.196332] [drm] Detected VRAM RAM=2048M, BAR=256M
[   10.201228] [drm] RAM width 64bits DDR
[   10.205056] [drm] radeon: 2048M of VRAM memory ready
[   10.211379] [drm] radeon: 2048M of GTT memory ready.
[   10.216355] [drm] Loading oland Microcode
[   10.220403] radeon 0000:01:00.0: Direct firmware load for radeon/oland_pfp.bin failed with error -2
[   10.230852] radeon 0000:01:00.0: Direct firmware load for radeon/OLAND_pfp.bin failed with error -2
[   10.248329] si_cp: Failed to load firmware "radeon/OLAND_pfp.bin"
[   10.263826] [drm:si_init [radeon]] *ERROR* Failed to load firmware!
[   10.278118] radeon 0000:01:00.0: Fatal error during GPU init
[   10.307982] [drm] radeon: finishing device.
[   10.319382] [drm] radeon: ttm finalized
[   10.323534] radeon: probe of 0000:01:00.0 failed with error -2
[   10.373041] i915 0000:00:02.0: [drm] VT-d active for gfx access
[   10.379006] i915 0000:00:02.0: [drm] Using Transparent Hugepages
[   10.395884] i915 0000:00:02.0: [drm] Failed to load DMC firmware i915/kbl_dmc_ver1_04.bin. Disabling runtime power management.
[   10.407275] i915 0000:00:02.0: [drm] DMC firmware homepage: https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware
[   10.733789] i915 0000:00:02.0: [drm] [ENCODER:94:DDI A/PHY A] failed to retrieve link info, disabling eDP
[   10.743648] i915 0000:00:02.0: [drm] [ENCODER:94:DDI B/PHY B] is disabled/in DSI mode with an ungated DDI clock, gate it
[   10.754536] i915 0000:00:02.0: [drm] [ENCODER:106:DDI C/PHY C] is disabled/in DSI mode with an ungated DDI clock, gate it
[   10.765517] i915 0000:00:02.0: [drm] [ENCODER:112:DDI D/PHY D] is disabled/in DSI mode with an ungated DDI clock, gate it
[   10.778769] [drm] Initialized i915 1.6.0 20230929 for 0000:00:02.0 on minor 0
[   12.691555] i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
[   13.175008] [drm] amdgpu kernel modesetting enabled.
[   13.180151] amdgpu: Virtual CRAT table created for CPU
[   13.196107] amdgpu: Topology: Add CPU node
[   13.211725] amdgpu 0000:01:00.0: amdgpu: SI support provided by radeon.
[   13.222256] amdgpu 0000:01:00.0: amdgpu: Use radeon.si_support=0 amdgpu.si_support=1 to override.
[   14.578600] i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
[   18.286674] i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
```
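The decisive lines are the two `Direct firmware load ... failed with error -2` messages: -2 is ENOENT, meaning the Oland microcode files are simply absent from /lib/firmware, so the radeon driver aborts GPU initialization ("Fatal error during GPU init"). On Debian/Ubuntu these files normally ship in the `linux-firmware` package. A quick presence check, using the exact paths the kernel reported:

```python
import os

# Paths taken verbatim from the dmesg output above; the kernel tries both
# the lowercase and the uppercase spelling before giving up.
firmware_files = [
    "/lib/firmware/radeon/oland_pfp.bin",
    "/lib/firmware/radeon/OLAND_pfp.bin",
]
for fw in firmware_files:
    state = "present" if os.path.exists(fw) else "missing"
    print(f"{state}: {fw}")
```

If both report "missing", installing or updating the distribution's firmware package (and regenerating the initramfs, since the log shows the failure happens early in boot) should let `si_cp` load the microcode on the next boot.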

```
altstratixii_oct
# altparallel_flash_loader
# altserial_flash_loader
# alt_fault_injection
# sld_virtual_jtag_basic
# altsource_probe
# End time: 10:23:27 on Jun 26,2025, Elapsed time: 0:00:04
# Errors: 0, Warnings: 0
# End time: 10:23:27 on Jun 26,2025, Elapsed time: 0:08:48
# Errors: 0, Warnings: 4
# vsim work.tb
# Start time: 10:23:28 on Jun 26,2025
# Loading work.tb
# Loading work.image
# Loading work.ram_im
# Loading work.altsyncram
# Loading work.altsyncram_body
# Loading work.ALTERA_DEVICE_FAMILIES
# Loading work.ALTERA_MF_MEMORY_INITIALIZATION
# ** Error: (vsim-3063) Port 'wraddr' not found in the connected module (3rd connection).
#    Time: 0 ps  Iteration: 0  Instance: /tb/U0/ram_0 File: ../verilog/image.v Line: 71
# ** Error: (vsim-3063) Port 'rdaddr' not found in the connected module (4th connection).
#    Time: 0 ps  Iteration: 0  Instance: /tb/U0/ram_0 File: ../verilog/image.v Line: 71
# ** Warning: (vsim-3015) [PCDPC] - Port size (8) does not match connection size (1) for port 'q'. The port definition is at: ../verilog/ram_im.v(46).
#    Time: 0 ps  Iteration: 0  Instance: /tb/U0/ram_0 File: ../verilog/image.v Line: 71
# ** Error: (vsim-3063) Port 'wraddr' not found in the connected module (3rd connection).
#    Time: 0 ps  Iteration: 0  Instance: /tb/U0/ram_1 File: ../verilog/image.v Line: 80
# ** Error: (vsim-3063) Port 'rdaddr' not found in the connected module (4th connection).
#    Time: 0 ps  Iteration: 0  Instance: /tb/U0/ram_1 File: ../verilog/image.v Line: 80
# ** Warning: (vsim-3015) [PCDPC] - Port size (8) does not match connection size (1) for port 'q'. The port definition is at: ../verilog/ram_im.v(46).
#    Time: 0 ps  Iteration: 0  Instance: /tb/U0/ram_1 File: ../verilog/image.v Line: 80
# Error loading design
# Error: Error loading design
# Pausing macro execution
# MACRO ./run.tcl PAUSED at line 27
```

```
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-07-19 01:33:34.225
** Platform: Linux
** Python version: 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0]
** Python executable: /home/xiaochun/chun/bin/python
** ComfyUI Path: /home/xiaochun/ComfyUI
** ComfyUI Base Folder Path: /home/xiaochun/ComfyUI
** User directory: /home/xiaochun/ComfyUI/user
** ComfyUI-Manager config path: /home/xiaochun/ComfyUI/user/default/ComfyUI-Manager/config.ini
** Log path: /home/xiaochun/ComfyUI/user/comfyui.log

Prestartup times for custom nodes:
   0.4 seconds: /home/xiaochun/ComfyUI/custom_nodes/ComfyUI-Manager

Checkpoint files will always be loaded safely.
Total VRAM 20464 MB, total RAM 32056 MB
pytorch version: 2.9.0.dev20250716+rocm6.4
AMD arch: gfx1100
ROCm version: (6, 4)
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon RX 7900 XT : hipMallocAsync
Using pytorch attention
Python version: 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0]
ComfyUI version: 0.3.44
ComfyUI frontend version: 1.23.4
[Prompt Server] web root: /home/xiaochun/chun/lib/python3.12/site-packages/comfyui_frontend_package/static
### Loading: ComfyUI-Manager (V3.34)
[ComfyUI-Manager] network_mode: public
### ComfyUI Revision: 3645 [c5de4955] *DETACHED | Released on '2025-07-08'

Import times for custom nodes:
   0.0 seconds: /home/xiaochun/ComfyUI/custom_nodes/websocket_image_save.py
   0.1 seconds: /home/xiaochun/ComfyUI/custom_nodes/ComfyUI-Manager

Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
Starting server

To see the GUI go to: http://127.0.0.1:8188
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
FETCH ComfyRegistry Data: 5/92
(progress continues in steps of 5 up to 75/92)
FETCH ComfyRegistry Data: 75/92
Cannot connect to comfyregistry.
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] Due to a network error, switching to local mode.
=> custom-node-list.json
=> Cannot connect to host raw.githubusercontent.com:443 ssl:default [Network is unreachable]
FETCH DATA from: /home/xiaochun/ComfyUI/custom_nodes/ComfyUI-Manager/custom-node-list.json [DONE]
[ComfyUI-Manager] All startup tasks have been completed.
got prompt
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLOW
WARNING: No VAE weights detected, VAE not initalized.
no CLIP/text encoder weights in checkpoint, the text encoder model will not be loaded.
!!! Exception during processing !!! ERROR: clip input is invalid: None

If the clip is from a checkpoint loader node your checkpoint does not contain a valid clip or text encoder model.
Traceback (most recent call last):
  File "/home/xiaochun/ComfyUI/execution.py", line 361, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/home/xiaochun/ComfyUI/execution.py", line 236, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/home/xiaochun/ComfyUI/execution.py", line 208, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/home/xiaochun/ComfyUI/execution.py", line 197, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/home/xiaochun/ComfyUI/nodes.py", line 67, in encode
    raise RuntimeError("ERROR: clip input is invalid: None\n\nIf the clip is from a checkpoint loader node your checkpoint does not contain a valid clip or text encoder model.")
RuntimeError: ERROR: clip input is invalid: None

If the clip is from a checkpoint loader node your checkpoint does not contain a valid clip or text encoder model.
Prompt executed in 2.95 seconds
got prompt
Using split attention in VAE
Using split attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.float32
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLOW
Requested to load FluxClipModel_
loaded completely 9.5367431640625e+25 4777.53759765625 True
CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
clip missing: ['text_projection.weight']
Requested to load Flux
loaded completely 17727.5506875 11340.311584472656 True
100%|███████████████████████████████████████████████████████████████████████| 20/20 [00:38<00:00,  1.92s/it]
Requested to load AutoencodingEngine
0 models unloaded.
loaded completely 2193.1828125 319.7467155456543 True
Prompt executed in 67.98 seconds
got prompt
loaded partially 8909.300595947265 8909.142578125 0
100%|███████████████████████████████████████████████████████████████████████| 20/20 [00:46<00:00,  2.33s/it]
Requested to load AutoencodingEngine
0 models unloaded.
loaded completely 2428.2046875 319.7467155456543 True
Prompt executed in 67.21 seconds
```

Analyze what is causing the corrupted generation output.
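The first failure in the log is not corruption: the loaded checkpoint is a UNet-only FLUX file ("No VAE weights detected", "no CLIP/text encoder weights in checkpoint"), so the CheckpointLoader's CLIP output is None and the text-encode node raises. The later runs succeed because CLIP and VAE were loaded separately; the `clip missing: ['text_projection.weight']` line is a known benign notice there. A sketch of how one might classify a checkpoint by its state-dict key prefixes — the prefixes below are assumptions based on common ComfyUI checkpoint layouts, not an official API:

```python
def classify_checkpoint(keys):
    """Report which submodels a checkpoint bundles, judging by key prefixes.
    A UNet-only file yields clip=False/vae=False, which in ComfyUI means the
    CLIP and VAE outputs of a CheckpointLoader node will be unusable."""
    has = lambda prefix: any(k.startswith(prefix) for k in keys)
    return {
        "unet": has("model.diffusion_model."),
        "clip": has("cond_stage_model.") or has("text_encoders."),
        "vae": has("first_stage_model.") or has("vae."),
    }

# Example: a UNet-only checkpoint, as in the failing run above
print(classify_checkpoint(["model.diffusion_model.img_in.weight"]))
```

For a real file, the key list could be read with `safetensors.safe_open(...).keys()`; checkpoints that come up clip/vae=False need separate DualCLIPLoader and VAELoader nodes instead of the CheckpointLoader's CLIP/VAE outputs.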

```
time=2025-07-16T19:00:53.467+08:00 level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\lry20\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-07-16T19:00:53.489+08:00 level=INFO source=images.go:476 msg="total blobs: 5"
time=2025-07-16T19:00:53.490+08:00 level=INFO source=images.go:483 msg="total unused blobs removed: 0"
time=2025-07-16T19:00:53.491+08:00 level=INFO source=routes.go:1288 msg="Listening on 127.0.0.1:11434 (version 0.9.6)"
time=2025-07-16T19:00:53.491+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-07-16T19:00:53.491+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-07-16T19:00:53.492+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=16 efficiency=0 threads=32
time=2025-07-16T19:00:53.610+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-785e8a28-d83d-54d1-8736-f71aeb4aa49b library=cuda compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5060 Laptop GPU" overhead="864.0 MiB"
time=2025-07-16T19:00:54.022+08:00 level=WARN source=amd_windows.go:138 msg="amdgpu is not supported (supported types:[gfx1030 gfx1100 gfx1101 gfx1102 gfx1151 gfx906])" gpu_type=gfx1036 gpu=0 library=C:\Users\lry20\AppData\Local\Programs\Ollama\lib\ollama\rocm
time=2025-07-16T19:00:54.026+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-785e8a28-d83d-54d1-8736-f71aeb4aa49b library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5060 Laptop GPU" total="8.0 GiB" available="6.8 GiB"
```
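The final `inference compute` entry is the actionable part: Ollama sees 8.0 GiB of VRAM but only 6.8 GiB available after the reported 864 MiB OS overhead. As a rough rule of thumb (an assumption for back-of-envelope planning, not Ollama's actual scheduler logic), a quantized model fits on the GPU when its file size plus KV-cache headroom stays under the available figure:

```python
def fits_in_vram(model_file_gib: float, available_gib: float,
                 kv_cache_gib: float = 1.0) -> bool:
    """Rough estimate: model weights + KV cache must fit in available VRAM.

    This mirrors the idea behind Ollama's GPU fit check, not its actual
    implementation; the 1 GiB KV-cache default is a guess for a ~4k context.
    """
    return model_file_gib + kv_cache_gib <= available_gib

# With the 6.8 GiB reported above: a ~4.7 GiB Q4 8B model fits,
# but a ~9 GiB Q4 13B model would be split between CPU and GPU.
print(fits_in_vram(4.7, 6.8))  # True
print(fits_in_vram(9.0, 6.8))  # False
```

When the model does not fit, Ollama offloads layers to CPU, which shows up later in the log as a reduced GPU layer count and much slower generation.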

Explain these parameters:

```
optional arguments:
  -h, --help            show this help message and exit
  --host HOST
  --port PORT
  --config-installer    Open config web page, mainly for windows installer (default: False)
  --load-installer-config
                        Load all cmd args from installer config file (default: False)
  --installer-config INSTALLER_CONFIG
                        Config file for windows installer (default: None)
  --model {lama,ldm,zits,mat,fcf,sd1.5,cv2,manga,sd2,paint_by_example,instruct_pix2pix}
  --no-half             Using full precision model. If your generate result is always black or green, use this argument. (sd/paint_by_exmaple) (default: False)
  --cpu-offload         Offloads all models to CPU, significantly reducing vRAM usage. (sd/paint_by_example) (default: False)
  --disable-nsfw        Disable NSFW checker. (sd/paint_by_example) (default: False)
  --sd-cpu-textencoder  Run Stable Diffusion text encoder model on CPU to save GPU memory. (default: False)
  --local-files-only    Use local files only, not connect to Hugging Face server. (sd/paint_by_example) (default: False)
  --enable-xformers     Enable xFormers optimizations. Requires xformers package has been installed. See: https://github.com/facebookresearch/xformers (sd/paint_by_example) (default: False)
  --device {cuda,cpu,mps}
  --gui                 Launch Lama Cleaner as desktop app (default: False)
  --no-gui-auto-close   Prevent backend auto close after the GUI window closed. (default: False)
  --gui-size GUI_SIZE GUI_SIZE
                        Set window size for GUI (default: [1600, 1000])
  --input INPUT         If input is image, it will be loaded by default. If input is directory, you can browse and select image in file manager. (default: None)
  --output-dir OUTPUT_DIR
                        Result images will be saved to output directory automatically without confirmation. (default: None)
  --model-dir MODEL_DIR
                        Model download directory (by setting XDG_CACHE_HOME environment variable), by default model downloaded to ~/.cache (default: /Users/cwq/.cache)
  --disable-model-switch
                        Disable model switch in frontend (default: False)
```
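This is the lama-cleaner CLI help. Putting a few of the common flags together, a typical CPU-only launch can be assembled like this; the flag values below are illustrative examples taken from the help text above, not recommendations:

```python
# Illustrative launch command for lama-cleaner, assembled from the flags
# documented above. Building argv as a list avoids shell-quoting issues
# on both Windows and Unix.
argv = [
    "lama-cleaner",
    "--model", "lama",   # classic LaMa inpainting model, no diffusion weights
    "--device", "cpu",   # or "cuda" / "mps"
    "--port", "8080",
    "--no-half",         # full precision; the help text suggests this when
                         # results are always black or green
]

print(" ".join(argv))

# To actually start the server (requires `pip install lama-cleaner`):
# import subprocess
# subprocess.run(argv, check=True)
```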

# ComfyUI Error Report

## Error Details

- **Node ID:** 42
- **Node Type:** WanVideoSampler
- **Exception Type:** AttributeError
- **Exception Message:** 'WanModel' object has no attribute 'vace_patch_embedding'

## Stack Trace

```
File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 361, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 236, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 208, in _map_node_over_list
    process_inputs(input_dict, i)
File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 197, in process_inputs
    results.append(getattr(obj, func)(**inputs))
File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 3325, in process
    noise_pred, self.cache_state = predict_with_cfg(
File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 2617, in predict_with_cfg
    noise_pred_cond, cache_state_cond = transformer(
File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 1617, in forward
    vace_hints = self.forward_vace(x, data["context"], data["seq_len"], kwargs)
File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 1196, in forward_vace
    c = [self.vace_patch_embedding(u.unsqueeze(0).float()).to(x.dtype) for u in vace_context]
File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\diffusers\models\modeling_utils.py", line 291, in __getattr__
    return super().__getattr__(name)
File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1940, in __getattr__
    raise AttributeError(
```

## System Information

- **ComfyUI Version:** 0.3.43
- **Arguments:** ComfyUI\main.py --windows-standalone-build --listen 0.0.0.0 --enable-cors-header *
- **OS:** nt
- **Python Version:** 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.7.1+cu128

## Devices

- **Name:** cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 25769279488
- **VRAM Free:** 24347803648
- **Torch VRAM Total:** 67108864
- **Torch VRAM Free:** 50200576

## Logs

```
2025-07-24T17:17:54.663419 - INFO: IPAdapter model loaded from F:\ComfyUI_Mie_V6.0\ComfyUI\models\ipadapter\ip-adapter-plus_sd15.safetensors
2025-07-24T17:17:54.663419 - end_vram - start_vram: 2537542306 - 2537542306 = 0
2025-07-24T17:17:54.664419 - #11 [IPAdapterUnifiedLoader]: 0.05s - vram 0b
2025-07-24T17:17:54.924153 - end_vram - start_vram: 2639727650 - 2537542306 = 102185344
2025-07-24T17:17:54.924670 - #10 [IPAdapterAdvanced]: 0.26s - vram 102185344b
```
```
2025-07-24T17:17:54.931880 - end_vram - start_vram: 2733990564 - 2635766434 = 98224130
2025-07-24T17:17:54.931880 - #14 [RescaleCFG]: 0.01s - vram 98224130b
2025-07-24T17:17:54.933882 - Requested to load BaseModel
2025-07-24T17:17:55.233473 - loaded completely 19197.46322517395 1639.406135559082 True
2025-07-24T17:18:23.076833 - 100%|██████████| 30/30 [00:27<00:00,  1.08it/s]
2025-07-24T17:18:23.078833 - end_vram - start_vram: 4711053742 - 2733990562 = 1977063180
2025-07-24T17:18:23.078833 - #3 [KSampler]: 28.15s - vram 1977063180b
2025-07-24T17:18:23.711202 - end_vram - start_vram: 8100460554 - 3605411690 = 4495048864
2025-07-24T17:18:23.711239 - #8 [VAEDecode]: 0.63s - vram 4495048864b
2025-07-24T17:18:23.864142 - end_vram - start_vram: 3605411690 - 3605411690 = 0
2025-07-24T17:18:23.864142 - #15 [SaveImage]: 0.15s - vram 0b
2025-07-24T17:18:23.865143 - comfyui lumi batcher overwrite task done
2025-07-24T17:18:23.866143 - Prompt executed in 31.05 seconds
2025-07-24T17:18:26.480125 - got prompt
2025-07-24T17:18:53.686303 - 100%|██████████| 30/30 [00:27<00:00,  1.11it/s]
2025-07-24T17:18:53.686303 - end_vram - start_vram: 4689507218 - 3605411690 = 1084095528
2025-07-24T17:18:53.687302 - #3 [KSampler]: 27.17s - vram 1084095528b
2025-07-24T17:18:54.232028 - end_vram - start_vram: 8100460554 - 3605411690 = 4495048864
2025-07-24T17:18:54.233027 - #8 [VAEDecode]: 0.54s - vram 4495048864b
2025-07-24T17:18:54.380027 - end_vram - start_vram: 3605411690 - 3605411690 = 0
2025-07-24T17:18:54.380027 - #15 [SaveImage]: 0.15s - vram 0b
2025-07-24T17:18:54.381027 - comfyui lumi batcher overwrite task done
2025-07-24T17:18:54.385027 - Prompt executed in 27.90 seconds
2025-07-24T17:24:25.352704 - got prompt
2025-07-24T17:24:25.445704 - end_vram - start_vram: 3507187562 - 3507187562 = 0
2025-07-24T17:24:25.446704 - #20 [LoraLoader]: 0.05s - vram 0b
2025-07-24T17:24:25.447704 - Requested to load SD1ClipModel
2025-07-24T17:24:25.792290 - loaded completely 18695.929541397094 235.84423828125 True
2025-07-24T17:24:25.800290 - end_vram - start_vram: 3521486046 - 3507187562 = 14298484
2025-07-24T17:24:25.800290 - #7 [CLIPTextEncode]: 0.35s - vram 14298484b
2025-07-24T17:24:25.808290 - end_vram - start_vram: 3519593078 - 3507187562 = 12405516
2025-07-24T17:24:25.809290 - #6 [CLIPTextEncode]: 0.01s - vram 12405516b
2025-07-24T17:24:25.810290 - end_vram - start_vram: 3507187562 - 3507187562 = 0
2025-07-24T17:24:25.810290 - #11 [IPAdapterUnifiedLoader]: 0.00s - vram 0b
2025-07-24T17:24:26.053290 - end_vram - start_vram: 3609372906 - 3507187562 = 102185344
2025-07-24T17:24:26.054290 - #10 [IPAdapterAdvanced]: 0.24s - vram 102185344b
2025-07-24T17:24:26.059290 - end_vram - start_vram: 3703635820 - 3605411690 = 98224130
2025-07-24T17:24:26.060290 - #14 [RescaleCFG]: 0.01s - vram 98224130b
2025-07-24T17:24:26.061290 - Requested to load BaseModel
2025-07-24T17:24:26.644908 - loaded completely 19912.1512134552 1639.406135559082 True
2025-07-24T17:24:53.885378 - 100%|██████████| 30/30 [00:27<00:00,  1.10it/s]
2025-07-24T17:24:53.886378 - end_vram - start_vram: 4689507218 - 3703635818 = 985871400
2025-07-24T17:24:53.886378 - #3 [KSampler]: 27.83s - vram 985871400b
2025-07-24T17:24:54.482797 - end_vram - start_vram: 8100460554 - 3605411690 = 4495048864
2025-07-24T17:24:54.482797 - #8 [VAEDecode]: 0.60s - vram 4495048864b
2025-07-24T17:24:54.640169 - end_vram - start_vram: 3605411690 - 3605411690 = 0
2025-07-24T17:24:54.640169 - #15 [SaveImage]: 0.16s - vram 0b
2025-07-24T17:24:54.641169 - comfyui lumi batcher overwrite task done
2025-07-24T17:24:54.643410 - Prompt executed in 29.28 seconds
2025-07-24T17:25:09.991282 - got prompt
2025-07-24T17:25:10.085282 - end_vram - start_vram: 3507187562 - 3507187562 = 0
2025-07-24T17:25:10.085282 - #20 [LoraLoader]: 0.05s - vram 0b
2025-07-24T17:25:10.086282 - Requested to load SD1ClipModel
2025-07-24T17:25:10.369134 - loaded completely 18695.929541397094 235.84423828125 True
2025-07-24T17:25:10.377134 - end_vram - start_vram: 3521486046 - 3507187562 = 14298484
2025-07-24T17:25:10.377134 - #7 [CLIPTextEncode]: 0.29s - vram 14298484b
2025-07-24T17:25:10.385134 - end_vram - start_vram: 3519593078 - 3507187562 = 12405516
2025-07-24T17:25:10.385134 - #6 [CLIPTextEncode]: 0.01s - vram 12405516b
2025-07-24T17:25:10.386135 - end_vram - start_vram: 3507187562 - 3507187562 = 0
2025-07-24T17:25:10.386135 - #11 [IPAdapterUnifiedLoader]: 0.00s - vram 0b
2025-07-24T17:25:10.631046 - end_vram - start_vram: 3609372906 - 3507187562 = 102185344
2025-07-24T17:25:10.631046 - #10 [IPAdapterAdvanced]: 0.24s - vram 102185344b
2025-07-24T17:25:10.637046 - end_vram - start_vram: 3703635820 - 3605411690 = 98224130
2025-07-24T17:25:10.637046 - #14 [RescaleCFG]: 0.01s - vram 98224130b
2025-07-24T17:25:10.639047 - Requested to load BaseModel
2025-07-24T17:25:11.180881 - loaded completely 19912.1512134552 1639.406135559082 True
2025-07-24T17:25:39.272121 - 100%|██████████| 30/30 [00:28<00:00,  1.07it/s]
2025-07-24T17:25:39.273121 - end_vram - start_vram: 4689507218 - 3703635818 = 985871400
2025-07-24T17:25:39.274121 - #3 [KSampler]: 28.64s - vram 985871400b
2025-07-24T17:25:39.888702 - end_vram - start_vram: 8100460554 - 3605411690 = 4495048864
2025-07-24T17:25:39.888702 - #8 [VAEDecode]: 0.61s - vram 4495048864b
2025-07-24T17:25:40.051703 - end_vram - start_vram: 3605411690 - 3605411690 = 0
2025-07-24T17:25:40.051703 - #15 [SaveImage]: 0.16s - vram 0b
2025-07-24T17:25:40.052703 - comfyui lumi batcher overwrite task done
2025-07-24T17:25:40.060703 - Prompt executed in 30.06 seconds
2025-07-24T17:43:56.476860 - got prompt
2025-07-24T17:43:57.039587 - end_vram - start_vram: 106744016 - 106744016 = 0
2025-07-24T17:43:57.039587 - #7 [WanVideoVAELoader]: 0.48s - vram 0b
2025-07-24T17:43:57.040588 - end_vram - start_vram: 106744016 - 106744016 = 0
2025-07-24T17:43:57.040588 - #12 [WanVideoExperimentalArgs]: 0.00s - vram 0b
2025-07-24T17:43:57.041588 - end_vram - start_vram: 106744016 - 106744016 = 0
2025-07-24T17:43:57.041588 - #10 [WanVideoEnhanceAVideo]: 0.00s - vram 0b
2025-07-24T17:43:57.042588 - end_vram - start_vram: 106744016 - 106744016 = 0
2025-07-24T17:43:57.042588 - #21 [WanVideoVACEModelSelect]: 0.00s - vram 0b
2025-07-24T17:44:00.282807 - Detected model in_channels: 36
2025-07-24T17:44:00.282807 - Model type: fl2v, num_heads: 40, num_layers: 40
2025-07-24T17:44:00.283807 - Model variant detected: i2v_720
2025-07-24T17:44:00.596667 - model_type FLOW
2025-07-24T17:44:00.597701 - Using accelerate to load and assign model weights to device...
```
```
2025-07-24T17:44:00.839187 - Loading transformer parameters to cpu: 100%|██████████| 1304/1304 [00:00<00:00, 5446.33it/s]
2025-07-24T17:44:00.844187 - end_vram - start_vram: 106744016 - 106744016 = 0
2025-07-24T17:44:00.844187 - #22 [WanVideoModelLoader]: 3.80s - vram 0b
2025-07-24T17:44:02.947231 - end_vram - start_vram: 106744016 - 106744016 = 0
2025-07-24T17:44:02.948230 - #6 [LoadWanVideoT5TextEncoder]: 2.10s - vram 0b
2025-07-24T17:44:02.948230 - Moving video model to cpu
2025-07-24T17:45:21.569551 - end_vram - start_vram: 11728619736 - 106744016 = 11621875720
2025-07-24T17:45:21.569558 - #8 [WanVideoTextEncode]: 78.62s - vram 11621875720b
2025-07-24T17:45:21.624655 - end_vram - start_vram: 115132624 - 115132624 = 0
2025-07-24T17:45:21.625215 - #34 [LoadImage]: 0.05s - vram 0b
2025-07-24T17:45:21.626235 - end_vram - start_vram: 115132624 - 115132624 = 0
2025-07-24T17:45:21.626242 - #19 [INTConstant]: 0.00s - vram 0b
2025-07-24T17:45:21.671234 - end_vram - start_vram: 115132624 - 115132624 = 0
2025-07-24T17:45:21.671234 - #24 [ImageResizeKJ]: 0.04s - vram 0b
2025-07-24T17:45:21.738079 - end_vram - start_vram: 115132624 - 115132624 = 0
2025-07-24T17:45:21.739108 - #30 [ImageResizeKJ]: 0.07s - vram 0b
2025-07-24T17:45:21.782786 - end_vram - start_vram: 115132624 - 115132624 = 0
2025-07-24T17:45:21.782786 - #35 [ImageResizeKJ]: 0.04s - vram 0b
2025-07-24T17:45:21.915384 - end_vram - start_vram: 115132624 - 115132624 = 0
2025-07-24T17:45:21.915902 - #29 [WanVideoVACEStartToEndFrame]: 0.13s - vram 0b
2025-07-24T17:45:21.916414 - end_vram - start_vram: 115132624 - 115132624 = 0
2025-07-24T17:45:21.917002 - #40 [GetImageRangeFromBatch]: 0.00s - vram 0b
2025-07-24T17:45:21.981944 - end_vram - start_vram: 115132624 - 115132624 = 0
2025-07-24T17:45:21.981944 - #25 [WanVideoVACEStartToEndFrame]: 0.06s - vram 0b
2025-07-24T17:45:21.995235 - end_vram - start_vram: 115132624 - 115132624 = 0
2025-07-24T17:45:21.995235 - #41 [MaskBatchMulti]: 0.01s - vram 0b
2025-07-24T17:45:22.039203 - end_vram - start_vram: 115132624 - 115132624 = 0
2025-07-24T17:45:22.039203 - #37 [ImageBatchMulti]: 0.04s - vram 0b
2025-07-24T17:45:22.039713 - end_vram - start_vram: 115132624 - 115132624 = 0
2025-07-24T17:45:22.040227 - #26 [GetImageSizeAndCount]: 0.00s - vram 0b
2025-07-24T17:45:22.883828 - input_masks shape
2025-07-24T17:45:22.884341 - torch.Size([81, 496, 896])
2025-07-24T17:45:34.863567 - VAE encoding: 100%|██████████| 9/9 [00:11<00:00,  1.32s/it]
2025-07-24T17:45:46.589102 - VAE encoding: 100%|██████████| 9/9 [00:11<00:00,  1.30s/it]
2025-07-24T17:45:46.600743 - refs shape
2025-07-24T17:45:46.600743 - torch.Size([1, 3, 1, 496, 896])
2025-07-24T17:45:46.805051 - VAE encoding: 100%|██████████| 9/9 [00:00<00:00, 44.38it/s]
2025-07-24T17:45:46.870662 - end_vram - start_vram: 2252473238 - 115132624 = 2137340614
2025-07-24T17:45:46.871171 - #16 [WanVideoVACEEncode]: 24.83s - vram 2137340614b
2025-07-24T17:45:46.873216 - timesteps: tensor([1000.0000, 983.7124, 964.5367, 941.6301, 913.7864, 879.2166, 835.1485, 777.0445, 696.9283, 579.3684, 390.1099, 34.7826], device='cuda:0')
2025-07-24T17:45:46.887141 - Seq len: 38192
2025-07-24T17:49:31.510530 - Sampling 85 frames at 896x496 with 12 steps
2025-07-24T17:49:44.296375 -   0%|          | 0/12 [00:06<?, ?it/s]
2025-07-24T17:49:44.747159 - !!! Exception during processing !!! 'WanModel' object has no attribute 'vace_patch_embedding'
2025-07-24T17:49:44.783076 - Traceback (most recent call last):
  [frames identical to the Stack Trace section above]
AttributeError: 'WanModel' object has no attribute 'vace_patch_embedding'. Did you mean: 'patch_embedding'?
2025-07-24T17:49:44.785980 - end_vram - start_vram: 34183609272 - 173795536 = 34009813736
2025-07-24T17:49:44.785980 - #42 [WanVideoSampler]: 237.91s - vram 34009813736b
2025-07-24T17:49:44.798561 - comfyui lumi batcher overwrite task done
2025-07-24T17:49:44.868694 - Prompt executed in 348.38 seconds
2025-07-24T18:17:32.104770 - [ComfyUI-Manager] The ComfyRegistry cache update is still in progress, so an outdated cache is being used.
2025-07-24T18:17:33.147891 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-07-24T18:17:36.036752 - [DONE]
2025-07-24T18:17:36.120191 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-07-24T18:17:37.292226 - [DONE]
2025-07-24T18:17:37.313785 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extras.json
2025-07-24T18:17:38.420206 - [DONE]
2025-07-24T18:17:38.602146 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-07-24T18:17:39.726647 - [DONE]
2025-07-24T18:21:32.720524 - got prompt
2025-07-24T18:21:36.403235 - timesteps: tensor([1000.0000, 983.7124, 964.5367, 941.6301, 913.7864, 879.2166, 835.1485, 777.0445, 696.9283, 579.3684, 390.1099, 34.7826], device='cuda:0')
2025-07-24T18:21:36.571239 - Seq len: 38192
2025-07-24T18:21:36.578239 - Sampling 85 frames at 896x496 with 12 steps
2025-07-24T18:21:40.522225 -   0%|          | 0/12 [00:03<?, ?it/s]
2025-07-24T18:21:40.567079 - !!! Exception during processing !!! 'WanModel' object has no attribute 'vace_patch_embedding'
2025-07-24T18:21:40.572236 - Traceback (most recent call last):
  [frames identical to the Stack Trace section above]
AttributeError: 'WanModel' object has no attribute 'vace_patch_embedding'. Did you mean: 'patch_embedding'?
2025-07-24T18:21:40.574784 - end_vram - start_vram: 34183609272 - 32966353744 = 1217255528
2025-07-24T18:21:40.575297 - #42 [WanVideoSampler]: 7.68s - vram 1217255528b
2025-07-24T18:21:40.575811 - comfyui lumi batcher overwrite task done
2025-07-24T18:21:40.577866 - Prompt executed in 7.84 seconds
```

## Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

Workflow too large. Please manually upload the workflow from local file system.

## Additional Context

(Please add any additional context or steps to reproduce the error here)

What is the problem, and how can it be fixed?
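The hint in the exception ("Did you mean: 'patch_embedding'?") says the loaded `WanModel` has the ordinary patch embedding but no VACE layers, while the workflow (which wires `WanVideoVACEModelSelect` and `WanVideoVACEEncode` into the sampler) makes the wrapper call `self.vace_patch_embedding`. In other words, the checkpoint most likely is not a VACE variant. The sketch below reproduces the failure mode in plain Python; the class and function names mirror the wrapper for illustration only and are not the actual ComfyUI-WanVideoWrapper code.

```python
# Plain-Python sketch of the failure: a non-VACE checkpoint builds a model
# without the VACE submodule, but the VACE code path still accesses it.
class WanModel:
    def __init__(self, has_vace: bool):
        self.patch_embedding = object()           # always present
        if has_vace:
            self.vace_patch_embedding = object()  # only in VACE checkpoints

def forward_vace(model: WanModel):
    # Defensive check: fail early with an actionable message instead of a
    # deep AttributeError from inside the sampler loop.
    if not hasattr(model, "vace_patch_embedding"):
        raise RuntimeError(
            "This checkpoint has no VACE weights; load a VACE model "
            "or remove the VACE nodes from the workflow."
        )
    return model.vace_patch_embedding

try:
    forward_vace(WanModel(has_vace=False))
except RuntimeError as e:
    print(e)
```

The practical fix follows the same logic: either swap in a Wan checkpoint that actually contains VACE weights, or disconnect the VACE nodes so `forward_vace` is never reached.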

# ComfyUI Error Report ## Error Details - **Node ID:** 4 - **Node Type:** CheckpointLoaderSimple - **Exception Type:** RuntimeError - **Exception Message:** ERROR: Could not detect model type of: C:\Users\shanye\Documents\ComfyUI\models\checkpoints\Instamodel_1 Workflow + LoRA\Instamodel_1.safetensors ## Stack Trace File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 496, in execute output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 315, in get_output_data return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 289, in _async_map_node_over_list await process_inputs(input_dict, i) File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 277, in process_inputs result = f(**inputs) ^^^^^^^^^^^ File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\nodes.py", line 578, in load_checkpoint out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, 
embedding_directory=folder_paths.get_folder_paths("embeddings")) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd.py", line 1039, in load_checkpoint_guess_config raise RuntimeError("ERROR: Could not detect model type of: {}\n{}".format(ckpt_path, model_detection_error_hint(ckpt_path, sd))) ## System Information - **ComfyUI Version:** 0.3.56 - **Arguments:** C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\main.py --user-directory C:\Users\shanye\Documents\ComfyUI\user --input-directory C:\Users\shanye\Documents\ComfyUI\input --output-directory C:\Users\shanye\Documents\ComfyUI\output --front-end-root C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\web_custom_versions\desktop_app --base-directory C:\Users\shanye\Documents\ComfyUI --extra-model-paths-config C:\Users\shanye\AppData\Roaming\ComfyUI\extra_models_config.yaml --log-stdout --listen 127.0.0.1 --port 8000 - **OS:** nt - **Python Version:** 3.12.11 (main, Aug 18 2025, 19:17:54) [MSC v.1944 64 bit (AMD64)] - **Embedded Python:** false - **PyTorch Version:** 2.8.0+cu129 ## Devices - **Name:** cuda:0 NVIDIA GeForce RTX 3060 Laptop GPU : cudaMallocAsync - **Type:** cuda - **VRAM Total:** 6441926656 - **VRAM Free:** 5370806272 - **Torch VRAM Total:** 0 - **Torch VRAM Free:** 0 ## Logs 2025-09-04T11:13:39.610304 - Adding extra search path custom_nodes C:\Users\shanye\Documents\ComfyUI\custom_nodes 2025-09-04T11:13:39.610304 - Adding extra search path download_model_base C:\Users\shanye\Documents\ComfyUI\models 2025-09-04T11:13:39.610304 - Adding extra search path custom_nodes C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes 2025-09-04T11:13:39.610304 - Setting output directory to: 
C:\Users\shanye\Documents\ComfyUI\output
2025-09-04T11:13:39.610304 - Setting input directory to: C:\Users\shanye\Documents\ComfyUI\input
2025-09-04T11:13:39.610304 - Setting user directory to: C:\Users\shanye\Documents\ComfyUI\user
2025-09-04T11:13:40.313409 - [START] Security scan
2025-09-04T11:13:41.675613 - [DONE] Security scan
2025-09-04T11:13:41.775803 - ## ComfyUI-Manager: installing dependencies done.
2025-09-04T11:13:41.775803 - ** ComfyUI startup time: 2025-09-04 11:13:41.775
2025-09-04T11:13:41.775803 - ** Platform: Windows
2025-09-04T11:13:41.775803 - ** Python version: 3.12.11 (main, Aug 18 2025, 19:17:54) [MSC v.1944 64 bit (AMD64)]
2025-09-04T11:13:41.775803 - ** Python executable: C:\Users\shanye\Documents\ComfyUI\.venv\Scripts\python.exe
2025-09-04T11:13:41.775803 - ** ComfyUI Path: C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI
2025-09-04T11:13:41.775803 - ** ComfyUI Base Folder Path: C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI
2025-09-04T11:13:41.775803 - ** User directory: C:\Users\shanye\Documents\ComfyUI\user
2025-09-04T11:13:41.775803 - ** ComfyUI-Manager config path: C:\Users\shanye\Documents\ComfyUI\user\default\ComfyUI-Manager\config.ini
2025-09-04T11:13:41.775803 - ** Log path: C:\Users\shanye\Documents\ComfyUI\user\comfyui.log
2025-09-04T11:13:42.904189 - [ComfyUI-Manager] Skipped fixing the 'comfyui-frontend-package' dependency because the ComfyUI is outdated.
2025-09-04T11:13:42.904189 - Prestartup times for custom nodes:
2025-09-04T11:13:42.904189 -   3.3 seconds: C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\ComfyUI-Manager
2025-09-04T11:13:44.103451 - Checkpoint files will always be loaded safely.
2025-09-04T11:13:44.181039 - Total VRAM 6144 MB, total RAM 16075 MB
2025-09-04T11:13:44.181039 - pytorch version: 2.8.0+cu129
2025-09-04T11:13:44.181039 - Set vram state to: NORMAL_VRAM
2025-09-04T11:13:44.181039 - Device: cuda:0 NVIDIA GeForce RTX 3060 Laptop GPU : cudaMallocAsync
2025-09-04T11:13:45.119456 - Using pytorch attention
2025-09-04T11:13:46.242679 - Python version: 3.12.11 (main, Aug 18 2025, 19:17:54) [MSC v.1944 64 bit (AMD64)]
2025-09-04T11:13:46.242679 - ComfyUI version: 0.3.56
2025-09-04T11:13:46.262300 - [Prompt Server] web root: C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\web_custom_versions\desktop_app
2025-09-04T11:13:47.026643 - ### Loading: ComfyUI-Manager (V3.36)
2025-09-04T11:13:47.026643 - [ComfyUI-Manager] network_mode: public
2025-09-04T11:13:47.036119 - ### ComfyUI Revision: UNKNOWN (The currently installed ComfyUI is not a Git repository)
2025-09-04T11:13:47.052419 - Import times for custom nodes:
2025-09-04T11:13:47.052928 -   0.0 seconds: C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\websocket_image_save.py
2025-09-04T11:13:47.052928 -   0.0 seconds: C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\ComfyUI-Manager
2025-09-04T11:13:47.292879 - Failed to initialize database. Please ensure you have installed the latest requirements. If the error persists, please report this as in future the database will be required: (sqlite3.OperationalError) unable to open database file (Background on this error at: https://sqlalche.me/e/20/e3q8)
2025-09-04T11:13:47.312519 - Starting server
2025-09-04T11:13:47.313524 - To see the GUI go to: http://127.0.0.1:8000
2025-09-04T11:13:47.628776 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-09-04T11:13:47.658220 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-09-04T11:13:47.797495 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-09-04T11:13:48.035699 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-09-04T11:13:48.082459 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-09-04T11:13:48.177134 - comfyui-frontend-package not found in requirements.txt
2025-09-04T11:13:48.447275 - comfyui-frontend-package not found in requirements.txt
2025-09-04T11:13:54.407289 - FETCH ComfyRegistry Data: 5/96
2025-09-04T11:14:01.787097 - FETCH ComfyRegistry Data: 10/96
2025-09-04T11:14:08.713963 - FETCH ComfyRegistry Data: 15/96
2025-09-04T11:14:15.351666 - FETCH ComfyRegistry Data: 20/96
2025-09-04T11:14:22.429376 - FETCH ComfyRegistry Data: 25/96
2025-09-04T11:14:29.356493 - FETCH ComfyRegistry Data: 30/96
2025-09-04T11:14:37.063216 - FETCH ComfyRegistry Data: 35/96
2025-09-04T11:14:44.307279 - FETCH ComfyRegistry Data: 40/96
2025-09-04T11:14:51.189416 - FETCH ComfyRegistry Data: 45/96
2025-09-04T11:14:57.943608 - got prompt
2025-09-04T11:14:57.961871 - Warning, This is not a checkpoint file, trying to load it as a diffusion model only.
2025-09-04T11:14:57.962863 - !!! Exception during processing !!! ERROR: Could not detect model type of: C:\Users\shanye\Documents\ComfyUI\models\checkpoints\Instamodel_1 Workflow + LoRA\Instamodel_1.safetensors
2025-09-04T11:14:57.964864 - Traceback (most recent call last): (identical to the Stack Trace above) RuntimeError: ERROR: Could not detect model type of: C:\Users\shanye\Documents\ComfyUI\models\checkpoints\Instamodel_1 Workflow + LoRA\Instamodel_1.safetensors
2025-09-04T11:14:57.966494 - Prompt executed in 0.02 seconds

## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
{"id":"415d3cd4-ff2f-432f-beed-1c037c7f683a","revision":0,"last_node_id":12,"last_link_id":11,"nodes":[{"id":8,"type":"VAEDecode","pos":[1206.800048828125,139.6000518798828],"size":[210,46],"flags":{},"order":5,"mode":0,"inputs":[{"localized_name":"Latent","name":"samples","type":"LATENT","link":7},{"localized_name":"vae","name":"vae","type":"VAE","link":8}],"outputs":[{"localized_name":"图像","name":"IMAGE","type":"IMAGE","slot_index":0,"links":[11]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":5,"type":"EmptyLatentImage","pos":[546.2001342773438,566.1336669921875],"size":[315,106],"flags":{},"order":0,"mode":0,"inputs":[{"localized_name":"宽度","name":"width","type":"INT","widget":{"name":"width"},"link":null},{"localized_name":"高度","name":"height","type":"INT","widget":{"name":"height"},"link":null},{"localized_name":"批量大小","name":"batch_size","type":"INT","widget":{"name":"batch_size"},"link":null}],"outputs":[{"localized_name":"Latent","name":"LATENT","type":"LATENT","slot_index":0,"links":[2]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"EmptyLatentImage"},"widgets_values":[960,1280,1]},{"id":7,"type":"CLIPTextEncode","pos":[404.3333740234375,331.6666564941406],"size":[425.27801513671875,180.6060791015625],"flags":{},"order":3,"mode":0,"inputs":[{"localized_name":"clip","name":"clip","type":"CLIP","link":5},{"localized_name":"文本","name":"text","type":"STRING","widget":{"name":"text"},"link":null}],"outputs":[{"localized_name":"条件","name":"CONDITIONING","type":"CONDITIONING","slot_index":0,"links":[6]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"CLIPTextEncode"},"widgets_values":["4koma, comic, censored, jpeg artifacts, blurry, lowres, worst quality, low quality, bad anatomy, bad hands, signature, watermark, text, logo, artist name, censored, patreon username, logo, flag, name tag, deformed, crazy eyes, bad eyes, squint, frown, extra fingers, 
fewer fingers, SmoothNegative_Hands-neg"]},{"id":6,"type":"CLIPTextEncode","pos":[399,121.33334350585938],"size":[422.84503173828125,164.31304931640625],"flags":{},"order":2,"mode":0,"inputs":[{"localized_name":"clip","name":"clip","type":"CLIP","link":3},{"localized_name":"文本","name":"text","type":"STRING","widget":{"name":"text"},"link":null}],"outputs":[{"localized_name":"条件","name":"CONDITIONING","type":"CONDITIONING","slot_index":0,"links":[4]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"CLIPTextEncode"},"widgets_values":["girl, graffiti artist, lean body, medium breasts, punk hairstyle, multicolored hair, blonde hair, bright pink hair, white hoodie, neon pink runes on hoodie, form fitting black pants, White sneakers, bubblegum bubble, leaning with her back on a wall, one foot on the wall, one foot on the ground, hands in pocket, punk atmosphere, neon lighting, masterpiece, Wall covered in graffiti, detailed, absurdres, cowboy shot"]},{"id":3,"type":"KSampler","pos":[865.6666870117188,131.99998474121094],"size":[315,262],"flags":{},"order":4,"mode":0,"inputs":[{"localized_name":"模型","name":"model","type":"MODEL","link":1},{"localized_name":"正面条件","name":"positive","type":"CONDITIONING","link":4},{"localized_name":"负面条件","name":"negative","type":"CONDITIONING","link":6},{"localized_name":"Latent图像","name":"latent_image","type":"LATENT","link":2},{"localized_name":"种子","name":"seed","type":"INT","widget":{"name":"seed"},"link":null},{"localized_name":"步数","name":"steps","type":"INT","widget":{"name":"steps"},"link":null},{"localized_name":"cfg","name":"cfg","type":"FLOAT","widget":{"name":"cfg"},"link":null},{"localized_name":"采样器名称","name":"sampler_name","type":"COMBO","widget":{"name":"sampler_name"},"link":null},{"localized_name":"调度器","name":"scheduler","type":"COMBO","widget":{"name":"scheduler"},"link":null},{"localized_name":"降噪","name":"denoise","type":"FLOAT","widget":{"name":"denoise"},"link":null}],"outputs":[{"localized_na
me":"Latent","name":"LATENT","type":"LATENT","slot_index":0,"links":[7]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"KSampler"},"widgets_values":[612215308838764,"randomize",24,7,"euler","beta",1]},{"id":4,"type":"CheckpointLoaderSimple","pos":[5.333343505859375,196.6666717529297],"size":[315,98],"flags":{},"order":1,"mode":0,"inputs":[{"localized_name":"Checkpoint名称","name":"ckpt_name","type":"COMBO","widget":{"name":"ckpt_name"},"link":null}],"outputs":[{"localized_name":"模型","name":"MODEL","type":"MODEL","slot_index":0,"links":[1]},{"localized_name":"CLIP","name":"CLIP","type":"CLIP","slot_index":1,"links":[3,5]},{"localized_name":"VAE","name":"VAE","type":"VAE","slot_index":2,"links":[8]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"CheckpointLoaderSimple","models":[{"name":"v1-5-pruned-emaonly-fp16.safetensors","url":"https://huggingface.co/Comfy-Org/stable-diffusion-v1-5-archive/resolve/main/v1-5-pruned-emaonly-fp16.safetensors?download=true","directory":"checkpoints"}]},"widgets_values":["Instamodel_1 Workflow + LoRA\\Instamodel_1.safetensors"]},{"id":11,"type":"SaveImage","pos":[1452.80029296875,141.66671752929688],"size":[210,270],"flags":{},"order":6,"mode":0,"inputs":[{"localized_name":"图片","name":"images","type":"IMAGE","link":11},{"localized_name":"文件名前缀","name":"filename_prefix","type":"STRING","widget":{"name":"filename_prefix"},"link":null}],"outputs":[],"properties":{"cnr_id":"comfy-core","ver":"0.3.56"},"widgets_values":["ComfyUI"]}],"links":[[1,4,0,3,0,"MODEL"],[2,5,0,3,3,"LATENT"],[3,4,1,6,0,"CLIP"],[4,6,0,3,1,"CONDITIONING"],[5,4,1,7,0,"CLIP"],[6,7,0,3,2,"CONDITIONING"],[7,3,0,8,0,"LATENT"],[8,4,2,8,1,"VAE"],[11,8,0,11,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.9090909090909094,"offset":[28.583954807873738,165.70971363866818]}},"version":0.4} ## Additional Context (Please add any additional context or steps to reproduce the error 
here)
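
For the first report, the log line "Warning, This is not a checkpoint file, trying to load it as a diffusion model only." together with the folder name "Instamodel_1 Workflow + LoRA" suggests that `Instamodel_1.safetensors` is probably not a full Stable Diffusion checkpoint at all — most likely it is the LoRA from that bundle, which CheckpointLoaderSimple cannot load. One way to confirm is to read the tensor names out of the safetensors header. The sketch below uses only the standard library; the key-prefix rules (`lora_` for kohya-style LoRAs, `model.diffusion_model.` for full checkpoints) are common conventions rather than guarantees, and the function names are this sketch's own.

```python
# Diagnostic sketch (stdlib only): a .safetensors file begins with an 8-byte
# little-endian header length followed by a JSON header whose keys are the
# tensor names. Key prefixes give a rough idea of what kind of model it is.
import json
import struct

def read_safetensors_keys(path):
    """Return the tensor names stored in a .safetensors file."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return [k for k in header if k != "__metadata__"]

def classify_keys(keys):
    """Guess the file type from tensor-name prefixes (heuristic only)."""
    if any(k.startswith("lora_") for k in keys):
        return "lora"          # belongs in models/loras, load via LoraLoader
    if any(k.startswith("model.diffusion_model.") for k in keys):
        return "checkpoint"    # a full checkpoint for CheckpointLoaderSimple
    return "unknown"
```

If `classify_keys(read_safetensors_keys(path))` returns `"lora"`, move the file to `models/loras`, select a real base checkpoint in CheckpointLoaderSimple, and attach the LoRA through a LoraLoader node instead.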

# ComfyUI Error Report

## Error Details
- **Node ID:** 5
- **Node Type:** Joy_caption_two
- **Exception Type:** RuntimeError
- **Exception Message:** All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]), device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))]

## Stack Trace
```
File "C:\comfyui2\ComfyUI\execution.py", line 349, in execute
  output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "C:\comfyui2\ComfyUI\execution.py", line 224, in get_output_data
  return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "C:\comfyui2\ComfyUI\execution.py", line 196, in _map_node_over_list
  process_inputs(input_dict, i)
File "C:\comfyui2\ComfyUI\execution.py", line 185, in process_inputs
  results.append(getattr(obj, func)(**inputs))
File "C:\comfyui2\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two\joy_caption_two_node.py", line 407, in generate
  generate_ids = text_model.generate(input_ids, inputs_embeds=input_embeds, attention_mask=attention_mask,
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
  return func(*args, **kwargs)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py", line 2215, in generate
  result = self._sample(
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py", line 3206, in _sample
  outputs = self(**model_inputs, return_dict=True)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
  return forward_call(*args, **kwargs)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 1190, in forward
  outputs = self.model(
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
  return forward_call(*args, **kwargs)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 945, in forward
  layer_outputs = decoder_layer(
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
  return forward_call(*args, **kwargs)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 676, in forward
  hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
  return forward_call(*args, **kwargs)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 559, in forward
  query_states = self.q_proj(hidden_states)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
  return forward_call(*args, **kwargs)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\peft\tuners\lora\bnb.py", line 467, in forward
  result = self.base_layer(x, *args, **kwargs)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
  return forward_call(*args, **kwargs)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\nn\modules.py", line 484, in forward
  return bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state).to(inp_dtype)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd\_functions.py", line 533, in matmul_4bit
  return MatMul4Bit.apply(A, B, out, bias, quant_state)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\function.py", line 575, in apply
  return super().apply(*args, **kwargs)  # type: ignore[misc]
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd\_functions.py", line 462, in forward
  output = torch.nn.functional.linear(A, F.dequantize_4bit(B, quant_state).to(A.dtype).t(), bias)
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 1363, in dequantize_4bit
  is_on_gpu([A, absmax, out])
File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 464, in is_on_gpu
  raise RuntimeError(
```

## System Information
- **ComfyUI Version:** 0.3.33
- **Arguments:** main.py
- **OS:** nt
- **Python Version:** 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.7.0+cu126

## Devices
- **Name:** cuda:0 NVIDIA GeForce GTX 1660 Ti : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 6442123264
- **VRAM Free:** 5379194880
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0

## Logs
2025-05-12T12:33:00.245025 - Adding extra search path checkpoints C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\Stable-diffusion
2025-05-12T12:33:00.245025 - Adding extra search path configs C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\Stable-diffusion
2025-05-12T12:33:00.246022 - Adding extra search path vae C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\VAE
2025-05-12T12:33:00.246022 - Adding extra search path loras C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\Lora
2025-05-12T12:33:00.246022 - Adding extra search path loras C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\LyCORIS
2025-05-12T12:33:00.247019 - Adding extra search path upscale_models C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\ESRGAN
2025-05-12T12:33:00.247019 - Adding extra search path upscale_models C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\RealESRGAN
2025-05-12T12:33:00.247019 - Adding extra search path upscale_models C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\SwinIR
2025-05-12T12:33:00.247019 - Adding extra search path embeddings C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\embeddings
2025-05-12T12:33:00.247019 - Adding extra search path hypernetworks C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\hypernetworks
2025-05-12T12:33:00.248017 - Adding extra search path controlnet
C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\extensions\sd-webui-controlnet
2025-05-12T12:33:00.823554 - [START] Security scan
2025-05-12T12:33:02.367066 - [DONE] Security scan
2025-05-12T12:33:02.643316 - ## ComfyUI-Manager: installing dependencies done.
2025-05-12T12:33:02.643316 - ** ComfyUI startup time: 2025-05-12 12:33:02.643
2025-05-12T12:33:02.644221 - ** Platform: Windows
2025-05-12T12:33:02.645181 - ** Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
2025-05-12T12:33:02.646180 - ** Python executable: C:\Users\张三\AppData\Local\Programs\Python\Python310\python.exe
2025-05-12T12:33:02.646180 - ** ComfyUI Path: C:\comfyui2\ComfyUI
2025-05-12T12:33:02.647178 - ** ComfyUI Base Folder Path: C:\comfyui2\ComfyUI
2025-05-12T12:33:02.648176 - ** User directory: C:\comfyui2\ComfyUI\user
2025-05-12T12:33:02.648176 - ** ComfyUI-Manager config path: C:\comfyui2\ComfyUI\user\default\ComfyUI-Manager\config.ini
2025-05-12T12:33:02.648176 - ** Log path: C:\comfyui2\ComfyUI\user\comfyui.log
2025-05-12T12:33:04.301068 - Prestartup times for custom nodes:
2025-05-12T12:33:04.302066 -   0.0 seconds: C:\comfyui2\ComfyUI\custom_nodes\comfyui-easy-use
2025-05-12T12:33:04.302066 -   4.1 seconds: C:\comfyui2\ComfyUI\custom_nodes\ComfyUI-Manager
2025-05-12T12:33:06.807275 - Checkpoint files will always be loaded safely.
2025-05-12T12:33:07.017820 - Total VRAM 6144 MB, total RAM 16294 MB
2025-05-12T12:33:07.018365 - pytorch version: 2.7.0+cu126
2025-05-12T12:33:09.352975 - xformers version: 0.0.30
2025-05-12T12:33:09.352975 - Set vram state to: NORMAL_VRAM
2025-05-12T12:33:09.352975 - Device: cuda:0 NVIDIA GeForce GTX 1660 Ti : cudaMallocAsync
2025-05-12T12:33:09.686612 - Using xformers attention
2025-05-12T12:33:11.295329 - Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
2025-05-12T12:33:11.295743 - ComfyUI version: 0.3.33
2025-05-12T12:33:11.358578 - ComfyUI frontend version: 1.18.10
2025-05-12T12:33:11.361570 - [Prompt Server] web root: C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\comfyui_frontend_package\static
2025-05-12T12:33:13.337748 - Module 'diffusers' load failed. If you don't have it installed, do it: pip install diffusers
2025-05-12T12:33:13.688119 - [ComfyUI-Easy-Use] server: v1.3.0 Loaded
2025-05-12T12:33:13.688119 - [ComfyUI-Easy-Use] web root: C:\comfyui2\ComfyUI\custom_nodes\comfyui-easy-use\web_version/v2 Loaded
2025-05-12T12:33:13.717123 - ### Loading: ComfyUI-Manager (V3.31.13)
2025-05-12T12:33:13.717123 - [ComfyUI-Manager] network_mode: public
2025-05-12T12:33:14.072667 - ### ComfyUI Version: v0.3.33-10-g577de83c | Released on '2025-05-11'
2025-05-12T12:33:14.311459 - Import times for custom nodes:
2025-05-12T12:33:14.311459 -   0.0 seconds: C:\comfyui2\ComfyUI\custom_nodes\websocket_image_save.py
2025-05-12T12:33:14.313453 -   0.0 seconds: C:\comfyui2\ComfyUI\custom_nodes\aigodlike-comfyui-translation
2025-05-12T12:33:14.313453 -   0.0 seconds: C:\comfyui2\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two
2025-05-12T12:33:14.314451 -   0.0 seconds: C:\comfyui2\ComfyUI\custom_nodes\comfyui-custom-scripts
2025-05-12T12:33:14.314451 -   0.6 seconds: C:\comfyui2\ComfyUI\custom_nodes\ComfyUI-Manager
2025-05-12T12:33:14.315449 -   0.7 seconds: C:\comfyui2\ComfyUI\custom_nodes\comfyui-easy-use
2025-05-12T12:33:14.362371 - Starting server
2025-05-12T12:33:14.363371 - To see the GUI go to: http://127.0.0.1:8188
2025-05-12T12:33:19.620936 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-05-12T12:33:19.678088 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-05-12T12:33:19.845706 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-05-12T12:33:19.955392 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-05-12T12:33:20.436522 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-05-12T12:34:14.934119 - Cannot connect to comfyregistry.
2025-05-12T12:34:14.940242 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-05-12T12:34:19.785536 - got prompt
2025-05-12T12:34:19.982016 - C:\comfyui2\ComfyUI\models\clip\siglip-so400m-patch14-384
2025-05-12T12:34:20.277243 - [DONE]
2025-05-12T12:34:20.377933 - [ComfyUI-Manager] All startup tasks have been completed.
2025-05-12T12:34:26.095833 - Loading VLM's custom vision model
2025-05-12T12:34:28.260562 - Requested to load SiglipVisionTransformer
2025-05-12T12:34:28.272048 - loaded completely 9.5367431640625e+25 809.1729736328125 True
2025-05-12T12:34:53.793004 - Requested to load ImageAdapter
2025-05-12T12:34:53.793004 - loaded completely 9.5367431640625e+25 41.0390625 True
2025-05-12T12:35:10.488040 - Loading tokenizer
2025-05-12T12:35:11.002348 - Loading LLM: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
2025-05-12T12:35:11.003346 - C:\comfyui2\ComfyUI\models\LLM\Meta-Llama-3.1-8B-Instruct-bnb-4bit
2025-05-12T12:35:11.005339 - Successfully modified 'base_model_name_or_path' value in 'C:\comfyui2\ComfyUI\models\Joy_caption_two\text_model\adapter_config.json'.
2025-05-12T12:35:24.237554 - Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
2025-05-12T12:35:30.940677 - !!! Exception during processing !!!
All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]), device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))]
2025-05-12T12:35:30.962005 - Traceback (most recent call last):
  File "C:\comfyui2\ComfyUI\execution.py", line 349, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "C:\comfyui2\ComfyUI\execution.py", line 224, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "C:\comfyui2\ComfyUI\execution.py", line 196, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\comfyui2\ComfyUI\execution.py", line 185, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "C:\comfyui2\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two\joy_caption_two_node.py", line 407, in generate
    generate_ids = text_model.generate(input_ids, inputs_embeds=input_embeds, attention_mask=attention_mask,
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py", line 2215, in generate
    result = self._sample(
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py", line 3206, in _sample
    outputs = self(**model_inputs, return_dict=True)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 1190, in forward
    outputs = self.model(
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 945, in forward
    layer_outputs = decoder_layer(
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 676, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 559, in forward
    query_states = self.q_proj(hidden_states)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\peft\tuners\lora\bnb.py", line 467, in forward
    result = self.base_layer(x, *args, **kwargs)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\nn\modules.py", line 484, in forward
    return bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state).to(inp_dtype)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd\_functions.py", line 533, in matmul_4bit
    return MatMul4Bit.apply(A, B, out, bias, quant_state)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\function.py", line 575, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd\_functions.py", line 462, in forward
    output = torch.nn.functional.linear(A, F.dequantize_4bit(B, quant_state).to(A.dtype).t(), bias)
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 1363, in dequantize_4bit
    is_on_gpu([A, absmax, out])
  File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 464, in is_on_gpu
    raise RuntimeError(
RuntimeError: All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]),
device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))]
2025-05-12T12:35:30.967286 - Prompt executed in 71.18 seconds
2025-05-12T12:51:12.715731 - got prompt
2025-05-12T12:51:12.907212 - loaded completely 9.5367431640625e+25 809.1729736328125 True
2025-05-12T12:51:36.518268 - loaded completely 9.5367431640625e+25 41.0390625 True
2025-05-12T12:51:53.412534 - !!! Exception during processing !!!
All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]), device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))]
[traceback identical to the first failure; omitted]
2025-05-12T12:51:53.426451 - Prompt executed in 40.69 seconds
2025-05-12T13:27:37.964284 - got prompt
2025-05-12T13:27:38.213613 - loaded completely 9.5367431640625e+25 809.1729736328125 True
2025-05-12T13:28:03.041961 - loaded completely 9.5367431640625e+25 41.0390625 True
2025-05-12T13:28:19.913094 - !!! Exception during processing !!!
All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]), device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))]
[traceback identical to the first failure; omitted]
2025-05-12T13:28:19.925246 - Prompt executed in 41.93 seconds
2025-05-12T14:01:51.529873 - got prompt
2025-05-12T14:01:51.690142 - loaded completely 9.5367431640625e+25 809.1729736328125 True
2025-05-12T14:01:56.251833 - [ComfyUI-Manager] The ComfyRegistry cache update is still in progress, so an outdated cache is being used.
2025-05-12T14:01:56.355173 - FETCH DATA from: C:\comfyui2\ComfyUI\user\default\ComfyUI-Manager\cache\1514988643_custom-node-list.json [DONE]
2025-05-12T14:01:56.465008 - FETCH DATA from: C:\comfyui2\ComfyUI\user\default\ComfyUI-Manager\cache\746607195_github-stats.json [DONE]
2025-05-12T14:01:56.476976 - FETCH DATA from: C:\comfyui2\ComfyUI\user\default\ComfyUI-Manager\cache\832903789_extras.json [DONE]
2025-05-12T14:01:56.646551 - FETCH DATA from: C:\comfyui2\ComfyUI\user\default\ComfyUI-Manager\cache\1742899825_extension-node-map.json [DONE]
2025-05-12T14:02:15.505597 - loaded completely 9.5367431640625e+25 41.0390625 True
2025-05-12T14:02:31.244068 - !!! Exception during processing !!!
All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]), device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))]
[traceback identical to the first failure; omitted]
2025-05-12T14:02:31.253592 - Prompt executed in 39.72 seconds

## Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"id":"00000000-0000-0000-0000-000000000000","revision":0,"last_node_id":6,"last_link_id":6,"nodes":[{"id":2,"type":"LoadImage","pos":[-536.7379150390625,-152.25942993164062],"size":[270,314],"flags":{},"order":0,"mode":0,"inputs":[{"label":"image","localized_name":"图像","name":"image","type":"COMBO","widget":{"name":"image"},"link":null},{"label":"upload","localized_name":"选择文件上传","name":"upload","type":"IMAGEUPLOAD","widget":{"name":"upload"},"link":null}],"outputs":[{"label":"IMAGE","localized_name":"图像","name":"IMAGE","type":"IMAGE","links":[5]},{"label":"MASK","localized_name":"遮罩","name":"MASK","type":"MASK","links":null}],"properties":{"cnr_id":"comfy-core","ver":"0.3.33","Node name for S&R":"LoadImage"},"widgets_values":["CN_6.jpg","image"]},{"id":4,"type":"ShowText|pysssss","pos":[247.84080505371094,74.16905975341797],"size":[366.1156921386719,159.55372619628906],"flags":{},"order":3,"mode":0,"inputs":[{"label":"text","localized_name":"text","name":"text","type":"STRING","link":4}],"outputs":[{"label":"STRING","localized_name":"字符串","name":"STRING","shape":6,"type":"STRING","links":null}],"properties":{"cnr_id":"comfyui-custom-scripts","ver":"1.2.5","Node name for S&R":"ShowText|pysssss"},"widgets_values":[]},{"id":6,"type":"Joy_caption_two_load","pos":[-148.00003051757812,-86.00003051757812],"size":[270,58],"flags":{},"order":1,"mode":0,"inputs":[{"label":"model","localized_name":"model","name":"model","type":"COMBO","widget":{"name":"model"},"link":null}],"outputs":[{"label":"JoyTwoPipeline","localized_name":"JoyTwoPipeline","name":"JoyTwoPipeline","type":"JoyTwoPipeline","links":[6]}],"properties":{"cnr_id":"comfyui_slk_joy_caption_two","ver":"667751cab945bd8e9fb0be4d557d47e36821350a","Node name for S&R":"Joy_caption_two_load"},"widgets_values":["unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit"]},{"id":5,"type":"Joy_caption_two","pos":[-139.20001220703125,33.199951171875],"size":[270,126],"flags":{},"order":2,"mode":0,"inputs":[{"label":"joy_two_pipeline","localized_name":"joy_two_pipeline","name":"joy_two_pipeline","type":"JoyTwoPipeline","link":6},{"label":"image","localized_name":"image","name":"image","type":"IMAGE","link":5},{"label":"caption_type","localized_name":"caption_type","name":"caption_type","type":"COMBO","widget":{"name":"caption_type"},"link":null},{"label":"caption_length","localized_name":"caption_length","name":"caption_length","type":"COMBO","widget":{"name":"caption_length"},"link":null},{"label":"low_vram","localized_name":"low_vram","name":"low_vram","type":"BOOLEAN","widget":{"name":"low_vram"},"link":null}],"outputs":[{"label":"STRING","localized_name":"字符串","name":"STRING","type":"STRING","links":[4]}],"properties":{"cnr_id":"comfyui_slk_joy_caption_two","ver":"667751cab945bd8e9fb0be4d557d47e36821350a","Node name for S&R":"Joy_caption_two"},"widgets_values":["Descriptive","long",true]}],"links":[[4,5,0,4,0,"STRING"],[5,2,0,5,1,"IMAGE"],[6,6,0,5,0,"JoyTwoPipeline"]],"groups":[],"config":{},"extra":{"frontendVersion":"1.18.10"},"version":0.4}

## Additional Context

(Please add any additional context or steps to reproduce the error here)

What does the code above mean?
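The `RuntimeError` at the bottom of each traceback is raised by a device-consistency check inside bitsandbytes' `dequantize_4bit`: the packed 4-bit weight, its `absmax` quantization constants, and the output buffer are all sitting on the CPU while 4-bit dequantization requires them on a GPU. The sketch below models the kind of check that produces this message; it is a simplified, hypothetical reconstruction for illustration (tensors are modeled as `(shape, device)` pairs), not the actual bitsandbytes source.

```python
# Simplified model of bitsandbytes' device-consistency check
# (hypothetical reconstruction, not the real implementation).
# Each "tensor" is represented as a (shape, device) pair.

def is_on_gpu(tensors):
    """Raise RuntimeError if any tensor is not on a CUDA device."""
    on_cpu = [(shape, device) for shape, device in tensors if device != "cuda"]
    if on_cpu:
        raise RuntimeError(
            "All input tensors need to be on the same GPU, but found some "
            f"tensors to not be on a GPU: {on_cpu}"
        )
    return True

# The three offending tensors from the log: the packed 4-bit weight,
# its absmax quantization constants, and the dequantized weight buffer.
tensors = [((1, 8388608), "cpu"), ((262144,), "cpu"), ((4096, 4096), "cpu")]
try:
    is_on_gpu(tensors)
except RuntimeError as e:
    print(e)
```

In practice this class of error usually means the quantized text model (or part of it) stayed on the CPU when `text_model.generate(...)` ran; for example, an offloading option like the node's `low_vram` toggle can leave weights out of VRAM between runs. Making sure the model is actually resident on the GPU before generation (or disabling the offload) is typically the direction for a fix, though the exact remedy depends on the node's implementation.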

最新推荐

recommend-type

Java源码-springboot+vue217志同道合交友网站+mysql完整资料.zip

本项目是基于Java源码的springboot+vue217志同道合交友网站,结合mysql数据库实现的一个完整项目。项目主要功能包括用户注册登录、个人资料管理、兴趣标签匹配、在线聊天、活动发布与参与等,旨在为用户提供一个便捷的交友平台,帮助志同道合的人快速找到彼此。项目采用前后端分离架构,前端使用vue217框架实现用户界面,后端基于springboot框架处理业务逻辑,mysql数据库负责数据存储与管理。开发此项目的目的是为了解决现代社会中人们交友难的问题,通过技术手段提升交友效率,促进人与人之间的交流与互动。毕设项目源码常年开发定制更新,希望对需要的同学有帮助。
recommend-type

RaspberryMatic与Docker整合:CCU2固件容器化操作指南

### Docker与CCU2固件整合 #### 知识点1:Docker容器技术 Docker是一种开源的容器化平台,它允许开发者将应用及其依赖打包到一个可移植的容器中,该容器可以在任何支持Docker的机器上运行。Docker容器和传统的虚拟机不同,它不需要完整的操作系统镜像,而是利用宿主机的操作系统内核,实现了轻量级的隔离,启动速度快,资源消耗低。 #### 知识点2:CCU2固件与OpenHAB CCU2(CCU代表Comet Control Unit)固件通常用在HomeMatic智能家居自动化系统中,它负责管理和控制HomeMatic的设备。CCU2运行的是一个基于Linux的自定义系统,专门优化用于与HomeMatic硬件和软件通信。当把CCU2固件用于Docker容器时,意味着你可以在任何支持Docker的设备上,通过容器化的方式部署和运行CCU2环境,从而支持HomeMatic设备的控制。 #### 知识点3:RaspberryMatic RaspberryMatic是为树莓派量身打造的一个项目,它允许用户在树莓派上运行CCU2固件。项目提供了一整套的HomeMatic体验,包括备份功能、Dutty-Cycle、LAN GW等。RaspberryMatic的一个显著优点是支持多种架构,包括x86_64/amd64、ARM和ARM64。 #### 知识点4:Docker容器部署脚本 "docker-ccu"项目提供了一套脚本,这些脚本能够自动化创建一个Docker容器来运行CCU2固件。通常这类脚本命名为`deploy.sh`,开发者或者最终用户可以通过运行这些脚本来快速部署和启动Docker容器,而无需手动配置和启动容器的每一个步骤。 #### 知识点5:数据备份与迁移 在使用Docker容器进行部署时,用户可能需要在不同环境下迁移数据或者保留原有数据。脚本中提到了数据保留的问题,如果用户之前使用的是其他方式部署,比如非Docker方式或者使用了特定的docker卷或者容器名称,那么在调用`deploy.sh`脚本部署时,需要对设置进行相应的调整,以保证数据的完整性。 #### 知识点6:仓库维护与开源社区 项目维护者提到了不再计划继续更新该存储库,并提出了将仓库设置为只读模式的想法。这在开源社区中是比较常见的情况,尤其是在维护者有新的兴趣点或者由于个人时间限制时。在此情况下,开源项目可以通过社区协作来继续维护,或者寻求其他维护者的接手。 #### 知识点7:Shell脚本编写 由于项目中提到了一个叫做`deploy.sh`的脚本文件,这说明脚本是用Shell语言编写的。Shell脚本非常适合于执行自动化任务,比如配置环境、启动服务、管理文件系统等,因此在自动化部署或系统管理中经常被使用。了解Shell脚本编写,对于自动化管理Docker容器等任务至关重要。 #### 知识点8:社区支持和反馈 项目维护者在描述中提到,如果在一个月内没有收到任何关于将官方CCU作为容器使用的反馈,将会把仓库设置为只读模式。这表明了开源社区中项目的发展很大程度上依赖于社区成员的反馈和支持。因此,了解如何与开源项目互动,提交问题、建议和补丁,是参与开源社区的重要途径。 #### 知识点9:固件概念与兼容性 CCU2固件特别设计用于某些特定硬件,但通过Docker化的方式,开发者可以跨平台运行CCU2固件,这增加了固件的兼容性。Docker的隔离性允许用户在一个通用的软件层面上运行原本可能受限于特定硬件的固件,从而扩展了固件的应用场景。 #### 知识点10:操作系统架构支持 项目支持包括x86_64/amd64、ARM和ARM64在内的多种架构,说明了Docker容器在不同硬件平台上的高度可移植性。这一特点使得开发者可以在各种硬件上部署相同的环境,简化了跨平台应用的开发和部署。 #### 结语 该文档提供了一个关于如何将特定固件整合入Docker容器的方案,并说明了项目维护者对于未来发展的规划。这些内容不仅对有志于尝试或扩展该项目的个人有指导意义,同时也展示了开源社区协作以及Docker技术在部署和管理复杂系统环境中的重要性和便利性。
recommend-type

手把手封装SDK:C#如何高效集成汉印D35BT打印功能

# 摘要 本文围绕C# SDK封装与汉印D35BT打印机集成的技术实践展开,系统阐述了SDK封装的理论基础、架构设计及面向对象设计原则的应用。文章分析了汉印D35BT打印机的通信协议与API调用方式,并详细介绍了在C#中实现蓝牙设备交互与数据发送的方法。通过核心打印功能的类封装、异步任务处理机制的设计,提升了SDK的易用性与扩展性。结合WinForm项目示例验证功能完整性后,进一步探讨了SDK的性能优化策略、测试方法及发布流程,构建了从设计、实现到部署的完整技术路径。 # 关键字 SDK封装;蓝牙通信;面向对象设计;异步打印;API调用;NuGet包发布 参考资源链接:[C#开
recommend-type

VM虚拟机ubuntu桥接主机无线网络

### 配置 VMware Ubuntu 桥接模式连接无线网络 在 VMware 中配置 Ubuntu 虚拟机通过桥接模式连接主机的无线网络,需要确保虚拟机与主机处于同一网络段,并能够通过主机的无线网卡直接访问外部网络。以下是详细的配置步骤: #### VMware 设置桥接模式 1. **以管理员权限运行 VMware**,进入 **编辑 > 虚拟网络编辑器**。 2. 在 **虚拟网络编辑器** 界面中,找到 **VMnet0(桥接模式)** 的设置部分。 3. 在 **“桥接到”** 的下拉菜单中,选择主机的无线网卡设备。 4. 勾选 **“自动设置桥接”** 选项,确保 VMwar
recommend-type

Ruby on Rails跳蚤市场应用开发详解

根据提供的文件信息,我们可以从中提炼出以下知识点: ### 标题知识点 - **freemarket_sample_72h** - 标题暗示这是一份关于名为“freemarket”的跳蚤市场应用程序的72小时开发样例或原型。 - 样例名称“freemarket_sample_72h”可能用于内部标识或者版本控制,表明该样本是在有限的时间内(即72小时内)完成的。 ### 描述知识点 - **网站结构** - 首页:应用程序的入口点,通常包含总体介绍和导航链接。 - 产品页面:展示产品的列表或者详细信息。 - 展览页:可能指专门展示某些特殊产品或促销产品的页面。 - 应用信息:关于应用程序的基本信息,如版本号、开发团队、联系方式等。 - 应用概述:对应用程序功能和目标用户群体的简介。 - **用户账户信息** - 测试账号:为开发者或测试者提供的虚拟用户账号信息,以便进行应用程序的内部测试。 - 购买者信息:提供了邮箱地址、密码以及购买卡信息,是进行交易和购买所必需的。 - 卖家信息:提供了卖家的邮箱地址和密码,用于登录卖家账户进行产品上架和管理。 - **功能列表** - 新用户注册:允许新用户创建账户。 - 登录功能:用户可以使用凭证登录应用程序。 - 产品列表功能:展示所有可购买的产品。 - 产品购买功能:用户可以购买产品,涉及到支付信息的处理。 - 产品类别注册和显示:允许用户查看不同的产品分类。 - 产品详细信息显示:展示每个产品的详细信息,如描述、价格等。 - 编辑和删除列出的产品:赋予管理员或卖家权利更新或移除产品信息。 - **开发环境** - Ruby 2.5.1:这是Ruby编程语言的一个版本。 - Ruby on Rails 5.4.2:这是一个使用Ruby语言编写的开源Web应用框架。 - MySQL 14.14:这是一个流行的开源关系型数据库管理系统。 - Github:一个用于代码托管和版本控制的平台。 - AWS:亚马逊提供的云服务平台,包括EC2(弹性计算云)和S3(简单存储服务)。 - Capistrano:是一个开源的自动化部署工具,常用于Ruby on Rails项目。 - **开发周期和工作时间** - 开发时间:约4周,说明了项目从开始到完成所需的时间。 - 每天平均工作时间:大约9小时,表明项目的紧凑和开发团队的努力。 - 开发系统人数:4,指出了参与该项目的开发人员数量。 - 敏捷类型:可能指的是一种开发过程,强调快速迭代和响应变化。 ### 标签知识点 - **Ruby** - 这个标签直接指向了Ruby编程语言,说明该应用程序是使用Ruby开发的。 ### 压缩包子文件的文件名称列表知识点 - **freemarket_sample_72h-master** - 这是源代码压缩包的文件名称,指示了一个版本控制中的主分支(master)。 通过这些信息,我们可以了解到该应用程序是一个在线跳蚤市场,它允许用户注册、登录、查看和购买产品,并且提供了产品管理功能。项目采用Ruby语言和Ruby on Rails框架开发,并使用了多种技术和平台,如MySQL数据库、Github、AWS云服务以及Capistrano部署工具。项目开发遵循敏捷开发原则,并在一个紧凑的时间表中完成。
recommend-type

【C#条码打印实战技巧】:汉印D35BT数据格式转换全攻略

# 摘要 本文围绕C#语言实现条码打印的技术方案展开,重点以汉印D35BT打印机为实践对象,系统性地讲解了条码打印中数据格式的基本原理与处理方法。文章分析了条码打印的数据流向、通信协议与命令集结构,探讨了在C#开发环境下实现数据格式转换、命令封装与容错机制的关键技术。通过完整的打印流程实现、性能优化策略以及常见问题排查方法的介绍,帮助开发者构建高效稳定的条码打印应用。同时,文章还展望了条码打印技术在多协议支持、云服务集成与企业级系统对接方面的拓展方向。 # 关键字 条码打印;数据格式;C#开发;通信协议;命令封装;容错机制 参考资源链接:[C#开发汉印D35BT条码打印机源代码

### Setting Up the Pikachu Vulnerability Range on Linux

Setting up the "Pikachu" vulnerability-range environment on a Linux system mainly involves the following steps:

### Download and install the Pikachu range

First, make sure Git is installed on the system. If it is not, install it with:

```bash
sudo yum install git
```

Next, use Git to clone the Pikachu source code from GitHub to the local machine:

```bash
git clone https://github.com/zhuifengshaonianhanlu/pikachu.git
```

Once the download completes, move the Pikachu files into the Apache server's web root so they can be accessed over the web:
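Once the files are in the web root, Pikachu also needs its MySQL connection configured before the range will initialize. In typical Pikachu releases this configuration lives in `inc/config.inc.php`; the path and constant names below follow that pattern but should be treated as assumptions and checked against your own copy:

```php
<?php
// inc/config.inc.php — adjust the credentials to your MySQL setup
define('DBHOST', 'localhost');   // database host
define('DBUSER', 'root');        // MySQL user
define('DBPW',   'root');        // MySQL password
define('DBNAME', 'pikachu');     // database used by the range
define('DBPORT', '3306');        // default MySQL port
?>
```

After saving the configuration, open the site in a browser and use Pikachu's built-in install/initialize link to create the database tables.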

### jsTris: Exploring a JavaScript Implementation of Tetris

Tetris is a classic video game. Since it was first developed on a computer in 1984 by the Soviet programmer Alexey Pajitnov, its simple yet challenging gameplay has attracted players around the world. Over time, many developers have reimplemented or reinvented Tetris in different languages and on different platforms. This article takes a detailed look at a Tetris implementation in JavaScript called "jsTris".

### 1. JavaScript and Web Game Development

JavaScript is a scripting language widely used in front-end web development; it is what gives web pages dynamic, interactive behavior. Since the ECMAScript 5 era, JavaScript performance has improved markedly, gradually making it a practical choice for web games. By combining the HTML5 Canvas API with JavaScript, developers can build smooth, attractive graphical interfaces and deliver a good game experience.

### 2. Tetris Game Mechanics

The basic gameplay: the player moves, rotates, and places a stream of falling pieces so that they form complete rows at the bottom of the field; completed rows are cleared and score points. The game ends when the pieces stack up to the top of the screen, and the difficulty gradually increases over time.

### 3. jsTris Project Overview

"jsTris" is a JavaScript version of Tetris, written by an unnamed developer in 2014 and open-sourced on GitHub. The project contains all of the JavaScript code implementing the game logic, along with the HTML and CSS files needed to build the game interface.

Although the author describes the code as "very messy" and admits to never having gone back to clean it up — which may make the original structure and logic hard to follow for beginners and later maintainers — jsTris can still serve as a learning resource: developers can refactor and optimize it to raise the code quality while better understanding the logic behind the game.

### 4. Music Assets

The description mentions a music source but gives no concrete file information. Web games usually use background music and sound effects to enrich the experience; in jsTris the music files may be embedded in the project or pulled in through external links. Copyright deserves particular attention in open-source projects of this kind: developers should make sure they hold the rights to any music they use or that it is in the public domain.

### 5. Tags and File Structure

The project's tag is "JavaScript", indicating it is developed entirely in JavaScript. The name "jsTris-master" is probably the project's main folder, holding the game's core code and resource files. A typical JavaScript project structure includes:

- HTML files: define the game's structure and entry point.
- CSS files: handle the game's styling and visual effects.
- JavaScript files: contain the game logic, controls, and interaction code.
- Audio files: background music and sound effects.
- Image files: possibly icons, characters, or backgrounds used in the game.

### 6. Open Source and Community

The project was uploaded to GitHub, a global open-source community that lets developers contribute code and improve projects together. On GitHub, jsTris probably has its own README explaining how to run the game, contribute code, or report problems. Open-source projects are a valuable way for developers to learn and practice programming, and the community provides help and feedback that can further improve the project.

### 7. Why Cleaning Up and Refactoring Code Matters

The jsTris code is said to be "very messy", and for any kind of software project readability and maintainability are critically important. Messy code is hard to understand, let alone extend or optimize. For jsTris, or any similar project, refactoring is therefore worth taking seriously: it raises code quality, lowers maintenance cost, and may fix latent bugs along the way.

### Summary

As a Tetris game implemented in JavaScript, jsTris shows how web technology can be used for game development. Despite its structural problems, it offers interested developers a real opportunity to learn and improve it. Careful analysis and refactoring can raise the quality of jsTris itself while giving the developer valuable experience in JavaScript programming and game development.
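The line-clear rule described in the mechanics section above can be sketched in a few lines (shown here in Python for compactness — jsTris itself implements the same idea in JavaScript):

```python
def clear_full_rows(board):
    """Tetris line-clear: remove every fully occupied row and pad with
    empty rows on top, so the stack above the cleared lines falls down.
    `board` is a list of rows; a cell is truthy when occupied."""
    width = len(board[0])
    kept = [row for row in board if not all(row)]   # rows that survive
    cleared = len(board) - len(kept)                # number of full rows removed
    empty = [[0] * width for _ in range(cleared)]   # new blank rows on top
    return empty + kept, cleared

board = [
    [0, 0, 0],
    [1, 1, 1],   # full row: gets cleared
    [1, 0, 1],
]
new_board, lines = clear_full_rows(board)
```

The returned `lines` count is what a scoring function would consume; clearing multiple rows at once is what makes multi-line bonuses possible.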

### From Failures to Stable Printing: A Complete Guide to Troubleshooting Common HPRT D35BT Problems

# Abstract

This article is a systematic study of using and troubleshooting the HPRT D35BT printer. It first introduces the printer's basic functions and typical application scenarios, then analyzes the causes of common failures in print quality, communication and connectivity, and job handling, covering hardware compatibility, environmental factors, data communication, and firmware compatibility. Building on that analysis, it proposes hands-on solutions to the typical problems and discusses advanced strategies such as print-performance optimization, firmware upgrades, and enterprise-system integration. Through case studies and practical lessons learned, it aims to improve the device's operational stability and printing efficiency and to give enterprise users reliable technical support and operations guidance.

# Keywords

HPRT D35BT; print quality; Bluetooth communication; print buffer; firmware upgrade; data-format adaptation

Reference resource link: [C# source code for the HPRT D35BT barcode printer and secondary
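For the communication failures the abstract mentions, a common fault-tolerance pattern is to wrap the transmit call in a retry loop with backoff. The sketch below is generic — `send` is a stand-in callable, not the real D35BT driver API — but the same structure applies to a flaky Bluetooth or serial write in any language:

```python
import time

def send_with_retry(send, payload, attempts=3, backoff_s=0.05):
    """Retry a flaky transmit function with linearly growing backoff.
    `send` is any callable that raises on failure (a stand-in for a real
    Bluetooth/serial write); the last failure is re-raised to the caller."""
    for attempt in range(1, attempts + 1):
        try:
            return send(payload)
        except OSError:
            if attempt == attempts:
                raise
            time.sleep(backoff_s * attempt)  # wait a bit longer before each retry

# simulate a link that fails twice before succeeding
calls = {"n": 0}
def flaky_send(data):
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("transmit failed")
    return len(data)

sent = send_with_retry(flaky_send, b"\x1b@label", backoff_s=0)
```

Bounding the attempts and re-raising the final error keeps the caller in control: a print job that still fails after several retries should surface as a diagnosable error, not hang silently.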

### Installing Point Transformer V3

To install the **Point Transformer V3** deep-learning model, you usually obtain the source code from its open-source repository and deploy it locally. The implementation is currently hosted on GitHub or a similar code platform. The general installation and deployment steps:

### 1. Environment Preparation

Make sure the following tools and libraries are installed on the system:

- **Python** (3.8 or newer recommended)
- **PyTorch** (1.10 or newer recommended)
- **CUDA Toolkit** (a version compatible with your PyTorch build)
- **NVIDIA GPU driver**
- **Build toolchain** (e.g., `ninja`, `CMake`)

An example command for installing PyTorch