# ComfyUI Error Report

## Error Details
- **Node ID:** 90
- **Node Type:** WanVideoSampler
- **Exception Type:** UnboundLocalError
- **Exception Message:** cannot access local variable 'callback_latent' where it is not associated with a value

## Stack Trace
```
File "G:\comfyui\ComfyUI\execution.py", line 496, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
File "G:\comfyui\ComfyUI\execution.py", line 315, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
File "G:\comfyui\ComfyUI\execution.py", line 289, in _async_map_node_over_list
    await process_inputs(input_dict, i)
File "G:\comfyui\ComfyUI\execution.py", line 277, in process_inputs
    result = f(**inputs)
File "G:\comfyui\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 3988, in process
    "samples": callback_latent.unsqueeze(0).cpu() if callback is not None else None,
    ^^^^^^^^^^^^^^^
```

## System Information
- **ComfyUI Version:** 0.3.56
- **Arguments:** main.py --auto-launch
- **OS:** nt
- **Python Version:** 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.7.1+cu128

## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 4070 Ti SUPER : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 17170956288
- **VRAM Free:** 15821963264
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0

## Logs
```
2025-09-01T14:58:56.860057 -    0.9 seconds: G:\comfyui\ComfyUI\custom_nodes\comfyui-mixlab-nodes
2025-09-01T14:58:56.860057 -    1.1 seconds: G:\comfyui\ComfyUI\custom_nodes\ComfyUI_Custom_Nodes_AlekPet
2025-09-01T14:58:56.861054 -    1.7 seconds: G:\comfyui\ComfyUI\custom_nodes\ComfyUI-nunchaku
2025-09-01T14:58:56.861054 -    2.4 seconds: G:\comfyui\ComfyUI\custom_nodes\ComfyUI-Easy-Use
2025-09-01T14:58:57.088527 - Context impl SQLiteImpl.
2025-09-01T14:58:57.088527 - Will assume non-transactional DDL.
2025-09-01T14:58:57.089524 - No target revision found.
2025-09-01T14:58:57.111451 - Starting server 2025-09-01T14:58:57.112447 - To see the GUI go to: http://127.0.0.1:8188 2025-09-01T14:58:58.110145 - G:\comfyui\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/photoswipe-lightbox.esm.min.js2025-09-01T14:58:58.110145 - 2025-09-01T14:58:58.154639 - G:\comfyui\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/photoswipe.min.css2025-09-01T14:58:58.154639 - 2025-09-01T14:58:58.155636 - G:\comfyui\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/pickr.min.js2025-09-01T14:58:58.155636 - 2025-09-01T14:58:58.192513 - G:\comfyui\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/classic.min.css2025-09-01T14:58:58.192513 - 2025-09-01T14:58:58.193509 - G:\comfyui\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/model-viewer.min.js2025-09-01T14:58:58.193509 - 2025-09-01T14:58:58.197497 - G:\comfyui\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/juxtapose.css2025-09-01T14:58:58.198493 - 2025-09-01T14:58:58.209456 - G:\comfyui\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/juxtapose.min.js2025-09-01T14:58:58.209456 - 2025-09-01T14:58:58.799579 - [G:\comfyui\ComfyUI\custom_nodes\comfy_mtb] | INFO -> Found multiple match, we will pick the last D:\AI-sd\sd-webui-aki-v4\models\SwinIR ['G:\\comfyui\\ComfyUI\\models\\upscale_models', 'D:\\AI-sd\\sd-webui-aki-v4\\models\\ESRGAN', 'D:\\AI-sd\\sd-webui-aki-v4\\models\\RealESRGAN', 'D:\\AI-sd\\sd-webui-aki-v4\\models\\SwinIR'] 2025-09-01T14:58:58.807556 - []2025-09-01T14:58:58.807556 - 2025-09-01T14:58:58.807556 - []2025-09-01T14:58:58.807556 - 2025-09-01T14:59:00.532965 - got prompt 2025-09-01T14:59:00.569842 - Using xformers attention in VAE 2025-09-01T14:59:00.570844 - Using xformers attention in VAE 2025-09-01T14:59:00.726582 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16 2025-09-01T14:59:00.882662 - Requested to load FluxClipModel_ 2025-09-01T14:59:00.889639 - loaded completely 9.5367431640625e+25 4777.53759765625 True 2025-09-01T14:59:00.998583 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16 2025-09-01T14:59:01.063367 - clip missing: ['text_projection.weight'] 2025-09-01T14:59:01.722971 - FETCH ComfyRegistry Data: 5/962025-09-01T14:59:01.722971 - 2025-09-01T14:59:03.042519 - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16 2025-09-01T14:59:03.043516 - model_type FLUX 2025-09-01T14:59:11.471386 - FETCH ComfyRegistry Data: 10/962025-09-01T14:59:11.473376 - 2025-09-01T14:59:11.998346 - Requested to load Flux 2025-09-01T14:59:18.503848 - loaded completely 0.0 11350.067443847656 True 2025-09-01T14:59:18.514805 - Patching comfy attention to use sageattn2025-09-01T14:59:18.514805 - 2025-09-01T14:59:18.688225 - 0%| | 0/20 [00:00<?, ?it/s]2025-09-01T14:59:20.150041 - FETCH ComfyRegistry Data: 15/962025-09-01T14:59:20.150041 - 2025-09-01T14:59:28.628200 - 40%|█████████████████████████████████▏ | 8/20 [00:09<00:14, 1.17s/it]2025-09-01T14:59:29.656666 - FETCH ComfyRegistry Data: 20/962025-09-01T14:59:29.657170 - 2025-09-01T14:59:37.969589 - 80%|█████████████████████████████████████████████████████████████████▌ | 16/20 [00:19<00:04, 1.16s/it]2025-09-01T14:59:38.276116 - FETCH ComfyRegistry Data: 25/962025-09-01T14:59:38.276619 - 2025-09-01T14:59:42.606388 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:23<00:00, 1.16s/it]2025-09-01T14:59:42.609378 - 
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:23<00:00, 1.20s/it]2025-09-01T14:59:42.610375 - 2025-09-01T14:59:42.612369 - Restoring initial comfy attention2025-09-01T14:59:42.613364 - 2025-09-01T14:59:43.566091 - Requested to load AutoencodingEngine 2025-09-01T14:59:44.164328 - loaded completely 0.0 159.87335777282715 True 2025-09-01T14:59:44.683600 - comfyui lumi batcher overwrite task done2025-09-01T14:59:44.684596 - 2025-09-01T14:59:44.695560 - Prompt executed in 44.16 seconds 2025-09-01T14:59:47.355547 - FETCH ComfyRegistry Data: 30/962025-09-01T14:59:47.355547 - 2025-09-01T14:59:50.013416 - got prompt 2025-09-01T14:59:50.431031 - loaded completely 0.0 11350.067443847656 True 2025-09-01T14:59:50.435018 - Patching comfy attention to use sageattn2025-09-01T14:59:50.435018 - 2025-09-01T14:59:55.538670 - 25%|████████████████████▊ | 5/20 [00:05<00:16, 1.11s/it]2025-09-01T14:59:56.060186 - FETCH ComfyRegistry Data: 35/962025-09-01T14:59:56.060689 - 2025-09-01T15:00:03.927199 - 60%|█████████████████████████████████████████████████▏ | 12/20 [00:13<00:09, 1.19s/it]2025-09-01T15:00:04.665127 - FETCH ComfyRegistry Data: 40/962025-09-01T15:00:04.666123 - 2025-09-01T15:00:13.251827 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:22<00:00, 1.14s/it]2025-09-01T15:00:13.252823 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:22<00:00, 1.14s/it]2025-09-01T15:00:13.252823 - 2025-09-01T15:00:13.253821 - Restoring initial comfy attention2025-09-01T15:00:13.253821 - 2025-09-01T15:00:13.397095 - FETCH ComfyRegistry Data: 45/962025-09-01T15:00:13.397604 - 2025-09-01T15:00:14.837381 - loaded completely 0.0 159.87335777282715 True 2025-09-01T15:00:15.176809 - comfyui lumi batcher overwrite task done2025-09-01T15:00:15.177805 - 2025-09-01T15:00:15.179799 - Prompt executed in 25.16 seconds 2025-09-01T15:00:22.087367 - FETCH ComfyRegistry Data: 50/962025-09-01T15:00:22.088368 - 2025-09-01T15:00:26.342724 - got prompt 2025-09-01T15:00:27.137497 - loaded completely 0.0 11350.067443847656 True 2025-09-01T15:00:27.148464 - Patching comfy attention to use sageattn2025-09-01T15:00:27.148464 - 2025-09-01T15:00:29.963897 - 15%|████████████▍ | 3/20 [00:02<00:17, 1.03s/it]2025-09-01T15:00:30.838878 - FETCH ComfyRegistry Data: 55/962025-09-01T15:00:30.839380 - 2025-09-01T15:00:39.316242 - 55%|█████████████████████████████████████████████ | 11/20 [00:12<00:10, 1.17s/it]2025-09-01T15:00:39.731658 - FETCH ComfyRegistry Data: 60/962025-09-01T15:00:39.737147 - 2025-09-01T15:00:47.708181 - 90%|█████████████████████████████████████████████████████████████████████████▊ | 18/20 [00:20<00:02, 1.20s/it]2025-09-01T15:00:48.592029 - FETCH ComfyRegistry Data: 65/962025-09-01T15:00:48.592029 - 2025-09-01T15:00:50.028377 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:22<00:00, 1.18s/it]2025-09-01T15:00:50.029376 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:22<00:00, 1.14s/it]2025-09-01T15:00:50.029376 - 2025-09-01T15:00:50.030370 - Restoring initial comfy attention2025-09-01T15:00:50.030370 - 2025-09-01T15:00:51.666354 - loaded completely 0.0 159.87335777282715 True 2025-09-01T15:00:52.027193 - comfyui lumi batcher overwrite task done2025-09-01T15:00:52.027193 - 2025-09-01T15:00:52.033721 - Prompt executed in 25.69 seconds 2025-09-01T15:00:57.732591 - FETCH ComfyRegistry 
Data: 70/962025-09-01T15:00:57.732591 - 2025-09-01T15:01:02.463536 - got prompt 2025-09-01T15:01:03.089034 - loaded completely 0.0 11350.067443847656 True 2025-09-01T15:01:03.098004 - Patching comfy attention to use sageattn2025-09-01T15:01:03.098004 - 2025-09-01T15:01:05.773837 - 15%|████████████▍ | 3/20 [00:02<00:17, 1.01s/it]2025-09-01T15:01:06.618083 - FETCH ComfyRegistry Data: 75/962025-09-01T15:01:06.618083 - 2025-09-01T15:01:15.225214 - 55%|█████████████████████████████████████████████ | 11/20 [00:12<00:10, 1.17s/it]2025-09-01T15:01:15.559178 - FETCH ComfyRegistry Data: 80/962025-09-01T15:01:15.559178 - 2025-09-01T15:01:23.462415 - 90%|█████████████████████████████████████████████████████████████████████████▊ | 18/20 [00:20<00:02, 1.18s/it]2025-09-01T15:01:24.532134 - FETCH ComfyRegistry Data: 85/962025-09-01T15:01:24.532134 - 2025-09-01T15:01:25.823332 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:22<00:00, 1.18s/it]2025-09-01T15:01:25.824332 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:22<00:00, 1.14s/it]2025-09-01T15:01:25.824332 - 2025-09-01T15:01:25.824332 - Restoring initial comfy attention2025-09-01T15:01:25.825360 - 2025-09-01T15:01:27.465611 - loaded completely 0.0 159.87335777282715 True 2025-09-01T15:01:27.822201 - comfyui lumi batcher overwrite task done2025-09-01T15:01:27.822201 - 2025-09-01T15:01:27.825191 - Prompt executed in 25.36 seconds 2025-09-01T15:01:33.620743 - FETCH ComfyRegistry Data: 90/962025-09-01T15:01:33.620743 - 2025-09-01T15:01:43.439162 - FETCH ComfyRegistry Data: 95/962025-09-01T15:01:43.439162 - 2025-09-01T15:01:45.474384 - FETCH ComfyRegistry Data [DONE]2025-09-01T15:01:45.474384 - 2025-09-01T15:01:45.625519 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes 2025-09-01T15:01:45.715218 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2025-09-01T15:01:45.715218 - 2025-09-01T15:01:47.345156 - got prompt 2025-09-01T15:01:47.361993 - Requested to load Flux 2025-09-01T15:01:47.516137 - [DONE]2025-09-01T15:01:47.516137 - 2025-09-01T15:01:47.591887 - [ComfyUI-Manager] All startup tasks have been completed. 
2025-09-01T15:01:48.028249 - loaded completely 0.0 11350.067443847656 True 2025-09-01T15:02:12.473821 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:24<00:00, 1.26s/it]2025-09-01T15:02:12.474817 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:24<00:00, 1.22s/it]2025-09-01T15:02:12.474817 - 2025-09-01T15:02:14.322793 - loaded completely 0.0 159.87335777282715 True 2025-09-01T15:02:14.683971 - comfyui lumi batcher overwrite task done2025-09-01T15:02:14.683971 - 2025-09-01T15:02:14.685964 - Prompt executed in 27.34 seconds 2025-09-01T15:02:19.663561 - got prompt 2025-09-01T15:02:20.178226 - loaded completely 0.0 11350.067443847656 True 2025-09-01T15:02:44.645309 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:24<00:00, 1.25s/it]2025-09-01T15:02:44.646305 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:24<00:00, 1.22s/it]2025-09-01T15:02:44.646305 - 2025-09-01T15:02:46.461739 - loaded completely 0.0 159.87335777282715 True 2025-09-01T15:02:46.818816 - comfyui lumi batcher overwrite task done2025-09-01T15:02:46.818816 - 2025-09-01T15:02:46.820813 - Prompt executed in 27.16 seconds 2025-09-01T15:02:53.822858 - got prompt 2025-09-01T15:02:53.845781 - Requested to load Flux 2025-09-01T15:02:54.625791 - loaded completely 0.0 11350.067443847656 True 2025-09-01T15:02:54.635757 - Patching comfy attention to use sageattn2025-09-01T15:02:54.635757 - 2025-09-01T15:03:17.948689 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:23<00:00, 1.19s/it]2025-09-01T15:03:17.950687 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:23<00:00, 1.17s/it]2025-09-01T15:03:17.951683 - 2025-09-01T15:03:17.953675 - Restoring initial comfy attention2025-09-01T15:03:17.953675 - 2025-09-01T15:03:19.664335 - loaded completely 0.0 159.87335777282715 True 2025-09-01T15:03:20.021412 - comfyui lumi batcher overwrite task done2025-09-01T15:03:20.021412 - 2025-09-01T15:03:20.027392 - Prompt executed in 26.20 seconds 2025-09-01T15:03:29.398450 - got prompt 2025-09-01T15:03:30.062602 - loaded completely 0.0 11350.067443847656 True 2025-09-01T15:03:30.071572 - Patching comfy attention to use sageattn2025-09-01T15:03:30.071572 - 2025-09-01T15:03:52.914438 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:22<00:00, 1.17s/it]2025-09-01T15:03:52.915434 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:22<00:00, 1.14s/it]2025-09-01T15:03:52.915434 - 2025-09-01T15:03:52.916431 - Restoring initial comfy attention2025-09-01T15:03:52.916431 - 2025-09-01T15:03:54.593879 - loaded completely 0.0 159.87335777282715 True 2025-09-01T15:03:54.936053 - comfyui lumi batcher overwrite task done2025-09-01T15:03:54.937051 - 2025-09-01T15:03:54.938046 - Prompt executed in 25.54 seconds 2025-09-01T15:05:17.191259 - got prompt 2025-09-01T15:05:17.823083 - loaded completely 0.0 11350.067443847656 True 2025-09-01T15:05:17.832052 - Patching comfy attention to use sageattn2025-09-01T15:05:17.833050 - 2025-09-01T15:05:40.406129 - 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:22<00:00, 1.16s/it]2025-09-01T15:05:40.406129 - 
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:22<00:00, 1.13s/it]2025-09-01T15:05:40.406999 - 2025-09-01T15:05:40.407512 - Restoring initial comfy attention2025-09-01T15:05:40.407512 - 2025-09-01T15:05:42.007023 - loaded completely 0.0 159.87335777282715 True 2025-09-01T15:05:42.368928 - comfyui lumi batcher overwrite task done2025-09-01T15:05:42.368928 - 2025-09-01T15:05:42.370921 - Prompt executed in 25.18 seconds 2025-09-01T15:18:47.644508 - got prompt 2025-09-01T15:18:59.759942 - T5Encoder: 29%|███████████████████████████████████████████████████████▏ | 7/24 [00:00<00:00, 69.54it/s]2025-09-01T15:18:59.794825 - T5Encoder: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 24/24 [00:00<00:00, 177.06it/s]2025-09-01T15:18:59.795821 - 2025-09-01T15:19:00.091094 - T5Encoder: 0%| | 0/24 [00:00<?, ?it/s]2025-09-01T15:19:00.139935 - T5Encoder: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 24/24 [00:00<00:00, 501.63it/s]2025-09-01T15:19:00.140927 - 2025-09-01T15:19:16.616558 - CUDA Compute Capability: 8.9 2025-09-01T15:19:16.616558 - Detected model in_channels: 16 2025-09-01T15:19:16.617551 - Model cross attention type: t2v, num_heads: 40, num_layers: 40 2025-09-01T15:19:16.618547 - Model variant detected: 14B 2025-09-01T15:19:17.079385 - model_type FLOW 2025-09-01T15:19:17.592839 - Loading LoRA: Wan2.2加速\Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32 with strength: 3.0 2025-09-01T15:19:17.807323 - Using accelerate to load and assign model weights to device... 
2025-09-01T15:19:24.630193 - Loading transformer parameters to cuda:0: 83%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 906/1095 [00:06<00:00, 966.70it/s]2025-09-01T15:19:24.655110 - Loading transformer parameters to cuda:0: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1095/1095 [00:06<00:00, 159.93it/s]2025-09-01T15:19:24.656107 - 2025-09-01T15:19:24.657103 - Using 1052 LoRA weight patches for WanVideo model 2025-09-01T15:19:25.598506 - sigmas: tensor([1.0000, 0.9600, 0.8889, 0.7272, 0.0000]) 2025-09-01T15:19:25.599502 - timesteps: tensor([999, 959, 888, 727], device='cuda:0') 2025-09-01T15:19:25.599502 - Using per-step cfg list: [2.0, 1.0, 1.0, 1.0] 2025-09-01T15:19:25.973765 - Input sequence length: 31050 2025-09-01T15:19:25.973765 - Sampling 89 frames at 720x480 with 4 steps 2025-09-01T15:21:45.420019 - 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [02:19<00:00, 26.10s/it]2025-09-01T15:21:45.421016 - 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [02:19<00:00, 34.77s/it]2025-09-01T15:21:45.421016 - 2025-09-01T15:21:47.906432 - Allocated memory: memory=0.097 GB 2025-09-01T15:21:47.906432 - Max allocated memory: max_memory=12.714 GB 2025-09-01T15:21:47.907430 - Max reserved memory: max_reserved=12.938 GB 2025-09-01T15:21:48.211045 - CUDA Compute Capability: 8.9 2025-09-01T15:21:48.211045 - Detected model in_channels: 16 2025-09-01T15:21:48.212042 - Model cross attention type: t2v, num_heads: 40, num_layers: 40 2025-09-01T15:21:48.213039 - Model variant detected: 14B 2025-09-01T15:21:48.689720 - model_type FLOW 2025-09-01T15:21:49.195154 - Loading LoRA: Wan2.2加速\Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32 with strength: 1.0 2025-09-01T15:21:49.294817 - Using accelerate to load and assign model weights to device... 2025-09-01T15:21:58.279127 - Loading transformer parameters to cuda:0: 51%|█████████████████████████████████████████████████████████████████████████████▊ | 553/1095 [00:08<00:18, 29.40it/s]2025-09-01T15:21:58.373810 - Loading transformer parameters to cuda:0: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1095/1095 [00:09<00:00, 120.63it/s]2025-09-01T15:21:58.373810 - 2025-09-01T15:21:58.386767 - Using 1052 LoRA weight patches for WanVideo model 2025-09-01T15:21:58.555691 - sigmas: tensor([0.]) 2025-09-01T15:21:58.556688 - timesteps: tensor([], device='cuda:0', dtype=torch.int64) 2025-09-01T15:22:00.877498 - Input sequence length: 31050 2025-09-01T15:22:00.878495 - Sampling 89 frames at 720x480 with 0 steps 2025-09-01T15:22:01.295938 - 0it [00:00, ?it/s]2025-09-01T15:22:01.297932 - 0it [00:00, ?it/s]2025-09-01T15:22:01.297932 - 2025-09-01T15:22:01.847336 - Allocated memory: memory=0.053 GB 2025-09-01T15:22:01.847336 - Max allocated memory: max_memory=7.034 GB 2025-09-01T15:22:01.847336 - Max reserved memory: max_reserved=7.094 GB 2025-09-01T15:22:02.003449 - !!! Exception during processing !!! 
cannot access local variable 'callback_latent' where it is not associated with a value
2025-09-01T15:22:02.009429 - Traceback (most recent call last):
  File "G:\comfyui\ComfyUI\execution.py", line 496, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "G:\comfyui\ComfyUI\execution.py", line 315, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "G:\comfyui\ComfyUI\execution.py", line 289, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "G:\comfyui\ComfyUI\execution.py", line 277, in process_inputs
    result = f(**inputs)
  File "G:\comfyui\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 3988, in process
    "samples": callback_latent.unsqueeze(0).cpu() if callback is not None else None,
    ^^^^^^^^^^^^^^^
UnboundLocalError: cannot access local variable 'callback_latent' where it is not associated with a value
2025-09-01T15:22:02.017402 - comfyui lumi batcher overwrite task done
2025-09-01T15:22:02.024379 - Prompt executed in 194.37 seconds
2025-09-01T15:26:18.533291 - got prompt
2025-09-01T15:26:18.565355 - Using accelerate to load and assign model weights to device...
2025-09-01T15:26:25.545938 - Loading transformer parameters to cuda:0: 64%|█████████████████████████████████████████████████████████████████████████████████████████████████▌ | 698/1095 [00:06<00:00, 569.60it/s]2025-09-01T15:26:25.588794 - Loading transformer parameters to cuda:0: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1095/1095 [00:07<00:00, 155.95it/s]2025-09-01T15:26:25.588794 - 2025-09-01T15:26:25.589791 - Using 1052 LoRA weight patches for WanVideo model 2025-09-01T15:26:25.652581 - sigmas: tensor([1.0000, 0.9600, 0.8889, 0.7272, 0.0000]) 2025-09-01T15:26:25.652581 - timesteps: tensor([999, 959, 888, 727], device='cuda:0') 2025-09-01T15:26:25.654574 - Using per-step cfg list: [2.0, 1.0, 1.0, 1.0] 2025-09-01T15:26:26.099813 - Input sequence length: 31050 2025-09-01T15:26:26.099813 - Sampling 89 frames at 720x480 with 4 steps 2025-09-01T15:28:34.761959 - 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [02:08<00:00, 24.73s/it]2025-09-01T15:28:34.763951 - 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [02:08<00:00, 32.06s/it]2025-09-01T15:28:34.764948 - 2025-09-01T15:28:37.388334 - Allocated memory: memory=0.097 GB 2025-09-01T15:28:37.389330 - Max allocated memory: max_memory=11.671 GB 2025-09-01T15:28:37.390934 - Max reserved memory: max_reserved=12.312 GB 2025-09-01T15:28:37.402353 - Using accelerate to load and assign model weights to device... 2025-09-01T15:28:42.447834 - Loading transformer parameters to cuda:0: 47%|████████████████████████████████████████████████████████████████████████▍ | 518/1095 [00:05<00:02, 266.27it/s]2025-09-01T15:28:42.541592 - Loading transformer parameters to cuda:0: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1095/1095 [00:05<00:00, 213.15it/s]2025-09-01T15:28:42.541592 - 2025-09-01T15:28:42.542513 - Using 1052 LoRA weight patches for WanVideo model 2025-09-01T15:28:42.585369 - sigmas: tensor([0.]) 2025-09-01T15:28:42.585369 - timesteps: tensor([], device='cuda:0', dtype=torch.int64) 2025-09-01T15:28:42.834138 - Input sequence length: 31050 2025-09-01T15:28:42.835132 - Sampling 89 frames at 720x480 with 0 steps 2025-09-01T15:28:43.084302 - 0it [00:00, ?it/s]2025-09-01T15:28:43.089282 - 0it [00:00, ?it/s]2025-09-01T15:28:43.089282 - 2025-09-01T15:28:43.383958 - Allocated memory: memory=0.053 GB 2025-09-01T15:28:43.383958 - Max allocated memory: max_memory=7.034 GB 2025-09-01T15:28:43.384950 - Max reserved memory: max_reserved=7.094 GB 2025-09-01T15:28:43.414091 - !!! Exception during processing !!! 
cannot access local variable 'callback_latent' where it is not associated with a value
2025-09-01T15:28:43.418074 - Traceback (most recent call last):
  File "G:\comfyui\ComfyUI\execution.py", line 496, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "G:\comfyui\ComfyUI\execution.py", line 315, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "G:\comfyui\ComfyUI\execution.py", line 289, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "G:\comfyui\ComfyUI\execution.py", line 277, in process_inputs
    result = f(**inputs)
  File "G:\comfyui\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 3988, in process
    "samples": callback_latent.unsqueeze(0).cpu() if callback is not None else None,
    ^^^^^^^^^^^^^^^
UnboundLocalError: cannot access local variable 'callback_latent' where it is not associated with a value
2025-09-01T15:28:43.423058 - comfyui lumi batcher overwrite task done
2025-09-01T15:28:43.428044 - Prompt executed in 144.88 seconds
```

## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
Workflow too large. Please manually upload the workflow from local file system.
```

## Additional Context
(Please add any additional context or steps to reproduce the error here)

What is causing this error, and how can I fix it?

### ComfyUI 0.3.56 WanVideoSampler UnboundLocalError: analysis and fix

#### Root cause
In the WanVideoSampler node of ComfyUI 0.3.56, `UnboundLocalError: cannot access local variable 'callback_latent' where it is not associated with a value` means a local variable is read before any assignment to it has executed. The usual ways this happens:

1. **Uncovered branch**: `callback_latent` is only assigned on some execution paths (for example inside the sampling loop or one branch of a conditional); when those paths are skipped, the variable is still unbound by the time `nodes.py` line 3988 builds the return value.
2. **Callback timing**: the node mixes video sampling with latent-space operations, and the variable may be read before the asynchronous callback that assigns it has ever run.
3. **Scope boundaries**: if the assignment happens in a nested function or callback chain without the proper `nonlocal`/`global` declaration, the outer local is never actually set.

In this particular log the trigger is visible: just before the exception, the second sampler pass reports `sigmas: tensor([0.])`, `timesteps: tensor([], ...)` and `Sampling 89 frames at 720x480 with 0 steps`, so the sampling loop never executes and `callback_latent` is never assigned.

#### Fix

##### Core fix
Initialize the variable before it can be read, so every code path has a value:

```python
# Give callback_latent a default where it is defined. The surrounding
# structure (video_source_ready, decode_frames, ...) is illustrative,
# not the actual WanVideoWrapper code.
class WanVideoSampler:
    def process_frames(self):
        callback_latent = None  # initialize so every branch has a value
        if video_source_ready:
            callback_latent = self.decode_frames()
        else:
            callback_latent = self.load_default_latent()
        # Safe to use from here on
        return self.apply_sampling(callback_latent)
```

##### Further hardening

1. **Protect asynchronous assignment** (the callback must be nested inside the enclosing function so `nonlocal` can bind to it):

```python
import time

def execute_sampling():
    callback_latent = None  # critical initialization before the async work

    def video_callback(data):
        nonlocal callback_latent  # bind to the enclosing function's variable
        callback_latent = process_video_data(data)

    start_video_processing(video_callback)  # asynchronous operation

    # Simple polling wait until the callback has produced a value
    while callback_latent is None:
        time.sleep(0.01)

    return post_process(callback_latent)
```

2. **Cover every branch**:

```python
def generate_latent():
    latent = None
    if conditionA:
        latent = methodA()
    elif conditionB:
        latent = methodB()
    else:
        latent = default_latent()  # a fallback branch is mandatory
    assert latent is not None  # defensive check
    return latent
```

#### Debugging tips
1. Launch ComfyUI with `--verbose` to narrow down where the failure occurs.
2. Add a temporary log line in the node code:
```python
print(f"Callback latent state: {locals().get('callback_latent', 'UNDEFINED')}")
```
3. Review the video/latent preparation logic and make sure every error path still returns a default value.

#### Prevention
1. Make optional values explicit with type hints:
```python
from typing import Optional

def process_frame() -> Optional[LatentTensor]:  # LatentTensor is a placeholder type
    ...
```
2. Run a static analyzer that flags possibly-uninitialized variables:
```bash
pip install pylint
pylint your_custom_nodes.py
```
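To make the failure pattern in this report concrete, here is a minimal, self-contained sketch. Only the line quoted in the traceback (`"samples": callback_latent.unsqueeze(0).cpu() if callback is not None else None`) is known from the report; everything else (`run_sampler`, `timesteps`, the list-based stand-in for the latent) is an assumption for illustration, not the real ComfyUI-WanVideoWrapper code.

```python
# Minimal sketch of the zero-step failure and the guard against it.
# run_sampler, timesteps and the list "latent" are illustrative stand-ins.

def run_sampler(timesteps, callback=None):
    callback_latent = None  # initialize so a zero-step run still has a value

    latent = [0.0, 0.0, 0.0, 0.0]            # stand-in for the latent tensor
    for t in timesteps:                       # with 0 steps this loop never runs
        latent = [x + t for x in latent]      # stand-in for one denoising step
        if callback is not None:
            callback(latent)
            callback_latent = latent          # only assigned inside the loop

    # Guard: require both a callback and an assigned value, mirroring the
    # failing expression from the traceback without the unbound read.
    samples = callback_latent if (callback is not None and callback_latent is not None) else None
    return {"samples": samples}

# A zero-step run reproduces the situation from the log without crashing:
print(run_sampler(timesteps=[], callback=print))          # {'samples': None}
print(run_sampler(timesteps=[0.1, 0.2], callback=print))  # {'samples': [0.3, ...]}
```

On the workflow side, the same log shows that the second WanVideoSampler pass received 0 steps; giving each sampler pass at least one step (by adjusting the total steps or the start/end step split between the two passes) avoids the code path that triggers the error. Updating ComfyUI-WanVideoWrapper to the latest version is also worth trying.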
阅读全文

相关推荐

# ComfyUI Error Report ## Error Details - **Node ID:** 16 - **Node Type:** IPAdapterUnifiedLoader - **Exception Type:** Exception - **Exception Message:** IPAdapter model not found. ## Stack Trace File "D:\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 361, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 236, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 208, in _map_node_over_list process_inputs(input_dict, i) File "D:\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 197, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI-aki-v1.6\ComfyUI\custom_nodes\comfyui_ipadapter_plus\IPAdapterPlus.py", line 599, in load_models raise Exception("IPAdapter model not found.") ## System Information - **ComfyUI Version:** 0.3.43 - **Arguments:** D:\ComfyUI-aki-v1.6\ComfyUI\main.py --auto-launch --preview-method auto --disable-cuda-malloc --fast - **OS:** nt - **Python Version:** 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)] - **Embedded Python:** false - **PyTorch Version:** 2.7.0+cu128 ## Devices - **Name:** cuda:0 NVIDIA GeForce RTX 5060 Ti : cudaMallocAsync - **Type:** cuda - **VRAM Total:** 17102733312 - **VRAM Free:** 14107617272 - **Torch VRAM Total:** 16777216

# ComfyUI Error Report ## Error Details - **Node ID:** 23 - **Node Type:** IPAdapterInsightFaceLoader - **Exception Type:** Exception - **Exception Message:** No module named 'insightface' ## Stack Trace File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 361, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 236, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 208, in _map_node_over_list process_inputs(input_dict, i) File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 197, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_ipadapter_plus\IPAdapterPlus.py", line 615, in load_insightface return (insightface_loader(provider),) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_ipadapter_plus\utils.py", line 150, in insightface_loader raise Exception(e) ## System Information - **ComfyUI Version:** 0.3.41 - **Arguments:** ComfyUI\main.py - **OS:** nt - **Python Version:** 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)] - **Embedded Python:** true - **PyTorch Version:** 2.7.1+cu128 ## Devices - **Name:** cuda:0 NVIDIA GeForce

管理器 中央处理器 0% 随机存取存储器 34% 温度 55° 显存 49% 图形处理单元 9% 显示图像源 🐍 1 KeyError 78 KSampler Anime girl with green hair, wearing green and beige embellishment clothes, the background is the bedroom 文本 Google 翻译 原文 提供更好的翻译建议 繁忙航空 个人资料 社区 发布 训练 文档 K采样器 'c_crossattn' # ComfyUI Error Report ## Error Details - **Node ID:** 78 - **Node Type:** KSampler - **Exception Type:** KeyError - **Exception Message:** 'c_crossattn' ## Stack Trace File "E:\ComfyUI-aki\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 347, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ComfyUI-aki\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 222, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ComfyUI-aki\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 194, in _map_node_over_list process_inputs(input_dict, i) File "E:\ComfyUI-aki\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 183, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ComfyUI-aki\ComfyUI-aki-v1.6\ComfyUI\nodes.py", line 1522, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ComfyUI-aki\ComfyUI-aki-v1.6\ComfyUI\nodes.py", line 1489, in common_ksampler

# ComfyUI Error Report ## Error Details - **Node ID:** 4 - **Node Type:** CheckpointLoaderSimple - **Exception Type:** RuntimeError - **Exception Message:** ERROR: Could not detect model type of: C:\Users\shanye\Documents\ComfyUI\models\checkpoints\Instamodel_1 Workflow + LoRA\Instamodel_1.safetensors ## Stack Trace File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 496, in execute output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 315, in get_output_data return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 289, in _async_map_node_over_list await process_inputs(input_dict, i) File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 277, in process_inputs result = f(**inputs) ^^^^^^^^^^^ File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\nodes.py", line 578, in load_checkpoint out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings")) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd.py", line 1039, in load_checkpoint_guess_config raise RuntimeError("ERROR: Could not detect model type of: {}\n{}".format(ckpt_path, model_detection_error_hint(ckpt_path, sd))) ## System Information - **ComfyUI Version:** 0.3.56 - **Arguments:** C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\main.py --user-directory C:\Users\shanye\Documents\ComfyUI\user --input-directory C:\Users\shanye\Documents\ComfyUI\input --output-directory C:\Users\shanye\Documents\ComfyUI\output --front-end-root C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\web_custom_versions\desktop_app --base-directory C:\Users\shanye\Documents\ComfyUI --extra-model-paths-config C:\Users\shanye\AppData\Roaming\ComfyUI\extra_models_config.yaml --log-stdout --listen 127.0.0.1 --port 8000 - **OS:** nt - **Python Version:** 3.12.11 (main, Aug 18 2025, 19:17:54) [MSC v.1944 64 bit (AMD64)] - **Embedded Python:** false - **PyTorch Version:** 2.8.0+cu129 ## Devices - **Name:** cuda:0 NVIDIA GeForce RTX 3060 Laptop GPU : cudaMallocAsync - **Type:** cuda - **VRAM Total:** 6441926656 - **VRAM Free:** 5370806272 - **Torch VRAM Total:** 0 - **Torch VRAM Free:** 0 ## Logs 2025-09-04T11:13:39.610304 - Adding extra search path custom_nodes 
C:\Users\shanye\Documents\ComfyUI\custom_nodes 2025-09-04T11:13:39.610304 - Adding extra search path download_model_base C:\Users\shanye\Documents\ComfyUI\models 2025-09-04T11:13:39.610304 - Adding extra search path custom_nodes C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes 2025-09-04T11:13:39.610304 - Setting output directory to: C:\Users\shanye\Documents\ComfyUI\output 2025-09-04T11:13:39.610304 - Setting input directory to: C:\Users\shanye\Documents\ComfyUI\input 2025-09-04T11:13:39.610304 - Setting user directory to: C:\Users\shanye\Documents\ComfyUI\user 2025-09-04T11:13:40.313409 - [START] Security scan2025-09-04T11:13:40.313409 - 2025-09-04T11:13:41.675613 - [DONE] Security scan2025-09-04T11:13:41.675613 - 2025-09-04T11:13:41.775803 - ## ComfyUI-Manager: installing dependencies done.2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - ** ComfyUI startup time:2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - 2025-09-04 11:13:41.7752025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - ** Platform:2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - Windows2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - ** Python version:2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - 3.12.11 (main, Aug 18 2025, 19:17:54) [MSC v.1944 64 bit (AMD64)]2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - ** Python executable:2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - C:\Users\shanye\Documents\ComfyUI\.venv\Scripts\python.exe2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - ** ComfyUI Path:2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - ** ComfyUI Base Folder Path:2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - ** User directory:2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - C:\Users\shanye\Documents\ComfyUI\user2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - ** ComfyUI-Manager config path:2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - C:\Users\shanye\Documents\ComfyUI\user\default\ComfyUI-Manager\config.ini2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - ** Log path:2025-09-04T11:13:41.775803 - 2025-09-04T11:13:41.775803 - C:\Users\shanye\Documents\ComfyUI\user\comfyui.log2025-09-04T11:13:41.775803 - 2025-09-04T11:13:42.904189 - [ComfyUI-Manager] Skipped fixing the 'comfyui-frontend-package' dependency because the ComfyUI is outdated. 2025-09-04T11:13:42.904189 - Prestartup times for custom nodes: 2025-09-04T11:13:42.904189 - 3.3 seconds: C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\ComfyUI-Manager 2025-09-04T11:13:42.904189 - 2025-09-04T11:13:44.103451 - Checkpoint files will always be loaded safely. 
2025-09-04T11:13:44.181039 - Total VRAM 6144 MB, total RAM 16075 MB 2025-09-04T11:13:44.181039 - pytorch version: 2.8.0+cu129 2025-09-04T11:13:44.181039 - Set vram state to: NORMAL_VRAM 2025-09-04T11:13:44.181039 - Device: cuda:0 NVIDIA GeForce RTX 3060 Laptop GPU : cudaMallocAsync 2025-09-04T11:13:45.119456 - Using pytorch attention 2025-09-04T11:13:46.242679 - Python version: 3.12.11 (main, Aug 18 2025, 19:17:54) [MSC v.1944 64 bit (AMD64)] 2025-09-04T11:13:46.242679 - ComfyUI version: 0.3.56 2025-09-04T11:13:46.262300 - [Prompt Server] web root: C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\web_custom_versions\desktop_app 2025-09-04T11:13:47.026643 - ### Loading: ComfyUI-Manager (V3.36) 2025-09-04T11:13:47.026643 - [ComfyUI-Manager] network_mode: public 2025-09-04T11:13:47.036119 - ### ComfyUI Revision: UNKNOWN (The currently installed ComfyUI is not a Git repository) 2025-09-04T11:13:47.052419 - Import times for custom nodes: 2025-09-04T11:13:47.052928 - 0.0 seconds: C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\websocket_image_save.py 2025-09-04T11:13:47.052928 - 0.0 seconds: C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\ComfyUI-Manager 2025-09-04T11:13:47.052928 - 2025-09-04T11:13:47.292879 - Failed to initialize database. Please ensure you have installed the latest requirements. If the error persists, please report this as in future the database will be required: (sqlite3.OperationalError) unable to open database file (Background on this error at: https://sqlalche.me/e/20/e3q8) 2025-09-04T11:13:47.312519 - Starting server 2025-09-04T11:13:47.313524 - To see the GUI go to: http://127.0.0.1:8000 2025-09-04T11:13:47.628776 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json 2025-09-04T11:13:47.658220 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json 2025-09-04T11:13:47.797495 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json 2025-09-04T11:13:48.035699 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json 2025-09-04T11:13:48.082459 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json 2025-09-04T11:13:48.177134 - comfyui-frontend-package not found in requirements.txt 2025-09-04T11:13:48.447275 - comfyui-frontend-package not found in requirements.txt 2025-09-04T11:13:54.407289 - FETCH ComfyRegistry Data: 5/962025-09-04T11:13:54.407289 - 2025-09-04T11:14:01.787097 - FETCH ComfyRegistry Data: 10/962025-09-04T11:14:01.787097 - 2025-09-04T11:14:08.713963 - FETCH ComfyRegistry Data: 15/962025-09-04T11:14:08.713963 - 2025-09-04T11:14:15.351666 - FETCH ComfyRegistry Data: 20/962025-09-04T11:14:15.351666 - 2025-09-04T11:14:22.429376 - FETCH ComfyRegistry Data: 25/962025-09-04T11:14:22.429376 - 2025-09-04T11:14:29.356493 - FETCH ComfyRegistry Data: 30/962025-09-04T11:14:29.356493 - 2025-09-04T11:14:37.063216 - FETCH ComfyRegistry Data: 
35/962025-09-04T11:14:37.063216 - 2025-09-04T11:14:44.307279 - FETCH ComfyRegistry Data: 40/962025-09-04T11:14:44.307279 - 2025-09-04T11:14:51.189416 - FETCH ComfyRegistry Data: 45/962025-09-04T11:14:51.189416 - 2025-09-04T11:14:57.943608 - got prompt 2025-09-04T11:14:57.961871 - Warning, This is not a checkpoint file, trying to load it as a diffusion model only. 2025-09-04T11:14:57.962863 - !!! Exception during processing !!! ERROR: Could not detect model type of: C:\Users\shanye\Documents\ComfyUI\models\checkpoints\Instamodel_1 Workflow + LoRA\Instamodel_1.safetensors 2025-09-04T11:14:57.964864 - Traceback (most recent call last): File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 496, in execute output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 315, in get_output_data return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 289, in _async_map_node_over_list await process_inputs(input_dict, i) File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 277, in process_inputs result = f(**inputs) ^^^^^^^^^^^ File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\nodes.py", line 578, in load_checkpoint out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings")) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\shanye\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sd.py", line 1039, in load_checkpoint_guess_config raise RuntimeError("ERROR: Could not detect model type of: {}\n{}".format(ckpt_path, model_detection_error_hint(ckpt_path, sd))) RuntimeError: ERROR: Could not detect model type of: C:\Users\shanye\Documents\ComfyUI\models\checkpoints\Instamodel_1 Workflow + LoRA\Instamodel_1.safetensors 2025-09-04T11:14:57.966494 - Prompt executed in 0.02 seconds ## Attached Workflow Please make sure that workflow does not contain any sensitive information such as API keys or passwords. 
{"id":"415d3cd4-ff2f-432f-beed-1c037c7f683a","revision":0,"last_node_id":12,"last_link_id":11,"nodes":[{"id":8,"type":"VAEDecode","pos":[1206.800048828125,139.6000518798828],"size":[210,46],"flags":{},"order":5,"mode":0,"inputs":[{"localized_name":"Latent","name":"samples","type":"LATENT","link":7},{"localized_name":"vae","name":"vae","type":"VAE","link":8}],"outputs":[{"localized_name":"图像","name":"IMAGE","type":"IMAGE","slot_index":0,"links":[11]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":5,"type":"EmptyLatentImage","pos":[546.2001342773438,566.1336669921875],"size":[315,106],"flags":{},"order":0,"mode":0,"inputs":[{"localized_name":"宽度","name":"width","type":"INT","widget":{"name":"width"},"link":null},{"localized_name":"高度","name":"height","type":"INT","widget":{"name":"height"},"link":null},{"localized_name":"批量大小","name":"batch_size","type":"INT","widget":{"name":"batch_size"},"link":null}],"outputs":[{"localized_name":"Latent","name":"LATENT","type":"LATENT","slot_index":0,"links":[2]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"EmptyLatentImage"},"widgets_values":[960,1280,1]},{"id":7,"type":"CLIPTextEncode","pos":[404.3333740234375,331.6666564941406],"size":[425.27801513671875,180.6060791015625],"flags":{},"order":3,"mode":0,"inputs":[{"localized_name":"clip","name":"clip","type":"CLIP","link":5},{"localized_name":"文本","name":"text","type":"STRING","widget":{"name":"text"},"link":null}],"outputs":[{"localized_name":"条件","name":"CONDITIONING","type":"CONDITIONING","slot_index":0,"links":[6]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"CLIPTextEncode"},"widgets_values":["4koma, comic, censored, jpeg artifacts, blurry, lowres, worst quality, low quality, bad anatomy, bad hands, signature, watermark, text, logo, artist name, censored, patreon username, logo, flag, name tag, deformed, crazy eyes, bad eyes, squint, frown, extra fingers, fewer fingers, SmoothNegative_Hands-neg"]},{"id":6,"type":"CLIPTextEncode","pos":[399,121.33334350585938],"size":[422.84503173828125,164.31304931640625],"flags":{},"order":2,"mode":0,"inputs":[{"localized_name":"clip","name":"clip","type":"CLIP","link":3},{"localized_name":"文本","name":"text","type":"STRING","widget":{"name":"text"},"link":null}],"outputs":[{"localized_name":"条件","name":"CONDITIONING","type":"CONDITIONING","slot_index":0,"links":[4]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"CLIPTextEncode"},"widgets_values":["girl, graffiti artist, lean body, medium breasts, punk hairstyle, multicolored hair, blonde hair, bright pink hair, white hoodie, neon pink runes on hoodie, form fitting black pants, White sneakers, bubblegum bubble, leaning with her back on a wall, one foot on the wall, one foot on the ground, hands in pocket, punk atmosphere, neon lighting, masterpiece, Wall covered in graffiti, detailed, absurdres, cowboy 
shot"]},{"id":3,"type":"KSampler","pos":[865.6666870117188,131.99998474121094],"size":[315,262],"flags":{},"order":4,"mode":0,"inputs":[{"localized_name":"模型","name":"model","type":"MODEL","link":1},{"localized_name":"正面条件","name":"positive","type":"CONDITIONING","link":4},{"localized_name":"负面条件","name":"negative","type":"CONDITIONING","link":6},{"localized_name":"Latent图像","name":"latent_image","type":"LATENT","link":2},{"localized_name":"种子","name":"seed","type":"INT","widget":{"name":"seed"},"link":null},{"localized_name":"步数","name":"steps","type":"INT","widget":{"name":"steps"},"link":null},{"localized_name":"cfg","name":"cfg","type":"FLOAT","widget":{"name":"cfg"},"link":null},{"localized_name":"采样器名称","name":"sampler_name","type":"COMBO","widget":{"name":"sampler_name"},"link":null},{"localized_name":"调度器","name":"scheduler","type":"COMBO","widget":{"name":"scheduler"},"link":null},{"localized_name":"降噪","name":"denoise","type":"FLOAT","widget":{"name":"denoise"},"link":null}],"outputs":[{"localized_name":"Latent","name":"LATENT","type":"LATENT","slot_index":0,"links":[7]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"KSampler"},"widgets_values":[612215308838764,"randomize",24,7,"euler","beta",1]},{"id":4,"type":"CheckpointLoaderSimple","pos":[5.333343505859375,196.6666717529297],"size":[315,98],"flags":{},"order":1,"mode":0,"inputs":[{"localized_name":"Checkpoint名称","name":"ckpt_name","type":"COMBO","widget":{"name":"ckpt_name"},"link":null}],"outputs":[{"localized_name":"模型","name":"MODEL","type":"MODEL","slot_index":0,"links":[1]},{"localized_name":"CLIP","name":"CLIP","type":"CLIP","slot_index":1,"links":[3,5]},{"localized_name":"VAE","name":"VAE","type":"VAE","slot_index":2,"links":[8]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"CheckpointLoaderSimple","models":[{"name":"v1-5-pruned-emaonly-fp16.safetensors","url":"https://huggingface.co/Comfy-Org/stable-diffusion-v1-5-archive/resolve/main/v1-5-pruned-emaonly-fp16.safetensors?download=true","directory":"checkpoints"}]},"widgets_values":["Instamodel_1 Workflow + LoRA\\Instamodel_1.safetensors"]},{"id":11,"type":"SaveImage","pos":[1452.80029296875,141.66671752929688],"size":[210,270],"flags":{},"order":6,"mode":0,"inputs":[{"localized_name":"图片","name":"images","type":"IMAGE","link":11},{"localized_name":"文件名前缀","name":"filename_prefix","type":"STRING","widget":{"name":"filename_prefix"},"link":null}],"outputs":[],"properties":{"cnr_id":"comfy-core","ver":"0.3.56"},"widgets_values":["ComfyUI"]}],"links":[[1,4,0,3,0,"MODEL"],[2,5,0,3,3,"LATENT"],[3,4,1,6,0,"CLIP"],[4,6,0,3,1,"CONDITIONING"],[5,4,1,7,0,"CLIP"],[6,7,0,3,2,"CONDITIONING"],[7,3,0,8,0,"LATENT"],[8,4,2,8,1,"VAE"],[11,8,0,11,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.9090909090909094,"offset":[28.583954807873738,165.70971363866818]}},"version":0.4} ## Additional Context (Please add any additional context or steps to reproduce the error here)

# ComfyUI Error Report ## Error Details - **Node ID:** 9 - **Node Type:** KSampler - **Exception Type:** RuntimeError - **Exception Message:** #### It seems that models and clips are mixed and interconnected between SDXL Base, SDXL Refiner, SD1.x, and SD2.x. Please verify. #### ## Stack Trace File "D:\xiaoqiqi\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 349, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\xiaoqiqi\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 224, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\xiaoqiqi\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 196, in _map_node_over_list process_inputs(input_dict, i) File "D:\xiaoqiqi\ComfyUI-aki-v1.6\ComfyUI\execution.py", line 185, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\xiaoqiqi\ComfyUI-aki-v1.6\ComfyUI\nodes.py", line 1516, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\xiaoqiqi\ComfyUI-aki-v1.6\ComfyUI\nodes.py", line 1483, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 什么意思

# ComfyUI Error Report ## Error Details - **Node ID:** 24 - **Node Type:** IPAdapterInsightFaceLoader - **Exception Type:** AssertionError - **Exception Message:** N/A ## Stack Trace File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 361, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 236, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 208, in _map_node_over_list process_inputs(input_dict, i) File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 197, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_ipadapter_plus\IPAdapterPlus.py", line 615, in load_insightface return (insightface_loader(provider),) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_ipadapter_plus\utils.py", line 153, in insightface_loader model = FaceAnalysis(name="buffalo_l", root=path, providers=[provider + 'ExecutionProvider',]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\insightface\app\face_analysis.py", line 43, in __init__ assert 'detection' in self.mode
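The assertion fires inside insightface's `FaceAnalysis` constructor when it finds no detection model in the model directory, which usually means the `buffalo_l` pack never finished downloading or sits in the wrong folder. A quick standalone check outside ComfyUI, assuming the default `models/insightface` root used by comfyui_ipadapter_plus (adjust the path to your install):

```python
# Standalone reproduction outside ComfyUI. Assumptions: the node uses the buffalo_l pack
# and the default insightface root under ComfyUI/models/insightface. If the pack is
# incomplete, FaceAnalysis finds no 'detection' model and raises the same AssertionError
# as in the report above.
import os
from insightface.app import FaceAnalysis

root = r"E:\ComfyUI_windows_portable\ComfyUI\models\insightface"   # assumed location
pack = os.path.join(root, "models", "buffalo_l")
print(os.listdir(pack) if os.path.isdir(pack) else "buffalo_l folder missing")

app = FaceAnalysis(name="buffalo_l", root=root, providers=["CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))                         # succeeds once the pack is complete
```

If the `buffalo_l` folder is missing, empty, or holds partially downloaded `.onnx` files, deleting it and letting insightface re-download the pack (or placing a manually downloaded copy there) typically clears the error.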

# ComfyUI Error Report ## Error Details - **Node ID:** 42 - **Node Type:** WanVideoSampler - **Exception Type:** AttributeError - **Exception Message:** 'WanModel' object has no attribute 'vace_patch_embedding' ## Stack Trace File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 361, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 236, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 208, in _map_node_over_list process_inputs(input_dict, i) File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 197, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 3325, in process noise_pred, self.cache_state = predict_with_cfg( ^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 2617, in predict_with_cfg noise_pred_cond, cache_state_cond = transformer( ^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 1617, in forward vace_hints = self.forward_vace(x, data["context"], data["seq_len"], kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 1196, in forward_vace c = [self.vace_patch_embedding(u.unsqueeze(0).float()).to(x.dtype) for u in vace_context] ^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\diffusers\models\modeling_utils.py", line 291, in __getattr__ return super().__getattr__(name) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1940, in __getattr__ raise AttributeError( ## System Information - **ComfyUI Version:** 0.3.43 - **Arguments:** ComfyUI\main.py --windows-standalone-build --listen 0.0.0.0 --enable-cors-header * - **OS:** nt - **Python Version:** 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)] - **Embedded Python:** true - **PyTorch Version:** 2.7.1+cu128 ## Devices - **Name:** cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync - **Type:** cuda - **VRAM Total:** 25769279488 - **VRAM Free:** 24347803648 - **Torch VRAM Total:** 67108864 - **Torch VRAM Free:** 50200576 ## Logs 2025-07-24T17:17:54.612419 - 2025-07-24T17:17:54.663419 - INFO: IPAdapter model loaded from F:\ComfyUI_Mie_V6.0\ComfyUI\models\ipadapter\ip-adapter-plus_sd15.safetensors2025-07-24T17:17:54.663419 - 2025-07-24T17:17:54.663419 - end_vram - start_vram: 2537542306 - 
2537542306 = 02025-07-24T17:17:54.663419 - 2025-07-24T17:17:54.664419 - #11 [IPAdapterUnifiedLoader]: 0.05s - vram 0b2025-07-24T17:17:54.664419 - 2025-07-24T17:17:54.924153 - end_vram - start_vram: 2639727650 - 2537542306 = 1021853442025-07-24T17:17:54.924153 - 2025-07-24T17:17:54.924670 - #10 [IPAdapterAdvanced]: 0.26s - vram 102185344b2025-07-24T17:17:54.924670 - 2025-07-24T17:17:54.931880 - end_vram - start_vram: 2733990564 - 2635766434 = 982241302025-07-24T17:17:54.931880 - 2025-07-24T17:17:54.931880 - #14 [RescaleCFG]: 0.01s - vram 98224130b2025-07-24T17:17:54.931880 - 2025-07-24T17:17:54.933882 - Requested to load BaseModel 2025-07-24T17:17:55.233473 - loaded completely 19197.46322517395 1639.406135559082 True 2025-07-24T17:18:23.076833 - 100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:27<00:00, 1.07it/s]2025-07-24T17:18:23.076833 - 100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:27<00:00, 1.08it/s]2025-07-24T17:18:23.076833 - 2025-07-24T17:18:23.078833 - end_vram - start_vram: 4711053742 - 2733990562 = 19770631802025-07-24T17:18:23.078833 - 2025-07-24T17:18:23.078833 - #3 [KSampler]: 28.15s - vram 1977063180b2025-07-24T17:18:23.078833 - 2025-07-24T17:18:23.711202 - end_vram - start_vram: 8100460554 - 3605411690 = 44950488642025-07-24T17:18:23.711239 - 2025-07-24T17:18:23.711239 - #8 [VAEDecode]: 0.63s - vram 4495048864b2025-07-24T17:18:23.711754 - 2025-07-24T17:18:23.864142 - end_vram - start_vram: 3605411690 - 3605411690 = 02025-07-24T17:18:23.864142 - 2025-07-24T17:18:23.864142 - #15 [SaveImage]: 0.15s - vram 0b2025-07-24T17:18:23.865143 - 2025-07-24T17:18:23.865143 - comfyui lumi batcher overwrite task done2025-07-24T17:18:23.865143 - 2025-07-24T17:18:23.866143 - Prompt executed in 31.05 seconds 2025-07-24T17:18:26.480125 - got prompt 2025-07-24T17:18:53.685302 - 100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:27<00:00, 1.14it/s]2025-07-24T17:18:53.686303 - 100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:27<00:00, 1.11it/s]2025-07-24T17:18:53.686303 - 2025-07-24T17:18:53.686303 - end_vram - start_vram: 4689507218 - 3605411690 = 10840955282025-07-24T17:18:53.686303 - 2025-07-24T17:18:53.687302 - #3 [KSampler]: 27.17s - vram 1084095528b2025-07-24T17:18:53.687302 - 2025-07-24T17:18:54.232028 - end_vram - start_vram: 8100460554 - 3605411690 = 44950488642025-07-24T17:18:54.232028 - 2025-07-24T17:18:54.233027 - #8 [VAEDecode]: 0.54s - vram 4495048864b2025-07-24T17:18:54.233027 - 2025-07-24T17:18:54.380027 - end_vram - start_vram: 3605411690 - 3605411690 = 02025-07-24T17:18:54.380027 - 2025-07-24T17:18:54.380027 - #15 [SaveImage]: 0.15s - vram 0b2025-07-24T17:18:54.380027 - 2025-07-24T17:18:54.381027 - comfyui lumi batcher overwrite task done2025-07-24T17:18:54.382027 - 2025-07-24T17:18:54.385027 - Prompt executed in 27.90 seconds 2025-07-24T17:24:25.352704 - got prompt 2025-07-24T17:24:25.445704 - end_vram - start_vram: 3507187562 - 3507187562 = 02025-07-24T17:24:25.446704 - 2025-07-24T17:24:25.446704 - #20 [LoraLoader]: 0.05s - vram 0b2025-07-24T17:24:25.446704 - 2025-07-24T17:24:25.447704 - Requested to load SD1ClipModel 2025-07-24T17:24:25.792290 - loaded completely 18695.929541397094 235.84423828125 True 2025-07-24T17:24:25.800290 - end_vram - start_vram: 3521486046 - 3507187562 = 142984842025-07-24T17:24:25.800290 - 2025-07-24T17:24:25.800290 - #7 [CLIPTextEncode]: 0.35s - vram 
14298484b2025-07-24T17:24:25.800290 - 2025-07-24T17:24:25.808290 - end_vram - start_vram: 3519593078 - 3507187562 = 124055162025-07-24T17:24:25.808290 - 2025-07-24T17:24:25.809290 - #6 [CLIPTextEncode]: 0.01s - vram 12405516b2025-07-24T17:24:25.809290 - 2025-07-24T17:24:25.810290 - end_vram - start_vram: 3507187562 - 3507187562 = 02025-07-24T17:24:25.810290 - 2025-07-24T17:24:25.810290 - #11 [IPAdapterUnifiedLoader]: 0.00s - vram 0b2025-07-24T17:24:25.810290 - 2025-07-24T17:24:26.053290 - end_vram - start_vram: 3609372906 - 3507187562 = 1021853442025-07-24T17:24:26.053290 - 2025-07-24T17:24:26.054290 - #10 [IPAdapterAdvanced]: 0.24s - vram 102185344b2025-07-24T17:24:26.054290 - 2025-07-24T17:24:26.059290 - end_vram - start_vram: 3703635820 - 3605411690 = 982241302025-07-24T17:24:26.059290 - 2025-07-24T17:24:26.060290 - #14 [RescaleCFG]: 0.01s - vram 98224130b2025-07-24T17:24:26.060290 - 2025-07-24T17:24:26.061290 - Requested to load BaseModel 2025-07-24T17:24:26.644908 - loaded completely 19912.1512134552 1639.406135559082 True 2025-07-24T17:24:53.885378 - 100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:27<00:00, 1.08it/s]2025-07-24T17:24:53.885378 - 100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:27<00:00, 1.10it/s]2025-07-24T17:24:53.885378 - 2025-07-24T17:24:53.886378 - end_vram - start_vram: 4689507218 - 3703635818 = 9858714002025-07-24T17:24:53.886378 - 2025-07-24T17:24:53.886378 - #3 [KSampler]: 27.83s - vram 985871400b2025-07-24T17:24:53.887378 - 2025-07-24T17:24:54.482797 - end_vram - start_vram: 8100460554 - 3605411690 = 44950488642025-07-24T17:24:54.482797 - 2025-07-24T17:24:54.482797 - #8 [VAEDecode]: 0.60s - vram 4495048864b2025-07-24T17:24:54.482797 - 2025-07-24T17:24:54.640169 - end_vram - start_vram: 3605411690 - 3605411690 = 02025-07-24T17:24:54.640169 - 2025-07-24T17:24:54.640169 - #15 [SaveImage]: 0.16s - vram 0b2025-07-24T17:24:54.640169 - 2025-07-24T17:24:54.641169 - comfyui lumi batcher overwrite task done2025-07-24T17:24:54.641169 - 2025-07-24T17:24:54.643410 - Prompt executed in 29.28 seconds 2025-07-24T17:25:09.991282 - got prompt 2025-07-24T17:25:10.085282 - end_vram - start_vram: 3507187562 - 3507187562 = 02025-07-24T17:25:10.085282 - 2025-07-24T17:25:10.085282 - #20 [LoraLoader]: 0.05s - vram 0b2025-07-24T17:25:10.085282 - 2025-07-24T17:25:10.086282 - Requested to load SD1ClipModel 2025-07-24T17:25:10.369134 - loaded completely 18695.929541397094 235.84423828125 True 2025-07-24T17:25:10.377134 - end_vram - start_vram: 3521486046 - 3507187562 = 142984842025-07-24T17:25:10.377134 - 2025-07-24T17:25:10.377134 - #7 [CLIPTextEncode]: 0.29s - vram 14298484b2025-07-24T17:25:10.377134 - 2025-07-24T17:25:10.385134 - end_vram - start_vram: 3519593078 - 3507187562 = 124055162025-07-24T17:25:10.385134 - 2025-07-24T17:25:10.385134 - #6 [CLIPTextEncode]: 0.01s - vram 12405516b2025-07-24T17:25:10.385134 - 2025-07-24T17:25:10.386135 - end_vram - start_vram: 3507187562 - 3507187562 = 02025-07-24T17:25:10.386135 - 2025-07-24T17:25:10.386135 - #11 [IPAdapterUnifiedLoader]: 0.00s - vram 0b2025-07-24T17:25:10.386135 - 2025-07-24T17:25:10.631046 - end_vram - start_vram: 3609372906 - 3507187562 = 1021853442025-07-24T17:25:10.631046 - 2025-07-24T17:25:10.631046 - #10 [IPAdapterAdvanced]: 0.24s - vram 102185344b2025-07-24T17:25:10.631046 - 2025-07-24T17:25:10.637046 - end_vram - start_vram: 3703635820 - 3605411690 = 982241302025-07-24T17:25:10.637046 - 2025-07-24T17:25:10.637046 - #14 
[RescaleCFG]: 0.01s - vram 98224130b2025-07-24T17:25:10.637046 - 2025-07-24T17:25:10.639047 - Requested to load BaseModel 2025-07-24T17:25:11.180881 - loaded completely 19912.1512134552 1639.406135559082 True 2025-07-24T17:25:39.272121 - 100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:28<00:00, 1.08it/s]2025-07-24T17:25:39.272121 - 100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:28<00:00, 1.07it/s]2025-07-24T17:25:39.272121 - 2025-07-24T17:25:39.273121 - end_vram - start_vram: 4689507218 - 3703635818 = 9858714002025-07-24T17:25:39.273121 - 2025-07-24T17:25:39.274121 - #3 [KSampler]: 28.64s - vram 985871400b2025-07-24T17:25:39.274121 - 2025-07-24T17:25:39.888702 - end_vram - start_vram: 8100460554 - 3605411690 = 44950488642025-07-24T17:25:39.888702 - 2025-07-24T17:25:39.888702 - #8 [VAEDecode]: 0.61s - vram 4495048864b2025-07-24T17:25:39.888702 - 2025-07-24T17:25:40.051703 - end_vram - start_vram: 3605411690 - 3605411690 = 02025-07-24T17:25:40.051703 - 2025-07-24T17:25:40.051703 - #15 [SaveImage]: 0.16s - vram 0b2025-07-24T17:25:40.051703 - 2025-07-24T17:25:40.052703 - comfyui lumi batcher overwrite task done2025-07-24T17:25:40.052703 - 2025-07-24T17:25:40.060703 - Prompt executed in 30.06 seconds 2025-07-24T17:43:56.476860 - got prompt 2025-07-24T17:43:57.039587 - end_vram - start_vram: 106744016 - 106744016 = 02025-07-24T17:43:57.039587 - 2025-07-24T17:43:57.039587 - #7 [WanVideoVAELoader]: 0.48s - vram 0b2025-07-24T17:43:57.040588 - 2025-07-24T17:43:57.040588 - end_vram - start_vram: 106744016 - 106744016 = 02025-07-24T17:43:57.040588 - 2025-07-24T17:43:57.040588 - #12 [WanVideoExperimentalArgs]: 0.00s - vram 0b2025-07-24T17:43:57.040588 - 2025-07-24T17:43:57.041588 - end_vram - start_vram: 106744016 - 106744016 = 02025-07-24T17:43:57.041588 - 2025-07-24T17:43:57.041588 - #10 [WanVideoEnhanceAVideo]: 0.00s - vram 0b2025-07-24T17:43:57.041588 - 2025-07-24T17:43:57.042588 - end_vram - start_vram: 106744016 - 106744016 = 02025-07-24T17:43:57.042588 - 2025-07-24T17:43:57.042588 - #21 [WanVideoVACEModelSelect]: 0.00s - vram 0b2025-07-24T17:43:57.042588 - 2025-07-24T17:44:00.282807 - Detected model in_channels: 36 2025-07-24T17:44:00.282807 - Model type: fl2v, num_heads: 40, num_layers: 40 2025-07-24T17:44:00.283807 - Model variant detected: i2v_720 2025-07-24T17:44:00.596667 - model_type FLOW 2025-07-24T17:44:00.597701 - Using accelerate to load and assign model weights to device... 
2025-07-24T17:44:00.801188 - Loading transformer parameters to cpu: 83%|██████████████████████████████▋ | 1080/1304 [00:00<00:00, 5458.86it/s]2025-07-24T17:44:00.839187 - Loading transformer parameters to cpu: 100%|█████████████████████████████████████| 1304/1304 [00:00<00:00, 5446.33it/s]2025-07-24T17:44:00.839187 - 2025-07-24T17:44:00.844187 - end_vram - start_vram: 106744016 - 106744016 = 02025-07-24T17:44:00.844187 - 2025-07-24T17:44:00.844187 - #22 [WanVideoModelLoader]: 3.80s - vram 0b2025-07-24T17:44:00.844187 - 2025-07-24T17:44:02.947231 - end_vram - start_vram: 106744016 - 106744016 = 02025-07-24T17:44:02.948230 - 2025-07-24T17:44:02.948230 - #6 [LoadWanVideoT5TextEncoder]: 2.10s - vram 0b2025-07-24T17:44:02.948230 - 2025-07-24T17:44:02.948230 - Moving video model to cpu 2025-07-24T17:45:21.569551 - end_vram - start_vram: 11728619736 - 106744016 = 116218757202025-07-24T17:45:21.569558 - 2025-07-24T17:45:21.569558 - #8 [WanVideoTextEncode]: 78.62s - vram 11621875720b2025-07-24T17:45:21.569558 - 2025-07-24T17:45:21.624655 - end_vram - start_vram: 115132624 - 115132624 = 02025-07-24T17:45:21.625215 - 2025-07-24T17:45:21.625215 - #34 [LoadImage]: 0.05s - vram 0b2025-07-24T17:45:21.625215 - 2025-07-24T17:45:21.626235 - end_vram - start_vram: 115132624 - 115132624 = 02025-07-24T17:45:21.626242 - 2025-07-24T17:45:21.626242 - #19 [INTConstant]: 0.00s - vram 0b2025-07-24T17:45:21.626242 - 2025-07-24T17:45:21.671234 - end_vram - start_vram: 115132624 - 115132624 = 02025-07-24T17:45:21.671234 - 2025-07-24T17:45:21.671234 - #24 [ImageResizeKJ]: 0.04s - vram 0b2025-07-24T17:45:21.671234 - 2025-07-24T17:45:21.738079 - end_vram - start_vram: 115132624 - 115132624 = 02025-07-24T17:45:21.738592 - 2025-07-24T17:45:21.739108 - #30 [ImageResizeKJ]: 0.07s - vram 0b2025-07-24T17:45:21.739108 - 2025-07-24T17:45:21.782786 - end_vram - start_vram: 115132624 - 115132624 = 02025-07-24T17:45:21.782786 - 2025-07-24T17:45:21.782786 - #35 [ImageResizeKJ]: 0.04s - vram 0b2025-07-24T17:45:21.782786 - 2025-07-24T17:45:21.915384 - end_vram - start_vram: 115132624 - 115132624 = 02025-07-24T17:45:21.915902 - 2025-07-24T17:45:21.915902 - #29 [WanVideoVACEStartToEndFrame]: 0.13s - vram 0b2025-07-24T17:45:21.915902 - 2025-07-24T17:45:21.916414 - end_vram - start_vram: 115132624 - 115132624 = 02025-07-24T17:45:21.916918 - 2025-07-24T17:45:21.917002 - #40 [GetImageRangeFromBatch]: 0.00s - vram 0b2025-07-24T17:45:21.917002 - 2025-07-24T17:45:21.981944 - end_vram - start_vram: 115132624 - 115132624 = 02025-07-24T17:45:21.981944 - 2025-07-24T17:45:21.981944 - #25 [WanVideoVACEStartToEndFrame]: 0.06s - vram 0b2025-07-24T17:45:21.981944 - 2025-07-24T17:45:21.995235 - end_vram - start_vram: 115132624 - 115132624 = 02025-07-24T17:45:21.995235 - 2025-07-24T17:45:21.995235 - #41 [MaskBatchMulti]: 0.01s - vram 0b2025-07-24T17:45:21.995235 - 2025-07-24T17:45:22.039203 - end_vram - start_vram: 115132624 - 115132624 = 02025-07-24T17:45:22.039203 - 2025-07-24T17:45:22.039203 - #37 [ImageBatchMulti]: 0.04s - vram 0b2025-07-24T17:45:22.039708 - 2025-07-24T17:45:22.039713 - end_vram - start_vram: 115132624 - 115132624 = 02025-07-24T17:45:22.040227 - 2025-07-24T17:45:22.040227 - #26 [GetImageSizeAndCount]: 0.00s - vram 0b2025-07-24T17:45:22.040227 - 2025-07-24T17:45:22.883828 - input_masks shape2025-07-24T17:45:22.883828 - 2025-07-24T17:45:22.884341 - torch.Size([81, 496, 896])2025-07-24T17:45:22.884341 - 2025-07-24T17:45:34.863567 - VAE encoding: 100%|██████████████████████████████████████████████████████████████████████| 9/9 
[00:11<00:00, 1.30s/it]2025-07-24T17:45:34.863567 - VAE encoding: 100%|██████████████████████████████████████████████████████████████████████| 9/9 [00:11<00:00, 1.32s/it]2025-07-24T17:45:34.864077 - 2025-07-24T17:45:46.589102 - VAE encoding: 100%|██████████████████████████████████████████████████████████████████████| 9/9 [00:11<00:00, 1.30s/it]2025-07-24T17:45:46.589612 - VAE encoding: 100%|██████████████████████████████████████████████████████████████████████| 9/9 [00:11<00:00, 1.30s/it]2025-07-24T17:45:46.589612 - 2025-07-24T17:45:46.600743 - refs shape2025-07-24T17:45:46.600743 - 2025-07-24T17:45:46.600743 - torch.Size([1, 3, 1, 496, 896])2025-07-24T17:45:46.600743 - 2025-07-24T17:45:46.715123 - VAE encoding: 56%|██████████████████████████████████████▉ | 5/9 [00:00<00:00, 44.31it/s]2025-07-24T17:45:46.805051 - VAE encoding: 100%|██████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 44.38it/s]2025-07-24T17:45:46.805583 - 2025-07-24T17:45:46.870662 - end_vram - start_vram: 2252473238 - 115132624 = 21373406142025-07-24T17:45:46.870662 - 2025-07-24T17:45:46.871171 - #16 [WanVideoVACEEncode]: 24.83s - vram 2137340614b2025-07-24T17:45:46.871171 - 2025-07-24T17:45:46.873216 - timesteps: tensor([1000.0000, 983.7124, 964.5367, 941.6301, 913.7864, 879.2166, 835.1485, 777.0445, 696.9283, 579.3684, 390.1099, 34.7826], device='cuda:0') 2025-07-24T17:45:46.887141 - Seq len: 38192 2025-07-24T17:49:31.510530 - Sampling 85 frames at 896x496 with 12 steps 2025-07-24T17:49:37.849674 - 0%| | 0/12 [00:00<?, ?it/s]2025-07-24T17:49:44.296375 - 0%| | 0/12 [00:06<?, ?it/s]2025-07-24T17:49:44.297376 - 2025-07-24T17:49:44.747159 - !!! Exception during processing !!! 'WanModel' object has no attribute 'vace_patch_embedding' 2025-07-24T17:49:44.783076 - Traceback (most recent call last): File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 361, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 236, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 208, in _map_node_over_list process_inputs(input_dict, i) File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 197, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 3325, in process noise_pred, self.cache_state = predict_with_cfg( ^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 2617, in predict_with_cfg noise_pred_cond, cache_state_cond = transformer( ^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 1617, in forward vace_hints = self.forward_vace(x, data["context"], data["seq_len"], kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 1196, in forward_vace c = [self.vace_patch_embedding(u.unsqueeze(0).float()).to(x.dtype) for u in vace_context] ^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\diffusers\models\modeling_utils.py", line 291, in __getattr__ return super().__getattr__(name) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1940, in __getattr__ raise AttributeError( AttributeError: 'WanModel' object has no attribute 'vace_patch_embedding'. Did you mean: 'patch_embedding'? 2025-07-24T17:49:44.785980 - end_vram - start_vram: 34183609272 - 173795536 = 340098137362025-07-24T17:49:44.785980 - 2025-07-24T17:49:44.785980 - #42 [WanVideoSampler]: 237.91s - vram 34009813736b2025-07-24T17:49:44.785980 - 2025-07-24T17:49:44.798561 - comfyui lumi batcher overwrite task done2025-07-24T17:49:44.798968 - 2025-07-24T17:49:44.868694 - Prompt executed in 348.38 seconds 2025-07-24T18:17:32.104770 - [ComfyUI-Manager] The ComfyRegistry cache update is still in progress, so an outdated cache is being used.2025-07-24T18:17:32.105296 - 2025-07-24T18:17:33.147891 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2025-07-24T18:17:33.147891 - 2025-07-24T18:17:36.036752 - [DONE]2025-07-24T18:17:36.036752 - 2025-07-24T18:17:36.120191 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json2025-07-24T18:17:36.120191 - 2025-07-24T18:17:37.292226 - [DONE]2025-07-24T18:17:37.292226 - 2025-07-24T18:17:37.313785 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extras.json2025-07-24T18:17:37.313785 - 2025-07-24T18:17:38.420206 - [DONE]2025-07-24T18:17:38.420897 - 2025-07-24T18:17:38.602146 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json2025-07-24T18:17:38.602146 - 2025-07-24T18:17:39.726647 - [DONE]2025-07-24T18:17:39.726647 - 2025-07-24T18:21:32.720524 - got prompt 2025-07-24T18:21:36.403235 - timesteps: tensor([1000.0000, 983.7124, 964.5367, 941.6301, 913.7864, 879.2166, 835.1485, 777.0445, 696.9283, 579.3684, 390.1099, 34.7826], device='cuda:0') 2025-07-24T18:21:36.571239 - Seq len: 38192 2025-07-24T18:21:36.578239 - Sampling 85 frames at 896x496 with 12 steps 2025-07-24T18:21:36.834814 - 0%| | 0/12 [00:00<?, ?it/s]2025-07-24T18:21:40.522225 - 0%| | 0/12 [00:03<?, ?it/s]2025-07-24T18:21:40.522225 - 2025-07-24T18:21:40.567079 - !!! Exception during processing !!! 
'WanModel' object has no attribute 'vace_patch_embedding' 2025-07-24T18:21:40.572236 - Traceback (most recent call last): File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 361, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 236, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 208, in _map_node_over_list process_inputs(input_dict, i) File "F:\ComfyUI_Mie_V6.0\ComfyUI\execution.py", line 197, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 3325, in process noise_pred, self.cache_state = predict_with_cfg( ^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 2617, in predict_with_cfg noise_pred_cond, cache_state_cond = transformer( ^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 1617, in forward vace_hints = self.forward_vace(x, data["context"], data["seq_len"], kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 1196, in forward_vace c = [self.vace_patch_embedding(u.unsqueeze(0).float()).to(x.dtype) for u in vace_context] ^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\diffusers\models\modeling_utils.py", line 291, in __getattr__ return super().__getattr__(name) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_Mie_V6.0\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1940, in __getattr__ raise AttributeError( AttributeError: 'WanModel' object has no attribute 'vace_patch_embedding'. Did you mean: 'patch_embedding'? 2025-07-24T18:21:40.574784 - end_vram - start_vram: 34183609272 - 32966353744 = 12172555282025-07-24T18:21:40.574784 - 2025-07-24T18:21:40.575297 - #42 [WanVideoSampler]: 7.68s - vram 1217255528b2025-07-24T18:21:40.575297 - 2025-07-24T18:21:40.575811 - comfyui lumi batcher overwrite task done2025-07-24T18:21:40.575811 - 2025-07-24T18:21:40.577866 - Prompt executed in 7.84 seconds ## Attached Workflow Please make sure that workflow does not contain any sensitive information such as API keys or passwords. Workflow too large. Please manually upload the workflow from local file system. ## Additional Context (Please add any additional context or steps to reproduce the error here) What is the problem, and how can it be fixed?
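The loader log in this report shows "Model variant detected: i2v_720", i.e. a plain image-to-video checkpoint, while the workflow routes WanVideoVACEEncode embeds into the sampler; the wrapper then looks up `vace_patch_embedding`, a layer that only VACE checkpoints contain, hence the AttributeError. A rough way to confirm what a checkpoint actually contains before building the graph (illustrative helper, not part of ComfyUI-WanVideoWrapper):

```python
# Illustrative helper: inspect a Wan checkpoint's tensor names to see whether it contains
# the VACE layers that WanVideoVACEEncode / the VACE path in WanVideoSampler rely on.
from safetensors import safe_open

def has_vace_layers(checkpoint_path: str) -> bool:
    with safe_open(checkpoint_path, framework="pt", device="cpu") as f:
        return any("vace" in key for key in f.keys())

# Hypothetical path - point it at the file selected in WanVideoModelLoader.
print(has_vace_layers(r"F:\ComfyUI_Mie_V6.0\ComfyUI\models\diffusion_models\wan2.1_i2v_720p.safetensors"))
```

If the check prints False for the loaded model, either remove the VACE nodes (WanVideoVACEModelSelect, WanVideoVACEEncode) from the workflow or load a Wan checkpoint that includes the VACE layers instead.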

# ComfyUI Error Report ## Error Details - **Node ID:** 5 - **Node Type:** Joy_caption_two - **Exception Type:** RuntimeError - **Exception Message:** All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]), device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))] ## Stack Trace File "C:\comfyui2\ComfyUI\execution.py", line 349, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) File "C:\comfyui2\ComfyUI\execution.py", line 224, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) File "C:\comfyui2\ComfyUI\execution.py", line 196, in _map_node_over_list process_inputs(input_dict, i) File "C:\comfyui2\ComfyUI\execution.py", line 185, in process_inputs results.append(getattr(obj, func)(**inputs)) File "C:\comfyui2\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two\joy_caption_two_node.py", line 407, in generate generate_ids = text_model.generate(input_ids, inputs_embeds=input_embeds, attention_mask=attention_mask, File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py", line 2215, in generate result = self._sample( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py", line 3206, in _sample outputs = self(**model_inputs, return_dict=True) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 1190, in forward outputs = self.model( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 945, in forward layer_outputs = decoder_layer( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 676, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File 
"C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 559, in forward query_states = self.q_proj(hidden_states) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\peft\tuners\lora\bnb.py", line 467, in forward result = self.base_layer(x, *args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\nn\modules.py", line 484, in forward return bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state).to(inp_dtype) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd\_functions.py", line 533, in matmul_4bit return MatMul4Bit.apply(A, B, out, bias, quant_state) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\function.py", line 575, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd\_functions.py", line 462, in forward output = torch.nn.functional.linear(A, F.dequantize_4bit(B, quant_state).to(A.dtype).t(), bias) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 1363, in dequantize_4bit is_on_gpu([A, absmax, out]) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 464, in is_on_gpu raise RuntimeError( ## System Information - **ComfyUI Version:** 0.3.33 - **Arguments:** main.py - **OS:** nt - **Python Version:** 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] - **Embedded Python:** false - **PyTorch Version:** 2.7.0+cu126 ## Devices - **Name:** cuda:0 NVIDIA GeForce GTX 1660 Ti : cudaMallocAsync - **Type:** cuda - **VRAM Total:** 6442123264 - **VRAM Free:** 5379194880 - **Torch VRAM Total:** 0 - **Torch VRAM Free:** 0 ## Logs 2025-05-12T12:33:00.245025 - Adding extra search path checkpoints C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\Stable-diffusion 2025-05-12T12:33:00.245025 - Adding extra search path configs C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\Stable-diffusion 2025-05-12T12:33:00.246022 - Adding extra search path vae C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\VAE 2025-05-12T12:33:00.246022 - Adding extra search path loras C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\Lora 2025-05-12T12:33:00.246022 - Adding extra search path loras C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\LyCORIS 2025-05-12T12:33:00.247019 - Adding extra search path 
upscale_models C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\ESRGAN 2025-05-12T12:33:00.247019 - Adding extra search path upscale_models C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\RealESRGAN 2025-05-12T12:33:00.247019 - Adding extra search path upscale_models C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\SwinIR 2025-05-12T12:33:00.247019 - Adding extra search path embeddings C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\embeddings 2025-05-12T12:33:00.247019 - Adding extra search path hypernetworks C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\models\hypernetworks 2025-05-12T12:33:00.248017 - Adding extra search path controlnet C:\baidunetdiskdownload\sd-webui-aki\sd-webui-aki-v4.10\extensions\sd-webui-controlnet 2025-05-12T12:33:00.823554 - [START] Security scan2025-05-12T12:33:00.823554 - 2025-05-12T12:33:02.367066 - [DONE] Security scan2025-05-12T12:33:02.367066 - 2025-05-12T12:33:02.643316 - ## ComfyUI-Manager: installing dependencies done.2025-05-12T12:33:02.643316 - 2025-05-12T12:33:02.643316 - ** ComfyUI startup time:2025-05-12T12:33:02.644221 - 2025-05-12T12:33:02.644221 - 2025-05-12 12:33:02.6432025-05-12T12:33:02.644221 - 2025-05-12T12:33:02.644221 - ** Platform:2025-05-12T12:33:02.644221 - 2025-05-12T12:33:02.644221 - Windows2025-05-12T12:33:02.645181 - 2025-05-12T12:33:02.645181 - ** Python version:2025-05-12T12:33:02.645181 - 2025-05-12T12:33:02.645181 - 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]2025-05-12T12:33:02.645181 - 2025-05-12T12:33:02.645181 - ** Python executable:2025-05-12T12:33:02.645181 - 2025-05-12T12:33:02.646180 - C:\Users\张三\AppData\Local\Programs\Python\Python310\python.exe2025-05-12T12:33:02.646180 - 2025-05-12T12:33:02.646180 - ** ComfyUI Path:2025-05-12T12:33:02.646180 - 2025-05-12T12:33:02.646180 - C:\comfyui2\ComfyUI2025-05-12T12:33:02.646180 - 2025-05-12T12:33:02.646180 - ** ComfyUI Base Folder Path:2025-05-12T12:33:02.647178 - 2025-05-12T12:33:02.647178 - C:\comfyui2\ComfyUI2025-05-12T12:33:02.647178 - 2025-05-12T12:33:02.647178 - ** User directory:2025-05-12T12:33:02.647178 - 2025-05-12T12:33:02.648176 - C:\comfyui2\ComfyUI\user2025-05-12T12:33:02.648176 - 2025-05-12T12:33:02.648176 - ** ComfyUI-Manager config path:2025-05-12T12:33:02.648176 - 2025-05-12T12:33:02.648176 - C:\comfyui2\ComfyUI\user\default\ComfyUI-Manager\config.ini2025-05-12T12:33:02.648176 - 2025-05-12T12:33:02.648176 - ** Log path:2025-05-12T12:33:02.648176 - 2025-05-12T12:33:02.648176 - C:\comfyui2\ComfyUI\user\comfyui.log2025-05-12T12:33:02.648176 - 2025-05-12T12:33:04.301068 - Prestartup times for custom nodes: 2025-05-12T12:33:04.302066 - 0.0 seconds: C:\comfyui2\ComfyUI\custom_nodes\comfyui-easy-use 2025-05-12T12:33:04.302066 - 4.1 seconds: C:\comfyui2\ComfyUI\custom_nodes\ComfyUI-Manager 2025-05-12T12:33:04.302066 - 2025-05-12T12:33:06.807275 - Checkpoint files will always be loaded safely. 
2025-05-12T12:33:07.017820 - Total VRAM 6144 MB, total RAM 16294 MB 2025-05-12T12:33:07.018365 - pytorch version: 2.7.0+cu126 2025-05-12T12:33:09.352975 - xformers version: 0.0.30 2025-05-12T12:33:09.352975 - Set vram state to: NORMAL_VRAM 2025-05-12T12:33:09.352975 - Device: cuda:0 NVIDIA GeForce GTX 1660 Ti : cudaMallocAsync 2025-05-12T12:33:09.686612 - Using xformers attention 2025-05-12T12:33:11.295329 - Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] 2025-05-12T12:33:11.295743 - ComfyUI version: 0.3.33 2025-05-12T12:33:11.358578 - ComfyUI frontend version: 1.18.10 2025-05-12T12:33:11.361570 - [Prompt Server] web root: C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\comfyui_frontend_package\static 2025-05-12T12:33:13.337748 - Module 'diffusers' load failed. If you don't have it installed, do it:2025-05-12T12:33:13.338688 - 2025-05-12T12:33:13.339179 - pip install diffusers2025-05-12T12:33:13.339179 - 2025-05-12T12:33:13.688119 - [ComfyUI-Easy-Use] server: v1.3.0 Loaded2025-05-12T12:33:13.688119 - 2025-05-12T12:33:13.688119 - [ComfyUI-Easy-Use] web root: C:\comfyui2\ComfyUI\custom_nodes\comfyui-easy-use\web_version/v2 Loaded2025-05-12T12:33:13.689074 - 2025-05-12T12:33:13.717123 - ### Loading: ComfyUI-Manager (V3.31.13) 2025-05-12T12:33:13.717123 - [ComfyUI-Manager] network_mode: public 2025-05-12T12:33:14.072667 - ### ComfyUI Version: v0.3.33-10-g577de83c | Released on '2025-05-11' 2025-05-12T12:33:14.311459 - Import times for custom nodes: 2025-05-12T12:33:14.311459 - 0.0 seconds: C:\comfyui2\ComfyUI\custom_nodes\websocket_image_save.py 2025-05-12T12:33:14.313453 - 0.0 seconds: C:\comfyui2\ComfyUI\custom_nodes\aigodlike-comfyui-translation 2025-05-12T12:33:14.313453 - 0.0 seconds: C:\comfyui2\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two 2025-05-12T12:33:14.314451 - 0.0 seconds: C:\comfyui2\ComfyUI\custom_nodes\comfyui-custom-scripts 2025-05-12T12:33:14.314451 - 0.6 seconds: C:\comfyui2\ComfyUI\custom_nodes\ComfyUI-Manager 2025-05-12T12:33:14.315449 - 0.7 seconds: C:\comfyui2\ComfyUI\custom_nodes\comfyui-easy-use 2025-05-12T12:33:14.315449 - 2025-05-12T12:33:14.362371 - Starting server 2025-05-12T12:33:14.363371 - To see the GUI go to: http://127.0.0.1:8188 2025-05-12T12:33:19.620936 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json 2025-05-12T12:33:19.678088 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json 2025-05-12T12:33:19.845706 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json 2025-05-12T12:33:19.955392 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json 2025-05-12T12:33:20.436522 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json 2025-05-12T12:34:14.934119 - Cannot connect to comfyregistry.2025-05-12T12:34:14.934119 - 2025-05-12T12:34:14.940242 - FETCH DATA from: 
https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2025-05-12T12:34:14.940242 - 2025-05-12T12:34:19.785536 - got prompt 2025-05-12T12:34:19.982016 - C:\comfyui2\ComfyUI\models\clip\siglip-so400m-patch14-3842025-05-12T12:34:19.982016 - 2025-05-12T12:34:20.277243 - [DONE]2025-05-12T12:34:20.277243 - 2025-05-12T12:34:20.377933 - [ComfyUI-Manager] All startup tasks have been completed. 2025-05-12T12:34:26.095833 - Loading VLM's custom vision model2025-05-12T12:34:26.096832 - 2025-05-12T12:34:28.260562 - Requested to load SiglipVisionTransformer 2025-05-12T12:34:28.272048 - loaded completely 9.5367431640625e+25 809.1729736328125 True 2025-05-12T12:34:53.793004 - Requested to load ImageAdapter 2025-05-12T12:34:53.793004 - loaded completely 9.5367431640625e+25 41.0390625 True 2025-05-12T12:35:10.488040 - Loading tokenizer2025-05-12T12:35:10.488040 - 2025-05-12T12:35:11.002348 - Loading LLM: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit2025-05-12T12:35:11.003346 - 2025-05-12T12:35:11.003346 - C:\comfyui2\ComfyUI\models\LLM\Meta-Llama-3.1-8B-Instruct-bnb-4bit2025-05-12T12:35:11.003346 - 2025-05-12T12:35:11.005339 - Successfully modified 'base_model_name_or_path' value in 'C:\comfyui2\ComfyUI\models\Joy_caption_two\text_model\adapter_config.json'.2025-05-12T12:35:11.005339 - 2025-05-12T12:35:24.237554 - Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>. 2025-05-12T12:35:30.940677 - !!! Exception during processing !!! All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]), device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))] 2025-05-12T12:35:30.962005 - Traceback (most recent call last): File "C:\comfyui2\ComfyUI\execution.py", line 349, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) File "C:\comfyui2\ComfyUI\execution.py", line 224, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) File "C:\comfyui2\ComfyUI\execution.py", line 196, in _map_node_over_list process_inputs(input_dict, i) File "C:\comfyui2\ComfyUI\execution.py", line 185, in process_inputs results.append(getattr(obj, func)(**inputs)) File "C:\comfyui2\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two\joy_caption_two_node.py", line 407, in generate generate_ids = text_model.generate(input_ids, inputs_embeds=input_embeds, attention_mask=attention_mask, File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py", line 2215, in generate result = self._sample( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py", line 3206, in _sample outputs = self(**model_inputs, return_dict=True) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File 
"C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 1190, in forward outputs = self.model( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 945, in forward layer_outputs = decoder_layer( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 676, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 559, in forward query_states = self.q_proj(hidden_states) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\peft\tuners\lora\bnb.py", line 467, in forward result = self.base_layer(x, *args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\nn\modules.py", line 484, in forward return bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state).to(inp_dtype) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd\_functions.py", line 533, in matmul_4bit return MatMul4Bit.apply(A, B, out, bias, quant_state) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\function.py", line 575, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd\_functions.py", line 462, in forward output = 
torch.nn.functional.linear(A, F.dequantize_4bit(B, quant_state).to(A.dtype).t(), bias) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 1363, in dequantize_4bit is_on_gpu([A, absmax, out]) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 464, in is_on_gpu raise RuntimeError( RuntimeError: All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]), device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))] 2025-05-12T12:35:30.967286 - Prompt executed in 71.18 seconds 2025-05-12T12:51:12.715731 - got prompt 2025-05-12T12:51:12.907212 - loaded completely 9.5367431640625e+25 809.1729736328125 True 2025-05-12T12:51:36.518268 - loaded completely 9.5367431640625e+25 41.0390625 True 2025-05-12T12:51:53.412534 - !!! Exception during processing !!! All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]), device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))] 2025-05-12T12:51:53.419550 - Traceback (most recent call last): File "C:\comfyui2\ComfyUI\execution.py", line 349, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) File "C:\comfyui2\ComfyUI\execution.py", line 224, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) File "C:\comfyui2\ComfyUI\execution.py", line 196, in _map_node_over_list process_inputs(input_dict, i) File "C:\comfyui2\ComfyUI\execution.py", line 185, in process_inputs results.append(getattr(obj, func)(**inputs)) File "C:\comfyui2\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two\joy_caption_two_node.py", line 407, in generate generate_ids = text_model.generate(input_ids, inputs_embeds=input_embeds, attention_mask=attention_mask, File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py", line 2215, in generate result = self._sample( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py", line 3206, in _sample outputs = self(**model_inputs, return_dict=True) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 1190, in forward outputs = self.model( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File 
"C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 945, in forward layer_outputs = decoder_layer( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 676, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 559, in forward query_states = self.q_proj(hidden_states) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\peft\tuners\lora\bnb.py", line 467, in forward result = self.base_layer(x, *args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\nn\modules.py", line 484, in forward return bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state).to(inp_dtype) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd\_functions.py", line 533, in matmul_4bit return MatMul4Bit.apply(A, B, out, bias, quant_state) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\function.py", line 575, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd\_functions.py", line 462, in forward output = torch.nn.functional.linear(A, F.dequantize_4bit(B, quant_state).to(A.dtype).t(), bias) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 1363, in dequantize_4bit is_on_gpu([A, absmax, out]) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 464, in is_on_gpu raise RuntimeError( RuntimeError: All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]), device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))] 2025-05-12T12:51:53.426451 - Prompt executed 
in 40.69 seconds 2025-05-12T13:27:37.964284 - got prompt 2025-05-12T13:27:38.213613 - loaded completely 9.5367431640625e+25 809.1729736328125 True 2025-05-12T13:28:03.041961 - loaded completely 9.5367431640625e+25 41.0390625 True 2025-05-12T13:28:19.913094 - !!! Exception during processing !!! All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]), device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))] 2025-05-12T13:28:19.919269 - Traceback (most recent call last): File "C:\comfyui2\ComfyUI\execution.py", line 349, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) File "C:\comfyui2\ComfyUI\execution.py", line 224, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) File "C:\comfyui2\ComfyUI\execution.py", line 196, in _map_node_over_list process_inputs(input_dict, i) File "C:\comfyui2\ComfyUI\execution.py", line 185, in process_inputs results.append(getattr(obj, func)(**inputs)) File "C:\comfyui2\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two\joy_caption_two_node.py", line 407, in generate generate_ids = text_model.generate(input_ids, inputs_embeds=input_embeds, attention_mask=attention_mask, File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py", line 2215, in generate result = self._sample( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py", line 3206, in _sample outputs = self(**model_inputs, return_dict=True) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 1190, in forward outputs = self.model( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 945, in forward layer_outputs = decoder_layer( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 676, in forward hidden_states, self_attn_weights, present_key_value = 
self.self_attn( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 559, in forward query_states = self.q_proj(hidden_states) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\peft\tuners\lora\bnb.py", line 467, in forward result = self.base_layer(x, *args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\nn\modules.py", line 484, in forward return bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state).to(inp_dtype) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd\_functions.py", line 533, in matmul_4bit return MatMul4Bit.apply(A, B, out, bias, quant_state) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\function.py", line 575, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd\_functions.py", line 462, in forward output = torch.nn.functional.linear(A, F.dequantize_4bit(B, quant_state).to(A.dtype).t(), bias) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 1363, in dequantize_4bit is_on_gpu([A, absmax, out]) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 464, in is_on_gpu raise RuntimeError( RuntimeError: All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]), device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))] 2025-05-12T13:28:19.925246 - Prompt executed in 41.93 seconds 2025-05-12T14:01:51.529873 - got prompt 2025-05-12T14:01:51.690142 - loaded completely 9.5367431640625e+25 809.1729736328125 True 2025-05-12T14:01:56.251833 - [ComfyUI-Manager] The ComfyRegistry cache update is still in progress, so an outdated cache is being used.2025-05-12T14:01:56.251833 - 2025-05-12T14:01:56.355173 - FETCH DATA from: C:\comfyui2\ComfyUI\user\default\ComfyUI-Manager\cache\1514988643_custom-node-list.json2025-05-12T14:01:56.355173 - 2025-05-12T14:01:56.372727 - [DONE]2025-05-12T14:01:56.372727 - 2025-05-12T14:01:56.465008 - FETCH DATA from: C:\comfyui2\ComfyUI\user\default\ComfyUI-Manager\cache\746607195_github-stats.json2025-05-12T14:01:56.465008 - 
2025-05-12T14:01:56.472987 - [DONE]2025-05-12T14:01:56.473984 - 2025-05-12T14:01:56.476976 - FETCH DATA from: C:\comfyui2\ComfyUI\user\default\ComfyUI-Manager\cache\832903789_extras.json2025-05-12T14:01:56.476976 - 2025-05-12T14:01:56.477973 - [DONE]2025-05-12T14:01:56.477973 - 2025-05-12T14:01:56.646551 - FETCH DATA from: C:\comfyui2\ComfyUI\user\default\ComfyUI-Manager\cache\1742899825_extension-node-map.json2025-05-12T14:01:56.646551 - 2025-05-12T14:01:56.656524 - [DONE]2025-05-12T14:01:56.656524 - 2025-05-12T14:02:15.505597 - loaded completely 9.5367431640625e+25 41.0390625 True 2025-05-12T14:02:31.244068 - !!! Exception during processing !!! All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]), device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))] 2025-05-12T14:02:31.248613 - Traceback (most recent call last): File "C:\comfyui2\ComfyUI\execution.py", line 349, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) File "C:\comfyui2\ComfyUI\execution.py", line 224, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) File "C:\comfyui2\ComfyUI\execution.py", line 196, in _map_node_over_list process_inputs(input_dict, i) File "C:\comfyui2\ComfyUI\execution.py", line 185, in process_inputs results.append(getattr(obj, func)(**inputs)) File "C:\comfyui2\ComfyUI\custom_nodes\ComfyUI_SLK_joy_caption_two\joy_caption_two_node.py", line 407, in generate generate_ids = text_model.generate(input_ids, inputs_embeds=input_embeds, attention_mask=attention_mask, File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py", line 2215, in generate result = self._sample( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py", line 3206, in _sample outputs = self(**model_inputs, return_dict=True) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 1190, in forward outputs = self.model( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 945, in forward layer_outputs = decoder_layer( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File 
"C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 676, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\llama\modeling_llama.py", line 559, in forward query_states = self.q_proj(hidden_states) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\peft\tuners\lora\bnb.py", line 467, in forward result = self.base_layer(x, *args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\nn\modules.py", line 484, in forward return bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state).to(inp_dtype) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd\_functions.py", line 533, in matmul_4bit return MatMul4Bit.apply(A, B, out, bias, quant_state) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\function.py", line 575, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd\_functions.py", line 462, in forward output = torch.nn.functional.linear(A, F.dequantize_4bit(B, quant_state).to(A.dtype).t(), bias) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 1363, in dequantize_4bit is_on_gpu([A, absmax, out]) File "C:\Users\张三\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 464, in is_on_gpu raise RuntimeError( RuntimeError: All input tensors need to be on the same GPU, but found some tensors to not be on a GPU: [(torch.Size([1, 8388608]), device(type='cpu')), (torch.Size([262144]), device(type='cpu')), (torch.Size([4096, 4096]), device(type='cpu'))] 2025-05-12T14:02:31.253592 - Prompt executed in 39.72 seconds ## Attached Workflow Please make sure that workflow does not contain any sensitive information such as API keys or passwords. 
{"id":"00000000-0000-0000-0000-000000000000","revision":0,"last_node_id":6,"last_link_id":6,"nodes":[{"id":2,"type":"LoadImage","pos":[-536.7379150390625,-152.25942993164062],"size":[270,314],"flags":{},"order":0,"mode":0,"inputs":[{"label":"image","localized_name":"图像","name":"image","type":"COMBO","widget":{"name":"image"},"link":null},{"label":"upload","localized_name":"选择文件上传","name":"upload","type":"IMAGEUPLOAD","widget":{"name":"upload"},"link":null}],"outputs":[{"label":"IMAGE","localized_name":"图像","name":"IMAGE","type":"IMAGE","links":[5]},{"label":"MASK","localized_name":"遮罩","name":"MASK","type":"MASK","links":null}],"properties":{"cnr_id":"comfy-core","ver":"0.3.33","Node name for S&R":"LoadImage"},"widgets_values":["CN_6.jpg","image"]},{"id":4,"type":"ShowText|pysssss","pos":[247.84080505371094,74.16905975341797],"size":[366.1156921386719,159.55372619628906],"flags":{},"order":3,"mode":0,"inputs":[{"label":"text","localized_name":"text","name":"text","type":"STRING","link":4}],"outputs":[{"label":"STRING","localized_name":"字符串","name":"STRING","shape":6,"type":"STRING","links":null}],"properties":{"cnr_id":"comfyui-custom-scripts","ver":"1.2.5","Node name for S&R":"ShowText|pysssss"},"widgets_values":[]},{"id":6,"type":"Joy_caption_two_load","pos":[-148.00003051757812,-86.00003051757812],"size":[270,58],"flags":{},"order":1,"mode":0,"inputs":[{"label":"model","localized_name":"model","name":"model","type":"COMBO","widget":{"name":"model"},"link":null}],"outputs":[{"label":"JoyTwoPipeline","localized_name":"JoyTwoPipeline","name":"JoyTwoPipeline","type":"JoyTwoPipeline","links":[6]}],"properties":{"cnr_id":"comfyui_slk_joy_caption_two","ver":"667751cab945bd8e9fb0be4d557d47e36821350a","Node name for S&R":"Joy_caption_two_load"},"widgets_values":["unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit"]},{"id":5,"type":"Joy_caption_two","pos":[-139.20001220703125,33.199951171875],"size":[270,126],"flags":{},"order":2,"mode":0,"inputs":[{"label":"joy_two_pipeline","localized_name":"joy_two_pipeline","name":"joy_two_pipeline","type":"JoyTwoPipeline","link":6},{"label":"image","localized_name":"image","name":"image","type":"IMAGE","link":5},{"label":"caption_type","localized_name":"caption_type","name":"caption_type","type":"COMBO","widget":{"name":"caption_type"},"link":null},{"label":"caption_length","localized_name":"caption_length","name":"caption_length","type":"COMBO","widget":{"name":"caption_length"},"link":null},{"label":"low_vram","localized_name":"low_vram","name":"low_vram","type":"BOOLEAN","widget":{"name":"low_vram"},"link":null}],"outputs":[{"label":"STRING","localized_name":"字符串","name":"STRING","type":"STRING","links":[4]}],"properties":{"cnr_id":"comfyui_slk_joy_caption_two","ver":"667751cab945bd8e9fb0be4d557d47e36821350a","Node name for S&R":"Joy_caption_two"},"widgets_values":["Descriptive","long",true]}],"links":[[4,5,0,4,0,"STRING"],[5,2,0,5,1,"IMAGE"],[6,6,0,5,0,"JoyTwoPipeline"]],"groups":[],"config":{},"extra":{"frontendVersion":"1.18.10"},"version":0.4} ## Additional Context (Please add any additional context or steps to reproduce the error here) 以上代码是什么意思
