
AttributeError: 'NoneType' object has no attribute 'exists' (ComfyUI Portable) #48

Closed
krigeta opened this issue Jan 25, 2025 · 12 comments
krigeta commented Jan 25, 2025

I was installing it for ComfyUI portable and got this error:

[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-01-25 23:47:14.970
** Platform: Windows
** Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct  1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
** Python executable: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\python.exe
** ComfyUI Path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI
** User directory: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\user
** ComfyUI-Manager config path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\user\default\ComfyUI-Manager\config.ini
** Log path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\user\comfyui.log

Prestartup times for custom nodes:
   5.0 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-manager

Total VRAM 8192 MB, total RAM 16310 MB
pytorch version: 2.5.1+cu124
Set vram state to: HIGH_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 SUPER : cudaMallocAsync
Traceback (most recent call last):
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\main.py", line 136, in <module>
    import execution
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 13, in <module>
    import nodes
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\nodes.py", line 22, in <module>
    import comfy.diffusers_load
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\diffusers_load.py", line 3, in <module>
    import comfy.sd
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\sd.py", line 10, in <module>
    from .ldm.cascade.stage_c_coder import StageC_coder
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\ldm\cascade\stage_c_coder.py", line 19, in <module>
    import torchvision
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torchvision\__init__.py", line 10, in <module>
    from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils  # usort:skip
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torchvision\models\__init__.py", line 2, in <module>
    from .convnext import *
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torchvision\models\convnext.py", line 8, in <module>
    from ..ops.misc import Conv2dNormActivation, Permute
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torchvision\ops\__init__.py", line 23, in <module>
    from .poolers import MultiScaleRoIAlign
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torchvision\ops\poolers.py", line 10, in <module>
    from .roi_align import roi_align
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torchvision\ops\roi_align.py", line 7, in <module>
    from torch._dynamo.utils import is_compile_supported
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\__init__.py", line 3, in <module>
    from . import convert_frame, eval_frame, resume_execution
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 31, in <module>
    from torch._dynamo.utils import CompileTimeInstructionCounter
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\utils.py", line 1320, in <module>
    if has_triton_package():
       ^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\utils\_triton.py", line 9, in has_triton_package
    from triton.compiler.compiler import triton_key
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\__init__.py", line 8, in <module>
    msvc_winsdk_inc_dirs, _ = find_msvc_winsdk()
                              ^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\windows_utils.py", line 230, in find_msvc_winsdk
    msvc_inc_dirs, msvc_lib_dirs = find_msvc()
                                   ^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\windows_utils.py", line 146, in find_msvc
    msvc_base_path, version = find_msvc_hardcoded()
                              ^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\windows_utils.py", line 123, in find_msvc_hardcoded
    if not vs_path.exists():
           ^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'exists'
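The crash happens because `find_msvc_hardcoded()` returns `None` for `vs_path` when none of its hardcoded Visual Studio locations exist, and the code then calls `.exists()` on that `None`. A hedged sketch of the guarded shape (the candidate paths below are illustrative, not Triton's actual list):

```python
from pathlib import Path

# Illustrative Visual Studio install locations; when none of these
# exist, the original code ended up calling `None.exists()`.
CANDIDATES = [
    Path(r"C:\Program Files\Microsoft Visual Studio\2022\Community"),
    Path(r"C:\Program Files\Microsoft Visual Studio\2022\BuildTools"),
]

def find_msvc_hardcoded():
    # Guarded version: never hand None to an unchecked .exists() call;
    # raise a clear, actionable error instead of an AttributeError.
    for vs_path in CANDIDATES:
        if vs_path.exists():
            return vs_path
    raise FileNotFoundError(
        "MSVC not found: install Visual Studio with the C++ build tools"
    )
```

On a machine without Visual Studio this now fails with an explicit "MSVC not found" message instead of the opaque `AttributeError` above.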
GraftingRayman commented:

Do you have MSVC installed?

woct0rdho (Owner) commented:

Fixed in 863adf0

But this only turns the crash into a clearer warning that MSVC is not found. You still need to install MSVC and the Windows SDK.
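To check from the same embedded Python whether the MSVC compiler is reachable, a quick stdlib-only probe (this assumes only that `cl.exe` ends up on `PATH` after a correct install, e.g. inside a "x64 Native Tools" Developer Command Prompt):

```python
import shutil

# If MSVC is installed and its environment is active, cl.exe should
# resolve on PATH; Triton needs it to compile generated kernels.
cl = shutil.which("cl")
if cl is None:
    print("cl.exe not found on PATH - Triton will not find MSVC")
else:
    print("cl.exe found at:", cl)
```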


krigeta commented Jan 27, 2025

Hello @woct0rdho @GraftingRayman, yes, I needed to install MSVC (I added it to the PATH manually), but then I got a new error, WARNING: Failed to find CUDA. So I manually installed the latest version of CUDA; now those errors are gone and ComfyUI opens without any error. But when I run my usual workflow, which works fine without Triton (WaveSpeed), I get this error:

KSampler
backend='inductor' raised:
RuntimeError: Triton only support CUDA 10.0 or higher, but got CUDA version: 12.8

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
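The "CUDA 10.0 or higher" wording is misleading: what likely happens is that this Triton build's CUDA-to-PTX version table has no entry for CUDA 12.8, so the lookup falls through to the generic error. A hypothetical sketch of that shape (the version pairs and return values are illustrative, not Triton's real table):

```python
def ptx_get_version_sketch(cuda_version: str) -> int:
    # Map a CUDA toolkit version to a PTX ISA version. A build that
    # predates CUDA 12.8 has no entry for it, so even a perfectly
    # valid newer toolkit falls through to the "10.0 or higher" error.
    known = {(12, 4): 84, (12, 6): 85}  # illustrative values only
    major, minor = map(int, cuda_version.split("."))
    if (major, minor) in known:
        return known[(major, minor)]
    raise RuntimeError(
        "Triton only support CUDA 10.0 or higher, but got CUDA version: "
        + cuda_version
    )
```

If that is the cause, the fix is a Triton build that knows the newer toolkit, or downgrading the CUDA toolkit to a version the installed Triton supports.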

Someone here had the same issue, though slightly different I would say, since the solutions suggested in that comment (which worked for them) are not relevant in my case. I am attaching the log so you can check the issue; I hope there is a workaround for me:

Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --listen --gpu-only --reserve-vram 0.5
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-01-27 22:50:38.389
** Platform: Windows
** Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct  1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
** Python executable: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\python.exe
** ComfyUI Path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI
** User directory: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\user
** ComfyUI-Manager config path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\user\default\ComfyUI-Manager\config.ini
** Log path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\user\comfyui.log

Prestartup times for custom nodes:
  13.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-manager

Total VRAM 8192 MB, total RAM 16310 MB
pytorch version: 2.5.1+cu124
Set vram state to: HIGH_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 SUPER : cudaMallocAsync
Using pytorch attention
[Prompt Server] web root: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\web
[custom_nodes.comfyui_controlnet_aux] | INFO -> Using ckpts path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
[custom_nodes.comfyui_controlnet_aux] | INFO -> Using symlinks: False
[custom_nodes.comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
### Loading: ComfyUI-Manager (V3.7.3)
### ComfyUI Revision: 2980 [ee9547ba] *DETACHED | Released on '2024-12-26'
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
FETCH DATA from: https://api.comfy.org/nodes?page=1&limit=1000[comfyui_controlnet_aux] | INFO -> Using ckpts path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']

Import times for custom nodes:
   0.0 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\websocket_image_save.py
   0.0 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\teacache
   0.0 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI_ADV_CLIP_emb
   0.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_controlnet_aux
   0.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
   0.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-inpaint-nodes
   0.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Comfy-WaveSpeed
   0.4 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes
   1.0 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-manager
   3.4 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-tooling-nodes
   4.8 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Anyline

Starting server

To see the GUI go to: http://0.0.0.0:8188
To see the GUI go to: http://[::]:8188
FETCH DATA from: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-manager\extension-node-map.json [DONE]
 [DONE]
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes?page=1&limit=1000
nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/cache
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
got prompt
model weight dtype torch.float16, manual cast: None
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cuda:0, dtype: torch.float32
Requested to load SDXLClipModel
0 models unloaded.
loaded completely 9.5367431640625e+25 1560.802734375 True
CLIP model load device: cuda:0, offload device: cuda:0, current: cuda:0, dtype: torch.float16
loaded straight to GPU
Requested to load SDXL
0 models unloaded.
loaded completely 9.5367431640625e+25 4897.0483474731445 True
Requested to load SDXLClipModel
0 models unloaded.
loaded completely 9.5367431640625e+25 1560.802734375 True
0 models unloaded.
Requested to load SDXL
Requested to load ControlNet
0 models unloaded.
loaded completely 9.5367431640625e+25 4897.0483474731445 True
loaded completely 9.5367431640625e+25 2386.120147705078 True
  0%|                                                                                           | 0/25 [00:00<?, ?it/s]E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0] Triton compilation failed: triton_poi_fused_cat_0
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0] def triton_(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     xnumel = 320
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     xoffset = tl.program_id(0) * XBLOCK
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     xindex = xoffset + tl.arange(0, XBLOCK)[:]
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     xmask = xindex < xnumel
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     x0 = xindex
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp5 = tl.load(in_ptr0 + (0))
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp6 = tl.broadcast_to(tmp5, [XBLOCK])
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp0 = x0
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp1 = tl.full([1], 0, tl.int64)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp2 = tmp0 >= tmp1
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp3 = tl.full([1], 160, tl.int64)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp4 = tmp0 < tmp3
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp7 = tmp0.to(tl.float32)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp8 = -9.210340371976184
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp9 = tmp7 * tmp8
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp10 = 0.00625
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp11 = tmp9 * tmp10
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp12 = tl_math.exp(tmp11)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp13 = tmp6 * tmp12
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp14 = tl_math.cos(tmp13)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp15 = tl.full(tmp14.shape, 0.0, tmp14.dtype)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp16 = tl.where(tmp4, tmp14, tmp15)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp17 = tmp0 >= tmp3
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp18 = tl.full([1], 320, tl.int64)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp19 = tmp0 < tmp18
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp20 = (-160) + x0
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp21 = tmp20.to(tl.float32)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp22 = tmp21 * tmp8
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp23 = tmp22 * tmp10
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp24 = tl_math.exp(tmp23)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp25 = tmp6 * tmp24
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp26 = tl_math.sin(tmp25)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp27 = tl.full(tmp26.shape, 0.0, tmp26.dtype)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp28 = tl.where(tmp17, tmp26, tmp27)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tmp29 = tl.where(tmp4, tmp16, tmp28)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     tl.store(out_ptr0 + (x0), tmp29, xmask)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0] metadata: {'signature': {0: '*fp32', 1: '*fp32', 2: 'i32'}, 'device': 0, 'constants': {3: 256}, 'configs': [AttrsDescriptor(divisible_by_16=(0, 1, 2), equal_to_1=())], 'device_type': 'cuda', 'num_warps': 4, 'num_stages': 1, 'debug': True, 'cc': 75}
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0] Traceback (most recent call last):
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]   File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 443, in _precompile_config
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     binary = triton.compile(*compile_args, **compile_kwargs)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]   File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\compiler\compiler.py", line 286, in compile
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     next_module = compile_ir(module, metadata)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]   File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\backends\nvidia\compiler.py", line 341, in <lambda>
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     stages["ptx"] = lambda src, metadata: self.make_ptx(src, metadata, options, self.capability)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]   File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\backends\nvidia\compiler.py", line 267, in make_ptx
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     ptx_version = ptx_get_version(cuda_version)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]   File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\backends\nvidia\compiler.py", line 62, in ptx_get_version
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0]     raise RuntimeError("Triton only support CUDA 10.0 or higher, but got CUDA version: " + cuda_version)
E0127 22:56:30.650000 16756 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [6/0] RuntimeError: Triton only support CUDA 10.0 or higher, but got CUDA version: 12.8
  0%|                                                                                           | 0/25 [00:24<?, ?it/s]
!!! Exception during processing !!! backend='inductor' raised:
RuntimeError: Triton only support CUDA 10.0 or higher, but got CUDA version: 12.8

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True

Traceback (most recent call last):
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1446, in _call_user_compiler
    compiled_fn = compiler_fn(gm, self.example_inputs())
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\repro\after_dynamo.py", line 129, in __call__
    compiled_gm = compiler_fn(gm, example_inputs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\__init__.py", line 2234, in __call__
    return compile_fx(model_, inputs_, config_patches=self.config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 1521, in compile_fx
    return aot_autograd(
           ^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\backends\common.py", line 72, in __call__
    cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\aot_autograd.py", line 1071, in aot_module_simplified
    compiled_fn = dispatch_and_compile()
                  ^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\aot_autograd.py", line 1056, in dispatch_and_compile
    compiled_fn, _ = create_aot_dispatcher_function(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\aot_autograd.py", line 522, in create_aot_dispatcher_function
    return _create_aot_dispatcher_function(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\aot_autograd.py", line 759, in _create_aot_dispatcher_function
    compiled_fn, fw_metadata = compiler_fn(
                               ^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\_aot_autograd\jit_compile_runtime_wrappers.py", line 179, in aot_dispatch_base
    compiled_fw = compiler(fw_module, updated_flat_args)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 1350, in fw_compiler_base
    return _fw_compiler_base(model, example_inputs, is_inference)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 1421, in _fw_compiler_base
    return inner_compile(
           ^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 475, in compile_fx_inner
    return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\repro\after_aot.py", line 85, in debug_wrapper
    inner_compiled_fn = compiler_fn(gm, example_inputs)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 661, in _compile_fx_inner
    compiled_graph = FxGraphCache.load(
                     ^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\codecache.py", line 1334, in load
    compiled_graph = compile_fx_fn(
                     ^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 570, in codegen_and_compile
    compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 878, in fx_codegen_and_compile
    compiled_fn = graph.compile_to_fn()
                  ^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\graph.py", line 1913, in compile_to_fn
    return self.compile_to_module().call
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\graph.py", line 1839, in compile_to_module
    return self._compile_to_module()
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\graph.py", line 1867, in _compile_to_module
    mod = PyCodeCache.load_by_key_path(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\codecache.py", line 2876, in load_by_key_path
    mod = _reload_python_module(key, path)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\runtime\compile_tasks.py", line 45, in _reload_python_module
    exec(code, mod.__dict__, mod.__dict__)
  File "C:\Users\KRISHE~1\AppData\Local\Temp\torchinductor_Krisheetu-Prime\lk\clktisl4tir6q5vm7vnekxfdeeflkx74m4usxtt7l4bkt7eu5sob.py", line 39, in <module>
    triton_poi_fused_cat_0 = async_compile.triton('triton_', '''
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\async_compile.py", line 203, in triton
    kernel.precompile()
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 244, in precompile
    compiled_binary, launcher = self._precompile_config(
                                ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 443, in _precompile_config
    binary = triton.compile(*compile_args, **compile_kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\compiler\compiler.py", line 286, in compile
    next_module = compile_ir(module, metadata)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\backends\nvidia\compiler.py", line 341, in <lambda>
    stages["ptx"] = lambda src, metadata: self.make_ptx(src, metadata, options, self.capability)
                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\backends\nvidia\compiler.py", line 267, in make_ptx
    ptx_version = ptx_get_version(cuda_version)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\backends\nvidia\compiler.py", line 62, in ptx_get_version
    raise RuntimeError("Triton only support CUDA 10.0 or higher, but got CUDA version: " + cuda_version)
RuntimeError: Triton only support CUDA 10.0 or higher, but got CUDA version: 12.8

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Comfy-WaveSpeed\first_block_cache.py", line 90, in new_get_output_data
    out = get_output_data(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\nodes.py", line 1519, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\nodes.py", line 1486, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes\smZNodes.py", line 125, in KSampler_sample
    return orig_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 1013, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes\smZNodes.py", line 143, in sample
    return orig_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 911, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 897, in sample
    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 866, in outer_sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 850, in inner_sample
    samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes\smZNodes.py", line 108, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 707, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\k_diffusion\sampling.py", line 155, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 379, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 832, in __call__
    return self.predict_noise(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 835, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes\smZNodes.py", line 183, in sampling_function
    out = orig_fn(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 359, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 195, in calc_cond_batch
    return executor.execute(model, conds, x_in, timestep, model_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 306, in _calc_cond_batch
    output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Comfy-WaveSpeed\fbcache_nodes.py", line 160, in model_unet_function_wrapper
    return model_function(input, timestep, **c)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\model_base.py", line 130, in apply_model
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\model_base.py", line 159, in _apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\eval_frame.py", line 465, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 831, in forward
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Comfy-WaveSpeed\first_block_cache.py", line 464, in unet_model__forward
    t_emb = timestep_embedding(timesteps,
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 1269, in __call__
    return self._torchdynamo_orig_callable(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 1064, in __call__
    result = self._inner_convert(
             ^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 526, in __call__
    return _compile(
           ^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 924, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 666, in compile_inner
    return _compile_inner(code, one_graph, hooks, transform)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_utils_internal.py", line 87, in wrapper_function
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 699, in _compile_inner
    out_code = transform_code_object(code, transform)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 1322, in transform_code_object
    transformations(instructions, code_options)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 219, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 634, in transform
    tracer.run()
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2796, in run
    super().run()
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 983, in run
    while self.step():
          ^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 895, in step
    self.dispatch_table[inst.opcode](self, inst)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2987, in RETURN_VALUE
    self._return(inst)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2972, in _return
    self.output.compile_subgraph(
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1117, in compile_subgraph
    self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1369, in compile_and_call_fx_graph
    compiled_fn = self.call_user_compiler(gm)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1416, in call_user_compiler
    return self._call_user_compiler(gm)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1465, in _call_user_compiler
    raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: Triton only support CUDA 10.0 or higher, but got CUDA version: 12.8

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True


Prompt executed in 255.49 seconds
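The misleading "only support CUDA 10.0 or higher, but got CUDA version: 12.8" message likely comes from Triton's PTX version lookup: it maps the detected CUDA toolkit version to a PTX ISA version, and any version outside its known range falls through to the same generic error, so a CUDA release *newer* than the table still triggers it. A hypothetical sketch of the pattern (the function name mirrors `triton.backends.nvidia.compiler.ptx_get_version`, but the table values are illustrative, not Triton's actual code):

```python
def ptx_get_version(cuda_version: str) -> int:
    # Hypothetical table: known CUDA releases -> PTX ISA versions.
    # The values here are illustrative, not Triton's real mapping.
    known = {"12.1": 81, "12.2": 82, "12.4": 84}
    if cuda_version in known:
        return known[cuda_version]
    # Any version outside the table hits this generic error, which is why
    # a CUDA release newer than the table (e.g. 12.8) produces the
    # confusing "only support CUDA 10.0 or higher" message.
    raise RuntimeError(
        "Triton only support CUDA 10.0 or higher, but got CUDA version: "
        + cuda_version
    )
```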

@woct0rdho
Owner

This is fixed in 094edac

I'll make a new release

@woct0rdho
Owner

I've made the new release post9. This should solve your issue.

@krigeta
Author

krigeta commented Jan 28, 2025

I've made the new release post9. This should solve your issue

So I did that manually. As noted in 094edac, I had installed CUDA 12.4 instead of 12.8, but now I get a new error:

KSampler
backend='inductor' raised:
FileExistsError: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\KRISHE~1\\AppData\\Local\\Temp\\torchinductor_Krisheetu-Prime\\cache\\.2160.3324.tmp' -> 'C:\\Users\\KRISHE~1\\AppData\\Local\\Temp\\torchinductor_Krisheetu-Prime\\cache\\9306ec0cfc14fd2c2f2c13d88d59ea7550dbea37dcc80dc014ddafa3ed262fd2'

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
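The `FileExistsError` ([WinError 183]) looks like a Windows-specific cache race: renaming a temp file onto an existing cache entry with `os.rename` raises on Windows, whereas `os.replace` overwrites the destination atomically on every platform. A minimal sketch of the safe write pattern (a hypothetical helper, not PyTorch's actual Inductor cache code):

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write `data` to `path` via a temp file, safe against racing writers."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        # os.replace overwrites an existing destination on all platforms;
        # os.rename would raise FileExistsError ([WinError 183]) on Windows
        # if another process already created the same cache entry.
        os.replace(tmp, path)
    finally:
        if os.path.exists(tmp):
            os.remove(tmp)
```

On the user side, deleting the stale `torchinductor_*` folder under `%TEMP%` and rerunning the workflow is a common workaround for this kind of cache corruption.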

and here is the full log:

[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-01-28 08:13:35.837
** Platform: Windows
** Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct  1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
** Python executable: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\python.exe
** ComfyUI Path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI
** User directory: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\user
** ComfyUI-Manager config path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\user\default\ComfyUI-Manager\config.ini
** Log path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\user\comfyui.log

Prestartup times for custom nodes:
   4.6 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-manager

Total VRAM 8192 MB, total RAM 16310 MB
pytorch version: 2.5.1+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 SUPER : cudaMallocAsync
Using pytorch attention
[Prompt Server] web root: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\web
[custom_nodes.comfyui_controlnet_aux] | INFO -> Using ckpts path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
[custom_nodes.comfyui_controlnet_aux] | INFO -> Using symlinks: False
[custom_nodes.comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
DWPose: Onnxruntime with acceleration providers detected
### Loading: ComfyUI-Manager (V3.7.3)
### ComfyUI Revision: 2980 [ee9547ba] *DETACHED | Released on '2024-12-26'
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
FETCH DATA from: https://api.comfy.org/nodes?page=1&limit=1000[comfyui_controlnet_aux] | INFO -> Using ckpts path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']

Import times for custom nodes:
   0.0 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\teacache
   0.0 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\websocket_image_save.py
   0.0 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI_ADV_CLIP_emb
   0.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
   0.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-inpaint-nodes
   0.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_controlnet_aux
   0.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Comfy-WaveSpeed
   0.4 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes
   0.8 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-manager
   2.8 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Anyline
   4.0 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-tooling-nodes

Starting server

To see the GUI go to: http://0.0.0.0:8188
To see the GUI go to: http://[::]:8188
FETCH DATA from: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-manager\extension-node-map.json [DONE]
 [DONE]
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes?page=1&limit=1000
nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/cache
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
got prompt
model weight dtype torch.float16, manual cast: None
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.float32
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 1560.802734375 True
CLIP model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 1560.802734375 True
Requested to load SDXL
Requested to load ControlNet
loaded completely 9.5367431640625e+25 4897.0483474731445 True
loaded partially 791.5786407470703 791.578369140625 0
  0%|                                                                                           | 0/25 [00:00<?, ?it/s]ptxas info    : 35 bytes gmem, 16 bytes cmem[4]
ptxas info    : Compiling entry function 'triton_' for 'sm_75'
ptxas info    : Function properties for triton_
    32 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info    : Used 23 registers, used 0 barriers, 32 bytes cumulative stack size, 372 bytes cmem[0], 8 bytes cmem[2]
ptxas info    : Compile time = 0.000 ms
ptxas info    : 35 bytes gmem, 16 bytes cmem[4]
ptxas info    : Compiling entry function 'triton_' for 'sm_75'
ptxas info    : Function properties for triton_
    32 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info    : Used 18 registers, used 0 barriers, 32 bytes cumulative stack size, 372 bytes cmem[0], 8 bytes cmem[2]
ptxas info    : Compile time = 0.000 ms
  0%|                                                                                           | 0/25 [00:48<?, ?it/s]
!!! Exception during processing !!! backend='inductor' raised:
FileExistsError: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\KRISHE~1\\AppData\\Local\\Temp\\torchinductor_Krisheetu-Prime\\cache\\.2160.3324.tmp' -> 'C:\\Users\\KRISHE~1\\AppData\\Local\\Temp\\torchinductor_Krisheetu-Prime\\cache\\9306ec0cfc14fd2c2f2c13d88d59ea7550dbea37dcc80dc014ddafa3ed262fd2'

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True

Traceback (most recent call last):
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1446, in _call_user_compiler
    compiled_fn = compiler_fn(gm, self.example_inputs())
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\repro\after_dynamo.py", line 129, in __call__
    compiled_gm = compiler_fn(gm, example_inputs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\__init__.py", line 2234, in __call__
    return compile_fx(model_, inputs_, config_patches=self.config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 1521, in compile_fx
    return aot_autograd(
           ^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\backends\common.py", line 72, in __call__
    cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\aot_autograd.py", line 1071, in aot_module_simplified
    compiled_fn = dispatch_and_compile()
                  ^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\aot_autograd.py", line 1056, in dispatch_and_compile
    compiled_fn, _ = create_aot_dispatcher_function(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\aot_autograd.py", line 522, in create_aot_dispatcher_function
    return _create_aot_dispatcher_function(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\aot_autograd.py", line 759, in _create_aot_dispatcher_function
    compiled_fn, fw_metadata = compiler_fn(
                               ^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\_aot_autograd\jit_compile_runtime_wrappers.py", line 179, in aot_dispatch_base
    compiled_fw = compiler(fw_module, updated_flat_args)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 1350, in fw_compiler_base
    return _fw_compiler_base(model, example_inputs, is_inference)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 1359, in _fw_compiler_base
    _recursive_joint_graph_passes(model)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 281, in _recursive_joint_graph_passes
    joint_graph_passes(gm)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\fx_passes\joint_graph.py", line 460, in joint_graph_passes
    count += patterns.apply(graph.graph)  # type: ignore[arg-type]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\pattern_matcher.py", line 1729, in apply
    if is_match(m) and entry.extra_check(m):
                       ^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\pattern_matcher.py", line 1331, in check_fn
    if is_match(specific_pattern_match) and extra_check(specific_pattern_match):
                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\fx_passes\pad_mm.py", line 146, in should_pad_addmm
    return should_pad_common(mat1, mat2, input) and should_pad_bench(
                                                    ^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\fx_passes\pad_mm.py", line 567, in should_pad_bench
    set_cached_base_mm_benchmark_time(ori_time_key, ori_time)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\fx_passes\pad_mm.py", line 262, in set_cached_base_mm_benchmark_time
    return get_pad_cache().set_value(key, value=value)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\codecache.py", line 286, in set_value
    self.update_local_cache(cache)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\codecache.py", line 257, in update_local_cache
    write_atomic(
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\codecache.py", line 466, in write_atomic
    tmp_path.rename(path)
  File "pathlib.py", line 1363, in rename
FileExistsError: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\KRISHE~1\\AppData\\Local\\Temp\\torchinductor_Krisheetu-Prime\\cache\\.2160.3324.tmp' -> 'C:\\Users\\KRISHE~1\\AppData\\Local\\Temp\\torchinductor_Krisheetu-Prime\\cache\\9306ec0cfc14fd2c2f2c13d88d59ea7550dbea37dcc80dc014ddafa3ed262fd2'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Comfy-WaveSpeed\first_block_cache.py", line 90, in new_get_output_data
    out = get_output_data(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\nodes.py", line 1519, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\nodes.py", line 1486, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes\smZNodes.py", line 125, in KSampler_sample
    return orig_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 1013, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes\smZNodes.py", line 143, in sample
    return orig_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 911, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 897, in sample
    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 866, in outer_sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 850, in inner_sample
    samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes\smZNodes.py", line 108, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 707, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\k_diffusion\sampling.py", line 155, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 379, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 832, in __call__
    return self.predict_noise(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 835, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes\smZNodes.py", line 183, in sampling_function
    out = orig_fn(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 359, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 195, in calc_cond_batch
    return executor.execute(model, conds, x_in, timestep, model_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 306, in _calc_cond_batch
    output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Comfy-WaveSpeed\fbcache_nodes.py", line 160, in model_unet_function_wrapper
    return model_function(input, timestep, **c)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\model_base.py", line 130, in apply_model
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\model_base.py", line 159, in _apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\eval_frame.py", line 465, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 831, in forward
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Comfy-WaveSpeed\first_block_cache.py", line 467, in unet_model__forward
    emb = self.time_embed(t_emb)
          ^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\container.py", line 250, in forward
    input = module(input)
            ^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 1269, in __call__
    return self._torchdynamo_orig_callable(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 1064, in __call__
    result = self._inner_convert(
             ^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 526, in __call__
    return _compile(
           ^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 924, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 666, in compile_inner
    return _compile_inner(code, one_graph, hooks, transform)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_utils_internal.py", line 87, in wrapper_function
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 699, in _compile_inner
    out_code = transform_code_object(code, transform)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 1322, in transform_code_object
    transformations(instructions, code_options)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 219, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 634, in transform
    tracer.run()
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2796, in run
    super().run()
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 983, in run
    while self.step():
          ^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 895, in step
    self.dispatch_table[inst.opcode](self, inst)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2987, in RETURN_VALUE
    self._return(inst)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2972, in _return
    self.output.compile_subgraph(
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1117, in compile_subgraph
    self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1369, in compile_and_call_fx_graph
    compiled_fn = self.call_user_compiler(gm)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1416, in call_user_compiler
    return self._call_user_compiler(gm)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1465, in _call_user_compiler
    raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
FileExistsError: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\KRISHE~1\\AppData\\Local\\Temp\\torchinductor_Krisheetu-Prime\\cache\\.2160.3324.tmp' -> 'C:\\Users\\KRISHE~1\\AppData\\Local\\Temp\\torchinductor_Krisheetu-Prime\\cache\\9306ec0cfc14fd2c2f2c13d88d59ea7550dbea37dcc80dc014ddafa3ed262fd2'

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True


Prompt executed in 516.28 seconds

@woct0rdho
Owner

You need this: pytorch/pytorch#138211

This will only be solved when PyTorch 2.6 is out

@krigeta
Author

krigeta commented Jan 28, 2025

> You need this: pytorch/pytorch#138211
>
> This will only be solved when PyTorch 2.6 is out

I made the following changes:

    # tmp_path.rename(path)  # original: raises WinError 183 if the destination exists
    shutil.copy2(src=tmp_path, dst=path)
    os.remove(tmp_path)

in `codecache.py`, and now I get a new error (crying in the corner). Please suggest how I can revert so I can use something stable and workable for ComfyUI portable. Here is the error:

KSampler
backend='inductor' raised:
RuntimeError: `ptxas` failed with error code 4294967295:


Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
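(Aside, for anyone hitting the same WinError 183: the copy-then-delete workaround above loses atomicity, which can itself race when several compile workers write the same cache entry. The root cause is that `pathlib.Path.rename()` refuses to overwrite an existing destination on Windows, while `os.replace()` overwrites atomically on both POSIX and Windows. A minimal sketch of an overwrite-safe atomic write — not the actual PyTorch patch, just the idea behind pytorch/pytorch#138211 — could look like:

```python
import os
import tempfile


def write_atomic_portable(path: str, content: str) -> None:
    # Write to a temp file in the same directory, then move it into
    # place. os.replace() overwrites an existing destination on both
    # POSIX and Windows, unlike Path.rename(), which raises
    # FileExistsError (WinError 183) on Windows when the target exists.
    dirname = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
        os.replace(tmp_path, path)  # atomic rename, clobbers if present
    except BaseException:
        # Clean up the temp file if anything went wrong before the move.
        if os.path.exists(tmp_path):
            os.remove(tmp_path)
        raise
```

Since all workers write the same content for a given cache key, "last writer wins" is safe here, and there is no window where the destination is missing or half-written.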
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-01-28 09:50:14.086
** Platform: Windows
** Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct  1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
** Python executable: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\python.exe
** ComfyUI Path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI
** User directory: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\user
** ComfyUI-Manager config path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\user\default\ComfyUI-Manager\config.ini
** Log path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\user\comfyui.log

Prestartup times for custom nodes:
   4.5 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-manager

Total VRAM 8192 MB, total RAM 16310 MB
pytorch version: 2.5.1+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 SUPER : cudaMallocAsync
Using pytorch attention
[Prompt Server] web root: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\web
[custom_nodes.comfyui_controlnet_aux] | INFO -> Using ckpts path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
[custom_nodes.comfyui_controlnet_aux] | INFO -> Using symlinks: False
[custom_nodes.comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
DWPose: Onnxruntime with acceleration providers detected
### Loading: ComfyUI-Manager (V3.7.3)
### ComfyUI Revision: 2980 [ee9547ba] *DETACHED | Released on '2024-12-26'
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
FETCH DATA from: https://api.comfy.org/nodes?page=1&limit=1000[comfyui_controlnet_aux] | INFO -> Using ckpts path: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']

Import times for custom nodes:
   0.0 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI_ADV_CLIP_emb
   0.0 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\websocket_image_save.py
   0.0 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\teacache
   0.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
   0.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_controlnet_aux
   0.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-inpaint-nodes
   0.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Comfy-WaveSpeed
   0.4 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes
   0.7 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-manager
   3.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-tooling-nodes
   3.1 seconds: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\ComfyUI-Anyline

Starting server

To see the GUI go to: http://0.0.0.0:8188
To see the GUI go to: http://[::]:8188
 [DONE]
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes?page=1&limit=1000
nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/cache
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
FETCH DATA from: Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui-manager\extension-node-map.json [DONE]
got prompt
model weight dtype torch.float16, manual cast: None
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.float32
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 1560.802734375 True
CLIP model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 1560.802734375 True
Requested to load ControlNet
Requested to load SDXL
loaded completely 9.5367431640625e+25 2386.120147705078 True
loaded partially 3302.554753112793 3302.5537185668945 516
  0%|                                                                                           | 0/25 [00:00<?, ?it/s]ptxas info    : 0 bytes gmem
ptxas info    : Compiling entry function 'triton_' for 'sm_75'
ptxas info    : Function properties for triton_
    0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info    : Used 20 registers, used 1 barriers, 376 bytes cmem[0]
ptxas info    : Compile time = 0.000 ms
[... ~45 more near-identical "ptxas info" compilation blocks (only register/barrier/cmem counts vary) and two "main.c / Creating library ...\__triton_launcher.cp312-win_amd64.lib" build messages omitted for brevity ...]
ptxas C:\Users\KRISHE~1\AppData\Local\Temp\tmpdlrt7h41.ptx, line 55; error   : Feature '.bf16' requires .target sm_80 or higher
ptxas C:\Users\KRISHE~1\AppData\Local\Temp\tmpdlrt7h41.ptx, line 55; error   : Feature 'cvt with .f32.bf16' requires .target sm_80 or higher
ptxas C:\Users\KRISHE~1\AppData\Local\Temp\tmpdlrt7h41.ptx, line 59; error   : Feature '.bf16' requires .target sm_80 or higher
ptxas C:\Users\KRISHE~1\AppData\Local\Temp\tmpdlrt7h41.ptx, line 59; error   : Feature 'cvt with .f32.bf16' requires .target sm_80 or higher
ptxas fatal   : Ptx assembly aborted due to errors
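The ptxas errors above are the root cause: bf16 PTX instructions (`.bf16`, `cvt with .f32.bf16`) require compute capability 8.0 (Ampere) or newer, while the RTX 2060 SUPER is Turing and reports sm_75, so any Triton kernel that inductor emits with bf16 loads cannot assemble. A minimal sketch of that version gate (the `supports_bf16` helper is illustrative, not a real Triton API; on a live system `torch.cuda.get_device_capability()` returns the same `(major, minor)` tuple):

```python
def supports_bf16(capability: tuple) -> bool:
    """bf16 PTX instructions need sm_80 (Ampere) or newer, per the ptxas errors."""
    # Tuple comparison: (7, 5) < (8, 0), so Turing cards fail this check.
    return capability >= (8, 0)

# RTX 2060 SUPER (Turing) reports sm_75 -> bf16 kernels cannot compile.
print(supports_bf16((7, 5)))  # False
print(supports_bf16((8, 0)))  # True
```

In practice this likely means either removing the torch.compile / WaveSpeed node from the workflow on this GPU, or keeping the model in fp16 so inductor never generates bf16 kernels.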
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0] Triton compilation failed: triton_poi_fused__to_copy_13
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0] def triton_(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]     xnumel = 20480
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]     xoffset = tl.program_id(0) * XBLOCK
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]     xindex = xoffset + tl.arange(0, XBLOCK)[:]
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]     xmask = tl.full([XBLOCK], True, tl.int1)
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]     x0 = xindex
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]     tmp0 = tl.load(in_ptr0 + (x0), None).to(tl.float32)
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]     tmp1 = tmp0.to(tl.float32)
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]     tl.store(out_ptr0 + (x0), tmp1, None)
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0] metadata: {'signature': {0: '*bf16', 1: '*fp16', 2: 'i32'}, 'device': 0, 'constants': {3: 256}, 'configs': [AttrsDescriptor(divisible_by_16=(0, 1, 2), equal_to_1=())], 'device_type': 'cuda', 'num_warps': 4, 'num_stages': 1, 'debug': True, 'cc': 75}
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0] Traceback (most recent call last):
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]   File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\backends\nvidia\compiler.py", line 312, in make_cubin
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]     subprocess.run(ptxas_cmd, check=True, close_fds=False, stderr=flog)
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]   File "subprocess.py", line 571, in run
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0] subprocess.CalledProcessError: Command '['C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin\\ptxas.exe', '-lineinfo', '-v', '--gpu-name=sm_75', 'C:\\Users\\KRISHE~1\\AppData\\Local\\Temp\\tmpdlrt7h41.ptx', '-o', 'C:\\Users\\KRISHE~1\\AppData\\Local\\Temp\\tmpdlrt7h41.ptx.o']' returned non-zero exit status 4294967295.
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0] During handling of the above exception, another exception occurred:
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0] Traceback (most recent call last):
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]   File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 443, in _precompile_config
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]     binary = triton.compile(*compile_args, **compile_kwargs)
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]   File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\compiler\compiler.py", line 286, in compile
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]     next_module = compile_ir(module, metadata)
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]   File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\backends\nvidia\compiler.py", line 342, in <lambda>
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]     stages["cubin"] = lambda src, metadata: self.make_cubin(src, metadata, options, self.capability)
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]   File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\backends\nvidia\compiler.py", line 325, in make_cubin
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]     raise RuntimeError(f'`ptxas` failed with error code {e.returncode}: \n{log}')
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0] RuntimeError: `ptxas` failed with error code 4294967295:
E0128 10:01:39.068000 12148 Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py:445] [10/0]
  0%|                                                                                           | 0/25 [04:18<?, ?it/s]
!!! Exception during processing !!! backend='inductor' raised:
RuntimeError: `ptxas` failed with error code 4294967295:


Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True

Traceback (most recent call last):
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\backends\nvidia\compiler.py", line 312, in make_cubin
    subprocess.run(ptxas_cmd, check=True, close_fds=False, stderr=flog)
  File "subprocess.py", line 571, in run
subprocess.CalledProcessError: Command '['C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\bin\\ptxas.exe', '-lineinfo', '-v', '--gpu-name=sm_75', 'C:\\Users\\KRISHE~1\\AppData\\Local\\Temp\\tmpdlrt7h41.ptx', '-o', 'C:\\Users\\KRISHE~1\\AppData\\Local\\Temp\\tmpdlrt7h41.ptx.o']' returned non-zero exit status 4294967295.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1446, in _call_user_compiler
    compiled_fn = compiler_fn(gm, self.example_inputs())
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\repro\after_dynamo.py", line 129, in __call__
    compiled_gm = compiler_fn(gm, example_inputs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\__init__.py", line 2234, in __call__
    return compile_fx(model_, inputs_, config_patches=self.config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 1521, in compile_fx
    return aot_autograd(
           ^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\backends\common.py", line 72, in __call__
    cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\aot_autograd.py", line 1071, in aot_module_simplified
    compiled_fn = dispatch_and_compile()
                  ^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\aot_autograd.py", line 1056, in dispatch_and_compile
    compiled_fn, _ = create_aot_dispatcher_function(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\aot_autograd.py", line 522, in create_aot_dispatcher_function
    return _create_aot_dispatcher_function(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\aot_autograd.py", line 759, in _create_aot_dispatcher_function
    compiled_fn, fw_metadata = compiler_fn(
                               ^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_functorch\_aot_autograd\jit_compile_runtime_wrappers.py", line 179, in aot_dispatch_base
    compiled_fw = compiler(fw_module, updated_flat_args)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 1350, in fw_compiler_base
    return _fw_compiler_base(model, example_inputs, is_inference)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 1421, in _fw_compiler_base
    return inner_compile(
           ^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 475, in compile_fx_inner
    return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\repro\after_aot.py", line 85, in debug_wrapper
    inner_compiled_fn = compiler_fn(gm, example_inputs)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 661, in _compile_fx_inner
    compiled_graph = FxGraphCache.load(
                     ^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\codecache.py", line 1336, in load
    compiled_graph = compile_fx_fn(
                     ^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 570, in codegen_and_compile
    compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\compile_fx.py", line 878, in fx_codegen_and_compile
    compiled_fn = graph.compile_to_fn()
                  ^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\graph.py", line 1913, in compile_to_fn
    return self.compile_to_module().call
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\graph.py", line 1839, in compile_to_module
    return self._compile_to_module()
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\graph.py", line 1867, in _compile_to_module
    mod = PyCodeCache.load_by_key_path(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\codecache.py", line 2878, in load_by_key_path
    mod = _reload_python_module(key, path)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\runtime\compile_tasks.py", line 45, in _reload_python_module
    exec(code, mod.__dict__, mod.__dict__)
  File "C:\Users\KRISHE~1\AppData\Local\Temp\torchinductor_Krisheetu-Prime\rp\crp2td26y4ua23tckhw42tsk3sv2k2jc5c6ztkuzfjlhyt656krs.py", line 883, in <module>
    triton_poi_fused__to_copy_13 = async_compile.triton('triton_', '''
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\async_compile.py", line 203, in triton
    kernel.precompile()
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 244, in precompile
    compiled_binary, launcher = self._precompile_config(
                                ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 443, in _precompile_config
    binary = triton.compile(*compile_args, **compile_kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\compiler\compiler.py", line 286, in compile
    next_module = compile_ir(module, metadata)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\backends\nvidia\compiler.py", line 342, in <lambda>
    stages["cubin"] = lambda src, metadata: self.make_cubin(src, metadata, options, self.capability)
                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\triton\backends\nvidia\compiler.py", line 325, in make_cubin
    raise RuntimeError(f'`ptxas` failed with error code {e.returncode}: \n{log}')
RuntimeError: `ptxas` failed with error code 4294967295:


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Comfy-WaveSpeed\first_block_cache.py", line 90, in new_get_output_data
    out = get_output_data(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\nodes.py", line 1519, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\nodes.py", line 1486, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes\smZNodes.py", line 125, in KSampler_sample
    return orig_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 1013, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes\smZNodes.py", line 143, in sample
    return orig_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 911, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 897, in sample
    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 866, in outer_sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 850, in inner_sample
    samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes\smZNodes.py", line 108, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 707, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\k_diffusion\sampling.py", line 155, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 379, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 832, in __call__
    return self.predict_noise(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 835, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\comfyui_smznodes\smZNodes.py", line 183, in sampling_function
    out = orig_fn(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 359, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 195, in calc_cond_batch
    return executor.execute(model, conds, x_in, timestep, model_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\samplers.py", line 306, in _calc_cond_batch
    output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Comfy-WaveSpeed\fbcache_nodes.py", line 160, in model_unet_function_wrapper
    return model_function(input, timestep, **c)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\model_base.py", line 130, in apply_model
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\model_base.py", line 159, in _apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\eval_frame.py", line 465, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 831, in forward
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\comfy\patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\ComfyUI\custom_nodes\Comfy-WaveSpeed\first_block_cache.py", line 526, in unet_model__forward
    h, hidden_states_residual = call_remaining_blocks(
                                ^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 1269, in __call__
    return self._torchdynamo_orig_callable(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 1064, in __call__
    result = self._inner_convert(
             ^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 526, in __call__
    return _compile(
           ^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 924, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 666, in compile_inner
    return _compile_inner(code, one_graph, hooks, transform)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_utils_internal.py", line 87, in wrapper_function
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 699, in _compile_inner
    out_code = transform_code_object(code, transform)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 1322, in transform_code_object
    transformations(instructions, code_options)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 219, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\convert_frame.py", line 634, in transform
    tracer.run()
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2796, in run
    super().run()
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 983, in run
    while self.step():
          ^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 895, in step
    self.dispatch_table[inst.opcode](self, inst)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2987, in RETURN_VALUE
    self._return(inst)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2972, in _return
    self.output.compile_subgraph(
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1142, in compile_subgraph
    self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1369, in compile_and_call_fx_graph
    compiled_fn = self.call_user_compiler(gm)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1416, in call_user_compiler
    return self._call_user_compiler(gm)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Y:\Tutorials\Artificial_Intelligence\Image\Inference_Interfaces\ComfyUI\ComfyUI_windows_portable_nvidia\python_embeded\Lib\site-packages\torch\_dynamo\output_graph.py", line 1465, in _call_user_compiler
    raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: `ptxas` failed with error code 4294967295:


Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True


Prompt executed in 638.99 seconds


woct0rdho commented Jan 28, 2025

Hmmm, your GPU is an RTX 2060; it's too old (sm75) and Triton does not support compiling bf16 on it.

If you really want to try out Triton, you can change bf16 to fp16 in the nodes or in the code.

If you want to revert, just uninstall Triton: python -m pip uninstall triton

Anyway, thanks for the bug report.
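For completeness, the eager fallback that the traceback itself suggests can be enabled with two lines near the top of a startup script. This is a workaround sketch, not a fix: compilation errors (like the `ptxas` bf16 failure above) are silently swallowed and the affected graphs just run uncompiled.

```python
# Fallback sketch: keep torch.compile in the workflow, but fall back to
# eager execution whenever Dynamo/Inductor compilation fails (such as
# the `ptxas` bf16 error above). Slower, but it runs on sm75 GPUs.
import torch._dynamo

torch._dynamo.config.suppress_errors = True
```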


krigeta commented Jan 28, 2025


Hey, could you please tell me where I can change bf16 to fp16 in the nodes?

And is there any Triton + ComfyUI combination that would let me use Triton, maybe an older version?

I want to run WaveSpeed, and it uses Triton in the backend, so could you shed some light on this?

EDIT:
It shows that I am using float16:
model weight dtype torch.float16,

woct0rdho commented Jan 28, 2025

Some ComfyUI nodes may convert the model dtype. As long as there is a single bf16 tensor somewhere in the computation, Triton may fail to compile it.

The easiest way to change bf16 to fp16 is to check whether any of the nodes you use let you choose the dtype. If none do, then you need some programming.
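If it comes to programming, one minimal sketch is to force every bf16 parameter and buffer to fp16 before the model is compiled. `cast_bf16_to_fp16` is a hypothetical helper, not part of ComfyUI or WaveSpeed, and it reaches into the private `_buffers` dict; treat it as a starting point only.

```python
import torch


def cast_bf16_to_fp16(model: torch.nn.Module) -> torch.nn.Module:
    """Hypothetical helper: cast all bf16 parameters/buffers to fp16.

    sm75 GPUs (RTX 20xx) have no bf16 support in Triton, so forcing
    fp16 lets the inductor backend emit kernels that ptxas can build.
    """
    for module in model.modules():
        for param in module.parameters(recurse=False):
            if param.dtype == torch.bfloat16:
                param.data = param.data.to(torch.float16)
        for name, buf in list(module.named_buffers(recurse=False)):
            if buf is not None and buf.dtype == torch.bfloat16:
                module._buffers[name] = buf.to(torch.float16)  # private API
    return model
```

Usage would be something like `model = cast_bf16_to_fp16(model)` right before the compile/patch step, assuming the ComfyUI model object exposes its inner `torch.nn.Module`.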


krigeta commented Jan 28, 2025

Seems like the RTX 20 series is out of options for Triton and WaveSpeed. Thanks for the support, btw.

@krigeta krigeta closed this as completed Jan 28, 2025