
FIX: torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: FileExistsError: [WinError 183] Cannot create a file when that file already exists #6

Open
zer0int opened this issue Dec 18, 2024 · 5 comments


zer0int commented Dec 18, 2024

Just going to open an issue on my own repo, as I assume people are most likely to see it here. =)
If you get this error, it has nothing to do with my code / node; it's an issue with PyTorch on Windows. To fix it:

In <your-Python>\site-packages\torch\_inductor\codecache.py, find def write_atomic and edit it as shown below (the change is in the last three lines):

def write_atomic(
    path_: str,
    content: Union[str, bytes],
    make_dirs: bool = False,
    encode_utf_8: bool = False,
) -> None:
    # Write into temporary file first to avoid conflicts between threads
    # Avoid using a named temporary file, as those have restricted permissions
    assert isinstance(
        content, (str, bytes)
    ), "Only strings and byte arrays can be saved in the cache"
    path = Path(path_)
    if make_dirs:
        path.parent.mkdir(parents=True, exist_ok=True)
    tmp_path = path.parent / f".{os.getpid()}.{threading.get_ident()}.tmp"
    write_mode = "w" if isinstance(content, str) else "wb"
    with tmp_path.open(write_mode, encoding="utf-8" if encode_utf_8 else None) as f:
        f.write(content)
    #tmp_path.rename(path)  # comment this out, and add the following two lines instead:
    shutil.copy2(src=tmp_path, dst=path)  # unlike rename(), copy2 overwrites an existing destination file on Windows
    os.remove(tmp_path)  # clean up the temporary file

Seen here
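
If you prefer not to edit files inside site-packages, the same workaround can be applied as a runtime monkey-patch from your own startup code. The sketch below is illustrative only and assumes write_atomic is still a module-level function in torch._inductor.codecache (it is in recent 2.x builds); the name _patched_write_atomic is just for the example:

    # Sketch: patch torch._inductor.codecache.write_atomic at runtime
    # instead of editing site-packages. Check that write_atomic exists
    # as a module-level function in your installed codecache.py first.
    import os
    import shutil
    import threading
    from pathlib import Path
    from typing import Union

    import torch._inductor.codecache as codecache

    def _patched_write_atomic(
        path_: str,
        content: Union[str, bytes],
        make_dirs: bool = False,
        encode_utf_8: bool = False,
    ) -> None:
        assert isinstance(
            content, (str, bytes)
        ), "Only strings and byte arrays can be saved in the cache"
        path = Path(path_)
        if make_dirs:
            path.parent.mkdir(parents=True, exist_ok=True)
        tmp_path = path.parent / f".{os.getpid()}.{threading.get_ident()}.tmp"
        write_mode = "w" if isinstance(content, str) else "wb"
        with tmp_path.open(write_mode, encoding="utf-8" if encode_utf_8 else None) as f:
            f.write(content)
        # copy2 + remove instead of rename: rename() raises WinError 183 on
        # Windows when the destination file already exists.
        shutil.copy2(src=tmp_path, dst=path)
        os.remove(tmp_path)

    codecache.write_atomic = _patched_write_atomic

Note that any module which already did "from torch._inductor.codecache import write_atomic" before the patch runs will keep a reference to the old function, so apply the patch as early as possible (before the first torch.compile call).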

@Kahdeg-15520487

What do you mean by "uncomment this"? Did you mean "comment this"?


zer0int commented Dec 20, 2024

@Kahdeg-15520487
Oops. Yes, thank you - fixed!

@heystanlee

help me plz:

C:\Users\stanl\AppData\Local\Temp\tmpqlw46heu\main.c(2): fatal error C1083: Cannot open include file: "cuda.h": No such file or directory
0%| | 0/30 [00:00<?, ?it/s]
!!! Exception during processing !!! backend='inductor' raised:
CalledProcessError: Command '['C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.42.34433\bin\Hostx64\x64\cl.EXE', 'C:\Users\stanl\AppData\Local\Temp\tmpqlw46heu\main.c', '/nologo', '/O2', '/LD', '/wd4819', '/IH:\AI-video-onekey-1229\Python\Lib\site-packages\triton\backends\nvidia\include', '/IC:\Users\stanl\AppData\Local\Temp\tmpqlw46heu', '/IH:\AI-video-onekey-1229\Python\Include', '/IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.42.34433\include', '/IC:\Program Files (x86)\Windows Kits\10\Include\10.0.26100.0\shared', '/IC:\Program Files (x86)\Windows Kits\10\Include\10.0.26100.0\ucrt', '/IC:\Program Files (x86)\Windows Kits\10\Include\10.0.26100.0\um', '/link', '/LIBPATH:H:\AI-video-onekey-1229\Python\Lib\site-packages\triton\backends\nvidia\lib', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:C:\Python310\libs', '/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.42.34433\lib\x64', '/LIBPATH:C:\Program Files (x86)\Windows Kits\10\Lib\10.0.26100.0\ucrt\x64', '/LIBPATH:C:\Program Files (x86)\Windows Kits\10\Lib\10.0.26100.0\um\x64', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:C:\Python310\libs', '/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.42.34433\lib\x64', '/LIBPATH:C:\Program Files (x86)\Windows Kits\10\Lib\10.0.26100.0\ucrt\x64', '/LIBPATH:C:\Program Files (x86)\Windows Kits\10\Lib\10.0.26100.0\um\x64', 'cuda.lib', '/OUT:C:\Users\stanl\AppData\Local\Temp\tmpqlw46heu\__triton_launcher.cp310-win_amd64.pyd']' returned non-zero exit status 2.

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information

You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True

Traceback (most recent call last):
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\output_graph.py", line 1446, in call_user_compiler
    compiled_fn = compiler_fn(gm, self.example_inputs())
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\repro\after_dynamo.py", line 129, in __call__
    compiled_gm = compiler_fn(gm, example_inputs)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\__init__.py", line 2234, in __call__
    return compile_fx(model_, inputs_, config_patches=self.config)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\compile_fx.py", line 1521, in compile_fx
    return aot_autograd(
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\backends\common.py", line 72, in __call__
    cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_functorch\aot_autograd.py", line 1071, in aot_module_simplified
    compiled_fn = dispatch_and_compile()
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_functorch\aot_autograd.py", line 1056, in dispatch_and_compile
    compiled_fn, _ = create_aot_dispatcher_function(
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_functorch\aot_autograd.py", line 522, in create_aot_dispatcher_function
    return _create_aot_dispatcher_function(
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_functorch\aot_autograd.py", line 759, in _create_aot_dispatcher_function
    compiled_fn, fw_metadata = compiler_fn(
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_functorch\_aot_autograd\jit_compile_runtime_wrappers.py", line 179, in aot_dispatch_base
    compiled_fw = compiler(fw_module, updated_flat_args)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\compile_fx.py", line 1350, in fw_compiler_base
    return _fw_compiler_base(model, example_inputs, is_inference)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\compile_fx.py", line 1421, in _fw_compiler_base
    return inner_compile(
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\compile_fx.py", line 475, in compile_fx_inner
    return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\repro\after_aot.py", line 85, in debug_wrapper
    inner_compiled_fn = compiler_fn(gm, example_inputs)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\compile_fx.py", line 661, in _compile_fx_inner
    compiled_graph = FxGraphCache.load(
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\codecache.py", line 1334, in load
    compiled_graph = compile_fx_fn(
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\compile_fx.py", line 570, in codegen_and_compile
    compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\compile_fx.py", line 878, in fx_codegen_and_compile
    compiled_fn = graph.compile_to_fn()
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\graph.py", line 1913, in compile_to_fn
    return self.compile_to_module().call
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\graph.py", line 1839, in compile_to_module
    return self._compile_to_module()
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\graph.py", line 1867, in _compile_to_module
    mod = PyCodeCache.load_by_key_path(
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\codecache.py", line 2876, in load_by_key_path
    mod = _reload_python_module(key, path)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\runtime\compile_tasks.py", line 45, in _reload_python_module
    exec(code, mod.__dict__, mod.__dict__)
  File "C:\Users\stanl\AppData\Local\Temp\torchinductor_stanl\iq\ciqtxvfuicpolbfioluyix6o2tfvwjekgqxqoqis6xepsojwqadm.py", line 47, in <module>
    triton_poi_fused_silu_0 = async_compile.triton('triton_', '''
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\async_compile.py", line 203, in triton
    kernel.precompile()
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 244, in precompile
    compiled_binary, launcher = self._precompile_config(
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_inductor\runtime\triton_heuristics.py", line 452, in _precompile_config
    binary._init_handles()
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\triton\compiler\compiler.py", line 374, in _init_handles
    self.run = driver.active.launcher_cls(self.src, self.metadata)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\triton\backends\nvidia\driver.py", line 404, in __init__
    mod = compile_module_from_src(src, "__triton_launcher")
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\triton\backends\nvidia\driver.py", line 69, in compile_module_from_src
    so = _build(name, src_path, tmpdir, library_dirs(), include_dir, libraries)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\triton\runtime\build.py", line 71, in _build
    ret = subprocess.check_call(cc_cmd)
  File "H:\AI-video-onekey-1229\Python\lib\subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.42.34433\bin\Hostx64\x64\cl.EXE', 'C:\Users\stanl\AppData\Local\Temp\tmpqlw46heu\main.c', '/nologo', '/O2', '/LD', '/wd4819', '/IH:\AI-video-onekey-1229\Python\Lib\site-packages\triton\backends\nvidia\include', '/IC:\Users\stanl\AppData\Local\Temp\tmpqlw46heu', '/IH:\AI-video-onekey-1229\Python\Include', '/IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.42.34433\include', '/IC:\Program Files (x86)\Windows Kits\10\Include\10.0.26100.0\shared', '/IC:\Program Files (x86)\Windows Kits\10\Include\10.0.26100.0\ucrt', '/IC:\Program Files (x86)\Windows Kits\10\Include\10.0.26100.0\um', '/link', '/LIBPATH:H:\AI-video-onekey-1229\Python\Lib\site-packages\triton\backends\nvidia\lib', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:C:\Python310\libs', '/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.42.34433\lib\x64', '/LIBPATH:C:\Program Files (x86)\Windows Kits\10\Lib\10.0.26100.0\ucrt\x64', '/LIBPATH:C:\Program Files (x86)\Windows Kits\10\Lib\10.0.26100.0\um\x64', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:C:\Python310\libs', '/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.42.34433\lib\x64', '/LIBPATH:C:\Program Files (x86)\Windows Kits\10\Lib\10.0.26100.0\ucrt\x64', '/LIBPATH:C:\Program Files (x86)\Windows Kits\10\Lib\10.0.26100.0\um\x64', 'cuda.lib', '/OUT:C:\Users\stanl\AppData\Local\Temp\tmpqlw46heu\__triton_launcher.cp310-win_amd64.pyd']' returned non-zero exit status 2.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "H:\AI-video-onekey-1229\ComfyUI\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "H:\AI-video-onekey-1229\ComfyUI\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "H:\AI-video-onekey-1229\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "H:\AI-video-onekey-1229\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "H:\AI-video-onekey-1229\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 1143, in process
    out_latents = model["pipe"](
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "H:\AI-video-onekey-1229\ComfyUI\custom_nodes\ComfyUI-HunyuanVideo-Nyan-24-Dec-24\hyvideo\diffusion\pipelines\pipeline_hunyuan_video.py", line 732, in __call__
    noise_pred = self.transformer( # For an input image (129, 192, 336) (1, 256, 256)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\AI-video-onekey-1229\ComfyUI\custom_nodes\ComfyUI-HunyuanVideo-Nyan-24-Dec-24\hyvideo\modules\models.py", line 951, in forward
    img, txt = block(*double_block_args)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\eval_frame.py", line 465, in _fn
    return fn(*args, **kwargs)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\convert_frame.py", line 1269, in __call__
    return self._torchdynamo_orig_callable(
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\convert_frame.py", line 1064, in __call__
    result = self._inner_convert(
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\convert_frame.py", line 526, in __call__
    return _compile(
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\convert_frame.py", line 924, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\convert_frame.py", line 666, in compile_inner
    return _compile_inner(code, one_graph, hooks, transform)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_utils_internal.py", line 87, in wrapper_function
    return function(*args, **kwargs)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\convert_frame.py", line 699, in _compile_inner
    out_code = transform_code_object(code, transform)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 1322, in transform_code_object
    transformations(instructions, code_options)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\convert_frame.py", line 219, in _fn
    return fn(*args, **kwargs)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\convert_frame.py", line 634, in transform
    tracer.run()
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2796, in run
    super().run()
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 983, in run
    while self.step():
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 895, in step
    self.dispatch_table[inst.opcode](self, inst)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 580, in wrapper
    return handle_graph_break(self, inst, speculation.reason)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 649, in handle_graph_break
    self.output.compile_subgraph(self, reason=reason)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\output_graph.py", line 1142, in compile_subgraph
    self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\output_graph.py", line 1369, in compile_and_call_fx_graph
    compiled_fn = self.call_user_compiler(gm)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\output_graph.py", line 1416, in call_user_compiler
    return self._call_user_compiler(gm)
  File "H:\AI-video-onekey-1229\Python\lib\site-packages\torch\_dynamo\output_graph.py", line 1465, in _call_user_compiler
    raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CalledProcessError: Command '['C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.42.34433\bin\Hostx64\x64\cl.EXE', 'C:\Users\stanl\AppData\Local\Temp\tmpqlw46heu\main.c', '/nologo', '/O2', '/LD', '/wd4819', '/IH:\AI-video-onekey-1229\Python\Lib\site-packages\triton\backends\nvidia\include', '/IC:\Users\stanl\AppData\Local\Temp\tmpqlw46heu', '/IH:\AI-video-onekey-1229\Python\Include', '/IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.42.34433\include', '/IC:\Program Files (x86)\Windows Kits\10\Include\10.0.26100.0\shared', '/IC:\Program Files (x86)\Windows Kits\10\Include\10.0.26100.0\ucrt', '/IC:\Program Files (x86)\Windows Kits\10\Include\10.0.26100.0\um', '/link', '/LIBPATH:H:\AI-video-onekey-1229\Python\Lib\site-packages\triton\backends\nvidia\lib', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:C:\Python310\libs', '/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.42.34433\lib\x64', '/LIBPATH:C:\Program Files (x86)\Windows Kits\10\Lib\10.0.26100.0\ucrt\x64', '/LIBPATH:C:\Program Files (x86)\Windows Kits\10\Lib\10.0.26100.0\um\x64', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:H:\AI-video-onekey-1229\Python\libs', '/LIBPATH:C:\Python310\libs', '/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.42.34433\lib\x64', '/LIBPATH:C:\Program Files (x86)\Windows Kits\10\Lib\10.0.26100.0\ucrt\x64', '/LIBPATH:C:\Program Files (x86)\Windows Kits\10\Lib\10.0.26100.0\um\x64', 'cuda.lib', '/OUT:C:\Users\stanl\AppData\Local\Temp\tmpqlw46heu\__triton_launcher.cp310-win_amd64.pyd']' returned non-zero exit status 2.

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information

You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True


zer0int commented Jan 1, 2025

@heystanlee
Install CUDA toolkit 12.X: https://developer.nvidia.com/cuda-downloads
Install latest stable PyTorch: https://pytorch.org/

If that does not help, you may have to upgrade Triton. Let me know if you need additional help!
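
For reference, a quick way to check whether a CUDA toolkit with cuda.h is actually visible on your machine is something like the sketch below (illustrative only; CUDA_PATH is the environment variable the NVIDIA installer normally sets on Windows, and Triton's MSVC build needs the toolkit's include directory reachable):

    # Diagnostic sketch: report torch / CUDA versions and check that
    # cuda.h exists under the installed CUDA toolkit.
    import os
    import shutil
    from pathlib import Path

    import torch

    print("torch:", torch.__version__, "| torch.version.cuda:", torch.version.cuda)
    print("cuda available:", torch.cuda.is_available())

    print("nvcc on PATH:", shutil.which("nvcc"))

    cuda_path = os.environ.get("CUDA_PATH")  # set by the NVIDIA toolkit installer
    print("CUDA_PATH:", cuda_path)
    if cuda_path:
        cuda_h = Path(cuda_path) / "include" / "cuda.h"
        print("cuda.h present:", cuda_h.exists(), "->", cuda_h)

If cuda.h is missing or CUDA_PATH is unset, reinstalling the CUDA toolkit (and restarting the shell/ComfyUI so the new environment variables are picked up) is the usual fix.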

@heystanlee

Big props to you for helping me! I'm truly grateful.

