Device error using low_vram_mode on gradio app #244

Open
ChuXiaoyuu opened this issue May 15, 2025 · 1 comment

Comments


ChuXiaoyuu commented May 15, 2025

I get a device error when I use low VRAM mode.
I'm running on a single RTX 3060 with 12 GB of VRAM.

Traceback (most recent call last):
File "/home/chuxiaoyu/anaconda3/envs/512/lib/python3.10/site-packages/gradio/queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
File "/home/chuxiaoyu/anaconda3/envs/512/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "/home/chuxiaoyu/anaconda3/envs/512/lib/python3.10/site-packages/gradio/blocks.py", line 2146, in process_api
result = await self.call_function(
File "/home/chuxiaoyu/anaconda3/envs/512/lib/python3.10/site-packages/gradio/blocks.py", line 1664, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/home/chuxiaoyu/anaconda3/envs/512/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/home/chuxiaoyu/anaconda3/envs/512/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2470, in run_sync_in_worker_thread
return await future
File "/home/chuxiaoyu/anaconda3/envs/512/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 967, in run
result = context.run(func, *args)
File "/home/chuxiaoyu/anaconda3/envs/512/lib/python3.10/site-packages/gradio/utils.py", line 884, in wrapper
response = f(*args, **kwargs)
File "/mnt/d/zcc/PythonProject/Hunyuan3D-2/gradio_app.py", line 290, in generation_all
textured_mesh = texgen_worker(mesh, image)
File "/home/chuxiaoyu/anaconda3/envs/512/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/mnt/d/zcc/PythonProject/Hunyuan3D-2/hy3dgen/texgen/pipelines.py", line 250, in call
multiviews = self.models['multiview_model'](images_prompt, normal_maps + position_maps, camera_info)
File "/mnt/d/zcc/PythonProject/Hunyuan3D-2/hy3dgen/texgen/utils/multiview_utils.py", line 87, in call
mvd_image = self.pipeline(input_images, num_inference_steps=30, **kwargs).images
File "/home/chuxiaoyu/anaconda3/envs/512/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/chuxiaoyu/.cache/huggingface/modules/diffusers_modules/local/pipeline.py", line 364, in call
latents: torch.Tensor = self.denoise(
File "/home/chuxiaoyu/.cache/huggingface/modules/diffusers_modules/local/pipeline.py", line 583, in denoise
prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument tensors in method wrapper_CUDA_cat)
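
The failing call is the torch.cat of the classifier-free-guidance embeddings inside the cached diffusers pipeline: under low VRAM mode (CPU offload) negative_prompt_embeds is apparently left on the CPU while prompt_embeds sits on cuda:0. As a sketch of a local workaround only (not the project's fix; the helper name below is made up for illustration, the tensor names come from the traceback), the two tensors can be forced onto one device before concatenating:

import torch

def cat_cfg_embeds(negative_prompt_embeds: torch.Tensor,
                   prompt_embeds: torch.Tensor) -> torch.Tensor:
    # Sketch, assuming prompt_embeds already lives on the device the
    # denoiser runs on (cuda:0 here). With low_vram_mode / CPU offload
    # the negative embeddings can stay on the CPU, which raises the
    # "Expected all tensors to be on the same device" error above.
    target = prompt_embeds.device
    negative_prompt_embeds = negative_prompt_embeds.to(target)
    return torch.cat([negative_prompt_embeds, prompt_embeds])

The same one-line .to(prompt_embeds.device) could be applied directly in the cached pipeline.py right before the cat shown in the traceback; whether low_vram_mode is meant to offload the text encoder at all is something the maintainers would need to confirm.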

@hadariru

Same problem here!
