An error occurs (module 'torch.distributed' has no attribute 'ReduceOp') #2674
Comments
Solved the problem by using torch 2.5.0 (not 2.5.0a0+872d972e41.nv24.8), but a new error appears: /usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py:128: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
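The FutureWarning above suggests migrating from TRANSFORMERS_CACHE to HF_HOME. A minimal sketch of doing that before importing transformers (the fallback cache path here is only an illustrative default, not taken from this report):

```python
import os

# Migrate the deprecated variable to HF_HOME before transformers is imported,
# so the FutureWarning is not triggered.
cache = os.environ.pop("TRANSFORMERS_CACHE", None)
# Fall back to the conventional Hugging Face cache location (assumption).
os.environ.setdefault("HF_HOME", cache or os.path.expanduser("~/.cache/huggingface"))
```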
Solved by this issue (NVlabs/VILA#160), but the model result is not good. python3 run.py result:
Jetson AGX Orin 64G version

Platform:
- Machine: aarch64
- System: Linux
- Distribution: Ubuntu 22.04 Jammy Jellyfish
- Release: 5.15.148-tegra
- Python: 3.10.12

Hardware:
- Model: NVIDIA Jetson AGX Orin Developer Kit
- 699-level Part Number: 699-13701-0005-500
- P-Number: p3701-0005
- Module: NVIDIA Jetson AGX Orin (64GB RAM)
- SoC: tegra234
- CUDA Arch BIN: 8.7
- L4T: 36.4.0
- JetPack: 6.1

Libraries:
- CUDA: 12.6.68
- cuDNN: 9.3.0.75
- TensorRT: 10.3.0.30
- VPI: 3.2.4
- Vulkan: 1.3.204
- OpenCV: 4.8.0 (with CUDA: NO)

Hostname: ubuntu
TensorRT-LLM version: 0.12.0-jetson
Trying the multimodal VILA demo.
When setting up the environment (installing deepspeed), an error occurs: module 'torch.distributed' has no attribute 'ReduceOp'.
https://forums.developer.nvidia.com/t/module-torch-distributed-has-no-attribute-reduceop/256581/5 says PyTorch 1.11 is needed,
but the TensorRT-LLM v0.12.0-jetson branch needs to run on JetPack 6.1, which requires PyTorch 2.5.
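To narrow down whether the failure is the import itself or the missing attribute, a small generic probe can help; on the failing setup, `has_attr("torch.distributed", "ReduceOp")` would return False. This is a diagnostic sketch, not a fix, and the helper name is hypothetical:

```python
import importlib

def has_attr(module_name: str, attr: str) -> bool:
    """Return True if the module imports cleanly and exposes the attribute."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        # Module is missing entirely, a different failure mode than a
        # present module lacking the attribute.
        return False
    return hasattr(mod, attr)
```

Running this against `"torch.distributed"` / `"ReduceOp"` in both the NVIDIA-built wheel and the stock 2.5.0 wheel would show whether the distributed submodule was compiled out of the Jetson build.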