ROCm-AI-Installer

A script that automatically installs everything required to run selected AI interfaces on the AMD Radeon 7900XTX. It should also work on 7900XT cards. For other cards, change HSA_OVERRIDE_GFX_VERSION and GFX at the beginning of the script (not tested).
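For illustration, the override pair looks roughly like this (the exact lines and formatting in install.sh may differ; the RDNA2 values below are an untested example):

```bash
# Defaults used for the 7900XTX / 7900XT (RDNA3, gfx1100):
HSA_OVERRIDE_GFX_VERSION=11.0.0
GFX=gfx1100

# Illustrative values for an RDNA2 card such as the RX 6800/6900 XT (not tested):
# HSA_OVERRIDE_GFX_VERSION=10.3.0
# GFX=gfx1030
```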

Info


Note

Ubuntu 24.04.1 is recommended. Version 6.x of the installer has not been tested on older systems.

Test platform:

| Name | Info |
|---|---|
| CPU | AMD Ryzen 7900X3D (iGPU disabled in BIOS) |
| GPU | AMD Radeon 7900XTX |
| RAM | 64GB DDR5 6600MHz |
| Motherboard | ASRock B650E PG Riptide WiFi (3.08) |
| OS | Ubuntu 24.04.1 |
| Kernel | 6.8.0-51-generic |
| ROCm | 6.3.1 |
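To compare your own machine with the test platform, a few standard commands (not part of the installer) can be used:

```bash
lsb_release -d        # Ubuntu release
uname -r              # kernel version
rocminfo | grep gfx   # GPU target, available once ROCm is installed
```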

Text generation

| Name | Environment | Links | Additional information |
|---|---|---|---|
| KoboldCPP | Python 3.12 venv | https://github.com/YellowRoseCx/koboldcpp-rocm | GGML and GGUF models. |
| Text generation web UI | Python 3.12 venv | https://github.com/oobabooga/text-generation-webui<br>https://github.com/ROCm/bitsandbytes.git<br>https://github.com/ROCmSoftwarePlatform/flash-attention<br>https://github.com/turboderp/exllamav2 | ExLlamaV2 and Transformers models. |
| SillyTavern (1.12.9) | Node | https://github.com/SillyTavern/SillyTavern | |
| llama.cpp | C++ | https://github.com/ggerganov/llama.cpp | 1. Put model.gguf into the llama.cpp folder.<br>2. Change the context size in the run.sh file (default: 32768).<br>3. Set the GPU offload layers in the run.sh file (default: 1). A sketch of these settings is shown below the table. |
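As a rough sketch of the llama.cpp settings mentioned above (the actual run.sh written by the installer may differ; -m, -c, and -ngl are standard llama.cpp server flags):

```bash
#!/bin/bash
# Hypothetical sketch of a llama.cpp run.sh:
# -m    model file placed in the llama.cpp folder
# -c    context size (default mentioned above: 32768)
# -ngl  number of layers offloaded to the GPU (default mentioned above: 1)
./build/bin/llama-server -m ./model.gguf -c 32768 -ngl 1
```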

Image generation

| Name | Environment | Links | Additional information |
|---|---|---|---|
| ANIMAGINE XL 3.1 | Python 3.12 venv | https://huggingface.co/spaces/cagliostrolab/animagine-xl-3.1<br>https://huggingface.co/cagliostrolab/animagine-xl-3.1 | |
| ComfyUI | Python 3.12 venv | https://github.com/comfyanonymous/ComfyUI | 1. Prepared workflow examples. |
| Artist | Python 3.12 venv | https://github.com/songrise/Artist/ | |

ComfyUI Addons

| Name | Link | Additional information |
|---|---|---|
| ComfyUI-Manager | https://github.com/ltdrdata/ComfyUI-Manager | Manages ComfyUI nodes. |
| ComfyUI-GGUF | https://github.com/city96/ComfyUI-GGUF<br>https://huggingface.co/city96/t5-v1_1-xxl-encoder-bf16<br>https://huggingface.co/openai/clip-vit-large-patch14<br>https://huggingface.co/black-forest-labs/FLUX.1-schnell | GGUF model loader. |
| ComfyUI-AuraSR | https://github.com/alexisrolland/ComfyUI-AuraSR<br>https://huggingface.co/fal/AuraSR<br>https://huggingface.co/fal/AuraSR-v2 | ComfyUI node for upscaling images. |
| AuraFlow-v0.3 | https://huggingface.co/fal/AuraFlow-v0.3 | Text-to-image model. |
| FLUX.1-schnell GGUF | https://huggingface.co/black-forest-labs/FLUX.1-schnell<br>https://huggingface.co/city96/FLUX.1-schnell-gguf | Text-to-image model. |
| AnimePro FLUX GGUF | https://civitai.com/models/934628?modelVersionId=1053818 | Text-to-image model. |

Video generation

| Name | Environment | Links | Additional information |
|---|---|---|---|
| Cinemo | Python 3.12 venv | https://huggingface.co/spaces/maxin-cn/Cinemo<br>https://github.com/maxin-cn/Cinemo | |

Music generation

| Name | Environment | Links | Additional information |
|---|---|---|---|
| AudioCraft | Python 3.12 venv | https://github.com/facebookresearch/audiocraft | |

Voice generation

| Name | Environment | Links | Additional information |
|---|---|---|---|
| WhisperSpeech web UI | Python 3.12 venv | https://github.com/Mateusz-Dera/whisperspeech-webui<br>https://github.com/collabora/WhisperSpeech<br>https://github.com/ROCmSoftwarePlatform/flash-attention | |
| MeloTTS | Python 3.12 venv | https://github.com/myshell-ai/MeloTTS | |
| MetaVoice | Python 3.12 venv | https://github.com/metavoiceio/metavoice-src<br>https://github.com/metavoiceio/metavoice-src/tree/sidroopdaska/faster_decoding | 1. The script uses the faster_decoding branch.<br>2. Telemetry is disabled by default. |
| F5-TTS | Python 3.12 venv | https://github.com/SWivid/F5-TTS | 1. Remember to select the voice file when using the interface. |
| Matcha-TTS | Python 3.12 venv | https://github.com/shivammehta25/Matcha-TTS | |

3D generation

| Name | Environment | Links | Additional information |
|---|---|---|---|
| TripoSR | Python 3.12 venv | https://github.com/VAST-AI-Research/TripoSR<br>https://github.com/ROCmSoftwarePlatform/flash-attention | 1. It uses PyTorch ROCm, but torchmcubes is built for the CPU. This is still faster than a CPU-only PyTorch setup. |

Tools

| Name | Environment | Links | Additional information |
|---|---|---|---|
| Fastfetch | Shell | https://github.com/fastfetch-cli/fastfetch | Custom Fastfetch configuration with GPU memory info. |

Installation:

Note

First startup after installation of the selected interface may take longer.

Important

This script does not download any models. If the interface does not have defaults, download your own.

Caution

If you update, back up your settings and models. Reinstallation deletes the previous directories.
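For example (hypothetical paths; substitute the installation path and interface you actually use):

```bash
# Hypothetical backup before re-running the installer.
mkdir -p ~/ai-backup
cp -r ~/AI/text-generation-webui/models ~/ai-backup/
```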

1. Add the user to the required groups.

```bash
sudo adduser `whoami` video
sudo adduser `whoami` render
```

2. Reboot.

```bash
sudo reboot
```
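After logging back in, group membership can be checked with an optional command that is not part of the installer:

```bash
# "video" and "render" should appear in the output.
groups
```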

3. Clone the repository.

```bash
git clone https://github.com/Mateusz-Dera/ROCm-AI-Installer.git
```

4. Run the installer from the cloned ROCm-AI-Installer directory.

```bash
bash ./install.sh
```

5. Select the installation path.

6. Select the ROCm installation if you are upgrading or running the script for the first time.

7. Install the selected interfaces.

8. Go to the installation path with the selected interface and run:

```bash
./run.sh
```
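For example (the folder name and installation path below are hypothetical; use the path you selected during installation):

```bash
# Hypothetical: start KoboldCPP installed under ~/AI.
cd ~/AI/koboldcpp
./run.sh
```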