If it's just about self-hosting and not training, ROCm works perfectly fine for that. I self-host DeepSeek R1 32b and FLUX.1-dev on my 7900 XTX.
You even get more VRAM for cheaper.
I'm curious. Say you're getting a new computer, putting Debian on it, and you want to run e.g. DeepSeek via ollama in a container (e.g. Docker or Podman) and also play games. How easy or difficult is it?
I know that for NVIDIA you install the (closed, official) drivers, set up the container ensuring you get GPU passthrough, and thanks to CUDA coming with the driver, you're pretty much good to go. Is it the same for AMD? Do you "just" need to install another package, or is there more tinkering involved?
On the host system, you don't need to do anything. AMDGPU and Mesa are included on most distros.
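As a quick sanity check, the device nodes ROCm uses should already exist on the host with the in-kernel amdgpu driver:

```sh
# Both should be present out of the box
ls -l /dev/kfd /dev/dri/
```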
For LLMs you can go the easy route and just install the Alpaca flatpak and the AMD addon. It will work out of the box and uses ollama in the background.
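If it helps, on Flathub that's just two commands (the addon ID below is my assumption; `flatpak search Alpaca` will show the current one):

```sh
flatpak install flathub com.jeffser.Alpaca
flatpak install flathub com.jeffser.Alpaca.Plugins.AMD   # ROCm/AMD addon (ID may differ)
```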
If you need a Docker container for it: AMD provides the handy `rocm/dev-ubuntu-${UBUNTU_VERSION}:${ROCM_VERSION}-complete` images. They contain all the required ROCm dependencies and runtimes, and you can just install your stuff on top of them.

As for GPU passthrough, all you need to do is add a device link for `/dev/kfd` and `/dev/dri` and you are set.
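For example, in a docker-compose.yml you just add this (a minimal sketch; the service name and image tag are placeholders):

```yaml
services:
  comfyui:                                    # placeholder service name
    image: rocm/dev-ubuntu-22.04:6.2-complete # placeholder image/tag
    devices:
      - /dev/kfd   # ROCm compute interface
      - /dev/dri   # GPU render nodes
```

No NVIDIA-style container toolkit is needed; passing those two device nodes through is enough.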
Building ComfyUI from scratch with ROCm doesn't take much more than a short Dockerfile either. The user/group commands in mine are only there to get the container groups to align with my Fedora host system.
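A minimal sketch of such a Dockerfile, assuming the upstream ComfyUI repo and the ROCm wheel index for PyTorch (base image tag, GIDs, and the ROCm version are illustrative):

```dockerfile
FROM rocm/dev-ubuntu-22.04:6.2-complete

# In case the base image lacks them (a no-op if already installed)
RUN apt-get update \
 && apt-get install -y --no-install-recommends git python3 python3-pip \
 && rm -rf /var/lib/apt/lists/*

# Align container groups with the host so /dev/kfd and /dev/dri are accessible;
# check `getent group video render` on your host for the real GIDs.
ARG VIDEO_GID=39
ARG RENDER_GID=105
RUN groupmod -g ${VIDEO_GID} video \
 && (groupmod -g ${RENDER_GID} render || groupadd -g ${RENDER_GID} render) \
 && useradd -m -u 1000 -G video,render comfy

USER comfy
WORKDIR /home/comfy
RUN git clone https://github.com/comfyanonymous/ComfyUI.git
WORKDIR /home/comfy/ComfyUI

# Swap in the ROCm build of PyTorch, then install ComfyUI's remaining deps
RUN pip3 install --no-cache-dir torch torchvision torchaudio \
      --index-url https://download.pytorch.org/whl/rocm6.2 \
 && pip3 install --no-cache-dir -r requirements.txt

EXPOSE 8188
CMD ["python3", "main.py", "--listen", "0.0.0.0"]
```

Build with `docker build -t comfyui-rocm .` and run it with the same two device links as in the compose snippet above.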
This is very good to know. I'd read that ROCm can be a pain to get up and running, but that was months ago and this space is moving fast. I may switch over when I can if that's the case. My 3080 is feeling its age already. Thank you!
That used to be the case, yes.
Alpaca pretty much allows running LLMs out of the box on AMD after installing the ROCm addon in Discover/Software. LM Studio also works perfectly.
Image generation is a little bit more complicated. ComfyUI supports AMD when all ROCm dependencies are installed and the PyTorch version is swapped for the AMD version.
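For reference, "swapping" here just means installing PyTorch from the ROCm wheel index instead of the default CUDA one (pick the rocm version that matches your setup):

```sh
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2
```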
However, ComfyUI provides no prebuilt packages for Linux or AMD right now, so you have to build it yourself. I currently use a simple Docker container for ComfyUI which just takes the AMD ROCm image and installs ComfyUI on top.
Definitely bookmarking this reply. I haven't tried ComfyUI yet, but I've had it starred on GitHub from back when it was fairly new. I'm no stranger to building from source, but I haven't dived into Docker yet, which is becoming more and more of a weakness by the day. Docker is sometimes required by some really cool projects, and I'm missing out.