this post was submitted on 07 Nov 2024

Selfhosted


Hello y'all, currently I have an RTX 2060, which I'll be passing down so I can slap a 1060 into my server, but I'd like to weigh some options first.

The 2060 has been pretty good with Linux thus far. I'm a little worried about going to the 30 series - so I'll be accepting affirmations - but I'm curious what any of you think about AMD cards and which one to get. Also, if there's any reason not to use a 1060 for Jellyfin and such, that would be very helpful.

[–] [email protected] 6 points 19 hours ago (2 children)

Nvidia is great in a server; the drivers are a pain but doable. I have a 3000-series card that I use regularly and pass into my Kubernetes cluster. Nvidia on a Linux gaming rig is fine too, but there's more overhead with the drivers.
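For anyone curious what "passing a GPU into Kubernetes" looks like in practice, here's a minimal sketch. It assumes the cluster already runs NVIDIA's device plugin (that's what exposes the `nvidia.com/gpu` resource); the pod and container names are just illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test   # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi"]   # prints the visible GPU if passthrough works
      resources:
        limits:
          nvidia.com/gpu: 1    # requires the NVIDIA device plugin on the node
```

If the pod logs show your card's name, the scheduler and driver plumbing are working.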

AMD is great for gaming, but it doesn't have CUDA, so it's not as useful in a server environment in my experience - at least if you're thinking of doing CUDA workloads like hosting LLMs.

A 1060 will be a noticeable step up in Jellyfin.
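If you run Jellyfin in Docker, handing it the 1060 for NVENC transcoding looks roughly like this Compose fragment (a sketch, assuming the NVIDIA Container Toolkit is installed on the host; paths and names are illustrative):

```yaml
# Illustrative compose snippet for Jellyfin with NVIDIA GPU transcoding
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - ./config:/config      # example paths, adjust to taste
      - ./media:/media
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

You'd still enable NVENC/NVDEC under Jellyfin's hardware acceleration settings in the dashboard.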

[–] [email protected] 2 points 19 hours ago (1 children)

I didn't even realize CUDA carried so much weight for LLMs, thank you!

[–] [email protected] 3 points 18 hours ago

Oh yeah, it's a critical component. And VRAM - in fact, I would only consider running LLMs on a 3000-series or newer card right now; they require quite a bit of VRAM.
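To put a number on "quite a bit": a common back-of-the-envelope estimate (my own heuristic, not from the thread) is parameter count times bytes per parameter, plus some overhead for the KV cache and activations. The 1.2 overhead factor is an assumption:

```python
def estimate_vram_gb(params_billions: float, bytes_per_param: float = 2.0,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight memory (params x precision) plus ~20%
    headroom for KV cache/activations. The 1.2 factor is a guess."""
    return params_billions * bytes_per_param * overhead

# fp16 (2 bytes/param): a 7B model wants roughly 17 GB, past most consumer cards
print(round(estimate_vram_gb(7), 1))
# 4-bit quantized (~0.5 bytes/param): the same model fits in roughly 4 GB plus context
print(round(estimate_vram_gb(7, bytes_per_param=0.5), 1))
```

This is why quantized models are popular on 8-12 GB cards: dropping the precision shrinks the weight memory by 4x.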