this post was submitted on 09 Oct 2024
9 points (76.5% liked)

Stable Diffusion


I want to buy a new GPU mainly for SD. The machine-learning space moves quickly, so I want to avoid buying a brand-new card only for a fresh model or tool to come out and leave it behind the times. On the other hand, I also want to avoid needlessly spending extra thousands of dollars pretending I can get a 'future-proof' card.

I'm currently interested in SD and training LoRAs (etc.). From what I've heard, the general advice is just to go for maximum VRAM.

  • Is there any extra advice I should know about?
  • Is NVIDIA vs. AMD a critical decision for SD performance?

I'm a hobbyist, so a couple of seconds difference in generation or a few extra hours for training isn't going to ruin my day.
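For a rough sense of why "maximum VRAM" is the standard advice, you can estimate the memory needed just to hold a model's weights from its parameter count and precision. A minimal sketch (the parameter counts are my approximations, not exact figures, and activations, VAE, text encoders, and LoRA training all add overhead on top):

```python
def weights_vram_gb(params_billion: float, bytes_per_param: int) -> float:
    """Approximate VRAM needed just to hold the model weights."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

# Approximate parameter counts (assumptions for illustration):
# SDXL-class models ~3.5B params, FLUX.1-class models ~12B params.
for name, params in [("SDXL-ish", 3.5), ("FLUX-ish", 12.0)]:
    for precision, nbytes in [("fp16", 2), ("fp8", 1)]:
        print(f"{name} {precision}: ~{weights_vram_gb(params, nbytes):.1f} GB")
```

Under those assumptions, a FLUX-sized model at fp16 already wants ~22 GB for weights alone, which is why newer large models push people toward 24 GB cards.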

Some example prices in my region, to give a sense of scale:

  • 16GB AMD: $350
  • 16GB NV: $450
  • 24GB AMD: $900
  • 24GB NV: $2000

edit: prices are for new, haven't explored pros and cons of used GPUs
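One crude way to compare those options is dollars per GB of VRAM, using the example prices above (this ignores speed, driver/tooling support, and resale value, so treat it only as a first-pass filter):

```python
# Example prices from the post: (VRAM in GB, vendor, price in $)
cards = [(16, "AMD", 350), (16, "NV", 450), (24, "AMD", 900), (24, "NV", 2000)]

for vram, vendor, price in cards:
    print(f"{vram}GB {vendor}: ${price / vram:.2f}/GB")
```

By that measure the 24GB NV card costs roughly four times as much per GB as the 16GB AMD card, which is where the NVIDIA-vs-AMD software-support question starts to matter.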

[–] [email protected] 1 points 4 weeks ago (1 children)

My 16 GB 3080 Ti is only annoying with Flux gens right now. Those take around 1.5-2 minutes each and need a lot of iterations, and my laptop heat-saturates running Flux. It could get better if the tools supported splitting the workflow between GPU and CPU like with Textgen, but at the moment it is GPU-only and kinda sucks. Stuff like Pony runs fast enough, at around 20-30 seconds for most models.
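To put those per-image times in perspective, here's the back-of-the-envelope math for a session of 50 generations (the session size is just an assumption for illustration; the per-gen times are the ones from this comment):

```python
def session_minutes(seconds_per_gen: float, n_gens: int = 50) -> float:
    """Total wall time in minutes for a batch of generations."""
    return seconds_per_gen * n_gens / 60

# Flux: ~90-120s per gen; Pony: ~25s per gen.
print(f"Flux: ~{session_minutes(90):.0f}-{session_minutes(120):.0f} min")
print(f"Pony: ~{session_minutes(25):.0f} min")
```

So the same iteration loop that takes an evening on Flux finishes in about 20 minutes on Pony, which is why the per-gen difference matters more than it sounds.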

[–] [email protected] 2 points 4 weeks ago* (last edited 4 weeks ago)

I am using a lot of Pony models at the moment, and Pony v7 looks like it will switch to AuraFlow or FLUX [1], so it's useful to hear your experience with it on a 3080 Ti.