this post was submitted on 14 Jan 2025
33 points (90.2% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


I’m doing a lot of coding, and what I would ideally like is a long-context model (128k tokens) that I can throw my whole codebase into.

I’ve been experimenting with Claude, for example, and what usually works well is attaching the whole architecture of a CRUD app along with the most recent docs of the framework I’m using; that’s okay for menial tasks. But I am very uncomfortable sending any kind of data to these providers.

Unfortunately I don’t have a lot of space, so I can’t build a proper desktop. My options are either renting a VPS or going for something small like a Mac Studio. I know speeds aren’t great, but I was wondering if using RAG for the documentation, for example, could help me get decent speeds.
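
Roughly what I have in mind for the RAG part, as a sketch only; the endpoint URL, model name and chunking are placeholders for whatever local server I’d end up running (e.g. llama.cpp’s OpenAI-compatible server):

```python
# Rough sketch: retrieve the most relevant framework-doc chunks and send only those
# to a local model behind an OpenAI-compatible endpoint (URL/model are placeholders).
import numpy as np
import requests
from sentence_transformers import SentenceTransformer

EMBEDDER = SentenceTransformer("all-MiniLM-L6-v2")      # small embedder, fine on CPU
LLM_URL = "http://localhost:8080/v1/chat/completions"   # placeholder local endpoint

def build_index(chunks: list[str]) -> np.ndarray:
    # Embed every doc chunk once; normalized so dot product == cosine similarity.
    return EMBEDDER.encode(chunks, normalize_embeddings=True)

def ask(question: str, chunks: list[str], index: np.ndarray, k: int = 5) -> str:
    q = EMBEDDER.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(index @ q)[::-1][:k]                # k most similar chunks
    context = "\n\n".join(chunks[i] for i in top)
    payload = {
        "model": "local-model",                          # placeholder model name
        "messages": [
            {"role": "system", "content": "Answer using only the provided docs."},
            {"role": "user", "content": f"Docs:\n{context}\n\nQuestion: {question}"},
        ],
    }
    resp = requests.post(LLM_URL, json=payload, timeout=600)
    return resp.json()["choices"][0]["message"]["content"]
```

The idea being that only the top few doc chunks go into the prompt instead of the full docs, which keeps the context, and therefore the prompt-processing time, small.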

I’ve read that Macs become very slow, especially with larger contexts. I’m not fully convinced, but I could probably get a new one at 50% off as a business expense, so the Apple tax isn’t as much of an issue as the concern about speed.

Any ideas? Are there other mini PCs available that might have a better architecture for this? I tried researching but couldn’t find much.

top 29 comments
[–] [email protected] 1 points 3 hours ago (1 children)

The context cache doesn't take up too much memory compared to the model. The main benefit of having a lot of VRAM is that you can run larger models. I think you're better off buying a 24 GB Nvidia card from a cost and performance standpoint.
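
To put rough numbers on it, here’s a back-of-envelope for the KV cache, using illustrative dimensions for a 7B-class model with grouped-query attention (check the model config for the real values):

```python
# Back-of-envelope KV-cache size: 2 (K and V) * layers * kv_heads * head_dim * bytes/elem, per token.
# Illustrative 7B-class dimensions with grouped-query attention, fp16 cache.
layers, kv_heads, head_dim, bytes_per_elem = 32, 8, 128, 2
per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem   # 131,072 B = 128 KiB per token
cache_32k = per_token * 32_768 / 2**30                          # ~4 GiB at 32k context
weights   = 7e9 * 2 / 1e9                                       # ~14 GB of fp16 weights
print(f"{per_token // 1024} KiB/token, {cache_32k:.1f} GiB at 32k, vs ~{weights:.0f} GB of weights")
```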

[–] [email protected] 1 points 1 hour ago

Yeah, I was thinking about running something like Code Qwen 72B, which apparently requires 145 GB of RAM to run the full model. But if it’s super slow, especially with a large context, and I can only run small models at acceptable speed anyway, it may be worth going NVIDIA just for CUDA.
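
For reference, a back-of-envelope on where that 145 GB comes from and what quantization would bring it down to (rough bits per weight, not exact GGUF file sizes):

```python
# Rough weight footprint for a 72B-parameter model at different precisions.
# Real GGUF files differ a bit (some tensors stay at higher precision), so ballpark only.
params = 72e9
for name, bits_per_weight in [("fp16", 16), ("Q8_0", 8), ("Q4_K_M", 4.5)]:
    print(f"{name}: ~{params * bits_per_weight / 8 / 1e9:.0f} GB")
# fp16 ~144 GB, Q8_0 ~72 GB, Q4_K_M ~41 GB, plus the KV cache on top
```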

[–] [email protected] 1 points 5 hours ago* (last edited 5 hours ago) (1 children)

There are some videos on YouTube of people running local LLMs on the newer M4 chips, which have pretty good AI performance. Obviously a 5090 is going to destroy them in raw compute power, but the large unified memory on Apple Silicon is nice.

That being said, there are plenty of small ITX cases at about 13-15L that can fit a large Nvidia GPU.

[–] [email protected] 1 points 1 hour ago* (last edited 1 hour ago)

Thanks! I hadn’t thought of YouTube at all, but that’s super helpful. I guess it’ll help me decide whether the extra RAM is worth it, considering that inference will be much slower if I don’t go NVIDIA.

[–] [email protected] 6 points 18 hours ago* (last edited 18 hours ago) (1 children)

I do this on my Ultra. Token speed is not great, depending on the model of course; a lot of codebases are optimized for Nvidia and don't even use the native Mac GPU without modifying the code, defaulting to CPU instead. I've had to modify about half of what I run.
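
The typical modification is just replacing a hard-coded CUDA device with something that falls back to Apple's MPS backend, roughly:

```python
import torch

# Many repos hard-code "cuda" and silently fall back to CPU on a Mac.
# The usual fix: pick a device like this and move models/tensors with .to(device).
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():   # Apple-Silicon GPU via Metal
    device = torch.device("mps")
else:
    device = torch.device("cpu")
```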

YMMV, but I find it's actually cheaper to just use a hosted service.

If you want some specific numbers, let me know.

[–] [email protected] 1 points 18 hours ago

Interesting, is there any kind of model you could run at reasonable speed?

I guess over time it could amortize, but if the usability sucks, that might make it not worth it. OTOH, I really don’t want to send my data to any company.

[–] [email protected] 3 points 17 hours ago (1 children)

If you enjoy waiting around, sure

[–] [email protected] 1 points 16 hours ago (1 children)
[–] [email protected] 2 points 16 hours ago (2 children)

Then don't go with an Apple chip. They're impressive for how little power they consume, but any 50-watt chip will get absolutely destroyed by a 500-watt GPU; even one from almost a decade ago will beat it.

And you'll save money to boot, if you don't count your power bill

[–] [email protected] 5 points 14 hours ago (1 children)

But any 50-watt chip will get absolutely destroyed by a 500-watt GPU

If you are memory-bound (and since OP's talking about 192GB, it's pretty safe to assume they are), then it's hard to make a direct comparison here.

You'd need 8 high-end consumer GPUs to get 192GB. Not only is that insanely expensive to buy and run, but you won't even be able to support it on a standard residential electrical circuit, or any consumer-level motherboard. Even 4 GPUs (which would be great for 70B models) would cost more than a Mac.

The speed advantage you get from discrete GPUs rapidly disappears as your memory requirements exceed VRAM capacity. Partial offloading to GPU is better than nothing, but if we're talking about standard PC hardware, it's not going to be as fast as Apple Silicon for anything that requires a lot of memory.
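
A rough way to see it: generating a token means reading essentially all of the weights from memory once, so tokens per second is roughly memory bandwidth divided by model size. Ballpark numbers, not benchmarks:

```python
# Generation is roughly memory-bandwidth bound: each token reads all the weights once.
# Ballpark bandwidths: M2 Ultra unified memory ~800 GB/s, dual-channel DDR5 ~90 GB/s.
model_gb = 40  # ~70B model quantized to ~4 bits
for name, bandwidth_gbs in [("M2 Ultra", 800), ("dual-channel DDR5", 90)]:
    print(f"{name}: ~{bandwidth_gbs / model_gb:.0f} tokens/s upper bound")
# ~20 tok/s vs ~2 tok/s once the model spills out of VRAM into system RAM
```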

This might change in the near future as AMD and Intel catch up to Apple Silicon in terms of memory bandwidth and integrated NPU performance. Then you can sidestep the Apple tax, and perhaps you will be able to pair a discrete GPU and get a meaningful performance boost even with larger models.

[–] [email protected] 1 points 4 hours ago

Again, you'd be waiting around all day

[–] [email protected] 2 points 14 hours ago (1 children)

The power bill side isn't clear-cut either. The longer processing time on slower chips sometimes ends up resulting in higher costs. It's surprisingly not as simple as "the lower-wattage chip is cheaper to operate".
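
A quick illustration with made-up throughput numbers:

```python
# Energy per job = power * time; a slower low-power chip can still cost more per job.
# Throughput figures below are made up purely to illustrate the point.
tokens = 1_000
energy_low_power  = 50  * (tokens / 5)     # 50 W chip at 5 tok/s   -> 10,000 J
energy_high_power = 500 * (tokens / 100)   # 500 W GPU at 100 tok/s ->  5,000 J
print(energy_low_power, energy_high_power)
```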

[–] [email protected] 1 points 4 hours ago

Good point!

[–] [email protected] 4 points 18 hours ago* (last edited 18 hours ago) (3 children)

I've not run such things on Apple hardware, so can't speak to the functionality, but you'd definitely be able to do it cheaper with PC hardware.

The problem with this kind of setup is going to be heat. There are definitely cheaper mini PCs, but I wouldn't think they have the space for this much memory AND a GPU, so you'd maybe be looking at an AMD APU/NPU combo. You could easily build something about the size of a game console that does this for maybe $1.5k.

[–] [email protected] 2 points 11 hours ago (1 children)

you'd definitely be able to do it cheaper with PC hardware.

You can get a GPU with 192GB VRAM for less than a Mac? Sign me up please.

[–] [email protected] 2 points 9 hours ago (2 children)

An AMD APU uses system RAM as its VRAM, so... yeah. Same goes for an NPU.

[–] [email protected] 1 points 7 hours ago (1 children)

And what is the memory bandwidth on these APUs?

[–] [email protected] 0 points 5 hours ago* (last edited 5 hours ago) (1 children)

As fast as it gets to the CPU. That should be pretty obvious.

[–] [email protected] 1 points 52 minutes ago

Which is how fast?

[–] [email protected] 1 points 8 hours ago

Up to half of system RAM*

[–] [email protected] 10 points 18 hours ago (1 children)

For context length, VRAM is important; you can't split a context across memory pools, so it would be limited to maybe 16 GB. With M series you can have a lot more space, since RAM and VRAM are the same thing, but it's RAM at Apple prices. You can get a 24+ GB setup way cheaper than some Nvidia server card, though.
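
With a llama.cpp-style runner, the split is usually configured roughly like this (llama-cpp-python parameter names; the path and values are placeholders):

```python
# Sketch with llama-cpp-python: layers that fit go to VRAM, the rest stay in system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-model-q4.gguf",  # placeholder path
    n_ctx=16_384,       # context window; its KV cache has to fit alongside the offloaded layers
    n_gpu_layers=35,    # how many layers to offload to the GPU; the remainder runs from RAM
)
```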

[–] [email protected] 4 points 18 hours ago

Yeah, the VRAM of the Mac M series is very attractive for running models at full context length, and the memory bandwidth is quite good for token generation compared to the price, power consumption and heat of Nvidia GPUs.

Since I’ll have to put this in my kitchen/living room, that’d be a big plus, but I don’t know how well prompt processing would work if I send over something like 80k tokens.

[–] [email protected] 3 points 18 hours ago (1 children)

I’d honestly be open to that, but wouldn’t an AMD setup take up a lot of space and consume lots of power / be loud?

It seems like, in terms of price and speed, the Macs suck compared to other options, but if you don’t have a lot of space and don’t want to hear an airplane engine constantly, I’m wondering what the options are.

[–] [email protected] -3 points 18 hours ago* (last edited 12 hours ago) (2 children)

~~I just looked, and the Mac Mini maxes out at 24 GB anyway. Not sure where you got the idea of 196 GB from.~~ NVM, you said M2 Ultra.

Look, you have two choices. Just pick one. Whichever is more cost effective and works for you is the winner. Talking it down to the Nth degree here isn't going to help you with the actual barriers to entry you've put in place.

[–] [email protected] 2 points 17 hours ago

The Mac Mini with the M4 Pro can be ordered with up to 64 GB of shared memory.

[–] [email protected] 2 points 17 hours ago (2 children)

I understand what you’re saying, but I come to this community because I like getting more input, hearing about others’ experiences, and potentially learning about things I didn’t know. I wouldn’t ask specifically in this community if I didn’t want to optimize my setup as much as I can.

[–] [email protected] 3 points 17 hours ago (1 children)

Here's a quick idea of what you'd want in a PC build https://newegg.io/2d410e4

[–] [email protected] 1 points 16 hours ago

Thanks, that’s very helpful! Will look into that type of build

[–] [email protected] 1 points 17 hours ago

You can have a slightly bigger package in PC form doing 4x the work for half the price. That's the gist.