this post was submitted on 14 Jan 2025

Selfhosted

I’m doing a lot of coding, and what I would ideally like to have is a long-context model (128k tokens) into which I can throw my whole codebase.

I’ve been experimenting with Claude, for example, and what usually works well is attaching the whole architecture of a CRUD app along with the most recent docs of the framework I’m using; that’s okay for menial tasks. But I am very uncomfortable sending any kind of data to these providers.

Unfortunately I don’t have a lot of space, so I can’t build a proper desktop. My options are either renting a VPS or going for something small like a Mac Studio. I know speeds aren’t great, but I was wondering whether using RAG for the documentation, for example, could help me get decent speeds.
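To illustrate the RAG idea: instead of pasting all the framework docs into the prompt, you retrieve only the chunks relevant to the current question, which keeps the context (and the prompt-processing time) small. Real setups use embedding models; this keyword-overlap toy and its example docs are just a sketch of the principle:

```python
import re

def words(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(chunks, query, k=2):
    """Return the k chunks sharing the most words with the query."""
    q = words(query)
    return sorted(chunks, key=lambda c: -len(q & words(c)))[:k]

docs = [
    "To define a route, use the @app.route decorator.",
    "Database migrations are run with the migrate command.",
    "Templates live in the templates/ directory.",
]
# Only the route-related chunk would go into the prompt:
print(retrieve(docs, "how do I add a route", k=1))
```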

I’ve read that Macs become very slow, especially with larger contexts. I’m not fully convinced, but I could probably get a new one at 50% off as a business expense, so the Apple tax isn’t as much of an issue as the concern about speed.

Any ideas? Are there other mini PCs with an architecture better suited to this? I tried researching but couldn’t find much.

Edit: I found some stats on GitHub on different models: https://github.com/ggerganov/llama.cpp/issues/10444

Based on that I also conclude that you’re gonna wait forever if you work with a large codebase.
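The back-of-envelope math behind that conclusion: prompt-processing time scales linearly with context size. The throughput figures below are illustrative assumptions, not benchmarks from that issue:

```python
def prefill_minutes(context_tokens, prompt_tokens_per_sec):
    """Minutes spent processing the prompt before the first output token."""
    return context_tokens / prompt_tokens_per_sec / 60

# Assumed speeds; check the llama.cpp issue for real hardware numbers.
print(prefill_minutes(100_000, 100))   # ~16.7 min at 100 tok/s prefill
print(prefill_minutes(100_000, 1000))  # ~1.7 min at 1000 tok/s prefill
```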

[–] [email protected] 1 points 3 hours ago* (last edited 3 hours ago) (2 children)

Yeah, I found some stats now, and indeed you’re gonna wait something like an hour for processing if you throw 80-100k tokens into a powerful model. With APIs that kinda works instantly; not surprising, but just for comparison. Bummer.

[–] [email protected] 1 points 1 hour ago

Anyway, the important thing is "TOPS", i.e. trillions of operations per second. Having enough RAM is important, but if you don't have a fast processor then you're wasting RAM when you could just stream the weights from a fast SSD.

One such case is when your system can't handle more than 50 TOPS, like the Apple M-series systems. Try an old GPU and enjoy thousands of TOPS.
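On the RAM-vs-SSD point: a common rule of thumb is that every generated token has to read the whole model once, so generation speed is capped by bandwidth divided by model size, wherever the weights live. All figures below are illustrative assumptions, not measurements:

```python
def decode_tokens_per_sec(bandwidth_gb_s, model_size_gb):
    """Rough upper bound: each token streams all weights once."""
    return bandwidth_gb_s / model_size_gb

# Assumed numbers for a ~40 GB quantized model:
print(decode_tokens_per_sec(7, 40))    # NVMe SSD ~7 GB/s -> ~0.2 tok/s
print(decode_tokens_per_sec(400, 40))  # unified memory ~400 GB/s -> 10 tok/s
```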

[–] [email protected] 1 points 1 hour ago* (last edited 1 hour ago)

Application Programming Interface; are you talking about something on the internet? On a GPU driver? On your phone?

Then also, what size model are you using? Defined with int32? fp4? Somewhere in between? That's where RAM requirements come in.
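The RAM math being hinted at here: the weights alone take parameters × bytes per parameter, and the datatype decides the bytes. KV cache and runtime overhead come on top; the model sizes below are just examples, not recommendations:

```python
# Bytes per parameter follow directly from the datatype width.
BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "q8": 1.0, "q4": 0.5}

def weight_gb(params_billions, dtype):
    """GB of RAM for the weights alone (no KV cache, no overhead)."""
    return params_billions * 1e9 * BYTES_PER_PARAM[dtype] / 1e9

print(weight_gb(7, "fp16"))  # 14.0 GB
print(weight_gb(70, "q4"))   # 35.0 GB
```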

I get that you're trying to do a mic drop or something, but you're not being very clear