I'm new to the field of large language models (LLMs) and I'm really interested in learning how to train and use my own models for qualitative analysis. However, I'm not sure where to start or what resources would be most helpful for a complete beginner. Could anyone provide some guidance and advice on the best way to get started with LLM training and usage? Specifically, I'd appreciate insights on learning resources or tutorials, tips on preparing datasets, common pitfalls or challenges, and any other general advice or words of wisdom for someone just embarking on this journey.

Thanks!

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago) (2 children)

Hmmm, weird. I have a 4090 / Ryzen 5800X3D with 64 GB of RAM and it runs really well. Admittedly it's the 8B model, because the intermediate sizes aren't out yet and 70B simply won't fly on a single GPU.

But it really screams. Much faster than I can read. PS: Ollama is just llama.cpp under the hood.

Edit: Ah, wait, I know what's going wrong here. The 22B-parameter model is probably too big for your VRAM; once it overflows, it gets extremely slow, yes.
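
To make that concrete: a GGUF weight file is roughly parameter count times bits per weight, and Q4-ish quants land around 4.5–5 bits per weight. A back-of-envelope sketch (the bits-per-weight figure is an assumption, not an exact number):

```python
# Rough GGUF weight size: parameters * bits-per-weight / 8.
# ~4.5 bits/weight is an assumed figure for a Q4-style quant.
def weight_size_gb(params_billion: float, bits_per_weight: float = 4.5) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for size in (8, 22, 70):
    print(f"{size}B @ ~Q4: ~{weight_size_gb(size):.1f} GB of weights")
# 8B  -> ~4.5 GB  (fits on most gaming GPUs)
# 22B -> ~12.4 GB (overflows a lot of consumer cards, hence the slowdown)
# 70B -> ~39.4 GB (won't fly on a single GPU)
```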

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago)

It should be split between VRAM and regular RAM, at least if it's a GGUF model. Maybe it's not, and that's what's wrong?
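
If you're loading the GGUF directly instead of going through Ollama, llama.cpp's Python bindings let you control that split yourself via `n_gpu_layers`. A minimal sketch (the model path and layer count are placeholders you'd tune to your VRAM):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=24,  # layers offloaded to VRAM; the rest stay in system RAM
    n_ctx=4096,       # context window; bigger costs more memory
)

out = llm("Give me one sentence on getting started with local LLMs.", max_tokens=64)
print(out["choices"][0]["text"])
```

Setting `n_gpu_layers=-1` pushes every layer onto the GPU, which is exactly where it falls over if the model is bigger than your VRAM.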

[–] [email protected] 1 points 5 months ago (1 children)

What model size is appropriate for 10 GB of VRAM?

[–] [email protected] 2 points 5 months ago

It depends on your prompt/context size too: the more context you use, the more memory you need. Try checking your GPU's memory usage with GPU-Z across different models and scenarios.
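
To put rough numbers on the context part: the KV cache grows linearly with context length. A sketch assuming a Llama-3-8B-like shape (32 layers, 8 KV heads, head dim 128, 16-bit cache); these are estimates, not measurements:

```python
# Approximate KV-cache size: 2 (K and V) * layers * kv_heads * head_dim * bytes, per token.
def kv_cache_gb(n_ctx: int, n_layers: int = 32, n_kv_heads: int = 8,
                head_dim: int = 128, bytes_per_val: int = 2) -> float:
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_val
    return n_ctx * per_token / 1e9

for ctx in (2048, 8192, 32768):
    print(f"n_ctx={ctx:>6}: ~{kv_cache_gb(ctx):.2f} GB of KV cache on top of the weights")
```

So with 10 GB of VRAM, a ~5 GB quantized 8B model plus a long context can already get tight.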