this post was submitted on 15 Jul 2024
16 points (94.4% liked)

Selfhosted


I switched from llamacpp to koboldcpp. Koboldcpp is really fast because it can use the GPU. The problem is that I'm having a hard time getting it to generate long enough outputs.

"write an essay about the history of the moon. It needs to be at least 500 words" for example is a prompt where the same model will give me an output that's actually that long on llamacpp. Koboldcpp never gives me more than about 70 words per response. Pressing enter to make the ai continue writing or asking it to continue doesn't work as well in my koboldcpp setup as it does on llamacpp. I've set the tokens to generate to 512, the highest number. I've set the context tokens to 4096.

What else can I do to try to get longer responses?

[–] [email protected] 5 points 4 months ago (1 children)

llama.cpp uses the GPU if you compile it with GPU support and tell it to use the GPU.
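As an illustration (my addition, not from the comment), GPU offloading through the llama-cpp-python bindings looks roughly like this; the model path is a placeholder and the build flag varies by version and backend:

```python
# Install with GPU support first, e.g. (flag name differs across versions/backends):
#   CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./model.gguf",  # placeholder path to your GGUF model
    n_gpu_layers=-1,            # offload all layers to the GPU
    n_ctx=4096,                 # context window, matching the 4096 above
)

out = llm(
    "Write an essay about the history of the moon. It needs to be at least 500 words.",
    max_tokens=512,             # cap on tokens generated in this call
)
print(out["choices"][0]["text"])
```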

Never used koboldcpp, so I don't know why it would give you shorter responses if both the model and the prompt are the same (also assuming you've generated multiple times, and it's always the same). If you don't want to use Discord to visit the official koboldcpp server, you might get more answers from a more LLM-focused community such as [email protected]

[–] [email protected] 2 points 4 months ago

Cool, I didn't know llamacpp could do GPU acceleration at all. I'm going to look into that.