I switched from llama.cpp to KoboldCpp. KoboldCpp is really fast because it can use the GPU. The problem is that I'm having a hard time getting it to generate long enough outputs.

"write an essay about the history of the moon. It needs to be at least 500 words" for example is a prompt where the same model will give me an output that's actually that long on llamacpp. Koboldcpp never gives me more than about 70 words per response. Pressing enter to make the ai continue writing or asking it to continue doesn't work as well in my koboldcpp setup as it does on llamacpp. I've set the tokens to generate to 512, the highest number. I've set the context tokens to 4096.

What else can I do to try to get longer responses?

[email protected] 5 points 4 months ago

llama.cpp uses the GPU if you compile it with GPU support and you tell it to use the GPU.
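If you end up doing it through the Python bindings, something like this is the general shape (a sketch assuming llama-cpp-python built with a GPU backend such as CUDA; the model path is a placeholder):

```python
from llama_cpp import Llama

# Load a GGUF model and offload layers to the GPU.
llm = Llama(
    model_path="/models/your-model.gguf",  # placeholder: point this at your own GGUF file
    n_gpu_layers=-1,                       # -1 offloads as many layers as possible
    n_ctx=4096,                            # context window
)

out = llm(
    "Write an essay about the history of the moon. It needs to be at least 500 words.",
    max_tokens=512,                        # upper bound on generated tokens per call
)
print(out["choices"][0]["text"])
```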

I've never used KoboldCpp, so I don't know why it would give you shorter responses if both the model and the prompt are the same (also assuming you've generated multiple times and it's always the same). If you don't want to use Discord to visit the official KoboldCpp server, you might get more answers from a more LLM-focused community such as [email protected]

[email protected] 2 points 4 months ago

Cool, I didn't know llama.cpp could do GPU acceleration at all. I'm going to look into that.

[email protected] 3 points 4 months ago

You're part of the way there by setting the token count higher. A larger context makes the model "remember" more, which helps it generate responses up to that token count.

If you haven't already, go into the settings menu and make sure "Continue bot responses" is turned on. If it is, pressing the submit button with no input should make the bot add onto what it output before.

[email protected] 2 points 4 months ago (last edited 4 months ago)

I think I might be on to something that contributes to the problem. The built-in "KoboldGPT chat" option puts some example queries in its context memory. They aren't very long responses, so I think the model is just seeing those and using them as a guideline for how much to say, which results in shorter answers.

If I use the "New chat" option instead of "KoboldGPT chat", nothing is in the context: no prompt and no memory. This way, when I tell it to write 500 words of crap, it doesn't quite write that much, but it's a lot better than before. Pressing enter to make it generate more text works more often this way too.

[email protected] 1 point 4 months ago

https://old.reddit.com/r/KoboldAI/comments/163jfmo/more_than_512_tokens_possible/

Nevermind, just realised you can type in the token amount.

tries it

Yeah. There's a slider, and if you enter a number outside its accepted range it turns red, but it still lets you use that number. It worked with Amount to Generate = 1024 and Max Tokens (which also needs to be increased) set to 2048 in a quick test.

[email protected] 1 point 4 months ago

Is max tokens different from context size?

Might be worth keeping in mind that the generated tokens count against the context, so if you set it to 1k with a 4k context, you only get 3k left for the character card and chat history. I think I usually have it set to 400 tokens or something, and use TGW's (text-generation-webui's) continue button in case a long response gets cut off.
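Rough sketch of the budgeting, in case it's useful (assumes you already have the prompt/history as a list of token IDs from whatever tokenizer your backend uses; the names here are made up for illustration):

```python
def fit_history(history_tokens: list[int], n_ctx: int = 4096, amount_to_generate: int = 1024) -> list[int]:
    """Trim the oldest history so the prompt plus the generated tokens fit in the context."""
    prompt_budget = n_ctx - amount_to_generate  # e.g. 4096 - 1024 = 3072 tokens left for prompt/history
    return history_tokens[-prompt_budget:]      # keep only the most recent tokens within budget
```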

[email protected] 2 points 4 months ago

Is max tokens different from context size?

No. Same thing. If you hover over the question mark by "Max Tokens" in the Kobold AI Web UI:

"Max number of tokens of context to submit to the AI for sampling. Make sure this is higher than Amount to Generate. Higher values increase VRAM/RAM usage."