this post was submitted on 21 Nov 2023
44 points (92.3% liked)
Technology
you are viewing a single comment's thread
It refuses more, from what I've seen. Personally, I don't think it's a good idea to become dependent on any commercially hosted model. Open models are a bit behind, but they're getting there.
The problem with open models is that you basically have to run them on your own hardware, and that hardware is not only expensive but also nearly unobtainable.
H100 GPUs are sold by scalpers for $50k with no warranty, and what's worse, that's already an obsolete model. The H200 just can't be purchased at all unless you're filling a datacentre with them.
You can run ollama on a regular laptop.
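For example, here's a minimal sketch of querying a locally running model through ollama's REST API from Python. It assumes ollama is running on its default port (11434) and that you've already pulled a model; I'm using `mistral` as the example name, but any pulled model works:

```python
import json
import urllib.request

# Ollama's local HTTP API listens on port 11434 by default.
# Assumes you've already run `ollama pull mistral` (any pulled model works).
payload = json.dumps({
    "model": "mistral",
    "prompt": "Explain what a GPU does in one sentence.",
    "stream": False,  # return one complete response instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```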
It's also insufferably slow, and the answers are ... well ... not exactly up to GPT-4 level, to say the least.