this post was submitted on 05 Nov 2023
Technology
It isn't hedging on anything. It's already here, it already works. I run an LLM on my home computer, using open-source code and commodity hardware. I use it for actual real-world problems and it helps me solve them.
At this point the ones who are calling it a "fantasy" are the delusional ones.
By "it's already here, and it already works," you mean guessing the next token? That's not really intelligence in any sense, let alone the classical sense. And any allegedly real-world problem you're solving with it isn't a real-world problem; it's likely a problem you could solve with a text template.
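For what it's worth, "guessing the next token" can be illustrated with a toy bigram model — a deliberate oversimplification (real LLMs are transformers predicting over subword tokens, not word-frequency tables), but it shows the basic idea of predicting what comes next:

```python
# Toy illustration of next-token prediction: a bigram model that picks
# the most frequent follower of the previous word. This is NOT how a
# real LLM works internally; it only sketches the "guess the next
# token" framing being argued about above.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words tend to follow it."""
    followers = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def next_token(followers, word):
    """Greedily 'guess the next token': the most common follower."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(next_token(model, "the"))  # "cat" follows "the" twice, "mat" once
```

A real model does the same thing in spirit — score candidate continuations and sample one — just over billions of learned parameters instead of a frequency table.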
It works for what I need it to do for me. I don't really care what word you use to label what it's doing, the fact is that it's doing it.
If you think LLMs could be replaced with a "text template" you are completely clueless.
I'm not sure you understand what the LLM is doing, how support responses have been optimized over the decades, or even how "AI" responses have worked for the past couple of decades. But I'm glad you've got an auto-responder that works for you.