It could be described that way, but it wouldn't be a very apt metaphor. We aren't simple, stateful input-to-output algorithms, but a confluence of innate tendencies, learned experiences, acquired habits, and unconscious motivations, capable of modifying our own thought processes and behavior on the fly to suit whatever best fits the local context. Our brains encode a model of the world we live in that includes models of ourselves and the other people we interact with, all built in real time from our observations without conscious effort.
I’m not disputing that our intelligence is more sophisticated; rather, I’m suggesting that the “intelligence” in LLMs is not necessarily all that different from ours, just based on different and limited inputs, and trained on vastly narrower data.
But it is, necessarily.
For example, when we make shit up, we're aware that the shit we made up isn't real. LLMs are structurally incapable of distinguishing between the facts they regurgitate and the ones they manufacture from whole cloth.
You didn't have to consume terabytes of text to build a model for how to form sentences like a human; you did that with a few megabytes of overheard conversation, before you were even conscious enough to be aware of it.
There's no model of intelligence simplified to the point of giving LLMs partial credit that wouldn't also grant equivalent credence to the "intelligence" of search engines.