You are correct that there is a misunderstanding here, but it is your misunderstanding of neural networks, not mine of memory.
An LLM is a mathematical model. It does not know anything about Paris, not in the way a human does or even the way Wikipedia does. It knows which words tend to appear in response to questions about Paris. That is not the same as knowing anything about Paris; it does not know what Paris is.
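To make that concrete, here is a minimal toy sketch in Python. It is purely illustrative (a real LLM uses a trained neural network over token embeddings, not raw bigram counts), but it shows the kind of thing a language model actually learns about "Paris": which tokens tend to follow which, with no grounding in what any token refers to.

```python
from collections import Counter, defaultdict

# Toy corpus: the model will "know" about Paris only as co-occurrence counts.
corpus = ("paris is the capital of france . "
          "paris is a city . "
          "the capital of france is paris .").split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word, purely from counts."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("paris"))    # 'is'  -- statistics, not knowledge of Paris
print(predict_next("capital"))  # 'of'
```

This toy "knows" that "of" follows "capital" in exactly the same sense an LLM "knows" facts about Paris: as distributional statistics over text.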
I agree with you that the word "Paris" exists inside it, but I disagree that this amounts to information in any human sense.
You have apparently been misled into believing that a word-generation tool contains any information at all beyond word weights. Every word it contains is exactly as meaningless to it as every other word.
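If you want to see what "word weights" look like, here is a sketch. The embedding values below are random stand-ins (a trained model's values are tuned, not random), but the point is the same: to the model, a word is just an array of floats.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["paris", "france", "newt", "purple"]

# Hypothetical 8-dimensional embeddings; real models use learned values
# in hundreds or thousands of dimensions, but they are still just floats.
embedding = {word: rng.normal(size=8) for word in vocab}

# To the model, "paris" IS this vector. Nothing in it says "city",
# "France", or anything a human would call meaning.
print(embedding["paris"])
```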
Brains do not store data this way. Firstly, artificial neural networks are mathematical approximations of neurons; they are not neurons and do not have the same properties as neurons, even in aggregate. Secondly, brains contain thoughts, memories, and consciousness. Even if those are representable in a vector space similar to an LLM's (a debatable conjecture), the contents of that vector space are as different as newts are from the color purple.
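For a sense of what "mathematical approximation of a neuron" means, here is what a single artificial neuron amounts to: a weighted sum pushed through a nonlinearity, nothing more.

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, squashed through a sigmoid.
    # No ion channels, no spike timing, no neurotransmitters: just arithmetic.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(artificial_neuron([0.5, -1.0, 2.0], [0.1, 0.4, -0.3], bias=0.2))
```

Stack millions of these and you get an LLM; at no point does anything resembling a biological neuron, let alone a thought, enter the picture.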
I encourage you to do some more research on this before continuing to discuss it. Ask ChatGPT itself whether its neural networks are like human brains; it will tell you categorically no. Just remember that it doesn't know what it's talking about either: it is reporting word weights from its training corpus, and it is no substitute for actual thought and research.