You’re conferring a level of agency where none exists.
It appears to “understand.” It appears to be “knowledgeable.”
But LLMs do neither of those things.
Take this note from an OpenAI dev:
> It’s that these models have leveraged so much data they’ve been able to map out relationships between words (or images) in such a way as to be able to generate what seem like new versions of those things.
I grant you that an LLM has more base-level knowledge than any one human, but again this is thanks to a terrifyingly large dataset and a design that means it can access this data reasonably reliably.
But it is still a prediction model. It just has more context, better design and (most importantly) data to make predictions at a level never before seen.
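You can see the "prediction machine" part directly if you poke at a small open model. A rough, untested sketch (assuming the Hugging Face transformers library and the tiny public GPT-2 checkpoint, which obviously isn't GPT-3.5, but the mechanics are the same): all the model hands back is a probability for every possible next token.

```python
# Minimal sketch: ask a small causal LM for its next-token probabilities.
# Assumes `pip install torch transformers`; "gpt2" is just a convenient tiny model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the *next* token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p.item():.3f}")
```

Everything else (sampling, temperature, chat formatting) is layered on top of that one prediction step.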
If you’ve ever had a chance to play with a model at a level where you can control some of its basic parameters, it offers a glimpse into just how much of a prediction machine it can be.
My favourite game for a while was to give Midjourney a wildly vague prompt but crank the chaos up to 100 (literally the --chaos flag at its highest value) to see what kind of wild connections exist but are being filtered out during “normal” use.
The same with the GPT-3.5 API in the “early days” - you could return multiple versions of the response and see the sausage being made to a very small degree.
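The same trick still works with the current openai Python client; a rough, untested sketch (model name and prompt are just placeholders): asking for several candidates at a higher temperature shows the spread of predictions that normally gets collapsed into a single answer.

```python
# Rough sketch: request several independent completions for the same prompt.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",           # placeholder model name
    messages=[{"role": "user", "content": "Describe a city by the sea."}],
    n=4,                             # four separate candidate responses
    temperature=1.2,                 # looser sampling = more varied predictions
)

for i, choice in enumerate(resp.choices):
    print(f"--- candidate {i} ---")
    print(choice.message.content)
```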
It doesn’t take away from the sense of magic using these tools. It just helps frame what’s going on under the hood.
Given it's an artificial intelligence, it stands to reason that its understanding and knowledge are artificial.
I don't think there's any relevance in pointing that out anymore. No one thinks it's conscious or a general AI.
I also don't see how it's massively different to our ability to parse and output text tbh.
It's different to our ability because we actually know what words are; we know they refer to things.
All an LLM sees is tokens; it has absolutely no concept of what language actually is or what things mean. It's literally just “this number seems to occur after these numbers”.
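For anyone who hasn't seen it, this is literally what the model is fed; a quick sketch (assuming the tiktoken library, which implements the encodings the GPT models use):

```python
# Quick sketch: what the model actually receives is integer token IDs, not words.
# Assumes `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # encoding used by GPT-3.5/GPT-4
ids = enc.encode("Paris is the capital of France")

print(ids)                                   # a short list of integers
print([enc.decode([i]) for i in ids])        # the text fragment each ID maps back to
```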
I think that is overly simplistic. Embeddings used for LLMs definitely do include a concept of what things mean and the relationships of things to other things.
E.g., compare the embeddings of Paris, Athens, and London to other cities and they will have a small cosine distance between them. Compare France, Greece, and England and it's the same. Then, very interestingly, look at Paris - France, Athens - Greece, and London - England and you'll find the resulting vectors all align (fundamentally, the vector operation seems to account for the relationship "is the capital of"). Go a step further and compare those vectors to Paris - US, Athens - US, and London - Canada. You'll see the previous set doesn't align with these nearly as much, but these do align with each other (the relationship being something like "is a smaller city in this country, named after a famous city in some other country").
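You can reproduce that geometry yourself; a rough sketch (using gensim and its downloadable GloVe vectors, i.e. classic word vectors rather than an LLM's internal embeddings, but the same kind of structure shows up):

```python
# Rough sketch: capital-city geometry in word vectors.
# Assumes `pip install gensim`; the GloVe vectors are downloaded on first use.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")   # lowercase vocabulary

# Capitals sit close together (high cosine similarity / small cosine distance).
print(vectors.similarity("paris", "athens"))
print(vectors.similarity("france", "greece"))

# Vector arithmetic: paris - france + greece lands near athens.
print(vectors.most_similar(positive=["paris", "greece"], negative=["france"], topn=3))
```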
The way attention works, there is a whole bunch of semantic meaning baked into the embeddings, and by comparing embeddings you can get to pragmatic meaning as well.
That's kind of a given though. It's a large language model, so of course its "understanding" can only be in terms of language. In a way, words are its only sense (input) and its only way to interact with the world (output). The mechanism isn't really important, imo, since we could reduce our own understanding to chemical reactions.
Homo sapiens have many more dimensions of awareness, dozens maybe, including sight, hearing, time, pressure, acceleration, etc., and we've been collecting data from them all 24/7 since embryo, plus instinct (pre-baked weights) from millions of years of evolution. We know that people born without a sense, let's say vision, cannot conceptualize visually, even when their sight is restored for a time. I remember reading a while back about a person born blind who had their vision fixed, but they didn't know what "pointy" looked like. They couldn't know. Do they have a lower-quality understanding of a word?
My point being, I don't think it's fair to objectively compare understanding between a person and a model without a testable definition of that word. Imo, and feel free to disagree, understanding is no different from merely knowing; it's just implied that the knowledge is deeper, across multiple dimensions of awareness, including subconscious awareness of our own hormones.