[–] [email protected] 5 points 11 months ago (1 children)

I think I kind of understand the term, but what does "hallucinations" refer to in this context? It seems like it might be fabricated information?

[–] [email protected] 6 points 11 months ago (1 children)

Basically, the model just makes stuff up.

[–] [email protected] 4 points 11 months ago (1 children)

Not sure why someone downvoted you. That’s exactly what the term means in this context. It’s those confidently written answers that contain false or fabricated information.

[–] [email protected] 0 points 11 months ago

And this seems like the biggest limitation of the LLM approach. The model just knows that a certain set of tokens tends to follow another set of tokens.

It has no understanding of what the tokens represent. So it does a great job of producing sentences that look meaningful, but any actual meaning in them is purely incidental.
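
To make that concrete, here's a minimal sketch of the idea: a toy "model" that is nothing but a table of which token tends to follow which. All tokens and probabilities here are made up for illustration; real LLMs condition on long contexts with a neural network, but the principle of sampling a likely next token is the same.

```python
import random

# Toy transition table: P(next_token | current_token), the kind of thing
# you could learn purely from co-occurrence counts in a text corpus.
# All values here are invented for the example.
next_token_probs = {
    "the":    [("cat", 0.5), ("moon", 0.5)],
    "cat":    [("sat", 0.6), ("landed", 0.4)],
    "moon":   [("sat", 0.4), ("landed", 0.6)],
    "sat":    [("on", 1.0)],
    "landed": [("on", 1.0)],
    "on":     [("the", 1.0)],
}

def generate(start: str, length: int = 7) -> str:
    """Repeatedly sample a likely next token, with no notion of meaning."""
    token = start
    out = [token]
    for _ in range(length):
        candidates = next_token_probs.get(token)
        if not candidates:
            break
        tokens, weights = zip(*candidates)
        token = random.choices(tokens, weights=weights)[0]
        out.append(token)
    return " ".join(out)

print(generate("the"))
# Possible output: "the cat landed on the moon sat on"
# Each transition is locally plausible, but nothing checks whether the
# sentence is true or even coherent -- fluency without grounding.
```

When a fluent-but-false sequence like "the cat landed on the moon" falls out of sampling, that's the mechanism behind a hallucination: the model optimizes for plausible continuations, not factual ones.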