this post was submitted on 19 Jul 2024
433 points (98.4% liked)

Technology

58122 readers
4366 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 10 points 2 months ago (1 children)

So they came up with the ai equivalent of the Linux nice command.

[–] [email protected] 4 points 2 months ago (2 children)

I guess? I'm surprised that the original model was on equal footing to the user prompts to begin with. Why was the removal of the origina training a feature in the first place? It doesn't make much sense to me to use a specialized model just to discard it.

It sounds like a very dumb oversight in GPT and it was probably long overdue for fixing.

[–] [email protected] 3 points 2 months ago

A dumb oversight but an useful method to identify manufactured artificial manipulation. It's going to make social media even worse than it already is.

[–] [email protected] 1 points 2 months ago

Because all of these models are focused on text prediction/QA, the whole idea of "prompts" organically grew out of the functionality when they tried to make it something more useful/powerful. Everything from function calling, agents, now this are just be bolted onto the foundation of LLMs.

Its why this seems more like a patch than an actual iteration of the technology. They aren't approaching it at the fundamentals.