this post was submitted on 23 Nov 2023
69 points (86.3% liked)

[–] [email protected] 4 points 1 year ago

I totally get the skepticism. It's not surprising given how abused the term "AI" is, but these models are a much bigger deal than "a search engine with that robot voice." In fact, that's exactly what they are NOT: they are terrible at recall and terrible at prioritizing reference information.

That said, GPT models are the first, and possibly most important, piece of an AGI. They are a proof of concept that basic conceptual and linguistic understanding can be drawn from an enormous amount of data with shockingly little instruction. There's no real reason to think they should be as good as they are at correctly interpreting written content, but here we are.

People make a big deal out of gpt because they think it will enable rapid improvement, and personally I don't think that's a forgone conclusion. It's probably appropriate to compare it to the development of the first rudimentary computer: by itself it isn't particularly groundbreaking, but drawn to its maximum it has revolutionary potential. Every additional step from here is likely as big or bigger than the one from gpt2 and to 3 and 4.