this post was submitted on 11 Sep 2023
402 points (94.3% liked)

https://archive.ph/hMZPi

Remember when tech workers dreamed of working for a big company for a few years, before striking out on their own to start their own company that would knock that tech giant over?

Then that dream shrank to: work for a giant for a few years, quit, do a fake startup, get acqui-hired by your old employer, as a complicated way of getting a bonus and a promotion.

Then the dream shrank further: work for a tech giant for your whole life, get free kombucha and massages on Wednesdays.

And now, the dream is over. All that’s left is: work for a tech giant until they fire your ass, like those 12,000 Googlers who got fired six months after a stock buyback that would have paid their salaries for the next 27 years.

We deserve better than this. We can get it.

[email protected] 2 points 1 year ago

It was impossible for computers to beat chess and Go masters while the computers were trying to play like humans, modeling a high-level understanding of strategy and abstract values. The computers started winning when they got fast enough to brute force the games: to calculate the outcomes of enormous numbers of possible moves and choose the best one.
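
To make "brute forcing" concrete, here's a minimal sketch of plain game-tree search. It's hypothetical illustration code, not any real engine: the `state` object with `is_terminal()`, `evaluate()`, `legal_moves()`, and `apply()` is an assumed stand-in, and real chess/Go engines add aggressive pruning and far smarter evaluation on top of this.

```python
# Minimal game-tree search: try every legal move, score the resulting
# positions recursively, and keep the move with the best outcome.
# (Hypothetical sketch; the `state` interface is assumed, and real
# engines prune heavily rather than expanding everything.)

def best_move(state, depth, maximizing=True):
    if depth == 0 or state.is_terminal():
        return None, state.evaluate()          # static score of this position

    best = None
    best_score = float("-inf") if maximizing else float("inf")
    for move in state.legal_moves():
        _, score = best_move(state.apply(move), depth - 1, not maximizing)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best, best_score = move, score
    return best, best_score
```

The point is that nothing in there "understands" strategy; it just looks at outcomes faster than a human can.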

This is basically the same difference between LLMs and 'true' general AI. The LLMs are brute forcing the next line of a screenplay, with no way to incorporate abstract concepts like truth or logic. If you mistake an LLM for an AI, you're going to be disappointed in its performance. If you accept that an LLM is a way to average past communications, and that a lot of its training set was fiction, then it's an amazing tool for generating consensus text (given that the consensus includes fantasies and lies). It's not going to write new code, but it will give you an approximation of all the existing examples of some algorithm, an approximation that may introduce errors, like copy-pasting sequential lines from every Stack Exchange answer.
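
To make "averaging past communications" concrete, here's a toy sketch that just emits the most common next word seen in its training text. It's a hypothetical, drastically simplified stand-in: real LLMs predict tokens with neural networks over long contexts, but the objective is still "produce what usually comes next".

```python
from collections import Counter, defaultdict

# Toy "consensus text" generator: count which word most often follows
# each word in the training text, then always emit the most common
# continuation. (Hypothetical sketch, not how an actual LLM is built.)

def train(corpus: str):
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start: str, length: int = 10):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])   # the "consensus" next word
    return " ".join(out)

model = train("the cat sat on the mat and the cat slept on the mat")
print(generate(model, "the"))   # -> "the cat sat on the cat sat on the cat sat"
```

If the training text is full of fiction, the "consensus" continuation will cheerfully be fiction too, which is the whole problem.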

Computer graphics and computer game opponents are still doing the same things they were doing decades ago; the improvements mostly come from doing it all faster. General AI needs to do something different from LLMs and most other ML algorithms.