this post was submitted on 02 Dec 2023
158 points (85.3% liked)

Technology

Bill Gates feels Generative AI has plateaued, says GPT-5 will not be any better::The billionaire philanthropist in an interview with German newspaper Handelsblatt, shared his thoughts on Artificial general intelligence, climate change, and the scope of AI in the future.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

I think he could be right about generative AI, but that's not a serious problem, given that we're moving beyond generative AI and into virtual-intelligence territory.

Generative AI right now requires someone (or something) to initiate it with a prompt. But judging by some of the latest research papers from OpenAI, as well as the recent drama surrounding its leadership, it appears we're moving beyond the 'generative' phase and into the 'virtual intelligence' phase.

It's not going to be 'smart', but it will be knowledgeable (and, hopefully, accurate). That is to say, VIs will be useful for data retrieval and organization, but not necessarily for data creation (although, IIRC, the way to get around this would be to develop a VI that works specifically on creating ideas, but then we'd be moving into AGI territory, and I don't expect we'll have serious contenders for AGI for at least another decade).
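To make the retrieval-vs-creation distinction concrete, here's a minimal Python sketch of what I mean. All of the names and the crude overlap-scoring scheme are my own illustrative assumptions, not any real product's API; the point is just that the system can only look up and organize what it already has, never produce new content:

```python
import re

# Hypothetical sketch of a "virtual intelligence" in the sense above: a system
# that only retrieves and organizes existing data, rather than generating
# anything new. Names and scoring are illustrative assumptions.

def tokenize(text):
    """Lowercase the text and extract alphanumeric word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def retrieve(query, documents, top_k=2):
    """Rank stored documents by how many query tokens they share (no generation)."""
    query_tokens = set(tokenize(query))
    scored = [(len(query_tokens & set(tokenize(doc))), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Return only documents that actually match; nothing new is created.
    return [doc for overlap, doc in scored[:top_k] if overlap > 0]

docs = [
    "GPT-4 is a generative language model released by OpenAI.",
    "Virtual intelligence focuses on retrieval and organization of data.",
    "Climate change was one topic discussed in the interview.",
]

print(retrieve("virtual intelligence data retrieval", docs, top_k=1))
# → ['Virtual intelligence focuses on retrieval and organization of data.']
```

A real system would obviously use embeddings or an actual search index instead of word overlap, but the contract is the same: every output is drawn from stored data, which is what separates a VI from a generative model.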

Rumours abound surrounding the OpenAI drama, the key one being that AGI was accidentally developed internally (I doubt this heavily). The more likely explanation is that the board of directors had a financial stake in Nvidia, and when they found out Altman was working on AI-specific chips that were faster, lower cost, and lower power consumption than current Nvidia trash (by literally tens of thousands of dollars), they fired him to try to force the company onto their preferred track (and profit in the process; IMO it's kind of ironic that a non-profit's board of directors has so many 'closed-door' discussions with Nvidia staff...).

These are just the thoughts of a comp-sci student with a focus on artificial-intelligence systems.

If you're interested in further reading:

https://www.ibm.com/blog/understanding-the-different-types-of-artificial-intelligence/

https://digitalreality.ieee.org/publications/virtual-intelligence-vs-artificial-intelligence

https://www.psychologytoday.com/us/blog/what-we-really-want-in-a-leader/202204/why-you-need-to-focus-on-virtual-intelligence

Keep in mind that because it's still early days in this field, a lot of terms haven't reached an established consensus across academia yet, so you'll notice variations in how each organization explains what "x" type of intelligence is.