this post was submitted on 30 Sep 2024
195 points (93.0% liked)

Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.



Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes show new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.

[–] [email protected] 23 points 1 month ago (3 children)

You do all this on three pounds of wet meat powered by cornflakes.

The idea we'll never recreate it through deliberate effort is absurd.

What you mean is, LLMs probably aren't how we get there. Which is fair. "Spicy autocorrect" is a limited approach with occasionally spooky results. It already does a bunch of stuff people insisted would never happen without AGI - but that's how this always goes. The products of human intelligence have always had some hard-to-define qualities that let us tell them apart from our best efforts to make a machine produce anything similar.
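For anyone unfamiliar with the "spicy autocorrect" jab: it refers to next-token prediction, the core of what these models do. Here's a toy sketch of the idea - a bigram counter that predicts the next word from the previous one. It's obviously nothing like a real LLM (no neural net, no context beyond a single word), and every name in it is made up for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in a corpus - that's the whole 'model'."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent continuation seen in training, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
# "the" was followed by "cat" twice and "mat" once, so:
# predict_next(model, "the") -> "cat"
```

Scale that fuzzy-matching idea up by a few hundred billion parameters and you get the "occasionally spooky results" - but the training objective is still "guess the next token."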

Just remember the distinction got narrower.

[–] [email protected] 7 points 1 month ago (1 children)

I agree. Very few people in industry are claiming that LLMs alone will become AGI. The release of o1 demonstrates that even OpenAI is pivoting away from pure LLM approaches. It was always going to be a framework approach that uses LLMs as components.

[–] [email protected] 1 points 1 month ago

I had hopes for recurrent systems becoming kinda... Dixie Flatline. Maybe not general enough to learn, but spooky enough to evaluate claims.

[–] [email protected] 2 points 1 month ago

"Spicy auto~~correct~~assume"

Ftfy