[–] [email protected] 4 points 11 months ago (2 children)

I guess the point I have an issue with here is 'ability to do things not specifically trained on'. LLMs are still doing just that, and often incorrectly: they basically just try to guess the next words based on the huge dataset they were trained on. You can't actually teach them anything new, or, to put it better, they can't actually derive conclusions by themselves and improve that way. They aren't actually intelligent, just freakishly good at guessing.
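
As a concrete illustration of that "guessing" (a toy sketch, not how a real LLM is implemented): a real model predicts the next token with a neural network, but even a simple bigram counter shows the same autoregressive idea of picking whatever followed most often in the training data. Everything here (the corpus, the `guess_next` helper) is made up for the example.

```python
# Toy sketch of next-word "guessing" from counted training statistics.
# Real LLMs use neural nets over tokens, but the autoregressive idea is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# "Train": count which word follows which in the data.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    """Return the most frequent next word seen in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))  # -> "cat": the guess just reflects training frequency
print(guess_next("dog"))  # -> None: nothing learned outside the training data
```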

[–] [email protected] 2 points 11 months ago

Heck, sometimes someone comes to me and asks if some system can solve something they just thought of. Sometimes, albeit very rarely, it just works perfectly, no code changes required.

I'm not going to argue that my code is artificial intelligence, but huge AI models obviously have higher odds of getting something random right, simply because they capture so many more correlations.

[–] [email protected] 1 points 11 months ago (1 children)

> You can't actually teach them anything new, or, to put it better, they can't actually derive conclusions by themselves and improve that way

That is true, at least after training; they don't have any long-term memory. In the short term, though, you can teach them simple games.

Of course, this always goes into Chinese room territory. Is simply replicating intelligent behavior not enough to be equivalent to it? I like to remind people that, as far as our science can tell, we're just a chemical reaction ourselves.

[–] [email protected] 1 points 11 months ago (1 children)

That's actually false. You can't teach them anything long-term, but within the course of a conversation they can be taught new rules and behaviors. There are dozens, if not hundreds, of papers on it.
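
For anyone curious what that looks like in practice, here's a rough sketch of in-context learning. The rule, the examples, and the expected continuation are all made up for illustration; the point is that the "teaching" lives entirely in the prompt, with no retraining.

```python
# Sketch of in-context learning: a made-up rule is "taught" inside the
# prompt itself. Send this to any chat/completion model of your choice.
prompt = """New rule: reply to every word with that word reversed.

User: hello
Assistant: olleh
User: world
Assistant: dlrow
User: banana
Assistant:"""

# A capable model will typically continue with "ananab": the rule was
# stated only moments ago, yet it is followed for the rest of the chat.
print(prompt)
```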

[–] [email protected] 2 points 11 months ago (1 children)

You're right, apologies. Skimmed too hard.

[–] [email protected] 2 points 11 months ago

Yup, been there!