this post was submitted on 08 May 2024
8 points (59.1% liked)

Videos

14311 readers
287 users here now

For sharing interesting videos from around the Web!

Rules

  1. Videos only
  2. Follow the global Mastodon.World rules and the Lemmy.World TOS while posting and commenting.
  3. Don't be a jerk
  4. No advertising
  5. No political videos, post those to [email protected] instead.
  6. Avoid clickbait titles. (Tip: use DeArrow)
  7. Link directly to the video source, not to, for example, an embedded video in an article or a tracked sharing link.
  8. Duplicate posts may be removed

Note: bans may apply to both [email protected] and [email protected]

founded 1 year ago
[–] [email protected] 17 points 6 months ago (3 children)

Interesting video. At its core it can be summed up as:

  • "AI is an existential threat" is a lie big tech tells to use regulatory capture against competitors
  • the main competition for that big tech would be open-source generative models
  • we should fight against big tech on this
[–] [email protected] 9 points 6 months ago

we should fight against big tech

I think that's a good idea in general, not just because of AI

[–] [email protected] 6 points 6 months ago

Thanks for the TL;DR!

[–] [email protected] 5 points 6 months ago (1 children)

I'd been concerned about AI as an existential risk for years before big tech had a word to say on the matter. It's possible both for it to be a threat and for large companies to be trying to take advantage of that.

[–] [email protected] 1 points 6 months ago (1 children)

Those concerns mostly apply to artificial general intelligence, or "AGI". What's being developed is another can of worms entirely: a bunch of generative models. They're far from intelligent; the concerns associated with them are 1) energy use and 2) human misuse, not that they're going to go rogue.

[–] [email protected] 2 points 6 months ago

I'm well aware, but we don't get to build an AGI first and figure it out afterward. We can't even keep the current models on target; see any number of "funny" errors people have posted, up to the paper (whose name I can't recall offhand) that collected examples of even simpler systems being misaligned.