this post was submitted on 06 Jul 2024
142 points (90.3% liked)

Technology

[–] [email protected] 5 points 4 months ago (2 children)

It’s also not clear if it’s even possible to fully prevent AI systems from misbehaving. The truth is, we don’t know a lot about how LLMs work, and today’s leading AI models from OpenAI, Anthropic, and Google are jailbroken all the time. That’s why some researchers are saying regulators should focus on the bad actors, not the model providers.

It seems like a complicated debate, and it's hard to decide where you stand. I want to suggest a method for finding answers by creating three variants of an analogy.

For how many of these cases do you think somebody should be doing something?

Case 1:
A huge warehouse full of firearms. Burglars are breaking into it every night and stealing lots of weapons. The owners say they don't know how the warehouse was built or how to make it more secure, in order to stop the criminals from obtaining lots of new weapons every day. The general public starts calling on the government to do something. Some say the warehouse owners should take responsibility. Others say it all depends on how the criminals use the weapons. The criminals seem to know how to use them well...

Case 2:
A huge warehouse full of hammers. Burglars are breaking into it every night and stealing lots of hammers. The owners say they don't know how the warehouse was built or how to make it more secure, in order to stop the criminals from obtaining lots of new hammers every day. The general public starts calling on the government to do something. Some say the warehouse owners should take responsibility. Others say it all depends on how the criminals use the hammers. The criminals seem to know how to use them well...

Case 3:
A huge warehouse full of tulips. Burglars are breaking into it every night and stealing lots of flowers. The owners say they don't know how the warehouse was built or how to make it more secure, in order to stop the criminals from obtaining lots of new flowers every day. The general public starts calling on the government to do something. Some say the warehouse owners should take responsibility. Others say it all depends on how the criminals use the tulips. The criminals seem to know how to use them well...

[–] [email protected] 7 points 4 months ago (1 children)

Are the warehouse owners analogous to AI companies here? I don't think AI companies care about their models being misused unless it has an economic impact, whereas warehouse owners certainly care about their wares being stolen, regardless of how those wares are then used or how dangerous they are.

[–] [email protected] 8 points 4 months ago* (last edited 4 months ago)

I don't think AI companies care about their models being misused

Yes, that is one of the questions currently at stake, if you have read the article: should they care?

It is a serious question, because if the models are misused, that could be a threat to all of mankind, much worse than a warehouse full of weapons. And if the companies are required to care, they might have to rebuild their models fundamentally, and they don't know how.