[–] [email protected] 18 points 4 months ago (1 children)

SB 1047 is a California state bill that would make large AI model providers – such as Meta, OpenAI, Anthropic, and Mistral – liable for the potentially catastrophic dangers of their AI systems.

Now this sounds like a complicated debate, but it seems to me that everyone against this bill stands to benefit monetarily from not having to deal with the safety aspect of AI, and that does sound suspicious to me.

Another technical piece of this bill relates to open-source AI models. [...] There’s a caveat that if a developer spends more than 25% of the cost to train Llama 3 on fine-tuning, that developer is now responsible. That said, opponents of the bill still find this unfair and not the right approach.

In regard to the open-source models, while it makes sense that if a developer takes the model and does a significant portion of the fine-tuning, they should be liable for the result of that...

But should the main developer still be liable if a bad actor stays under the 25% fine-tuning threshold and exploits flaws in the base model?

One could argue that developers should be trying to examine their black boxes for vulnerabilities, rather than shrugging, saying it can't be done, and then demanding they not be held liable.
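For what it's worth, here's a rough sketch of how that 25% caveat would apply. The dollar figures and the function below are purely illustrative assumptions, not numbers from the bill or from Meta:

```python
# Illustrative only: assumed numbers, not figures from SB 1047 or from Meta.
ASSUMED_BASE_TRAINING_COST = 100_000_000  # hypothetical cost to train the base model (USD)

def fine_tuner_is_responsible(fine_tuning_spend: float,
                              base_training_cost: float = ASSUMED_BASE_TRAINING_COST) -> bool:
    """Under the >25%-of-training-cost caveat, responsibility shifts to the
    fine-tuning developer once their spend exceeds a quarter of the base cost."""
    return fine_tuning_spend > 0.25 * base_training_cost

# A bad actor spending well under the threshold leaves the base provider on the hook:
print(fine_tuner_is_responsible(5_000_000))    # False
# A large fine-tuning effort crosses the threshold and shifts responsibility:
print(fine_tuner_is_responsible(30_000_000))   # True
```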

[–] [email protected] 5 points 4 months ago (1 children)

In regard to the open-source models, while it makes sense that if a developer takes the model and does a significant portion of the fine-tuning, they should be liable for the result of that...

This kind of goes against the model that open source has operated on for a long time, as providing source doesn't represent liability. So providing a fine-tuned model shouldn't either.

[–] [email protected] 1 points 4 months ago (1 children)

So providing a fine-tuned model shouldn't either.

I didn't mean in terms of providing. I meant that if someone provided a base model, someone else took that, built upon it, and then used it for a harmful purpose, then of course the person who modified it should be liable, not the base provider.

It's like if someone took a version of Linux, modified it, then used that modified version for an illegal act - you wouldn't go after the person who made the unmodified version.

[–] [email protected] 1 points 4 months ago

You wouldn't necessarily punish the person who modified Linux either; you'd punish the person who uses it for a nefarious purpose.

The important distinction is the intention to deceive, not whether the code/model was modified so that it could be used for nefarious purposes.