But wouldn't that mean that making it open source, and it then not functioning properly without the data, would prove that it is using a huge amount of unlicensed data?
Probably not "burden of proof in a court of law" prove though.
Making it open source doesn't change how it works. It doesn't need the data after it's been trained. Most of these AIs are just figuring out patterns to look for in the new data they come across.
So you're saying the data wouldn't exist anywhere in the source code, but it would still be able to answer questions based on the data it has previously seen?
Most AIs are not built to answer questions. They're designed to act as some kind of detection/filter heuristic, identifying specific things about an input that lead to a desired output. A toy sketch of that input-to-output pattern is below (scikit-learn, with made-up example data; the names and labels are just for illustration):
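```python
# Toy sketch of the "detection/filter heuristic" pattern: the model learns
# a mapping from inputs to a desired output (spam or not spam), rather than
# answering free-form questions. All data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["win a free prize now", "meeting at 3pm tomorrow",
         "claim your free reward", "lunch with the team today"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)  # turn text into numeric features

clf = LogisticRegression().fit(X, labels)  # learn weights from the features

# New input: the model outputs a decision, not the training data itself.
print(clf.predict(vectorizer.transform(["free prize inside"])))  # expected: [1]
```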
That is how LLMs work: they don't store the data as data, but as weight values. A minimal sketch of that "weights, not data" point, assuming PyTorch (the training data is invented and exists only inside the script):
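```python
# Minimal sketch: after training, all that remains of the dataset is a
# handful of learned parameters. Hypothetical data: y = 2x + 1 plus noise.
import torch

x = torch.linspace(0, 1, 100).unsqueeze(1)
y = 2 * x + 1 + 0.01 * torch.randn_like(x)

model = torch.nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# What you'd ship is the state dict: weight values, not the training set.
print(model.state_dict())  # roughly {'weight': [[2.0]], 'bias': [1.0]}
```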
So then why, if it were all open sourced, including the weights, would the AI be worthless? Surely an identical but open-source version would strip the profitability from the original paid product.
It wouldn't be. It would still work. It just wouldn't be exclusively available to the group that created it; any competitive advantage is lost. Roughly what "still works" means in practice: anyone can load published weights and run the same model locally. The sketch below uses GPT-2 (whose weights actually are open) as a stand-in for a hypothetically open-sourced LLM:
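```python
# Sketch of why open weights would still "work": anyone can download a
# model whose weights are published and run it themselves.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")  # downloads the weights

inputs = tokenizer("Open weights mean", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0]))  # same model, same behavior, for anyone
```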
But all of this ignores the real issue: you're not really punishing the use of unauthorized data. Those who owned that data are still harmed by this.
It does discourage the use of unauthorised data. If stealing doesn't give you a competitive advantage, it's not really worth the risk and cost of stealing it in the first place.
If you can still use it after you stole it, as opposed to not being able to use it at all... then it does give you an incentive.
If you did all the work, and the potentially criminal collection of data, but everyone else gets the benefit as well, that is not an incentive. You underestimate how selfish corporations can be.
OpenAI wouldn't stay at the forefront of LLMs if every competitor got to use the model they spent money on training.
In civil matters, the burden of proof is actually usually just preponderance of evidence, not beyond a reasonable doubt. In other words, to win a lawsuit you only need to have more compelling evidence than the other party.
But you still have to have EVIDENCE. Not derivative evidence. The output of a model could be argued to be hearsay, because it's not direct evidence of the originating content; it's derivative.
You'd have to have somebody backtrack generations of model data to even find snippets of something that constitutes copyrighted material, or a human actually saying, "Yes, we definitely trained on unlicensed data."
So, like, I am not making any comment on anything but the legal system here. But it's absolutely the case that you can win a lawsuit on purely circumstantial evidence, if the defense is unable to produce a compelling alternative set of circumstances that could lead to the same outcome.