this post was submitted on 10 Jan 2024
1239 points (96.5% liked)
Technology
Lemmy users: Copyright law is broken and stupid.
Also Lemmy users: A.I. violates copyright law!
Where's the contradiction though
yes. there are myriad ways that copyright law is broken and stupid, but protecting the creations of independent artists isn't one of them
take this bullshit back to reddit
I mean, consistency is better than inconsistency, even if we don't agree with the rules.
A.I. doesn't violate copyright law. It is the data mining done to train A.I. and the regurgitation of said data in the responses that ultimately violate these laws. A model trained on privately owned, properly licensed, or exclusively public works wouldn't be a problem.
Even then, I would argue that lack of attribution is a bigger problem than merely violating copyright. A big part of the LLM mystique is in how it can spit out a few lines of Shakespeare without credit and convince its users that it's some kind of master poet.
Copyright law is stupid and broken. But plagiarism is a problem in its own right, as it seeks to effectively sell people their own creative commons at an absurd markup.
This is how we end up with only corpo-owned AIs being allowed to exist, imo. Places like stock photo sites are the only ones with image repositories large enough to train an AI that they hold all the legal rights to.
The way I see it, either generative AI is legal, free for everyone to run locally, and the created works are public domain, OR, everyone pays $20/mo to massive faceless corpos for the rest of their lives to have the privilege of access to it because they're the only ones who own all (or have enough money to license) the IP needed to train them
It's how you end up with sixteen different streaming services that each only vend a sliver of the total available content, sure. But the underlying technology of AI grows independent of what it's trained on.
There are other alternatives. These sites can be restricted to data within the public domain. And we can increase our investment in public media. The problem of NYT articles being digested and regurgitated as ChatGPT info-vomit isn't a problem if the NYT is a publicly owned and operated enterprise. Then it's not struggling to profit off journalism, but treating this information as a loss-leading public service open to all, with ChatGPT simply operating as a tool to store, process, and present the data.
Similarly, if you limit generative AI to the old Mickey Mouse and Winnie-the-Pooh films from the 1930s, you leave plenty of room for original artists to create new works without fear that their livelihoods get chewed up and fed back into the system. If you invest in public art exhibitions, then these artists can get paid to pursue their craft, the art becomes public domain immediately, and digital tools that want to riff on the originals are free to do so without undermining the artists themselves.