Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos
(arstechnica.com)
You're never going to get rights over the training data created from your pictures when they're freely available for anything to scan. By putting them on the internet, your pictures can basically be viewed by anyone or anything, even an AI. You have never been able to control who looks at your content after you post it.
You're trying to make the same argument the "don't copy my NFT" bros tried to make.
Imagine going into court and saying you should get paid for all the stuff you willingly gave away for free on the internet.
Well, there's a difference between "don't look at my work without paying me, even if it's posted publicly" and "don't sell my work without paying me, even if it's posted publicly."
Like I said, there's nothing we can do about companies using all the data they can get their hands on for private R&D. It IS possible to protect against the second case, by stopping companies from selling an LLM product built on copyrighted training data.
My question was about how that second case could be extended to stuff posted on the Fediverse, such as if an instance had a blanket "all rights belong to the user posting the content" policy.
These laws exist; if companies can use them, then so can we.
LLMs and generative AI do not learn like humans, and regulating them as if they did would be disingenuous and completely off base.