this post was submitted on 15 Mar 2024
492 points (95.4% liked)
Technology
LLMs are just another iteration of search. Search engines do the same thing. Do we outlaw search engines?
Sora is a generative video model, not exactly a large language model.
But to answer your question: if all LLMs did was redirect you to where the content was hosted, then they would be search engines. But instead they reproduce what someone else was hosting, which may include copyrighted material. So they're fundamentally different from a simple search engine. They don't direct you to the source; they reproduce a facsimile of the source material without acknowledging or directing you to it. Sora is similar. It produces video content, but it doesn't redirect you to similar video content that it is reproducing from. And we can argue about how close something needs to be to an existing artwork to count as a reproduction, but I think for AI models we should enforce citation requirements.
I think the question of how close does it have to be is the real question.
If I use similar lighting in my movie as was used in Citizen Kane do I owe a credit?
I suppose that really depends. Are you making a reproduction of Citizen Kane, which includes cinematographic techniques? Then that’s probably a hard “gotta get a license if it’s under copyright”. Where it gets more tricky is something like reproducing media in a particular artistic style (say, a very distinctive drawing animation style). Like realistically you shouldn’t reproduce the marquee style of a currently producing artist just because you trained a model on it (most likely from YouTube clips of it, and without paying the original creator or even the reuploader [who hopefully is doing it in fair use]). But in any case, all of the above and questions of closeness and fair use are already part of the existing copyright legal landscape. That very question of how close does it have to be is at the core of all the major song infringement court battles, and those are between two humans. Call me a Luddite, but I think a generative model should be offered far less legal protection and absolutely not more legal protection for its output than humans are.
How does a search engine know where to point you? It ingests all that data and processes it 'locally' on the search engine's systems, using algorithms to organize the data for search. It's effectively the same dataset.
An LLM is absolutely another iteration of search, with natural-language output for the same input data. Are you advocating that search engine data ingestion is not fair use and is a copyright violation as well?
You equate LLMs to intelligence, which they are not. It's algorithmic search iteration with natural-language responses, but that doesn't sound as cool as AI. It's neat, it's useful, and yes, it should cite the sourcing details (upon request), but it's not (yet?) a real intelligence, and it's equal to search in terms of fair use and copyright arguments.
I never equated LLMs to intelligence. And indexing the data is not the same as reproducing the webpage or the content on a webpage. To get beyond a small snippet that matched your query when you search, you have to follow a link to the source material. Now of course Google doesn't like this, so they did that stupid AMP thing, which has its own issues, and I disagree with AMP as a general rule as well. So, LLMs can look at the data; I just don't think they can reproduce that data without attribution (or payment to the original creator). Perplexity.ai is a little better in this regard because it does link back to sources and is attempting to be a search-engine-like entity. But OpenAI is not, in almost all cases.
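The indexing-vs-reproduction distinction above can be made concrete with a toy inverted index. This is a minimal sketch with made-up example pages (the URLs and corpus are hypothetical, not from the thread); the point is that the index maps query terms to *pointers* at the hosting sites, rather than emitting the hosted content itself:

```python
# Hypothetical corpus: URL -> page text (example data, not real pages)
pages = {
    "https://example.com/a": "copyright law and fair use",
    "https://example.com/b": "large language models and fair use",
}

def build_index(pages):
    """Build an inverted index: term -> set of URLs containing that term."""
    index = {}
    for url, text in pages.items():
        for term in set(text.split()):
            index.setdefault(term, set()).add(url)
    return index

index = build_index(pages)

# A query against the index returns links to where the content is hosted,
# not a reproduction of the content.
print(sorted(index["fair"]))   # both pages contain "fair"
print(sorted(index["models"]))  # only page b contains "models"
```

A generative model, by contrast, emits text or video directly, with no structural link back to the documents it was trained on, which is the asymmetry the comment above is pointing at.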
Why do you say it is not intelligence? It seems to meet all the requirements of any definition I can find.
I feel conflicted about the whole thing. Technically it's a model. I don't feel that people should be able to sue me as a scientist for making a model based on publicly available data. I myself am merely trying to use the model itself to explain stuff about the world. But OpenAI are also selling access to the outputs of the model, that can very closely approximate the intellectual property of people. Also, most of the training data was accessed via scraping and other gray market methods that were often explicitly violating the TOU of the various places they scraped from. So it all is very difficult to sort through ethically.
Don't know why you're downvoted; it's a good question.
As a matter of fact, it almost happened for search engines in France. Newspapers argued that snippets were leading people not to visit their ad-infested sites, thus losing them revenue.
https://techcrunch.com/2020/04/09/frances-competition-watchdog-orders-google-to-pay-for-news-reuse/