this post was submitted on 08 Jan 2024
134 points (89.0% liked)

Generative AI Has a Visual Plagiarism Problem: Experiments with Midjourney and DALL-E 3 show a copyright minefield

[–] [email protected] 22 points 10 months ago* (last edited 10 months ago) (3 children)

This has been known for a long time. The main point of contention now will be who is liable for infringing outputs. The convenient answer would be to put the responsibility on the users, who would then have to avoid sharing/profiting from infringing images. In my opinion this solution can only apply in cases where the model is being run by the end user.

When a model is served online, locked behind a subscription or API fee, the service provider is potentially selling infringing works straight to the user. Section 230 will likely play a role, but even then there will be issues in cases where a model outputs protected characters without an explicit request.
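Hosted services typically try to mitigate this with prompt filtering, but a denylist only catches explicit requests, which is exactly the gap when a model produces protected characters unprompted. A toy sketch of that kind of check (the term list and function names here are hypothetical illustrations, not any vendor's real implementation):

```python
# Toy pre-generation prompt filter of the kind a hosted image service
# might run. The denylist entries are hypothetical examples only.
BLOCKED_TERMS = {"mickey mouse", "mario", "pikachu"}

def prompt_allowed(prompt: str) -> bool:
    """Reject prompts that explicitly name a denylisted character."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)
```

Note the obvious limitation: a prompt like "a cheerful cartoon mouse with red shorts" passes this filter yet can still yield an infringing image, which is why filtering the prompt alone doesn't settle the liability question.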

[–] [email protected] 7 points 10 months ago* (last edited 10 months ago)

This is literally it; it's really not that complicated. Training on a dataset is not (currently) an infringement of any of the rights conferred by copyright. Generating infringing content is still possible, but only when the work would be infringing anyway. The involvement (or not) of AI in the workflow is not some black pill that automatically makes a work infringing, but it remains possible to produce a work substantially similar to a copyrighted one.
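"Substantially similar" is ultimately a legal judgment, but services can at least screen mechanically for near-copies of known works. A minimal sketch of an average-hash comparison, assuming images have already been decoded into small grayscale grids (the whole thing is illustrative; real pipelines would downscale full images with a library like PIL, and no hash threshold maps onto the legal test):

```python
# Minimal average-hash (aHash) sketch for flagging near-duplicate images.
# Inputs are small grayscale grids: lists of rows of 0-255 ints.

def average_hash(pixels):
    """Return a bit list: 1 where a pixel is above the grid mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

def looks_similar(img_a, img_b, threshold=1):
    """Flag image pairs whose hashes differ in at most `threshold` bits."""
    return hamming(average_hash(img_a), average_hash(img_b)) <= threshold
```

A check like this only catches close copies of images you already hold; it says nothing about a freshly generated character that merely resembles a protected design, which is where the real dispute lies.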

[–] [email protected] 2 points 10 months ago (1 children)

Meanwhile, as we speak, websites like Civitai and others have started to paywall these models and their outputs. It's going to get ugly for some of them.

[–] [email protected] 3 points 10 months ago (1 children)

That isn't happening. They've backtracked on that plan and are working with users on a better plan.

[–] [email protected] 2 points 10 months ago

Oh, really? Let's see. Good to hear.

[–] [email protected] -2 points 10 months ago* (last edited 10 months ago)

The users did not access the copyright-protected data, so they can reasonably argue lack of knowledge of any similarities as a defence.

In music that gives you a free pass because a lot of music is similar.

Ed Sheeran made music similar to Marvin Gaye's through essentially cultural osmosis of ideas. Robin Thicke deliberately took a Marvin Gaye reference and directly copied it.

The legal and moral differences relied on knowledge.

The liability has to fall on whoever fed the model the data in the first place. The model might be Robin Thicke or Ed Sheeran, but it has been programmed with the specific intention of creating similar work from a collection of references. That puts it plainly in the Robin Thicke camp to me.

The AI's intent is programmed, and if a human pursued that programmed objective with copyrighted material, that human would be infringing copyright unless they paid royalties.