this post was submitted on 23 Feb 2024
55 points (88.7% liked)

Fediverse


A community to talk about the Fediverse and all its related services using ActivityPub (Mastodon, Lemmy, KBin, etc.).



Given how Reddit now makes money by selling its data to AI companies, I was wondering how the situation looks for the fediverse. Typically you can block AI crawlers using robots.txt (The Verge reported on it recently: https://www.theverge.com/24067997/robots-txt-ai-text-file-web-crawlers-spiders). But this only works per domain/server, and the fediverse is about many different servers interacting with each other.
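For context, a robots.txt block of this kind is just a plain-text file at the site root listing the user-agent tokens of crawlers you want to exclude. A minimal sketch, using a few crawler tokens their operators have publicly documented (GPTBot for OpenAI, CCBot for Common Crawl, Google-Extended for Google's AI training fetches; the list is not exhaustive and tokens can change):

```text
# robots.txt — served at https://your-instance.example/robots.txt
# Crawlers that honor robots.txt will skip the entire site ("Disallow: /")

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Note this only covers crawlers that voluntarily check the file, and only on the one domain serving it.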

So if my kbin/Lemmy or Mastodon server blocks OpenAI's crawler via robots.txt, what does that even mean when people on other servers that don't block the crawler boost me on Mastodon, or when I reply to their posts? I suspect that unless every server I interact with blocks the same AI crawlers, I cannot prevent my posts from being used as AI training data.

[–] [email protected] 64 points 8 months ago* (last edited 8 months ago) (4 children)

We're sick of closed walled-garden monoliths like Reddit! Let's move to an open federated protocol where anyone can participate and the APIs can't be locked down!

...wait, not like that!

Yeah. This is what you signed up for when you joined the Fediverse: the ActivityPub protocol broadcasts your content to any other server that asks for it. And generally, that's how the Internet works. You're putting up a public billboard and expecting to control who gets to look at it. That's not going to work. Even robots.txt is just a gentleman's agreement; it's not enforceable.
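The closest you get to actual enforcement on your own server is refusing requests at the web-server level rather than asking politely. A rough nginx sketch (assuming nginx fronts your instance; the matched user-agent strings are examples, and this only stops crawlers that identify themselves honestly):

```nginx
# Inside the server {} block: reject requests whose User-Agent
# matches a known AI crawler token (case-insensitive).
if ($http_user_agent ~* "(GPTBot|CCBot|Google-Extended)") {
    return 403;
}
```

Even this only protects the one domain; copies of your posts federated to other instances are served under their rules, not yours.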

If you really want to prevent AI from training on your content with any degree of certainty, you're probably looking for a private forum of some kind run by someone you trust.

[–] [email protected] 24 points 8 months ago (1 children)

I don't expect anything; I was merely asking a question to clarify this.

[–] [email protected] 25 points 8 months ago

Well, I hope my answer clarifies it. You can't prevent LLMs from being trained on your public posts.
