this post was submitted on 08 Aug 2023

Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


top 10 comments
[–] [email protected] 1 points 1 year ago (1 children)

Or don't do anything. There are plenty of crawlers out there, and disallowing them won't stop the unethical ones.

[–] [email protected] 1 points 1 year ago (1 children)

Just because some people might break into my house doesn't mean I'll stop locking my doors.

[–] [email protected] 1 points 1 year ago (1 children)

That doesn't lock anything; it's not a security feature.

[–] [email protected] 1 points 1 year ago (1 children)

A house door lock isn’t that much about security either.

[–] [email protected] 1 points 1 year ago

It's a deterrent. Which is a pretty apt comparison for robots.txt and user agent blocking.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

Is there some way you could have your web server log who scrapes the site? If you disallow ChatGPT and still find that it has scraped your site, would you have cause to sue? @legaleagle (or anyone else too)
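Most web servers (Nginx, Apache) record the request's User-Agent header in their access logs by default, so you can at least see which self-identified crawlers hit the site. A rough sketch in Python, assuming the common "combined" log format; the log lines and the bot substrings here are just illustrative examples:

```python
import re
from collections import Counter

# In the combined log format the User-Agent is the last quoted field,
# so anchor the match to the end of the line.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

# Illustrative crawler markers: GPTBot is OpenAI's documented user agent,
# CCBot is Common Crawl's. Extend the list as needed.
BOT_MARKERS = ["GPTBot", "CCBot"]

def count_bot_hits(log_lines):
    """Count hits per known bot marker across an iterable of log lines."""
    hits = Counter()
    for line in log_lines:
        match = UA_PATTERN.search(line)
        if not match:
            continue
        user_agent = match.group(1)
        for marker in BOT_MARKERS:
            if marker in user_agent:
                hits[marker] += 1
    return hits

sample = [
    '1.2.3.4 - - [08/Aug/2023:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0; compatible; GPTBot/1.0"',
    '5.6.7.8 - - [08/Aug/2023:10:00:01 +0000] "GET /about HTTP/1.1" 200 1024 "-" "Mozilla/5.0 (X11; Linux x86_64)"',
]
print(count_bot_hits(sample))  # Counter({'GPTBot': 1})
```

Of course, this only catches crawlers that identify themselves honestly in the User-Agent header.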

[–] [email protected] 1 points 1 year ago (1 children)

It's gotta be pretty difficult to differentiate human users from bots. If it were easy, you could prevent bots from loading the page altogether.

[–] [email protected] 1 points 1 year ago

Exactly what Google is trying to do currently, just in the worst way possible.

[–] [email protected] 1 points 1 year ago

I mean, you can add their user agent to the robots file, but the crawler could just change its user agent, or even ignore the robots file entirely if the server isn't filtering requests by user agent.
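For reference, the robots.txt entry in question looks like this (GPTBot is the user-agent token OpenAI documents for its crawler):

```
User-agent: GPTBot
Disallow: /
```

Honoring it is entirely voluntary on the crawler's side. Actual enforcement means matching the User-Agent header server-side (e.g. returning 403 from the web server config), and even that only stops crawlers that identify themselves truthfully.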

[–] [email protected] 1 points 1 year ago

"Please label all of your interesting text so we can flag it with our webcrawler to train on later."