I hope it's effective.
Maybe against bad crawlers. If you know what you're looking for and aren't just trying to grab anything and everything, this shouldn't be very effective. Any good web crawler has limits. This seems to be targeted at Facebook's apparently very dumb web crawler.
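(Aside: here's a minimal sketch of the kind of limits a sensible crawler enforces. The numbers and the `fetch_links` helper are made up for illustration; depth caps and per-host budgets are why an endless link maze mostly hurts careless scrapers.)

```python
# Illustrative only: typical guardrails a well-behaved crawler enforces.
from collections import deque
from urllib.parse import urlparse

MAX_DEPTH = 5              # don't follow links forever
MAX_PAGES_PER_HOST = 1000  # cap pages fetched from any one domain

def crawl(seed_urls, fetch_links):
    """fetch_links(url) -> list of linked URLs; passed in to keep the sketch abstract."""
    seen = set()
    pages_per_host = {}
    queue = deque((url, 0) for url in seed_urls)
    while queue:
        url, depth = queue.popleft()
        host = urlparse(url).netloc
        if url in seen or depth > MAX_DEPTH:
            continue
        if pages_per_host.get(host, 0) >= MAX_PAGES_PER_HOST:
            continue  # this host looks bottomless; stop spending budget on it
        seen.add(url)
        pages_per_host[host] = pages_per_host.get(host, 0) + 1
        for link in fetch_links(url):
            queue.append((link, depth + 1))
    return seen
```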
Yeah I was just thinking... this is not at all how the tools work.
It might be initially, but they'll figure out a way around it soon enough.
Remember those articles about "poisoning" images? That didn't get very far either.
The way to get around it is respecting robots.txt lol
But that's not respecting the shareholders 😤
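(For reference, a minimal sketch of what respecting robots.txt looks like on the crawler side, using Python's standard-library robot parser; the site URL and user-agent string are placeholders.)

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt (placeholder domain).
rp = RobotFileParser("https://example.com/robots.txt")
rp.read()

url = "https://example.com/some/page"
if rp.can_fetch("MyCrawler/1.0", url):
    print("allowed to fetch", url)
else:
    print("robots.txt says no; a polite crawler skips", url)
```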
This kind of stuff has always been an endless war of escalation, the same as any kind of security. There was a period of time where all it took to mess with Gen AI was artists uploading images of large circles or something with random tags to their social media accounts. People ended up with random bits of stop signs and stuff in their generated images for like a week. Now, artists are moving to sites that treat AI scrapers like malware attacks and degrading the quality of the images that they upload.
It's not. If it was, every search engine out there would be belly up at the first nested link.
Google/Bing just consume their own crawling traffic. You don't want to NOT show up in search queries right?
Same problem with tarpitting. The search engines are doing the crawling for their own companies, and you don't want to poison your own search results.
Conceivably, they'll stop being search crawls altogether, and if you expect to get any traffic it'll come from AI crawls :/
I think to use it defensively, you should put the maze's path into robots.txt, so only crawlers that don't follow the rules get greeted with the maze. For a proper search engine crawler, that should be the standard behavior.
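(A minimal sketch of that setup, with a made-up /maze/ path and toy handlers: robots.txt disallows the maze, so the only clients that ever reach it are the ones ignoring the rules.)

```python
# Illustrative only: a tiny request handler for the defensive setup above.
ROBOTS_TXT = "User-agent: *\nDisallow: /maze/\n"

def handle_request(path: str) -> str:
    if path == "/robots.txt":
        return ROBOTS_TXT
    if path.startswith("/maze/"):
        # Only a crawler ignoring robots.txt ever reaches this branch.
        return "<html><!-- endless generated links would go here --></html>"
    return "<html>normal page</html>"

print(handle_request("/maze/a/b"))  # what a rule-breaking crawler would get
```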
Spiders already detect link bombs and recursion bombs, and they're capable of rendering the page in memory to see what's truly visible.
It's a great idea but it's a really old trick and it's already been covered.