[–] [email protected] 111 points 3 months ago (4 children)

Block? Nope, robots.txt does not block bots. It's just a text file that says: "Hey robot X, please do not crawl my website. Thanks :>"

[–] [email protected] 60 points 3 months ago (7 children)

I disallow a page in my robots.txt and IP-ban everyone who visits it. That's pretty effective.
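
A minimal sketch of the trap entry (the path is made up); note that robots.txt is public, so the entry itself is what advertises the page to badly behaved crawlers:

User-agent: *
Disallow: /trap-8f2k1/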

[–] [email protected] 16 points 3 months ago (1 children)

Did you ban it in your humans.txt too?

[–] [email protected] 18 points 3 months ago* (last edited 3 months ago) (2 children)

humans typically don't visit [website]/fdfjsidfjsidojfi43j435345 when there's no button that links to it

[–] [email protected] 16 points 3 months ago (1 children)

I used to do this on one of my sites that was moderately popular in the '00s. I had a link hidden via JavaScript so a user couldn't click it (unless they disabled JavaScript and clicked it), though it was hidden pretty well even for that case.

IP hits would be put into a log, and my script would add that IP's /24 subnet to my firewall. I allowed specific IP ranges for some search engines.

Anyway, it caught a lot of bots. I really just wanted to stop automated attacks and spambots on the web front.

I also had a honeypot port that basically did the same thing. If you sent packets to it, your /24 was added to the firewall for a week or so. I think I just used netcat to append to yet another log and wrote a script to add those /24s to iptables.

I did it because my logs were so full of noise from spambots and automated attacks; it was pretty crazy.
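
Roughly the shape of it, as a sketch rather than the original script; the nginx log path, the trap URL, and the allowed Googlebot range are assumptions:

# scan the access log for hits on the trap URL and drop that /24 at the firewall
import ipaddress
import subprocess

TRAP = "/fdfjsidfjsidojfi43j435345"               # the hidden trap path
ALLOW = [ipaddress.ip_network("66.249.64.0/19")]  # assumed search-engine range to spare

def ban(ip: str) -> None:
    net = ipaddress.ip_network(f"{ip}/24", strict=False)
    if any(net.subnet_of(allowed) for allowed in ALLOW):
        return  # don't ban allowed search engines
    subprocess.run(["iptables", "-A", "INPUT", "-s", str(net), "-j", "DROP"], check=True)

with open("/var/log/nginx/access.log") as log:
    for line in log:
        if TRAP in line:
            ban(line.split()[0])  # first field of a combined-format log line is the client IP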

[–] [email protected] 10 points 3 months ago

This thread has provided genius ideas I somehow never thought of, and I'm totally stealing them for my sites lol.

[–] [email protected] 14 points 3 months ago* (last edited 3 months ago)

I LOVE VISITING FDFJSIDFJSIDOJFI435345 ON HUMAN WEBSITES, IT IS ONE OF MY FAVORITE HUMAN HOBBIES. ~~🤖~~👨

[–] [email protected] 9 points 3 months ago (1 children)
[–] [email protected] 25 points 3 months ago (1 children)

Imagine posting a rule that says "do not walk on the grass" among other rules, and then banning anyone who steps on the grass, on the theory that if they ignored that rule they were probably ignoring the others too. Except the grass is somewhere no one would ever see unless they actually read the rules; the rules are the only place the grass is even mentioned.

[–] [email protected] 7 points 3 months ago (1 children)

Is the page linked in the site anywhere, or just mentioned in the robots.txt file?

[–] [email protected] 10 points 3 months ago (1 children)
[–] [email protected] 8 points 3 months ago

Excellent.

I think I might be able to create a fail2ban rule for that.
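
Something like this should work; a sketch assuming an nginx access log and a made-up trap path:

# /etc/fail2ban/filter.d/robots-trap.conf
[Definition]
failregex = ^<HOST> .* "GET /trap-8f2k1/

# appended to /etc/fail2ban/jail.local
[robots-trap]
enabled  = true
port     = http,https
filter   = robots-trap
logpath  = /var/log/nginx/access.log
maxretry = 1
bantime  = 86400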

[–] [email protected] 5 points 3 months ago (3 children)

Not sure that's effective at all. Why would a crawler check robots.txt if it's programmed to ignore it anyway?

[–] [email protected] 16 points 3 months ago

Because many crawlers seem to explicitly crawl "forbidden" pages.

[–] [email protected] 3 points 3 months ago

Google and script kiddies copying code...

[–] [email protected] 1 points 2 months ago

You could also place the same page as a hidden link on your home page.
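
For example, something like this on the home page (path hypothetical); it's kept out of the accessibility tree so screen-reader users don't stumble into it:

<a href="/trap-8f2k1/" style="display:none" aria-hidden="true" tabindex="-1">crawler trap</a>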

[–] [email protected] 4 points 3 months ago

I doubt it'd be possible given the lack of server control, but I'm definitely going to look this up to see if anything similar could be done on a Neocities site.

[–] [email protected] 4 points 3 months ago
[–] [email protected] 2 points 2 months ago (2 children)

Can this be done without fail2ban?

[–] [email protected] 1 points 2 months ago (1 children)
[–] [email protected] 2 points 2 months ago (1 children)

How did you do it? Looking to do this on my own site.

[–] [email protected] 3 points 2 months ago

My website's backend is written in Flask, so it was pretty easy to add.
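
Presumably something along these lines; a guess at the shape of it, not the poster's actual code (the route name and in-memory ban list are made up):

from flask import Flask, abort, request

app = Flask(__name__)
banned_ips: set[str] = set()  # in practice you'd persist this or hand it to the firewall

@app.before_request
def reject_banned():
    # turn away anyone who previously hit the trap
    if request.remote_addr in banned_ips:
        abort(403)

@app.route("/trap-8f2k1/")  # hypothetical path, disallowed in robots.txt
def trap():
    banned_ips.add(request.remote_addr)
    abort(404)  # pretend the page doesn't exist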

[–] [email protected] 1 points 2 months ago

Should be able to do it with CrowdSec.

[–] [email protected] 45 points 3 months ago

Robots.txt is honor-based and Big Data has no honor.

[–] [email protected] 13 points 3 months ago (1 children)

Unfortunate indeed.

“Can AI bots ignore my robots.txt file? Well-established companies such as Google and OpenAI typically adhere to robots.txt protocols. But some poorly designed AI bots will ignore your robots.txt.”

[–] [email protected] 23 points 3 months ago

"Typically adhere," but they don't have to follow it.

"Poorly designed AI bots"

Is it poor design if ignoring robots.txt entirely, to scrape as much data as possible, is an explicit design choice? I'd argue these are AI bots designed to scrape everything regardless of robots.txt. That's the intention. Asshole design vs. poor design.

[–] [email protected] 7 points 3 months ago (1 children)

This is why I block them in .htaccess:

# Bot Agent Block Rule: reject any request whose
# User-Agent matches one of the listed bot names
RewriteEngine On
# [NC] makes the match case-insensitive
RewriteCond %{HTTP_USER_AGENT} (BOTNAME|BOTNAME2|BOTNAME3) [NC]
# [F] returns 403 Forbidden, [L] stops processing further rewrite rules
RewriteRule (.*) - [F,L]

[–] [email protected] 19 points 3 months ago (1 children)

This is still relying on the bot being nice enough to tell you that it's a bot; it could just not.

[–] [email protected] 7 points 3 months ago (2 children)

Exactly. The only truly effective way I’ve ever found to block bots is to use a service like Akamai. They have an add-on called Bot Manager that identifies requests as bots in real time. They have a library of over 1,000 known bots and can also identify unknown bots built on different frameworks, bots that impersonate well-known bots like Googlebot, etc. This service is expensive, but effective…

[–] [email protected] 5 points 3 months ago* (last edited 3 months ago)

I wonder if there is an AI scraper block list I could add to Suricata 🤔

[–] [email protected] 2 points 3 months ago (1 children)

How does this differentiate between a user and a bot if the User Agent doesn't say it's a bot?

[–] [email protected] 9 points 3 months ago* (last edited 3 months ago) (1 children)

When any browser, app, etc. makes an HTTP request, the request consists of a series of lines (headers) that define the details of the request, and what is expected in the response. For example:


GET /home.html HTTP/1.1
Host: developer.mozilla.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:50.0) Gecko/20100101 Firefox/50.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer: https://developer.mozilla.org/testpage.html
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Cache-Control: max-age=0

The thing is, many of these headers are optional, and there’s no requirement regarding their order. As a result, virtually every web browser, every programming framework, etc. sends different headers and/or orders them differently. So by looking at what headers are included in a request, the order of the headers, and in some cases the values of some headers, it’s possible to tell if a person is using Firefox or Chrome, even if you use a plug-in to spoof your User-Agent to look like you’re using Safari.

Then there’s what is known as TLS fingerprinting, which can also be used to help identify a browser/app/programming language. Since so many sites use/require HTTPS these days it provides another way to collect details of an end user. Before the HTTP request is sent, the client & server have to negotiate the encryption to use. Similar to the HTTP headers, there are a number of optional encryption protocols & ciphers that can be used. Once again, different browsers, etc. will offer different ciphers & in different orders. The TLS fingerprint for Googlebot is likely very different than the one for Firefox, or for the Java HTTP library or the Python requests package, etc.
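
As a concrete illustration of the idea, the open JA3 scheme (not necessarily what Akamai uses) hashes five fields of the ClientHello; reordering any one of the lists changes the fingerprint:

# JA3-style fingerprint: MD5 over the TLS version, ciphers, extensions,
# elliptic curves, and point formats, each given as the integer codes offered, in order
import hashlib

def ja3(version: int, ciphers, extensions, curves, point_formats) -> str:
    parts = [str(version)] + [
        "-".join(str(code) for code in field)
        for field in (ciphers, extensions, curves, point_formats)
    ]
    return hashlib.md5(",".join(parts).encode()).hexdigest()

# the same ciphers offered in a different order produce a different fingerprint
print(ja3(771, [4865, 4866], [0, 23, 65281], [29, 23], [0]))
print(ja3(771, [4866, 4865], [0, 23, 65281], [29, 23], [0]))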

On top of all this, Akamai uses other knowledge & tricks to determine bots vs. humans, not all of which are public knowledge. One thing they know, for example, is the set of IP addresses that Google’s bots operate out of (Google publishes its crawler IP ranges). So if they see a User-Agent identifying itself as Googlebot, they know it’s fake if it didn’t come from one of Google’s IPs. Akamai also occasionally injects JavaScript, cookies, etc. into a response to see how the client reacts. Lots of bots don’t process JavaScript, or only support a subset of it. Some bots also ignore cookies, and others even modify cookies to try to trick servers.
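
The Googlebot check in particular mirrors Google’s own documented verification: reverse-resolve the client IP, require a googlebot.com or google.com hostname, then forward-resolve to confirm it round-trips. A small sketch:

import socket

def is_real_googlebot(ip: str) -> bool:
    try:
        host, _, _ = socket.gethostbyaddr(ip)  # e.g. crawl-66-249-66-1.googlebot.com
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        return ip in socket.gethostbyname_ex(host)[2]  # forward lookup must match
    except OSError:
        return False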

It’s through a combination of all the above plus other sorts of analysis that Akamai doesn’t publicize that they can identify bot vs human traffic pretty reliably.

[–] [email protected] 1 points 3 months ago (1 children)

What if a bot/crawler drives a Chromium browser through Puppeteer instead of sending a direct HTTP request and, somehow, manages to set navigator.webdriver = false so the browser doesn't appear automated? It'd be tricky to identify that as a bot/crawler.

[–] [email protected] 3 points 3 months ago

Oh there are definitely ways to circumvent many bot protections if you really want to work at it. Like a lot of web protection tools/systems, it’s largely about frustrating the attacker to the point that they give up and move on.

Having said that, I know Akamai can detect at least some instances where browsers are controlled as you suggested. My employer (an Akamai customer, which is how I know a bit about all this) uses tools from a company called Sauce Labs for some automated testing. My understanding is that our QA teams can create tests that launch Chrome (or other browsers) and script their behavior to log into our website, navigate around, test different functionality, etc. I know Akamai recognizes this traffic as potentially malicious, because we have to configure the Akamai WAF to explicitly allow it through to our sites. I believe Akamai classifies this traffic as a “headless” Chrome impersonator bot.