this post was submitted on 02 May 2024
66 points (91.2% liked)

Ask Lemmy


I’ve started to realize that every social media platform, including Facebook, Telegram, Twitter, etc., has issues with bot spam and fake follower accounts. These platforms typically combat this problem by implementing various measures such as ban waves, behavior detection, and more.

What strategies/tools did Lemmy employ to address bots, and what additional measures could further improve these efforts?

top 20 comments
[–] [email protected] 36 points 6 months ago (1 children)

Currently, it's mostly manual removals, which isn't sustainable if the platform grows. Various instances are experimenting with their own moderation tools outside of Lemmy, and I don't think Lemmy itself has any features to combat this. Moderation improvements are something that's been talked about with Sublinks.

What additional measures could further improve these efforts?

Having an 'automod' similar to, but more advanced than, Reddit's would help a lot as a first step. No one likes excessive use of automod, but not having it at all would be much worse. An improved automod system, with guides and tips on how to use it effectively, would go a long way towards making moderation easier.
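The kind of automod the comment describes boils down to a small rule engine: patterns matched against a post, each mapped to a moderation action. Here's a minimal hypothetical sketch in Python; the rule fields, patterns, and action names are assumptions for illustration, not an existing Lemmy or Sublinks feature.

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    pattern: str  # regex matched against the post body
    action: str   # hypothetical action name, e.g. "remove" or "report"

def apply_rules(body: str, rules: list[Rule]) -> list[str]:
    """Return the list of actions triggered by a post body."""
    return [r.action for r in rules if re.search(r.pattern, body, re.IGNORECASE)]

# Example rules (made up for illustration)
rules = [
    Rule(r"buy cheap (viagra|oxycontin)", "remove"),
    Rule(r"free\s+crypto", "report"),
]
```

A real automod would need per-community configuration, rate limits, and allow-lists on top of this, but the core match-then-act loop is this simple.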

[–] [email protected] 10 points 6 months ago

I think the right strategy is providing all the tools, and then the instances themselves have to stay attractive. That's not on the developers, that's on the instances themselves.

[–] [email protected] 24 points 6 months ago (2 children)

We're not mainstream enough to have many bots yet. I think some instances needed to deal with bot spam, but I haven't seen any in the community I moderate.

[–] [email protected] 5 points 6 months ago (1 children)

I don't know if it's Lemmy or other parts of the federation, but I see plenty of drug spam and other stuff that I guess could be posted manually, but my guess is it's bots.

[–] [email protected] 6 points 6 months ago

That's a kbin thing. I have never seen 'buy cheap Viagra, Oxycontin, etc.' on Lemmy. It probably exists, but whenever I block and report a user they're from kbin.

[–] [email protected] 3 points 6 months ago

I'd change it to "it's not financially viable to have many bots yet".

[–] [email protected] 24 points 6 months ago

By not being popular enough to attract the majority of them

[–] [email protected] 15 points 6 months ago (1 children)

We don't, because bots don't know about us.

[–] [email protected] 3 points 6 months ago (1 children)
[–] [email protected] 3 points 6 months ago

Why did I forget to mention that?

[–] [email protected] 11 points 6 months ago

As a moderator of a couple of communities, some basic/copypasta misbehaviour is caught by automated bots that I largely had to bootstrap or heavily modify myself. Nearly everything else has to be manually reviewed, which obviously isn't sustainable in the long term.
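The copypasta-catching bots described above roughly amount to fingerprinting normalised post bodies and flagging repeats. This is a hedged sketch of that idea, assuming a simple whitespace/case normalisation and an arbitrary repeat threshold; real tools layer fuzzier matching on top.

```python
import hashlib
import re
from collections import Counter

def fingerprint(body: str) -> str:
    """Hash a post body after normalising case and whitespace,
    so trivially reworded copies collide."""
    normalised = re.sub(r"\s+", " ", body.lower()).strip()
    return hashlib.sha256(normalised.encode()).hexdigest()

class CopypastaFilter:
    def __init__(self, threshold: int = 3):
        self.seen = Counter()      # fingerprint -> times observed
        self.threshold = threshold # repeats before flagging (assumption)

    def is_spam(self, body: str) -> bool:
        fp = fingerprint(body)
        self.seen[fp] += 1
        return self.seen[fp] >= self.threshold
```

This catches only verbatim-ish copies; the secrecy problem discussed below is exactly about the smarter detection that sits beyond this baseline.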

Improving the situation is a complex issue, since these kinds of tools often require a level of secrecy that is incompatible with FOSS principles in order to work effectively. If you publicly publish your model/algorithm for detecting spam, spammers will simply test against it and craft their content to avoid it. This problem extends to accessing third-party tools, such as the specialised tools Microsoft and Google provide for identifying and reporting CSAM content to the authorities. They are generally unwilling to provision their services to small actors, IMO in an attempt to stop producers from testing their content against the tool and manipulating it to subvert detection.

[–] [email protected] 4 points 6 months ago
[–] [email protected] 3 points 6 months ago

Using hard af Captchas

[–] [email protected] 1 points 6 months ago

it welcome bot

[–] [email protected] 1 points 6 months ago

Personally I just block them

[–] [email protected] 0 points 6 months ago (2 children)

You can filter bots that identify as such in your account settings.
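That setting only works because the flag is self-reported. As a rough illustration of the client-side filtering involved (the `bot_account` field name mirrors Lemmy's API, but treat the exact structure here as an assumption):

```python
def hide_bot_posts(posts: list[dict], show_bots: bool) -> list[dict]:
    """Drop posts whose creator self-identifies as a bot.
    Honest bots set bot_account; spam bots simply won't."""
    if show_bots:
        return posts
    return [p for p in posts if not p["creator"].get("bot_account", False)]

feed = [
    {"title": "Welcome!", "creator": {"name": "welcome_bot", "bot_account": True}},
    {"title": "How does Lemmy fight bots?", "creator": {"name": "alice"}},
]
```

As the replies point out, this is a courtesy mechanism, not a defence: it filters the bots that want to be filtered.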

[–] [email protected] 11 points 6 months ago (1 children)

Except nobody with a bot farm will do that.

[–] [email protected] 1 points 6 months ago (1 children)

A bot farm can't get past the registration captcha. Only individual bots can, with the help of their owner, and so far those check the bot box.

[–] [email protected] 1 points 6 months ago

It's cute that you think that.

[–] [email protected] 3 points 6 months ago

Those bots are miscellaneous tools people have developed, such as the one that rewrites YouTube links to Piped links.

OP is talking about spam bots that won't be kind enough to tell us they are bots.