this post was submitted on 02 Feb 2024
121 points (88.1% liked)


It feels like the number of both divisive posts and ghoulish comments is rising again.

One could argue that the world has a lot of divisive stuff going on and Lemmy just talks about it. But the way people post about it seems more over the top and hateful than it has been in the past.

I'm not saying that's what's happening, but if I wanted to bring the Fediverse down, or at least keep my customers from going there, I would sow this kind of stuff as much as I could.

I'm blocking ghouls left, right and center at the moment, but if I ever asked a friend to join Lemmy, I'd hate to think of what they would see that I no longer do.

Do we need stronger moderation?

  • Maybe ban politics from c/memes?
  • Become a little more stringent on "don't be a jerk" rules in communities?

One thing that really bothers me is the collapsing "discourse". Trying to mend fences and keep the conversation between sides going, in my experience, leads to nothing but downvotes and a shitstorm.

I feel like a little more interaction (rather than intervention, at first) from the moderators would do wonders there.

Thanks for reading this rant. Have a nice day.

[–] [email protected] 5 points 10 months ago (1 children)

It's interesting, right?

I'm thinking the architecture of the fediverse makes it particularly vulnerable to these sorts of attacks.

I'm pretty sure I've also spotted bots circlejerking on some subjects, which makes me think there are a few different sources.

[–] [email protected] 0 points 10 months ago (1 children)

Very interesting indeed.

I'm starting to report, block, and ban accounts that use abusive language from being viewed on my instance, but from a systemic standpoint we should find a design solution to make this work.

Reddit had karma for this reason, among others: people needed to make helpful contributions to prove they were able to function in the group.

For many reasons this is not implemented in the Fediverse, but a design solution would be good.

[–] [email protected] 4 points 10 months ago (2 children)

If I were designing an anti-troll/bot system, I'd implement a few things. Let's call any bad actor on here a bot/troll, or "broll", for ease.

  1. Reputation-based posting isn't a bad idea if done carefully (see the sketch after this list).
  2. When a broll is banned, any users from the same IP are flagged as suspects, and a subsequent ban causes delayed posting from that IP. If a VPN is used, the same effect is applied to the instance's host list. Exceptions based on reputation.
  3. You can check whether text is AI/LLM-generated: run an automated API check before posting, with an immediate ban if it is flagged.
  4. Checks on whether a user posts inhumanly fast or is oddly active etc. would be sensible.
  5. Any anti-broll system has to be adaptive, and the measures taken need to be kept secret (this post, for example).
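
To make points 1 and 4 a bit more concrete, here's a minimal Python sketch of a reputation gate plus a sliding-window rate check. All the thresholds and the Account type are made up for illustration; a real version would have to live inside the server and feed a moderator queue rather than flatly refusing posts.

```python
import time
from collections import defaultdict, deque
from dataclasses import dataclass

MIN_REPUTATION = 10        # made-up threshold: below this, posting is restricted
MAX_POSTS_PER_WINDOW = 5   # made-up threshold: more than this per window looks bot-like
WINDOW_SECONDS = 60

@dataclass
class Account:
    name: str
    reputation: int = 0
    flagged: bool = False

# timestamps of recent posts, per account name
_recent_posts: dict[str, deque] = defaultdict(deque)

def may_post(account: Account, now: float | None = None) -> bool:
    """Return True if this account is allowed to post right now."""
    now = time.time() if now is None else now

    # Point 1: low-reputation accounts get restricted (here: simply refused).
    if account.reputation < MIN_REPUTATION:
        return False

    # Point 4: drop timestamps older than the window, then count what's left.
    window = _recent_posts[account.name]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_POSTS_PER_WINDOW:
        account.flagged = True  # inhumanly fast; hand off to a moderator queue
        return False

    window.append(now)
    return True

if __name__ == "__main__":
    alice = Account("alice", reputation=42)
    print([may_post(alice) for _ in range(7)])  # last two attempts are refused
    print(alice.flagged)                        # True
```
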
[–] [email protected] 2 points 10 months ago (2 children)

Very good ideas! Any idea if something like this already exists? If not, shall we work on something? I have some experience in python if that helps.

[–] [email protected] 1 points 10 months ago (1 children)

Thanks but not sure what's currently implemented or even what the code base is written in 😅.

I might have a poke around and see if there's any low hanging fruit.

Call me crazy, but with a $5B IPO about to start, I'd be shocked if Reddit wasn't paying some troll farms to brigade the Fediverse, and it'd be a shame if spez wins.

[–] [email protected] 2 points 10 months ago (1 children)

That's an interesting idea! Thank you very much for mentioning it!

We can absolutely write a bot in Python and could try to use it like that. I've already made a Discord bot, so this shouldn't be brutally hard.
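
Just to make the idea concrete, here's a rough sketch of what such a bot could look like: polling new comments over Lemmy's HTTP API with plain requests and flagging suspicious ones for a human. The instance URL, the keyword heuristic, and the "print instead of report" handling are placeholders, and the endpoint/field names follow the v3 API as I remember it, so check them against the API docs for whatever Lemmy version the instance runs.

```python
import re
import time
import requests

INSTANCE = "https://lemmy.example"   # placeholder instance URL
SUSPICIOUS = re.compile(r"(free crypto|buy followers)", re.I)  # toy heuristic
POLL_SECONDS = 60

def fetch_new_comments(limit: int = 50) -> list[dict]:
    """Fetch the newest comments from the instance's comment list endpoint."""
    resp = requests.get(
        f"{INSTANCE}/api/v3/comment/list",
        params={"sort": "New", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("comments", [])

def main() -> None:
    seen: set[int] = set()
    while True:
        for item in fetch_new_comments():
            comment = item.get("comment", {})
            cid, body = comment.get("id"), comment.get("content", "")
            if cid in seen:
                continue
            seen.add(cid)
            if SUSPICIOUS.search(body):
                # A real bot would file a report or message the mod team here.
                print(f"flagging comment {cid}: {body[:80]!r}")
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()
```
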

[–] [email protected] 1 points 10 months ago (1 children)

Awesome 👍 I'm more C#/Java/Angular, if there's anything I can contribute.

[–] [email protected] 2 points 10 months ago (1 children)

Well, I do know some C#, but not enough to write anything functional.

You could hit me up on GitHub or PM me here, get a repo set up somewhere, and go from there.

What do you think?

[–] [email protected] 1 points 10 months ago (1 children)

I'm currently on holiday, but that sounds great for when I return. I might even get started early. Can you PM me the details?

[–] [email protected] 2 points 10 months ago (1 children)

The other person we came across is now trying to somehow discredit me. Whatever their plan is. Jeez.

Sure, I'll send you a PM. Have a nice vacation.

[–] [email protected] 1 points 10 months ago

Thanks. Yeah, that was odd.

[–] [email protected] 1 points 10 months ago (2 children)

PieFed is an open-source Lemmy alternative (written in Python) that makes good use of karma/reputation, as shown in this video:

https://mastodon.nzoss.nz/system/media_attachments/files/111/648/646/494/228/522/original/02cb1b5182a1f9b6.mp4

Try the demo site at https://piefed.social and check out https://join.piefed.social. Also see https://piefed.social/c/piefed_meta for recent feature announcements.

[–] [email protected] 2 points 10 months ago

Thank you very much for sharing, I'll keep an eye on it.

[–] [email protected] 0 points 10 months ago (1 children)

I'm not looking for another thing to start, but for a way to make the current thing work. But thanks.

[–] [email protected] 2 points 10 months ago (1 children)

To be fair, PieFed uses Lemmy communities and comments; it's almost just another interface.

The reputation system is indeed interesting; for example, this thread shows warnings like "low reputation, beware!": https://piefed.social/post/27070#post_replies

[–] [email protected] 1 points 10 months ago

Ah! Understood. Thanks for clarifying.

[–] [email protected] 0 points 10 months ago (1 children)

You can check whether text is AI/LLM-generated: run an automated API check before posting, with an immediate ban if it is flagged

If this LLM-detection function ever results in false positives, this system will be banning innocent people.

Also, there are many, many cases where a person openly shares output from an LLM without it being in any way antisocial.
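
To put some rough numbers on the false-positive concern (every figure below is invented purely for illustration), even a detector that sounds accurate would ban about as many innocent people as actual bots once human posts vastly outnumber bot posts:

```python
# Back-of-the-envelope base-rate math; all numbers are invented.
human_posts_per_day = 50_000
bot_posts_per_day = 500
false_positive_rate = 0.01   # 1% of human posts misread as LLM output
true_positive_rate = 0.95    # 95% of actual LLM posts caught

wrongly_banned_humans = human_posts_per_day * false_positive_rate   # 500 per day
caught_bots = bot_posts_per_day * true_positive_rate                # 475 per day

# Roughly half of all automatic bans would hit innocent users.
print(wrongly_banned_humans, caught_bots)
```
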

[–] [email protected] 0 points 10 months ago (1 children)

The odds of someone independently producing the same sentence as an LLM, within any common-sense window of time, are far longer than the odds of winning the lottery or getting struck by lightning.

Your second point is straight up nonsense. This platform is for humans to interact. The use of bots is inherently deceptive.

Fascinating to have someone argue for them. I think the backend logs will be pretty illuminating.

[–] [email protected] 0 points 10 months ago (1 children)

I don't know what a person "coming up with the same sentence as an LLM" would have to do with this, unless the LLM detection is based on direct string comparison.

The use of bots is inherently deceptive

Nope. I can say:

Here’s what GPT-4 generated when I gave it that prompt: “[some LLM output that would get them banned by the machine we’re proposing to build]”

That is not deceptive. But it would be detected by this system and result in the person being banned, because you guys are gung-ho to build a powerful head-cracking machine and didn't think of an obvious edge case.

[–] [email protected] 0 points 10 months ago

You're wrong and don't have the technical knowledge to understand why, and I can't be bothered explaining it.

Relax, it won't affect that case.