This post was submitted on 02 Mar 2025
154 points (86.3% liked)

Technology

[–] [email protected] 2 points 1 day ago

"that way we can profit from normies and Nazis!"

[–] [email protected] 4 points 1 day ago

It already does, though not in the individualized manner he's describing.

I don't think that's entirely a bad thing. Its current form, where priority one is keeping advertisers happy, is a bad thing, but I'm going to guess everyone reading this has a machine learning algorithm of some sort keeping most of the spam out of their email.
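
For anyone who wants to see what that boils down to, here's a minimal sketch of the classic approach, a bag-of-words Naive Bayes spam filter (using scikit-learn; the tiny training set is made up purely for illustration):

```python
# Minimal sketch of the classic email spam filter: a bag-of-words
# Naive Bayes classifier. The training examples are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "win a free prize now, click here",       # spam
    "limited offer, claim your reward",       # spam
    "meeting moved to 3pm tomorrow",          # ham
    "here are the notes from today's call",   # ham
]
train_labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)

model = MultinomialNB()
model.fit(X, train_labels)

new_mail = ["claim your free prize today"]
print(model.predict(vectorizer.transform(new_mail)))  # most likely [1], i.e. spam
```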

BlueSky's labelers are a step toward the individualized approach. I like them; one of the first things I did there was filter out what one labeler flags as AI-generated images.

[–] [email protected] 2 points 1 day ago (1 children)

Empowering users or just handing them the keys to their own echo chambers? Innovative but fraught with potential downsides.

🐱🐱🐱

[–] [email protected] 2 points 1 day ago (1 children)

We're all living inside echo chambers already. Nobody wants to be force-fed a "balanced" online media diet. Just imagine what the feed would be like if it contained an equal amount of content from every social media platform in the world, with all possible views represented. People would either not want to engage with it at all, fight and argue all day, or start blocking opposing views to get back into the echo chamber. I think people should be free to choose for themselves what kind of content they consume.

[–] [email protected] 1 points 1 day ago

You’re right that echo chambers are unavoidable, but dismissing balance as chaos ignores the nuance. The current system already feeds division, so why not explore tools that nudge users toward diverse perspectives without forcing them? Autonomy doesn’t have to mean isolation—it can coexist with thoughtful design that fosters understanding instead of entrenching biases.

Rejecting balance outright feels like surrendering to the status quo.

🐱🐱🐱

[–] [email protected] 1 points 1 day ago

Yeah but my madlibs generator might get confused without a /s

At least 'AI' has thick skin?

[–] [email protected] 2 points 1 day ago

I disagree, but I think he's close. The future of moderation should be customizable by users, but it needs to be based on human moderation: let people pick their own moderators, fine-tune that moderation to their liking, and give them the option to review moderation decisions and make adjustments.
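
To make that concrete, here's a rough sketch of what the data model could look like. Every name and field here is hypothetical, just to illustrate the idea of users subscribing to moderators they trust while keeping the final say:

```python
# Hypothetical sketch of user-customizable human moderation:
# users subscribe to moderators they trust, weight them, and can
# override individual decisions after review.
from dataclasses import dataclass, field

@dataclass
class ModerationDecision:
    post_id: str
    moderator: str
    action: str          # e.g. "hide", "flag", "allow"

@dataclass
class UserModerationPrefs:
    trusted_moderators: dict[str, float] = field(default_factory=dict)  # name -> weight
    overrides: dict[str, str] = field(default_factory=dict)             # post_id -> action

    def resolve(self, decisions: list[ModerationDecision]) -> dict[str, str]:
        """Combine trusted moderators' decisions, then apply the user's overrides."""
        scores: dict[str, dict[str, float]] = {}
        for d in decisions:
            weight = self.trusted_moderators.get(d.moderator, 0.0)
            if weight <= 0:
                continue
            actions = scores.setdefault(d.post_id, {})
            actions[d.action] = actions.get(d.action, 0.0) + weight
        resolved = {pid: max(acts, key=acts.get) for pid, acts in scores.items()}
        resolved.update(self.overrides)  # the user always gets the final say
        return resolved

prefs = UserModerationPrefs(trusted_moderators={"alice": 1.0, "bob": 0.5})
prefs.overrides["post42"] = "allow"   # user reviewed and reversed a call
decisions = [
    ModerationDecision("post42", "alice", "hide"),
    ModerationDecision("post7", "bob", "flag"),
]
print(prefs.resolve(decisions))  # {'post42': 'allow', 'post7': 'flag'}
```

The point is that the moderation decisions themselves stay human; the software only combines and applies them per user.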

[–] [email protected] 0 points 1 day ago* (last edited 1 day ago)

I think that he's probably correct that this is, in significant part, going to be the future.

I don't think that human moderation is going to entirely vanish, but it's not cheap to pay a ton of humans to do what it would take. A lot of moderation is, well...fairly mechanical. Like, it's probably possible to detect, with reasonable accuracy, that you've got a flamewar on your hands, stuff like that. You'd want to do as much as you can in software.
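
To illustrate just how mechanical the mechanical part can be, here's a deliberately crude sketch of a flamewar heuristic; all the thresholds are made up:

```python
# Crude, made-up heuristic for "this thread is probably a flamewar":
# lots of replies in a short window, dominated by a couple of accounts
# going back and forth. All thresholds are illustrative only.
from collections import Counter

def looks_like_flamewar(replies, window_minutes=30,
                        min_replies=20, dominance_threshold=0.6):
    """replies: list of (author, minutes_since_thread_start) tuples."""
    recent = [author for author, t in replies if t <= window_minutes]
    if len(recent) < min_replies:
        return False
    # Share of recent replies produced by the two most active accounts.
    top_two = sum(count for _, count in Counter(recent).most_common(2))
    return top_two / len(recent) >= dominance_threshold

replies = [("alice", i) for i in range(12)] + [("bob", i) for i in range(12)]
print(looks_like_flamewar(replies))  # True: 24 fast replies, two accounts dominate
```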

Human moderators sleep, leave the keyboard, do things like that. Software doesn't.

Also, if you have cheap-enough text classification, you can do it on a per-user basis, so that instead of a global view of the world, different people see different content being filtered and recommended, which I think is what he's proposing:

Ohanian said at the conference that he thinks social media will "eventually get to a place where we get to choose our own algorithm."

Most social media relies on at least some level of recommendations.

This isn't even new for him. The original vision for Reddit, as I recall, was that the voting would be used to build a per-user profile to feed a recommendations engine. That never really happened. Instead, we wound up with subreddits (so self-selecting communities are part of it) and global voting on content within them.

I mean, text classifiers aimed at filtering spam out of email have been around forever; it's not even terribly new technology. Some subreddits had moderator-run bots that already did some level of automated moderation.
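
To make the per-user angle concrete, here's an illustrative sketch along the same lines as those old spam filters: a small classifier trained on one user's own hide/keep decisions and applied to their incoming feed (again using scikit-learn, with made-up data):

```python
# Sketch of per-user filtering: a logistic regression trained on one
# user's own hide/keep decisions, then applied to their incoming feed.
# Training data is made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seen_posts = [
    "crypto giveaway, send 1 coin get 2 back",   # hidden
    "you won't believe this one weird trick",    # hidden
    "release notes for the new kernel version",  # kept
    "benchmark results for the latest GPUs",     # kept
]
user_hid_it = [1, 1, 0, 0]  # 1 = this user hid it, 0 = kept it

vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(seen_posts), user_hid_it)

incoming = ["huge crypto giveaway today", "kernel benchmark results posted"]
scores = model.predict_proba(vectorizer.transform(incoming))[:, 1]
feed = [post for post, p_hide in zip(incoming, scores) if p_hide < 0.5]
print(feed)  # with luck, only the benchmark post survives the filter
```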
