this post was submitted on 21 Jan 2025
1656 points (98.9% liked)

you are viewing a single comment's thread
[–] [email protected] 195 points 18 hours ago (42 children)

Agreed. But we need a solution against bots just as much. There's no way the majority of comments in the near future won't just be LLMs.

[–] [email protected] 4 points 10 hours ago

We have to use trust from real life. It's the only thing that centralized entities can't fake.

[–] [email protected] 69 points 18 hours ago (25 children)

Closed instances with vetted members, there’s no other way.

[–] [email protected] 1 points 3 hours ago

There might be clever ways of doing this: having volunteers help with the vetting process, admitting a certain number of members per day plus a queue, and vetting them along the way...
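The admit-N-per-day-plus-queue idea could be sketched roughly like this. A minimal sketch; all names here (`SignupQueue`, `per_day`, etc.) are hypothetical, not any actual Lemmy mechanism:

```python
from collections import deque

class SignupQueue:
    """Hypothetical sketch: admit at most `per_day` applicants per day;
    everyone else waits in a FIFO queue for volunteer vetting."""

    def __init__(self, per_day: int):
        self.per_day = per_day
        self.waiting = deque()

    def apply(self, username: str) -> int:
        # New applicant joins the back of the queue; return their position.
        self.waiting.append(username)
        return len(self.waiting)

    def admit_today(self) -> list:
        # Pop up to `per_day` applicants and hand them to volunteer vetters.
        batch = []
        while self.waiting and len(batch) < self.per_day:
            batch.append(self.waiting.popleft())
        return batch
```

With `per_day=2`, three applicants would be admitted as a batch of two on the first day and one on the next, which keeps the vetting workload bounded.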

[–] [email protected] 108 points 18 hours ago* (last edited 18 hours ago) (8 children)

Too high a barrier to entry is doomed to fail.

[–] [email protected] 39 points 18 hours ago (4 children)

Programming.dev does this and is the tenth largest instance.

[–] [email protected] 95 points 17 hours ago* (last edited 17 hours ago) (1 children)

Techy people are a lot more likely to jump through a couple of hoops for something better, compared to your average Joe, who isn't even aware of the problem.

[–] [email protected] 11 points 17 hours ago (2 children)

Techy people are a lot more likely to jump through hoops because that knowledge/experience makes it easier for them, because they understand it's worthwhile, or because it's fun. If software can be made easier for non-techy people and there are no downsides, then of course that ought to be done.

[–] [email protected] 5 points 15 hours ago (1 children)

OK, now tell the Linux people this.

[–] [email protected] 5 points 14 hours ago* (last edited 14 hours ago) (5 children)

It's not always obvious or easy to make what non-techies will find easy. Changes could unintentionally make the experience worse for long-time users.

I know people don't want to hear it, but can we expect non-techies to meet techies halfway by leveling up their tech skill tree a bit?

[–] [email protected] 7 points 17 hours ago

Yeah that was kinda my point

[–] [email protected] 22 points 16 hours ago (1 children)

The 10th largest instance is something like 10k users... we're talking about needing a solution that can pull the literal billions of users away from mainstream social media.

[–] [email protected] 12 points 14 hours ago (1 children)

There isn't a solution. People don't want to pay for something that costs huge resources, so their attention becoming the product that's sold is inevitable. They also want to doomscroll slop; it's mindless and mildly entertaining, the same way tabloid newspapers were massively popular before the internet and gossip mags exist despite being utter horseshite. It's what people want.

Truly fighting it would require huge benevolent resources: a group willing to finance a manipulative and compelling experience and then not exploit it for ad dollars, pushing educational things instead or something. Facebook, Twitter etc. are enshittified, but they still cost huge amounts to run. And for all their faults, at least they're a single point where illegal material can be tackled. There isn't a proper equivalent for this in decentralised solutions once things scale up.

It's better that free, decentralised services stay small so they can stay under the radar of bots and bad actors. When things do get bigger, gated communities probably are the way to go. Perhaps until there's a social media not-for-profit that's trusted to manage identity, that people don't mind contributing costs to. But that's a huge undertaking. One day, hopefully...

[–] [email protected] 1 points 4 hours ago

They also want to doomscroll slop; it’s mindless and mildly entertaining. The same way tabloid newspapers were massively popular before the internet and gossip mags exist despite being utter horseshite. It’s what people want.

The same analogy is applicable to food.

People want to eat fast food because it's tasty, easily available, and cheap. Healthy food is harder to come by, needs time to prepare, and might not always be tasty. We teach the concepts of nutrition at school and people still want to eat fast food. We'd have to do the same thing with social/internet literacy at school, and I'm not sure whether even that would be enough.

[–] [email protected] 10 points 16 hours ago

We have a human-vetted application process too, and that's why there are rarely any bots or spam accounts originating from our instance. I imagine it's a similar situation for programming.dev. It's just not worth the tradeoff to have completely open signups imo. The last thing Lemmy needs is a massive influx of Meta users from Threads, Facebook or Instagram, or from shitter. Slow, organic growth is completely fine when you don't have shareholders and investors to answer to.

[–] [email protected] 7 points 13 hours ago (1 children)

It's how most large forums ran back in the day and it worked great. Quality over quantity.

[–] [email protected] 4 points 12 hours ago* (last edited 12 hours ago)

@a1studmuffin @ceenote The only reason these massive Web 2.0 platforms achieved such dominance is that they got huge before governments understood what was happening, and then claimed they were too big to follow basic publishing law or properly vet content/posters. So those laws were changed to give them their own special carve-outs. We're not mentally equipped for social networks this huge.

[–] [email protected] 2 points 9 hours ago (1 children)

I feel like it's only a matter of time before most people just have AIs write their posts.

The rest of us with brains, who don't post our status as if the entire world cares, will likely be here, or some place similar... screaming into the wind.

[–] [email protected] 1 points 2 hours ago

I feel like it's only a matter of time before most people just have AIs write their posts.

That's going right into /dev/null as soon as I detect it, both user and content.

[–] [email protected] 11 points 18 hours ago (1 children)

Instances that don't vet users sufficiently get defederated for spam. Users then leave for instances that don't get blocked. If instances are too heavy-handed in their moderation, then users leave those instances for more open ones, and the market of the fediverse will balance itself out to what the users want.

[–] [email protected] 12 points 16 hours ago* (last edited 15 hours ago) (1 children)

I wish this were the case, but the average user is uninformed and can't be bothered to leave.

Otherwise the bigger service would be Lemmy, not Reddit.

the market of the fediverse will balance itself out to what the users want.

Just like classical macroeconomics, this makes the deadly (and false) assumption that users are rational and will make the choice that's best for them.

[–] [email protected] 1 points 7 hours ago

The sad truth is that when Reddit blocked 3rd-party apps and the mods revolted, Reddit was able to drive away the nerdiest users and the disloyal moderators. And this made Reddit a more mainstream place that even my sister and her friends know about now.

[–] [email protected] 3 points 13 hours ago (2 children)

I mentioned this in another comment, but we need to somehow move away from free form text. So here’s a super flawed makes-you-think idea to start the conversation:

Suppose you had an alternative kind of Lemmy instance where every post has to include both the post like normal and a “Simple English” summary of your own post. (Like, using only the “ten hundred most common words” Simple English) If your summary doesn’t match your text, that’s bannable. (It’s a hypothetical, just go with me on this.)

Now you have simple text you can search against, use automated moderation tools on, and run scripts against. If there’s a debate, code can follow the conversation and intervene if someone is being dishonest. If lots of users are saying the same thing, their statements can be merged to avoid duplicate effort. If someone is breaking the rules, rule enforcement can be automated.

OK, so obviously this idea as written can never work. (Though I love the idea of brand-new users only being allowed to post in Simple English until they're allow-listed, to avoid spam, but that's a different thing.) But the essence and meaning of a post can be represented in some way. Analyze things automatically with an LLM, make people diagram their sentences like in English class, I don't know.
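The mechanical part of that hypothetical (checking a summary against a fixed vocabulary) is at least easy to sketch. A minimal illustration; the `ALLOWED` set here is a tiny stand-in for the real "ten hundred most common words" list:

```python
import re

# Tiny stand-in for the "ten hundred most common words" list.
ALLOWED = {"we", "want", "to", "stop", "fake", "users",
           "people", "can", "not", "run", "this", "place"}

def off_list_words(summary: str) -> list:
    """Return the words in `summary` that are NOT in the allowed
    vocabulary; an empty list means the summary passes the check."""
    words = re.findall(r"[a-z']+", summary.lower())
    return [w for w in words if w not in ALLOWED]

off_list_words("we want to stop fake users")            # passes: []
off_list_words("we must obliterate inauthentic actors")  # flags the unknown words
```

This only enforces the vocabulary, of course; checking that the summary actually matches the full post is the hard part and would need human review or an LLM.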

[–] [email protected] 1 points 7 hours ago

It sounds like you're describing Newspeak from 1984.

Simplifying language removes nuance. If you make moderation decisions based on the simple English vs. what the person is actually saying, then you're policing the simple English more than the nuanced take.

I've got a knee-jerk reaction against simplifying language past the point of clarity, and especially automated tools trying to understand it.

[–] [email protected] 5 points 12 hours ago (1 children)

A bot can do that, and do it at scale.

I think we are going to need to reconceptualize the Internet and why we are on here at all.

It's already practically impossible to stop bots, and in a very short time it'll be completely impossible.

[–] [email protected] 2 points 10 hours ago

I think I communicated part of this badly. My intent was to address “what is this speech?” classification, to make moderation scale better. I might have misunderstood you but I think you’re talking about a “who is speaking?” problem. That would be solved by something different.

[–] [email protected] 4 points 15 hours ago* (last edited 15 hours ago) (1 children)

We could ask for anonymous digital certificates. It works this way.

Many countries already issue digital certificates for their citizens, one certificate per ID. Anonymous certificates could then be derived from these. An anonymous certificate contains enough information to be verifiable as valid, but not enough to identify the user. Websites could ask for an anonymous certificate at registration/login. With the certificate they would validate that it's a human being while keeping that human being anonymous. The only leaked data would probably be the country of origin, since these certificates tend to be authenticated by a national CA.

The only problem I see in this is international adoption outside fully developed countries: many countries not being able to provide this for their citizens, lower security standards allowing fraudulent certificates to be made, or a big enough poor population that would gladly sell their certificates to bot farms.
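One standard way to get "verifiable but not identifying" is a blind signature: the authority signs a blinded token, so it certifies a citizen without ever seeing the token that websites later verify. Below is a toy sketch using textbook RSA with tiny demo numbers; it is deliberately insecure and illustrative only (real deployments, e.g. RFC 9474 RSA blind signatures, use proper padding and key sizes):

```python
# Authority's textbook RSA key (p=61, q=53 -- demo values only).
n, e, d = 3233, 17, 2753

token = 65   # user's random anonymous token (e.g. a hash of a nonce)
r = 7        # user's secret blinding factor, coprime with n

# 1. User blinds the token before sending it to the authority.
blinded = (token * pow(r, e, n)) % n

# 2. Authority signs the blinded value -- it cannot link this to `token`.
signed_blinded = pow(blinded, d, n)

# 3. User unblinds the signature by multiplying with r^-1 mod n.
sig = (signed_blinded * pow(r, -1, n)) % n

# 4. Any website verifies against the authority's PUBLIC key (n, e),
#    learning only "some citizen holds a valid token", not who.
assert pow(sig, e, n) == token
```

The unblinding step works because (token * r^e)^d = token^d * r (mod n), so dividing out r leaves a plain RSA signature on the token. The "sell your certificate to a bot farm" problem from the comment above survives this scheme untouched, though.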

[–] [email protected] 2 points 12 hours ago (1 children)

Your last sentence highlights the problem: I can have a bot that posts for me. Also, if an authority is in charge of issuing the certificates, then it has an incentive to create some fake ones.

Bots become vastly more useful as the ratio of bots to humans drops.

[–] [email protected] 2 points 7 hours ago

There's also the problem of relying on a nation state to issue these certificates in the first place. A repressive regime could simply refuse to give its citizens certificates, which would effectively block them from any platform that required one.
