this post was submitted on 12 Mar 2024
179 points (100.0% liked)

Technology

[–] [email protected] 66 points 8 months ago (4 children)

I think the advancement of LLMs, which culminated in the creation of ChatGPT, is this generation's Eternal September. In a couple of decades, we'll talk about how the internet "used to be" before free, public websites were abandoned because our CAPTCHAs could no longer filter out bots, and device attestation and continuous micropayments became the only way to keep platforms spam-free.

Even when Microsoft and OpenAI stop hemorrhaging money by giving away things like ChatGPT basically for free, the spam farms will soon be running this stuff on their own. I expect a wave of internet users to get upset and call paying for the services they use "enshittification", because people don't realise how much running these AI models actually costs.

I think this will also start the transition to AI not only being sold like Netflix or mobile data caps, but also to an "every company that doesn't get the most expensive AI will start lagging behind" economy. After all, AI only needs to cost a little less than the manpower it's replacing. Any internet-facing company needs good AI to outwit the AI trying to abuse cheap or free services (like trials) that they may offer.

We're probably lucky that AI spammers haven't discovered the Fediverse yet, but if the Fediverse does actually become big enough for mainstream use, we'll see Twitter-level reaction spam in no time, and no amount of CAPTCHAs will be able to stop it.

[–] [email protected] 36 points 8 months ago (2 children)

Part of what makes Twitter, Reddit, etc. such easy targets for bot spammers is that they're single-point-of-entry. You join, you have access to everyone, and then you exhaust an account before spinning up 10 more.

The Fediverse has some advantages and disadvantages here. One significant advantage is that -- particularly if, when the dust finally settles, it's a big network of a large number of small sites -- it's relatively easy to cut off nodes that aren't keeping the bots out. One disadvantage, though, is that it can create a ton of parallel work if spam botters target a large number of sites to sign up on.

A big advantage, though, is that most Fediverse sites are manually moderated and administered. By and large, sites aren't looking to offload this responsibility to automated systems, so what needs to get beaten is not some algorithmic puzzle, but human intuition. Though, the downside to this is that mods and admins can become burned out dealing with an unending stream of scammers.

[–] [email protected] 16 points 8 months ago (1 children)

If it really ramps up, we could share block lists too, like with ad blockers. So if a friend (or nth-degree friend) blocks someone, then you would block them automatically.
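The nth-degree idea above could be sketched roughly like this. This is a hypothetical illustration, not any existing Fediverse feature; all names and data structures here are assumptions.

```python
from collections import deque

def propagated_blocks(me, follows, blocks, max_degree=2):
    """Collect blocks from everyone within max_degree hops of `me`.

    follows: dict user -> set of users they follow
    blocks:  dict user -> set of users they block
    """
    seen = {me}
    frontier = deque([(me, 0)])
    blocked = set(blocks.get(me, set()))
    while frontier:
        user, depth = frontier.popleft()
        if depth == max_degree:
            continue  # don't expand past the trust horizon
        for friend in follows.get(user, set()):
            if friend not in seen:
                seen.add(friend)
                # inherit this friend's blocks, like a shared ad-block list
                blocked |= blocks.get(friend, set())
                frontier.append((friend, depth + 1))
    return blocked
```

A real deployment would also need a way to *un*-inherit a block when a friend blocks someone you actually want to see, which is part of why this tends to stay opt-in.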

[–] [email protected] 13 points 8 months ago

That work has already started with Fediseer. It's not automatic, but it's really easy, which is probably the best we'll get for a while.

[–] [email protected] 22 points 8 months ago (1 children)

I expect a wave of internet users to get upset and call paying for used services “enshittification”, because people don’t realise how much running these AI models actually costs.

I am so tired of this bullshit. Every time I've turned around, for the past thirty years now, I've seen some variation on this same basic song and dance.

Yet somehow, in spite of supposedly being burdened with so much expense and not given their due by a selfish, ignorant public, these companies still manage to build plush offices on some of the most expensive real estate on the planet and pay eight- or even nine-figure salaries to a raft of executive parasites.

When they start selling assets and cutting executive salaries, or better yet laying them off, then I'll entertain the possibility that they need more revenue. Until then, fuck 'em.

[–] [email protected] 14 points 8 months ago (1 children)

We’re probably lucky that AI spammers haven’t discovered the Fediverse yet, but if the Fediverse does actually become big enough for mainstream use, we’ll see Twitter level reaction spam in no time, and no amount of CAPTCHAs will be able to stop it.

I was thinking about this the other day. We might have to move to a whitelist federation model with invite-only instances at some point.

[–] [email protected] 5 points 8 months ago (3 children)

The downside of that approach is that AI can pretend to be a human wanting to join quite well. It's possible to set up a lobste.rs-like system where there's a tree of people you've invited, so admins can cull entire spam groups at once, but that also has its downsides (e.g. it's impossible to join if none of your friends have joined yet, or if you don't want to tie your online accounts to your friends).
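The culling part of an invite tree can be sketched like so: each account records who invited it, so banning a bad inviter can take out everyone downstream of them in one pass. This is an illustrative assumption about how such a system might work, not lobste.rs's actual implementation.

```python
def cull_subtree(root, invited_by):
    """Return the set of accounts to ban: `root` plus everyone
    whose invite chain leads back to `root`.

    invited_by: dict account -> inviter (None for founding accounts)
    """
    # Invert the inviter mapping into child lists.
    children = {}
    for account, inviter in invited_by.items():
        children.setdefault(inviter, []).append(account)
    banned = set()
    stack = [root]
    while stack:
        account = stack.pop()
        banned.add(account)
        stack.extend(children.get(account, []))  # descend into invitees
    return banned
```

The weakness the replies below point at is exactly this structure: if the root of a big subtree is a compromised *legitimate* account, culling it punishes real users too.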

[–] [email protected] 2 points 8 months ago

It's a trade-off that we'll probably have to take unless we want to deanonymize the internet.

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago) (1 children)

where there’s a tree of people you’ve invited.

And that is how you get a single-point-of-view echo chamber.

[–] [email protected] 6 points 8 months ago

Most of the internet is made up of echo chambers now even though anyone and everyone can access a majority of it. I don't think being selective in who we allow into communities worsens the pre-existing echo chamber issue. If anything it may help to be more selective. It can sometimes be impossible to tell the difference between trolls, bots, and real people, so I feel like we assume every person we disagree with is a troll or bot. The issue with that is that we may be outright dismissing real opinions. In theory, everyone in a selective community is a real person who is expressing their true thoughts and feelings.

[–] [email protected] 2 points 8 months ago

I don't think that's a perfect system anyway, though: spammers could create a massive tree of fake accounts and use only a small proportion of them for spam.

Use a number of compromised user accounts to set this up and it becomes a nightmare.

[–] [email protected] 8 points 8 months ago

Instead of being this gen's September 1993, I feel like the changes being sped up by the introduction of generative models are finally forcing us into October 1993. As in: they're reverting some aspects of the internet to how they used to be.

also to an “every company that doesn’t get the most expensive AI will start lagging behind” economy.

That spells tragedy of the commons for those companies. Them ruining themselves will probably have a mixed impact on us [internet users in general].