this post was submitted on 23 Sep 2023
210 points (91.0% liked)

Technology


cross-posted from: https://lemmy.ml/post/5400607

This is a classic case of tragedy of the commons, where a common resource is harmed by the profit interests of individuals. The traditional example of this is a public field that cattle can graze upon. Without any limits, individual cattle owners have an incentive to overgraze the land, destroying its value to everybody.
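
To make the incentive concrete, here is a minimal toy calculation (numbers invented purely for illustration, not taken from the article) showing why each individual herder still comes out ahead by adding one more cow even though the group as a whole loses:

```python
# A toy sketch of the incentive mismatch described above; all numbers are made up.

HERDERS = 10
GAIN_PER_EXTRA_COW = 10.0       # value captured entirely by the herder who adds a cow
PASTURE_DAMAGE_PER_COW = 30.0   # total value the shared field loses per extra cow

# The herder keeps the whole gain but bears only a tenth of the damage...
individual_net = GAIN_PER_EXTRA_COW - PASTURE_DAMAGE_PER_COW / HERDERS   # 10 - 3 = +7
# ...while the community as a whole absorbs all of it.
collective_net = GAIN_PER_EXTRA_COW - PASTURE_DAMAGE_PER_COW             # 10 - 30 = -20

print(f"Net to the herder who adds a cow: {individual_net:+.0f}")  # +7  -> adding looks rational
print(f"Net to everyone combined:         {collective_net:+.0f}")  # -20 -> the commons degrades
```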

We have commons on the internet, too. Despite all of its toxic corners, it is still full of vibrant portions that serve the public good — places like Wikipedia and Reddit forums, where volunteers often share knowledge in good faith and work hard to keep bad actors at bay.

But these commons are now being overgrazed by rapacious tech companies that seek to feed all of the human wisdom, expertise, humor, anecdotes and advice they find in these places into their for-profit A.I. systems.

top 50 comments
[–] [email protected] 109 points 1 year ago (2 children)

Ironically, I read about three lines of this article before I got a full-screen popup, then a paywall, and closed the tab. And apparently it's going to get worse.

[–] [email protected] 9 points 1 year ago (1 children)

I typically don't read anything from The New York Times unless I find a free paper somewhere.

[–] [email protected] 2 points 1 year ago

The NoScript extension on Firefox still works.

Though if you want to support quality reporting, paying for a New York Times account is not a bad idea.

[–] [email protected] 6 points 1 year ago

Insert astronaut "always has been" meme here.

[–] [email protected] 70 points 1 year ago (2 children)

I don't think the issue is corporations feeding the internet into AI systems. The real issue is the gatekeeping of information: granting access to it only while milking the individual for data via trackers, money via subscriptions, and more money via ads (which we pay for with our subscriptions).

Another, larger issue that I fear is often ignored is the amount of control that large corporations, and in theory the government, can have over us just by looking at the trace we leave on the internet. Just look at Russia and China for real-world examples of this.

[–] [email protected] 24 points 1 year ago (8 children)

As an open source contributor, I believe information (facts and techniques) should be free.

As an open source contributor, I also know that two-way collaboration only happens when users understand where the software came from and how they can communicate back to the original author(s).

The layer of obfuscation that LLMs add, where the code really comes from the XYZ open-source project but appears to manifest out of thin air... worries me, because it's going to alienate would-be collaborators from the original authors.

"AI" companies are not freeing information. They are colonizing it.

[–] [email protected] 11 points 1 year ago (1 children)

Yep, the truly free and open internet is coming to an end. Corporations and governments have spent decades trying to claim control over it, and they're nearly there.

[–] [email protected] 10 points 1 year ago (2 children)

Which, ironically, will be greatly expedited by the drive to prohibit AI from learning from "unlicensed" materials. That will guarantee that the only AIs with a broad training set will be those owned by corporations that already control an enormous amount of training materials (Disney, Getty Images, etc.)

[–] [email protected] 3 points 1 year ago

Yeah, right now the fight is between corporations and creators, but I feel like the future battle is going to be between corporate AIs and "pirated" ones, because Disney is going to keep a firm chokehold over what its generative AI can make, while the community ones will completely ignore copyright restrictions and just let people do whatever they want.

Not gonna need to worry about paywalls when you can get a pirated generative AI to create the superhero mashup you always wanted to watch as a child. That said, I could definitely see Disney and others piggybacking off of AI panic to extend copyright protection into spaces that were previously fair use.

[–] [email protected] 63 points 1 year ago* (last edited 1 year ago) (2 children)

The internet is fine.

Listen. The era of algorithms, automated aggregators, and whatnot feeding you endless interesting content is over. Before that we read blogs, we shared them on Usenet and IRC, we had webrings. We engaged in communities, and the content we were exposed to was human-curated. That is coming back. If we can quit it with the Hacker News bot spam on Lemmy, it can be one of those places. You need to find niche, invite-only forums that interest you and start talking to people. The future of the internet is human.

[–] [email protected] 18 points 1 year ago (2 children)

Algorithmic curation isn't necessarily bad. It's just not great when it's designed to maximize engagement rather than let the most-liked, most interesting, or best-written content rise to the top. When engagement is the most important metric, we instead get lies, clickbait, and emotive content rising to the top.
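
To make that concrete, here's a minimal, purely hypothetical sketch in Python (made-up posts, signals, and weights; not any real platform's ranking code) of how the same two posts can sort in opposite orders depending on what the scoring function rewards:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int        # includes angry pile-ons
    reports: int         # "this is misleading" flags
    read_seconds: float  # average time spent on the post

posts = [
    Post("Well-researched explainer", likes=120, comments=30, reports=1, read_seconds=240),
    Post("Outrage clickbait", likes=80, comments=400, reports=60, read_seconds=15),
]

def engagement_score(p: Post) -> float:
    # Rewards any interaction at all, including rage-comments and hate-reads.
    return p.likes + p.comments + p.read_seconds / 10

def quality_score(p: Post) -> float:
    # Same signals, but penalizes reports and discounts drive-by comments.
    return p.likes + 0.2 * p.comments - 5 * p.reports + p.read_seconds / 10

print(sorted(posts, key=engagement_score, reverse=True)[0].title)  # Outrage clickbait
print(sorted(posts, key=quality_score, reverse=True)[0].title)     # Well-researched explainer
```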

[–] [email protected] 8 points 1 year ago

Enragement is hard to distinguish from engagement and most creators of algorithms don’t seem to particularly care about the difference. Some creators DO know the difference and still choose the dark side. It’s shitheads all the way down.

[–] [email protected] 4 points 1 year ago (1 children)

I'd say the bigger problem is that with any system, someone will try to game it and eventually succeed. There's no metric of objective quality that we can measure. Most liked? Likes can be inflated with bots, and once the number of likes becomes the goal, people optimize the number instead of the content. Most interesting? That's completely subjective and varied; the only real way to use it would be to track individuals and serve them "things that interest them." Best written? I don't know enough about writing to appreciate what's good and what isn't, and most people don't either, as long as it's good enough and appeals to them.

[–] [email protected] 3 points 1 year ago

See also SEO. Or marketing in general I guess.

In theory, you have a better widget so you want to get it to the top of the relevant search results. In practice.... 10,000 people trying to make money off a lemon pie recipe create a hellscape of mostly indistinguishable garbage that technically fits the description.

[–] [email protected] 61 points 1 year ago (3 children)

Start making deepfakes of CEOs saying stuff they never said. Bet your ass they’ll make laws real quick about AI protections for individuals.

[–] [email protected] 26 points 1 year ago* (last edited 1 year ago)

Sir, we have the top of the line ChatGPT7 online. What should we ask it?

Ask it what our board should direct the company to do.

Sir, its answer is to immediately raise salaries, as there is no logical or sustainable reason for excess wealth at the current levels of concentration, with everyone but a few suffering and living out their working years in stress, anxiety, and misery for no gain.

What are our other AI options?

[–] [email protected] 10 points 1 year ago* (last edited 1 year ago) (1 children)

Basically every law in favor of the average person only exists because it benefits the owning class in some way.

It's the main reason why theft and murder are seen as the highest of crimes yet r--- is rarely if ever prosecuted.

[–] [email protected] 30 points 1 year ago (3 children)

When there are just paywalls and AI-generated text garbage everywhere, it's nice to have a place where you can read what actual people think about things, good or bad.

That's the value of forums nowadays, I think.

[–] [email protected] 28 points 1 year ago (2 children)

Actual user generated content is absolutely where it's at.

I trust an 8-year-old forum post or a YouTube product review by someone with 1,000 subscribers much more than any of the Amazon-affiliate-link-riddled listicles that dominate search results.

[–] [email protected] 15 points 1 year ago (1 children)

Exactly, which is why I keep repeating here that the Google/Facebook advertising model of the "personalized content algorithm" was and is a lie they've been selling for decades. There really is nothing more effective at promoting something than genuine word of mouth, and that is not something an unfeeling machine can automate.

So, in that sense, actual human content is a dwindling resource on the internet right now, and that's where Lemmy comes in. If we want Lemmy to grow, we should actively contribute our own expertise here (everybody is good at something) instead of arguing pointlessly, so people come to think of Lemmy as a place where people help people.

[–] [email protected] 2 points 1 year ago

I'm loving how Kagi banishes listicles to a single, small, condensed section of the search results.

[–] [email protected] 14 points 1 year ago (4 children)

Just a tangent, how long until game companies use AI voice synth to make us think we're playing with real people?

[–] [email protected] 13 points 1 year ago (2 children)

When they actually invent AI. What we have now is just a statistical model. There is no AI; it's just a buzzword.

[–] [email protected] 10 points 1 year ago (3 children)

Which is enough to imitate your usual ingame voice communication.

[–] [email protected] 4 points 1 year ago

Semantics aside, they already have voice synthesis

[–] [email protected] 6 points 1 year ago (1 children)

Have you seen another player in slither.io?? No, no you haven't. It is a single player game.

[–] [email protected] 3 points 1 year ago

Skyrim's already got a mod that does effectively this.

[–] [email protected] 3 points 1 year ago

Where's the money in that?

I guess they could make you think you're better at competitive games than you thought, but then that still doesn't guide you to buy anything extra

[–] [email protected] 10 points 1 year ago (6 children)

But these commons are now being overgrazed by rapacious tech companies that seek to feed all of the human wisdom, expertise, humor, anecdotes and advice they find in these places into their for-profit A.I. systems.

This analogy falls apart when you note that "overgrazing" these resources does absolutely nothing to harm them.

They're still there. They haven't been affected in any way by the fact that a machine somewhere has read them and learned a bunch of stuff from them. So what?

[–] [email protected] 7 points 1 year ago (1 children)

This analogy falls apart when you note that "overgrazing" these resources does absolutely nothing to harm them.

Only if you consider AI-supercharged misinformation to not be harmful.

Only if you consider the entropy of human interaction on the internet to not be harmful.

Only if you consider being unable to know who is real to not be harmful.

[–] [email protected] 8 points 1 year ago (7 children)

None of those things directly harm the resources being "grazed", and none of them are inevitable consequences of AI. If you think they are then you're actually arguing against AI in general and not the specific way in which they've been trained.

[–] [email protected] 5 points 1 year ago (2 children)

While the analogy is not perfect, you can think of the harm as getting lost in the noise. If the "overgrazing" of content on the internet (content meant to be read, listened to, and so on, often produced as someone's job) spawns a huge amount of AI-generated content derived from it, then the original is harmed by being lost in the noise.

[–] [email protected] 5 points 1 year ago

This is the best summary I could come up with:


Thanks to artificial intelligence, however, IBM was able to sell Mr. Marston’s decades-old sample to websites that are using it to build a synthetic voice that could say anything.

A.I.-generated books — including a mushroom foraging guide that could lead to mistakes in identifying highly poisonous fungi — are so prevalent on Amazon that the company is asking authors who self-publish on its Kindle platform to also declare if they are using A.I.

But these commons are now being overgrazed by rapacious tech companies that seek to feed all of the human wisdom, expertise, humor, anecdotes and advice they find in these places into their for-profit A.I.

Consider, for instance, that the volunteers who build and maintain Wikipedia trusted that their work would be used according to the terms of their site, which requires attribution.

A Washington Post investigation revealed that OpenAI’s ChatGPT relies on data scraped without consent from hundreds of thousands of websites.

Whether we are professional actors or we just post pictures on social media, everyone should have the right to meaningful consent on whether we want our online lives fed into the giant A.I.


The original article contains 1,094 words, the summary contains 188 words. Saved 83%. I'm a bot and I'm open source!
