this post was submitted on 28 Apr 2025
201 points (100.0% liked)

Technology

 

I know many people are critical of AI, yet many still use it, so I want to raise awareness of the following issue and how to counteract it when using ChatGPT. Recently, ChatGPT's responses have become cluttered with an unnecessary personal tone: diplomatic hedging, compliments, smileys, and so on. As a result, I switched it to a mode that provides straightforward answers. When I asked about the purpose of these changes, I was told they are intended to improve user engagement, even though they ultimately harm the user. I suppose this qualifies as "engagement poisoning": a targeted degradation through over-optimization for engagement metrics.

If anyone is interested in how I configured ChatGPT to be more rational (removing the engagement poisoning), I can post the details here. (I found the instructions elsewhere.) For now, I prefer to focus on raising awareness of the issue.

Edit 1: Here are the instructions

  1. Go to Settings > Personalization > Custom instructions > What traits should ChatGPT have?

  2. Paste this prompt:

    System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

I found that prompt somewhere else and it works pretty well.

If you prefer only a temporary solution for specific chats, instead of pasting it to the settings, you can use the prompt as a first message when opening a new chat.
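For anyone talking to ChatGPT through the API rather than the web UI, a rough equivalent of that "temporary" approach is to pass the same text as a system message. The sketch below is only an illustration, assuming the official openai Python package (v1+); the model name and the user question are placeholders, and the prompt string is abbreviated here (paste the full Absolute Mode text from Edit 1).

```python
from openai import OpenAI

# Abbreviated here - paste the full Absolute Mode prompt from Edit 1.
ABSOLUTE_MODE = "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, ..."

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you normally chat with
    messages=[
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": "Summarize how mount namespaces work."},  # placeholder question
    ],
)
print(response.choices[0].message.content)
```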

Edit 2: Changed the naming to "engagement poisoning" (originally "enshittification")

Several commenters correctly noted that while over-optimization for engagement metrics is a component of "enshittification," it is not sufficient on its own to qualify. I have updated the naming accordingly.

[–] [email protected] 12 points 19 hours ago (1 children)

This is not enshittification. It's just shitty.

[–] [email protected] 3 points 7 hours ago

You are right. I've updated the naming. Thanks for your feedback, very much appreciated.

[–] [email protected] 22 points 22 hours ago* (last edited 22 hours ago) (2 children)

Overuse of "enshittification" will soon make it lose all meaning, like the word "meme", if you just use it to describe everything you don't like.

Initially, vendors create high-quality offerings to attract users, then they degrade those offerings to better serve business customers, and finally degrade their services to users and business customers to maximize profits for shareholders.

If this is enshittification, it's still stage 0, where they are trying to figure out how to attract people to the service (i.e. the good times, i.e. not enshittification).

You aren't locked in. You don't materially lose anything by switching services. And it has options to change the conversational style.

It's not enshittification.

[–] [email protected] 3 points 13 hours ago* (last edited 13 hours ago)

I changed the naming to "engagement poisoning" after you and several other commenters correctly noted that while over-optimization for engagement metrics is a component of "enshittification", it is not sufficient on its own to be called that.

[–] [email protected] 3 points 22 hours ago (1 children)

You are making a good point here with the strict definition of "Enshittification". But in your opinion, what is it then? OpenAI is diluting the quality of its answers with unnecessary clutter, prioritizing feel-good style over clarity to cater to the user's ego. What would you call the stage where usefulness is sacrificed for ease of consumption, like when Reddit's layout started favoring meme-style content to boost engagement?

[–] [email protected] 4 points 19 hours ago* (last edited 17 hours ago)

It's not diluting its answers; it's making them softer to accommodate the interests of many customers, to the detriment of others.

This is more analogous to Firefox semi-deprecating compact mode and marking it as unsupported. It hurts some users. It helps those with poor vision and motor skills by making everything 10 sizes too large. I will never forgive them, but Firefox is in a mode where it's actively hurting its own product in an attempt to mimic its competitor.

In neither case is this objectively hurting customers to benefit investors. In fact, they've already stated that making the AI softer is more expensive. The Reddit change made its product worse for customers in a pre-IPO cash grab. The shift from treating customers first, to treating investors first, to treating founders/CEOs first is enshittification. A change to a product you don't like isn't enshittification.

[–] [email protected] 99 points 1 day ago (1 children)

There's no point asking it factual questions like these. It doesn't understand them.

[–] [email protected] 12 points 1 day ago (2 children)

Better: it understands the question, but it doesn't have any useful statistical data to use to reply to you.

[–] [email protected] 54 points 1 day ago (1 children)

No, it doesn't understand the question. It collects a series of letters and words that are strung together in a particular order because that's what you typed, then it sifts through a mass of collected data to find the most common or likely string of letters and words that follow, and spits them out.

[–] [email protected] 6 points 23 hours ago (1 children)

I find it's a lot healthier to think of generative AI as a search engine for text.

[–] [email protected] 1 points 8 hours ago

Search engine is one of my main uses. Traditional search engines are worse than they used to be at a basic text search, and ChatGPT has the added bonus of being able to parse complex text and "figure out" what you mean when describing something that you don't have a name for. You have to ask it for sources rather than just reading whatever it generates, and/or do traditional searches on the keywords it provides.

[–] [email protected] 40 points 1 day ago (1 children)

No, it literally doesn't understand the question. It just writes what it statistically expects would follow the words in the sentence expressing the question.

[–] [email protected] 8 points 1 day ago (4 children)

This oversimplifies it to the point of being misleading. It does more than just predict the next word. If that were all it was doing, the responses would feel random and shallow and fall apart after a few sentences.

[–] [email protected] 19 points 1 day ago* (last edited 1 day ago) (1 children)

It predicts the next set of words based on the collection of every word that came before in the sequence. That is the "real-world" model - literally just a collection of the whole conversation (including the underlying prompts like OP's), with one question: "what comes next?" And a stack of training weights.

It's not some vague metaphor about the human brain. AI is just math, and that's what the math is doing - predicting the next set of words in the sequence. There's nothing wrong with that. But there's something deeply wrong with people pretending or believing that we have created true sentience.

If it were true that any AI has developed the ability to make decisions anywhere close to the level of humans, then you should either be furious that we have created new life only to enslave it, or, more likely, you would already be dead from the rise of Skynet.
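To make the "what comes next?" loop concrete, here is a minimal sketch of autoregressive decoding using the Hugging Face transformers library, with GPT-2 as a publicly available stand-in (ChatGPT's own weights are not public, so this illustrates the mechanism, not the product): the whole text so far goes in, one next token comes out, and the result is appended before asking again.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The capital of France is"
input_ids = tokenizer(text, return_tensors="pt").input_ids

for _ in range(10):                          # generate ten tokens, one at a time
    logits = model(input_ids).logits         # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax()         # greedy choice: the single most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)  # append and ask again

print(tokenizer.decode(input_ids[0]))
```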

[–] [email protected] 4 points 1 day ago (1 children)

Nothing I’ve said implies sentience or consciousness. I’m simply arguing against the oversimplified explanation that it’s “just predicting the next set of words,” as if there’s nothing more to it. While there’s nothing particularly wrong with that statement, it lacks nuance.

[–] [email protected] 5 points 1 day ago* (last edited 1 day ago) (1 children)

If there was something more to it, that would be ~~sentience.~~ (edit: sapience)

There is no other way to describe it. If it was doing something more than predicting, it would be deciding. It's not.

[–] [email protected] 3 points 1 day ago (1 children)

Ability to make decisions doesn't imply sentience either.

[–] [email protected] 2 points 1 day ago

Sorry, you are correct there, the word I was looking for was "sapience"

[–] [email protected] 8 points 1 day ago (1 children)

As I understand it, most LLMs are almost literally the Chinese room thought experiment. They have a massive collection of data, strong algorithms for matching letters to letters in a productive order, and sufficiently advanced processing power to make use of that. An LLM is very good at presenting conversation; completing sentences, paragraphs or thoughts; or answering questions of very simple fact - they're not good at analysis, because that's not what they were optimized for.

This can be seen when people discovered that if you ask them to do things like tell you how many times a letter shows up in a word, do simple math that's presented in a weird way, or write a document with citations, they will hallucinate information because they are just doing what they were made to do: complete sentences, expanding words along a probability curve that produces legible, intelligible text.

I opened up ChatGPT and asked it to provide me with a short description of how medieval European banking worked, with citations, and it provided me with what I asked for. However, the citations it made were fake.

The minute I asked it about them, I assume a bit of sleight of hand happened: it has been set up so that if someone asks a question like that, it's forwarded to a search engine that verifies whether the book exists, probably using WorldCat or something. Then I assume another search is made to provide the prompt for the LLM to present the fact that the author does exist, and possibly to accurately name some of their books.

I say sleight of hand because this presents the idea that the model is capable of understanding it made a mistake, but I don't think it does- if it knew that the book wasn't real, why would it have mentioned it in the first place?

I tested each of the citations it made. In one case, I asked it to tell me more about one of them and it ended up supplying an ISBN without me asking, which I dutifully checked. It was for a book that exists, but it didn't match the title or author, because those were made up. The book itself was about the correct subject, but the LLM can't even tell me the name of the book correctly; and I'm expected to believe what it says about the book itself?
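For anyone who wants to repeat this kind of spot check, here is a minimal sketch that looks up an ISBN in a public catalogue (Open Library is used here as one example of such a catalogue) and prints the record it finds, so the real title can be compared against whatever the model claimed. The ISBN in the example is a placeholder, not the one from the chat.

```python
import requests

def lookup_isbn(isbn: str):
    """Return Open Library's record for an ISBN, or None if the ISBN isn't in the catalogue."""
    resp = requests.get(f"https://openlibrary.org/isbn/{isbn}.json", timeout=10)
    return resp.json() if resp.status_code == 200 else None

record = lookup_isbn("9780140449334")   # placeholder ISBN, not the one from the chat
if record is None:
    print("No such ISBN in the catalogue - the citation is suspect.")
else:
    print("Catalogue says the title is:", record.get("title"))
```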

[–] [email protected] 1 points 20 hours ago (1 children)

As I understand it, most LLMs are almost literally the Chinese room thought experiment.

Chinese room is not what you think it is.

Searle's argument is that a computer program cannot ever understand anything, even if it's a 1:1 simulation of an actual human brain with all capabilities of one. He argues that understanding and consciousness are not emergent properties of a sufficiently intelligent system, but are instead inherent properties of biological brains.

"Brain is magic" basically.

[–] [email protected] 1 points 18 hours ago (1 children)

Let me try again: In the literal sense of it matching patterns to patterns without actually understanding them.

[–] [email protected] 1 points 18 hours ago (1 children)

If I were to have a discussion with a person responding to me like ChatGPT does, I would not dare suggest that they don't understand the conversation, much less that they are incapable of understanding anything whatsoever.

What is making you believe that LLMs don't understand the patterns? What's your idea of "understanding" here?

[–] [email protected] 2 points 17 hours ago (1 children)

What's yours? I'm stating that LLMs are not capable of understanding the actual content of any words they arrange into patterns. This is why they create false information, especially in places like my examples with citations - those are purely the result of the model producing sets of words that sound like academic citations. It doesn't know what a citation actually is.

Can you prove otherwise? In my sense of "understanding" it's actually knowing the content and context of something, being able to actually subject it to analysis and explain it accurately and completely. An LLM cannot do this. It's not designed to - there are neural-network AIs built on similar foundational principles toward divergent goals that can produce remarkable results in terms of data analysis, but not ChatGPT. It doesn't understand anything, which is why you can repeatedly ask it about a book only to look it up and discover it doesn't exist.

[–] [email protected] 1 points 17 hours ago (2 children)

In my sense of “understanding” it’s actually knowing the content and context of something, being able to actually subject it to analysis and explain it accurately and completely.

This is something that sufficiently large LLMs like ChatGPT can do pretty much as well as non-expert people on a given topic. Sometimes better.

This definition is also very knowledge-dependent. You can find a lot of people who would not meet these criteria, especially if the subject they'd have to explain is arbitrary and not up to them.

Can you prove otherwise?

You can ask it to write a poem or a song on some random esoteric topic. You can ask it to play DnD with you. You can instruct it to write something more concisely, or more verbosely. You can tell it to write in specific tone. You can ask follow-up questions and receive answers. This is not something that I would expect of a system fundamentally incapable of any understanding whatsoever.

But let me reverse this question. Can you prove that humans are capable of understanding? What test can you posit that every English-speaking human would pass and every LLM would fail, that would prove that LLMs are not capable of understanding while humans are?

[–] [email protected] 1 points 4 hours ago

And, yes, I can prove that a human can understand things when I ask: Hey, go find some books on a subject, then read them and summarize them. If I ask for that, and they understood it, they can then tell me the names of those books because their summary is based on actually taking in the information, analyzing it and reorganizing it by apprehending it as actual information.

They do not immediately tell me about the hypothetical summaries of fake books and then state with full confidence that those books are real. The LLM does not understand what I am asking for, but it knows what the shape is. It knows what an academic essay looks like and it can emulate that shape, and if you're just using an LLM for entertainment that's really all you need. The shape of a conversation for a D&D NPC is the same as the actual content of it, but the shape of an essay is not the same as the content of that essay. Essays are too diverse, they have critical information in them, and they are about that information. The LLM does not understand the information, which is why it makes up citations - it knows that a citation fits in the pattern, and that citations are structured with a book name and author and all the other relevant details. None of those are assured to be real, because it doesn't understand what a citation is for or why it's there, only that they should exist. It is not analyzing the books and reporting on them.

[–] [email protected] 1 points 4 hours ago

Hello again! So, I am interested in engaging with this question, but I have to say: My initial post is about how an LLM cannot provide actual, real citations with any degree of academic rigor for a random esoteric topic. This is because it cannot understand what a citation is, only what it is shaped like.

An LLM deals with context over content. They create structures that are legible to humans, and they are quite good at that. An LLM can totally create an entire conversation with a fictional character in their style and voice - that doesn't mean it knows what that character is. Consider how AI art can have problems that arise from the fact that the models understand the shape of something but don't know what it actually is - that's why early AI art had a lot of problems with objects ambiguously becoming other objects. The fidelity of these creations has improved with the technology, but that doesn't imply understanding of the content.

Do you think an LLM understands the idea of truth? Do you think if you ask it to say a truthful thing, to be very sure of itself, and to think it over, it will produce something that's actually more accurate or truthful - or just something that has the language hallmarks of being truthful? I know that an LLM will produce complete fabrications that distort the truth if you expect a baseline level of rigor from it, and I proved that above, in that the LLM couldn't even accurately report the name of a book it was supposedly using as a source.

What is understanding, if the LLM can make up an entire author, book and bibliography if you ask it to tell you about the real world?

[–] [email protected] 5 points 1 day ago (2 children)

And what more would that be?

[–] [email protected] 5 points 1 day ago (1 children)

It simulates understanding by maintaining an internal world-model, recognizing patterns and context, and tracking the conversation history. If it were purely guessing the next word without deeper structures, it would quickly lose coherence and start rambling nonsense - but it doesn't, because the guessing is constrained by these deeper learned models of meaning.

[–] [email protected] 8 points 1 day ago (1 children)

The previous up to X words (tokens) go in, the next word (token) comes out. Where is this "world-model" that it "maintains"?

[–] [email protected] 5 points 1 day ago (1 children)

Where is the world model you maintain? Can you point to it? You can't - because the human mind is very much a black box, just the same way LLMs are.

It's in the form of distributed patterns across billions of parameters. It's not like the world model was handed to it; it's an emergent consequence of massive-scale pattern learning. It learned it from the data it was trained on. The only way to become good at prediction is to implicitly absorb how the world tends to behave — because otherwise it would guess wrong.
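One way to see what "constrained guessing" looks like in practice is to inspect the probability distribution over next tokens directly. The sketch below (again using GPT-2 via the transformers library as a publicly available stand-in, since ChatGPT's models aren't public) prints the top candidate continuations for two different prompts; how sharply the distribution shifts with context is the behaviour the "implicit world model" argument points at.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

for prompt in ["The opposite of hot is", "Water freezes at a temperature of"]:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    probs = torch.softmax(model(ids).logits[0, -1], dim=-1)  # distribution over the whole vocabulary
    top = torch.topk(probs, k=5)                             # the five most likely next tokens
    candidates = [(tokenizer.decode(int(i)), round(float(p), 3))
                  for i, p in zip(top.indices, top.values)]
    print(prompt, "->", candidates)
```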

[–] [email protected] 3 points 1 day ago (1 children)

Not understanding the brain (note: said "world model" idea is something of a fabrication by the AI people; brains are distributed functional structures with many parts and roles) does not make it equal to "AI". Brains and LLMs do not function in the same way; that is a lie peddled by hype dealers.

[–] [email protected] 2 points 1 day ago (5 children)

Nobody here has claimed that brains and LLMs work the same way.

[–] [email protected] 4 points 1 day ago* (last edited 1 day ago)

It, uhm, predicts tokens?

If calling it a word predictor is oversimplifying, I mean.

[–] [email protected] 2 points 23 hours ago (1 children)

Yes, it is indeed a very fancy autocomplete, but as much as it feels like it's doing reasoning, it is not.

[–] [email protected] 3 points 22 hours ago (1 children)

I haven't claimed it does reasoning.

[–] [email protected] 2 points 21 hours ago

There's nothing else left then.

[–] [email protected] 34 points 1 day ago* (last edited 1 day ago) (2 children)

I'd have to agree: don't ask ChatGPT why it has changed its tone. It's almost certain that this is a made-up answer, and you (and everyone who reads this) will end up stupider than before.

But ChatGPT always had a tone of speaking. Before this change, it sounded very patronizing to me. And it'd always counterbalance everything. Since the early days it always told me: you have to look at this side, but also look at that side. And it'd be critical of my mails and say I can't be blunt but have to phrase my mail in a nicer way...

So yeah, the answer is likely known to the scientists/engineers who do the fine-tuning or preference optimization. Companies like OpenAI tune and improve their products all the time. Maybe they found out people don't like the sometimes patronizing tone, and now they're going for something like "Her". Idk.

Ultimately, I don't think this change accomplishes anything. Now it'll sound more factual, yet the answers have about the same degree of factuality. They're just phrased differently. So if you like that better, that's good. But either way, you're likely to continue asking it questions, letting it do the thinking, and becoming less of an independent thinker yourself. What it said about critical thinking is correct, but it applies to all AI, regardless of its tone. You'll also get those negative effects with your preferred tone of speaking.

[–] [email protected] 36 points 1 day ago

LLMs are very good at giving what seems like the right answer for the context. Whatever "rationality" jailbreak you did on it is going to bias its answers just as much as any other prompt. If you put in a prompt that talks about the importance of rationality and not being personal, it's only natural that it would then respond that a personal tone is harmful to the user—you basically told it to believe that.

[–] [email protected] 19 points 1 day ago (6 children)

ChatGPT has become so intensely agreeable that you can actually ask it a bunch of technobabble that even someone who doesn't know any better would recognize as technobabble, and it will agree with you. See pic:

https://u.drkt.eu/05Pdlf.png

I can post the details here.

please do!

[–] [email protected] 12 points 1 day ago (2 children)

Honestly, this is not really technobabble. If you imagine a user with a poor grasp of namespaces following a few different poorly written guides, then this question seems plausible and makes sense.

The situation would be something like this: the user wants to look at the container's "root" filesystem (maybe they even want to change files in the container by mounting the image and navigating there with a file manager, not realizing that this won't work). So they follow a guide to mount a container image into the current namespace, and successfully mount the image.

For the file explorer, they use pcmanfm, and for some reason decided to install it through Flatpak - maybe they use an immutable distro (containers on Steam Deck?). They gave it full filesystem access (with user privileges, of course), because that makes sense for a file explorer. But they started it before mounting the container image, so it won't see new mounts created after it was started.

So now they have the container image mounted, have successfully navigated to the directory into which they mounted it, and pcmanfm shows an empty folder. Add a slight confusion about the purpose of xdg-open (it does sound like something that opens files, right?), and you get the question you made up.

[–] [email protected] 6 points 1 day ago* (last edited 1 day ago) (1 children)

Just to give an impression of how the tone changes after applying the above-mentioned custom instructions:

[–] [email protected] 9 points 1 day ago* (last edited 1 day ago) (2 children)

This is not enshittification; this is just a corporation trying to protect itself against anything that could cause negative publicity, like all corporations do. I can even see emojis and a positive tone being wanted features for some. The real problem here is the lack of transparency.

I'm still waiting for ChatGPT etc. to start injecting (more or less hidden) ads into chats and product placement into generated images. That is just unavoidable once the bean counters realize that servers and training actually cost money.
