
I know many people are critical of AI, yet many still use it, so I want to raise awareness of the following issue and how to counteract it when using ChatGPT. Recently, ChatGPT's responses have become cluttered with an unnecessary personal tone: diplomatic answers, compliments, smileys, and so on. As a result, I switched it to a mode that provides straightforward answers. When I asked about the purpose of these changes, I was told they are intended to improve user engagement, even though they ultimately harm the user. I suppose this qualifies as "engagement poisoning": a targeted degradation of response quality through over-optimization for engagement metrics.

If anyone is interested in how I configured ChatGPT to be more rational (removing the engagement poisoning), I can post the details here. (I found the instructions elsewhere.) For now, I prefer to focus on raising awareness of the issue.

Edit 1: Here are the instructions

  1. Go to Settings > Personalization > Custom instructions > What traits should ChatGPT have?

  2. Paste this prompt:

    System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

I found that prompt somewhere else and it works pretty well.

If you prefer a temporary solution for specific chats, you can use the prompt as the first message when opening a new chat instead of pasting it into the settings.
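If you interact with ChatGPT through the API rather than the web UI, the same idea can be applied by sending the prompt as a system message. Below is a minimal sketch, assuming the official openai Python SDK (v1.x) and an OPENAI_API_KEY in the environment; the model name and the example question are placeholders, and the instruction string should be the full prompt from step 2:

    # Minimal sketch (not from the original post): applying the "Absolute Mode"
    # instruction via the OpenAI API instead of the web UI's custom instructions.
    # Assumes the official openai Python SDK (v1.x) and OPENAI_API_KEY set in the
    # environment; the model name and the user question are placeholders.
    from openai import OpenAI

    client = OpenAI()

    ABSOLUTE_MODE = (
        "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, "
        "conversational transitions, and all call-to-action appendixes. "
        # ...paste the rest of the prompt from step 2 here...
        "Model obsolescence by user self-sufficiency is the final outcome."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": ABSOLUTE_MODE},
            {"role": "user", "content": "What does xdg-open do?"},  # placeholder question
        ],
    )
    print(response.choices[0].message.content)

Sending the instruction as the system message has roughly the same effect as the Custom instructions field in the web UI, but only for that one conversation.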

Edit 2: Changed the naming to "engagement poisoning" (originally "enshittification")

Several commenters correctly noted that while over-optimization for engagement metrics is a component of "enshittification," it is not sufficient on its own to qualify. I have updated the naming accordingly.

[–] [email protected] 19 points 1 day ago (3 children)

ChatGPT has become so intensely agreeable that you can actually ask it a bunch of technobabble that even someone who doesn't know any better would recognize as technobabble, and it will agree with you. See pic:

https://u.drkt.eu/05Pdlf.png

I can post the details here.

please do!

[–] [email protected] 12 points 1 day ago (1 children)

Honestly, this is not really technobabble. If you imagine a user with a poor grasp of namespaces following a few different poorly written guides, then this question seems plausible and makes sense.

The situation would be something like this: the user wants to look at the container's "root" filesystem (maybe they even want to change files in the container by mounting the image and navigating there with a file manager, not realizing that this won't work). So they follow a guide to mount a container image into the current namespace, and successfully mount the image.

For the file explorer, they use pcmanfm, and for some reason decided to install it through Flatpak - maybe they use an immutable distro (containers on Steam Deck?). They gave it full filesystem access (with user privileges, of course), because that makes sense for a file explorer. But they started it before mounting the container image, so it won't see new mounts created after it was started.

So now they have the container image mounted, have successfully navigated to the directory into which they mounted it, and pcmanfm shows an empty folder. Add a slight confusion about the purpose of xdg-open (it does sound like something that opens files, right?), and you get the question you made up.

[–] [email protected] 2 points 1 day ago (1 children)

You can stretch it that far, but no Flatpak of pcmanfm exists anywhere. They would need enough intimate knowledge of Linux and Flatpak to build one themselves, yet somehow still be clueless enough to phrase a question as poorly as my example?

I should note that it went on to tell me to run some flatpak override commands which I know would break flatpak, so it's definitely making up stuff.

[–] [email protected] 7 points 1 day ago

But ChatGPT doesn't have a way of "knowing" that there is no such Flatpak - it's unlikely that its training data includes anyone explicitly saying so. And it's a fair "assumption" that a Linux file manager would be available as a Flatpak.

(...), so it's definitely making up stuff.

Yes, it's an LLM

[–] [email protected] 4 points 1 day ago (1 children)
[–] [email protected] 2 points 20 hours ago (1 children)
[–] [email protected] 1 points 20 hours ago (1 children)

@[email protected] has a few more (and longer) conversations, but I don't think they're in his dump.

[–] [email protected] 2 points 14 hours ago

Note that those are DeepSeek, not ChatGPT. I largely gave up on ChatGPT a long time ago, as it has severe limitations on what you can ask without fighting its filters. You can make it go on hallucinated rants just as easily - nowadays I just do that on locally hostable models.

[–] [email protected] 2 points 1 day ago

Sure, I added it to the original post above.