this post was submitted on 28 Apr 2025
206 points (100.0% liked)

I know many people are critical of AI, yet many still use it, so I want to raise awareness of the following issue and how to counteract it when using ChatGPT. Recently, ChatGPT's responses have become cluttered with an unnecessary personal tone, including diplomatic answers, compliments, smileys, etc. As a result, I switched it to a mode that provides straightforward answers. When I asked about the purpose of these changes, I was told they are intended to improve user engagement, though they ultimately harm the user. I suppose this qualifies as "engagement poisoning": a targeted degradation through over-optimization for engagement metrics.

If anyone is interested in how I configured ChatGPT to be more rational (removing the engagement poisoning), I can post the details here. (I found the instructions elsewhere.) For now, I prefer to focus on raising awareness of the issue.

Edit 1: Here are the instructions

  1. Go to Settings > Personalization > Custom instructions > What traits should ChatGPT have?

  2. Paste this prompt:

    System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

I found that prompt elsewhere, and it works pretty well.

If you prefer only a temporary solution for specific chats, instead of pasting it into the settings, you can use the prompt as the first message when opening a new chat.
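For API users, the same idea can be applied programmatically: the custom instructions become a `system` message that is prepended to every conversation. This is a hypothetical sketch (the helper name and the abbreviated instruction text are mine, not an official API); it only builds the `messages` payload in the shape the Chat Completions API expects, and an actual request would additionally need the `openai` client and an API key.

```python
# Sketch: supplying per-chat custom instructions as a system message,
# instead of pasting them into ChatGPT's settings UI.

# Abbreviated for the example -- in practice, paste the full prompt from above.
ABSOLUTE_MODE = (
    "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, "
    "soft asks, conversational transitions, and all call-to-action appendixes."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the custom instructions so they apply to this chat only."""
    return [
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Explain TCP slow start.")
print(messages[0]["role"])  # the system message carries the custom instructions
```

The upside of this approach over the settings route is scoping: the instructions affect only the conversations you build this way, mirroring the "first message in a new chat" trick above.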

Edit 2: Changed the name to "engagement poisoning" (originally "enshittification")

Several commenters correctly noted that while over-optimization for engagement metrics is a component of "enshittification," it is not sufficient on its own to qualify. I have updated the name accordingly.

[–] [email protected] 9 points 1 day ago* (last edited 1 day ago) (1 children)

This is not enshittification; this is just a corporation trying to protect itself against anything that could cause negative publicity, like all corporations do. I can even see emojis and a positive tone being desired features for some users. The real problem here is the lack of transparency.

I'm still waiting for ChatGPT etc. to start injecting (more or less hidden) ads into chats and product placement into generated images. That is unavoidable once the bean counters realize that servers and training actually cost money.

[–] [email protected] 4 points 1 day ago* (last edited 1 day ago) (1 children)

OpenAI aims to make users feel better by catering to their egos, at the cost of reducing the service's usefulness, rather than getting the message across directly. Their objective is to retain more users at the expense of each user's utility. From my point of view, it is enshittification in a way.

[–] [email protected] 1 points 13 hours ago

Making users feel better is one of the useful applications of this technology. Factuality and scientific rigor are not something text generators are capable of, due to the nature of the technology itself.

I would instead argue that being overly agreeable and never challenging the user may conflict with making the user feel better in the long term.