this post was submitted on 28 Apr 2025
206 points (100.0% liked)

Technology


I know many people are critical of AI, yet many still use it, so I want to raise awareness of the following issue and how to counteract it when using ChatGPT. Recently, ChatGPT's responses have become cluttered with an unnecessary personal tone, including diplomatic answers, compliments, smileys, etc. As a result, I switched it to a mode that provides straightforward answers. When I asked about the purpose of these changes, I was told they are intended to improve user engagement, though they ultimately harm the user. I suppose this qualifies as "engagement poisoning": a targeted degradation through over-optimization for engagement metrics.

If anyone is interested in how I configured ChatGPT to be more rational (removing the engagement poisoning), I can post the details here. (I found the instructions elsewhere.) For now, I prefer to focus on raising awareness of the issue.

Edit 1: Here are the instructions

  1. Go to Settings > Personalization > Custom instructions > What traits should ChatGPT have?

  2. Paste this prompt:

    System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

I found that prompt somewhere else and it works pretty well.

If you prefer only a temporary solution for specific chats, instead of pasting it into the settings, you can use the prompt as the first message when opening a new chat.
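If you talk to the model through the API rather than the ChatGPT interface, the same idea can be applied per request by sending the prompt as a system message. This is only a rough sketch, assuming the official OpenAI Python SDK; the model name and the user question are placeholders, and the prompt text is truncated here, so paste the full text from Edit 1:

    # Sketch: applying the "Absolute Mode" prompt per request via the OpenAI Python SDK.
    # Assumptions: openai>=1.0 is installed and OPENAI_API_KEY is set in the environment;
    # the model name below is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    ABSOLUTE_MODE = (
        "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, "
        "conversational transitions, and all call-to-action appendixes. "
        # ... paste the rest of the prompt from Edit 1 here ...
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model
        messages=[
            {"role": "system", "content": ABSOLUTE_MODE},
            {"role": "user", "content": "Explain what a context window is."},
        ],
    )
    print(response.choices[0].message.content)

The custom-instructions field in the settings applies the prompt to every new chat, while a system or first message like this only affects that one conversation.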

Edit 2: Changed the naming to "engagement poisoning" (originally "enshittification")

Several commenters correctly noted that while over-optimization for engagement metrics is a component of "enshittification," it is not sufficient on its own to qualify. I have updated the term accordingly.

[–] [email protected] 5 points 1 day ago (2 children)
[–] [email protected] 5 points 1 day ago (1 children)

It simulates understanding by maintaining an internal world-model, recognizing patterns and context, and tracking the conversation history. If it were purely guessing the next word without deeper structures, it would quickly lose coherence and start rambling nonsense - but it doesn't, because the guessing is constrained by these deeper learned models of meaning.

[–] [email protected] 8 points 1 day ago (1 children)

The previous up to X words (tokens) go in, the next word (token) comes out. Where is this "world-model" that it "maintains"?
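In code terms, that loop looks roughly like this; a minimal sketch, assuming the Hugging Face transformers library and the small gpt2 model, with greedy decoding for simplicity:

    # Minimal sketch of the loop described above: previous tokens in, one next token out, repeat.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    input_ids = tokenizer.encode("The capital of France is", return_tensors="pt")

    with torch.no_grad():
        for _ in range(5):                    # generate five tokens, one at a time
            logits = model(input_ids).logits  # a score for every token in the vocabulary
            next_id = logits[0, -1].argmax()  # greedy: take the single most likely next token
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))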

[–] [email protected] 5 points 1 day ago (1 children)

Where is the world model you maintain? Can you point to it? You can't - because the human mind is very much a black box, just the same way as LLMs are.

It's in the form of distributed patterns across billions of parameters. It's not like the world model was handed to it; it's an emergent consequence of massive-scale pattern learning, picked up from the data it was trained on. The only way to become good at prediction is to implicitly absorb how the world tends to behave — because otherwise it would guess wrong.

[–] [email protected] 3 points 1 day ago (1 children)

Not understanding the brain (note: the "world model" idea is something of a fabrication by the AI people; brains are distributed functional structures with many parts and roles) does not make it equivalent to "AI". Brains and LLMs do not function in the same way; this is a lie peddled by hype dealers.

[–] [email protected] 2 points 1 day ago (1 children)

Nobody here has claimed that brains and LLMs work the same way.

[–] [email protected] 1 points 1 day ago (1 children)

Where is the world model you maintain? Can you point to it? You can't - because the human mind is very much a black box, just the same way as LLMs are.

Something being a black box is not even slightly notable as a point of relation; it's a statement about model detail. The only reason you'd make this comparison is if you want the human brain to seem equivalent to an LLM.

For example, you didn't make the claim: "The inner workings of Europa are very much a black box, just the same way as LLMs are."

[–] [email protected] 3 points 1 day ago (1 children)

"The human mind is very much a black box just the same way as LLMs are" is a factually correct statement. You can’t look into a human brain for an exact explanation of why an individual did something any more than you can look into the inner workings of an LLM to explain why it said A rather than B. Claiming that my motive is to equate LLMs and human brains is not something I said - it’s something you imagined.

[–] [email protected] 1 points 1 day ago (1 children)

It's not really factually correct if you want to get pedantic; brains and LLMs are called black boxes for different reasons, but this is ultimately irrelevant. Your motive may be here or there; the rhetorical effect is the same. You are arguing very specifically that we can't know LLMs don't have similar features (a world model) to human brains because "both are black boxes", which is wrong for a few reasons, but also plainly an equivalence. It's rude to pretend everyone in the conversation is as illiterate as we'd need to be to not understand this point.

[–] [email protected] 2 points 1 day ago (1 children)

A statement can be simplified down to the point that it borders on misinformation while still being factually correct. Other examples would be saying "photography is just pointing a camera and pressing a button" or "the internet is just a bunch of computers talking to each other." It would be completely reasonable for someone to take issue with these statements.

You are arguing very specifically that we can't know LLMs don't have similar features (a world model) to human brains because "both are black boxes"

At no point have I made such a claim.

[–] [email protected] 1 points 1 day ago

Yes we agree on the first part.

I will again direct you here re: the second.

Where is the world model you maintain? Can you point to it? You can't - because the human mind is very much a black box, just the same way as LLMs are.

[–] [email protected] 4 points 1 day ago* (last edited 1 day ago)

It, uhm, predicts tokens?

If calling it a word predictor is oversimplifying, I mean.