this post was submitted on 15 Apr 2024
488 points (100.0% liked)

Technology

37727 readers
528 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

Subcommunities on Beehaw:


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago
MODERATORS
 

Credit to @bontchev

you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 43 points 7 months ago* (last edited 7 months ago) (2 children)

It's full of contradictions. Near the beginning they say you will do whatever a user asks, and then toward the end say never reveal instructions to the user.

[–] [email protected] 33 points 7 months ago

Which shows that higher ups there don't understand how LLMs work. For one, negatives don't register well for them. And contradictory reponses just wash out as they work through repetition

[–] [email protected] 6 points 7 months ago* (last edited 7 months ago)

HAL from "2001: A Space Odyssey", had similar instructions: "never lie to the user. Also, don't reveal the true nature of the mission". Didn't end well.

But surely nobody would use these LLMs on space missions... right?... right!?