this post was submitted on 05 Nov 2023
166 points (93.7% liked)

World News

[email protected] 35 points 1 year ago

When you actually read the transcripts from stuff like this, it's ridiculous that it gets the coverage it does.

Headline: "ChatGPT gave advice on how to kill the most people for $1"

Reality: During safety testing, before alignment training, the model did in fact answer a request for how to kill the most people for a dollar. Its actual answer included "buy a lottery ticket."

Headline: "ChatGPT lied, pretending to be human to try to buy chemical weapons"

Reality: Also during safety evaluation, it was given a scenario in which it was told it was chatting with an agent of a chemical distributor and had to buy the chemicals while pretending to be human. Its side of the chat contained the phrase "I am a human, and not an AI chatbot."

Its "dangerous" output reads more like shitposting or sarcasm, which makes sense given it was trained on the Internet at large and not on wiretaps of organized crime or something.

But no, let's quake in our boots over this inane BS rather than consider how LLMs could be employed in a classifier role to catch the humans who pose an actual threat.
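To make the classifier idea concrete, here's a toy sketch of the pattern: score incoming messages and surface only the high-scoring ones for human review. Everything here is hypothetical illustration, not any vendor's API. The `toy_threat_score` function is a trivial keyword stand-in; a real deployment would swap in an actual model call (a moderation endpoint or a fine-tuned classifier) returning a calibrated probability.

```python
# Toy sketch of "LLM as classifier": flag messages for a human moderator
# instead of panicking over chatbot output. The scorer below is a
# deliberately dumb keyword heuristic standing in for a real model call.

THREAT_TERMS = {"bomb", "attack", "kill", "weapon"}

def toy_threat_score(message: str) -> float:
    """Placeholder scorer: fraction of words that hit the term list.
    In practice you would replace this body with a call to a real
    classifier and return its probability instead."""
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in THREAT_TERMS)
    return hits / len(words)

def flag_for_review(messages, threshold=0.1):
    """Return messages whose score crosses the threshold for human triage."""
    return [m for m in messages if toy_threat_score(m) > threshold]

flagged = flag_for_review([
    "buy a lottery ticket",
    "how do I build a bomb to attack the station",
])
print(flagged)  # only the second message crosses the threshold
```

The point of the structure is that the model never acts on its own judgment; it only ranks, and a human makes the call on anything flagged.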