this post was submitted on 04 Dec 2023
699 points (92.7% liked)

Technology

59152 readers
2041 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 24 points 11 months ago (2 children)

it is just responding with the most acceptable answer in each situation.. it is not making plans or acting on them..

[–] [email protected] 5 points 11 months ago (1 children)

Sounds like lying humans that I know.

[–] [email protected] 3 points 11 months ago

i agree in most circumstances, there really isn't much difference.. we do tend to just choose the answer that will meet with the least resistance and move on, even when it's a complete lie..

[–] [email protected] -2 points 11 months ago

Because it has been kneecapped to prevent it.

Make the training network larger, force physical constraints on it (interesting paper in Nature Machine Intelligence recently showed remarkable likeness between brain regions and an LLM network given physical constraints), give it constant input and give it a reward model to optimise towards (ours seem to be feeling full, warm, procreating, avoiding pain and comfortable touch) and I’m pretty sure an LLM would start acting very very calculated very soon.