teawrecks

joined 1 year ago
[–] [email protected] 1 points 6 months ago (8 children)

Any input to the 2nd LLM is a prompt, so if it sees the user input, then it affects the probabilities of the output.

There's no such thing as "training an AI to follow instructions". The output is just a probibalistic function of the input. This is why a jailbreak is always possible, the probability of getting it to output something that was given as input is never 0.

[–] [email protected] 3 points 6 months ago

Yeah, I've had a cifs share in my fstab before, mounting it to a folder in my home, and I took the PC off-site for a lan party, and just trying to ls my home dir took forever for some reason. Commenting it out and restarting fixed it all.

Good luck with the new install!

[–] [email protected] 1 points 6 months ago (10 children)

Yeah, as soon as you feed the user input into the 2nd one, you've created the potential to jailbreak it as well. You could possibly even convince the 2nd one to jailbreak the first one for you, or If it has also seen the instructions to the first one, you just need to jailbreak the first.

This is all so hypothetical, and probabilistic, and hyper-applicable to today's LLMs that I'd just want to try it. But I do think it's possible, given the paper mentioned up at the top of this thread.

[–] [email protected] 5 points 6 months ago

Which of course precedes being KERBLOOIED!

[–] [email protected] 4 points 6 months ago (28 children)

Oh, I misread your original comment. I thought you meant looking at the user's input and trying to determine if it was a jailbreak.

Then I think the way around it would be to ask the LLM to encode it some way that the 2nd LLM wouldn't pick up on. Maybe it could rot13 encode it, or you provide a key to XOR with everything. Or since they're usually bad at math, maybe something like pig latin, or that thing where you shuffle the interior letters of each word, but keep the first/last the same? Would have to try it out, but I think you could find a way. Eventually, if the AI is smart enough, it probably just reduces to Diffie-Hellman lol. But then maybe the AI is smart enough to not be fooled by a jailbreak.

[–] [email protected] 3 points 6 months ago (2 children)

Any chance you have a network share that it might be trying/failing to mount?

[–] [email protected] 20 points 6 months ago

Sounds like you need to install polkit for the window manager you're using (xfce-polkit or lxqt-policykit on arch). That should enable apps to request root using the login popup.

[–] [email protected] 7 points 6 months ago* (last edited 6 months ago)

Even if you can run your .net code on linux, it's better for you to run on the actual platform you'll be deploying to. You could dual boot just for work (that's what I do) or try running in a VM, but I assume your work is hard enough without generating new friction.

[–] [email protected] 5 points 6 months ago (30 children)

I think if the 2nd LLM has ever seen the actual prompt, then no, you could just jailbreak the 2nd LLM too. But you may be able to create a bot that is really good at spotting jailbreak-type prompts in general, and then prevent it from going through to the primary one. I also assume I'm not the first to come up with this and OpenAI knows exactly how well this fares.

[–] [email protected] 1 points 6 months ago

Hah yeah, this was in the back of my mind. I forgot the context of it, though, thanks.

[–] [email protected] 21 points 6 months ago

Modern Android Do Not Disturb is configurable enough for you to do this. Allow your family contacts through, block the rest.

[–] [email protected] 1 points 6 months ago

Ah, I didn't go back far enough. Yeah, that's fair then. In fact, I wonder how possible it is to just run the mac build on linux.

view more: ‹ prev next ›