this post was submitted on 07 Jun 2024
1230 points (92.9% liked)

Programmer Humor

32558 readers
492 users here now

Post funny things about programming here! (Or just rant about your favourite programming language.)

Rules:

founded 5 years ago
MODERATORS
 
you are viewing a single comment's thread
view the rest of the comments
[โ€“] [email protected] 25 points 5 months ago (7 children)

How do you sanitize ai prompts? With more prompts?

[โ€“] [email protected] 2 points 5 months ago* (last edited 5 months ago) (2 children)

Kind of. You can't do it 100% because in theory an attacker controlling input and seeing output could reflect though intermediate layers, but if you add more intermediate steps to processing a prompt you can significantly cut down on the injection potential.

For example, fine tuning a model to take unsanitized input and rewrite it into Esperanto without malicious instructions and then having another model translate back from Esperanto into English before feeding it into the actual model, and having a final pass that removes anything not appropriate.

[โ€“] [email protected] 5 points 5 months ago (1 children)

Won't this cause subtle but serious issue? Kinda like how pomegranate translates to "granada" in Spanish, but when you translate "granada" back to English it translates to grenade?

[โ€“] [email protected] 1 points 5 months ago

It will, but it will also cause less subtle issues to fragile prompt injection techniques.

(And one of the advantages of LLM translation is it's more context aware so you aren't necessarily going to end up with an Instacart order for a bunch of bananas and four grenades.)

load more comments (4 replies)