This new data poisoning tool lets artists fight back against generative AI
(www.technologyreview.com)
Obviously this is exploiting some bug or weakness in the existing training process, so couldn't they just patch the mechanism being exploited?
Or at the very least, you could take a bunch of images and purposely poison them yourself. Now you have a set of poisoned images and their non-poisoned counterparts, which lets you train another model to undo the poisoning.
Sure, you've set up a speed bump, but this is hardly a solution.
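The paired-dataset counter-attack described above can be sketched in a few lines. This is a toy illustration under a strong assumption: the "poison" here is a fixed additive perturbation, so the mean residual over the pairs recovers it exactly. A real poisoning tool applies image-specific perturbations, so the "undo" model would need to be something learned, like a denoising network, not this linear estimate. All names and data here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def poison(images, pattern):
    """Simulated poisoning: add a fixed perturbation pattern (an assumption;
    real tools perturb each image differently)."""
    return np.clip(images + pattern, 0.0, 1.0)

# Build a paired dataset of clean and poisoned 8x8 grayscale toy images.
pattern = 0.05 * np.sin(np.linspace(0, 4 * np.pi, 64)).reshape(8, 8)
clean = rng.uniform(0.1, 0.9, size=(100, 8, 8))
poisoned = poison(clean, pattern)

# "Train" the un-poisoner: for an additive poison, the mean residual over
# the paired dataset recovers the perturbation pattern.
estimated_pattern = (poisoned - clean).mean(axis=0)

def unpoison(images):
    """Subtract the estimated perturbation to recover the clean image."""
    return np.clip(images - estimated_pattern, 0.0, 1.0)

# Apply it to a fresh poisoned image the estimator never saw.
test_clean = rng.uniform(0.1, 0.9, size=(8, 8))
recovered = unpoison(poison(test_clean, pattern))
print(np.abs(recovered - test_clean).max())  # reconstruction error is tiny
```

The point of the sketch is just the workflow: poison images you control, keep the originals, and use the pairs as supervision for a cleaning model.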
The AI can have some NaN, as a treat.
As a topping on some Pi