42Firehawk

joined 1 year ago
[–] [email protected] 2 points 1 month ago

Stronger guardrails can help, sure. But gathering new input and building a new model is, by the old analogy, the equivalent of replacing a failing vending machine with a different model from the same company.

The problem is that if you do the same thing with an LLM for hiring or job systems, the failure and bias come instead from the model being bigoted, which, while illegal, is hidden inside a model that has essentially been trained to be a more effective bigot.

You can't hide your race from an LLM that was accidentally trained to recognize which job histories are traditionally Black, or any other marker.

[–] [email protected] 9 points 1 month ago (2 children)

If I commission a vending machine, get one that was made automatically and runs itself, and I set it up and let it operate in my store, then I am responsible if it eats someone's money without dispensing their item, gives them the wrong thing, or dispenses dangerous products.

This has already been settled, and it's why vending machines can be opened up and repaired, and why each mechanism is controlled.

An LLM making business decisions has no such controls or safety mechanisms.

[–] [email protected] 3 points 1 month ago

In that case the ads are video only: no clicking on them, including to skip or anything else. So detection would come down to noticing that trying to change where you are in the video does nothing (and that playback happens exclusively through your 3-second buffer).
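A minimal sketch of how that seek-probe detection could work in a browser, assuming a standard HTMLVideoElement; the `probeForUnskippableAd` name and the timing thresholds are hypothetical, not from any real ad blocker:

```typescript
// Hypothetical sketch: detect an unskippable ad by probing whether
// seeking is a no-op on the page's <video> element.

async function probeForUnskippableAd(video: HTMLVideoElement): Promise<boolean> {
  const target = video.currentTime + 10; // try to jump 10 s ahead

  video.currentTime = target; // ordinary content honors this seek

  // Give the player a moment to apply (or silently discard) the seek.
  await new Promise((resolve) => setTimeout(resolve, 500));

  // If playback is still crawling forward from where it was, the seek
  // was ignored: likely a locked, ad-style stream that only advances
  // through its small forward buffer.
  const drift = Math.abs(video.currentTime - target);
  return drift > 5; // still well short of the target => seek had no effect
}
```

A real detector would also have to handle players that report a fake `currentTime`, so a single probe like this is only a starting point.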

[–] [email protected] 13 points 1 month ago (2 children)

These cameras are now recording lawn signs as well, which yields a considerably broader array of intelligence.