[–] [email protected] 1 points 1 week ago* (last edited 1 week ago)

Most of the time in my career that I spent designing and deploying algorithms was in Equity Derivatives, and a lot of that work wasn't even for market-traded instruments like Options but for OTCs, which are Marked To Model - so it's all a bit more advanced than what you think I should be studying.

Also, part of my background is Physics and another part is Systems Analysis, so I understand both the Maths that goes into making models and the other parts of that process, including the human element (such as how the definition of the inputs and outputs, and even the judgement of a model as "working" or "not working, needs to be redone", is what shapes what the model produces).

One could say I'm intimately familiar with how the sausages are made. And we're not talking about the predictive kind of stuff, which is harder for humans to control (because the Market itself serves as the reference for a model's quality, and if it fails to predict the Market too often it gets thrown out), but the kind of stuff for which there is no Market and everything is based on how the Traders feel the model should behave in certain conditions - which is a lot closer to the kind of situation in which Algorithms are made for companies like Healthcare Insurers.

I can understand that if your background is in predictive modelling you would think that models are genuine attempts at modelling reality (hence isolating the makers of a model from blame for what it does), but what we're talking about here is NOT predictive modelling but something else altogether: the automation of maximizing certain results whilst minimizing certain risks. In that kind of situation the model/algorithm is entirely an expression of the will of humans, from the very start, because they defined its goals (minimizing payouts, including via the Courts) and made a very specific choice of inputs for it to take into account (for example, the history of the Health Insurance Company having its decisions taken to Court and losing, so that it can minimize the risk of having to pay out too much), thus shaping its goals and, to a great extent, how it can reach those goals. Further, once confronted with the results, they approved the model for use.

Technology here isn't an attempt at reproducing reality so as to predict it (though there are elements of that, in that they're trying to minimize the risk of losing lots of money in Court, hence there will be some statistical "predicting" of the likelihood of people taking them to court and winning, probably based on the victim's characteristics and situation); it's just the automation of a particularly sociopathic human decision process (i.e. a person trying to unfairly and even illegally deny people payment whilst taking into account the possibility of that backfiring). In this case, what the Algorithm does, and to a large extent how it does it, is defined by what the decision makers want it to do, as is which ways of doing it are acceptable - thus the decision makers are entirely to blame for what it does.
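
To make that concrete, here's a minimal sketch in Python of what such a decision rule can look like. It's entirely hypothetical (I obviously don't have the insurer's actual code) - every name, input and number in it is an assumption for illustration - but it shows how the goal and the acceptable means get baked in by whoever writes and approves it:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    payout: float           # cost of simply paying the claim out
    p_litigates: float      # estimated chance the claimant takes it to Court
    p_insurer_loses: float  # estimated chance the insurer loses in Court
    legal_costs: float      # expected cost of fighting the case

def deny(claim: Claim) -> bool:
    # Deny whenever the expected cost of denying (litigation risk
    # included) is lower than just paying the claim. The goal
    # (minimize payouts) and the inputs (court-loss history, the
    # claimant's characteristics) are human choices, not facts the
    # model discovered about the world.
    expected_cost_of_denial = claim.p_litigates * (
        claim.p_insurer_loses * claim.payout + claim.legal_costs
    )
    return expected_cost_of_denial < claim.payout

# A $50k claim from someone judged unlikely to sue gets denied:
print(deny(Claim(payout=50_000, p_litigates=0.05,
                 p_insurer_loses=0.7, legal_costs=20_000)))  # True
```

Notice that nothing in there is trying to model reality for its own sake - the probabilities only exist in service of the payout-minimizing goal the designers chose.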

Or if you want it in plain language: if I were making an AI robot to get people out of my way, whilst choosing that it would have no limits on the amount of force it could use and giving it blade arms, any deaths it caused would be on me. Having chosen the goal, the means and the limits, as well as having accepted the bloody results from testing the robot and deployed it anyway, the blame for actually using such an autonomous device would be mine.

People in this case might not have been killed by blades, and the software wasn't put into a dedicated physical robotic body, but the consequences of the decisions of that automated agent are still the fault of the people who decided to create and deploy an automated sociopathic decider whose limits they defined and which they knew would result in deaths.