Aceticon

joined 2 weeks ago
[–] [email protected] 1 points 4 days ago* (last edited 4 days ago)

But people do stop believing money has value, or more specifically, their trust in the value of money can go down: history offers plenty of examples of places where people's trust in the value of money broke down.

As somebody pointed out, if one person has all the money and nobody else has any, money has no value. So it's logical to expect that, somewhere between where we are now and that imaginary extreme, there is a point in the distribution of wealth where most people lose trust in the value of money, and "wealth" anchored merely on that value stops being deemed wealth.

(That said, the wealthy generally move their wealth into property, as the saying goes: "Buy land: they ain't making any more of it". But even that is backed by people's belief in, and society's enforcement of, property laws, and the mega-wealthy wouldn't be so wealthy if they had to defend their "rights" over all that they own by themselves: the limits to wealth, when anchored to concrete physical things the "owners" must defend, are far, far lower than the current limits on wealth based on nation-backed tokens of value and ownership.)

[–] [email protected] 1 points 4 days ago* (last edited 4 days ago)

And further on point 2, the limit would be determined by all that people can produce, minus the costs of keeping those people alive and producing.

As it so happens, people produce more under better conditions, so spending the least amount possible on keeping people alive doesn't yield maximum profit. There is a sweet spot somewhere on the curve where people's productivity minus the cost of keeping them productive is at a peak (i.e. profit is at its maximum), and that's not at the point where the people producing things are merely surviving.
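That sweet spot can be sketched as a toy optimization. This is purely illustrative: the function `productivity` and all the numbers are my own made-up assumptions, chosen only so that returns on spending diminish, not taken from any real economic data.

```python
import math

def productivity(s):
    # Assumed toy model: better-treated workers produce more,
    # but with diminishing returns (concave in spending s).
    return 10 * math.log1p(s)

# Profit = productivity(s) - s; scan spending levels from 0.1 to 50.0
best_s = max((k / 10 for k in range(1, 501)),
             key=lambda s: productivity(s) - s)
print(f"profit peaks at spending ~ {best_s:.1f}, not at the bare minimum 0.1")
```

With this particular toy curve the peak falls at an interior point (spending of about 9), well above the survival floor, which is the whole point of the argument: minimizing spending on workers is not the same as maximizing profit.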

Capitalism really is just a way for the elites to get society to that sweet spot of the curve. Under Capitalism, people are more productive than in overtly autocratic systems (or, even further, outright slavery) where less is spent on people, they get less education, and they have less freedom to (from the point of view of the elites) waste their time doing what they want rather than producing. And because people in a Capitalist society live a bit better, are a bit less unhappy, and have something to lose, unlike in the outright autocratic systems, they produce more for the elites and there is less risk of rebellions, so it all adds up to more profit for the elites.

As you might have noticed by now, optimizing for the sweet spot of "productivity minus costs with the riff-raff" isn't the same as optimizing for the greatest good for the greatest number (the basic principle of the Left), since most people, by a huge margin, are the "riff-raff", not the elites.

[–] [email protected] 4 points 4 days ago

Just spreading the ground beef around rather than keeping it in the form of meat patties would've yielded something more pizza-like whilst using the exact same ingredients (though it would probably still be an excessive amount for a non-cheese topping).

[–] [email protected] 1 points 5 days ago

If one thinks a lot, likes to learn and, maybe more important, thinks about knowledge and learning things, that person will probably get there.

A certain educational background probably helps but is neither required nor sufficient, IMHO.

[–] [email protected] 8 points 5 days ago* (last edited 5 days ago)

Zionism represents "the Jews" about as much as Nazism represented blue-eyed blond people: they're both very similar, extremely racist ethno-Fascist ideologies which glue themselves to an ethnic group, claiming to represent it even while plenty of members of that group very overtly say "They do not represent me".

Never believe Fascists when they claim to represent a nation (in the case of the traditional Fascists) or a race (in the case of the ethno-Fascists). In fact, the more general rule is "Never believe Fascists".

[–] [email protected] 2 points 5 days ago* (last edited 5 days ago)

There are wankers everywhere and it doesn't take that many wankers as a proportion of the population to screw things up for everybody else.

[–] [email protected] 11 points 5 days ago (2 children)

I think it's a general thing with highly capable people in expert and highly intellectual domains that eventually you kinda figure out what Socrates actually meant by "All I know is that I know nothing".

[–] [email protected] 33 points 6 days ago

Studies have shown that something as simple as being tall makes people more likely to be looked to as leaders.

[–] [email protected] 2 points 6 days ago

Make nuke mad enough and nuke blows off.

I'm pretty sure the few survivors in the resulting wasteland would get bored pretty fast of making Non Credible Defense jokes about the waves of cockroaches trying to take over the World from humans.

Best not argue with nuke.

[–] [email protected] 25 points 6 days ago (1 children)

Yeah, but the way things are going soon it will be cheaper to buy a B-52 to live in than a house.

[–] [email protected] 2 points 6 days ago* (last edited 6 days ago)

"Your qualifications trump my own claims of expertise and your argument ravaged my deeply held little-more-than-political-slogan beliefs and I'm psychologically unable to handle it so I'm going to attack your style of writing, make broad claims about your personality and block you to stop the mental tension that what you wrote causes in my mind"

[–] [email protected] 1 points 6 days ago* (last edited 6 days ago)

Most of the time in my career that I spent designing and deploying algorithms was in Equity Derivatives, and a lot of that work wasn't even for market-traded instruments like Options but for OTCs, which are Marked To Model, so it's all a bit more advanced than what you think I should be studying.

Also, part of my background is Physics and another part is Systems Analysis, so I understand both the Maths that go into making models and the other parts of that process, including the human element (such as how the definition of the inputs and outputs, and even the judgment of a model as "working" or "not working, needs to be redone", shapes what the model produces).

One could say I'm intimately familiar with how the sausages are made. And we're not talking about the predictive kind of stuff, which is harder for humans to control (because the Market itself serves as the reference for a model's quality, and if a model fails to predict the market too badly it gets thrown out), but the kind of stuff for which there is no Market and everything is based on how the Traders feel the model should behave in certain conditions, which is a lot more like how Algorithms are made for companies like Healthcare Insurers.

I can understand that if your background is in predictive modelling you would think that models are genuine attempts at modelling reality (hence isolating the makers of the model from blame for what the model does). But what we're talking about here is NOT predictive modelling; it's something else altogether: an automation of maximizing certain results whilst minimizing certain risks. In that kind of situation the model/algorithm is entirely an expression of the will of humans, from the very start: they defined its goals (minimizing payouts, including via the Courts) and made a very specific choice of the elements it takes into account (for example, using the Health Insurance Company's history of having their decisions taken to Court and losing, so as to minimize the risk of having to pay too much out), thus shaping its goals and, to a great extent, how it can reach them. Further, once confronted with the results, they approved the model for use.

The technology here isn't an attempt at reproducing reality so as to predict it (though it does have elements of that, in that they're trying to minimize the risk of having to pay out lots of money from losing in Court, hence there will be some statistical "predicting" of the likelihood of people taking them to court and winning, probably based on the victim's characteristics and situation). It's just the automation of a particularly sociopathic human decision process (i.e. a person trying to unfairly, even illegally, deny people payment whilst taking into account the possibility of that backfiring). In this case, what the Algorithm does, and even to a large extent how it does it, is defined by what the decision makers want it to do, as is which ways of doing it are acceptable, so the decision makers are entirely to blame for what it does.
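The core of such a decision process can be sketched in a few lines. To be clear, this is a hypothetical illustration of the *kind* of rule described above, with function name, parameters and numbers all invented by me; it is not any real insurer's model:

```python
def should_deny(payout, p_sue, p_lose_in_court, legal_cost):
    """Deny a claim whenever the expected cost of denying it
    (chance the claimant sues * chance the insurer loses * total cost
    if they lose) is lower than simply paying the claim."""
    expected_cost_of_denial = p_sue * p_lose_in_court * (payout + legal_cost)
    return expected_cost_of_denial < payout

# A claimant judged unlikely to sue gets denied even when the insurer
# would probably lose in court: 0.05 * 0.8 * 30_000 = 1_200 < 10_000
print(should_deny(payout=10_000, p_sue=0.05, p_lose_in_court=0.8,
                  legal_cost=20_000))  # True
```

Note that the validity of the claim appears nowhere in the rule: every goal, input and threshold is a human choice, which is exactly why the blame sits with the people who chose them.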

Or, if you want it in plain language: if I were making an AI robot to get people out of my way, whilst choosing that it would have no limits on the amount of force it could use and giving it blade arms, any deaths it caused would be on me. Having chosen the goal, the means and the limits, as well as accepting the bloody results from testing the robot and deploying it anyway, the blame for actually using such an autonomous device would be mine.

People in this case might not have been killed by blades, and the software wasn't put into a dedicated physical robotic body, but the people who decided to create and deploy an automated sociopathic decider, whose limits they defined and which they knew would result in deaths, are still at fault for the consequences of the decisions of that automated agent of theirs.
