madsen

joined 1 year ago
[–] [email protected] 1 points 1 month ago

I have no idea how accurate this info on FindLaw.com is, but according to it, you don't need a lawyer in small claims court (in the US). And according to https://en.wikipedia.org/wiki/Small_claims_court there are many other countries with similar small claims courts: "Australia, Brazil, Canada, England and Wales, Hong Kong, Ireland, Israel, Greece, New Zealand, Philippines, Scotland, Singapore, South Africa, Nigeria and the United States". I know that list doesn't come close to covering a large share of Steam users, but I suspect that we Europeans are covered in other ways, so there's that.

The Wikipedia page also mentions the lawyer thing, by the way:

A usual guiding principle in these courts is that individuals ought to be able to conduct their own cases and represent themselves without a lawyer. Rules are relaxed but still apply to some degree. In some jurisdictions, corporations must still be represented by a lawyer in small-claims court.

And I don't think you need to sue Valve in the US. I think they're required to have legal representation in the countries in which they operate, which should enable you to sue them "locally" in many cases. Again, not an expert, so I'm making quite a few assumptions here.

[–] [email protected] 2 points 1 month ago (2 children)

Yeah, you're right. Sorry. I've edited my comment to reflect that. I didn't read OP's image but rather the news post by Valve on Steam, but missed the part that said: "the updated SSA now provides that any disputes are to go forward in court instead of arbitration".

it’s certainly not GOOD for Steam users to not be able to complain without lawyering up.

But doesn't the change open the door to litigation in small claims court? (Again, I'm in no way knowledgeable in US law, so I'm just asking.)

[–] [email protected] 31 points 1 month ago* (last edited 1 month ago) (4 children)

If, for example, I want to return a game in accordance with the rules and they won’t let me, I’m not gonna lawyer up and sue them from the other side of the Atlantic.

While arbitration is supposedly a lot cheaper than litigation, it isn't free either. Besides, arbitration makes it near-impossible to appeal a decision, and the outcome won't set binding legal precedent. Furthermore, arbitration often comes with a class action waiver, and Valve removed that from the SSA as well.

I'm far from an expert in law, especially US law, but as I understand it, ~~arbitration is still available (if both parties agree, I assume), it's just not a requirement anymore~~ [edit: nevermind, I didn't understand it]. I'm sure they're making this move because it somehow benefits them, but it still seems to me that consumers are getting more options [edit: they're not] which is usually a good thing.

[–] [email protected] 3 points 1 month ago

but chose bash because it made the most sense, that bash is shipped with most linux distros out of the box and one does not have to install another interpreter/compiler for another language.

Last time I checked (because I was writing Bash scripts based on the same assumption), Python was actually present on more Linux systems out of the box than Bash.

[–] [email protected] 1 points 2 months ago

Enterprise licensing for self-hosted setups is priced per chunk of 64 GB of RAM in your cluster, i.e. if you run Elastic on 2 machines with 32 GB of RAM each, you pay for a single 64 GB unit. It sounds like there may have been some poor communication going on, because they definitely don't base the pricing for self-hosted setups on the number of employees or anything like that.
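
In other words, the math is simple (rough sketch; the helper name and node sizes are mine, and I'm assuming partial 64 GB chunks get rounded up):

# Python, illustrative only
import math

def license_units(node_ram_gb):
    """Number of 64 GB licensing units for a cluster, given each node's RAM in GB."""
    return math.ceil(sum(node_ram_gb) / 64)

print(license_units([32, 32]))      # two 32 GB nodes -> 64 GB total -> 1 unit
print(license_units([64, 64, 32]))  # 160 GB total -> 3 units (rounded up)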

They're also not super uptight about you going over the licensing limit for a while. We've been running a couple of licenses short since we scaled our cluster up a while back. Our account manager knows and doesn't care.

[–] [email protected] 33 points 3 months ago* (last edited 3 months ago)

I think they vastly underestimate how many things Meta tracks besides ad tracking. They're likely tracking how long you look at a given post in your feed and will use that to rank similar posts higher. They know your location and what wifi network you're on, and will use that to make assumptions based on others on the same network and/or in the same location. They know what times you're browsing at and can correlate that with what's trending in the area at those times, etc.

I have no doubt that their algorithm is biased towards all that crap, but these kinds of investigations need to be more informed in order for them to be useful.

[–] [email protected] 1 points 3 months ago (1 children)

Odd. I replied to this comment, but now my reply is gone. Gonna try again and type up as much as I can remember.

Regardless, an algorithm expecting binary answers will obviously not take para- and extralinguistic cues into account. The extra 50 ms of hesitation, the downward glance and the voice cracking when answering "no" to "has he ever tried to strangle you before?" have a reasonable chance of being picked up by a human, but once that's reduced to something the algorithm can handle, it's just a simple "no". Humans are really good at picking up on such cues, even if they aren't consciously aware that they're doing it, but if those humans are preoccupied with staring at a computer screen in order to input the answers to the questionnaire, then there's a much higher chance that they'll miss them too. I honestly only see negatives here.

It’s helpful to have an algorithm that makes you ask the right questions [...]

Arguably a piece of paper could solve that problem.

Seriously. 55 victims out of the 98 homicide cases sampled were deemed at negligible or low risk. If a non-algorithm-assisted department presented those numbers, I'd expect them to be looking for new jobs real fast.

[–] [email protected] 4 points 4 months ago

Your point is valid regardless but the article mentions nothing about AI. ("Algorithm" doesn't mean "AI".)

[–] [email protected] 2 points 4 months ago* (last edited 4 months ago)

so it’s probably just some points assigned for the answers and maybe some simple arithmetic.

Why yes, that’s all that machine learning is, a bunch of statistics :)

I know, but that's not what I meant. I mean literally something as simple and mundane as assigning points per answer and evaluating the final score:

// Pseudo code
risk = 0
if (Q1 == true) {
    risk += 20
}
if (Q2 == true) {
    risk += 10
}
// etc...
// Maybe throw in a bit of
if (Q28 == true) {
    if (Q22 == true and Q23 == true) {
        risk *= 1.5
    } else {
        risk += 10
    }
}

// And finally, evaluate the risk:
if (risk < 10) {
    return "negligible"
} else if (risk >= 10 and risk < 40) {
    return "low risk"
}
// etc... You get the picture.

And yes, I know I can just write if (Q1) {, but I wanted to make it a bit more accessible for non-programmers.

The article gives absolutely no reason to assume it's anything more than that, and I apparently missed the part of the article that mentioned the system has been in use since 2007. I know we had machine learning back then too, but looking at the project description here: https://eucpn.org/sites/default/files/document/files/Buena%20practica%20VIOGEN_0.pdf it looks more like they analyzed a bunch of cases (2,159) and came up with the 35 questions and a scoring system not unlike what I just described above.

Edit: I managed to find this, which has apparently been taken down since (but thanks to archive.org it's still available): https://web.archive.org/web/20240227072357/https://eticasfoundation.org/gender/the-external-audit-of-the-viogen-system/

VioGén’s algorithm uses classical statistical models to perform a risk evaluation based on the weighted sum of all the responses according to pre-set weights for each variable. It is designed as a recommendation system but, even though the police officers are able to increase the automatically assigned risk score, they maintain it in 95% of the cases.

... which incidentally matches what the article says (that police maintain the VioGen risk score in 95% of the cases).
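
A weighted sum like that is trivial to sketch. Something like this (the weights and thresholds are made up for illustration, not the actual VioGén values):

# Python, illustrative only
WEIGHTS = [20, 10, 5, 15]                                        # one pre-set weight per yes/no question
THRESHOLDS = [(10, "negligible"), (40, "low"), (70, "medium")]   # upper bounds for each risk label

def risk_level(answers):
    """answers: one boolean per question; returns a risk label."""
    score = sum(w for w, a in zip(WEIGHTS, answers) if a)
    for limit, label in THRESHOLDS:
        if score < limit:
            return label
    return "high"

print(risk_level([True, False, False, True]))  # 20 + 15 = 35 -> "low"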

[–] [email protected] 1 points 4 months ago (4 children)

The crucial point is: 8% of the decisions turn out to be wrong or misjudged.

The article says:

Yet roughly 8 percent of women who the algorithm found to be at negligible risk and 14 percent at low risk have reported being harmed again, according to Spain’s Interior Ministry, which oversees the system.

Granted, neither "negligible" nor "low risk" means "no risk", but I think 8% and 14% are far too high for those categories.

Furthermore, there's this crucial bit:

At least 247 women have also been killed by their current or former partner since 2007 after being assessed by VioGén, according to government figures. While that is a tiny fraction of gender violence cases, it points to the algorithm’s flaws. The New York Times found that in a judicial review of 98 of those homicides, 55 of the slain women were scored by VioGén as negligible or low risk for repeat abuse.

So of the 98 murders they reviewed, the algorithm had put more than half of the victims at negligible or low risk for repeat abuse. That's a fucking coin flip!

[–] [email protected] 14 points 4 months ago* (last edited 4 months ago) (2 children)

I don't think there's any AI involved. The article mentions nothing of the sort, it's at least ~~8~~ 17 years old (according to the article) and the input is 35 yes/no questions, so it's probably just some points assigned for the answers and maybe some simple arithmetic.

Edit: Upon a closer read I discovered the algorithm was much older than I first thought.
