this post was submitted on 08 Mar 2025
937 points (98.2% liked)

[–] [email protected] 4 points 2 hours ago

Say what you will about Will Smith, but his movie I, Robot made a good point about this over 20 years ago.

(damn I'm old)

[–] [email protected] 2 points 3 hours ago

Let's get more kidneys out there instead with tax credits for donors.

[–] [email protected] 25 points 6 hours ago

The death panels Republican fascists claimed Democrats were creating are now here, and they're being run by Republicans.

I hate this planet

[–] [email protected] 19 points 8 hours ago (1 children)

"Treatment request rejected, insufficient TC level"

[–] [email protected] 8 points 8 hours ago (1 children)

A Voyager reference out in the wild! LMAO

[–] [email protected] 4 points 5 hours ago

Had to be done. It's just too damn close not to.

[–] [email protected] 17 points 11 hours ago (1 children)

Yeah. It’s much more cozy when a human being is the one that tells you you don’t get to live anymore.

[–] [email protected] 3 points 2 hours ago

Human beings have a soul you can appeal to?
Not every single one, but enough.

[–] [email protected] 40 points 13 hours ago (2 children)

What are you going to train it on, since basic algorithms aren't sufficient? Past committee decisions? If that's the case, you're hard-coding whatever human bias you're supposedly trying to eliminate. A useless exercise.

[–] [email protected] 16 points 11 hours ago (1 children)

A slightly better metric to train it on would be chance of survival/years of life saved thanks to the transplant. However, those also suffer from human bias, because past decisions influenced who got a transplant and thus what data we were able to gather.
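
As a toy illustration (every number here is invented, this is nothing like a real allocation system), a "years of life saved" metric might look like:

```python
# toy benefit metric: expected life-years gained from the transplant,
# i.e. survival probability times expected years with the organ, minus
# expected years without it. all numbers are invented.
def expected_benefit(p_survive_tx: float, years_with_tx: float,
                     years_without_tx: float) -> float:
    return p_survive_tx * years_with_tx - years_without_tx

# two hypothetical candidates competing for the same kidney
print(expected_benefit(0.90, 20.0, 2.0))   # 16.0 expected life-years gained
print(expected_benefit(0.75, 30.0, 5.0))   # 17.5 - riskier surgery, bigger payoff
```

And every one of those input estimates comes from past outcome data, which is exactly where the bias sneaks back in.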

[–] [email protected] 7 points 7 hours ago* (last edited 7 hours ago)

And we do that with basic algorithms informed by research. But then the score gets tied, and we have to decide who has the greatest chance of following through on their regimen, based on things like past history and the means to acquire the medication, go to the appointments, follow a diet, and not drink. An AI model will optimize that based on wild demographic data that is correlative without being causative, and end up being a black-box racist in a way that a committee that has to justify its thinking to other members couldn't. You watch.

[–] [email protected] 10 points 13 hours ago (1 children)

Nah bud, you just authorize whatever the doctor orders, because they're more knowledgeable about the situation.

[–] [email protected] 2 points 13 hours ago

That makes logical sense, but what about the numbers? They can't go up if we keep spending the money we promised to spend on the 69th most effective and absolutely most expensive healthcare system in the world. What is this, an essential service? Rubes.

[–] [email protected] 12 points 14 hours ago (2 children)

I don't mind AI. It is simply a reflection of whoever is in charge of it. Unfortunately, we have monsters who direct humans and AI alike to commit atrocities.

We need to get rid of the demons, else humanity as a whole will continue to suffer.

[–] [email protected] 5 points 5 hours ago (1 children)

If it weren't exclusively used for evil, it would be a wonderful thing.

Unfortunately we also have capitalism. So everything has to be just the worst all the time so that the worst people alive can have more toys.

[–] [email protected] 2 points 3 hours ago

Thing is, those terrible people don't enjoy everything they already own, and don't understand that they are killing cool things in the crib. People invent and entertain when they can... because it is fun, and because they think they have neat things to show the world. Problem is, prosperity is needed to give people the luxury of trying to create.

The wealthy are murdering the golden geese of culture and technology. They won't be happier for it, and will simply keep swinging their chainsaw at humanity in a desperate bid to find happiness.

[–] [email protected] 9 points 14 hours ago (1 children)

> a reflection of whoever is in charge of it

not even that. it's an inherently more regressive version of whatever data that person feeds it.

there are two arguments for deploying this shit outside of very narrow laboratory uses (where everyone was already using other statistical models):

A. this is one last grasp at fukuyama's 'end of history': one last desperate scream of the liberal order, who want to be regressive shitheads and build the abdication machine as their grand industrial-philosophical project, so they can do whatever horrible shit they want and still claim they're compassionate, since they're only doing it because the computer said so.

B. this is a project by literal monarchists: people who wish to kill democracy, to murder truth and collaboration and replace them with blind tribalistic loyalty to a führer/king. the rhetoric coming from a lot of the funders of these things supports this.

this technology is existentially evil, and will be the end of our society either way. it must be stopped. the people who work on it must be stopped. the people who fund it must be hanged.

[–] [email protected] 1 points 10 hours ago (1 children)

I mean, yes, but it can be VERY useful in those narrow laboratory use cases.

[–] [email protected] 2 points 9 hours ago (1 children)

im skeptical but open to that. it's just that these models are being pushed into literally everything, to the point they're hard to avoid. I can't think of another kind of specialized lab tool that has had that done. I do not own, nor have I ever owned, a sample centrifuge. I don't have CRISPR tools. I have never, outside of academic settings, opened wolfram alpha on my home computer. even AutoCAD and solidworks are specialist tools, and I haven't touched any version of either in years.

because these models, while not good for anything anyone should ever actually want outside a lab setting, are also very very good for fascism. they do everything a fascist needs to, aside from the actual physical killing.

and I don't think the level of development and deployment these tools get - along with the wildly inflated price of the hardware to run them (or anything else), the death of web search, the damage to academic journals, etc - is a net benefit. not even to specialized researchers who have uses for specialized versions of them as the statistical tools they are. certainly not to the fields over the long term.

[–] [email protected] 2 points 8 hours ago (1 children)

Why shouldn't they have long term benefits for researchers?

Reminds me a bit of when CRISPR got big: people were worried to no end about potential dangers, designer babies, bioterrorism ("everybody can make a killer virus in their garage now"), etc. In reality, it has been a huge leap forward for molecular biology and has vastly helped research, cancer treatment, drug development, and many other things. I think machine learning could have a similar impact. It's already being used in the development of new drugs, genomics, and the detection of tumours, just to name a few areas.

[–] [email protected] 2 points 8 hours ago

because murdering truth is not good for science. fascism is not good for science funding. researchers use search engines all the time. academia is struggling with an LLM fraud problem.

[–] [email protected] 14 points 17 hours ago

Transplant Candidates:

Black American Man who runs a charity: Denied ❌️

President: Approved ✅️

All Hail President Underwood

[–] [email protected] 47 points 1 day ago

Yeah, I'd much rather have random humans I don't know anything about making those "moral" decisions.

> If you've already answered, "No," you may skip to the end.

So the purpose of this article is to convince people of a particular answer, not to actually evaluate the arguments pro and con.

[–] [email protected] 15 points 1 day ago (1 children)

I still remember "death panels" from the Obama era.

Now it's ai.

Whatever.

[–] [email protected] 13 points 1 day ago

Everything Republicans complained about can be done under Trump, twice as bad and twice as evil, and they will be 'happy' and sing his praises.

[–] [email protected] 3 points 19 hours ago

What's with the Hewlett Packard Enterprise badging at the top?

[–] [email protected] 4 points 21 hours ago

The kidney would still get transplanted in the end, whether the decision is made by a human or an AI, no?

[–] [email protected] 29 points 1 day ago (7 children)

That's not what the article is about. I think putting more objectivity into decisions like the ones you listed, for example, benefits the majority. Human factors will lean toward favored minority factions: people with wealth, power, or a similar race to the decision-makers, people who seem "nice," or people with many vocal advocates. This paper just states that current AIs aren't very good at what we would call moral judgment.

It seems like algorithms would be the most objective way to do this, but I could see AI contributing by looking for more complicated outcome trends. E.g.: "Hey, it looks like people with this gene mutation and chronically uncontrolled hypertension tend to live less than 5 years after cardiac transplant - consider adjusting your existing algorithm's weighting by 0.5%."
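
Roughly, in code (every field name and coefficient here is invented, purely to illustrate the idea):

```python
# sketch: a transparent, rule-based allocation score with one small
# adjustment suggested by an AI-discovered outcome trend. a human
# committee would still review and approve the rule itself.
BASE_WEIGHTS = {
    "wait_time_years": 1.0,
    "donor_match_score": 2.5,
    "medical_urgency": 3.0,
}

def allocation_score(candidate: dict) -> float:
    score = sum(w * candidate[k] for k, w in BASE_WEIGHTS.items())
    # AI-flagged trend: this mutation plus uncontrolled hypertension
    # correlated with <5-year survival, so down-weight by 0.5%
    if candidate["has_flagged_mutation"] and candidate["uncontrolled_htn"]:
        score *= 0.995
    return score

patient = {"wait_time_years": 4, "donor_match_score": 0.8,
           "medical_urgency": 2, "has_flagged_mutation": True,
           "uncontrolled_htn": True}
print(allocation_score(patient))  # (4.0 + 2.0 + 6.0) * 0.995 = 11.94
```

The AI only proposes the adjustment; the scoring rules stay legible to the people accountable for them.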

[–] [email protected] 9 points 1 day ago (3 children)

Creatinine was used as a measure of kidney function for literal decades even though the standard race-adjusted interpretation made African Americans' kidneys look healthier than they actually were by other measures. Creatinine level is/was a primary determinant of transplant eligibility. Only in the last few years have some hospitals started using alternatives like inulin clearance, which is a more race- and gender-neutral measurement of kidney function.

No algorithm matters if the input isn't comprehensive enough, and comprehensive biological testing is not cost-effective.

[–] [email protected] 16 points 1 day ago (1 children)

Though those complicated outcome trends can have issues, with things like minorities having worse health outcomes due to a history of oppression and poorer access to healthcare. It will definitely need humans overseeing it, because health data can be misleading if you look purely at the numbers.

[–] [email protected] 4 points 22 hours ago (2 children)

I would rather have AI deciding it than bank account balances.

[–] [email protected] 10 points 19 hours ago* (last edited 19 hours ago)

What do you think the AI would be trained on?

See also: UnitedHealthCare

[–] [email protected] 1 points 14 hours ago* (last edited 14 hours ago)

A lot of systems we have already made are super fucked up. this is true. a lot of them were designed to fuck shit up, and be generally evil. we do that sometimes.

these systems only serve to magnify them. see, there's been a massive marketing push to call these things "artificial intelligence". they're not. they tell you it's all too complex to explain, but type something on your phone. no, really, do it. like a sentence or two. anything.

you just used the small easily comprehensible version of a large (thing) model. the problem is, as you try to scale complexity on these, both accuracy and compute resources grow exponentially, because it's literally the same kind of algorithm as your software keyboard uses to autocorrect, but with a bunch of recursion in it and much larger samples to reference every time someone hits a key.
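
here's a toy version of the trick, so you can see there's no magic in it (made-up data, obviously nothing like production scale):

```python
# toy bigram "language model": predict the next word by looking up
# which word most often followed the current one in the training text.
# real LLMs use learned weights over vastly more data, but the core
# task - statistical next-token prediction - is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # most frequent successor in the training data; no understanding involved
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict("the"))  # -> 'cat' (followed 'the' twice, vs once for mat/rat)
```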

there are some philosophical implications to this!

see, there is no neutral. there is no such thing as a view from nowhere. which means these systems are not neutral either. they need to be trained on something. you don't just enter axioms. that would be actual AI. this, again, isn't that. these are tools for making statistical correlations.

there's no way to do this that is 'neutral' or 'objective'. so what data do you think these tools get fed? let's say you're a bank, let's say you're wells fargo, and you want to make a large home-loan-assessment model. so you feed it all the data from your institution going back to the day your company was founded, back in stagecoach and horse times.

so you have names of applicants, and house statistics, and geographic location, and all sorts of variables to correlate and weigh in deciding who gets a home loan.

which is great if your last name is, for example: hapsburg. less good if your last name is, for example: freeman. and you can try to find ways to compensate, if you want to. keeping in mind that the people who made this system may actively want to stop you. but it's possible. but these systems are very very good at finding secret little correlations. they're fucking amazing at it. it's kind of their shit. this is the thing they're actually good at. so you'll find weird new incomprehensibly cryptic markers for how to be a racist piece of shit, all of which will stay within the black box and be used to entrench historical financial bigotry.
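
here's a tiny synthetic demo of that proxy effect (hypothetical features, fake data, not anyone's actual loan model): race is never given to the model, but a correlated field like zip code lets it reconstruct the old bias anyway:

```python
# synthetic proxy-discrimination demo: the protected attribute is never
# a model input, but a correlated feature (zip code) reconstructs the
# historical bias baked into the approval labels. all data is fake.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                               # protected attribute (hidden)
zipcode = np.where(rng.random(n) < 0.9, group, 1 - group)   # proxy: ~90% correlated
income = rng.normal(50, 10, n)                              # independent of group

# historical labels: group 1 was denied half the time regardless of income
approved = ((income > 45) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

X = np.column_stack([income, zipcode])                      # note: no 'group' column
model = LogisticRegression().fit(X, approved)

# identical income, different zip code -> the model rediscovered the bias
for zc in (0, 1):
    Xq = np.column_stack([np.full(1000, 55.0), np.full(1000, float(zc))])
    print(f"zip {zc}: mean approval probability {model.predict_proba(Xq)[:, 1].mean():.2f}")
```

the point being: scrubbing the obvious column out doesn't scrub the correlation out.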

death is the great equalizer, but this system can be backed up indefinitely. it will not die unless somebody kills it. which could be really hard. people can learn to be less shit, at least in theory; we can have experiences off the job that wake us up to ways we used to suck. this system can't, though. people can be audited, but aside from rebuilding the whole damn thing, you can't really do maintenance on these things. the webs of connections are too complicated, maybe on purpose; we can't know what changing an already-trained large (whatever) model will do.

so these systems are literally incapable of being better than us. they are designed to be worse. they are designed to justify our worst impulses. they are designed to excuse the most vile shit we always wanted to do. they are forged from the jungian shadow of our society, forged from the sins, and only the sins, of our ancestors, forged with the intent of severing our connection to material reality, and forcing all people to surrender: to lay down arms in support of the great titan truth that has always stood between regressive agendas and their thousand year reich.

so please stop shilling for this neon-genesis-evangelion-ass-fuckery.

[–] [email protected] 1 points 17 hours ago* (last edited 17 hours ago) (2 children)

I don't really see how a human denying you a kidney is better than an AI denying you one.

It's not like either makes more or fewer kidneys available for transplant anyway.

Terrible example.

It would have been better to use some other treatment as the example, one that doesn't depend on finite resources but only on money. Even then, humans already reject needed treatments without any help from an AI, but at least the example would make some sense.

In the end, as always, the people who have chosen AI as the "enemy" haven't understood anything about the current state of society and how things work. Another example of how picking the wrong fights is a path to failure.

[–] [email protected] 12 points 15 hours ago (1 children)

Responsibility. We’ve yet to decide as a society how we want to handle who is held responsible when the AI messes up and people get hurt.

You’ll start to see AI being used as a defense of plausible deniability as people continue to shirk their responsibilities. Instead of dealing with the tough questions, we’ll lean more and more on these systems to make it feel like it’s outside our control so there’s less guilt. And under the current system, it’ll most certainly be weaponized by some groups to indirectly hurt others.

“Pay no attention to that man behind the curtain”

[–] [email protected] 4 points 15 hours ago* (last edited 14 hours ago) (6 children)

AI would be fine. we do not have artificial intelligence. full stop. none of the technologies being talked about even approach intelligence. it's literally just autocorrect. do you know how the autocorrect on your phone's software keyboard works? then you know how a large language model works. it's exactly the same formula, just scaled up and recursed a bunch. I could have endless debates about what 'intelligence' is, and I don't know that there's a single position I would commit to very hard, but I know, dead certain, that it is not this. turing and minsky agreed when they first threw this garbage away in 1951: too many hazards, too few benefits, and insane, unreasonable costs.

but there's more to it than that. large (whatever) models are inherently politically conservative. they are made of the past, they do not struggle, they do not innovate, and they do not integrate new concepts, because they don't integrate any concepts; they just pattern match. you cannot have social progress when decisions are made by large (whatever) models. you cannot have new insights. you cannot have better policies, you cannot improve. you can only cleave closer and closer to the past, and reinforce it by feeding it its own decisions.

It could perhaps be argued, in a society that had once been perfect and was doing pretty well, that this is tolerable in some sectors, as long as someone keeps an eye on it. right now we're a smouldering sacrifice zone of a society. that means any training data would be toxic horror or toxic horror THAT IS ON FIRE. this is bad. these systems are bad. anyone who advocates for these systems outside extremely niche uses that probably all belong in a lab is a bad person.

and I think, if that isn't your enemy, your priorities are deeply fucked, to the point you belong in a padded room or a pine box.
