Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] [email protected] 31 points 1 month ago (1 children)

AI results are always so bad. I don't like that AI is being used for medical results. That needs more pushback.

[–] [email protected] 13 points 1 month ago (1 children)

Ironically, that is possibly one of the few legit uses.

Doctors can't learn about every obscure condition and illness, which means they can miss the symptoms for a long time. An AI that can check for potential matches to the symptoms involved could be extremely useful.

The proviso is that it is NOT a replacement for a doctor. It's a supplement that they can be trained to make efficient use of.

[–] [email protected] 6 points 1 month ago (2 children)

Couldn't that just as easily be solved with a database of illnesses which can be filtered by symptoms?
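
For concreteness, I mean something like this (the table names and rows are invented, purely to illustrate the idea):

```python
# Purely illustrative sketch of "a database of illnesses filtered by symptoms".
# The schema and data are made up; a real system would use a proper clinical
# terminology rather than free-text symptom strings.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE illness (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE symptom (illness_id INTEGER, term TEXT);
    INSERT INTO illness VALUES (1, 'obscure illness'), (2, 'common cold');
    INSERT INTO symptom VALUES (1, 'fever'), (1, 'joint pain'),
                               (2, 'fever'), (2, 'runny nose');
""")

# Find illnesses that match every symptom the doctor has entered.
entered = ["fever", "joint pain"]
placeholders = ",".join("?" for _ in entered)
rows = conn.execute(
    f"""SELECT i.name
        FROM illness i
        JOIN symptom s ON s.illness_id = i.id
        WHERE s.term IN ({placeholders})
        GROUP BY i.id
        HAVING COUNT(DISTINCT s.term) = ?""",
    (*entered, len(entered)),
).fetchall()

print(rows)  # -> [('obscure illness',)]
```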

[–] [email protected] 7 points 1 month ago (1 children)

That requires the symptoms to be entered correctly, and significant effort from (already overworked) doctors. A fuzzy-matching system that can process standard medical notes, as well as medical research papers, would be far more useful.

Basically, a quick click, and the paperwork is scanned. If it's a match for the "bongo dancing virus" or something else obscure, it can flag it up. The doctor can now invest some effort into looking up "bongo dancing virus" to see if it's a viable match.
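
To make that concrete, the flagging step could be as simple as something like this (the condition names, symptom lists, and thresholds are all invented; a real system would work over whole notes and a proper terminology):

```python
# Toy sketch of fuzzy-matching terms pulled from a doctor's notes against
# symptom lists for obscure conditions. Everything here is made up for
# illustration; it is not how any real clinical system works.
from difflib import SequenceMatcher

CONDITIONS = {
    "bongo dancing virus": ["involuntary rhythmic movement", "fever", "joint pain"],
    "some other rare illness": ["night sweats", "rash on palms", "fever"],
}

def similarity(a: str, b: str) -> float:
    """Rough string similarity in [0, 1], so near-misses still count."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_conditions(note_terms: list[str], threshold: float = 0.6) -> list[str]:
    """Return conditions whose symptom lists fuzzily overlap the note terms."""
    flagged = []
    for condition, symptoms in CONDITIONS.items():
        hits = sum(
            1 for symptom in symptoms
            if any(similarity(symptom, term) >= threshold for term in note_terms)
        )
        if hits >= 2:  # arbitrary cut-off: at least two plausible symptom matches
            flagged.append(condition)
    return flagged

print(flag_conditions(["fever", "joint pains", "tremor"]))
# -> ['bongo dancing virus']
```

The point isn't the matching trick itself; it's that the doctor never has to type the symptoms into a form, and anything flagged is only ever a prompt to go and look it up.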

It could also do its own pattern matching, e.g. if a particular set of symptoms is often followed 18-24 hours later by a sudden cardiac arrest. Flagging this up might turn out to be a false alarm, but it could also key doctors in on something more serious happening before it gets critical.

An 80% false positive rate is still quite useful, so long as the 20% helps and the rest is easy for a human to filter out.

[–] [email protected] 3 points 1 month ago

The key is considering who is going to be using these systems. Certainly Google search AI is never going to be useful in this way because the kind of info a patient needs is very different to what a doctor would find useful.

And if we do make systems for doctors, then it's pretty damn important that we consider things the way you have, taking into account that doctors are already overwhelmed and spending way too much effort juggling medical notes. I read a thing a while back which highlighted how many doctors are struggling with information management and with processing all the info they need to, because IT systems have tended to be imposed on them from the top down; some doctors even said paper notes were far easier to deal with (especially for complex cases). Digitisation definitely has huge benefits, but it seems like the needs of doctors have been largely ignored.

Even beyond doctors, I feel like the field of Human-Computer Interaction (HCI) has been way too focussed on ways of wringing more money out of people, with not enough focus on how we can make technology that empowers people. It's no wonder: if I were an HCI researcher, I know which kind of project would be more likely to get research funding, and it's the ruthlessly capitalistic ones.

"An 80% false positive is still quite useful, so long as the 20% helps and the rest is easy for a human to filter."

This gets at a key point, in my opinion: even when one ignores the straightforwardly scammy "AI" nonsense, a lot of what remains is still overly focussed on building systems that do stuff for people, usually in a way that eliminates or sidelines people in the process. There are many examples of this, but one is "AI teachers", which still require a human in the room, but only as a "learning facilitator" or some such nonsense. I work in a field where machine learning has been a prominent thing for years, so I'm in a weird place of being sick of hearing about AI and also impressed by what we do have. Mainly, though, I'm exasperated, because we could be doing so much more with the tech we have if we made tools that were intended to be used by humans.

Humans are dumb and emotional and silly, but we are also pretty cool, and we can make awesome things when given the opportunity. I will always be cynical about tech that seems overly keen to cut humans out of things.

[–] [email protected] 4 points 1 month ago

In either case, a real doctor would be reviewing the results. Nobody is going to authorize surgeries or prescription meds from AI alone.