this post was submitted on 30 Jan 2024
67 points (94.7% liked)

Technology


...“We believe Artificial Intelligence can save lives – if we let it. Medicine, among many other fields, is in the stone age compared to what we can achieve with joined human and machine intelligence working on new cures. There are scores of common causes of death that can be fixed with AI, from car crashes to pandemics to wartime friendly-fire.”

As I type this, the nation of Israel is using an AI program called the Gospel to assist its airstrikes, which have been widely condemned for their high level of civilian casualties...

top 13 comments
[–] [email protected] 40 points 9 months ago

“You should all be excited,” Google’s VP of Engineering Behshad Behzadi tells us, during a panel discussion with a McDonald’s executive.

That sentence alone is one of the more depressing ones I've read this week.

[–] [email protected] 18 points 9 months ago (2 children)

Medicine relies on verification. AI operates without that.

AI would be terrible in medicine.

The Gospel is a good example, although I'd argue it's intentionally used for that purpose - that, and so that no person can be held to account for their decisions.

[–] [email protected] 27 points 9 months ago (1 children)

I agree that in actual use, medicine needs to verifiably work. I believe "AI", if you wanna call it that, probably has its place in effectively speedrunning theoretical testing and bruteforcing of results that would take humans much longer to even think of.

The problem arises when people trust whatever the machine spits out. But that's not a new problem with AI; it's a general problem that any form of media has.

[–] [email protected] 23 points 9 months ago (1 children)

AI is a tool. Just like all tools, it's only as good as the tool that's using it.

[–] [email protected] 6 points 9 months ago

And the material it has to work with, which for AI is gathered information.

[–] [email protected] 5 points 9 months ago

Yep, exactly.

As a doctor who's into tech: before we implement something like AI-assisted diagnostics, we'd have to consider what the laziest/least educated/most tired/most rushed doctor would do. The tools would have to be very carefully implemented such that the doctor is using the tool to make good decisions, not harmful ones.

The last thing you want is a doctor blindly approving an inappropriate order suggested by an AI without applying critical thinking, causing harm to a real person because the machine generated a factually incorrect output.

[–] [email protected] 11 points 9 months ago

This is written by Behind the Bastards host Robert Evans. They just released an episode that follows this article pretty closely. Check it out if you'd like to listen to more of this sort of content.

[–] [email protected] 7 points 9 months ago

“We believe Artificial Intelligence can save lives – if we let it. Medicine, among many other fields, is in the stone age compared to what we can achieve with joined human and machine intelligence working on new cures."

Yeah, probably. There are lots of objective parameters you can give a medical model, and objective goals to train it towards.

[–] [email protected] 6 points 9 months ago

I promote running neural networks, LLMs, SLMs, and Stable Diffusion locally. Why?

The way I see it, there's a point where various forms of AI technology become so effective and so powerful that they pose a problem for society. People are afraid AI will take their jobs, and that's a valid concern.

Why then do I promote the use of local AI? Because I think that human+AI will be what prevents centralisation of data, the centralisation of knowledge, the centralisation of power that big tech firms, venture capitalists and authoritarians would love to have.

It's an uphill battle though, because, much like other boardroom buzzwords ("cloud", crypto, blockchain, etc.), AI is something that makes billionaires' pants wet and something that people despise, which is fully understandable.

But I also fear that attitude is self-defeating. If we allow AI technology to be centralised instead of learning to liberate ourselves from the central tech cabals that wish to control it, then we set ourselves up for new forms of authoritarianism we have never known before.

If you see the cyberdystopia that is China, or the tech oligarchy of the US, and you are left-leaning, socialist, anarchist, etc., then it should be your prerogative to take that power away from central authorities.

Please reply with actual arguments and not cathartic putdowns, because I do want to see another way, but just being a troll on Lemmy will not sway me.

Again, I am open to reproach, just be objective.

[–] [email protected] 2 points 9 months ago

This is the best summary I could come up with:


I was watching a video of a keynote speech at the Consumer Electronics Show for the Rabbit R1, an AI gadget that promises to act as a sort of personal assistant, when a feeling of doom took hold of me.

Specifically, about a term first defined by psychologist Robert Lifton in his early writing on cult dynamics: “voluntary self-surrender.” This is what happens when people hand over their agency and the power to make decisions about their own lives to a guru.

At Davos, just days ago, he was much more subdued, saying, “I don’t think anybody agrees anymore what AGI means.” A consummate businessman, Altman is happy to lean into that old-time religion when he wants to gin up buzz in the media, but among his fellow plutocrats, he treats AI like any other profitable technology.

As I listened to PR people try to sell me on an AI-powered fake vagina, I thought back to Andreessen’s claims that AI will fix car crashes and pandemics and myriad other terrors.

In an article published by Frontiers in Ecology and Evolution, a research journal, Dr. Andreas Roli and colleagues argue that “AGI is not achievable in the current algorithmic frame of AI research.” One point they make is that intelligent organisms can both want things and improvise, capabilities no model yet extant has generated.

What we call AI lacks agency, the ability to make dynamic decisions of its own accord, choices that are “not purely reactive, not entirely determined by environmental conditions.” Midjourney can read a prompt and return with art it calculates will fit the criteria.


The original article contains 3,929 words, the summary contains 266 words. Saved 93%. I'm a bot and I'm open source!

[–] [email protected] -2 points 9 months ago (1 children)

As summarized by Bing AI:

  • The author shares his experience at the Consumer Electronics Show, where he watched a keynote speech for the Rabbit R1, an AI gadget that acts as a personal assistant.
  • The Rabbit R1 can create a “digital twin” of the user, which can directly utilize all of your apps so that you, the person, don’t have to.
  • The author expresses concern about the lack of information on how the Rabbit will interact with these apps and how secure the user’s data will be.
  • The author also discusses the trend of AI assistants like Microsoft’s Copilot, which can perform a variety of tasks, potentially replacing human effort.
  • The author emphasizes that there’s nothing inherently wrong with AI technology, but expresses concern about the potential risks and implications of its misuse.

[–] [email protected] 14 points 9 months ago (1 children)

Mmmmm nice to know the AI got halfway through the article before giving up.

[–] [email protected] 6 points 9 months ago

Even the AI got bored reading it.