"The real benchmark is: the world growing at 10 percent," he added. "Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry."

Needless to say, we haven't seen anything like that yet. OpenAI's top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail's pace and requires constant supervision.

[–] [email protected] 33 points 21 hours ago (3 children)

Very bold move, in a tech climate in which CEOs declare generative AI to be the answer to everything, and in which shareholders expect line to go up faster…

I half expect to next read an article about his ouster.

[–] [email protected] 9 points 18 hours ago (1 children)

My theory is that it's only a matter of time before the firing sprees generate a backlog of actual work that the minor productivity gains from AI can't cover, and the investors start asking hard questions.

Maybe this is the start of the bubble bursting.

[–] [email protected] 6 points 15 hours ago

AI is burning a shit ton of energy and researchers’ time though!

[–] [email protected] 11 points 18 hours ago (1 children)

And crashing the markets in the process... At the same time they came out with a bunch of mumbo jumbo and sci-fi babble about having a million-qubit quantum chip.... 😂

[–] [email protected] 6 points 16 hours ago (4 children)

Tech is basically trying to push up the stocks with one hype idea after another. Social media bubble about to burst? AI! AI about to burst? Quantum! I'm sure that when people start realizing quantum computing is another smokescreen, a new moronic idea will start to gain steam from all those LinkedIn "luminaries".

[–] [email protected] 36 points 1 day ago (1 children)

LLMs in non-specialized application areas basically reproduce search. In specialized fields, most do the work that automation, data analytics, pattern recognition, purpose-built algorithms, and brute force did before. And yet the companies charge n× the amount for what are essentially these very conventional approaches, plus statistics. Not surprising at all. Just in awe that the parallels to snake oil weren't immediately obvious.

[–] [email protected] 19 points 1 day ago (1 children)

I think AI is generating negative value ... the huge power usage is akin to speculative blockchain currencies. Barring some biochemistry and other very, very specialized uses it hasn't given anything other than, as you've said, plain-language search (with bonus hallucination bullshit, yay!) ... snake oil, indeed.

[–] [email protected] 9 points 23 hours ago (1 children)

It's a little more complicated than that, I think. LLMs and AI are not remotely the same thing, and they have very different use cases.

I believe in AI for sure in some fields, but I understand the skeptics around LLMs.

But the difference AI is already making in the medical industry and hospitals is no joke. X-ray scans and early detection of severe illness are the uses being deployed today, and they will save thousands of lives and millions of dollars / euros.

My point is, it's not that black and white.

[–] [email protected] 7 points 15 hours ago

On this topic, the vast majority of people seem to think that AI means the free tier of ChatGPT.

AI isn't a magical computer demon that can grant all of your wishes, but that doesn't mean that it is worthless.

For example, AlphaFold essentially solved protein folding, and diffusion models built on that discovery let us generate novel proteins with specific properties with the same ease as we can make a picture of an astronaut on a horse.
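
To ground the "same ease" comparison, here's what the image-generation side of that analogy looks like in practice. This is a minimal sketch, assuming the Hugging Face diffusers library and a GPU; the checkpoint name is just one commonly used public model, not anything specific to the comment above.

```python
# Sketch: text-to-image with a diffusion model, assuming the `diffusers` library.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained public checkpoint (assumed available locally or via the Hub).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# One prompt in, one image out -- that's the "ease" being compared to above.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```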

Image classification is massively useful in manufacturing. Instead of custom-designed programs purpose-built for each client ($$$), you can fine-tune existing models with generic tools, using labor that doesn't need to be a software engineer (see the sketch below).
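
As a concrete illustration of that fine-tuning workflow, here's a minimal sketch assuming PyTorch/torchvision and a hypothetical folder of labeled factory photos (the directory name and the two classes are made up for the example):

```python
# Sketch: fine-tune a pretrained image classifier for a defect-detection task.
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

# Start from an ImageNet-pretrained backbone and replace only the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # hypothetical classes: "ok", "defect"

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical layout: factory_images/ok/*.jpg and factory_images/defect/*.jpg
dataset = datasets.ImageFolder("factory_images", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Labeling the photos is still work, but it's work a line technician can do; nobody has to write a bespoke vision pipeline from scratch.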

Robotics is another field. The amount of work required for humans to design and code their control systems was enormous. Now you can use standard models, give them arbitrary limbs and configurations and train them in simulated environments. This massively cuts down on the amount of engineering work ($$$) required.
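
To make the "train them in simulated environments" point concrete, here's a minimal sketch assuming the gymnasium and stable-baselines3 libraries; "Ant-v4" is just a stock simulated quadruped standing in for whatever limb configuration you actually define:

```python
# Sketch: train a locomotion policy entirely in simulation with a standard RL library.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Ant-v4")                  # stock simulated four-legged robot
model = PPO("MlpPolicy", env, verbose=1)  # off-the-shelf policy and learning algorithm
model.learn(total_timesteps=200_000)      # the policy learns to walk in simulation
model.save("ant_policy")                  # reuse or transfer the trained controller
```

None of this requires hand-deriving a controller for each joint, which is the engineering cost the comment is pointing at.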

[–] [email protected] 4 points 19 hours ago
[–] [email protected] 11 points 1 day ago* (last edited 1 day ago)

Is he saying it's just LLMs that are generating no value?

I wish reporters could be more specific with their terminology. They just add to the confusion.

Edit: he's talking about generative AI, of which LLMs are a subset.

[–] [email protected] 3 points 18 hours ago (3 children)

That’s standard for emerging technologies. They tend to be loss leaders for quite a long period in the early years.

It’s really weird that so many people gravitate to anything even remotely critical of AI, regardless of context or even accuracy. I don’t really understand the aggressive need for so many people to see it fail.

[–] [email protected] 10 points 17 hours ago* (last edited 17 hours ago)

Because there have already been multiple AI bubbles (e.g., ELIZA - I had a lot of conversations with FREUD running on an Apple IIe). It's also been falsely presented as basically "AGI."

AI models trained to help doctors recognize cancer cells - great, awesome.

AI models used as the default research tool for every subject - very, very, very bad. It's also so forced - and because it's forced, I routinely see that it has generated absolute, misleading horseshit in response to my research queries. But your average Joe will take that on faith, and your high schooler will grow up thinking that Columbus discovered Colombia or something.

[–] [email protected] 4 points 17 hours ago (1 children)

I just can't see AI tools like ChatGPT ever being profitable. It's a neat little thing that has flaws but generally works well, but I'm just putzing around in the free version. There's no dollar amount I'd be willing to pay for the service it provides, and I think OpenAI has its sights set way too high with the talk of $200/month subscriptions for its top-of-the-line product.

[–] [email protected] 0 points 14 hours ago (1 children)

For a lot of years, computers added no measurable productivity improvements. They sure revolutionized the way things work in all segments of society for something that doesn’t increase productivity.

AI is an inflating bubble: excessive spending, unclear use cases. But it won't take long for the pop to clear out the failures, make the successful use cases clearer, and let the winning approaches emerge. This is basically the definition of capitalism.

[–] [email protected] 4 points 14 hours ago (1 children)

What time span are you referring to when you say "for a lot of years"?

[–] [email protected] 2 points 13 hours ago (1 children)

Vague memories of many articles over much of my adult life decrying that the costs of whatever the current computing trend was outweighed its benefits.

And I believe it; it's technically true. There seems to be a pattern of bubbles where everyone jumps on the new hot thing and spends way too much money on it. It's counterproductive, right up until the bubble pops, leaving behind the transformative successes.

I believe it was a similar long-term story with electronic forms and printers. As long as you were just adding steps to existing business processes, you didn't see productivity gains. It took many years for businesses to reinvent the way they worked to really see the productivity gains.

[–] [email protected] 2 points 11 hours ago

If you want a reference, there's a Rational Reminder Podcast episode (a nerdy, factual personal finance podcast from a Canadian team) about this concept. It illustrated it with trains and phone infrastructure 100 years ago: new technology looks nice -> people invest stupid amounts in a variety of projects -> some crash brings stock valuations back to reasonable levels, and at that point the technology gets adopted and its infrastructure has been subsidized by those who lost money on the stock market's hot thing. Then a new hot thing emerges. The Internet got its cycle in 2000; maybe AI is the next one. Usually every few decades the top 10 in the S&P 500 changes.
