this post was submitted on 28 Oct 2024
1533 points (98.8% liked)

Technology

[–] [email protected] 7 points 2 weeks ago (1 children)

Sure, but LLMs aren't the only AI being used, nor will they eliminate the other forms of AI. As people see issues with the big LLMs, development focus will change to adopt other approaches.

[–] [email protected] 5 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

There is real risk that the hype cycle around LLMs will smother other research in the cradle when the bubble pops.

The hyperscalers are dumping tens of billions of dollars into infrastructure investment every single quarter right now on the promise of LLMs. If LLMs don't turn into something with a tangible ROI, the term AI will become every bit as radioactive to investors in the future as it is lucrative right now.

Viable paths of research will become much harder to fund if investors get burned because the business model they're funding right now doesn't solidify beyond "trust us bro."

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago)

> the term AI will become every bit as radioactive to investors in the future as it is lucrative right now.

Well, you say that, but somehow crypto is still around despite most schemes being (IMO) a much more explicit scam. We even have politicians supporting it.

[–] [email protected] 2 points 2 weeks ago (1 children)

Sure, but those are largely the big tech companies you're talking about, and research tends to come from universities and private orgs. That funding hasn't stopped, it just doesn't get the headlines like massive investments into LLMs currently do. The market goes in cycles, and once it finds something new and promising, it'll dump money into it until the next hot thing comes along.

There will be massive market consequences if AI fails to deliver on its promises (and I think it will fail, because the promises are ridiculous), and we get those every so often. If we look back about 25 years, we saw the same thing with the dotcom craze: anything with a website got obscene amounts of funding, even without a viable business model, and we had a massive crash. But the important websites survived that bubble bursting, the market recovered pretty quickly, and within a decade we had yet another massive correction due to another bubble (the housing market, mostly due to corruption in the financial sector).

That's how the market goes. I think AI will crash, and I think it'll likely crash in the next 5 years or so, but the underlying technologies will absolutely be a core part of our day-to-day life in the same way the Internet is after the dotcom burst. It'll also look quite a bit different IMO than what we're seeing today, and within 10 years of that crash, we'll likely be beyond where we were just before the crash, at least in terms of overall market capitalization.

It's a messy cycle, but it seems to work pretty well in aggregate.

[–] [email protected] 4 points 2 weeks ago (1 children)

> Sure, but those are largely the big tech companies you’re talking about, and research tends to come from universities and private orgs.

Well, that's because the hyperscalers are the only ones who can afford it at this point. Altman has said GPT-4's training cost was in the neighborhood of $100M (largely subsidized by Microsoft). The scale of capital being set on fire in the pursuit of LLMs is just staggering. That's why I think the failure of LLMs would have serious knock-on effects on AI research generally.

To be clear: I don't disagree with you that AI research will continue and will eventually recover. I just think that if the LLM bubble pops, it's going to set things back for years, because it will be much harder for researchers to get funding. It won't be "LLMs fail and everyone else continues on as normal"; it's going to be "LLMs fail and cause significant collateral damage to the research community."

[–] [email protected] 3 points 2 weeks ago (1 children)

> The scale of capital being set on fire in the pursuit of LLMs is just staggering.

I'm guessing you weren't around in the 90s then? Because the amount of money set on fire on stupid dotcom startups was also staggering. Yet here we are: the winners survived, and the market has completely recovered (it took about 15 years because 2008 happened).

> I just think that if the LLM bubble pops, it’s going to set things back for years because it will be much more difficult for researchers to get funded for a long time going forward

Maybe. Or if the research is promising enough, investors will dump money into it just like they did with LLMs, and we'll be right back where we are now with ridiculous valuations.

[–] [email protected] 1 points 2 weeks ago (1 children)

> I'm guessing you weren't around in the 90s then? Because the amount of money set on fire on stupid dotcom startups was also staggering.

The scale is very different. OpenAI needs to raise capital at a valuation far higher than any other startup in history just to keep the doors open another 18-24 months. And then continue to do so.

There's also a very large difference between wide-ranging bad investments and extremely concentrated ones, and the current bubble is distinctly the latter. There hasn't really been a bubble this dependent on massive capital investments by a handful of major players before.

There's OpenAI and Anthropic (and, by proxy, MS/Google/Amazon). Meta is a lesser player. Musk-backed companies are teetering at the edge of being also-rans, and there's a huge cliff for everything after that.

It's hard for me to imagine investors who don't understand the technology now, and who get burned by it, being enthusiastic about investing in a new technology they also don't understand, one that promises the same things but is "totally different this time, trust me." Institutional and systemic trauma is real.

> (took about 15 years because 2008 happened).

I mean, that's kind of exactly what I'm saying? Not that it's irrecoverable, but that losing a decade-plus of progress is significant. I think the disconnect is that you don't seem to think that's a big deal as long as things eventually bounce back. I see it as potentially losing out on a generation's worth of researchers, which is one of the largest opportunity costs associated with the LLM craze.

[–] [email protected] 1 points 2 weeks ago

> OpenAI needs to raise capital at a valuation far higher than any other startup in history

The only difference is the concentration of wealth. Whether you spread the eggs across a dozen baskets or put them all in one doesn't matter if the farm producing the eggs has a salmonella outbreak. It's the same underlying problem whether it impacts a handful of companies or hundreds: investors are pouring way too much into the same thing.

That said, the investment is still somewhat spread out among OpenAI, Microsoft, Apple, Google, Meta, and Amazon (leaving Nvidia out intentionally here since their risk is limited). Each of those is investing a ton into AI, so if there's a problem in management instead of the underlying tech, then there will be winners and losers among that bunch, but if there's a problem with the underlying tech, all of them are going to get hit.

> It’s hard for me to imagine investors that don’t understand the technology now but getting burned by it being enthusiastic about investing in a new technology they don’t understand that promises the same things

But that's just it: they'll market it differently. Apple has the "Apple Intelligence" brand they're going for, and they're trying to distance themselves a bit from the rest of the pack. Amazon is largely betting on AI processing hardware, so they're a bit less exposed if consumers shift from one incarnation to another, provided the replacement still uses similar hardware. One of those players will capitalize on the hysteria going the other direction and rebrand successfully to attract investors.

So if LLMs end up being a liability, we'll see a bunch of rebranding of similar tech (say, "real intelligence" or "intelligent digital assistant" or whatever). Some companies will transition successfully, others won't, but tech companies will find a way to keep the funding flowing.

> losing a decade plus of progress is significant

But it's not real progress, it's inflated progress. If you look at average, inflation-adjusted returns (CAGR, not a simple average) over the past 30 years, from the start of the dotcom era (1993 -> 2023), average returns are 7.5%/year. Over 20 years (1993 -> 2013), it's 6.7%.
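The CAGR-vs-simple-average distinction matters here, since boom/bust cycles make a simple average look far better than what investors actually keep. A minimal sketch of the math (the return series is made up purely to illustrate, not actual index data):

```python
def cagr(yearly_returns):
    """Compound annual growth rate: the constant yearly return that
    would produce the same total growth as the actual sequence."""
    growth = 1.0
    for r in yearly_returns:
        growth *= 1 + r
    return growth ** (1 / len(yearly_returns)) - 1

def simple_average(yearly_returns):
    """Arithmetic mean of the yearly returns (ignores compounding)."""
    return sum(yearly_returns) / len(yearly_returns)

# Illustrative only: a +50% boom year followed by a -50% bust year.
booms_and_busts = [0.50, -0.50]
print(f"simple average: {simple_average(booms_and_busts):.1%}")  # 0.0%
print(f"CAGR:           {cagr(booms_and_busts):.1%}")            # -13.4%
```

The simple average says you broke even, but compounding shows the sequence actually destroyed 25% of the capital (1.5 × 0.5 = 0.75), which is why CAGR is the meaningful figure for bubble-and-crash periods like 1993-2023.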

If you look at innovations: smartphones started coming out shortly after the dotcom bust, "Web 2.0" was coined in 1999 (at the peak of the dotcom bubble) and became a thing in the early 2000s, and so on. There was a lot of innovation in tech that seemed largely unaffected by the dotcom bubble.

So I'm really not worried about it. We had a massive tech correction in 2000, yet the decade following had some of the biggest changes in tech, a lot of it coming from the companies that survived the dotcom bubble. Likewise after the 2008 crash, the financial sector had a massive run. I don't see any reason for the AI bubble to be any different.