this post was submitted on 28 Jul 2024
102 points (97.2% liked)

Technology

35148 readers
52 users here now

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads. Otherwise, all such posts are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low-effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 5 years ago
[–] [email protected] 56 points 5 months ago (1 children)

Totally not a bubble though.

[–] [email protected] 24 points 5 months ago* (last edited 5 months ago)

Yeah. It's a legitimate business, where the funders at the top of the pyramid are paid by those that join at the bottom!

[–] [email protected] 47 points 5 months ago (2 children)

350,000 servers? Jesus, what a waste of resources.

[–] [email protected] 37 points 5 months ago (5 children)

just capitalist markets allocating resources efficiently where they're needed

[–] [email protected] 19 points 5 months ago

Sounds like we're going to get some killer deals on used hardware in a year or so

[–] [email protected] 20 points 5 months ago (1 children)

I do expect them to receive more funding, but I also expect that to be tied to pricing increases. And I feel like that could break their neck.

In my team, we're doing lots of GenAI use-cases and far too often, it's a matter of slapping a chatbot interface onto a normal SQL database query, just so we can tell our customers and their bosses that we did something with GenAI, because that's what they're receiving funding for. Apart from these user interfaces, we're hardly solving problems with GenAI.

If the operation costs go up and management starts asking what the pricing for a non-GenAI solution would be like, I expect the answer to be rather devastating for most use-cases.

Like, there's maybe still a decent niche in that developing a chatbot interface is likely cheaper than a traditional interface, so maybe new projects might start out with a chatbot interface and later get a regular GUI to reduce operation costs. And of course, there is the niche of actual language processing, for which LLMs are genuinely a good tool. But yeah, going to be interesting how many real-world use-cases remain once the hype dies down.
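The "chatbot interface slapped onto a normal SQL query" pattern described above can be sketched roughly like this. Here `ask_llm` is a hypothetical stand-in for whatever GenAI API does the question-to-SQL step; it's hard-coded so the sketch runs on its own, but it's the only place GenAI touches the flow:

```python
import sqlite3

def ask_llm(question: str, schema: str) -> str:
    # Hypothetical stand-in for a real GenAI API call that would
    # translate the user's question into SQL given the schema.
    # Hard-coded here so the sketch runs without any external service.
    return "SELECT name, total FROM orders WHERE total > 100"

def chatbot_query(db: sqlite3.Connection, question: str) -> list:
    schema = "orders(name TEXT, total REAL)"
    sql = ask_llm(question, schema)    # the only "GenAI" step
    return db.execute(sql).fetchall()  # everything else is plain SQL

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (name TEXT, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("alice", 250.0), ("bob", 40.0)])

print(chatbot_query(db, "Which customers spent more than 100?"))
# → [('alice', 250.0)]
```

Everything after the one translation call is an ordinary database query, which is why the non-GenAI version of the same use case tends to be so much cheaper to operate.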

[–] [email protected] 5 points 5 months ago

It's also worth noting that smaller models work fine for these types of use cases, so it might just make sense to run a local model at that point.

[–] [email protected] 18 points 5 months ago (1 children)

Now's the time to start saving for a discount GPU in approximately 12 months.

[–] [email protected] 13 points 5 months ago (2 children)

They don't use GPUs; they use more specialized devices like the H100.

[–] [email protected] 9 points 5 months ago (1 children)

Everyone who doesn't have access to those is using GPUs though.

[–] [email protected] 8 points 5 months ago (3 children)

We are talking specifically about OpenAI, though.

[–] [email protected] 7 points 5 months ago (1 children)

People who previously were at the high end of the GPU market can now afford used H100s -> they sell their GPUs -> we can maybe afford them

[–] [email protected] 3 points 5 months ago

Can I use an H100 to run Helldivers 2?

[–] [email protected] 16 points 5 months ago (1 children)

Good. It's fake crap tech that no one needs.

[–] [email protected] 9 points 5 months ago (5 children)

It's actually really awesome and truly helps with my work.

[–] [email protected] 14 points 5 months ago

I hope so! I am so sick and tired of AI this and AI that at work.

[–] [email protected] 13 points 5 months ago (3 children)

The start(-up?)[sic] generates up to $2 billion annually from ChatGPT and an additional $1 billion from LLM access fees, translating to an approximate total revenue of between $3.5 billion and $4.5 billion annually.

I hope their reporting is better than their math...

[–] [email protected] 10 points 5 months ago

Probably used ChatGPT…

[–] [email protected] 8 points 5 months ago (1 children)

Maybe they also added 500M for stuff like Dall-E?

[–] [email protected] 3 points 5 months ago

Good point - I guess it could have easily fallen out while being edited, too

[–] [email protected] 12 points 5 months ago

Bubble. Meet pop.

[–] [email protected] 7 points 5 months ago

AI stands for artificial income.

[–] [email protected] 6 points 5 months ago

Last time a batch of these popped up, they were saying they'd be bankrupt in 2024, so I guess they've made it to 2025 now. I wonder if we'll see similar articles again next year.

[–] [email protected] 6 points 5 months ago (1 children)

For anyone doing a serious project, it's much more cost effective to rent a node and run your own models on it. You can spin them up and down as needed, cache often-used queries, etc.
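Caching often-used queries, as suggested above, can be as simple as memoizing on the prompt. A minimal sketch, where `run_model` is a hypothetical stand-in for inference on the rented node:

```python
from functools import lru_cache

def run_model(prompt: str) -> str:
    # Hypothetical stand-in for a real inference call to the model
    # running on your rented node; counts calls so the caching is visible.
    run_model.calls += 1
    return f"answer to: {prompt}"
run_model.calls = 0

@lru_cache(maxsize=1024)
def cached_query(prompt: str) -> str:
    # Identical prompts are served from memory instead of
    # re-running inference on the node.
    return run_model(prompt)

cached_query("What is our refund policy?")
cached_query("What is our refund policy?")  # served from cache
print(run_model.calls)  # → 1
```

In practice you'd likely key the cache on a normalized prompt and persist it (e.g. Redis) so it survives spinning the node down, but the cost-saving idea is the same.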

[–] [email protected] 6 points 5 months ago

For sure, and in a lot of use cases you don't even need a really big model. There are a few niche scenarios where you require a large context that's not practical to run on your own infrastructure, but in most cases I agree.
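One way to handle that split is to route by estimated prompt size: default to the small local model and fall back to a hosted large-context model only when necessary. A minimal sketch, where the backend names and the rough 4-characters-per-token heuristic are illustrative assumptions, not a real API:

```python
def pick_backend(prompt: str, local_ctx_limit: int = 4096) -> str:
    # Rough token estimate: ~4 characters per token (illustrative heuristic;
    # a real router would use the model's actual tokenizer).
    est_tokens = len(prompt) // 4
    # The small local model handles the common case; only long-context
    # requests get sent to the (more expensive) hosted model.
    return "local-small" if est_tokens <= local_ctx_limit else "hosted-large"

print(pick_backend("Summarize this ticket"))  # → local-small
print(pick_backend("x" * 100_000))            # → hosted-large
```

With that in place, the hosted API only sees the niche long-context traffic, which keeps the bulk of the volume on your own hardware.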
