this post was submitted on 19 Jan 2024
255 points (95.4% liked)
Technology
The equivalent of 600k H100s seems pretty extreme though. IDK how many OpenAI has access to, but it's estimated they "only" used 25k to train GPT-4. OpenAI has, in the past, claimed that the diminishing returns of scaling their model past GPT-4's size probably aren't worth it. So, maybe Meta is planning on experimenting with new ANN architectures, or planning on mass deployment of models?
The estimated training time for GPT-4 is 90 days though.
Assuming you could scale that linearly with the amount of hardware, you'd get it down to about 3.75 days. From four times a year to roughly twice a week.
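Back-of-the-envelope, using the ~25k-GPU / ~90-day estimates above and assuming perfectly linear scaling (real runs won't hit that because of communication overhead):

```python
# Naive linear-scaling estimate for a GPT-4-scale training run.
# Figures are the rough estimates quoted in this thread, not official numbers.
gpus_gpt4 = 25_000   # estimated GPUs used to train GPT-4
gpus_meta = 600_000  # Meta's reported H100-equivalent count
days_gpt4 = 90       # estimated GPT-4 training time in days

speedup = gpus_meta / gpus_gpt4   # 24x more hardware
days_meta = days_gpt4 / speedup   # 90 / 24 = 3.75 days

print(f"{speedup:.0f}x hardware -> ~{days_meta:.2f} days per run")
print(f"~{365 / days_meta:.0f} runs/year vs ~{365 / days_gpt4:.0f} runs/year")
```

That works out to roughly 97 full-scale runs a year instead of about 4, which matches the "four times a year to twice a week" framing.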
If you're scrambling to get ahead of the competition, being able to iterate that quickly could very much be worth the money.
Or they just have too much money.
Which will be solved by them spending it.
Would that be diminishing returns on quality, or on training speed?
If I could tweak a model and test it in an hour instead of four, that could really speed up development time.
Quality. Yeah, using the extra compute to speed up development iterations would be a benefit. They could train a bunch of models in parallel and either pick the best one to use or use them all as an ensemble or something.
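Purely illustrative sketch of those two options; the `predict_proba`/`val_score` interface here is made up for the example, not anything Meta-specific:

```python
import numpy as np

def pick_best(models, val_score):
    """Option 1: keep only the candidate with the highest validation score.
    `val_score` is a hypothetical callable mapping a model to a float."""
    return max(models, key=val_score)

def ensemble_predict(models, x):
    """Option 2: average per-model probability outputs as an ensemble.
    Assumes each model exposes a predict_proba(x) -> np.ndarray method."""
    probs = [m.predict_proba(x) for m in models]
    return np.mean(probs, axis=0)
```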
My guess is that the main reason for all the GPUs is that they're going to offer hosting and training infrastructure for everyone. That would align with the strategy of releasing models as "open" and then trying to entice people into their cloud ecosystem. Or maybe they really are trying to achieve AGI, as they state in the article. I don't really know of any ML architectures that would allow for AGI though (besides the theoretical, incomputable AIXI).
Might be a bit of a tell that they think they have something.