this post was submitted on 28 Jan 2025
875 points (94.4% liked)

memes


Office space meme:

"If y'all could stop calling an LLM "open source" just because they published the weights... that would be great."

[–] [email protected] 19 points 2 days ago* (last edited 2 days ago) (1 children)

Open source isn't really applicable to LLMs, IMO.

There are open weights (the model itself), openly available training data, and other nuances.

They actually went a step further and provided a very thorough breakdown of the training process, which means others could train similar models from scratch with their own training data. HuggingFace seems to be doing just that as well: https://huggingface.co/blog/open-r1

Edit: see the comment below by BakedCatboy for a more in-depth explanation and a correction of a misconception I had.

[–] [email protected] 15 points 2 days ago (1 children)

It's worth noting that OpenR1 have themselves said that DeepSeek didn't release any code for training the models, nor any of the crucial hyperparameters used. So even if you did have suitable training data, you wouldn't be able to replicate it without re-discovering what they did.

OSI specifically makes a carve-out that allows models to be considered "open source" under their open source AI definition without providing the training data. So when it comes to AI, open source is really about providing the code that kicks off training, checkpoints if used, and details about training-data curation so that a comparable dataset can be compiled to replicate the results.
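To make the distinction concrete, here's a minimal sketch of the gap being described. The checklist names and the "required" set are my own simplification of the comment above (weights and training code required, training data carved out), not OSI's exact wording:

```python
# Hypothetical, simplified checklist: what (per the discussion above)
# an "open source" model release would need vs. what was released.
released = {
    "weights": True,            # published on Hugging Face
    "inference_code": True,
    "training_code": False,     # per OpenR1, not released
    "hyperparameters": False,   # per OpenR1, not released
    "training_data": False,     # OSI carve-out: not required anyway
}

# Simplified reading of the OSI open source AI definition:
# training data is NOT required, but training code/details are.
required = {"weights", "training_code", "hyperparameters"}

missing = sorted(k for k in required if not released[k])
print(missing)  # -> ['hyperparameters', 'training_code']
```

So under even the carve-out definition, open weights alone don't get you there: you can run the model, but you can't replicate how it was made.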

[–] [email protected] 1 points 2 days ago

Thanks for the correction and clarification! I just assumed from the open-r1 post that they gave everything aside from the training data.