The model is, as far as I know, open, even for commercial use. This is in stark contrast with Meta's models, which have (or had?) a bespoke community license restricting commercial use.
Or is there anything that can't be done with the DeepSeek model that I'm unaware of?
The model is open, but it's not open source!
How is this so hard to understand? The complete source of the model is not open. It's not a hard concept.
Sorry if I'm coming off as rude, but I'm getting increasingly frustrated at having to explain a simple combination of two words that is pretty self-explanatory.
OK, I understand now why people are upset: there's a disagreement over terminology.
The source code for the model is open source. It's defined in PyTorch, and the source is available under the MIT license; anyone can download it and do whatever they want with it.
The weights for the model are open, but they're not open source, since they're not source code (or an executable binary, for that matter). No one is arguing that the model weights are open source, but there does seem to be an argument over whether the model itself is.
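To make the code/weights distinction concrete, here's a minimal PyTorch sketch (a hypothetical toy module, not DeepSeek's actual architecture; the weights file name is a placeholder):

```python
import torch
import torch.nn as nn

# The *source* is this Python class definition; the *weights* are a
# separate state_dict artifact that gets loaded into it.
class TinyModel(nn.Module):  # hypothetical toy architecture
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 4)

    def forward(self, x):
        return self.linear(x)

model = TinyModel()                   # built from the open source code alone
# state = torch.load("weights.pt")    # the released weights (placeholder path)
# model.load_state_dict(state)        # only now does it match the released model
```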
And even if they provided the source code for the training script (and all its data), it's unlikely anyone would reproduce the same model weights, due to the randomness involved. Training model weights is not like compiling an executable: you'll get different results every time.
Hey, I have trained several models in PyTorch, Darknet, and TensorFlow.
With the same dataset and the same training parameters, the same final iteration of training actually does return the same weights. There's no randomness unless they specifically add random layers, and that's not really a good idea with RNNs (it wasn't when I was working with them, at least). In any case, the weights should converge to a very similar point even if randomness is introduced, or else the RNN is pretty much worthless.
There's usually randomness involved in the initial weights and in the order the data is processed.
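For reference, a minimal sketch of where those two sources of randomness live in PyTorch and how they're typically pinned down with seeds (the seed values and toy dataset are placeholders):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(42)                  # fixes the random initial weights

dataset = TensorDataset(torch.randn(100, 16), torch.randn(100, 4))
g = torch.Generator()
g.manual_seed(42)                      # fixes the shuffle order of the data
loader = DataLoader(dataset, batch_size=10, shuffle=True, generator=g)

# Optionally force deterministic kernels as well:
# torch.use_deterministic_algorithms(True)
```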
Not enough to make the results diverge. Randomness is added to avoid falling into local maxima during optimization; you should still end up at the same global maximum. Models usually run until their optimization converges.
As stated, if the randomness is large enough that multiple reruns end up with different weights, i.e. optimized for different maxima, the randomization is trash. Anything worth its salt won't have randomization that large.
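A toy illustration of that convergence claim, using a convex least-squares problem where it provably holds (for deep nets it's an empirical claim, not a guarantee): two different random initializations end up at the same weights.

```python
import torch

# Convex toy problem: least-squares regression. Gradient descent converges
# to the unique optimum regardless of the random initial weights.
torch.manual_seed(0)
X = torch.randn(100, 3)
y = X @ torch.tensor([1.0, -2.0, 0.5])

def fit(seed):
    g = torch.Generator().manual_seed(seed)
    w = torch.randn(3, generator=g, requires_grad=True)  # random init
    opt = torch.optim.SGD([w], lr=0.1)
    for _ in range(2000):
        opt.zero_grad()
        ((X @ w - y) ** 2).mean().backward()
        opt.step()
    return w.detach()

print(fit(1))  # ≈ tensor([ 1.0000, -2.0000,  0.5000])
print(fit(2))  # same weights from a different initialization
```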
So, going back to my initial point: we need the training data to validate the weights. There are ways to check the performance of a model (quite literally, the same algorithm used to evaluate the weights during training is then used to evaluate the trained weights post-training), and the performance should be identical, up to a very small rounding error, if a rerun with the same data and parameters is used.
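As a sketch of what that validation could look like in PyTorch (the model, data, and tolerance here are placeholders, not DeepSeek's actual evaluation pipeline):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Score a model with the same loss function used during training.
def evaluate(model, loader, loss_fn):
    model.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            total += loss_fn(model(x), y).item() * x.size(0)
            n += x.size(0)
    return total / n

data = TensorDataset(torch.randn(64, 16), torch.randn(64, 4))  # placeholder data
loss_a = evaluate(nn.Linear(16, 4), DataLoader(data, batch_size=8), nn.MSELoss())

# With the released training data, one could retrain and compare runs:
# assert abs(loss_a - loss_b) < 1e-6   # identical up to rounding error
```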