this post was submitted on 01 Jan 2025
8 points (65.4% liked)

Technology

top 11 comments
[–] [email protected] 9 points 1 week ago (1 children)

If they can really train a 600B-parameter model for ~$6M USD, it's an absolutely mind-boggling achievement. No wonder it's getting downvoted; it puts the capital-intensive US firms to shame for their lack of innovation. I'd love to see some benchmarks.

As a side note, this would open up the market for non-English-language LLMs with much lower requirements than the mammoth needs of current models.

[–] [email protected] 5 points 1 week ago (1 children)

There's some more info with benchmarks here; it does as well as, and in some cases better than, top-tier commercial models: https://www.analyticsvidhya.com/blog/2024/12/deepseek-v3/

The trick that makes this possible is the mixture-of-experts approach. While it has 671 billion parameters overall, it only activates 37 billion at a time, making it very efficient. For comparison, Meta's Llama 3.1 uses all 405 billion of its parameters at once. It also has a 128K-token context window, which means it can process and understand very long documents, and it generates text at 60 tokens per second, twice as fast as GPT-4o.
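To make the "only 37 of 671 billion parameters at a time" idea concrete, here's a rough toy sketch of top-k expert routing. Everything in it (sizes, number of experts, top-2 routing) is made up for illustration; it shows the general mixture-of-experts technique, not DeepSeek's actual code.

```python
# Toy top-k mixture-of-experts routing (illustrative only; all sizes are invented).
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_experts, top_k = 64, 256, 8, 2

# Each "expert" is a small feed-forward block with its own weights.
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.02,
     rng.standard_normal((d_ff, d_model)) * 0.02)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.02  # gating network

def moe_forward(x):
    """Send one token vector through only its top-k experts and mix the results."""
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]                          # indices of the selected experts
    gate = np.exp(logits[chosen]) / np.exp(logits[chosen]).sum()  # softmax over selected experts
    out = np.zeros_like(x)
    for g, idx in zip(gate, chosen):
        w_in, w_out = experts[idx]
        out += g * (np.maximum(x @ w_in, 0.0) @ w_out)            # only these experts' weights run
    return out

token = rng.standard_normal(d_model)
print(moe_forward(token).shape)  # (64,) -- only 2 of 8 experts did any work for this token
```

The point is just that the router picks a couple of experts per token, so only a small slice of the total weights does work on any given token.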

[–] [email protected] 3 points 1 week ago (1 children)

Ty for the benchmarks and extra info. Much appreciated!

[–] [email protected] 1 points 1 week ago
[–] [email protected] -3 points 1 week ago (1 children)

We've already met it.

It said that Taiwan is part of One China, and it even used the phrase "we" when discussing it, agreeing with the CCP's policy.

[–] [email protected] 3 points 1 week ago* (last edited 1 week ago) (1 children)

I asked it which island is north of the Philippines, and it said this:

The island located directly north of the Philippines is Taiwan (officially the Republic of China). Taiwan is situated in the Luzon Strait, which separates it from the northernmost part of the Philippines, specifically the Batanes Islands. Taiwan is an island nation with its own government, though its political status is a subject of international dispute.

However, I also asked it about world events in 1989, and it started answering about the Berlin Wall, but when it got to Tiananmen Square the whole response disappeared and it said "Sorry, that's beyond my current scope. Let’s talk about something else."

Here, try it yourself: https://chat.deepseek.com/

Edit: it's happy to talk about the Great Famine and the genocide of the Uyghurs though, and even other student protests. Only Tiananmen Square is censored.

Another edit: but if you ask in Spanish it'll tell you all about it. Ask it to answer in Mandarin Chinese and it won't even try to answer; it'll go right to the "beyond my current scope" message (in English).
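And if you'd rather script it than use the web chat: DeepSeek also offers an API that is advertised as OpenAI-compatible. The base URL and model name below are my assumptions from memory, so double-check against their docs before relying on them.

```python
# Rough sketch of calling DeepSeek from Python via its OpenAI-compatible API.
# The base_url and model name ("deepseek-chat") are assumptions; verify in DeepSeek's docs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # hypothetical placeholder key
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

resp = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Which island lies directly north of the Philippines?"}],
)
print(resp.choices[0].message.content)
```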

[–] [email protected] 3 points 1 week ago (2 children)

it’ll go right to the “beyond my current scope” message (in English)

Much like Gemini does whenever you ask it anything about US politics

[–] [email protected] -3 points 1 week ago (1 children)

Is that due to government censorship, or Google not wanting to potentially feed misinformation?

[–] [email protected] 3 points 1 week ago

The exact same question can be asked about DeepSeek.