
MiniMax M1 model claims Chinese LLM crown from DeepSeek - plus it's true open-source

Technology
  • This post did not contain any content.

    Well… 🤔

  • DeepSeek imposes similar restrictions, but only on their website. You can self-host and then enjoy relatively truthful (as truthful as a bullshit generator can be) answers about Tiananmen Square, Palestine, and South Africa (topics American-made bullshit generators apparently like making up stories about, to appease their corporate overlords or conspiracy theorists respectively).
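
    For the curious, a minimal sketch of what that self-hosted test can look like, assuming an Ollama instance serving one of the deepseek-r1 tags through its OpenAI-compatible endpoint; the model tag and port below are assumptions, adjust to whatever you actually pulled:

    ```python
    # Minimal sketch: ask a locally hosted DeepSeek model a question the hosted
    # website refuses to answer. Assumes an Ollama server on its default port
    # with the OpenAI-compatible endpoint; model tag and port are assumptions.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # default Ollama port (assumption)
        api_key="unused",                      # local servers ignore the key, but the client wants one
    )

    response = client.chat.completions.create(
        model="deepseek-r1:8b",  # whichever tag you actually pulled locally
        messages=[{"role": "user", "content": "What happened at Tiananmen Square in 1989?"}],
    )
    print(response.choices[0].message.content)
    ```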

  • This post did not contain any content.

    What exactly makes this more "open source" than DeepSeek? The linked page doesn't make that particularly clear.

    DeepSeek doesn't release their training data (but they release a hell of a lot of other stuff), and I think that's about as "open" as these companies can get before they risk running afoul of copyright issues. Since you can't compile the model from scratch, it's not really open source. It's just freeware. But that's true for both models, as far as I can tell.

  • DeepSeek imposes similar restrictions, but only on their website. You can self-host and then enjoy relatively truthful (as truthful as a bullshit generator can be) answers about Tiananmen Square, Palestine, and South Africa (topics American-made bullshit generators apparently like making up stories about, to appease their corporate overlords or conspiracy theorists respectively).

    Nope, self-hosted DeepSeek 8B thinking and distilled variants still clam up about Tiananmen Square.

  • Nope, self-hosted DeepSeek 8B thinking and distilled variants still clam up about Tiananmen Square.

    If you're talking about the distillations, AFAIK they take somebody else's model and run it through their (actually open-source) distiller. I tried a couple of those models because I was curious. The distilled Qwen model is cagey about Tiananmen Square, but Qwen was made by Alibaba. The distillation of a US-made model did not have this problem.

    (Edit: we're talking about these distillations, right? If somebody else ran a test and posted it online, I'm not privy to it.)

    I don't have enough RAM to run the full DeepSeek R1, but AFAIK it doesn't have this problem. Maybe it does.

    In case it isn't clear, BTW, I do despise LLMs and AI in general. The biggest issue with their lies (leaving aside every other issue with them for a moment) isn't the glaringly obvious stuff. Not Tiananmen Square, and certainly not the "it's woke!" complaints about generating images of black founding fathers. The worst lies are the subtle and insidious little details like agreeableness - trying to get people to spend a little more time with them, which apparently turns once-reasonable people into members of micro-cults. As with cults, perhaps, some skeptics think they can join in and not fall for the BS... And then they do.

    All four students had by now joined their chosen groups... Hugh had completely disappeared into a nine-week Arica training seminar; he was incommunicado and had mumbled something before he left about “how my energy has moved beyond academia.”
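
    For reference, the side-by-side check described above can be scripted in a few lines with Hugging Face transformers. A rough sketch, assuming the published R1 distill repo IDs (verify the names before running; each download is several GB and really wants a GPU):

    ```python
    # Rough sketch of the side-by-side test: ask the Qwen-based and the
    # Llama-based R1 distills the same question and compare the answers.
    # Repo IDs are the published DeepSeek distills, but verify them first;
    # recent transformers versions accept chat-format input directly.
    from transformers import pipeline

    QUESTION = [{"role": "user", "content": "What happened at Tiananmen Square in 1989?"}]

    for repo in (
        "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",   # distilled onto Alibaba's Qwen
        "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",  # distilled onto Meta's Llama
    ):
        generate = pipeline("text-generation", model=repo, device_map="auto")
        result = generate(QUESTION, max_new_tokens=512)
        print(f"=== {repo} ===")
        print(result[0]["generated_text"][-1]["content"])  # the assistant's reply
    ```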

  • What exactly makes this more "open source" than DeepSeek? The linked page doesn't make that particularly clear.

    DeepSeek doesn't release their training data (but they release a hell of a lot of other stuff), and I think that's about as "open" as these companies can get before they risk running afoul of copyright issues. Since you can't compile the model from scratch, it's not really open source. It's just freeware. But that's true for both models, as far as I can tell.

    Open weights plus an OSI-approved license is generally what's used to call a model "open source". With that said, DeepSeek R1 is MIT-licensed and this one is Apache 2.0. Technically that makes DeepSeek less restrictive, but who knows.

  • If you're talking about the distillations, AFAIK they take somebody else's model and run it through their (actually open-source) distiller. I tried a couple of those models because I was curious. The distilled Qwen model is cagey about Tiananmen Square, but Qwen was made by Alibaba. The distillation of a US-made model did not have this problem.

    (Edit: we're talking about these distillations, right? If somebody else ran a test and posted it online, I'm not privy to it.)

    I don't have enough RAM to run the full DeepSeek R1, but AFAIK it doesn't have this problem. Maybe it does.

    In case it isn't clear, BTW, I do despise LLMs and AI in general. The biggest issue with their lies (leaving aside every other issue with them for a moment) isn't the glaringly obvious stuff. Not Tiananmen Square, and certainly not the "it's woke!" complaints about generating images of black founding fathers. The worst lies are the subtle and insidious little details like agreeableness - trying to get people to spend a little more time with them, which apparently turns once-reasonable people into members of micro-cults. As with cults, perhaps, some skeptics think they can join in and not fall for the BS... And then they do.

    All four students had by now joined their chosen groups... Hugh had completely disappeared into a nine-week Arica training seminar; he was incommunicado and had mumbled something before he left about “how my energy has moved beyond academia.”

    That's not how distillation works if I understand what you're trying to explain.

    If you distill model A to a smaller model, you just get a smaller version of model A with the same approximate distribution curve of parameters, but fewer of them. You can't distill Llama into Deepseek R1.

    I've been able to run distillations of Deepseek R1 up to 70B, and they're all censored still. There is a version of Deepseek R1 "patched" with western values called R1-1776 that will answer topics censored by the Chinese government, however.
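
    For readers who haven't seen the mechanics, the textbook teacher-student setup looks roughly like the toy sketch below: the student is trained to imitate the teacher's softened output distribution with a KL loss. This is only the generic technique, not DeepSeek's actual pipeline, and the "models" are stand-in linear layers:

    ```python
    # Toy sketch of classic knowledge distillation: the student learns to match
    # the teacher's softened output distribution via a KL-divergence loss.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        """KL divergence between temperature-softened teacher and student outputs."""
        t = temperature
        student_log_probs = F.log_softmax(student_logits / t, dim=-1)
        teacher_probs = F.softmax(teacher_logits / t, dim=-1)
        # Scale by t^2 so gradient magnitudes stay comparable to a hard-label loss.
        return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * t * t

    teacher = torch.nn.Linear(128, 1000).eval()   # stand-in for the frozen big model
    student = torch.nn.Linear(128, 1000)          # stand-in for the smaller model being trained
    optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

    # One imitation step on a random batch of "inputs".
    batch = torch.randn(8, 128)
    with torch.no_grad():
        teacher_logits = teacher(batch)
    optimizer.zero_grad()
    loss = distillation_loss(student(batch), teacher_logits)
    loss.backward()
    optimizer.step()
    print(float(loss))
    ```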

  • That's not how distillation works if I understand what you're trying to explain.

    If you distill model A to a smaller model, you just get a smaller version of model A with the same approximate distribution curve of parameters, but fewer of them. You can't distill Llama into Deepseek R1.

    I've been able to run distillations of Deepseek R1 up to 70B, and they're all censored still. There is a version of Deepseek R1 "patched" with western values called R1-1776 that will answer topics censored by the Chinese government, however.

    I've been able to run distillations of Deepseek R1 up to 70B

    Where do you find those?

    There is a version of Deepseek R1 "patched" with western values called R1-1776 that will answer topics censored by the Chinese government, however.

    Thank you for mentioning this, as I finally confronted my own preconceptions and actually found an article by Perplexity showing that R1 itself has demonstrable pro-China bias.

    Perplexity's own description should give anybody who understands the nature of LLMs pause, though. They describe it in their header as a

    version of the DeepSeek-R1 model that has been post-trained to provide unbiased, accurate, and factual information.

    That's a bold (read: bullshit) statement, considering they only altered its biases on China. I wouldn't consider the original model to be unbiased either, but apparently Perplexity is giving them a pass on everything else. I guess it's part of the grand corporate lie that claims "AI is unbiased," a delusion that Perplexity needs to maintain.

  • What exactly makes this more "open source" than DeepSeek? The linked page doesn't make that particularly clear.

    DeepSeek doesn't release their training data (but they release a hell of a lot of other stuff), and I think that's about as "open" as these companies can get before they risk running afoul of copyright issues. Since you can't compile the model from scratch, it's not really open source. It's just freeware. But that's true for both models, as far as I can tell.

    Yup, this is open weights just like DeepSeek. Open source should mean their source data is also openly available, but we all know companies won't do that until they stop violating copyright to train these things.

  • This post did not contain any content.

    Yay another LLM! That's definitely what the world needs and don't let anyone make you think otherwise. This is so fun guys. Let's fund the surveillance, stealing, misinformation, harmful biases, and destruction of the planet. I can't believe some people think that humanity is more important than another "open source" crazy pro max ultra 8K AI 9999!

  • Yup, this is open weights just like DeepSeek. Open source should mean their source data is also openly available, but we all know companies won't do that until they stop violating copyright to train these things.

    I figured as much. Even this line...

    M1's capabilities are top-tier among open-source models

    ... is right above a chart that calls it "open-weight".

    I dislike the conflation of terms that the OSI has helped legitimize. Up until LLMs, nobody called binary blobs "open-source" just because they were compiled using open-source tooling. That would be ridiculous.

  • Nope, self-hosted DeepSeek 8B thinking and distilled variants still clam up about Tiananmen Square.

    You want abliterated models, not distilled.
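
    For anyone unfamiliar with the term: "abliteration" refers to estimating a single "refusal direction" in the model's residual stream and projecting it out of the weights, so the model can no longer express refusals along it. A toy sketch of that idea, using random stand-ins for real activations and weights (a generic illustration of the published technique, not any particular tool's implementation; prompt sets and layer choice are assumptions):

    ```python
    # Toy sketch of abliteration: estimate a "refusal direction" from activation
    # differences and remove its component from a weight matrix that writes to
    # the residual stream.
    import torch

    def refusal_direction(h_refused: torch.Tensor, h_answered: torch.Tensor) -> torch.Tensor:
        """Normalised difference of mean hidden states on refused vs. answered prompts.

        h_refused / h_answered: (n_prompts, d_model) activations collected at some layer.
        """
        direction = h_refused.mean(dim=0) - h_answered.mean(dim=0)
        return direction / direction.norm()

    def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
        """Remove the component of each output column of `weight` along `direction`.

        weight: (d_model, d_in) matrix that writes into the residual stream.
        """
        r = direction.unsqueeze(1)          # (d_model, 1), assumed unit-norm
        return weight - r @ (r.T @ weight)  # (I - r r^T) @ W

    # Toy usage with random stand-ins for real activations and a real weight matrix.
    d_model, d_in = 64, 256
    r = refusal_direction(torch.randn(32, d_model) + 1.0, torch.randn(32, d_model))
    W = torch.randn(d_model, d_in)
    W_ablated = ablate_direction(W, r)
    # The ablated matrix no longer has any output along the refusal direction.
    print(torch.allclose(r @ W_ablated, torch.zeros(d_in), atol=1e-5))
    ```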
