
(LLM) A language model built for the public good

Technology
  • ETH Zurich and EPFL will release a large language model (LLM) developed on public infrastructure. Trained on the “Alps” supercomputer at the Swiss National Supercomputing Centre (CSCS), the new LLM marks a milestone in open-source AI and multilingual excellence.

    • In late summer 2025, a publicly developed large language model (LLM) will be released — co-created by researchers at EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS).
    • This LLM will be fully open, a choice designed to support broad adoption and foster innovation across science, society, and industry.
    • A defining feature of the model is its multilingual fluency in over 1,000 languages.
  • I'm sure the community will find something to hate about this as well, since this isn't an article about an LLM failing at something.

  • I'm sure the community will find something to hate about this as well, since this isn't an article about an LLM failing at something.

    Gigantic hater of all things LLM or "AI" here.

    The only genuine contribution I can think of that LLMs have made to society is their translation capabilities. So even I can see how a fully open source model with "multilingual fluency in over 1,000 languages" could be potentially useful.

    And even if it is all a scam, if this prevents people from sending money to China or the US while falling for the scam, I guess that's also a good thing.

    Could I find something to hate about it? Oh yeah, most certainly! 🙂

  • I'm sure the community will find something to hate about this as well, since this isn't an article about an LLM failing at something.

    LLMs are useful for inspiration, light research, etc.

    They should never be used as part of a finished product or as the main scaffolding.

  • Gigantic hater of all things LLM or "AI" here.

    The only genuine contribution I can think of that LLMs have made to society is their translation capabilities. So even I can see how a fully open source model with "multilingual fluency in over 1,000 languages" could be potentially useful.

    And even if it is all a scam, if this prevents people from sending money to China or the US while falling for the scam, I guess that's also a good thing.

    Could I find something to hate about it? Oh yeah, most certainly! 🙂

    Most rational AI hater.

    LLMs are useful for inspiration, light research, etc.

    They should never be used as part of a finished product or as the main scaffolding.

    Honestly, they are pretty good for research too. You can't imagine the amount of obscure shit that my ChatGPT has surfaced when I bounce ideas off it. But yeah, it's terrible in finished products; I think everyone knows that, and in a year or two, if they don't improve, I expect we will be back to shoving them behind the scenes as was done before ChatGPT. It's for the best.

  • Honestly, they are pretty good for research too. You can't imagine the amount of obscure shit that my ChatGPT has surfaced when I bounce ideas off it. But yeah, it's terrible in finished products; I think everyone knows that, and in a year or two, if they don't improve, I expect we will be back to shoving them behind the scenes as was done before ChatGPT. It's for the best.

    That's not research. That's simply surfacing tidbits it found on the net that happen to be true.

    I've asked many questions of many LLMs in my chosen areas of interest and modest expertise, seeking more than basic knowledge (which they often surprisingly lack). They always make at least one error, often so subtle it goes unnoticed until it's too late.

  • Gigantic hater of all things LLM or "AI" here.

    The only genuine contribution I can think of that LLMs have made to society is their translation capabilities. So even I can see how a fully open source model with "multilingual fluency in over 1,000 languages" could be potentially useful.

    And even if it is all a scam, if this prevents people from sending money to China or the US while falling for the scam, I guess that's also a good thing.

    Could I find something to hate about it? Oh yeah, most certainly! 🙂

    I hear there are cool advances in medicine, engineering, and such. I imagine techbros have an exponentially bigger budget, though.

  • I'm sure the community will find something to hate about this as well, since this isn't an article about an LLM failing at something.

    According to the article, they've even addressed my environmental concerns. Since it's created by universities, I don't think we'll even have this shoved down our throats all the time.

    I doubt it will be more useful than any other general LLM so far, but hate it? Nah.

  • I hear there are cool advances in medicine, engineering, and such. I imagine techbros have an exponentially bigger budget, though.

    Usually when I see this, it's using machine learning approaches other than LLMs, and the researchers behind it are usually very careful not to use the term AI, as they are fully aware that this is not what they are doing.

    There's huge potential in machine learning, but LLMs are very little more than bullshit generators, and generative AI is theft producing soulless garbage. LLMs are widely employed because they look impressive, but for anything that requires substance machine learning methods that have been around for years tend to perform better.

    If you can identify cancer in X-rays using machine learning, that's awesome, but that's very separate from the AI hype machine that is currently running wild.
