(LLM) A language model built for the public good

Technology
  • I'm sure the community will find something to hate about this as well, since this isn't an article about an LLM failing at something.

    According to the article, they've even addressed my environmental concerns. Since it's created by universities, I don't think we'll even have this shoved down our throats all the time.

    I doubt it will be more useful than any other general LLM so far, but hate it? Nah.

  • Usually when I see this, it's using machine learning approaches other than LLMs, and the researchers behind it are usually very careful not to use the term AI, as they are fully aware that this is not what they are doing.

    There's huge potential in machine learning, but LLMs are very little more than bullshit generators, and generative AI is theft producing soulless garbage. LLMs are widely employed because they look impressive, but for anything that requires substance machine learning methods that have been around for years tend to perform better.

    If you can identify cancer in X-rays using machine learning, that's awesome, but that's very separate from the AI hype machine that is currently running wild.

  • That's not research. That's simply surfacing tidbits it found on the net that happen to be true.

    I've asked many questions of many LLMs in my chosen areas of interest and modest expertise, seeking more than basic knowledge (which it often surprisingly lacks), and it always has at least one error. Often so subtle it goes unnoticed until it's too late.

    So what you’re saying is that it’s good for research, because you can’t research what you don’t know about.

    It’s good for giving starting points, which is exactly what I meant.

    Next time I’ll write a dissertation with hyper-specifics, because it seems that’s necessary every time LLMs are involved, as there’s always someone looking to nitpick the statements.

  • Yeah, I just find it to be a great rule of thumb. Those who understand what they are doing will be aware that they are not dealing with AI, those who jump to label it as such are usually bullshit artists.

  • No you rude fuck.

    If I ask a simple question about a subject, let's say foraging, as I do that a lot, and it's wrong, it's friggin wrong.

    I'll ask about a specific plant. Full disclosure: this is one of my things. 40 years at it. Ok? No big stretch to think I know a thing or two.

    So I ask about, let's say, Japanese barberry. An invasive plant that is hated by many, and rightly so at times. The question is: is it edible?

    The answer given was no. The truth is the opposite: it is edible. Hell, there are recipes online for barberry jam. Now don't go just eating them, though. It's smart to test one or two leaves to see if an individual is allergic. That's not part of the answer, that's foraging 101. But I digress.
    The AI was wrong and then argued about it until I pulled up all of the evidence. The AI then admitted it was wrong, but who cares? It's not alive. Winning an argument with AI is like beating oneself at poker.

    Another example:
    I'll ask about intervals in music (guitar teacher is my main profession now, and my passion for 48 years). It got the major scale intervals wrong.
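    For reference, the pattern in question is fixed: a major scale climbs whole, whole, half, whole, whole, whole, half steps. A minimal Python sketch of it (my own illustration, nothing model-specific):

```python
# Interval pattern of the major scale, in semitones:
# whole, whole, half, whole, whole, whole, half.
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def major_scale(root):
    """Walk the step pattern from a root note, wrapping around 12 pitch classes."""
    i = NOTES.index(root)
    scale = [root]
    for step in MAJOR_STEPS:
        i = (i + step) % 12
        scale.append(NOTES[i])
    return scale

print(major_scale("C"))  # C major: C D E F G A B C
```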

    I asked one of them (can't remember which, apologies) if yogurt can replace eggs as a binding agent, and it said no. That's a friggin home ec tip that's been around for at least a century.

    People who write dissertations don't brag about it. Especially to make a point in a thread. It only makes one seem like a person who isn't confident in what they're saying, so they drop a line that they feel will impress others. It doesn't.

    Others' experience is as important, vital, and real as yours regarding the answers given by AI, but you'll brush it off because you feel that somehow you have more insight than others. You don't. You just have more time to pore through AI's mistakes and massage it into giving something close to what you want. That shows an abundance of time available. Which means you aren't doing the things I'm talking about.

    Or it means this is something you do for your job and it works for those specific needs. Which is fine, but your needs are not the world's. My needs have been poorly met by that tool you espouse. Much like a rake won't help a guy digging a hole, AI is the wrong tool for most jobs.

    Which means your opinion of my evaluation of AI results is skewed because you don't value others' experience, no matter how intelligent you are. And that is a sign of ignorance.

    I wish you a good day

  • I use perplexity as my main go-to if I want to use an LLM, since they have access to a wide range of models. It was correct in the cases you mentioned. It's a tool focused on correctness of information, and I've had it hallucinate a lot less than other tools.

    Give it a shot if you're looking for one that focuses on correctness of information. It searches the Web and then feeds the results into the model you choose.

    You can also tell it to only use academic papers, social discussions, or SEC filings.
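    The search-then-generate flow described above can be sketched in miniature. Everything here is a stand-in: `search_web` and the `model` callable are placeholders for illustration, not Perplexity's actual API.

```python
# Toy sketch of a "search the web, then feed results to a model" pipeline
# (retrieval-augmented generation). Both the search and the model are
# stubbed; a real tool would call a search API and an actual LLM.

def search_web(query):
    # Placeholder: pretend these snippets came back from a web search.
    return [
        "Japanese barberry (Berberis thunbergii) berries are edible.",
        "Barberry jam recipes use the ripe red berries.",
    ]

def build_prompt(query, snippets):
    # Ground the model's answer in the retrieved snippets.
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using only the sources below.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

def answer(query, model):
    return model(build_prompt(query, search_web(query)))

# A stand-in "model" that just echoes its prompt, for demonstration.
print(answer("Is Japanese barberry edible?", model=lambda p: p))
```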

  • Machine learning is a subset of the AI branch of computer science. I agree that the pop culture definition of AI is different from the computer science one, but the computer science one is still valid.

  • Large language models and "generative AI" such as Stable Diffusion, Midjourney, and DALL-E are all just machine learning models. We do not currently have a real "AI branch" of computer science; we have a branch of machine learning that poses as AI.

    No matter how good a machine gets at recognizing and predicting patterns, it will not constitute AI, as intelligence is different from pattern recognition and prediction. Even if LLMs can sometimes appear to be reasoning, they importantly are not.

  • ETH Zurich and EPFL will release a large language model (LLM) developed on public infrastructure. Trained on the “Alps” supercomputer at the Swiss National Supercomputing Centre (CSCS), the new LLM marks a milestone in open-source AI and multilingual excellence.

    • In late summer 2025, a publicly developed large language model (LLM) will be released — co-created by researchers at EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS).
    • This LLM will be fully open: This openness is designed to support broad adoption and foster innovation across science, society, and industry.
    • A defining feature of the model is its multilingual fluency in over 1,000 languages.

    Is the Red Cross involved? Because if not, using a red cross in the article is misleading and potentially a crime.
