
(LLM) A language model built for the public good

Technology
  • Gigantic hater of all things LLM or "AI" here.

    The only genuine contribution I can think of that LLMs have made to society is their translation capabilities. So even I can see how a fully open source model with "multilingual fluency in over 1,000 languages" could be potentially useful.

    And even if it is all a scam, if this prevents people from sending money to China or the US as they are falling for the scam, I guess that's also a good thing.

    Could I find something to hate about it? Oh yeah, most certainly! 🙂

    Most rational AI hater.

  • LLMs are useful for inspiration, light research, etc.

    They should never be used as part of a finished product or as the main scaffolding.

    Honestly, they are pretty good for research too. You can’t imagine the amount of obscure shit that my ChatGPT has surfaced when I bounce ideas off it. But yeah, it’s terrible in finished products; I think everyone knows that, and in a year or two, if they don’t improve, I expect we’ll be back to shoving it behind the scenes as was done before ChatGPT. It’s for the best.

  • Honestly, they are pretty good for research too. You can’t imagine the amount of obscure shit that my ChatGPT has surfaced when I bounce ideas off it. But yeah, it’s terrible in finished products; I think everyone knows that, and in a year or two, if they don’t improve, I expect we’ll be back to shoving it behind the scenes as was done before ChatGPT. It’s for the best.

    That's not research. That's simply surfacing tidbits it found on the net that happen to be true.

    I've asked many questions of many LLMs in my chosen areas of interest and modest expertise, seeking more than basic knowledge (which it often surprisingly lacks); the answer always has at least one error, often so subtle it goes unnoticed until it's too late.

  • Gigantic hater of all things LLM or "AI" here.

    The only genuine contribution I can think of that LLMs have made to society is their translation capabilities. So even I can see how a fully open source model with "multilingual fluency in over 1,000 languages" could be potentially useful.

    And even if it is all a scam, if this prevents people from sending money to China or the US as they are falling for the scam, I guess that's also a good thing.

    Could I find something to hate about it? Oh yeah, most certainly! 🙂

    I hear there are cool advances in medicine, engineering, and such. I imagine techbros have an exponentially bigger budget, though.

  • I'm sure the community will find something to hate about this as well, since this isn't an article about an LLM failing at something.

    According to the article, they've even addressed my environmental concerns. Since it's created by universities, I don't think we'll even have this shoved down our throats all the time.

    I doubt whether it will be more useful than any other general LLM so far but hate it? Nah.

  • I hear there are cool advances in medicine, engineering, and such. I imagine techbros have an exponentially bigger budget, though.

    Usually when I see this, it's using machine learning approaches other than LLMs, and the researchers behind it are usually very careful not to use the term AI, as they are fully aware that this is not what they are doing.

    There's huge potential in machine learning, but LLMs are very little more than bullshit generators, and generative AI is theft producing soulless garbage. LLMs are widely employed because they look impressive, but for anything that requires substance machine learning methods that have been around for years tend to perform better.

    If you can identify cancer in X-rays using machine learning, that's awesome, but that's very separate from the AI hype machine that is currently running wild.

  • Usually when I see this, it's using machine learning approaches other than LLMs, and the researchers behind it are usually very careful not to use the term AI, as they are fully aware that this is not what they are doing.

    There's huge potential in machine learning, but LLMs are very little more than bullshit generators, and generative AI is theft producing soulless garbage. LLMs are widely employed because they look impressive, but for anything that requires substance machine learning methods that have been around for years tend to perform better.

    If you can identify cancer in X-rays using machine learning, that's awesome, but that's very separate from the AI hype machine that is currently running wild.

    To be fair, the LLMs they use for chatbots and stolen-pics generators are not AI either.

  • That's not research. That's simply surfacing tidbits it found on the net that happen to be true.

    I've asked many questions of many LLMs in my chosen areas of interest and modest expertise, seeking more than basic knowledge (which it often surprisingly lacks); the answer always has at least one error, often so subtle it goes unnoticed until it's too late.

    So what you’re saying is that it’s good for research, because you can’t research what you don’t know about.

    It’s good for giving starting points which is exactly what I meant.

    Next time I’ll write a dissertation with hyper specifics because it seems it’s necessary every time LLMs are involved as there’s always someone looking to nitpick the statements.

  • To be fair, the LLMs they use for chatbots and stolen-pics generators are not AI either.

    Yeah, I just find it to be a great rule of thumb. Those who understand what they are doing will be aware that they are not dealing with AI, those who jump to label it as such are usually bullshit artists.

  • So what you’re saying is that it’s good for research, because you can’t research what you don’t know about.

    It’s good for giving starting points which is exactly what I meant.

    Next time I’ll write a dissertation with hyper specifics because it seems it’s necessary every time LLMs are involved as there’s always someone looking to nitpick the statements.

    No you rude fuck.

    If I ask a simple question about a subject, let's say foraging, as I do that a lot, and it's wrong, it's friggin wrong.

    I'll ask about a specific plant. Full disclosure: this is one of my things. 40 years at it. OK? No big stretch to think I know a thing or two.

    So I ask about, let's say, Japanese barberry, an invasive plant that is hated by many, and rightly so at times. The question is: is it edible?

    The answer given was no. The truth is the opposite: it is edible. Hell, there are recipes online for barberry jam. Now, don't go just eating them, though; it's smart to test one or two leaves to see if an individual is allergic. That's not part of the answer, that's foraging 101. But I digress.
    The AI was wrong and then argued about it until I pulled up all of the evidence. The AI then admitted it was wrong, but who cares? It's not alive. Winning an argument with AI is like beating oneself at poker.

    Another example:
    I'll ask about intervals in music (guitar teaching is my main profession now, and has been my passion for 48 years). It got the major scale intervals wrong.

    I asked one of them (can't remember which, apologies) if yogurt can replace eggs as a binding agent, and it said no. That's a friggin home ec tip that's been around for at least a century.

    People who write dissertations don't brag about it, especially to make a point in a thread. It only makes one seem like a person who isn't confident in what they're saying, so they drop a line that they feel will impress others. It doesn't.

    Others' experience is as important, vital, and real as yours regarding the answers given by AI, but you'll brush it off because you feel that somehow you have more insight than others. You don't. You just have more time to pore over AI's mistakes and massage it into getting something close to what you want. That shows an abundance of available time, which means you aren't doing the things I'm talking about.

    Or it means this is something you do for your job, and it works for those specific needs, which is fine, but your needs are not the world's. My needs have been poorly met by that tool you espouse. Much like a rake won't help a guy digging a hole, AI is the wrong tool for most jobs.

    Which means your opinion of my evaluation of AI results is skewed, because you don't value others' experience, no matter how intelligent you are. And that is a sign of ignorance.

    I wish you a good day

  • 30 votes
    6 posts
    15 views
    moseschrute@piefed.social
    While I agree, everyone constantly restating this is not helpful. We should instead ask ourselves what about BlueSky is working and what we can learn from it. For example, I think the threadiverse could benefit from block lists that auto-update with new filter keywords. I’ve seen Lemmy users talk about how much time they spend crafting their filters to get the feed of content they want. It would be much nicer if you could choose and even combine block lists (e.g. US politics).
  • 175 votes
    9 posts
    22 views
    I'm sorry but that capitalisation is really off-putting. You're Not Writing A Headline You Know
  • 181 votes
    16 posts
    65 views
    I really want to know the name of the contractor who made that proposal.
  • 57 votes
    5 posts
    17 views
    Embezzled. The money was used to pay for somebody's vacation.
  • 119 votes
    8 posts
    14 views
    wizardbeard@lemmy.dbzer0.com
    Most still are/can be. Enough that I find it hard to believe people are missing out without podcasts through these paid services.
  • 311 votes
    37 posts
    84 views
    Same, especially when searching technical or niche topics. Since there aren't a ton of results specific to the topic, mostly semi-related results will appear in the first page or two of a regular (non-Gemini) Google search, just due to the higher popularity of those webpages compared to the relevant ones. Even the relevant webpages will have lots of non-relevant or semi-relevant information surrounding the answer I'm looking for.

    I don't know enough about it to be sure, but Gemini is probably just scraping a handful of websites on the first page, and since most of those are only semi-related, the resulting summary is a classic example of garbage in, garbage out. I also think there's probably something in the code that looks for information shared across multiple sources and prioritizes it over something that's only on one particular page (possibly the sole result with the information you need). Then it phrases the summary as a direct answer to your query, misrepresenting the actual information on the pages it scraped. At least Gemini gives sources, I guess.

    The thing that gets on my nerves the most is how often I see people quote the summary as proof of something without checking the sources. It was bad before the rollout of Gemini, but at least back then Google was mostly scraping text and presenting it with little modification, along with a direct link to the webpage. Now, it's an LLM generating text phrased as a direct answer to a question (that was also AI-generated from your search query) using AI-summarized data points scraped from multiple webpages. It's obfuscating the source material further, but I also can't help but feel like it exposes a little of the behind-the-scenes fuckery Google has been doing for years before Gemini: how it bastardizes your query by interpreting it into a question, and then prioritizes homogeneous results that agree on the "answer" to your "question". For years they've been doing this to a certain extent; they just didn't share how they interpreted your query.
  • The AI girlfriend guy - The Paranoia Of The AI Era

    Technology
    6 votes
    1 post
    11 views
    No one has replied
  • Is Washington state falling out of love with Tesla?

    Technology
    61 votes
    10 posts
    37 views
    These Tesla owners who love their cars but hate his involvement with government are a bit ridiculous, because one of the biggest reasons he got involved with shilling for the right is that the government was looking into regulations and investigations concerning how unsafe Tesla cars are.