
I am disappointed in the AI discourse

Technology
  • This post did not contain any content.

    This is an argument of semantics more than anything. Like asking if Linux has a GUI. Are they talking about the kernel or a distro? Are some people going to be really pedantic about it? Definitely.

    An LLM is a fixed blob of binary data that can take inputs, do some statistical transformations, then produce an output. ChatGPT is an entire service or ecosystem built around LLMs. Can it search the web? Well, sure, they've built a solution around the model to allow it to do that. However, if I were to run an LLM locally on my own PC, it doesn't necessarily have the tooling built around it to allow for something like that.

    Now, can we expect every person to be fully up to date on the product offerings at ChatGPT? Of course not. It's not unreasonable for someone to state that an LLM doesn't get its data from the Internet in realtime, because in general, they are a fixed data blob. The real crux of the matter is people's understanding of what LLMs are, and whether their answers can be trusted. We continue to see examples daily of people doing really stupid stuff because they accepted an answer from ChatGPT or a similar service as fact. Maybe it does have a tiny disclaimer warning against that. But the actual marketing of these things always makes them seem far more capable than they really are, and the LLM itself can often speak in a confident manner, which can fool a lot of people if they don't have a deep understanding of the technology and how it works.
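    The "service built around the model" distinction above can be sketched in a few lines. This is a toy illustration, not real ChatGPT internals: `fixed_model`, `fake_web_search`, and `chat_service` are all hypothetical stand-ins showing how the surrounding code, not the frozen model, supplies web access.

    ```python
    def fixed_model(prompt: str) -> str:
        """Stand-in for a frozen LLM: deterministic text in, text out.
        A real model would be a large fixed blob of weights."""
        if "SEARCH_RESULTS:" in prompt:
            return "Answer based on the supplied search results."
        return "NEED_SEARCH: latest news"  # model asks the wrapper for help

    def fake_web_search(query: str) -> str:
        """Stand-in for the tooling the service layer adds around the model."""
        return f"SEARCH_RESULTS: headlines matching '{query}'"

    def chat_service(user_message: str) -> str:
        """The wrapper: call the model, and if it requests a search,
        run the search and call the model again with results injected."""
        reply = fixed_model(user_message)
        if reply.startswith("NEED_SEARCH:"):
            query = reply.split(":", 1)[1].strip()
            reply = fixed_model(user_message + "\n" + fake_web_search(query))
        return reply

    print(chat_service("What happened today?"))
    ```

    A bare local model stops at the "NEED_SEARCH" step; it is the wrapper loop that turns "a fixed blob" into "ChatGPT can search the web."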

  • ChatGPT searches the web.

    You can temporarily add context on top of the training data; it's how you can import a document, have it read through it, and output, say, an Excel spreadsheet based on a PDF's contents.

    But it doesn't do that for an entire index. It can just skim a few extra pages you're currently chatting about. It will, for example, have trouble with the latest news, or with finding the new domain of someone's favorite piracy site after the old one got shut down.
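    The point above, that a document is layered on temporarily and only a limited slice fits, can be sketched like this. Everything here is a hypothetical toy (real models cap tokens, not words, and the numbers are made up), but the mechanism is the same: the text is pasted into the prompt, the weights never change, and anything past the window is invisible.

    ```python
    MAX_CONTEXT_WORDS = 50  # toy stand-in for a real model's token limit

    def fixed_model(prompt: str) -> str:
        """Stand-in for a frozen LLM: it can only 'see' the prompt text."""
        return f"Answer derived from a {len(prompt.split())}-word prompt."

    def ask_about_document(document: str, question: str) -> str:
        """Temporarily layer a document on top of the training data by
        pasting a truncated slice of it into the prompt. Pages beyond
        the window simply never reach the model."""
        visible = " ".join(document.split()[:MAX_CONTEXT_WORDS])
        return fixed_model(f"DOCUMENT: {visible}\nQUESTION: {question}")

    long_doc = "word " * 500  # a "whole index" that will not fit
    print(ask_about_document(long_doc, "What does page 40 say?"))
    ```

    The 500-word document gets cut to 50 words before the model ever sees it, which is why "skimming a few extra pages" works but indexing the whole web does not.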

  • Appreciate the correction. Happen to know of any whitepapers or articles I could read on it?

    Here's the thing, I went out of my way to say I don't know shit from bananas in this context, and I could very well be wrong. But the article certainly doesn't sufficiently demonstrate why it's right.

    Most technical articles I click on go through step-by-step processes to show how they gained understanding of the subject material, and it's laid out in a manner that less technical people can still follow. The payoff is that you come out feeling you understand a little more than when you went in.

    This article is just full on "trust me bro". I went in with a mediocre understanding, and came out about the same, but with a nasty taste in my mouth. Nothing of value was learned.

    He didn't write that to teach but to vent. The intended audience is people who already know.

    For more information on ChatGPT's current capabilities, consult the API docs. I found that to be the most concise source of reliable information. And under no circumstances believe anything about AI that you read on Lemmy.

    Kudos for being willing to learn.

  • Not only is Steve right that ChatGPT writes better than the average person (which is indeed an elitist asshole take), ChatGPT has better logical reasoning than the average Lemmy commenter.

    I 100% agree with the first point, but I’d make a slight correction to the second: it’s debatable whether an LLM can truly use what we call “logic,” but it’s undeniable that its output is far more logical than that of not only the average Lemmy user, but the vast majority of social media users in general.

  • Try asking ChatGPT if you're confused

    I'm making that statement. Sorry if it was unclear.

  • This is an argument of semantics more than anything. Like asking if Linux has a GUI. Are they talking about the kernel or a distro? Are some people going to be really pedantic about it? Definitely.

    An LLM is a fixed blob of binary data that can take inputs, do some statistical transformations, then produce an output. ChatGPT is an entire service or ecosystem built around LLMs. Can it search the web? Well, sure, they've built a solution around the model to allow it to do that. However, if I were to run an LLM locally on my own PC, it doesn't necessarily have the tooling built around it to allow for something like that.

    Now, can we expect every person to be fully up to date on the product offerings at ChatGPT? Of course not. It's not unreasonable for someone to state that an LLM doesn't get its data from the Internet in realtime, because in general, they are a fixed data blob. The real crux of the matter is people's understanding of what LLMs are, and whether their answers can be trusted. We continue to see examples daily of people doing really stupid stuff because they accepted an answer from ChatGPT or a similar service as fact. Maybe it does have a tiny disclaimer warning against that. But the actual marketing of these things always makes them seem far more capable than they really are, and the LLM itself can often speak in a confident manner, which can fool a lot of people if they don't have a deep understanding of the technology and how it works.

    Do you think that human communication is more than statistical transformation of input to output?

  • Not only is Steve right that ChatGPT writes better than the average person (which is indeed an elitist asshole take), ChatGPT has better logical reasoning than the average Lemmy commenter.

    I apologize that apparently Lemmy/Reddit people do not have enough self-awareness to accept good criticism, especially if it was just automatically generated, and have downvoted that to oblivion. Though I don't really think you should respond to comments with a ChatGPT link; it's not exactly helpful. Comes off a tad bit AI Bro...