
I am disappointed in the AI discourse

Technology
  • Reality, where observation precedes perception

    Your username is 'anus', and you're posting nonsensical blog slop on Lemmy.

    That's an interesting reality that I'm not sure many others participate in.

  • Reality, where observation precedes perception

    I think we should swap usernames.

  • Your username is 'anus', and you're posting nonsensical blog slop on Lemmy.

    That's an interesting reality that I'm not sure many others participate in.

    The irony of focusing on my username when logical coherence is in question

  • Reality, where observation precedes perception

    An observation is perception(?).

  • God, that was a bad read. Not only is this person woefully misinformed, they're complaining about the state of discourse while directly contributing to the problem.

    If you're going to write about tech, at least take some time to have a passable understanding of it, not just "I use the product for shits and giggles occasionally."

    this person woefully misinformed

    In what way, about what? Can you elaborate?

    directly contributing to the problem

    How so?

    have a [passable] understanding of it

    Why do you insinuate that they do not?

  • The irony of focusing on my username when logical coherence is in question

    Well, hey, if I changed my name to "Dumbfuck Mc'Dipshitterson" I think I'd have a PR problem as well.

  • Well, hey, if I changed my name to "Dumbfuck Mc'Dipshitterson" I think I'd have a PR problem as well.

    Oh no, not my public image!

  • An observation is perception(?).

    Try asking ChatGPT if you're confused

  • this person woefully misinformed

    In what way, about what? Can you elaborate?

    directly contributing to the problem

    How so?

    have a [passable] understanding of it

    Why do you insinuate that they do not?

    I'll preface this by saying I'm not an expert, and I don't like to speak authoritatively on things that I'm not an expert in, so it's possible I'm mistaken. Also I've had a drink or two, so that's not helping, but here we go anyways.

    In the article, the author quips about a tweet, and in doing so seems to fundamentally misunderstand how LLMs work:

    I tabbed over to another tab, and the top post on my Bluesky feed was something along these lines:

    ChatGPT is not a search engine. It does not scan the web for information. You cannot use it as a search engine. LLMs only generate statistically likely sentences.

    The thing is… ChatGPT was over there, in the other tab, searching the web. And the answer I got was pretty good.

    The tweet is correct. The LLM has a snapshot understanding of the internet based on its training data. It's not what we would generally consider a true index-based search.

    Training LLMs is a costly and time-consuming process, so it's fundamentally impossible to retrain an LLM on anything close to the timescale it takes to update a simple index.

    The author fails to address any of these issues, which suggests to me that they don't know what they're talking about.

    I suppose I could concede that an LLM can fulfill a role similar to the one a search engine traditionally has, but it'd kinda be like saying that a toaster is an oven. They're both confined boxes which heat food, but good luck if you try to bake 2 pies at once in a toaster.
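    To make that index-versus-model contrast concrete, here's a toy sketch (my own illustration in plain Python, assuming nothing about how any real search engine or LLM is actually built): an inverted index can absorb a brand-new page the moment it's crawled, whereas a model's parametric knowledge only changes with another expensive training run.

    ```python
    # Toy inverted index: updating it with a new document is cheap and immediate.
    # Contrast with an LLM, whose "knowledge" is frozen in its weights at training time.
    from collections import defaultdict

    class InvertedIndex:
        def __init__(self):
            self.postings = defaultdict(set)  # term -> set of doc ids

        def add(self, doc_id, text):
            # Incremental update: proportional to the document length, no retraining.
            for term in text.lower().split():
                self.postings[term].add(doc_id)

        def search(self, query):
            # Return ids of documents containing every query term.
            terms = query.lower().split()
            if not terms:
                return set()
            result = set(self.postings[terms[0]])
            for term in terms[1:]:
                result &= self.postings[term]
            return result

    index = InvertedIndex()
    index.add("d1", "LLMs generate statistically likely sentences")
    index.add("d2", "new domain announced for the site today")   # added seconds after crawling
    print(index.search("new domain"))  # {'d2'} -- the fresh page is searchable immediately
    ```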

  • The tweet is correct. The LLM has a snapshot understanding of the internet based on its training data. It's not what we would generally consider a true index-based search.

    I think ChatGPT does web searches now, maybe for the reasoning models. At least it looks like it's doing that.

  • The tweet is correct. The LLM has a snapshot understanding of the internet based on its training data. It's not what we would generally consider a true index-based search.

    ChatGPT searches the web.

    You can temporarily add context on top of the training data; it's how you can import a document, have the model read through it, and output, say, an Excel spreadsheet based on a PDF's contents.
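    For what it's worth, here's a rough sketch of what "temporarily adding context" amounts to, using a made-up call_llm() helper (hypothetical; the real ChatGPT tooling is more involved): the document text just gets placed into the prompt for that one exchange, and the model's weights never change.

    ```python
    # Hypothetical sketch of in-context augmentation: the document is only "known"
    # for the duration of this prompt; nothing is added to the training data.

    def call_llm(prompt: str) -> str:
        # Stand-in for a real model call (local model or hosted API).
        return f"(model output for a {len(prompt)}-character prompt)"

    def answer_from_document(document_text: str, task: str) -> str:
        # Inject the extracted document text as temporary context for one request.
        prompt = (
            "You are given the contents of a document.\n"
            "--- DOCUMENT START ---\n"
            f"{document_text}\n"
            "--- DOCUMENT END ---\n\n"
            f"Task: {task}\n"
            "Use only the document above."
        )
        return call_llm(prompt)

    # Usage: text pulled out of a PDF goes straight into the prompt.
    print(answer_from_document(
        "Invoice 1234: 2x widget @ $5; 1x gadget @ $12",
        "List the line items as CSV rows.",
    ))
    ```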

  • The author fails to address any of these issues, which suggests to me that they don't know what they're talking about.

    One doesn't need to know how an engine works to know the Ford Pinto was a disaster.

    One doesn't need to know how LLMs work to know they are pretty destructive and terrible.

    Note: I'm not going to argue this. It's just how things are now, and no apologetics will change what it is.

  • Not only is Steve right that ChatGPT writes better than the average person (which is indeed an elitist asshole take), ChatGPT has better logical reasoning than the average lemmy commenter

    Dude. Go outside

  • ChatGPT searches the web.

    You can temporarily add context on top of the training data; it's how you can import a document, have the model read through it, and output, say, an Excel spreadsheet based on a PDF's contents.

    Appreciate the correction. Happen to know of any whitepapers or articles I could read on it?

    Here's the thing, I went out of my way to say I don't know shit from bananas in this context, and I could very well be wrong. But the article certainly doesn't sufficiently demonstrate why it's right.

    Most technical articles I click on go through a step-by-step process to show how the author gained an understanding of the subject material, and it's laid out in a manner that less technical people can still follow. The payoff is that you come out feeling you understand a little bit more than when you went in.

    This article is just full on "trust me bro". I went in with a mediocre understanding, and came out about the same, but with a nasty taste in my mouth. Nothing of value was learned.

  • This post did not contain any content.

    This is an argument of semantics more than anything. Like asking if Linux has a GUI. Are they talking about the kernel or a distro? Are some people going to be really pedantic about it? Definitely.

    An LLM is a fixed blob of binary data that can take inputs, do some statistical transformations, then produce an output. ChatGPT is an entire service or ecosystem built around LLMs. Can it search the web? Well, sure, they've built tooling around the model to allow it to do that. However, if I were to run an LLM locally on my own PC, it doesn't necessarily have that tooling programmed around it to allow for something like that.

    Now, can we expect every person to be fully up to date on the product offerings at ChatGPT? Of course not. It's not unreasonable for someone to state that an LLM doesn't get its data from the Internet in real time, because in general, they are a fixed data blob. The real crux of the matter is people's understanding of what LLMs are, and whether their answers can be trusted. We continue to see examples daily of people doing really stupid stuff because they accepted an answer from ChatGPT or a similar service as fact. Maybe it does have a tiny disclaimer warning against that. But then the actual marketing of these things always makes them seem far more capable than they really are, and the LLM itself often speaks in a confident manner, which can fool a lot of people if they don't have a deep understanding of the technology and how it works.
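    A loose sketch of that "tooling around the model" idea, with entirely made-up function names (real services wire this up differently): the model itself only maps text to text, and it's the surrounding service that notices a search request, runs it, and feeds the results back in as more prompt text.

    ```python
    # Hypothetical wrapper around a bare text-to-text model.
    # The model never touches the network; the service around it does.

    def bare_llm(prompt: str) -> str:
        # Stand-in for a local, fixed-weights model: text in, text out.
        # Here it simply pretends it wants a search for anything about "today".
        if "today" in prompt and "Search results:" not in prompt:
            return "SEARCH: top technology news today"
        return f"(answer based only on the prompt: {prompt[:60]}...)"

    def web_search(query: str) -> str:
        # Stand-in for whatever search backend the service wires in.
        return f"(pretend search results for: {query})"

    def chat_service(user_message: str) -> str:
        draft = bare_llm(user_message)
        # The *service* decides to search, based on a convention in the model's output.
        if draft.startswith("SEARCH:"):
            results = web_search(draft.removeprefix("SEARCH:").strip())
            # Results go back in as ordinary prompt text; the weights never change.
            draft = bare_llm(f"{user_message}\n\nSearch results:\n{results}")
        return draft

    print(chat_service("What happened in tech today?"))
    ```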

  • ChatGPT searches the web.

    You can temporarily add context on top of the training data; it's how you can import a document, have the model read through it, and output, say, an Excel spreadsheet based on a PDF's contents.

    But it doesn't do that for an entire index. It can just skim a few extra pages you're currently chatting about. It will, for example, have trouble with the latest news or with finding the new domain of someone's favorite piracy site after the old one got shut down.

  • Appreciate the correction. Happen to know of any whitepapers or articles I could read on it?

    This article is just full on "trust me bro". I went in with a mediocre understanding, and came out about the same, but with a nasty taste in my mouth. Nothing of value was learned.

    He didn't write that to teach but to vent. The intended audience is people who already know.

    For more information on ChatGPT's current capabilities, consult the API docs. I found that to be the most concise source of reliable information. And under no circumstances believe anything about AI that you read on Lemmy.

    Kudos for being willing to learn.

  • Not only is Steve right that ChatGPT writes better than the average person (which is indeed an elitist asshole take), ChatGPT has better logical reasoning than the average lemmy commenter

    I 100% agree with the first point, but I’d make a slight correction to the second: it’s debatable whether an LLM can truly use what we call “logic,” but it’s undeniable that its output is far more logical than that of not only the average Lemmy user, but the vast majority of social media users in general.

  • Try asking ChatGPT if you're confused

    I'm making that statement. Sorry if it was unclear.

  • This is an argument of semantics more than anything. Like asking if Linux has a GUI. Are they talking about the kernel or a distro? Are some people going to be really pedantic about it? Definitely.

    An LLM is a fixed blob of binary data that can take inputs, do some statistical transformations, then produce an output. ChatGPT is an entire service or ecosystem built around LLMs. Can it search the web? Well, sure, they've built tooling around the model to allow it to do that. However, if I were to run an LLM locally on my own PC, it doesn't necessarily have that tooling programmed around it.

    Do you think that human communication is more than statistical transformation of input to output?
