I am disappointed in the AI discourse

Technology
  • Reality, where observation precedes perception

    Your username is 'anus', and you're posting nonsensical blog slop on Lemmy.

    That's an interesting reality that I'm not sure many others participate in.

  • Reality, where observation precedes perception

    I think we should swap usernames.

  • Your username is 'anus', and you're posting nonsensical blog slop on Lemmy.

    That's an interesting reality that I'm not sure many others participate in.

    The irony of focusing on my username when logical coherence is in question

  • Reality, where observation precedes perception

    An observation is perception(?).

  • God, that was a bad read. Not only is this person woefully misinformed, they're complaining about the state of discourse while directly contributing to the problem.

    If you're going to write about tech, at least take some time to have a passable understanding of it, not just "I use the product for shits and giggles occasionally."

    this person woefully misinformed

    In what way, about what? Can you elaborate?

    directly contributing to the problem

    How so?

    have a [passable] understanding of it

    Why do you insinuate that they do not?

  • The irony of focusing on my username when logical coherence is in question

    Well, hey, if I changed my name to "Dumbfuck Mc'Dipshitterson" I think I'd have a PR problem as well.

  • Well, hey, if I changed my name to "Dumbfuck Mc'Dipshitterson" I think I'd have a PR problem as well.

    Oh no, not my public image!

  • An observation is perception(?).

    Try asking ChatGPT if you're confused

  • this person woefully misinformed

    In what way, about what? Can you elaborate?

    directly contributing to the problem

    How so?

    have a [passable] understanding of it

    Why do you insinuate that they do not?

    I'll preface this by saying I'm not an expert, and I don't like to speak authoritatively on things that I'm not an expert in, so it's possible I'm mistaken. Also I've had a drink or two, so that's not helping, but here we go anyways.

    In the article, the author quips about a tweet, and in doing so seems to fundamentally misunderstand how LLMs work:

    I tabbed over to another tab, and the top post on my Bluesky feed was something along these lines:

    ChatGPT is not a search engine. It does not scan the web for information. You cannot use it as a search engine. LLMs only generate statistically likely sentences.

    The thing is… ChatGPT was over there, in the other tab, searching the web. And the answer I got was pretty good.

    The tweet is correct. The LLM has a snapshot understanding of the internet based on its training data. It's not what we would generally consider a true index-based search.

    Training LLMs is a costly and time-consuming process, so retraining one takes orders of magnitude longer than updating a simple index.

    The author fails to address any of these issues, which suggests to me that they don't know what they're talking about.

    I suppose I could concede that an LLM can fill a role similar to what a search engine traditionally has, but it'd kinda be like saying that a toaster is an oven. They're both confined boxes which heat food, but good luck if you try to bake 2 pies at once in a toaster.
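    A toy inverted index makes the cost gap above concrete (a minimal sketch; the class and the sample documents are made up for illustration): adding a new page to an index is a cheap incremental update, whereas an LLM's knowledge is frozen at training time.

    ```python
    from collections import defaultdict

    class InvertedIndex:
        """Toy search index: adding a new page is an O(words) update."""
        def __init__(self):
            self.postings = defaultdict(set)  # term -> set of doc ids

        def add_doc(self, doc_id, text):
            for term in text.lower().split():
                self.postings[term].add(doc_id)

        def search(self, term):
            return sorted(self.postings[term.lower()])

    index = InvertedIndex()
    index.add_doc(1, "LLMs generate statistically likely sentences")
    index.add_doc(2, "A search engine scans an index of the web")
    # A brand-new page is searchable the moment it is indexed -- no retraining.
    index.add_doc(3, "New domain of the piracy site")
    print(index.search("index"))  # [2]
    print(index.search("new"))    # [3]
    ```

    By contrast, getting that third document "into" an LLM's weights would mean another training run over the whole corpus.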

    I'll preface this by saying I'm not an expert, and I don't like to speak authoritatively on things that I'm not an expert in, so it's possible I'm mistaken. […] The tweet is correct. The LLM has a snapshot understanding of the internet based on its training data. […]

    I think ChatGPT does web searches now, maybe for the reasoning models. At least it looks like it's doing that.

    I'll preface this by saying I'm not an expert, and I don't like to speak authoritatively on things that I'm not an expert in, so it's possible I'm mistaken. […] The tweet is correct. The LLM has a snapshot understanding of the internet based on its training data. […]

    ChatGPT searches the web.

    You can temporarily add context on top of the training data; it's how you can import a document, have the model read through it, and output, say, an Excel spreadsheet based on a PDF's contents.
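    That "temporary context" is just text layered onto the request itself. A minimal sketch (the function names and document text are invented, and `call_llm` stands in for whatever chat API is actually in use):

    ```python
    def build_prompt(document_text: str, question: str) -> str:
        """Layer extra context on top of whatever the model was trained on.
        The document is only 'known' to the model for this one request --
        it never changes the model's weights."""
        return (
            "Use only the document below to answer.\n\n"
            f"--- document ---\n{document_text}\n--- end document ---\n\n"
            f"Question: {question}\n"
        )

    pdf_text = "Invoice 1042: 3 widgets at $5 each, total $15."
    prompt = build_prompt(pdf_text, "What is the invoice total?")
    # answer = call_llm(prompt)  # placeholder for a real chat API call
    print(prompt)
    ```

    The key point: the context travels in the prompt, not in the model.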

    I'll preface this by saying I'm not an expert, and I don't like to speak authoritatively on things that I'm not an expert in, so it's possible I'm mistaken. […] The tweet is correct. The LLM has a snapshot understanding of the internet based on its training data. […]

    One doesn't need to know how an engine works to know the Ford Pinto was a disaster.

    One doesn't need to know how LLMs work to know they are pretty destructive and terrible.

    Note: I'm not going to argue this. It's just how things are now, and no apologetics will change what it is.

  • Not only is Steve right that ChatGPT writes better than the average person (which is indeed an elitist asshole take), ChatGPT has better logical reasoning than the average lemmy commenter

    Dude. Go outside

  • ChatGPT searches the web.

    You can temporarily add context on top of the training data; it's how you can import a document, have the model read through it, and output, say, an Excel spreadsheet based on a PDF's contents.

    Appreciate the correction. Happen to know of any whitepapers or articles I could read on it?

    Here's the thing, I went out of my way to say I don't know shit from bananas in this context, and I could very well be wrong. But the article certainly doesn't sufficiently demonstrate why it's right.

    Most technical articles I click on go through step-by-step processes to show how they gained understanding of the subject material, and it's laid out in a manner that less technical people can still follow. And the payoff is you come out with a feeling that you understand a little bit more than what you went in with.

    This article is just full on "trust me bro". I went in with a mediocre understanding, and came out about the same, but with a nasty taste in my mouth. Nothing of value was learned.

  • This post did not contain any content.

    This is an argument of semantics more than anything. Like asking if Linux has a GUI. Are they talking about the kernel or a distro? Are some people going to be really pedantic about it? Definitely.

    An LLM is a fixed blob of binary data that can take inputs, do some statistical transformations, then produce an output. ChatGPT is an entire service or ecosystem built around LLMs. Can it search the web? Well, sure, they've built a solution around the model to allow it to do that. However if I were to run an LLM locally on my own PC, it doesn't necessarily have the tooling programmed around it to allow for something like that.

    Now, can we expect every person to be fully up to date on the product offerings at ChatGPT? Of course not. It's not unreasonable for someone to state that an LLM doesn't get its data from the Internet in realtime, because in general, they are a fixed data blob.

    The real crux of the matter is people's understanding of what LLMs are, and whether their answers can be trusted. We continue to see examples daily of people doing really stupid stuff because they accepted an answer from ChatGPT or a similar service as fact. Maybe it does have a tiny disclaimer warning against that. But the actual marketing of these things always makes them seem far more capable than they really are, and the LLM itself often speaks in a confident manner, which can fool a lot of people if they don't have a deep understanding of the technology and how it works.
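    One plausible way to square "the LLM is a fixed blob" with "ChatGPT searches the web" is a tool-calling harness around the model. This sketch fakes both the model and the search tool (all names invented) just to show the control flow: the weights never change, and search results arrive as extra input text.

    ```python
    def fake_llm(prompt: str) -> str:
        # Stand-in for a real model. First turn: request a search tool call.
        if "SEARCH_RESULT:" not in prompt:
            return "TOOL:search(latest Firefox release)"
        return "Answer based on the search result above."

    def fake_search(query: str) -> str:
        # Stand-in for a real web-search backend.
        return f"SEARCH_RESULT: top hit for {query!r}"

    def run_with_tools(user_msg: str) -> str:
        prompt = user_msg
        reply = fake_llm(prompt)
        while reply.startswith("TOOL:search("):
            query = reply[len("TOOL:search("):-1]
            prompt += "\n" + fake_search(query)  # tool output becomes extra context
            reply = fake_llm(prompt)
        return reply

    print(run_with_tools("What is the latest Firefox release?"))
    ```

    The "searching" lives in the harness, not in the model, which is why a bare local LLM without that tooling can't do it.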

  • ChatGPT searches the web.

    You can temporarily add context on top of the training data; it's how you can import a document, have the model read through it, and output, say, an Excel spreadsheet based on a PDF's contents.

    But it doesn't do that for an entire index. It can just skim a few extra pages you're currently chatting about. It will, for example, have trouble with the latest news, or with finding the new domain of someone's favorite piracy site after the old one got shut down.

    Appreciate the correction. Happen to know of any whitepapers or articles I could read on it? […] This article is just full on "trust me bro". I went in with a mediocre understanding, and came out about the same, but with a nasty taste in my mouth. […]

    He didn't write that to teach but to vent. The intended audience is people who already know.

    For more information on ChatGPT's current capabilities, consult the API docs. I found that to be the most concise source of reliable information. And under no circumstances believe anything about AI that you read on Lemmy.

    Kudos for being willing to learn.

  • Not only is Steve right that ChatGPT writes better than the average person (which is indeed an elitist asshole take), ChatGPT has better logical reasoning than the average lemmy commenter

    I 100% agree with the first point, but I’d make a slight correction to the second: it’s debatable whether an LLM can truly use what we call “logic,” but it’s undeniable that its output is far more logical than that of not only the average Lemmy user, but the vast majority of social media users in general.

  • Try asking ChatGPT if you're confused

    I'm making that statement. Sorry if it was unclear.

    This is an argument of semantics more than anything. Like asking if Linux has a GUI. […] An LLM is a fixed blob of binary data that can take inputs, do some statistical transformations, then produce an output. […]

    Do you think that human communication is more than statistical transformation of input to output?
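    The tweet's "statistically likely sentences" framing can be illustrated with a toy bigram model (a deliberately tiny sketch, not how modern LLMs work internally: they use neural networks rather than lookup tables, but the "pick a likely next token" loop is the same idea).

    ```python
    import random
    from collections import defaultdict

    # Learn which words follow which in a tiny "training corpus".
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()
    following = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        following[a].append(b)

    # Generate by repeatedly sampling a statistically likely next word.
    random.seed(0)
    word, out = "the", ["the"]
    while word != "." and len(out) < 10:
        word = random.choice(following[word])
        out.append(word)
    print(" ".join(out))
    ```

    Every generated transition is one the model has seen before; whether it's "true" was never part of the objective.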
