
Your public ChatGPT queries are getting indexed by Google and other search engines

Technology
  • To use your analogy, it's like someone said, "Well, if you want a Michelin 5-star steak au jus, then your wallet is gonna take a hit". And you replied, "That's like saying in order to eat dinner you need to raise your own livestock and train for years as a professional chef".

    I'm not saying corporate LLMs are bad, or that they have no upside. I'm saying your scale for what's essential and what's a luxury is alarming.


  • I wish they were always a luxury, but in some situations they’re just too important to me

  • You have clearly never been in that situation, then. It is obviously not like this for many people, but for students, for example, it often means a lot more.

    While I don't fully share the notions and tone of the other commenter, I gotta say LLMs have absolutely tanked education and science, as noted by many and as I've witnessed firsthand.

    I'm a young scientist on my way to a PhD, and I get to assist in a microbiology course for undergraduates.

    The amount of AI slop in student assignments is astounding, and worst of all, they don't see it themselves. When it comes to me checking their actual knowledge, it's devastating.

    And it's not just undergrads: many scientific articles now show signs of AI slop too, which messes with research to a concerning degree.

    Personally, I tried using more specialized tools like Perplexity in Research mode to look for sources, but it royally messed up the source listing: it took actual info from scientific articles, but then referenced entirely different articles that bear no relation to it.

    So, in my experience, LLMs can be useful for generating simple text or helping you tie known facts together. But as a learning tool... be careful, or rather just don't use them for that. Classical education exists for a good reason: you learn to find factually correct and relevant information, analyze it, and keep it in your head for future reference. It takes more time, but is ultimately well worth it.

  • Sure, many don't care, and I have also experienced this, but it's a fabulous way to quickly get a glimpse of a subject, to get started, or to learn more. It's not always correct, but for well-known subjects it's pretty good.

    Anything related to law or really specific subjects will be horrible though

    Classical education exists for a good reason

    Sure, but not everyone teaches well enough, and LLMs are one of the ways to balance this, kinda

    And if you don't understand, then… yeah, it's still useful as a way to avoid failing a year, which is morally questionable, but hey, that's another topic.

  • Even through duck.ai?

    ChatGPT chats are only public when turned into a shareable chat (which is a manually created snapshot of the chat with a link). And they only show up in search engines if you, after sharing, select the opt-in checkbox for having them show up there.

    I don't know how duck.ai works, but I assume it doesn't do this.

  • Should we be surprised? Did we think AI, the most data-hungry undertaking in existence, wasn't storing the data from what you write? Especially when the companies behind it are the most invasive in history? Lol, what else.

  • I assumed this was a given. Anything offered to tech overlords will be monetized and packaged for profit at every possible angle. Nice to know it's official now, I guess.

    Plus, you explicitly have to opt into this, for each chat you share individually.

    I get that it says "discoverable" at first and the search engines are in the fine print, but search-engine crawlers would get it anyway if it were merely discoverable on ChatGPT's website. That term is plenty clear, imo.

  • Update 7/31/25 4:10pm PT: Hours after this article was published, OpenAI said it removed the feature from ChatGPT that allowed users to make their public conversations discoverable by search engines. The company says this was a short-lived experiment that ultimately “introduced too many opportunities for folks to accidentally share things they didn’t intend to.”

    Interesting, because the checkbox is still there for me. Don't see things having changed at all, maybe they made the fine print more white? But nothing else.

    In general, this reminds me of the incognito drama. Iirc people were unhappy that incognito mode didn't prevent Google websites from fingerprinting you. Which... the mode never claimed to do, it explicitly told you it didn't do that.

    For chats to be discoverable through search engines, you not only have to explicitly and manually share them, you also have to then opt in to having them appear in search engines via a checkbox.

    The main criticism I've seen is that the checkbox's main label only says it makes the chat "discoverable", while the search engines clarification is in the fine print. But I don't really understand how that is unclear.
    Like, even if they made them discoverable through ChatGPT's website only (so no third party data sharing), Google would still get their hands on them via their crawler. This is just them skipping the middleman, the end result is the same. We'd still hear news about them appearing on Google.

    This just seems to me like people clicking a checkbox based on vibes rather than critical thought of what consequences it could have and whether they want them. I don't see what can really be done against people like that.

    I don't think OpenAI can be blamed for doing the data sharing, as it's opt-in, nor for the chats ending up on Google at all. If the latter was a valid complaint, it would also be valid to complain to the Lemmy devs about Lemmy posts appearing on Google. And again, I don't think the label complaint has much weight to it either, because if it's discoverable, it gets to Google one way or another.
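On the crawler point specifically: whether a publicly reachable page lands in a search index is governed by standard opt-out mechanisms, and a site that wants shareable-but-unindexed pages has to use them. A minimal sketch of the site-wide mechanism (the /share/ path is illustrative, not OpenAI's actual URL scheme):

```
# robots.txt at the site root: ask crawlers not to fetch shared chats at all
User-agent: *
Disallow: /share/
```

The per-page alternative is a `<meta name="robots" content="noindex">` tag in the page head, which leaves the page publicly reachable while asking engines not to index it. Absent either of these, the argument above holds: anything discoverable on the open web eventually gets crawled.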

  • I use DuckDuckGo. 🙂

  • Mine are not public; I use a tinfoil duck.ai.

    I use local Ollama. I don't trust anyone with my AI conversations.
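For reference on the local route: Ollama serves an HTTP API on localhost (port 11434 by default), so prompts and replies never leave the machine. A minimal sketch, assuming Ollama is running and a model such as llama3 has already been pulled; the helper names here are mine, not Ollama's:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Assemble the JSON body Ollama's /api/generate endpoint expects."""
    # stream=False asks for one complete JSON reply instead of chunked output.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the model's reply."""
    body = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

For example, `ask_local("llama3", "Summarize this paragraph: ...")`; the only socket touched is the local one, which is the whole privacy argument for this setup.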

  • Alright, we generally seem to be on the same page 🙂

    (Except that numerous great books and helpful short materials exist for virtually any popular major, and, while they take longer to study, they provide an order of magnitude better knowledge.)
