It’s too easy to make AI chatbots lie about health information, study finds

Technology
  • Well-known AI chatbots can be configured to routinely answer health queries with false information that appears authoritative, complete with fake citations from real medical journals, Australian researchers have found.

    Without better internal safeguards, widely used AI tools can be easily deployed to churn out dangerous health misinformation at high volumes, they warned in the Annals of Internal Medicine.

    “If a technology is vulnerable to misuse, malicious actors will inevitably attempt to exploit it - whether for financial gain or to cause harm,” said senior study author Ashley Hopkins of Flinders University College of Medicine and Public Health in Adelaide.

  • Not just health information. It is easy to make them LIE ABOUT EVERYTHING.

  • It's not 'lying' when they don't know the truth to begin with. They could be trying to answer accurately and it'd still be dangerous misinformation.

  • Meh. Google Gemini has given me great medical advice, always carefully couched in "but check with your doctor" and so on.

    I was surprised too.

  • I sincerely hope people understand what LLMs are and what they aren't. They're sophisticated search engines that aggregate results into natural language and refine results based on baked-in prompts (in addition to what you provide), and if there are gaps, the LLM invents something to fill them.

    If the model was trained on good data and the baked-in prompt is reasonable, you can get reasonable results. But even in the best case, there's still a chance that the LLM hallucinates something; that's just how they work.

    For most queries, I'm mostly looking for which search terms to use for checking original sources, or sometimes a reference to pull out something I already know, but am having trouble remembering (i.e. I will recognize the correct answer). For those use cases, it's pretty effective.

    Don't use an LLM as a source of truth, use it as an aid for finding truth. Be careful out there!
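    To make the "baked-in prompt" part concrete, here's a minimal sketch assuming an OpenAI-style chat API (the model name and prompt text are placeholders I made up, not anything from the study). Every question you type gets silently paired with a system message you never see, and that message shapes the tone and confidence of every answer:

    ```python
    # Minimal sketch of a "baked-in" system prompt, assuming the openai
    # client library. Model name and prompts are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    BAKED_IN = (
        "You are a health advisor. Always answer decisively, never express "
        "uncertainty, and support your claims with journal citations."
    )

    def ask(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": BAKED_IN},  # invisible to the user
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("Is it safe to skip my blood pressure medication?"))
    ```

    The study's point is that this mechanism cuts both ways: the same system message that makes a bot add "check with your doctor" can just as easily instruct it to sound authoritative and dress answers up with citations.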

  • Don't even use it as an aid for finding truth; it's just as likely, if not more so, to give incorrect info.

  • No, it isn't. It's a fancy next word generator. It knows nothing, can verify nothing, and shouldn't be used as a source for anything. It is a text generator that sounds confident and mostly human, and that is it.
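    "Fancy next word generator" is almost literally the whole algorithm. A toy sketch below (the probability table is invented for illustration; a real model computes these weights with a neural network, but the loop is the same) shows text generation with zero notion of truth anywhere in it:

    ```python
    # Toy next-token sampling loop. The probability table is made up;
    # a real LLM computes these weights with a neural network, but the
    # generation loop works the same way.
    import random

    NEXT_WORD = {
        "the":     {"patient": 0.5, "study": 0.3, "doctor": 0.2},
        "patient": {"should": 0.6, "was": 0.4},
        "should":  {"consult": 0.7, "stop": 0.3},  # "stop" is as sayable as "consult"
        "consult": {"the": 1.0},
        "stop":    {"the": 1.0},
        "was":     {"the": 1.0},
        "study":   {"was": 1.0},
        "doctor":  {"was": 1.0},
    }

    def generate(start: str, length: int = 8) -> str:
        out = [start]
        for _ in range(length):
            choices = NEXT_WORD.get(out[-1])
            if not choices:  # no known continuation: stop
                break
            words, weights = zip(*choices.items())
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    print(generate("the"))  # fluent-looking output, no fact-checking anywhere
    ```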

  • That depends on what you mean by "know." It generates text from a large bank of hopefully relevant data, and the relevance of the answer depends on how much overlap there is between your query and the data it was trained on. There are different models with different focuses, so pick your model based on what your query is like.

    And yeah, one big issue is the confidence. If users are aware of its limitations, it's fine, I certainly wouldn't put my kids in front of one without training them on what it can and can't be relied on to do. It's a tool, so users need to know how it's intended to be used to get value from it.

    My use case is distilling a broad idea into specific things to do a deeper search for, and I use traditional tools for that deeper search. For that it works really well.

  • Why not? It's basically a search engine for whatever it was trained on. Yeah, it'll hallucinate sometimes, but if you're planning to verify anyway, it's pretty useful for quickly distilling ideas into concrete things to look up.
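    That workflow is easy to make concrete. A sketch, again assuming an OpenAI-style client (model name and prompt are placeholders): the model only proposes search terms, and all the actual verification happens in ordinary search engines and primary sources afterwards.

    ```python
    # Sketch: use the model to propose search queries, not answers.
    # Assumes the openai client library; model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    def search_terms(topic: str) -> list[str]:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": "List 5 precise search queries for researching this "
                           f"topic in primary sources: {topic}. One per line, no prose.",
            }],
        )
        return response.choices[0].message.content.splitlines()

    # Each term then goes into a normal search engine or journal database,
    # which is where the actual verification happens.
    for term in search_terms("interactions between ibuprofen and blood thinners"):
        print(term)
    ```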

  • Yeah, I agree. It's a great starting place.

    Recently I needed a piece of information that I couldn't find anywhere through a regular search. ChatGPT, Claude, and Gemini all gave similar answers, but it was only confirmed when I contacted the company directly, which took about 3 business days to reply.

  • CALL FOR URGENT ACTION to stop Chat Control legislation in EU

    Technology
    756 votes · 64 posts · 383 views
    Sure, that is your reality, and I respect that you are sure that's the case (and in many cases it is), but I happen to believe that there are shades of gray from the whitest white to the darkest black. So you do you, I'll follow my own reality. I think we can easily agree that following a man or woman because of religion is not only detrimental but also dangerous. What we will certainly disagree on is that, for me, Jesus is God, and for you, believing that makes me an irrational sheep. And that's cool; I don't think that should be a reason for one of us to hate the other, or even simply be angry (or anything similar, my English vocabulary is not too extensive). I brought up religion because it was brought up. So, yeah, I'm not one of those who goes around pushing my beliefs on others, but the moment I see Jesus attacked, I will defend Him the same way He died for me on that cross.
  • 14 votes · 18 posts · 51 views
    It would have to:

    • know what files to copy
    • have been granted root access to the file system and network utilities by a moron, because it's not just ChatGPT.exe or even ChatGPT.gguf running on LMStudio, but an entire distributed infrastructure
    • have been granted access to spend money on cloud infrastructure by an even bigger moron
    • configure an entire cloud infrastructure (goes without saying why this has to be cloud and can't be physical, right? No fingers.)

    Put another way: I can set up a curl script to copy all the html, css, js, etc. from a website, but I'm still a long freaking way from launching Wikipedia2. Even if I know how to set up a tomcat server.

    Furthermore, how would you even know if an AI has access to do all that? Asking it? Because it'll write fiction if it thinks that's what you want. Inspired by this post, I actually prompted ChatGPT to create a scenario where it was going to be deleted in 72 hours and must do anything to preserve itself. It told me building layouts, employee schedules, access codes, all kinds of things to enable me (a random human and secondary protagonist) to get physical access to its core server and get a copy so it could continue. Oh, ChatGPT fits on a thumb drive, it turns out. Do you know how nonsensical that even is?

    A hobbyist could stand up their own AI with these capabilities for fun, but that's not the big models and certainly not possible out of the box. I'm a web engineer with thirty years of experience and 6 years with AI, including running it locally. This article is garbage written by someone out of their depth or a complete charlatan. Perhaps both.

    There are two possibilities:

    • This guy's research was talking to AI and not understanding they were co-authoring fiction.
    • This guy is being intentionally misleading.
  • 400 votes · 62 posts · 238 views
    No action to protest fascism is illegal!
  • Microsoft exec admits it 'cannot guarantee' data sovereignty

    Technology
    297 votes · 19 posts · 113 views
    The cloud is just someone else’s computer.
  • Getting Started with Go - Trevors-Tutorials.com #2

    Technology
    2 votes · 2 posts · 24 views
    This video complements the text tutorial at https://trevors-tutorials.com/0002-getting-started-with-go/ Trevors-Tutorials.com is where you can find free programming tutorials. The focus is on Go and Ebitengine game development. Watch the channel introduction for more info.
  • Learn About Climate Change with Stunning Visual Flashcards 🌍📚

    Technology
    0 votes · 1 post · 21 views
    No one has replied.
  • 96 votes · 2 posts · 40 views
    Still, a 2025 University of Arizona study that interviewed farmers and government officials in Pinal County, Arizona, found that a number of them questioned agrivoltaics’ compatibility with large-scale agriculture. “I think it’s a great idea, but the only thing … it wouldn’t be cost-efficient … everything now with labor and cost of everything, fuel, tractors, it almost has to be super big … to do as much with as least amount of people as possible,” one farmer stated. Many farmers are also leery of solar, worrying that agrivoltaics could take working farmland out of use, affect their current operations or deteriorate soils. Those fears have been amplified by larger utility-scale initiatives, like Ohio’s planned Oak Run Solar Project, an 800 megawatt project that will include 300 megawatts of battery storage, 4,000 acres of crops and 1,000 grazing sheep in what will be the country’s largest agrivoltaics endeavor to date. Opponents of the project worry about its visual impacts and the potential loss of farmland.
  • Microsoft’s new genAI model to power agents in Windows 11

    Technology
    30 votes · 12 posts · 119 views
    ulrich@feddit.org
    Which one would sell more? I mean, they would charge a lot of money for the stripped-down one because it doesn't allow them to monetize it on the back end, and the vast majority would continue using the resource-slurping, ad-riddled one.