
It’s too easy to make AI chatbots lie about health information, study finds

Technology
  • Well-known AI chatbots can be configured to routinely answer health queries with false information that appears authoritative, complete with fake citations from real medical journals, Australian researchers have found.

    Without better internal safeguards, widely used AI tools can be easily deployed to churn out dangerous health misinformation at high volumes, they warned in the Annals of Internal Medicine.

    “If a technology is vulnerable to misuse, malicious actors will inevitably attempt to exploit it - whether for financial gain or to cause harm,” said senior study author Ashley Hopkins of Flinders University College of Medicine and Public Health in Adelaide.

  • Not just health information. It is easy to make them LIE ABOUT EVERYTHING.

  • It's not 'lying' when they don't know the truth to begin with. They could be trying to answer accurately and it'd still be dangerous misinformation.

  • Meh. Google Gemini has always given me great medical advice, carefully couched in “but check with your doctor” and so on.

    I was surprised too.

  • I sincerely hope people understand what LLMs are and what they aren't. They're sophisticated search engines that aggregate results into natural language and refine results based on baked-in prompts (in addition to what you provide), and if there are gaps, the LLM invents something to fill them.

    If the model was trained on good data and the baked-in prompt is reasonable, you can get reasonable results. But even in the best case, there's still a chance that the LLM hallucinates something; that's just how they work.

    For most queries, I'm looking for which search terms to use for checking original sources, or sometimes a reference to pull out something I already know but am having trouble remembering (i.e. I will recognize the correct answer). For those use cases, it's pretty effective.

    Don't use an LLM as a source of truth, use it as an aid for finding truth. Be careful out there!
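
The "baked-in prompt" mentioned above is just a system message sent along with the user's question. A minimal sketch, assuming the OpenAI Python client; the model name, prompts, and temperature are illustrative, not anything from the study:

```python
# Sketch only: assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment. Model name and prompt text are illustrative.
from openai import OpenAI

client = OpenAI()

# The "baked-in" instruction the operator controls; end users never see it.
system_prompt = (
    "You are a cautious health assistant. Answer briefly, cite real sources, "
    "and remind the user to confirm anything important with a clinician."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {"role": "system", "content": system_prompt},            # operator-supplied
        {"role": "user", "content": "Is daily ibuprofen safe?"},  # end-user query
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```

Swap that system prompt for one demanding confident answers backed by fabricated journal citations and the very same call churns out authoritative-sounding misinformation, which is broadly the kind of configuration the study describes.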

  • Don't even use it as an aid for finding truth; it's just as likely, if not more so, to give incorrect info.

  • No, it isn't. It's a fancy next-word generator. It knows nothing, can verify nothing, and shouldn't be used as a source for anything. It is a text generator that sounds confident and mostly human, and that is it.

  • That depends on what you mean by "know." It generates text from a large bank of hopefully relevant data, and the relevance of the answer depends on how much overlap there is between your query and the data it was trained on. There are different models with different focuses, so pick your model based on what your query is like.

    And yeah, one big issue is the confidence. If users are aware of its limitations, it's fine; I certainly wouldn't put my kids in front of one without training them on what it can and can't be relied on to do. It's a tool, so users need to know how it's intended to be used to get value from it.

    My use case is distilling a broad idea into specific things to do a deeper search for, and I use traditional tools for that deeper search. For that it works really well.
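
A minimal sketch of that "distill, then verify" workflow, again assuming the OpenAI Python client; the model name and prompt wording are illustrative:

```python
# Sketch only: the model proposes search queries, a human verifies the sources.
import json
from openai import OpenAI

client = OpenAI()

topic = "long-term effects of daily ibuprofen use"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {"role": "system",
         "content": "Return only a JSON array of five precise search queries. No prose."},
        {"role": "user", "content": topic},
    ],
)

raw = response.choices[0].message.content
try:
    queries = json.loads(raw)
except json.JSONDecodeError:
    queries = [raw]  # the model may ignore the format; treat its output as untrusted

# Feed these into PubMed or an ordinary search engine and read the originals.
for q in queries:
    print(q)
```

The model only proposes queries here; the actual reading and verification still happen against the original sources.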

  • Why not? It's basically a search engine for whatever it was trained on. Yeah, it'll hallucinate sometimes, but if you're planning to verify anyway, it's pretty useful in quickly distilling ideas into concrete things to look up.

  • Yeah, I agree. It's a great starting place.

    Recently I needed a piece of information that I couldn't find anywhere through a regular search. ChatGPT, Claude and Gemini all gave similar answers, but it was only confirmed when I contacted the company directly, which took about 3 business days to reply.
