
With a Trump-Driven Reduction of Nearly 2,000 Employees, F.D.A. Will Use A.I. in Drug Approvals to ‘Radically Increase Efficiency’

Technology
  • Text to avoid paywall

    The Food and Drug Administration is planning to use artificial intelligence to “radically increase efficiency” in deciding whether to approve new drugs and devices, one of several top priorities laid out in an article published Tuesday in JAMA.

    Another initiative involves a review of chemicals and other “concerning ingredients” that appear in U.S. food but not in the food of other developed nations. And officials want to speed up the final stages of making a drug or medical device approval decision to mere weeks, citing the success of Operation Warp Speed during the Covid pandemic, when workers raced to curb a spiraling death count.

    “The F.D.A. will be focused on delivering faster cures and meaningful treatments for patients, especially those with neglected and rare diseases, healthier food for children and common-sense approaches to rebuild the public trust,” Dr. Marty Makary, the agency commissioner, and Dr. Vinay Prasad, who leads the division that oversees vaccines and gene therapy, wrote in the JAMA article.

    The agency plays a central role in pursuing the agenda of the U.S. health secretary, Robert F. Kennedy Jr., and it has already begun to press food makers to eliminate artificial food dyes. The new road map also underscores the Trump administration’s efforts to smooth the way for major industries with an array of efforts aimed at getting products to pharmacies and store shelves quickly.

    Some aspects of the proposals outlined in JAMA were met with skepticism, particularly the idea that artificial intelligence is up to the task of shearing months or years from the painstaking work of examining applications that companies submit when seeking approval for a drug or high-risk medical device.

    “I don’t want to be dismissive of speeding reviews at the F.D.A.,” said Stephen Holland, a lawyer who formerly advised the House Committee on Energy and Commerce on health care. “I think that there is great potential here, but I’m not seeing the beef yet.”

    My experience with most AI is that you really, really need to double-check EVERYTHING it does.

  • People will die because of this.

    I'll try arguing in the opposite direction for the sake of it:

    An "AI", if not specifically tweaked, is just a bullshit machine approximating reality same way human-produced bullshit does.

    A human is a bullshit machine with an agenda.

    Depending on the cost of decisions made, an "AI", if it's trained on properly vetted data and not tweaked for an agenda, may be better than a human.

    If that cost is high enough, and so is the conflict of interest, a dice set might be better than a human.

    There are positions where any decision except a few is acceptable, yet malicious humans regularly pick one of those few.
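
    To put rough, invented numbers on that last point: if k of n admissible choices are harmful, a fair die picks a harmful one with probability k/n, while a decision-maker with a strong enough conflict of interest picks one every time. A minimal sketch, with all values made up for illustration:

    ```python
    # Invented numbers: n admissible options, k of them harmful.
    n, k = 10, 2
    harm = 1000  # arbitrary cost units per harmful decision

    die_cost = (k / n) * harm        # fair die picks harm with probability k/n
    malicious_cost = 1.0 * harm      # conflicted actor picks harm every time

    print(die_cost, malicious_cost)  # 200.0 vs 1000.0: the die is 5x safer here
    ```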

  • when directed and used correctly by an expert

    They're also likely to fire the experts.

    They already have.

  • People will die because of this.

    Yeah, I'm going to make sure I don't take any new drugs for a few years. As it is, I'm probably going to have to forgo vaccinations for a while because dipshit Kennedy has fucked with the vaccination board.

  • I'll try arguing in the opposite direction for the sake of it:

    An "AI", if not specifically tweaked, is just a bullshit machine approximating reality same way human-produced bullshit does.

    A human is a bullshit machine with an agenda.

    Depending on the cost of decisions made, an "AI", if it's trained on properly vetted data and not tweaked for an agenda, may be better than a human.

    If that cost is high enough, and so is the conflict of interest, a dice set might be better than a human.

    There are positions where any decision except a few is acceptable, yet malicious humans regularly pick one of those few.

    Your argument becomes idiotic once you understand the actual technology. The AI bullshit machine's agenda is "give a nice answer" ("factual" is not an idea that has a neural center in the AI brain) and "make the reader happy". The human "bullshit" machine has many agendas, but it would not have got so far if it were spouting just happy bullshit (but I guess America is becoming a very special case).

  • People will die because of this.

    Pretty sure that's the basis of its appeal for them.

    Yeah, I'm going to make sure I don't take any new drugs for a few years. As it is, I'm probably going to have to forgo vaccinations for a while because dipshit Kennedy has fucked with the vaccination board.

    Just check if the drug is approved in a proper country of your choice.

    Yeah, I'm going to make sure I don't take any new drugs for a few years. As it is, I'm probably going to have to forgo vaccinations for a while because dipshit Kennedy has fucked with the vaccination board.

    If you can afford it, there are always vaccines from other countries. It's fucked up that it's come to this and that there's even more of a price tag on health.

  • Text to avoid paywall


    Oh good, a 60% chance you’ll get an ineffective or killer drug because they’ll use AI to analyze the usage and AI to report on it.

  • I have to quibble with you, because you used the term "AI" instead of actually specifying what technology would make sense.

    As we have seen in the last 2 years, people who speak in general terms on this topic are almost always selling us snake oil. If they had a specific model or computer program that they thought was going to be useful because it fit a specific need in a certain way, they would have said that, but they didn't.

    I know what you mean; there's a difference between LLMs and other systems, but it's just generally easier to put it all under the umbrella of 'AI'.

  • Or maybe that is part of the allure of automation: the eschewing of human responsibility, such that any bias in decision making appears benign (the computer deemed it so, no one’s at fault) and any errors - if at all recognized as such - become simply a matter of bug-fixing or model fine-tuning. The more inscrutable the model the better in that sense. The computer becomes an oracle and no one’s to blame for its divinations.

  • Or maybe that is part of the allure of automation: the eschewing of human responsibility, such that any bias in decision making appears benign (the computer deemed it so, no one’s at fault) and any errors - if at all recognized as such - become simply a matter of bug-fixing or model fine-tuning. The more inscrutable the model the better in that sense. The computer becomes an oracle and no one’s to blame for its divinations.

    I am convinced that law enforcement wants intentionally biased AI decision makers so that they can justify doing what they’ve always done with the cover of “it’s not racist because a computer said so!”

    The scary part is most people are ignorant enough to buy it.

  • Your argument becomes idiotic once you understand the actual technology. The AI bullshit machine's agenda is "give a nice answer" ("factual" is not an idea that has a neural center in the AI brain) and "make the reader happy". The human "bullshit" machine has many agendas, but it would not have got so far if it were spouting just happy bullshit (but I guess America is becoming a very special case).

    It doesn't. I understand the actual technology. There are applications where it's possibly better than human decision making.

  • Text to avoid paywall


    Final-stage capitalism: purging all the experts (at catching bullshit from applicants) before the agencies train the AI on newb-level inputs.

  • It doesn't. I understand the actual technology. There are applications where it's possibly better than human decision making.

    An LLM does no decision making. At all. It spouts (as you say) bullshit. If there is enough training data for "Trump is divine", the LLM will predict that Trump is divine, with no second thought (no first thought either). And it's not even great to use as a language-based database.

    Please don't even consider LLMs as "AI".
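
    A toy bigram model, nothing like an LLM in scale but the same mechanism in miniature, makes the point concrete: the "prediction" is just training-data frequency. The skewed training text below is invented for illustration:

    ```python
    from collections import Counter, defaultdict

    # Toy bigram "language model": next-word prediction is nothing but
    # training-data frequency. The training text is deliberately skewed.
    training_text = ("Trump is divine . " * 8 + "Trump is human . " * 2).split()

    bigrams = defaultdict(Counter)
    for w1, w2 in zip(training_text, training_text[1:]):
        bigrams[w1][w2] += 1

    def predict(context):
        # Most frequent continuation seen in training, with its probability.
        counts = bigrams[context]
        word, n = counts.most_common(1)[0]
        return word, n / sum(counts.values())

    print(predict("is"))  # -> ('divine', 0.8): the skew is the "belief"
    ```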

  • Or maybe that is part of the allure of automation: the eschewing of human responsibility, such that any bias in decision making appears benign (the computer deemed it so, no one’s at fault) and any errors - if at all recognized as such - become simply a matter of bug-fixing or model fine-tuning. The more inscrutable the model the better in that sense. The computer becomes an oracle and no one’s to blame for its divinations.

    I saw a paper a while back that argued that AI systems are being used as "moral crumple zones". For example, an AI used for health insurance allows the company to reject medically necessary procedures without employees incurring as much moral injury (even low-level customer service reps are likely to find comfort in being able to defer to the system). It's an interesting concept that I've thought about a lot since I found it.

  • An LLM does no decision making. At all. It spouts (as you say) bullshit. If there is enough training data for "Trump is divine", the LLM will predict that Trump is divine, with no second thought (no first thought either). And it's not even great to use as a language-based database.

    Please don't even consider LLMs as "AI".

    Even an RNG does decision-making.

    I know what LLMs are, thank you very much!

    If you had even wanted to understand my initial point, you already would have.

    Things have become really grim when people who can't read a short message are trying to teach me the fundamentals of LLMs.

  • Oh good, a 60% chance you’ll get an ineffective or killer drug because they’ll use AI to analyze the usage and AI to report on it.

    If it actually ends up being an AI, and not just some Trump cuck stooge masquerading as AI who picks drugs by whichever company gave the largest bribe to Trump, I 100% guarantee this AI is trained only on papers written by non-peer-reviewed, drug-company-paid "scientists" containing made-up narratives.

    Those of us prescribed the drugs will be the guinea pigs, because R&D costs money and hits the bottom line. The many deaths will be conveniently scapegoated onto "the AI" that the morons in charge promised is smarter and more efficient than a person.

    Fuck this shit.

  • Even an RNG does decision-making.

    I know what LLMs are, thank you very much!

    If you had even wanted to understand my initial point, you already would have.

    Things have become really grim when people who can't read a short message are trying to teach me the fundamentals of LLMs.

    I wouldn't define flipping coins as decision making, especially when it comes to blanket governmental policy that has the potential to kill (or severely disable) millions of people.

    You seem not to want anyone to teach you anything, and seem somehow completely dejected by such perceived attempts.

  • Different types of AI, different training data, different expectations and outcomes. Generative AI is but one use case.

    It's already been proven a useful tool in research, when directed and used correctly by an expert. It's a tool, to give to scientists to assist them, not replace them.

    If your goal is to use AI to replace people, you've got a bad surprise coming.

    If you're not equipping your people with the skills and tools of AI, your people will become obsolete in short order.

    Learn AI and how to utilize it as a tool; you can train your own model on your own private data and locally interrogate it to do unique analysis typically not possible in real time (a minimal sketch of that workflow follows this exchange). Learn the goods and bads of the technology and let your ethics guide how you use it, but stop dismissing revolutionary technology because the earlier generative models weren't reinforced enough to get fingers right.

    I'm not dismissing its use. It is a useful tool, but it cannot replace experts at this point, or maybe ever (and I'm gathering you agree on this).

    If it ever does get to that point, we also need to remedy the massive social consequences of revoking those same experts' ability to earn enough for a reasonable living.

    I was being a little silly for effect.
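
    For anyone curious what "train your own model and locally interrogate it" can look like in practice, here is a minimal sketch assuming the Hugging Face transformers library. The model name and prompt are placeholders, and fine-tuning on private data would be a separate prior step (e.g. with the Trainer API) producing a checkpoint you load the same way:

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "distilgpt2"  # small stand-in model that runs on a laptop
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "Key risk factors reported in the submitted trial data include"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=False,                      # deterministic output, easier to review
        pad_token_id=tokenizer.eos_token_id,  # silence GPT-2's missing-pad warning
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```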
