With a Trump-driven reduction of nearly 2,000 employees, F.D.A. will use A.I. in drug approvals to ‘radically increase efficiency’

Technology
  • Your argument becomes idiotic once you understand the actual technology. The AI bullshit machine's agenda is "give a nice answer" ("factual" is not an idea that has a neural center in the AI brain) and "make the reader happy". The human bullshit machine has many agendas, but it would not have gotten so far if it were spouting just happy bullshit (though I guess America is becoming a very special case).

    It doesn't. I understand the actual technology. There are applications of human decision-making where it's possibly better.

  • Text to avoid paywall

    The Food and Drug Administration is planning to use artificial intelligence to “radically increase efficiency” in deciding whether to approve new drugs and devices, one of several top priorities laid out in an article published Tuesday in JAMA.

    Another initiative involves a review of chemicals and other “concerning ingredients” that appear in U.S. food but not in the food of other developed nations. And officials want to speed up the final stages of making a drug or medical device approval decision to mere weeks, citing the success of Operation Warp Speed during the Covid pandemic when workers raced to curb a spiraling death count.

    “The F.D.A. will be focused on delivering faster cures and meaningful treatments for patients, especially those with neglected and rare diseases, healthier food for children and common-sense approaches to rebuild the public trust,” Dr. Marty Makary, the agency commissioner, and Dr. Vinay Prasad, who leads the division that oversees vaccines and gene therapy, wrote in the JAMA article.

    The agency plays a central role in pursuing the agenda of the U.S. health secretary, Robert F. Kennedy Jr., and it has already begun to press food makers to eliminate artificial food dyes. The new road map also underscores the Trump administration’s efforts to smooth the way for major industries with an array of efforts aimed at getting products to pharmacies and store shelves quickly.

    Some aspects of the proposals outlined in JAMA were met with skepticism, particularly the idea that artificial intelligence is up to the task of shearing months or years from the painstaking work of examining applications that companies submit when seeking approval for a drug or high-risk medical device.

    “I don’t want to be dismissive of speeding reviews at the F.D.A.,” said Stephen Holland, a lawyer who formerly advised the House Committee on Energy and Commerce on health care. “I think that there is great potential here, but I’m not seeing the beef yet.”

    Final-stage capitalism: purging all the experts (the ones who catch bullshit from applicants) before the agencies train the AI on newb-level inputs.

  • It doesn't. I understand the actual technology. There are applications of human decision-making where it's possibly better.

    An LLM does no decision-making. At all. It spouts (as you say) bullshit. If there is enough training data for "Trump is divine", the LLM will predict that Trump is divine, with no second thought (no first thought, either). And it's not even great to use as a language-based database.

    Please don't even consider LLMs as "AI".
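
    A minimal sketch of the "prediction, not decision-making" point above, assuming a toy bigram model (the training text and the predict helper are made up purely for illustration; real LLMs are vastly larger, but the training objective is the same next-token prediction):

    ```python
    # Toy next-token "predictor": it replays whatever the training data said most often.
    # Training text and helper names are hypothetical, purely for illustration.
    from collections import Counter, defaultdict

    training_text = "trump is divine . trump is divine . trump is mortal ."
    tokens = training_text.split()

    # Count which token follows which in the training data.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        bigrams[prev][nxt] += 1

    def predict(prev: str) -> str:
        # Greedy decoding: emit the most frequent continuation. No judgment involved.
        return bigrams[prev].most_common(1)[0][0]

    print(predict("is"))  # -> "divine", because the data outvotes "mortal" 2 to 1
    ```

    Scaled up by billions of parameters this produces fluency, but the mechanism never consults a notion of "factual".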

  • Or maybe that is part of the allure of automation: the eschewing of human responsibility, such that any bias in decision making appears benign (the computer deemed it so, no one’s at fault) and any errors - if at all recognized as such - become simply a matter of bug-fixing or model fine-tuning. The more inscrutable the model the better in that sense. The computer becomes an oracle and no one’s to blame for its divinations.

    I saw a paper a while back that argued that AI is being used as "moral crumple zones". For example, an AI used for health insurance allows the company to reject medically necessary procedures without employees incurring as much moral injury (even low-level customer service reps are likely to find comfort in being able to defer to the system). It's an interesting concept that I've thought about a lot since I found it.

  • An LLM does no decision-making. At all. It spouts (as you say) bullshit. If there is enough training data for "Trump is divine", the LLM will predict that Trump is divine, with no second thought (no first thought, either). And it's not even great to use as a language-based database.

    Please don't even consider LLMs as "AI".

    Even an RNG does decision-making.

    I know what LLMs are, thank you very much!

    If you had wanted to understand my initial point, you already would have.

    Things have become really grim if people who can't read a short message are trying to teach me the fundamentals of LLMs.

  • Oh good, a 60% chance you’ll get an ineffective or killer drug because they’ll use AI to analyze the usage and AI to report on it.

    If it actually ends up being an AI, and not just some Trump cuck stooge masquerading as AI picking the drug from whichever company gave the largest bribe to Trump, I 100% guarantee this AI will be trained only on papers written by non-peer-reviewed, drug-company-paid "scientists" containing made-up narratives.

    Those of us prescribed the drugs will be the guinea pigs, because R&D costs money and hits the bottom line. The many deaths will be conveniently scapegoated on "the AI" that the morons in charge promised is smarter and more efficient than a person.

    Fuck this shit.

  • Even an RNG does decision-making.

    I know what LLMs are, thank you very much!

    If you had wanted to understand my initial point, you already would have.

    Things have become really grim if people who can't read a short message are trying to teach me the fundamentals of LLMs.

    I wouldn't define flipping coins as decision making. Especially when it comes to blanket governmental policy that has the potential to kill (or severely disable) millions of people.

    You seem not to want anyone to teach you anything. And you are somehow completely dejected at such perceived attempts.

  • Different types of AI, different training data, different expectations and outcomes. Generative AI is but one use case.

    It's already been proven a useful tool in research when directed and used correctly by an expert. It's a tool to give to scientists to assist them, not replace them.

    If your goal is to use AI to replace people, you've got a bad surprise coming.

    If you're not equipping your people with the skills and tools of AI, your people will become obsolete in short order.

    Learn AI and how to utilize it as a tool: you can train your own model on your own private data and locally interrogate it to do unique analysis typically not possible in real time (a minimal sketch of local interrogation follows this exchange). Learn the goods and bads of the technology and let your ethics guide how you use it, but stop dismissing revolutionary technology because the earlier generative models weren't reinforced enough to get fingers right.

    I'm not dismissing its use. It is a useful tool, but it cannot replace experts at this point, or maybe ever (and I'm gathering you agree on this).

    If it ever does get to that point, we also need to remedy the massive social consequences of depriving those same experts of the income needed for a reasonable living.

    I was being a little silly for effect.
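
    Since the comment above points to training your own model on private data and interrogating it locally, here is a minimal sketch of the "locally interrogate" half, assuming the Hugging Face transformers library and a small stand-in model (the model name and prompt are hypothetical; fine-tuning on your own data would be a separate, prior step not shown here):

    ```python
    # Minimal sketch: query a small causal language model entirely locally.
    # Model name and prompt are hypothetical; swap in a model fine-tuned on your data.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "distilgpt2"  # stand-in; small enough to run on a laptop CPU
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "Summary of internal lab notes on compound X:"  # hypothetical query
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```

    Nothing leaves the machine: both the weights and the query stay local, which is the point being made about private data.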

  • I saw a paper a while back that argued that AI is being used as "moral crumple zones". For example, an AI used for health insurance allows the company to reject medically necessary procedures without employees incurring as much moral injury (even low-level customer service reps are likely to find comfort in being able to defer to the system). It's an interesting concept that I've thought about a lot since I found it.

    I can absolutely see that. And I don't think it's AI-specific; it's got to do with delegating responsibility to a machine. Of course, AI in the guise of LLMs can make things worse with its low interpretability, where it might be even harder to trace anything back to an executive or clerical decision.

  • I wouldn't define flipping coins as decision making. Especially when it comes to blanket governmental policy that has the potential to kill (or severely disable) millions of people.

    You seem not to want anyone to teach you anything. And you are somehow completely dejected at such perceived attempts.

    You seem not to want anyone to teach you anything.

    No, I don't seem that. I don't like being ascribed opinions I haven't expressed.

    I wouldn’t define flipping coins as decision making. Especially when it comes to blanket governmental policy that has the potential to kill (or severely disable) millions of people.

    When your goal is to avoid a certain most harmful subset of such decisions, and living humans are constantly pressured by power and corrupt profit to pick exactly that subset, flipping coins is preferable, if those are the two variants between which we are choosing.
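
    For what it's worth, the coin-flip "decision maker" under debate is a one-liner. A purely illustrative sketch (the function name and application ID are hypothetical) of the narrow sense in which an RNG "decides": no agenda, no bribe, and also no evidence.

    ```python
    # A coin-flip "regulator": unbiased by construction, informed by nothing.
    import random

    def decide(application_id: str) -> str:
        # Hypothetical decision procedure: approve or reject uniformly at random.
        return random.choice(["approve", "reject"])

    print(decide("NDA-000000"))  # hypothetical application number
    ```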

  • Oh good, a 60% chance you’ll get an ineffective or killer drug because they’ll use AI to analyze the usage and AI to report on it.

    That is an underestimate, since it doesn't factor in the knock-on effect of the more lax regulations: people will try to sell all kinds of crap as "medicine".

  • Text to avoid paywall

    The Food and Drug Administration is planning to use artificial intelligence to “radically increase efficiency” in deciding whether to approve new drugs and devices, one of several top priorities laid out in an article published Tuesday in JAMA.


    It's what AI is supposed to be used for, but maybe it isn't good enough.

  • It doesn't. I understand the actual technology. There are applications of human decision-making where it's possibly better.

    It kinda seems like you don’t understand the actual technology.
