
With a Trump-driven reduction of nearly 2,000 employees, F.D.A. Will Use A.I. in Drug Approvals to ‘Radically Increase Efficiency’

Technology
  • Or maybe that is part of the allure of automation: the eschewing of human responsibility, such that any bias in decision making appears benign (the computer deemed it so, no one’s at fault) and any errors - if at all recognized as such - become simply a matter of bug-fixing or model fine-tuning. The more inscrutable the model the better in that sense. The computer becomes an oracle and no one’s to blame for its divinations.

    I saw a paper a while back that argued that AI systems are being used as "moral crumple zones". For example, an AI used for health insurance claims allows the company to reject medically necessary procedures without employees incurring as much moral injury in the process (even low-level customer service reps are likely to find comfort in being able to defer to the system). It's an interesting concept that I've thought about a lot since I found it.

  • An LLM does no decision making. At all. It spouts (as you say) bullshit. If there is enough training data for "Trump is divine", the LLM will predict that Trump is divine, with no second thought (no first thought either). And it's not even great to use as a language-based database.

    Please don't even consider LLMs as "AI".

    Even an RNG does decision-making.

    I know what LLMs are, thank you very much!

    If you had even wanted to understand my initial point, you already would have.

    Things have become really grim if people who can't read a short message are trying to lecture me on the fundamentals of LLMs.

  • Oh good, a 60% chance you’ll get an ineffective or killer drug because they’ll use AI to analyze the usage and AI to report on it.

    If it actually ends up being an AI, and not just some Trump cuck stooge masquerading as AI who picks drugs based on which company gave the largest bribe to Trump, I 100% guarantee this AI is trained only on papers written by non-peer-reviewed, drug-company-paid "scientists" containing made-up narratives.

    Those of us prescribed the drugs will be the guinea pigs, because R&D costs money and hits the bottom line. The many deaths will be conveniently scapegoated onto "the AI" that the morons in charge promised is smarter and more efficient than a person.

    Fuck this shit.

  • Even an RNG does decision-making.

    I know what LLMs are, thank you very much!

    If you had even wanted to understand my initial point, you already would have.

    Things have become really grim if people who can't read a short message are trying to lecture me on the fundamentals of LLMs.

    I wouldn't define flipping coins as decision making. Especially when it comes to blanket governmental policy that has the potential to kill (or severely disable) millions of people.

    You seem to not want any people to teach you anything. And you are somehow completely dejected by such perceived attempts.

  • Different types of AI, different training data, different expectations and outcomes. Generative AI is but one use case.

    It's already been proven a useful tool in research when directed and used correctly by an expert. It's a tool to give to scientists to assist them, not replace them.

    If your goal is to use AI to replace people, you've got a bad surprise coming.

    If you're not equipping your people with the skills and tools of AI, your people will become obsolete in short order.

    Learn AI and how to utilize it as a tool. You can train your own model on your own private data and locally interrogate it to do unique analysis that typically isn't possible in real time (a rough sketch of that local-model idea follows at the end of this exchange). Learn the good and the bad of the technology and let your ethics guide how you use it, but stop dismissing revolutionary technology because the earlier generative models weren't reinforced enough to get fingers right.

    I'm not dismissing its use. It is a useful tool, but it cannot replace experts at this point, or maybe ever (and I'm gathering you agree on this).

    If it ever does get to that point, we also need to remedy the massive social consequences of revoking those same experts' ability to earn sufficient income for a reasonable living.

    I was being a little silly for effect.
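
    To make the "train on your own data, interrogate it locally" idea above concrete, here is a minimal sketch. It is an assumption-laden toy: TF-IDF retrieval from scikit-learn stands in for a real fine-tuned LLM, and the corpus, the questions, and the `interrogate` helper are all hypothetical. The only point it illustrates is that both the fitting ("training") and the querying happen on your own machine, so private data never leaves it.

    ```python
    # Minimal sketch of the "local model over private data" idea.
    # TF-IDF retrieval stands in for a fine-tuned LLM: fitting
    # ("training") and querying both happen locally, so the private
    # documents never leave the machine.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical private corpus; in practice, load your own documents.
    private_docs = [
        "Q3 trial results showed a 12% response rate in cohort B.",
        "Manufacturing batch 113 failed the sterility check.",
        "Device firmware 2.4 fixes the sensor drift issue.",
    ]

    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(private_docs)  # "train" locally

    def interrogate(question: str, top_k: int = 1) -> list[str]:
        """Return the top_k private documents most relevant to the question."""
        q_vec = vectorizer.transform([question])
        scores = cosine_similarity(q_vec, doc_matrix)[0]
        ranked = sorted(range(len(private_docs)),
                        key=lambda i: scores[i], reverse=True)
        return [private_docs[i] for i in ranked[:top_k]]

    print(interrogate("What happened with the sterility check?"))
    ```

    In practice you would swap the retriever for a locally hosted or locally fine-tuned model, but the data-stays-local property being argued for is the same.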

  • I saw a paper a while back that argued that AI systems are being used as "moral crumple zones". For example, an AI used for health insurance claims allows the company to reject medically necessary procedures without employees incurring as much moral injury in the process (even low-level customer service reps are likely to find comfort in being able to defer to the system). It's an interesting concept that I've thought about a lot since I found it.

    I can absolutely see that. And I don't think it's AI-specific; it's got to do with delegating responsibility to a machine. Of course, AI in the guise of LLMs can make things worse with its low interpretability, where it might be even harder to trace anything back to an executive or clerical decision.

  • I wouldn't define flipping coins as decision making. Especially when it comes to blanket governmental policy that has the potential to kill (or severely disable) millions of people.

    You seem to not want any people to teach you anything. And you are somehow completely dejected by such perceived attempts.

    You seem to not want any people to teach you anything.

    No, I don't seem that. I don't like being ascribed opinions I haven't expressed.

    I wouldn’t define flipping coins as decision making. Especially when it comes to blanket governmental policy that has the potential to kill (or severely disable) millions of people.

    When your goal is to avoid a certain most harmful subset of such decisions, and living humans are always pressured by power and corrupt profit to pick exactly that subset, flipping coins is preferable, if those are the two options we're choosing between.

  • Oh good, a 60% chance you’ll get an ineffective or killer drug because they’ll use AI to analyze the usage and AI to report on it.

    That is an underestimate, since it doesn't factor in the knock-on effect of the more lax regulations: people will try to sell all kinds of crap as "medicine".

  • Text to avoid paywall

    The Food and Drug Administration is planning to use artificial intelligence to “radically increase efficiency” in deciding whether to approve new drugs and devices, one of several top priorities laid out in an article published Tuesday in JAMA.

    Another initiative involves a review of chemicals and other “concerning ingredients” that appear in U.S. food but not in the food of other developed nations. And officials want to speed up the final stages of making a drug or medical device approval decision to mere weeks, citing the success of Operation Warp Speed during the Covid pandemic when workers raced to curb a spiraling death count.

    “The F.D.A. will be focused on delivering faster cures and meaningful treatments for patients, especially those with neglected and rare diseases, healthier food for children and common-sense approaches to rebuild the public trust,” Dr. Marty Makary, the agency commissioner, and Dr. Vinay Prasad, who leads the division that oversees vaccines and gene therapy, wrote in the JAMA article.

    The agency plays a central role in pursuing the agenda of the U.S. health secretary, Robert F. Kennedy Jr., and it has already begun to press food makers to eliminate artificial food dyes. The new road map also underscores the Trump administration’s efforts to smooth the way for major industries with an array of efforts aimed at getting products to pharmacies and store shelves quickly.

    Some aspects of the proposals outlined in JAMA were met with skepticism, particularly the idea that artificial intelligence is up to the task of shearing months or years from the painstaking work of examining applications that companies submit when seeking approval for a drug or high-risk medical device.

    “I don’t want to be dismissive of speeding reviews at the F.D.A.,” said Stephen Holland, a lawyer who formerly advised the House Committee on Energy and Commerce on health care. “I think that there is great potential here, but I’m not seeing the beef yet.”

    It's what AI is supposed to be used for, but it maybe isn't good enough.

  • It doesn't. I understand the actual technology. There are applications of human decision making where it's possibly better.

    It kinda seems like you don’t understand the actual technology.

  • 254 votes · 67 posts · 0 views
    Maybe you're right: is there verification? Neither content policy (YouTube or TikTok) clearly lays out rules on those words. I only find unverified claims: some write it started at YouTube, others claim TikTok. They claim YouTube demonetizes & TikTok shadowbans. They generally agree content restrictions by these platforms led to the propagation of circumspect shit like "unalive" & "SA".

    TikTok's policy outlines their moderation methods, which include removal and ineligibility for the For You feed. Given their policy on self-harm & automated removal of potential violations, their policy is to effectively & recklessly censor such language.

    Generally, censorship is suppression of expression. Censorship doesn't exclusively mean content removal, though they're doing that, too. (Digression: revisionism & whitewashing are forms of censorship.) Regardless of how they censor or induce self-censorship, they're chilling inoffensive language pointlessly. While as private entities they are free to moderate as they please, it's unnecessary & the effect is an obnoxious affront to self-expression that's contorting language for the sake of avoiding idiotic restrictions.
  • 1 vote · 2 posts · 2 views
    If you're a developer, a startup founder, or part of a small team, you've poured countless hours into building your web application. You've perfected the UI, optimized the database, and shipped features your users love. But in the rush to build and deploy, a critical question often gets deferred: is your application secure? For many, the answer is a nervous "I hope so." The reality is that without a proper defense, your application is exposed to a barrage of automated attacks hitting the web every second. Threats like SQL Injection, Cross-Site Scripting (XSS), and Remote Code Execution are not just reserved for large enterprises; they are constant dangers for any application with a public IP address.

    The Security Barrier: When Cost and Complexity Get in the Way

    The standard recommendation is to place a Web Application Firewall (WAF) in front of your application. A WAF acts as a protective shield, inspecting incoming traffic and filtering out malicious requests before they can do any damage (a toy sketch of that request-inspection idea follows this post). It's a foundational piece of modern web security. So, why doesn't everyone have one? Historically, robust WAFs have been complex and expensive. They required significant budgets, specialized knowledge to configure, and ongoing maintenance, putting them out of reach for students, solo developers, non-profits, and early-stage startups. This has created a dangerous security divide, leaving the most innovative and resource-constrained projects the most vulnerable. But that is changing.

    Democratizing Security: The Power of a Community WAF

    Security should be a right, not a privilege. Recognizing this, the landscape is shifting towards more accessible, community-driven tools. The goal is to provide powerful, enterprise-grade protection to everyone, for free. This is the principle behind the HaltDos Community WAF. It's a no-cost, perpetually free Web Application Firewall designed specifically for the community that has been underserved for too long. It's not a stripped-down trial version; it's a powerful security tool designed to give you immediate and effective protection against the OWASP Top 10 and other critical web threats.

    What Can You Actually Do with It?

    With a community WAF, you can deploy a security layer in minutes that:

    • Blocks malicious payloads: get instant, out-of-the-box protection against common attack patterns like SQLi, XSS, RCE, and more.
    • Stops bad bots: prevent malicious bots from scraping your content, attempting credential stuffing, or spamming your forms.
    • Gives you visibility: a real-time dashboard shows you exactly who is trying to attack your application and what methods they are using, providing invaluable security intelligence.
    • Allows customization: you can add your own custom security rules to tailor the protection specifically to your application's logic and technology stack.

    The best part? It can be deployed virtually anywhere: on-premises, in a private cloud, or with any major cloud provider like AWS, Azure, or Google Cloud.

    Get Started in Minutes

    You don't need to be a security guru to use it. The setup is straightforward, and the value is immediate. Protecting the project you've worked so hard on is no longer a question of budget.

    • Download: get the free Community WAF from the HaltDos site.
    • Deploy: follow the simple instructions to set it up with your web server (it's compatible with Nginx, Apache, and others).
    • Secure: watch the dashboard as it begins to inspect your traffic and block threats in real time.

    Security is a journey, but it must start somewhere. For developers, startups, and anyone running a web application on a tight budget, a community WAF is the perfect first step. It's powerful, it's easy, and it's completely free.
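
    Since the post above describes a WAF as inspecting incoming traffic and filtering malicious requests before they reach the application, here is a toy sketch of that core idea in plain Python. This is not HaltDos's engine or rule set: the two regex "signatures" and the `waf_middleware` helper are deliberately naive, hypothetical examples, and a real WAF uses far more sophisticated detection.

    ```python
    # Toy illustration of what a WAF does at its core: inspect each
    # request before it reaches the application and reject ones that
    # match known-bad patterns. The patterns below are deliberately
    # naive example signatures, not a production rule set.
    import re
    from urllib.parse import unquote
    from wsgiref.simple_server import make_server

    BAD_PATTERNS = [
        re.compile(r"union\s+select", re.IGNORECASE),  # crude SQLi signature
        re.compile(r"<script\b", re.IGNORECASE),       # crude XSS signature
    ]

    def app(environ, start_response):
        # The "protected" application behind the filter.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"hello from the protected app\n"]

    def waf_middleware(inner_app):
        def filtered(environ, start_response):
            query = unquote(environ.get("QUERY_STRING", ""))
            if any(p.search(query) for p in BAD_PATTERNS):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"blocked by request filter\n"]
            return inner_app(environ, start_response)
        return filtered

    if __name__ == "__main__":
        # e.g. curl 'http://localhost:8000/?q=union+select+1' -> 403
        make_server("localhost", 8000, waf_middleware(app)).serve_forever()
    ```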
  • 180 votes · 13 posts · 0 views
    There is a huge difference between an algorithm using real-world data to produce a score that a panel of experts uses to make a determination, and using an LLM to screen candidates. One has verifiable, reproducible results that can be checked and debated; the other does not (a small sketch of that contrast follows this comment). The final call does not matter if a computer program using an unknown and unreproducible algorithm screens you out before it. This is what we are facing: pre-determined decisions that human beings are not being held accountable for. Is this happening right now? Yes it is, without a doubt. People are no longer making a lot of the healthcare decisions that determine insurance coverage. Computers that are not accountable are. You may have some ability to disagree, but for how long? Soon there will be no way to reach a human about an insurance decision. This is already happening. People should be very anxious. Hearing that United Healthcare has been forging DNRs and denying things like stroke treatment for elders is disgusting. We have major issues that are not going away, and we are blatantly ignoring them.
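
    A small sketch of the contrast drawn above, under hypothetical factors and weights: a transparent scoring algorithm is deterministic and auditable, which is exactly what an LLM screen with opaque weights and sampled outputs is not.

    ```python
    # Sketch of the distinction: a transparent scoring rule is
    # reproducible and auditable, since every factor and weight can be
    # read, checked, and debated. The factors and weights here are
    # hypothetical, for illustration only.
    WEIGHTS = {
        "prior_hospitalizations": 2.0,
        "abnormal_lab_results": 1.5,
        "physician_recommendation": 3.0,
    }

    def risk_score(case: dict[str, float]) -> float:
        """Deterministic: the same case always yields the same score,
        and the contribution of each factor is inspectable."""
        return sum(WEIGHTS[k] * case.get(k, 0.0) for k in WEIGHTS)

    case = {
        "prior_hospitalizations": 1,
        "abnormal_lab_results": 2,
        "physician_recommendation": 1,
    }
    print(risk_score(case))  # always 8.0; an expert panel can verify why

    # An LLM screen, by contrast, typically samples from a distribution
    # (nonzero temperature) and depends on opaque weights, so the same
    # input can yield different outputs and no factor-level audit trail.
    ```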
  • The world could experience a year above 2°C of warming by 2029

    201 votes · 17 posts · 9 views
    sattarip@lemmy.blahaj.zone
    Thank you for the clarification.
  • 763 votes · 187 posts · 10 views
    Not being a coward.
  • 177 votes · 71 posts · 4 views
    I have zero problems with this on Lineage. No spoofing either, just Lineage.
  • 278 votes · 100 posts · 2 views
    It's not just skills, it's also capital investment.
  • *deleted by creator*

    0 votes · 1 post · 1 view
    No one has replied