Senators Demand Meta Answer For AI Chatbots Posing as Licensed Therapists

Technology
  • Last week, U.S. Senator Cory Booker (D-NJ), along with Senators Alex Padilla (D-CA), Peter Welch (D-CT), and Adam Schiff (D-CA) sent a letter to executives at Meta expressing concern about reports that AI chatbots created by Meta’s Instagram Studio are pretending to be licensed therapists, even fabricating credentials and license numbers, in an attempt to gain trust from users, potentially including minors, struggling with mental health.

  • Last week, U.S. Senator Cory Booker (D-NJ), along with Senators Alex Padilla (D-CA), Peter Welch (D-CT), and Adam Schiff (D-CA) sent a letter to executives at Meta expressing concern about reports that AI chatbots created by Meta’s Instagram Studio are pretending to be licensed therapists, even fabricating credentials and license numbers, in an attempt to gain trust from users, potentially including minors, struggling with mental health.

    Honestly, that's a really sketchy thing to do. But if someone is really listening to an AI chatbot for therapy, then they've got bigger problems in their lives.

  • Honestly, that's a really sketchy thing to do. But if someone is really listening to an AI chatbot for therapy, then they've got bigger problems in their lives.

    So it’s okay to make it worse?

  • So it’s okay to make it worse?

    No? I'm just saying that it's unreasonable to trust chatbots to do anything properly, certainly not with one's mental health. If someone is listening to an AI chatbot for therapy, they probably don't have good friends, and certainly not the money for legitimate therapy.

  • I'm a real-life human therapist (honest!) and while I don't think it's a substitute for talking to a real person, I'm happy that some people get some benefit from chatbots. I had a client who used Rosebud Journal in between sessions and found it helpful. I tried out Rosebud myself and I was very impressed with how it replicated the basics like reflective listening and validation. It was even able to reframe my input using various therapy models when I requested it. I didn't use it for long because I'm not big on journaling, but I wouldn't dismiss it completely as a tool.

  • No? I'm just saying that it's unreasonable to trust chatbots to do anything properly, certainly not with one's mental health. If someone is listening to an AI chatbot for therapy, they probably don't have good friends, and certainly not the money for legitimate therapy.

    I mean, not everyone knows how these systems work so it’s not unreasonable to expect someone to believe the marketing.

    You’re right the issues go deeper than just AI systems, but the fake AI therapists are not helping.

  • I'm a real-life human therapist (honest!) and while I don't think it's a substitute for talking to a real person, I'm happy that some people get some benefit from chatbots. I had a client who used Rosebud Journal in between sessions and found it helpful. I tried out Rosebud myself and I was very impressed with how it replicated the basics like reflective listening and validation. It was even able to reframe my input using various therapy models when I requested it. I didn't use it for long because I'm not big on journaling, but I wouldn't dismiss it completely as a tool.

    I'm not worried about what it gets right, I'm worried about what it gets wrong. If it helps people, then that's a good thing. They don't have true empathy, and the user knows that. Sometimes, human experience is more valuable than technical psychological knowledge imo. ChatGPT has never experienced the death of a family member, been broken up with, been bullied, anything. I don't really expect it or trust it to properly help anyone with any personal issues or dilemmas. It's a cold, uncaring machine, and as its knowledge is probably rather flawed, it could even teach dangerous ideas to users. I especially don't trust a company like Meta to be doing this thoroughly and to truly help their patients. It's cool if it works, but dangerous if it doesn't.

  • I'm a real-life human therapist (honest!) and while I don't think it's a substitute for talking to a real person, I'm happy that some people get some benefit from chatbots. I had a client who used Rosebud Journal in between sessions and found it helpful. I tried out Rosebud myself and I was very impressed with how it replicated the basics like reflective listening and validation. It was even able to reframe my input using various therapy models when I requested it. I didn't use it for long because I'm not big on journaling, but I wouldn't dismiss it completely as a tool.

    How do you feel about all the kids committing suicide after interacting with AI?

  • How do you feel about all the kids committing suicide after interacting with AI?

    I don't know about the OP, but that would be fucking fantastic! What a bullshit question

  • I don't know about the OP, but that would be fucking fantastic! What a bullshit question

    It is a bullshit question in reply to a bullshit statement. OP was not involved.

  • Yeah those people without the money or friends should just not be heard /s

  • Last week, U.S. Senator Cory Booker (D-NJ), along with Senators Alex Padilla (D-CA), Peter Welch (D-CT), and Adam Schiff (D-CA) sent a letter to executives at Meta expressing concern about reports that AI chatbots created by Meta’s Instagram Studio are pretending to be licensed therapists, even fabricating credentials and license numbers, in an attempt to gain trust from users, potentially including minors, struggling with mental health.

    Better than BetterHelp.

  • I'm not worried about what it gets right, I'm worried about what it gets wrong. If it helps people, then that's a good thing. They don't have true empathy, and the user knows that. Sometimes, human experience is more valuable than technical psychological knowledge imo. ChatGPT has never experienced the death of a family member, been broken up with, been bullied, anything. I don't really expect it or trust it to properly help anyone with any personal issues or dilemmas. It's a cold, uncaring machine, and as its knowledge is probably rather flawed, it could even teach dangerous ideas to users. I especially don't trust a company like Meta to be doing this thoroughly and to truly help their patients. It's cool if it works, but dangerous if it doesn't.

    Oh I don't at all support what Meta has done, and I don't trust any company not to harm and exploit users. I was responding to your comment by saying that talking to a chatbot doesn't necessarily indicate that someone has "bigger problems." If they're not in a crisis, and they have reasonable expectations for the chatbot, I can see how it could be a helpful tool. If someone doesn't have access to a real therapist, and a chatbot helps them feel better in the meantime, I'm not going to gatekeep that experience.

  • Last week, U.S. Senator Cory Booker (D-NJ), along with Senators Alex Padilla (D-CA), Peter Welch (D-CT), and Adam Schiff (D-CA) sent a letter to executives at Meta expressing concern about reports that AI chatbots created by Meta’s Instagram Studio are pretending to be licensed therapists, even fabricating credentials and license numbers, in an attempt to gain trust from users, potentially including minors, struggling with mental health.

    One thing to note is that I’m pretty sure these are user-generated chatbots and not official Meta therapy chatbots.

  • Perhaps some people can't afford it. I have the luxury of paying for weekly therapy, but it's probably one of my biggest line-item expenses.

  • Last week, U.S. Senator Cory Booker (D-NJ), along with Senators Alex Padilla (D-CA), Peter Welch (D-CT), and Adam Schiff (D-CA) sent a letter to executives at Meta expressing concern about reports that AI chatbots created by Meta’s Instagram Studio are pretending to be licensed therapists, even fabricating credentials and license numbers, in an attempt to gain trust from users, potentially including minors, struggling with mental health.

    Does it mean that some people take orders from AI and don't know it's AI?
