[paper] Evidence of a social evaluation penalty for using AI

Technology
  • Significance

    As AI tools become increasingly prevalent in workplaces, understanding the social dynamics of AI adoption is crucial. Through four experiments with over 4,400 participants, we reveal a social penalty for AI use: Individuals who use AI tools face negative judgments about their competence and motivation from others. These judgments manifest as both anticipated and actual social penalties, creating a paradox where productivity-enhancing AI tools can simultaneously improve performance and damage one’s professional reputation. Our findings identify a potential barrier to AI adoption and highlight how social perceptions may reduce the acceptance of helpful technologies in the workplace.

    Abstract

    Despite the rapid proliferation of AI tools, we know little about how people who use them are perceived by others. Drawing on theories of attribution and impression management, we propose that people believe they will be evaluated negatively by others for using AI tools and that this belief is justified. We examine these predictions in four preregistered experiments (N = 4,439) and find that people who use AI at work anticipate and receive negative evaluations regarding their competence and motivation. Further, we find evidence that these social evaluations affect assessments of job candidates. Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs.

  • This apparent tension between AI’s documented benefits

    That is one hell of an assumption to make, that AI is actually a benefit at work, or even a documented one, especially compared to a professional in the same job doing the work themselves.

  • This apparent tension between AI’s documented benefits

    A rudimentary Internet search will provide a good bit of the "AI benefits at work" documentation you seek. 🤷‍♂️

  • This apparent tension between AI’s documented benefits

    A benefit of AI is that it's faster than a human. On the other hand, it can be wrong.

  • This apparent tension between AI’s documented benefits

    I think it's honestly pretty undeniable that AI can be a massive help in the workplace. Not all jobs, sure, but using it to automate toil is incredibly useful.

  • I find this kind of work very important in discussions of AI adoption.

    I've been generating the boring parts of work documents via AI, and even though I put a lot of thought into my prompts and reviewed and adjusted the output each time, I kept wondering whether people would notice the AI parts, and whether that made me look more efficient and 'complete' (we're talking about a template document where some parts seem designed to be repetitive), or lazy and disrespectful.
    It's certainly true that my own trust in the content, and in the person, drops when I notice auto-generated parts, which in turn prompts me to use AI myself and ask it to summarise all that verbose AI-generated content.
    I'm not sure that's how encoder-decoders are meant to work 🙂

  • I don't think that people who use AI tools are idiots. I think that some of my coworkers are idiots, and their use of AI has just solidified that belief. They keep pasting AI answers to nuanced questions without validating the responses themselves.

  • This apparent tension between AI’s documented benefits

    It's nice for hints while programming, but that's mostly because search engines suck.
