
[paper] Evidence of a social evaluation penalty for using AI

Technology
  • Significance

    As AI tools become increasingly prevalent in workplaces, understanding the social dynamics of AI adoption is crucial. Through four experiments with over 4,400 participants, we reveal a social penalty for AI use: Individuals who use AI tools face negative judgments about their competence and motivation from others. These judgments manifest as both anticipated and actual social penalties, creating a paradox where productivity-enhancing AI tools can simultaneously improve performance and damage one’s professional reputation. Our findings identify a potential barrier to AI adoption and highlight how social perceptions may reduce the acceptance of helpful technologies in the workplace.

    Abstract

    Despite the rapid proliferation of AI tools, we know little about how people who use them are perceived by others. Drawing on theories of attribution and impression management, we propose that people believe they will be evaluated negatively by others for using AI tools and that this belief is justified. We examine these predictions in four preregistered experiments (N = 4,439) and find that people who use AI at work anticipate and receive negative evaluations regarding their competence and motivation. Further, we find evidence that these social evaluations affect assessments of job candidates. Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs.

  • This apparent tension between AI’s documented benefits

    That is one hell of an assumption to make: that AI is actually a benefit at work, or even a documented one, especially compared with a professional in the same job doing the work themselves.

  • A rudimentary Internet search will provide a good bit of the "AI benefits at work" documentation you seek. 🤷‍♂️

  • A benefit of AI is that it's faster than a human. On the other hand, it can be wrong.

  • I think it's honestly pretty undeniable that AI can be a massive help in the workplace. Not in all jobs, sure, but using it to automate toil is incredibly useful.

  • I find this kind of work very important when talking about AI adoption.

    I've been generating the boring parts of work documents via AI, and even though I put a lot of thought into my prompts and reviewed and adjusted the output each time, I kept wondering whether people would notice the AI parts, and whether that made me look more efficient and 'complete' (we are talking about a template document where some parts seem designed to be repetitive), or lazy and disrespectful.
    Because my own trust in the content, and in the person, definitely drops when I notice auto-generated parts, which in turn prompts me to use AI myself and ask it to summarise all that verbose AI-generated content.
    I'm not sure that's how encoder-decoders are meant to work 🙂
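    For what it's worth, a minimal sketch of that draft-then-review loop (purely illustrative: call_llm(), the section names, and the helper functions below are hypothetical stand-ins, not any particular vendor's API):

```python
# Hypothetical sketch of the draft-and-review workflow described above.
# call_llm() is a placeholder for whatever model or vendor SDK you actually use.
def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real API call here.
    return f"[model draft for: {prompt.splitlines()[0]}]"

# Illustrative template sections; real documents will differ.
TEMPLATE_SECTIONS = ["Background", "Scope", "Risks", "Rollback plan"]

def draft_sections(context: str) -> dict[str, str]:
    """Ask the model for a first draft of each repetitive template section."""
    drafts = {}
    for section in TEMPLATE_SECTIONS:
        prompt = (
            f"Draft the '{section}' section of a change-request document.\n"
            f"Context: {context}\n"
            "Keep it short and factual; flag anything you are unsure about."
        )
        drafts[section] = call_llm(prompt)
    return drafts

def review(drafts: dict[str, str]) -> dict[str, str]:
    """The human step: every draft is read and edited before it goes anywhere."""
    reviewed = {}
    for section, text in drafts.items():
        print(f"--- {section} ---\n{text}\n")
        edited = input("Edit (or press Enter to keep as-is): ")
        reviewed[section] = edited or text
    return reviewed

if __name__ == "__main__":
    final = review(draft_sections("Routine dependency upgrade, low risk"))
    print(final)
```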

  • I don't think that people who use AI tools are idiots. I think that some of my coworkers are idiots, and their use of AI has just solidified that belief. They keep pasting AI answers to nuanced questions without validating the responses themselves.

  • It's nice for hints while programming. But that's mostly because search engines suck.

  • Microsoft Teams will soon block screen capture during meetings

    Technology
    Well, 'proven wrong' is a bit of a stretch. 'Will soon block screen capture' doesn't leave a lot of wiggle room, but it also isn't that crazy to read into it that maybe it would block screen capture on the presenter's screen, especially if you grant that it might only have control over the Teams portion of the screen. I've had it black out windows on my own machine even when not presenting. But beyond that, it's not fair to say everything has to be read from either the most or the least charitable viewpoint. Context is a thing, and if you're even a little bit familiar with the history of software enshittification, it's reasonable to assume that an uncharitable reading is fair, without assuming the app will now melt your computer for spare parts if you try something that is disallowed. 'As shitty as we can get away with' might be a good rule of thumb.
  • I use it for my self-hosted apps, but yeah, it's rarely useful for websites in the wild.
  • X blocks 8,000 accounts in India under government order

    Technology
    gsus4@mander.xyz
    'Member Aug 6, 2024: https://www.ft.com/content/31919b4e-4a5a-4eba-ada7-88d3fec455f8 ;D "UK faces resistance from X over taking down disinformation during riots. Social media site owner Elon Musk has also been posting jibes at UK Prime Minister Keir Starmer." Waiting to see those jibes at Modi... And who could forget April 11, 2024: https://apnews.com/article/brazil-musk-x-twitter-moraes-bef06c0dbbb8ed87495b1afbb0edf211 "What to know about Elon Musk’s ‘free speech’ feud with a Brazilian judge." Gotta see that feud with Indian judges; nobody asked him to block 8,000 accounts, including Western media outlets. Whatever is he gonna do?
  • Instacart CEO Fidji Simo is joining OpenAI as CEO of Applications

    Technology
    paraphrand@lemmy.world
    "overseeing product development for Facebook Video" So she’s the one who oversaw the misleading Facebook Video numbers that destroyed a whole swath of websites?
  • Yesterday on Reddit I saw a photo a patient shot over the shoulder of his doctor, showing the doctor's computer monitor. It was full of ChatGPT diagnosis requests. https://www.reddit.com/r/ChatGPT/comments/1keqstk/doctor_using_chatgpt_for_a_visit_due_to_knife_cut/
  • Windows Is Adding AI Agents That Can Change Your Settings

    Technology
    Edit: no, wtf am I doing. The thread was about how inept the coders were. Here is your answer: they were so fucking inept they broke a fundamental function and it made it to production. Then they did it deliberately. That's how inept they are. End of.
  • Being “locked down” is irrelevant for a device used to read and write on. All those devices are also significantly more powerful than this thing. They all also have keyboard attachments readily available across all sizes and prices. Linux isn’t at all necessary for the use cases the author talks about. Windows would be massively overkill.
  • Microsoft's AI Secretly Copying All Your Private Messages

    Technology
    Forgive me for not explaining better. Here are the terms potentially needing explanation.
    Provisioning in this case is initial system setup, the kind of stuff you would do manually after a fresh install, but it usually implies a regimented and repeatable process.
    Virtual Machine (VM) snapshots are like a save state in a game, and are often used to reset a virtual machine to a particular known-working condition.
    Preboot Execution Environment (PXE, aka ‘network boot’) is a network adapter feature that lets you boot a physical machine from a hosted network image rather than the usual installation on locally attached storage. It’s probably tucked away in your BIOS settings, but many computers have the feature since it’s a common requirement in commercial deployments. As with the VM snapshot described above, a PXE image is typically a known-working state that resets on each boot.
    Non-virtualized means not using hardware virtualization, and I meant specifically not running inside a virtual machine.
    Local-only means without a network, or just not booting from a network-hosted image.
    Telemetry refers to data-collecting functionality. Most software has it. Windows has a lot. Telemetry isn’t necessarily bad, since it can, for example, help reveal and resolve bugs and usability problems, but it is easily (and has often been) abused by data-hungry corporations like MS, so disabling it is an advisable precaution.
    MS = Microsoft
    OSS = Open Source Software
    Group policies are administrative settings in Windows that control standards (for stuff like security, power management, licensing, file system and settings access, etc.) for user groups on a machine or network. Most users stick with the defaults, but you can edit these yourself for a greater degree of control.
    Docker lets you run software inside “containers” to isolate them from the rest of the environment, exposing and/or virtualizing just the resources they need to run, and Compose is a related tool for defining one or more of these containers, how they interact, etc. To my knowledge there is no one-to-one equivalent for Windows.
    Obviously, many of these concepts relate to IT work, as do the use cases I had in mind, but the software is simple enough for the average user if you just pick one of the premade playbooks. (The Atlas playbook is popular among gamers, for example.)
    Edit: added explanations for Docker and telemetry
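    To make the Docker part a bit more concrete, here is a minimal, hypothetical sketch using the Docker SDK for Python (docker-py). The image, container name, and port mapping are placeholders rather than anything from the setup described above, and it assumes Docker is installed and the daemon is running.

```python
# Minimal sketch of the "containers isolate software" idea, using the Docker SDK
# for Python (docker-py). Everything here is illustrative, not a recommended
# configuration: the image, container name, and port mapping are placeholders.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Start an nginx container in the background, exposing only one port to the host.
web = client.containers.run(
    "nginx:alpine",
    detach=True,
    name="demo-web",
    ports={"80/tcp": 8080},  # host port 8080 -> container port 80
)

print(web.name, web.status)

# Tear it down again; nothing else on the host was touched.
web.stop()
web.remove()
```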