Study finds AI tools made open source software developers 19 percent slower

Technology
  • This does not seem surprising to me:

    "Overall, the developers in the study accepted less than 44 percent of the code generated by AI without modification. A majority of the developers reported needing to make changes to the code generated by their AI companion, and a total of 9 percent of the total task time in the "AI-assisted" portion of the study was taken up by this kind of review."

    It sounds about right. The AI should be acting as an assistant. The big question to me is whether the code that comes out 19% slower is of higher quality. Since the coder is doing more correction and review, does it act a bit like a second set of eyes, or a sort of faux collaboration? If so, it could still be helpful. Granted, my experience so far is that most of what it does can be done with plugins for an IDE, but it is sort of handy to have it all set up and going after an installation without having to find and start using the plugins. I'm still worried about energy usage with these things, but I'm hoping that can be worked out, and honestly I'm not sure if the energy usage for something integrated with an IDE is as bad.

    As a fairly senior developer, I'm not at all surprised. AI speeds me up in some circumstances, like writing boilerplate: things like Kubernetes manifests. It does not speed up my coding, but it does help me explore options, expand my knowledge, and point me down the right track on new methods and packages. It also lets me do things I wouldn't normally bother with, but which are good practice, like finding edge cases for unit tests, packaging for multiple architectures, writing scripts to profile my code, etc.

    Essentially, I'm likely slower writing code with AI assistance, but I think the code is higher quality because it lets me quickly assess many options and implement best practices that are normally tedious to implement manually.

    I almost never accept code AI has written without modification, but I think I gain a lot from its use.
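    The edge-case hunting mentioned above is the kind of tedious-but-valuable work an assistant can draft quickly. A minimal Python sketch, where `slugify` and every test case are hypothetical stand-ins for whatever function is actually under test:

```python
# Hypothetical example: AI-drafted edge-case tests for a small utility.
# `slugify` and the cases below are invented for illustration.

import re

def slugify(text: str) -> str:
    """Lowercase, replace runs of non-alphanumerics with '-', trim dashes."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Edge cases a human might skip but an assistant will happily enumerate.
edge_cases = {
    "Hello, World!": "hello-world",
    "": "",                      # empty input
    "---": "",                   # delimiter-only input
    "  spaced   out  ": "spaced-out",
    "Ünïcödé": "n-c-d",          # non-ASCII is dropped, not transliterated
    "already-a-slug": "already-a-slug",
}

for raw, expected in edge_cases.items():
    assert slugify(raw) == expected, (raw, slugify(raw))
```

    The value is less in the trivial implementation than in the enumerated inputs: empty strings, delimiter-only strings, and non-ASCII text are exactly the cases a reviewer still has to sanity-check, as the comment says.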

  • Coders spent more time prompting and reviewing AI generations than they saved on coding. On the surface, METR's results seem to contradict other benchmarks and experiments that demonstrate increases in coding efficiency when AI tools are used. But those often also measure productivity in terms of total lines of code or the number of discrete tasks/code commits/pull requests completed, all of which can be poor proxies for actual coding efficiency. These factors lead the researchers to conclude that current AI coding tools may be particularly ill-suited to "settings with very high quality standards, or with many implicit requirements (e.g., relating to documentation, testing coverage, or linting/formatting) that take humans substantial time to learn." While those factors may not apply in "many realistic, economically relevant settings" involving simpler code bases, they could limit the impact of AI tools in this study and similar real-world situations.

    They can't read your mind. A professional painter is going to make the exact image they want in far less time and with more accuracy than repeatedly prompting a black box to make small changes.

    But if you're an amateur and don't really know what you want, or you're not very picky or care about quality, then meh good enough. High level software developers know what they want. They are like painters. And at that point, the LLM isn't really solving problems for you. At best, it's putting the paint to the canvas. That is, saving you typing time.

    But time spent typing is definitely not the limiting factor for productivity in software.

  • They can't read your mind. A professional painter is going to make the exact image they want in far less time and with more accuracy than repeatedly prompting a black box to make small changes.

    And this is the exact reason why I hate IDEs that relentlessly "do things" for me.

    I don't need my editor maintaining my includes or updating my lock files. I don't need it to auto-complete words or fix syntax for me.

    I know exactly what I'm doing. If I don't, then, and ONLY then, will I look up what I need and fix it myself.

    If there's a problem with formatting, a linter will pick it up. If there's a problem with syntax, the runtime/compiler will pick it up. If there's a problem with content, UAT will pick it up.

    We don't need to be MORE productive, we need to be more skilled, and using tools like these only softens the mind and dulls the spirit.

  • Maybe they're making soup

  • Coders spent more time prompting and reviewing AI generations than they saved on coding. […]

    Great as an assistant for boring tasks. Still needs checking.

    Can also help suggest improvements, but still needs checking.

    Have to learn when to stop interacting with it and do it yourself.

  • Maybe they're making soup

    You can tell it’s code soup by the smell

  • Coders spent more time prompting and reviewing AI generations than they saved on coding. […]

    Their sample size was 16 people...

  • Coders spent more time prompting and reviewing AI generations than they saved on coding. […]

    Sounds reasonable. The time and energy I've lost trying very confident ChatGPT suggestions that don't work must be weeks at this point.

    Sometimes it's very good though and really helps, which is why it's so frustrating. You never know if it's going to work before you go through the process.

    It has changed how my coworkers and I work now, too. We just talk to ChatGPT instead of even trying to look something up in the docs and understand it. That feels too slow now. There is a pressure to solve everything quickly now that ChatGPT exists.

  • Their sample size was 16 people...

    I got flamed pretty hard for pointing out that this sample size really needs to be in the title, but it needs to be said. Thank you. Sixteen people is basically a forum thread, and not a very popular one.

    It’s still useful information and a good read, but a lot of people don’t click through to the article, they just remember the title and move on.

  • Their sample size was 16 people...

    I'm not really sure why it was such a small sample size. It definitely casts doubt on some of their conclusions. I also have issues with some methodology used. I think a better study that came out a week or two ago was the one that showed visible neurological decline from AI use.

  • Great as an assistant for boring tasks. Still needs checking. […]

    A "junior" project manager at my company vibe coded an entire full stack web app with one of those LLM IDEs. His background is industrial engineering and claims to have basically no programming experience.

    It "works", as in, it does what it's meant to, but as you can guess, it relies on calls to LLM APIs where it really doesn't have to, and has several critical security flaws, inconsistencies in project structure and convention, and uses deprecated library features.

    He already pitched it to one of our largest clients, and they're on board. They want to start testing at the end of the month.

    He's had one junior dev who's been managing to keep things somewhat stable, but the poor dude really had his work cut out for him. I only recently joined the project because "it sounded cool", so I've been trying to fix some flaws while adding new requested features.

    I've never worked with the frameworks and libraries before, so it's a good opportunity to upskill, but god damn I don't know if I want my name on this project.

    A similar thing is happening with my brother at a different company. An executive vibe coded a web application, but this thing absolutely did not work.

    My brother basically had one night to get it into a working state. He somehow (ritalin) managed to do it. The next day they presented it to one of their major clients. They really want it.

    These AI dev tools absolutely have a direct negative impact on developer productivity, but they also have an indirect impact where non-devs use them and pass their eldritch abominations to the actual devs to fix, extend, and maintain.

    Two years ago, I was worried about AI taking dev jobs, but now it feels like, to me, we'll need more human devs than ever in the long run.

    Like, weren't these things supposed to get exponentially better? Like, cool, GitHub Copilot can fuck up my project files now.

  • Coders spent more time prompting and reviewing AI generations than they saved on coding. […]

    True and not true at the same time. Using agents indeed often doesn't work, mostly when I'm trying to do the wrong thing. Because then, the AI agent does not say "the way you're doing it is overly complicated, it doesn't make any sense"; instead it says: "excellent idea, here are X steps I need to take to make it happen". It has wasted my time many times, but it has also guided me quickly through problems that would have taken hours to research. Some of my projects wouldn't have been finished without AI.

  • Their sample size was 16 people...

    Who are in the process of learning to do something new, versus the workflow that they've been trained in and have a lot of experience in.

    Where was the sample of non-coders tasked with doing the same thing, using AI to help or learning without assistance?

    Where was the sample of coders prohibited from looking anything up and having to rely solely on their prior knowledge to do the job?

    It might help refine what's actually being tested.

  • Sounds reasonable. The time and energy I've lost trying very confident ChatGPT suggestions that don't work must be weeks at this point. […]

    You have to ignore the obsequious optimism bias LLMs often have. It all comes down to their training set and whether they have seen more than you have.

    I don't generally use them on projects I'm already familiar with unless it's for fairly boring, repetitive work that would be fiddly with search and replace, e.g. extract the common code out of these functions and refactor.

    When working with unfamiliar code they can have an edge, so if I needed a simple mobile app I'd probably give the LLM a go and then tidy up the code once it's working.

    At most I'll give it two or three attempts to correct the original approach before I walk away and try something else. If it starts making up functions or APIs that don't exist, that is usually a sign it didn't know, so time to cut your losses and move on.

    Their real strength is digesting and summarising large amounts of text. Great for saving you from reading all the documentation on a project just to try a small thing. But if you're going to work on the project going forward, you're going to want to absorb that material yourself.
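    The "extract the common code and refactor" chore mentioned above might look like this. A minimal before/after Python sketch; the parsers and field names are invented for illustration:

```python
# Hypothetical before/after for a mechanical extract-and-refactor chore.
# The record shapes and required keys are invented for illustration.

import json

# Before: two functions that differ only in the key they validate.
def parse_user(raw: str) -> dict:
    record = json.loads(raw)
    if "name" not in record:
        raise ValueError("missing name")
    return record

def parse_order(raw: str) -> dict:
    record = json.loads(raw)
    if "sku" not in record:
        raise ValueError("missing sku")
    return record

# After: the shared shape is pulled into one helper. This is the kind of
# rewrite that is fiddly with search-and-replace but easy to delegate,
# and easy to verify once it's done.
def parse_record(raw: str, required_key: str) -> dict:
    record = json.loads(raw)
    if required_key not in record:
        raise ValueError(f"missing {required_key}")
    return record
```

    The point is that the behavior of every call site is trivially checkable against the old functions, which is what makes this category of task safe to hand off.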

  • Coders spent more time prompting and reviewing AI generations than they saved on coding. […]

    The main issue I have with AI coding hasn't been the code itself. It's a bit ham-fisted and overly naive; it is as if it's speed-blind.

    The main issue is that some of the code is out of date, using functions that are deprecated, etc., and it seems to mix paradigms and styles across languages in a very frustrating way.

  • Some of my projects wouldn't have been finished without AI.

    This says way more about you than it says about AI tools

  • True and not true at the same time. […]

    Just make sure you're validating everything you produce with it.

  • Coders spent more time prompting and reviewing AI generations than they saved on coding. […]

    Studies show that electric drills drill faster than a manual, hand-cranked drill.

  • Coders spent more time prompting and reviewing AI generations than they saved on coding. […]

    On a different note: is it just me, or do images with this color scheme (that blue and black) also have a weird 3D look to you?

  • Their sample size was 16 people...

    Where even the most experienced minority had only a few weeks of experience using AI inside an IDE like Cursor.
