Study finds AI tools made open source software developers 19 percent slower

  • A "junior" project manager at my company vibe coded an entire full stack web app with one of those LLM IDEs. His background is industrial engineering, and he claims to have basically no programming experience.

    It "works", as in, it does what it's meant to, but as you can guess, it relies on calls to LLM APIs where it really doesn't have to, and has several critical security flaws, inconsistencies in project structure and convention, and uses deprecated library features.

    He already pitched it to one of our largest clients, and they're on board. They want to start testing at the end of the month.

    He's had one junior dev who's been managing to keep things somewhat stable, but the poor dude really had his work cut out for him. I only recently joined the project because "it sounded cool", so I've been trying to fix some flaws while adding new requested features.

    I've never worked with the frameworks and libraries before, so it's a good opportunity to upskill, but god damn I don't know if I want my name on this project.

    A similar thing is happening with my brother at a different company. An executive vibe coded a web application, but this thing absolutely did not work.

    My brother basically had one night to get it into a working state. He somehow (ritalin) managed to do it. The next day they presented it to one of their major clients. They really want it.

    These AI dev tools absolutely have a direct negative impact on developer productivity, but they also have an indirect impact where non-devs use them and pass their Eldritch abominations to the actual devs to fix, extend and maintain.

    Two years ago, I was worried about AI taking dev jobs, but now it feels to me like we'll need more human devs than ever in the long run.

    Like, weren't these things supposed to get exponentially better? Like, cool, gh copilot can fuck up my project files now.

    These AI dev tools absolutely have a direct negative impact on developer productivity, but they also have an indirect impact where non-devs use them and pass their Eldritch abominations to the actual devs to fix, extend and maintain.

    Sounds like the next evolution of the Excel spreadsheet macro. Or maybe it's convergent evolution toward the same niche. (I still have nightmares about Excel spreadsheet macros.)

  • I like to think typos like that confirm my humanity 🙂

  • Yep I've got a working iOS app, a v.2 branched and on the way, with a ton of MapKit integrations. Unfortunately I'm getting depreciation errors and having to constantly remind the AI that it's using old code, showing it examples of new code, and then watching it forget as we keep talking.

    Still, I have a working iOS app, which only took a few hours. When Jack Dorsey said he'd vibe coded his new app in a long weekend, I'm like, hey me too.

    LLMs can't forget things because they are not capable of memory.

  • I don't doubt this is true. I've been playing with an A.I. and some fairly simple Python scripts, and it's so tedious to get the A.I. to actually do something to the script correctly. Learning to prompt is a skill all its own.

    In my experience it's much more useful for AWS tasks, like creating a CloudFormation template, looking through user permissions for excess privileges, or setting up a backup schedule, especially at scale when you have lots of accounts and users.

    So it's like talking to women...
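
For what it's worth, the at-scale permission audit described above reduces to a simple per-policy check. A minimal sketch, where the user names and policy documents are entirely made up; in a real audit you would fetch them per account with boto3 (e.g. `iam.list_attached_user_policies`) rather than hard-coding them:

```python
# Illustrative sketch only: the users and inline policy documents below are
# invented. The check itself (wildcard Action on wildcard Resource) is the
# standard red flag for excess privileges in IAM-style JSON policies.

def is_over_privileged(policy_doc):
    """Return True if any Allow statement grants all actions on all resources."""
    for stmt in policy_doc.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):      # IAM allows a bare string or a list
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions and "*" in resources:
            return True
    return False

# Hypothetical users mapped to their policy documents.
users = {
    "deploy-bot": {"Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]},
    "read-only": {"Statement": [{"Effect": "Allow",
                                 "Action": ["s3:GetObject"],
                                 "Resource": ["arn:aws:s3:::logs/*"]}]},
}

flagged = [name for name, doc in users.items() if is_over_privileged(doc)]
print(flagged)  # → ['deploy-bot']
```

Looping this over a list of accounts is exactly the kind of repetitive scan where an LLM-drafted script pays off.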

  • I like to think typos like that confirm my humanity 🙂

    shhh don’t let the bots in on our secret

    also now I’m hungry for phở

  • shhh don’t let the bots in on our secret

    also now I’m hungry for phở

    With enough training data from me and chatbots will spell like shit. Bad grammar as well.

  • Coders spent more time prompting and reviewing AI generations than they saved on coding. On the surface, METR's results seem to contradict other benchmarks and experiments that demonstrate increases in coding efficiency when AI tools are used. But those often also measure productivity in terms of total lines of code or the number of discrete tasks/code commits/pull requests completed, all of which can be poor proxies for actual coding efficiency. These factors lead the researchers to conclude that current AI coding tools may be particularly ill-suited to "settings with very high quality standards, or with many implicit requirements (e.g., relating to documentation, testing coverage, or linting/formatting) that take humans substantial time to learn." While those factors may not apply in "many realistic, economically relevant settings" involving simpler code bases, they could limit the impact of AI tools in this study and similar real-world situations.

    Slowing you down is the main benefit!

    It helps you to keep more brain time on solving the actual problem, and less on boring syntax crap. Of course, then it gets the syntax crap wrong and you need to waste a lot of time fixing it.

  • Some of my projects wouldn’t have been finished without AI.

    This says way more about you than it says about AI tools

    I was talking mostly about side projects. I don't have much time for them right now. Thanks to LLMs, I can spend those few hours a week on doing instead of reading about the best way to do X in the ever-changing world of web front-end frameworks. I just sit down, ask "how is it usually done?", tweak it a bit, and finish.

    Example: I published an app on Flathub a while ago. Doing it from scratch is damn complicated. "Screw it" is what I would have said in the pre-LLM era after a few hours 😉

  • With enough training data from me and chatbots will spell like shit. Bad grammar as well.

    The future has not been written. There is no fate but what we make for ourselves.

  • LLMs can't forget things because they are not capable of memory.

    They can hold session memory including 10+ source files, and a looong chat, but when you run into the wall, suddenly it's eating its own memory to keep going, rather than forcing me to reset the session. Which is interesting, like co-coding with a mild amnesiac. "Hey remember when we just did that thing 2 minutes ago?" I should have started a new session when I branched.
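
The "eating its own memory" behavior described above is the context window filling up: once a conversation exceeds the model's token budget, clients quietly drop the oldest turns. A toy sketch of that trimming, using an arbitrary budget and a crude word count as a stand-in for a real tokenizer:

```python
def trim_history(messages, budget=50):
    """Drop the oldest turns until the conversation fits the budget.

    The budget of 50 and the word-count "tokenizer" are invented stand-ins;
    real clients count model tokens, but the forgetting works the same way.
    """
    kept = list(messages)
    while kept and sum(len(m.split()) for m in kept) > budget:
        kept.pop(0)  # the oldest turn is silently forgotten first
    return kept

history = ["word " * 20] * 4       # four 20-word turns: 80 "tokens" total
print(len(trim_history(history)))  # → 2 (the two oldest turns were dropped)
```

Some clients summarize the dropped turns instead of discarding them outright, which is why the model half-remembers things rather than forgetting them cleanly.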

  • Coders spent more time prompting and reviewing AI generations than they saved on coding. On the surface, METR's results seem to contradict other benchmarks and experiments that demonstrate increases in coding efficiency when AI tools are used. But those often also measure productivity in terms of total lines of code or the number of discrete tasks/code commits/pull requests completed, all of which can be poor proxies for actual coding efficiency. These factors lead the researchers to conclude that current AI coding tools may be particularly ill-suited to "settings with very high quality standards, or with many implicit requirements (e.g., relating to documentation, testing coverage, or linting/formatting) that take humans substantial time to learn." While those factors may not apply in "many realistic, economically relevant settings" involving simpler code bases, they could limit the impact of AI tools in this study and similar real-world situations.

    Having to repeatedly tweak and review AI generations is a code smell. Your gut could be telling you to start using your brain to build your code if you're at this stage.