Study finds AI tools made open source software developers 19 percent slower
-
pho
faux
I like to think typos like that confirm my humanity
-
Yep, I've got a working iOS app, a v2 branch on the way, with a ton of MapKit integrations. Unfortunately I'm getting deprecation errors and having to constantly remind the AI that it's using old code, showing it examples of new code, and then watching it forget as we keep talking.
Still, I have a working iOS app, which only took a few hours. When Jack Dorsey said he'd vibe coded his new app in a long weekend, I'm like, hey me too.
LLMs can't forget things because they are not capable of memory.
-
I don't doubt this is true. I've been playing with an A.I. and some fairly simple Python scripts, and it's so tedious to get the A.I. to actually modify the script correctly. Learning to prompt is a skill all its own.
In my experience it's much more useful for AWS tasks, like creating a CloudFormation template, looking through user permissions for excess privileges, or setting up a backup schedule, at scale when you have lots of accounts and users, etc.
So it's like talking to women...
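The "excess privileges at scale" idea above can be sketched in a few lines. This is a hypothetical illustration using inline sample data rather than real account access; in practice you'd pull the policy documents from AWS (e.g., via boto3) across all your accounts, but the flagging logic would look roughly like this:

```python
# Hypothetical sketch: flag IAM-style policy documents that grant
# overly broad permissions (Action "*" allowed on Resource "*").
# The policy dicts below are made-up sample data, not real policies.
def has_excess_privileges(policy):
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        # Action/Resource may be a single string or a list in IAM JSON
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions and "*" in resources:
            return True
    return False

policies = {
    "admin-ish": {"Statement": [{"Effect": "Allow",
                                 "Action": "*", "Resource": "*"}]},
    "scoped":    {"Statement": [{"Effect": "Allow",
                                 "Action": "s3:GetObject",
                                 "Resource": "arn:aws:s3:::my-bucket/*"}]},
}
flagged = [name for name, p in policies.items() if has_excess_privileges(p)]
print(flagged)  # ['admin-ish']
```

Tedious to write by hand for one account, but exactly the kind of repetitive boilerplate an LLM tends to get right.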
-
I like to think typos like that confirm my humanity
shhh don’t let the bots in on our secret
also now I’m hungry for phở
-
shhh don’t let the bots in on our secret
also now I’m hungry for phở
With enough training data from me, chatbots will spell like shit. Bad grammar as well.
-
Coders spent more time prompting and reviewing AI generations than they saved on coding. On the surface, METR's results seem to contradict other benchmarks and experiments that demonstrate increases in coding efficiency when AI tools are used. But those often also measure productivity in terms of total lines of code or the number of discrete tasks/code commits/pull requests completed, all of which can be poor proxies for actual coding efficiency. These factors lead the researchers to conclude that current AI coding tools may be particularly ill-suited to "settings with very high quality standards, or with many implicit requirements (e.g., relating to documentation, testing coverage, or linting/formatting) that take humans substantial time to learn." While those factors may not apply in "many realistic, economically relevant settings" involving simpler code bases, they could limit the impact of AI tools in this study and similar real-world situations.
Slowing you down is the main benefit!
It helps you to keep more brain time on solving the actual problem, and less on boring syntax crap. Of course, then it gets the syntax crap wrong and you need to waste a lot of time fixing it.
-
Some of my projects wouldn’t have been finished without AI.
This says way more about you than it says about AI tools
I was talking mostly about side projects. I don't have much time for them right now. Thanks to LLMs, I can spend those few hours a week on doing instead of reading about the best way to do X in the ever-changing world of web front-end frameworks. I just sit down, ask "how is it usually done?", tweak it a bit, and finish.
Example: I published an app on Flathub a while ago. Doing it from scratch is damn complicated. "Screw it" is what I would have said after a few hours in the pre-LLM era.
-
With enough training data from me, chatbots will spell like shit. Bad grammar as well.
The future has not been written. There is no fate but what we make for ourselves.
-
LLMs can't forget things because they are not capable of memory.
They can hold session memory including 10+ source files and a looong chat, but when you hit the context limit, suddenly it's eating its own memory to keep going rather than forcing me to reset the session. Which is interesting, like co-coding with a mild amnesiac. "Hey, remember when we just did that thing 2 minutes ago?" I should have started a new session when I branched.
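The "eating its own memory" behavior can be illustrated with a toy model. This is a simplification I'm assuming for the sake of the sketch (real systems tokenize properly and may summarize rather than drop), but the core mechanic is the same: once a fixed context budget is exceeded, the oldest messages silently fall out, taking early instructions with them.

```python
from collections import deque

class ToyContextWindow:
    """Toy model of an LLM chat context: once the token budget is
    exceeded, the oldest messages are silently evicted."""
    def __init__(self, max_tokens):
        self.max_tokens = max_tokens
        self.messages = deque()   # (text, token_count) pairs, oldest first
        self.used = 0

    def add(self, text):
        tokens = len(text.split())  # crude stand-in for real tokenization
        self.messages.append((text, tokens))
        self.used += tokens
        # Over budget: drop oldest messages ("eating its own memory"),
        # always keeping at least the newest one.
        while self.used > self.max_tokens and len(self.messages) > 1:
            _, old_tokens = self.messages.popleft()
            self.used -= old_tokens

    def remembers(self, text):
        return any(t == text for t, _ in self.messages)

ctx = ToyContextWindow(max_tokens=10)
ctx.add("use the new MapKit API not the old one")   # 9 "tokens", fits
ctx.add("some long unrelated discussion " * 3)       # 12 "tokens", blows the budget
print(ctx.remembers("use the new MapKit API not the old one"))  # False: instruction evicted
```

Which is why reminding it about the new API keeps working only until the next long stretch of conversation pushes the reminder out again, and why starting a fresh session at the branch point helps.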
-
Coders spent more time prompting and reviewing AI generations than they saved on coding. On the surface, METR's results seem to contradict other benchmarks and experiments that demonstrate increases in coding efficiency when AI tools are used. But those often also measure productivity in terms of total lines of code or the number of discrete tasks/code commits/pull requests completed, all of which can be poor proxies for actual coding efficiency. These factors lead the researchers to conclude that current AI coding tools may be particularly ill-suited to "settings with very high quality standards, or with many implicit requirements (e.g., relating to documentation, testing coverage, or linting/formatting) that take humans substantial time to learn." While those factors may not apply in "many realistic, economically relevant settings" involving simpler code bases, they could limit the impact of AI tools in this study and similar real-world situations.
Having to repeatedly tweak and review AI generations is a code smell. If you're at that stage, your gut may be telling you to start using your own brain to build your code.