AI slows down some experienced software developers, study finds

  • AI tools are way less useful than a junior engineer, and they aren't an investment that turns into a senior engineer either.

    Is “way less useful” something you can cite with a source, or is that just feelings?

  • Even at $100/month you’re comparing to a junior who costs more than $10k/month. That's 1% of the cost for certainly more than 1% of a junior's functionality.

    You can see why companies are tripping over themselves to push this new modality.

    I was just ballparking the salary. Say it’s only 100x. Does the argument change? It’s a lot more money to pay for a real person.
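
A quick sanity check on the arithmetic in this exchange, using the commenters' own ballpark figures rather than real salary data; a minimal Python sketch:

    ai_cost = 100          # $/month, high-end AI subscription (commenter's figure)
    junior_cost = 10_000   # $/month, ballpark junior cost (commenter's figure)

    ratio = junior_cost / ai_cost
    print(f"The tool costs 1/{ratio:.0f} of a junior, so it breaks even at "
          f"{100 / ratio:.0f}% of a junior's usefulness.")

At these figures the tool only has to deliver 1% of a junior's output to break even, which is exactly the comparison being made above.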

  • Just the other day I wasted 3 min trying to get AI to sort 8 lines alphabetically.

    I wouldn’t mention this to anyone at work. It makes you sound clueless.
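
For context on the task itself: sorting a handful of lines alphabetically is a one-liner without any AI, and most editors also have a built-in sort-lines command. A minimal Python sketch:

    lines = ["pear", "apple", "cherry", "banana"]  # stand-ins for the 8 lines
    print("\n".join(sorted(lines)))                # alphabetical order, no model required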

  • Exactly what you would expect from a junior engineer.

    Let them run unsupervised and you have a mess to clean up. Guide them with context and you’ve got a second set of capable hands.

    Something something craftsmen don’t blame their tools

    Exactly what you would expect from a junior engineer.

    Except junior engineers become seniors. If you don't understand this ... are you HR?

  • I was just ballparking the salary. Say it’s only 100x. Does the argument change? It’s a lot more money to pay for a real person.

    Wasn’t it clear that our comments are in agreement?

  • Exactly what you would expect from a junior engineer.

    Except junior engineers become seniors. If you don't understand this ... are you HR?

    They might become seniors, for the other 99% of the investment. Or they crash out as “not a great fit”, which happens too. Juniors aren’t just “senior seeds” to be planted.

  • I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing ideas for strategies. They aren't detail oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I'll give its output a once over to check it with an eye to the details of implementation. It's nice to get the boilerplate out of the way quickly.

    Don't get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think that they are one of the most important AI breakthroughs in the past five to ten years. But the AI hype train is misusing them, not understanding their capabilities and limitations, and casting their own wishes and desires onto a pile of linear algebra. Too often a tool (which is one of many) is conflated with the one and only solution, a silver bullet, and it's not.

    This leads to my biggest fear for the AI field of Computer Science: reality won't live up to the hype. When this inevitably happens, companies, CEOs, and normal people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML systems will be stopped, and real academic research will dry up.

    They aren’t detail oriented enough to write full applications or complicated scripts.

    I'm not sure I agree with that. I wrote a full Laravel webapp using nothing but ChatGPT; very rarely did I have to step in and do things myself.

    In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I’ll give its output a once over to check it with an eye to the details of implementation. It’s nice to get the boilerplate out of the way quickly.

    Yep, I agree with that.

    There are definitely people misusing AI, and there is definitely a lot of AI slop out there, which is annoying as hell, but these tools can also be pretty capable at certain things, more than one might think at first.

  • Experienced software developer, here. "AI" is useful to me in some contexts. Specifically when I want to scaffold out a completely new application (so I'm not worried about clobbering existing code) and I don't want to do it by hand, it saves me time.

    And... that's about it. It sucks at code review, and will break shit in your repo if you let it.

    I have limited AI experience, but so far that's what it means to me as well: helpful in very limited circumstances.

    Mostly, I find it useful for "speaking new languages": if I try to use AI to "help" with the stuff I have been doing daily for the past 20 years? Yeah, it's just slowing me down.

  • Wasn’t it clear that our comments are in agreement?

    It wasn’t, but now it is.

  • I'm not sure I agree with that. I wrote a full Laravel webapp using nothing but ChatGPT; very rarely did I have to step in and do things myself.

    Greenfielding webapps is the easiest, most basic kind of project around. That's something you task a junior with and expect that they do it with no errors. And after that you instantly drop support, because webapps are shovelware.

  • I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing ideas for strategies. They aren't detail oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I'll give its output a once over to check it with an eye to the details of implementation. It's nice to get the boilerplate out of the way quickly.

    Excellent take. I agree with everything. If I give Claude a function signature, types and a description of what it has to do, 90% of the time it will get it right. 10% of the time it will need some edits or efficiency improvements but still saves a lot of time. Small scoped tasks with correct context is the right way to use these tools.
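
A concrete sketch of the workflow both commenters describe: the human writes the signature, types, and description; the model fills in the body; the result gets a once-over before it is committed. Everything below is a hypothetical illustration, not code from the thread:

    from collections import Counter

    # Written by hand: signature, types, and a precise description.
    # The body is the small, fully specified piece the model is asked to
    # produce, then reviewed line by line.
    def top_k_words(text: str, k: int) -> list[tuple[str, int]]:
        """Return the k most frequent lowercase words in text, most frequent first."""
        return Counter(text.lower().split()).most_common(k)

    print(top_k_words("the cat saw the dog and the cat", 2))
    # [('the', 3), ('cat', 2)]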

  • AI tools are way less useful than a junior engineer, and they aren't an investment that turns into a senior engineer either.

    AI tools are actually improving at a rate faster than most junior engineers I have worked with, and about 30% of junior engineers I have worked with never really "graduated" to a level that I would trust them to do anything independently, even after 5 years in the job. Those engineers "find their niche" doing something other than engineering with their engineering job titles, and that's great, but don't ever trust them to build you a bridge or whatever it is they seem to have been hired to do.

    Now, as for AI, it's currently as good or "better" than about 40% of brand-new, fresh-from-the-BS-program software engineers I have worked with. A year ago that number probably would have been 20%. So far it's improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

    Many things in tech seem to have an exponential improvement phase, followed by a plateau. CPU clock speed is a good example of that. Storage density/cost is one that doesn't seem to have hit a plateau yet. Software quality/power is much harder to gauge, but it definitely is still growing more powerful / capable even as it struggles with bloat and vulnerabilities.

    The question I have is: will AI continue to write "human compatible" software, or is it going to start writing code that only AI understands, but people rely on anyway? After all, the code that humans write is incomprehensible to 90%+ of the humans that use it.
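
The "exponential phase followed by a plateau" pattern described above is a logistic (S-shaped) curve. A minimal sketch, purely to illustrate the shape rather than to model AI progress:

    import math

    def logistic(t: float, ceiling: float = 1.0, rate: float = 1.0, midpoint: float = 5.0) -> float:
        # Early on this grows by a roughly constant factor per step (it looks
        # exponential); later it saturates at the ceiling (the plateau).
        return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

    for t in range(0, 11, 2):
        print(f"t={t:2d}  value={logistic(t):.3f}")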

  • Yeah but a Claude/Cursor/whatever subscription costs $20/month and a junior engineer costs real money. Are the tools 400 times less useful than a junior engineer? I’m not so sure…

    The point is that comparing AI tools to junior engineers is ridiculous in the first place. It is simply marketing.

  • Greenfielding webapps is the easiest, most basic kind of project around. That's something you task a junior with and expect that they do it with no errors. And after that you instantly drop support, because webapps are shovelware.

    So you're saying there's no such thing as a complex webapp, no such thing as a senior web developer, that webapps can basically be made by a monkey because they're all so simple, that no competent developers ever work on them, and that there's no use for them at all?

    Where do you think we are?

  • My fear for the software industry is that we'll end up replacing junior devs with AI assistance, and then in a decade or two, we'll see a lack of mid-level and senior devs, because they never had a chance to enter the industry.

    That's happening right now. I have a few friends who are looking for entry-level jobs and they find none.

    It really sucks.

    That said, the future lack of developers is a corporate problem, not a problem for developers. For us it just means that we'll earn a lot more in a few years.

  • Is “way less useful” something you can cite with a source, or is that just feelings?

    It is based on my experience, which I trust immeasurably more than rigged "studies" done by the big LLM companies with a clear conflict of interest.

  • I wouldn’t mention this to anyone at work. It makes you sound clueless.

    My boss insists I use it and I insist on telling him when it can't do the simplest things.

  • It is based on my experience, which I trust immeasurably more than rigged "studies" done by the big LLM companies with a clear conflict of interest.

    Understood, thanks for being honest

  • Now, as for AI, it's currently as good or "better" than about 40% of brand-new, fresh-from-the-BS-program software engineers I have worked with. A year ago that number probably would have been 20%. So far it's improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

    LOL sure

  • My boss insists I use it and I insist on telling him when it can't do the simplest things.

    It sounds like you’ve got it all figured out. Best of luck to you
