
AI slows down some experienced software developers, study finds

Technology
  • They aren't detail-oriented enough to write full applications or complicated scripts.

    I'm not sure I agree with that. I wrote a full Laravel webapp using nothing but ChatGPT; very rarely did I have to step in and do things myself.

    In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I'll give its output a once-over to check it with an eye to the details of implementation. It's nice to get the boilerplate out of the way quickly.

    Yep, I agree with that.

    There are definitely people misusing AI, and there is definitely lots of AI slop out there, which is annoying as hell, but they can also be pretty capable for certain things, even more than one might think at first.

    Greenfielding webapps is the easiest, most basic kind of project around. That's something you task a junior with and expect that they do it with no errors. And after that you instantly drop support, because webapps are shovelware.

  • I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing around ideas for strategies. They aren't detail-oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I'll give its output a once-over to check it with an eye to the details of implementation. It's nice to get the boilerplate out of the way quickly.

    Don't get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think that they are one of the most important AI breakthroughs in the past five to ten years. But the AI hype train is misusing them, not understanding their capabilities and limitations, and casting their own wishes and desires onto a pile of linear algebra. Too often a tool (which is one of many) is being conflated with the one and only solution--a silver bullet--and it's not.

    This leads to my biggest fear for the AI field of Computer Science: reality won't live up to the hype. When this inevitably happens, companies, CEOs, and normal people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML will be stopped, and real academic research will dry up.

    Excellent take. I agree with everything. If I give Claude a function signature, types, and a description of what it has to do, 90% of the time it will get it right. 10% of the time it will need some edits or efficiency improvements, but it still saves a lot of time. Small, scoped tasks with the correct context are the right way to use these tools.
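
    To make "small, scoped tasks" concrete, here is the kind of prompt-sized unit I mean; a minimal made-up sketch (the function and field names are hypothetical, not from any particular tool):

    ```python
    def dedupe_events(events: list[dict], key: str = "id") -> list[dict]:
        """Return events with duplicates (by `key`) removed, keeping the first occurrence."""
        # The signature, types, and docstring above are the whole prompt;
        # a body like the one below is what a correct answer looks like.
        seen: set = set()
        result: list[dict] = []
        for event in events:
            k = event.get(key)
            if k not in seen:
                seen.add(k)
                result.append(event)
        return result
    ```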

  • AI tools are way less useful than a junior engineer, and they aren't an investment that turns into a senior engineer either.

    AI tools are actually improving at a rate faster than most junior engineers I have worked with, and about 30% of junior engineers I have worked with never really "graduated" to a level that I would trust them to do anything independently, even after 5 years in the job. Those engineers "find their niche" doing something other than engineering with their engineering job titles, and that's great, but don't ever trust them to build you a bridge or whatever it is they seem to have been hired to do.

    Now, as for AI, it's currently as good as or "better" than about 40% of brand-new, fresh-from-the-BS-program software engineers I have worked with. A year ago that number probably would have been 20%. So far it's improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

    Many things in tech seem to have an exponential improvement phase, followed by a plateau. CPU clock speed is a good example of that. Storage density/cost is one that doesn't seem to have hit a plateau yet. Software quality/power is much harder to gauge, but it is definitely still growing more powerful and capable even as it struggles with bloat and vulnerabilities.

    The question I have is: will AI continue to write "human compatible" software, or is it going to start writing code that only AI understands, but people rely on anyway? After all, the code that humans write is incomprehensible to 90%+ of the humans that use it.

  • Yeah, but a Claude/Cursor/whatever subscription costs $20/month and a junior engineer costs real money. Are the tools 400 times less useful than a junior engineer? (Rough arithmetic below.) I'm not so sure…

    The point is that comparing AI tools to junior engineers is ridiculous in the first place. It is simply marketing.
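
    For reference, the "400 times" above is just the price ratio, computed with assumed numbers (the salary is a ballpark I'm supplying, not a figure from the thread):

    ```python
    # Assumed ballpark figures, not from the thread:
    subscription = 20        # USD/month for a Claude/Cursor-style plan
    junior_salary = 96_000   # USD/year for a junior engineer (assumption)

    monthly_junior_cost = junior_salary / 12   # 8,000 USD/month
    print(monthly_junior_cost / subscription)  # -> 400.0
    ```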

  • Greenfielding webapps is the easiest, most basic kind of project around. That's something you task a junior with and expect that they do it with no errors. And after that you instantly drop support, because webapps are shovelware.

    So you're saying there's no such thing as a complex webapp, that there's no such thing as a senior web developer, that webapps can basically be made by a monkey because they're all so simple, that no competent developers ever work on them, and that there's no use for them at all?

    Where do you think we are?

  • My fear for the software industry is that we'll end up replacing junior devs with AI assistance, and then in a decade or two, we'll see a lack of mid-level and senior devs, because they never had a chance to enter the industry.

    That's happening right now. I have a few friends who are looking for entry-level jobs and they find none.

    It really sucks.

    That said, the future lack of developers is a corporate problem, not a problem for developers. For us it just means that we'll earn a lot more in a few years.

  • Is “way less useful” something you can cite with a source, or is that just feelings?

    It is based on my experience, which I trust immeasurably more than rigged "studies" done by the big LLM companies with a clear conflict of interest.

  • I wouldn't mention this to anyone at work. It makes you sound clueless.

    My boss insists I use it and I insist on telling him when it can't do the simplest things.

  • It is based on my experience, which I trust immeasurably more than rigged "studies" done by the big LLM companies with a clear conflict of interest.

    Understood, thanks for being honest.

  • AI tools are actually improving at a rate faster than most junior engineers I have worked with, and about 30% of junior engineers I have worked with never really "graduated" to a level that I would trust them to do anything independently, even after 5 years in the job. Those engineers "find their niche" doing something other than engineering with their engineering job titles, and that's great, but don't ever trust them to build you a bridge or whatever it is they seem to have been hired to do.

    Now, as for AI, it's currently as good as or "better" than about 40% of brand-new, fresh-from-the-BS-program software engineers I have worked with. A year ago that number probably would have been 20%. So far it's improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

    Many things in tech seem to have an exponential improvement phase, followed by a plateau. CPU clock speed is a good example of that. Storage density/cost is one that doesn't seem to have hit a plateau yet. Software quality/power is much harder to gauge, but it is definitely still growing more powerful and capable even as it struggles with bloat and vulnerabilities.

    The question I have is: will AI continue to write "human compatible" software, or is it going to start writing code that only AI understands, but people rely on anyway? After all, the code that humans write is incomprehensible to 90%+ of the humans that use it.

    Now, as for AI, it's currently as good as or "better" than about 40% of brand-new, fresh-from-the-BS-program software engineers I have worked with. A year ago that number probably would have been 20%. So far it's improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

    LOL sure

  • My boss insists I use it and I insist on telling him when it can't do the simplest things.

    It sounds like you've got it all figured out. Best of luck to you.

  • So you're saying there's no such thing as a complex webapp, that there's no such thing as a senior web developer, that webapps can basically be made by a monkey because they're all so simple, that no competent developers ever work on them, and that there's no use for them at all?

    Where do you think we are?

    None that you can make with ChatGPT in an afternoon, no.

  • None that you can make with ChatGPT in an afternoon, no.

    Who says I made my webapp with ChatGPT in an afternoon?

    I built it iteratively using ChatGPT, much like any other application. I started with the scaffolding and then slowly added more and more features over time, just like I would have done had I not used any AI at all.

    As everybody knows, Rome wasn't built in a day.

  • Experienced software developer here. "AI" is useful to me in some contexts. Specifically, when I want to scaffold out a completely new application (so I'm not worried about clobbering existing code) and I don't want to do it by hand, it saves me time.

    And... that's about it. It sucks at code review, and will break shit in your repo if you let it.

    Sometimes I get an LLM to do a quick once-over review of a patch series before I send it. I would estimate about 50% of the suggestions are useful and about 10% are based on a misunderstanding. Last week it suggested a spelling fix I'd already made, because it didn't understand that the - in the diff meant I'd already changed the line.
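
    For context, here is a minimal made-up hunk of the kind that tripped it up (not the actual patch). In a unified diff, the leading - marks the old line being removed and + marks its replacement, so the typo on the - line is already fixed:

    ```diff
    -    log.warning("unexpected respose code")
    +    log.warning("unexpected response code")
    ```

    The model read the - line as current code and re-suggested the spelling fix.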

  • Experienced software developer here. "AI" is useful to me in some contexts. Specifically, when I want to scaffold out a completely new application (so I'm not worried about clobbering existing code) and I don't want to do it by hand, it saves me time.

    And... that's about it. It sucks at code review, and will break shit in your repo if you let it.

    Not a developer per se (mostly virtualization, architecture, and hardware), but AI can get me to 80-90% of a script in no time. The last 10-20% takes a while, but that part was going to take a while regardless, so the time savings on the first chunk are awesome. It does send me down a really bad path at times, though; being experienced enough to recognize that is very helpful, in that I just start over.

    In my opinion AI shouldn't replace coders, but it can definitely enhance them if used properly. It's a tool like any other: I can put a screw in with a hammer, but I probably shouldn't.

  • I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing around ideas for strategies. They aren't detail-oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I'll give its output a once-over to check it with an eye to the details of implementation. It's nice to get the boilerplate out of the way quickly.

    Don't get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think that they are one of the most important AI breakthroughs in the past five to ten years. But the AI hype train is misusing them, not understanding their capabilities and limitations, and casting their own wishes and desires onto a pile of linear algebra. Too often a tool (which is one of many) is being conflated with the one and only solution--a silver bullet--and it's not.

    This leads to my biggest fear for the AI field of Computer Science: reality won't live up to the hype. When this inevitably happens, companies, CEOs, and normal people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML will be stopped, and real academic research will dry up.

    They can be helpful when you're using a new library or development environment you aren't familiar with, but I've noticed a tendency to make up functions that arguably should exist but often don't.
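
    A concrete, hypothetical instance of that tendency with Python's requests library: requests.get() and Response.json() both exist, so a model can pattern-match its way to a convenience call that doesn't:

    ```python
    import requests

    # An LLM might plausibly suggest requests.get_json(url). It sounds like it
    # should exist, but it doesn't -- which is why keeping the docs (or a REPL)
    # open to validate suggested calls pays off.
    print(hasattr(requests, "get_json"))  # False: made-up convenience function
    print(hasattr(requests, "get"))       # True: the call that actually exists
    ```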

  • Does every junior eventually become a senior?

    No, but that's the only way you get senior engineers!

  • Now, as for AI, it's currently as good as or "better" than about 40% of brand-new, fresh-from-the-BS-program software engineers I have worked with. A year ago that number probably would have been 20%. So far it's improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

    LOL sure


    I'm not talking about the ones that get hired in your 'leet shop; I'm talking about the whole damn crop that's just graduated.

  • That's happening right now. I have a few friends who are looking for entry-level jobs and they find none.

    It really sucks.

    That said, the future lack of developers is a corporate problem, not a problem for developers. For us it just means that we'll earn a lot more in a few years.

    You're not wrong, and I feel like it was a developing problem even before AI - everybody wanted someone with experience, even if the technology was brand new.

    That said, even if you and I will be fine, it's still bad for the industry. And even if we weren't the ones pulling up the ladder behind us, I'd still like to find a way to start throwing ropes back down for the newbies...

  • No, but that's the only way you get senior engineers!

    I agree, but the goal of CEOs is "line go up," not making our eng team stronger (usually).
