
AI slows down some experienced software developers, study finds

Technology
  • AI tools are actually improving faster than most junior engineers I have worked with, and about 30% of junior engineers I have worked with never really "graduated" to a level where I would trust them to do anything independently, even after 5 years on the job. Those engineers "find their niche" doing something other than engineering under their engineering job titles, and that's great, but don't ever trust them to build you a bridge or whatever it is they seem to have been hired to do.

    Now, as for AI, it's currently as good or "better" than about 40% of brand-new fresh from the BS program software engineers I have worked with. A year ago that number probably would have been 20%. So far it's improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

    Many things in tech seem to have an exponential improvement phase, followed by a plateau. CPU clock speed is a good example of that. Storage density/cost is one that doesn't seem to have hit a plateau yet. Software quality/power is much harder to gauge, but it definitely is still growing more powerful / capable even as it struggles with bloat and vulnerabilities.

    The question I have is: will AI continue to write "human compatible" software, or is it going to start writing code that only AI understands, but people rely on anyway? After all, the code that humans write is incomprehensible to 90%+ of the humans that use it.

  • Now, as for AI, it’s currently as good or “better” than about 40% of brand-new fresh from the BS program software engineers I have worked with. A year ago that number probably would have been 20%. So far it’s improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

    LOL sure

  • My boss insists I use it and I insist on telling him when it can't do the simplest things.

    It sounds like you’ve got it all figured out. Best of luck to you

  • So you're saying there's no such thing as complex webapps and that there's no such thing as senior web developers, and webapps can basically be made by a monkey because they are all so simple and there's never any competent developers that work on them and there's no use for them at all?

    Where do you think we are?

    None that you can make with ChatGPT in an afternoon, no.

  • None that you can make with ChatGPT in an afternoon, no.

    Who says I made my webapp with ChatGPT in an afternoon?

    I built it iteratively using ChatGPT, much like any other application. I started with the scaffolding and then slowly added more and more features over time, just like I would have done had I not used any AI at all.

    As everybody knows, Rome wasn't built in a day.

  • Experienced software developer here. "AI" is useful to me in some contexts. Specifically when I want to scaffold out a completely new application (so I'm not worried about clobbering existing code) and I don't want to do it by hand, it saves me time.

    And... that's about it. It sucks at code review, and will break shit in your repo if you let it.

    Sometimes I get an LLM to review a patch series before I send it, as a quick once-over. I would estimate about 50% of the suggestions are useful and about 10% are based on a misunderstanding. Last week it suggested a spelling fix I'd already made because it didn't understand that the - in the diff meant I'd already changed the line.
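
    To make that diff mix-up concrete, here is a minimal sketch, with a hypothetical file name and typo, of how a unified diff reads: the - line is the old text and the + line is the fix the patch already makes, so flagging the spelling on the - line re-reports a change the patch itself contains.

        # The "-" line is the OLD text; the "+" line is the fix already in
        # the patch. (Hypothetical file and typo, for illustration only.)
        import difflib

        old = ["# recieve the payload\n"]  # original line, with the typo
        new = ["# receive the payload\n"]  # the patch already corrects it

        print("".join(difflib.unified_diff(
            old, new, fromfile="a/handler.py", tofile="b/handler.py")))
        # --- a/handler.py
        # +++ b/handler.py
        # @@ -1 +1 @@
        # -# recieve the payload
        # +# receive the payload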

  • Experienced software developer here. "AI" is useful to me in some contexts. Specifically when I want to scaffold out a completely new application (so I'm not worried about clobbering existing code) and I don't want to do it by hand, it saves me time.

    And... that's about it. It sucks at code review, and will break shit in your repo if you let it.

    Not a developer per se (mostly virtualization, architecture, and hardware) but AI can get me to 80-90% of a script in no time. The last 10% takes a while but that was going to take a while regardless. So the time savings on that first 90% is awesome. Although it does send me down a really bad path at times. Being experienced enough to know that is very helpful in that I just start over.

    In my opinion AI shouldn’t replace coders but it can definitely enhance them if used properly. It’s a tool, like anything else. I can put a screw in with a hammer but I probably shouldn’t.

  • I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing around ideas for strategies. They aren't detail-oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer: I will give it small, atomized tasks, and I'll give its output a once-over to check it with an eye to the details of implementation. It's nice to get the boilerplate out of the way quickly.

    Don't get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think that they are one of the most important AI breakthroughs in the past five to ten years. But the AI hype train is misusing them, not understanding their capabilities and limitations, and casting their own wishes and desires onto a pile of linear algebra. Too often a tool (which is one of many) is being conflated with the one and only solution--a silver bullet--and it's not.

    This leads to my biggest fear for the AI field of Computer Science: reality won't live up to the hype. When this inevitably happens, companies, CEOs, and normal people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML techniques will be halted, and real academic research will dry up.

    They can be helpful when using a new library or development environment you are not familiar with. But I've noticed a tendency to make up functions that arguably should exist but often don't.
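
    As a toy illustration of that made-up-functions failure mode, assuming a hypothetical hallucinated method name: an LLM might suggest something plausible-sounding like Path.make_backup, which pathlib does not actually provide, so it pays to validate against the docs or the object itself.

        # Check for a hallucinated API before trusting it: make_backup is a
        # hypothetical method an LLM might invent; pathlib has no such thing.
        from pathlib import Path

        suggested = "make_backup"        # plausible-sounding, but invented
        print(hasattr(Path, suggested))  # False: the function was made up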

  • Does every junior eventually become a senior?

    No, but that's the only way you get senior engineers!

  • Now, as for AI, it’s currently as good or “better” than about 40% of brand-new fresh from the BS program software engineers I have worked with. A year ago that number probably would have been 20%. So far it’s improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

    LOL sure

    I'm not talking about the ones that get hired in your 'leet shop, I'm talking about the whole damn crop that's just graduated.

  • That's happening right now. I have a few friends who are looking for entry-level jobs and they find none.

    It really sucks.

    That said, the future lack of developers is a corporate problem, not a problem for developers. For us it just means that we'll earn a lot more in a few years.

    You're not wrong, and I feel like it was a developing problem even before AI - everybody wanted someone with experience, even if the technology was brand new.

    That said, even if you and I will be fine, it's still bad for the industry. And even if we weren't the ones pulling up the ladder behind us, I'd still like to find a way to start throwing ropes back down for the newbies...

  • No, but that's the only way you get senior engineers!

    I agree, but the goal of CEOs is “line go up,” not making our eng team stronger (usually).

  • You're not wrong, and I feel like it was a developing problem even before AI - everybody wanted someone with experience, even if the technology was brand new.

    That said, even if you and I will be fine, it's still bad for the industry. And even if we weren't the ones pulling up the ladder behind us, I'd still like to find a way to start throwing ropes back down for the newbies...

    You're not wrong, and I feel like it was a developing problem even before AI - everybody wanted someone with experience, even if the technology was brand new.

    True. It was a long-standing problem that entry-level jobs were mostly found in dodgy startups.

    Tbh, I think the biggest issue right now isn't even AI, but the economy. In the 2010s, interest rates were pretty much zero while the economy was fairly decent, at least for IT. The 2008 financial crisis hardly mattered for IT, and Covid was a massive boost for it. There was nothing else to really spend money on.

    IT always has more projects than manpower, so with enough money to spend, they just hired everyone.

    But the sanctions against Russia in response to its invasion of Ukraine really hit the economy, and rising interest rates to combat inflation meant that suddenly nobody wanted to invest anymore.

    With no investments, startups dried up and large corporations also wanted to downsize. It's no coincidence that return-to-work mandates only started after the invasion and not in the two years before that, when lockdowns had already been lifted. Work from home worked totally fine for two years after the Covid lockdowns, and companies even praised how well it worked.

    Same with AI. While it can improve productivity in some edge cases, I think it's mostly a scapegoat to make mass firings sound like a great thing to investors.

    That said, even if you and I will be fine, it's still bad for the industry. And even if we weren't the ones pulling up the ladder behind us, I'd still like to find a way to start throwing ropes back down for the newbies...

    You are totally right about that, and any chance I get I will continue to push for hiring juniors.

    But I am also over corporate tears. For decades they have been crying about a lack of skilled workers in IT and pushing more and more people to join the field so they could drive down wages, and as soon as the economy is bad, they instantly U-turn and dump employees.

    If corporations want to be short-sighted and make people suffer for it, they won't get compassion from me when it fails.

    Edit: Remember, we are not the ones pulling the ladder up.

  • You're not wrong, and I feel like it was a developing problem even before AI - everybody wanted someone with experience, even if the technology was brand new.

    That said, even if you and I will be fine, it's still bad for the industry. And even if we weren't the ones pulling up the ladder behind us, I'd still like to find a way to start throwing ropes back down for the newbies...

    They wanted someone with experience, who could hit the ground running, but didn't want to pay for it, either with cash or time.

    • cheap
    • quick
    • experienced

    You can only pick two.

  • That's happening right now. I have a few friends who are looking for entry-level jobs and they find none.

    It really sucks.

    That said, the future lack of developers is a corporate problem, not a problem for developers. For us it just means that we'll earn a lot more in a few years.

    I would say that "replacing with AI assistance" is probably not what is actually happening. It's economic factors reducing hiring. This isn't the first time it has happened and it won't be the last. The AI boosters are just claiming responsibility for marketing purposes.

  • I’ve used Cursor quite a bit recently, in large part because it’s an organization-wide push at my employer, so I’ve taken the opportunity to experiment.

    My best analogy is that it’s like micromanaging a hyperproductive junior developer that somehow already “knows” how to do stuff in most languages and frameworks, but also completely lacks common sense, a concept of good practices, or a big-picture view of what’s being accomplished. Which means a ton of course correction. I even had it spit out code attempting to hardcode credentials.
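
    That hardcoded-credentials slip deserves a concrete sketch; here is the anti-pattern and the usual correction, with DB_PASSWORD as a hypothetical variable name rather than anything from the actual incident.

        import os

        # Anti-pattern the assistant emitted: a secret baked into the source.
        # password = "hunter2"

        # Course-corrected version: read the secret from the environment.
        password = os.environ.get("DB_PASSWORD")
        if password is None:
            raise RuntimeError("DB_PASSWORD is not set")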

    I can accomplish some things “faster” with it, but mostly in comparison to my professional reality: I rarely have the contiguous chunks of time I’d need to properly absorb and build something entirely new to me. I save a significant amount of the onboarding, but lose a bunch of time navigating to a reasonable solution. Critically, that navigation is more “interrupt”-tolerant, and I get a lot of interrupts.

    That said, this year’s crop of interns at work seem to be thin wrappers on top of LLMs and I worry about the future of critical thinking for society at large.

    That said, this year’s crop of interns at work seem to be thin wrappers on top of LLMs and I worry about the future of critical thinking for society at large.

    This is the most frustrating problem I have. With a few exceptions, LLM use seems to be inversely proportional to skill level, and having someone tell me "chatgpt said ___" when asking me for help, when ChatGPT clearly isn't solving it for them, makes me want to just hang up.

  • Not a developer per se (mostly virtualization, architecture, and hardware) but AI can get me to 80-90% of a script in no time. The last 10% takes a while but that was going to take a while regardless. So the time savings on that first 90% is awesome. Although it does send me down a really bad path at times. Being experienced enough to know that is very helpful in that I just start over.

    In my opinion AI shouldn’t replace coders but it can definitely enhance them if used properly. It’s a tool, like anything else. I can put a screw in with a hammer but I probably shouldn’t.

    Like I said, I do find it useful at times. But not only shouldn't it replace coders, it fundamentally can't. At least, not without fundamentally rearchitecting how they work.

    The reason it goes down a "really bad path" is that it's basically glorified autocomplete. It doesn't know anything.
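
    As a toy sketch of what "glorified autocomplete" means, under the big simplifying assumption that picking the statistically likeliest next word stands in for what a real model does: the program below completes text from bigram counts with no model of meaning at all.

        # Greedy next-word "autocomplete" from bigram counts; the corpus is
        # invented for illustration and the program understands nothing.
        from collections import Counter

        corpus = "the cat sat on the mat and the cat slept".split()
        following = {}
        for prev, nxt in zip(corpus, corpus[1:]):
            following.setdefault(prev, Counter())[nxt] += 1

        def complete(word):
            # Return the most frequent continuation, right or wrong.
            return following[word].most_common(1)[0][0]

        print(complete("the"))  # "cat": merely the likeliest, not the "known"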

    On top of that, spoken and written language are very imprecise, and there's no way for an LLM to derive what you really wanted from context clues such as your tone of voice.

    Take the phrase "fruit flies like a banana." Am I saying that a piece of fruit might fly in a manner akin to how another piece of fruit, a banana, flies if thrown? Or am I saying that the insect called the fruit fly might like to consume a banana?

    It's a humorous line, but my point is serious: We unintentionally speak in ambiguous ways like that all the time. And while we've got brains that can interpret unspoken signals to parse intended meaning from a word or phrase, LLMs don't.

  • Yeah but a Claude/Cursor/whatever subscription costs $20/month and a junior engineer costs real money. Are the tools 400 times less useful than a junior engineer? I’m not so sure…
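
    For what it's worth, the rough arithmetic behind that "400 times" figure, assuming a fully loaded junior cost of about $8,000 per month (a hypothetical round number, not a quoted salary):

        subscription = 20   # USD per month for the tool
        junior = 8_000      # USD per month, assumed fully loaded cost
        print(junior / subscription)  # 400.0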

    This line of thought is short-sighted. Your senior engineers will eventually retire or leave the company. If everyone replaces junior engineers with AI, then there will be nobody with the experience to fill those empty seats. Then you end up with no junior engineers and no senior engineers, so who is wrangling the AI?

  • I have limited AI experience, but so far that's what it means to me as well: helpful in very limited circumstances.

    Mostly, I find it useful for "speaking new languages" - if I try to use AI to "help" with the stuff I have been doing daily for the past 20 years? Yeah, it's just slowing me down.

    I like the saying that LLMs are good at stuff you don’t know. That’s about it.

  • By having it write a quick function to do so or to sort them alphabetically within the chat? Because I've used GPT to write boilerplate and/or basic functions for random tasks like this numerous times without issue. But expecting it to sort a block of text for you is not what LLMs are really built for.

    That being said, I agree that expecting AI to write complex and/or long-form code is a fool's hope. It's good for basic tasks to save time and that's about it.

    The tool I use can rewrite code given basic commands. Other times I might say, "Write a comment above each line" or "Propose better names for these variables" and it does a decent job.
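
    A minimal sketch of the kind of quick boilerplate function described above (sort_lines is a hypothetical helper name; nothing project-specific):

        def sort_lines(text):
            """Return the input text with its lines sorted alphabetically."""
            return "\n".join(sorted(text.splitlines(), key=str.lower))

        print(sort_lines("banana\nApple\ncherry"))  # Apple, banana, cherry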

  • This post did not contain any content.

    No shit. AI will hallucinate shit, I’ll hit Tab by accident and spend time undoing that, or it’ll hijack Tab on new lines inconsistently.
