
AI slows down some experienced software developers, study finds

Technology
  • Just the other day I wasted 3 min trying to get AI to sort 8 lines alphabetically.

    I wouldn’t mention this to anyone at work. It makes you sound clueless

  • Exactly what you would expect from a junior engineer.

    Let them run unsupervised and you have a mess to clean up. Guide them with context and you’ve got a second set of capable hands.

    Something something craftsmen don’t blame their tools

    > Exactly what you would expect from a junior engineer.

    Except junior engineers become seniors. If you don't understand this ... are you HR?

  • I was just ballparking the salary. Say it’s only 100x. Does the argument change? It’s a lot more money to pay for a real person.

    Wasn’t it clear that our comments are in agreement?

  • > Exactly what you would expect from a junior engineer.

    > Except junior engineers become seniors. If you don't understand this ... are you HR?

    They might become seniors for 99% more investment. Or they crash out as “not a great fit”, which happens too. Juniors aren’t just “senior seeds” to be planted.

  • I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing ideas for strategies. They aren't detail oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I'll give its output a once-over to check it with an eye to the details of implementation. It's nice to get the boilerplate out of the way quickly.

    Don't get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think that they are one of the most important AI breakthroughs in the past five to ten years. But the AI hype train is misusing them, not understanding their capabilities and limitations, and casting their own wishes and desires onto a pile of linear algebra. Too often a tool (which is one of many) is being conflated with the one and only solution--a silver bullet--and it's not.

    This leads to my biggest fear for the AI field of Computer Science: reality won't live up to the hype. When this inevitably happens, companies, CEOs, and normal people will sour on the entire field (which is already happening to some extent among workers). Even good LLM and other AI/ML use cases will be stopped, and real academic research will dry up.

    > They aren’t detail oriented enough to write full applications or complicated scripts.

    I'm not sure I agree with that. I wrote a full Laravel webapp using nothing but ChatGPT, very rarely did I have to step in and do things myself.

    > In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I’ll give its output a once-over to check it with an eye to the details of implementation. It’s nice to get the boilerplate out of the way quickly.

    Yep, I agree with that.

    There are definitely people misusing AI, and there is definitely lots of AI slop out there, which is annoying as hell, but they can also be pretty capable for certain things, more than one might think at first.
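A minimal sketch of the "small, atomized tasks" workflow described in the comment above: hand the model a tight spec, paste its output back in, then give it a once-over with quick checks. The task, function, and test values here are invented for illustration, not from the thread.

```python
import re
import unicodedata

# 1. The spec handed to the LLM (small and atomized, not a whole feature):
#    "Implement slugify(title) -> str: lowercase ASCII, words joined by
#     single hyphens, no leading or trailing hyphens."

# 2. The model's output, pasted back in:
def slugify(title: str) -> str:
    """Convert a title to a lowercase, hyphen-separated URL slug."""
    ascii_title = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
    words = re.findall(r"[a-z0-9]+", ascii_title.lower())
    return "-".join(words)

# 3. The once-over: quick checks on the details before trusting it.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Déjà Vu -- Remix  ") == "deja-vu-remix"
assert slugify("") == ""
```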

  • Experienced software developer, here. "AI" is useful to me in some contexts. Specifically when I want to scaffold out a completely new application (so I'm not worried about clobbering existing code) and I don't want to do it by hand, it saves me time.

    And... that's about it. It sucks at code review, and will break shit in your repo if you let it.

    I have limited AI experience, but so far that's what it means to me as well: helpful in very limited circumstances.

    Mostly, I find it useful for "speaking new languages" - if I try to use AI to "help" with the stuff I have been doing daily for the past 20 years? Yeah, it's just slowing me down.

  • > Wasn’t it clear that our comments are in agreement?

    It wasn’t, but now it is.

  • > I'm not sure I agree with that. I wrote a full Laravel webapp using nothing but ChatGPT, very rarely did I have to step in and do things myself.

    Greenfielding webapps is the easiest, most basic kind of project around. That's something you task a junior with and expect them to do with no errors. And after that you instantly drop support, because webapps are shovelware.

  • > In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I'll give its output a once-over to check it with an eye to the details of implementation. It's nice to get the boilerplate out of the way quickly.

    Excellent take. I agree with everything. If I give Claude a function signature, types, and a description of what it has to do, 90% of the time it will get it right. The other 10% of the time it will need some edits or efficiency improvements, but it still saves a lot of time. Small, scoped tasks with the correct context are the right way to use these tools.
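A hypothetical sketch of the kind of spec the commenter describes giving Claude: a signature, types, and a short description, followed by the once-over. The function, names, and test values are invented for illustration, not from the thread.

```python
from datetime import datetime
from typing import Iterable

def bucket_by_day(events: Iterable[tuple[datetime, float]]) -> dict[str, float]:
    """Sum event values per calendar day.

    Spec given to the model: return a mapping of "YYYY-MM-DD" strings to
    the total of all values whose timestamp falls on that day.
    """
    totals: dict[str, float] = {}
    for ts, value in events:
        key = ts.strftime("%Y-%m-%d")
        totals[key] = totals.get(key, 0.0) + value
    return totals

# The review pass: check edge cases and efficiency by hand before merging.
assert bucket_by_day([]) == {}
assert bucket_by_day([
    (datetime(2025, 7, 10, 9, 0), 1.5),
    (datetime(2025, 7, 10, 17, 30), 2.5),
    (datetime(2025, 7, 11, 8, 0), 1.0),
]) == {"2025-07-10": 4.0, "2025-07-11": 1.0}
```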

  • AI tools are way less useful than a junior engineer, and they aren't an investment that turns into a senior engineer either.

    AI tools are actually improving at a rate faster than most junior engineers I have worked with, and about 30% of junior engineers I have worked with never really "graduated" to a level that I would trust them to do anything independently, even after 5 years in the job. Those engineers "find their niche" doing something other than engineering with their engineering job titles, and that's great, but don't ever trust them to build you a bridge or whatever it is they seem to have been hired to do.

    Now, as for AI, it's currently as good or "better" than about 40% of brand-new fresh from the BS program software engineers I have worked with. A year ago that number probably would have been 20%. So far it's improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

    Many things in tech seem to have an exponential improvement phase, followed by a plateau. CPU clock speed is a good example of that. Storage density/cost is one that doesn't seem to have hit a plateau yet. Software quality/power is much harder to gauge, but it definitely is still growing more powerful / capable even as it struggles with bloat and vulnerabilities.

    The question I have is: will AI continue to write "human compatible" software, or is it going to start writing code that only AI understands, but people rely on anyway? After all, the code that humans write is incomprehensible to 90%+ of the humans that use it.

  • Yeah but a Claude/Cursor/whatever subscription costs $20/month and a junior engineer costs real money. Are the tools 400 times less useful than a junior engineer? I’m not so sure…

    The point is that comparing AI tools to junior engineers is ridiculous in the first place. It is simply marketing.
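For scale, the back-of-the-envelope arithmetic behind the "400 times" figure above. The junior's cost is an assumed round number for illustration, not a figure from the thread.

```python
# Rough numbers behind the "400x" comparison.
subscription_per_month = 20      # USD, Claude/Cursor-style plan
junior_cost_per_month = 8_000    # USD, assuming ~96k/year fully loaded

# The break-even bar: the tool would have to be 400 times less useful
# than a junior before the subscription stopped paying for itself.
print(junior_cost_per_month / subscription_per_month)  # 400.0
```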

  • > Greenfielding webapps is the easiest, most basic kind of project around. That's something you task a junior with and expect them to do with no errors. And after that you instantly drop support, because webapps are shovelware.

    So you're saying there's no such thing as complex webapps, no such thing as senior web developers, that webapps can basically be made by a monkey because they're all so simple, that no competent developers work on them, and that there's no use for them at all?

    Where do you think we are?

  • My fear for the software industry is that we'll end up replacing junior devs with AI assistance, and then in a decade or two, we'll see a lack of mid-level and senior devs, because they never had a chance to enter the industry.

    That's happening right now. I have a few friends who are looking for entry-level jobs and they find none.

    It really sucks.

    That said, the future lack of developers is a corporate problem, not a problem for developers. For us it just means that we'll earn a lot more in a few years.

  • Is “way less useful” something you can cite with a source, or is that just feelings?

    It is based on my experience, which I trust immeasurably more than rigged "studies" done by the big LLM companies with a clear conflict of interest.

  • > I wouldn’t mention this to anyone at work. It makes you sound clueless

    My boss insists I use it and I insist on telling him when it can't do the simplest things.

  • > It is based on my experience, which I trust immeasurably more than rigged "studies" done by the big LLM companies with a clear conflict of interest.

    Understood, thanks for being honest

  • > Now, as for AI, it’s currently as good or “better” than about 40% of brand-new fresh from the BS program software engineers I have worked with. A year ago that number probably would have been 20%. So far it’s improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

    LOL sure

  • > My boss insists I use it and I insist on telling him when it can't do the simplest things.

    It sounds like you’ve got it all figured out. Best of luck to you

  • > So you're saying there's no such thing as complex webapps, no such thing as senior web developers, that webapps can basically be made by a monkey because they're all so simple, that no competent developers work on them, and that there's no use for them at all?

    > Where do you think we are?

    None that you can make with ChatGPT in an afternoon, no.

  • > None that you can make with ChatGPT in an afternoon, no.

    Who says I made my webapp with ChatGPT in an afternoon?

    I built it iteratively using ChatGPT, much like any other application. I started with the scaffolding and then slowly added more and more features over time, just like I would have done had I not used any AI at all.

    As everybody knows, Rome wasn't built in a day.
