AI slows down some experienced software developers, study finds

  • I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing ideas for strategies. They aren't detail oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I'll give its output a once over to check it with an eye to the details of implementation. It's nice to get the boilerplate out of the way quickly.

    Don't get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think that they are one of the most important AI breakthroughs in the past five to ten years. But the AI hype train is misusing them, not understanding their capabilities and limitations, and casting their own wishes and desires onto a pile of linear algebra. Too often a tool (which is one of many) is being conflated with the one and only solution--a silver bullet--and it's not.

    This leads to my biggest fear for the AI field of computer science: reality won't live up to the hype. When this inevitably happens, companies, CEOs, and normal people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML will be stopped, and real academic research will dry up.

    Couldn't have said it better myself. The amount of pure hatred for AI that's already spreading is pretty unnerving when we consider future/continued research. Rather than direct the anger towards the companies misusing and/or irresponsibly hyping the tech, people direct it at the tech itself. And the C-suites will of course never accept the blame for their poor judgment, so they, too, will blame the tech.

    Ultimately, I think there are still lots of folks with money who understand the reality and hope to continue investing in further research. I just hope that workers across all spectrums use this as a wake-up call to advocate for protections. If we have another leap like this in another 10 years, then lots of jobs really will be in trouble without proper social safety nets in place.

  • This post did not contain any content.

    I’ve used Cursor quite a bit recently, in large part because it’s an organization-wide push at my employer, so I’ve taken the opportunity to experiment.

    My best analogy is that it’s like micromanaging a hyper-productive junior developer that somehow already “knows” how to do stuff in most languages and frameworks, but also completely lacks common sense, a concept of good practices, or a big-picture view of what’s being accomplished. Which means a ton of course correction. I even had it spit out code attempting to hardcode credentials (a generic sketch of that anti-pattern follows this comment).

    I can accomplish some things “faster” with it, but mostly in comparison to my professional reality: I rarely have the contiguous chunks of time I’d need to dedicate to properly ingest and do something entirely new to me. I save a significant amount of the onboarding, but lose a bunch of time navigating to a reasonable solution. Critically that navigation is more “interrupt” tolerant, and I get a lot of interrupts.

    That said, this year’s crop of interns at work seem to be thin wrappers on top of LLMs and I worry about the future of critical thinking for society at large.
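
    For illustration only: a generic sketch of the hardcoding anti-pattern mentioned above and its conventional fix. This is not the actual Cursor output; the names and the DB_PASSWORD environment variable are hypothetical.

        import os

        # Anti-pattern (hypothetical example): a secret baked into source,
        # readable by anyone with access to the repo or its history.
        DB_PASSWORD = "hunter2"

        # Conventional fix: read the secret from the environment at runtime
        # so it never enters version control.
        db_password = os.environ["DB_PASSWORD"]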

  • Couldn't have said it better myself. The amount of pure hatred for AI that's already spreading is pretty unnerving when we consider future/continued research. Rather than direct the anger towards the companies misusing and/or irresponsibly hyping the tech, people direct it at the tech itself. And the C-suites will of course never accept the blame for their poor judgment, so they, too, will blame the tech.

    Ultimately, I think there are still lots of folks with money who understand the reality and hope to continue investing in further research. I just hope that workers across all spectrums use this as a wake-up call to advocate for protections. If we have another leap like this in another 10 years, then lots of jobs really will be in trouble without proper social safety nets in place.

    People specifically hate having tools they find more frustrating than useful shoved down their throats, having the internet filled with generative AI slop, and having glaciers melt in the context of climate change.

    This is all specifically directed at LLMs in their current state and will have absolutely zero effect on any research funding. Additionally, OpenAI etc. would be losing less money if they weren't selling (at a massive loss) the hot garbage they're selling now and instead focused on research.

    As far as worker protections go, what we need actually has nothing to do with AI in the first place and everything to do with workers/society at large being entitled to the benefits of increased productivity that have been vacuumed up by greedy capitalists for decades.

  • AI tools are way less useful than a junior engineer, and they aren't an investment that turns into a senior engineer either.

    Yeah but a Claude/Cursor/whatever subscription costs $20/month and a junior engineer costs real money. Are the tools 400 times less useful than a junior engineer? I’m not so sure…

  • This post did not contain any content.

    Writing code is the easiest part of my job. Why are you taking that away?

  • The difference being that junior engineers eventually grow into senior engineers.

    Does every junior eventually become a senior?

  • Yeah but a Claude/Cursor/whatever subscription costs $20/month and a junior engineer costs real money. Are the tools 400 times less useful than a junior engineer? I’m not so sure…

    Even at $100/month you’re comparing it to a >$10k/month junior: 1% of the cost for certainly more than 1% of the functionality of a junior.

    You can see why companies are tripping over themselves to push this new modality.

  • AI tools are way less useful than a junior engineer, and they aren't an investment that turns into a senior engineer either.

    Is “way less useful” something you can cite with a source, or is that just feelings?

  • Even at $100/month you’re comparing it to a >$10k/month junior: 1% of the cost for certainly more than 1% of the functionality of a junior.

    You can see why companies are tripping over themselves to push this new modality.

    I was just ballparking the salary. Say it’s only 100x. Does the argument change? It’s a lot more money to pay for a real person.

  • Just the other day I wasted 3 min trying to get AI to sort 8 lines alphabetically.

    I wouldn’t mention this to anyone at work. It makes you sound clueless.
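
    For context, sorting a handful of lines is a one-liner with standard tools, no model required. A minimal Python sketch, with hypothetical input:

        # sorted() from the standard library handles alphabetical order directly.
        lines = ["pear", "apple", "cherry", "banana"]  # hypothetical input
        print("\n".join(sorted(lines)))  # apple, banana, cherry, pear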

  • Exactly what you would expect from a junior engineer.

    Let them run unsupervised and you have a mess to clean up. Guide them with context and you’ve got a second set of capable hands.

    Something something craftsmen don’t blame their tools.

    Exactly what you would expect from a junior engineer.

    Except junior engineers become seniors. If you don't understand this ... are you HR?

  • I was just ballparking the salary. Say it’s only 100x. Does the argument change? It’s a lot more money to pay for a real person.

    Wasn’t it clear that our comments are in agreement?

  • Exactly what you would expect from a junior engineer.

    Except junior engineers become seniors. If you don't understand this ... are you HR?

    They might become seniors, for 99% more investment. Or they crash out as “not a great fit,” which happens too. Juniors aren’t just “senior seeds” to be planted.

  • I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing ideas for strategies. They aren't detail oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I'll give its output a once over to check it with an eye to the details of implementation. It's nice to get the boilerplate out of the way quickly.

    Don't get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think that they are one of the most important AI breakthroughs in the past five to ten years. But the AI hype train is misusing them, not understanding their capabilities and limitations, and casting their own wishes and desires onto a pile of linear algebra. Too often a tool (which is one of many) is being conflated with the one and only solution--a silver bullet--and it's not.

    This leads to my biggest fear for the AI field of computer science: reality won't live up to the hype. When this inevitably happens, companies, CEOs, and normal people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML will be stopped, and real academic research will dry up.

    They aren’t detail oriented enough to write full applications or complicated scripts.

    I'm not sure I agree with that. I wrote a full Laravel webapp using nothing but ChatGPT; only very rarely did I have to step in and do things myself.

    In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I’ll give its output a once over to check it with an eye to the details of implementation. It’s nice to get the boilerplate out of the way quickly.

    Yep, I agree with that.

    There are definitely people misusing AI, and there is definitely lots of AI slop out there, which is annoying as hell, but they can also be pretty capable at certain things, even more so than one might think at first.

  • Experienced software developer here. "AI" is useful to me in some contexts. Specifically, when I want to scaffold out a completely new application (so I'm not worried about clobbering existing code) and I don't want to do it by hand, it saves me time.

    And... that's about it. It sucks at code review, and will break shit in your repo if you let it.

    I have limited AI experience, but so far that's what it means to me as well: helpful in very limited circumstances.

    Mostly, I find it useful for "speaking new languages": if I try to use AI to "help" with the stuff I have been doing daily for the past 20 years? Yeah, it's just slowing me down.

  • Wasn’t it clear that our comments are in agreement?

    It wasn’t, but now it is.

  • They aren’t detail oriented enough to write full applications or complicated scripts.

    I'm not sure I agree with that. I wrote a full Laravel webapp using nothing but ChatGPT; only very rarely did I have to step in and do things myself.

    In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I’ll give its output a once over to check it with an eye to the details of implementation. It’s nice to get the boilerplate out of the way quickly.

    Yep, I agree with that.

    There are definitely people misusing AI, and there is definitely lots of AI slop out there, which is annoying as hell, but they can also be pretty capable at certain things, even more so than one might think at first.

    Greenfielding webapps is the easiest, most basic kind of project around. That's something you task a junior with and expect that they do it with no errors. And after that you instantly drop support, because webapps are shovelware.

  • I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing ideas for strategies. They aren't detail oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I'll give its output a once over to check it with an eye to the details of implementation. It's nice to get the boilerplate out of the way quickly.

    Don't get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think that they are one of the most important AI breakthroughs in the past five to ten years. But the AI hype train is misusing them, not understanding their capabilities and limitations, and casting their own wishes and desires onto a pile of linear algebra. Too often a tool (which is one of many) is being conflated with the one and only solution--a silver bullet--and it's not.

    This leads to my biggest fear for the AI field of computer science: reality won't live up to the hype. When this inevitably happens, companies, CEOs, and normal people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML will be stopped, and real academic research will dry up.

    Excellent take. I agree with everything. If I give Claude a function signature, types, and a description of what it has to do, 90% of the time it will get it right. The other 10% of the time it will need some edits or efficiency improvements, but it still saves a lot of time. Small, scoped tasks with the correct context are the right way to use these tools.
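
    A minimal sketch of that kind of scoped task, assuming a Python codebase. The function, its name, and its spec are hypothetical, not from this thread:

        from datetime import date, timedelta

        def business_days_between(start: date, end: date) -> int:
            """Count weekdays (Mon-Fri) in the half-open range [start, end)."""
            # The signature, types, and docstring above are the prompt; the
            # model fills in a body like this one, and the human reviews edge
            # cases (empty range, start after end) instead of typing the loop.
            days = 0
            current = start
            while current < end:
                if current.weekday() < 5:  # weekday() is 0-4 for Mon-Fri
                    days += 1
                current += timedelta(days=1)
            return days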

  • AI tools are way less useful than a junior engineer, and they aren't an investment that turns into a senior engineer either.

    AI tools are actually improving at a rate faster than most junior engineers I have worked with, and about 30% of junior engineers I have worked with never really "graduated" to a level that I would trust them to do anything independently, even after 5 years in the job. Those engineers "find their niche" doing something other than engineering with their engineering job titles, and that's great, but don't ever trust them to build you a bridge or whatever it is they seem to have been hired to do.

    Now, as for AI, it's currently as good as or "better" than about 40% of brand-new, fresh-from-the-BS-program software engineers I have worked with. A year ago that number probably would have been 20%. So far it's improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

    Many things in tech seem to have an exponential improvement phase, followed by a plateau. CPU clock speed is a good example of that. Storage density/cost is one that doesn't seem to have hit a plateau yet. Software quality/power is much harder to gauge, but it definitely is still growing more powerful / capable even as it struggles with bloat and vulnerabilities.

    The question I have is: will AI continue to write "human compatible" software, or is it going to start writing code that only AI understands, but people rely on anyway? After all, the code that humans write is incomprehensible to 90%+ of the humans that use it.

  • Yeah but a Claude/Cursor/whatever subscription costs $20/month and a junior engineer costs real money. Are the tools 400 times less useful than a junior engineer? I’m not so sure…

    The point is that comparing AI tools to junior engineers is ridiculous in the first place. It is simply marketing.
