
Companies That Tried to Save Money With AI Are Now Spending a Fortune Hiring People to Fix Its Mistakes

Technology
  • What these companies didn't take the time to understand is that A.I. is a tool to make employees more efficient, not to replace them. Sadly, the vast majority of these companies will also fail to learn this lesson now and will get rid of A.I. systems altogether rather than use them properly.

    When I write a document for my employer, I use A.I. as a research and planning assistant, not as the writer. I still put in the work of writing the document; I just use A.I. to simplify the tedious data gathering and organizing.

    My daughter has used AI a lot to write grant proposals, which she cleans up and rewords before submitting. In her prompts she tells it to ask her questions and incorporate her answers into the result, which she says works very well, produces high quality writing, and saves her a ton of time. She's actually a very competent writer herself, so when she compliments the quality I know it means something.
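
    A minimal sketch of that "ask me questions first" pattern, assuming the openai Node SDK; the model name and the grant-proposal details are placeholders for illustration, not anything from her actual prompts.

```ts
// Sketch of the interview-first prompt pattern described above.
// Assumptions: openai npm package v4+, OPENAI_API_KEY in the environment,
// and a placeholder model name.
import OpenAI from "openai";

const client = new OpenAI();

// The system prompt encodes the technique: interview first, draft later,
// and fold the user's answers into the final text.
const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
  {
    role: "system",
    content:
      "You are helping me draft a grant proposal. Before writing anything, " +
      "ask me clarifying questions one at a time and incorporate my answers " +
      "into the draft. Only write the full draft when I say 'write it'.",
  },
  { role: "user", content: "The grant is for a community literacy program." },
];

async function nextTurn(userInput?: string): Promise<string> {
  if (userInput) messages.push({ role: "user", content: userInput });
  const response = await client.chat.completions.create({
    model: "gpt-4o", // placeholder
    messages,
  });
  const reply = response.choices[0].message.content ?? "";
  messages.push({ role: "assistant", content: reply });
  return reply;
}

// Usage: alternate the model's questions with your own answers, then
// hand-edit the resulting draft, as described above.
async function demo() {
  console.log(await nextTurn());                     // model asks a question
  console.log(await nextTurn("Budget is $25,000.")); // answer, get the next one
  console.log(await nextTurn("write it"));           // request the draft
}

demo().catch(console.error);
```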

  • McNamara fallacy at its finest. They hear figures and potential savings and then jump into the hype without considering the context. It's the same as when they heard about lean manufacturing or the Toyota Way: companies thought it was cost saving rather than process improvement.

  • As someone who has been a consultant/freelance dev for over 20 years now, this is true. Lately I've been getting offers and contacts from places to essentially clean up the mess from LLMs/AI.

    A lot of it is pretty bad. It's a mess. But like I said, I've been at it for a while, and I've seen this before, when companies were offshoring anything and everything to India and, surprise, surprise, they didn't learn anything. It's literally the exact same thing. Instead of an Indian guy who claims he knows everything and will work for peanuts, it's AI pretty much stating the same shit.

    I've been getting so many requests for gigs that I've been hitting up random out-of-work devs on LinkedIn in my city and referring the jobs to them. I've burned through all my contacts, so now I'm just reaching out to absolute strangers to get them work.

    Yes, it's that bad (well, bad for companies; it's fantastic for developers).

    Throw us some work if you like. I already work as a software engineer, but I wouldn't turn down a side gig cleaning up after LLMs.

  • They should have just asked me. I knew that would be the result years ago. Writing has been on the screaming wall of faces while the faces also screamed it.

    Management doesn't ask the people they want to fire whether firing them is a good idea. They themselves would lie like crazy to keep their jobs, so they assume everything the developers say would be a lie too.

  • Companies with stupid leaders deserve to fail.

  • My daughter has used AI a lot to write grant proposals, which she cleans up and rewords before submitting. In her prompts she tells it to ask her questions and incorporate her answers into the result, which she says works very well, produces high quality writing, and saves her a ton of time. She's actually a very competent writer herself, so when she compliments the quality I know it means something.

    That's a good way to use the tool. I generally use the OpenAI option to set up a custom GPT, tell it to become an expert on the subject I'm writing about, and then set the parameters. Then, once I've tested it on a piece of the subject matter I already understand and confirmed it's working properly, I begin asking it questions. When I'm out of questions or just need a break, I go back and check the citations for each answer, just to make sure I'm not getting bad data.

    Once I've run out of questions and all the data is verified, I have it create an outline with a brief summary of each section. Then I take that outline and use it to guide me as I write. Also, it seems like the A.I. always puts at least one section in the wrong place, so that's just another reason I like to write it myself and just use an A.I. summary outline.
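
    For anyone who wants to script that workflow instead of using the custom-GPT UI, here is a rough API-side analogue: a minimal sketch assuming the openai Node SDK, with the model name, topic, and prompts as placeholders rather than the commenter's actual setup.

```ts
// Rough analogue of the "expert custom GPT" workflow: an expert persona
// with parameters, question-and-answer with citations to verify by hand,
// then an outline request. All names and prompts below are illustrative.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment
const MODEL = "gpt-4o";      // placeholder model name

// Step 1: the "become an expert" instructions, plus a parameter that makes
// the later citation check possible.
const expertInstructions =
  "You are an expert on municipal water treatment. Answer concisely and " +
  "cite a specific source (title and section) for every factual claim so " +
  "that I can verify it.";

async function ask(question: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: MODEL,
    messages: [
      { role: "system", content: expertInstructions },
      { role: "user", content: question },
    ],
  });
  return response.choices[0].message.content ?? "";
}

async function main() {
  // Step 2: a calibration question on material you already understand,
  // then the real questions; every citation still gets checked by hand.
  console.log(await ask("Explain how coagulation differs from flocculation."));

  // Step 3: once the answers are verified, ask for an outline only; the
  // human writes the document and fixes any misplaced sections.
  console.log(
    await ask(
      "Create an outline for a report on small-town water treatment upgrades, " +
        "with a one-sentence summary per section. Outline only, no prose."
    )
  );
}

main().catch(console.error);
```

    Each ask() call in this sketch is stateless for brevity; an actual custom GPT keeps its instructions and conversation history on OpenAI's side instead.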

  • AI: The new outsourcing?

  • As someone who has been a consultant/freelance dev for over 20 years now, this is true. Lately I've been getting offers and contacts from places to essentially clean up the mess from LLMs/AI.

    A lot of it is pretty bad. It's a mess. But like I said, I've been at it for a while, and I've seen this before, when companies were offshoring anything and everything to India and, surprise, surprise, they didn't learn anything. It's literally the exact same thing. Instead of an Indian guy who claims he knows everything and will work for peanuts, it's AI pretty much stating the same shit.

    I've been getting so many requests for gigs that I've been hitting up random out-of-work devs on LinkedIn in my city and referring the jobs to them. I've burned through all my contacts, so now I'm just reaching out to absolute strangers to get them work.

    Yes, it's that bad (well, bad for companies; it's fantastic for developers).

    They learned that by the time all of their shitty decisions ruin everything, they'll be able to bail with their golden parachute while everyone else has to deal with the fallout.

  • AI as it exists today is only effective if used sparingly and cautiously by someone with domain knowledge who can identify the tasks (usually menial ones) that don't need a human touch.

  • Jean-Baptiste

    Emmanuel

    Zorg

  • Retired dev here; I'm curious about the nature of "the mess". Is it buggy AI-generated code that got into production? I know an active dev who uses ChatGPT every day and says it saves him a hell of a lot of work. What he does sounds like "vibe coding". If you're using AI for grunt work and keep a human in the workflow to verify the code, I don't see how it would differ from junior devs working under a senior. Have some companies been using poorly managed all-AI tools, or what? Sorry for the long question.

    Think of AI as a hard-working, arrogant, knowledgeable, unimaginative junior intern.

    Vibe coding is great for small, self-contained tasks. It doesn't scale to a codebase (yet?).

  • AI as it exists today is only effective if used sparingly and cautiously by someone with domain knowledge who can identify the tasks (usually menial ones) that don't need a human touch.

    This 1000x. I'm a PHP developer, and I found out about two months ago that the AI assistant is included in my JetBrains subscription (the All pack; it was a separate thing before). I also recently found out about Junie, their AI agent that has deep thinking (or whatever the hell it's called). I tried it the same day to refactor part of my test that had to be migrated to stop using a deprecated function call.

    To my surprise, it required only very minor changes, and what would've taken me about 3 hours was done in half an hour. What I also liked was that it actually asked if it could run a terminal command to verify the test results, and it went back and fixed a broken test or two.

    Finally I have faith in AI being useful to programmers.

    As a test, I took our dev exam (for potential candidates) and just sent it over to see what it would do based on the document alone. Besides a few mistakes, it even used modern tools (like the PSR standards) rather than some five-year-old stuff, and it implemented core systems by itself using well-known interfaces (from said PSRs). I asked it to change the Dependency Injection to use Symfony DI instead of the self-made thing, and it worked flawlessly.

    Of course, the code has to be reviewed or heavily specified to make sure it does what it is told to, but all in all it doesn't look like just a gimmick anymore.

  • The line demands you cut costs but also increase service.

    The line demands it go up. It doesn't care how you get there. In many cases, decreasing service while also cutting costs is the way to do it, so long as the line goes up.

    See: enshittification

    Absolutely. I should have used the term productivity rather than service. Lack of caffeine had blunted my vocabulary. In essence: more output for less work. Output in this case is profit.

    Enshittification is, in essence, the push beyond diminishing returns into the 'lossy' space ... sacrificing a for b. The end result is an increasingly shitty experience.

  • As someone who has been a consultant/freelance dev for over 20 years now, this is true. Lately I've been getting offers and contacts from places to essentially clean up the mess from LLMs/AI.

    A lot of it is pretty bad. It's a mess. But like I said, I've been at it for a while, and I've seen this before, when companies were offshoring anything and everything to India and, surprise, surprise, they didn't learn anything. It's literally the exact same thing. Instead of an Indian guy who claims he knows everything and will work for peanuts, it's AI pretty much stating the same shit.

    I've been getting so many requests for gigs that I've been hitting up random out-of-work devs on LinkedIn in my city and referring the jobs to them. I've burned through all my contacts, so now I'm just reaching out to absolute strangers to get them work.

    Yes, it's that bad (well, bad for companies; it's fantastic for developers).

    Sounds like you need to start a company and hire per diem staff.

  • Companies with stupid leaders deserve to fail.

    Well, what ends up happening is that some company will have a CEO.

    He'll make all the stupid decisions. But they're only stupid from everybody ELSE'S perspective.

    From his perspective, he uses AI and tanks the company's future in the chase for large short-term stock gains. Then he gives himself a huge bonus, leaves the company, gets hired somewhere else, and gets to say, "See how that company is failing without me? That's because I bring value to the brand."

    So he gets hired at the neeeext place, while that first company is failing because of the actions of a CEO who is no longer employed there, and who bailed because he knew what was coming.

    These actions aren't stupid. They're plotted corruption for the benefit of one.

  • A lot of bosses think developers’ entire job is just churning out code when it’s actually like 50% coding and 50% listening to stakeholders, planning, collaborating with designers, etc.

    A lot of leadership is incompetent. In a reasonable, just world, they would not be in these decision-making positions.

    Verbose blogger Ed Zitron wrote about this. He called them "Business Idiots": https://www.wheresyoured.at/the-era-of-the-business-idiot/

    I just watched an interview with Karen Hao, and she mentioned something along the lines of executives being oversold AI as something to replace everyone instead of something that should exist alongside people to help them, and they believe it.

  • This 1000x. I'm a PHP developer, and I found out about two months ago that the AI assistant is included in my JetBrains subscription (the All pack; it was a separate thing before). I also recently found out about Junie, their AI agent that has deep thinking (or whatever the hell it's called). I tried it the same day to refactor part of my test that had to be migrated to stop using a deprecated function call.

    To my surprise, it required only very minor changes, and what would've taken me about 3 hours was done in half an hour. What I also liked was that it actually asked if it could run a terminal command to verify the test results, and it went back and fixed a broken test or two.

    Finally I have faith in AI being useful to programmers.

    As a test, I took our dev exam (for potential candidates) and just sent it over to see what it would do based on the document alone. Besides a few mistakes, it even used modern tools (like the PSR standards) rather than some five-year-old stuff, and it implemented core systems by itself using well-known interfaces (from said PSRs). I asked it to change the Dependency Injection to use Symfony DI instead of the self-made thing, and it worked flawlessly.

    Of course, the code has to be reviewed or heavily specified to make sure it does what it is told to, but all in all it doesn't look like just a gimmick anymore.

    Absolutely, this matches my experience. I think this is also the experience of most coders who willingly use AI. I feel bad for the people who are forced to use it by their companies, and for those who are laid off because of C-levels who think AI is capable of replacing an experienced coder.

  • Retired dev here; I'm curious about the nature of "the mess". Is it buggy AI-generated code that got into production? I know an active dev who uses ChatGPT every day and says it saves him a hell of a lot of work. What he does sounds like "vibe coding". If you're using AI for grunt work and keep a human in the workflow to verify the code, I don't see how it would differ from junior devs working under a senior. Have some companies been using poorly managed all-AI tools, or what? Sorry for the long question.

    An example from work a few weeks ago: I fixed some vibe-coded UI code that had made it to prod. The layout of the UI was basically just meant to be an easy overview of the information relevant to an item. The LLM had done everything right, except it assumed a weird mix of Tailwind and Bootstrap, mixing and matching CSS classes from both (a sketch of what that looks like follows below). After I implemented the classes myself, it went from a single-column view to grids and nested grids grouping the data intuitively.
    I talked with the dev who implemented it, and basically it was just something quickly cobbled together with AI until it was passable. The AI had added a lot of extra markup that served no function and didn't conform to a single CSS framework, but looked like it could. For months no one questioned it, despite talk about that part of the UI needing a facelift.

    I don't know how representative it is, but about half the time I'm thoroughly confused by a piece of code and why it was written the way it was, the answer has turned out to be AI. And unlike when a developer wrote it, there's rarely any reason for it to have been written the weird way.
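
    A hypothetical reconstruction of the kind of framework mix-up described above (React/TSX with Tailwind assumed); this is not the actual code from that project.

```tsx
// Hypothetical "before": Bootstrap grid classes (row, col-md-6) mixed with
// Tailwind utilities (gap-4, p-4). If only Tailwind is actually loaded, the
// Bootstrap classes do nothing and the overview collapses into one column.
import React from "react";

export const ItemOverviewBefore = () => (
  <div className="row gap-4">
    <div className="col-md-6 p-4">Item details</div>
    <div className="col-md-6 p-4">Related documents</div>
    <div className="col-md-6 p-4">History</div>
  </div>
);

// Rewritten against a single framework (Tailwind only): an explicit grid
// with a nested grid grouping related fields, roughly what the fix did.
export const ItemOverviewAfter = () => (
  <div className="grid grid-cols-2 gap-4">
    <section className="grid grid-rows-2 gap-2 p-4">
      <div>Item details</div>
      <div>History</div>
    </section>
    <section className="p-4">Related documents</section>
  </div>
);
```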

  • As someone who has been a consultant/freelance dev for over 20 years now, this is true. Lately I've been getting offers and contacts from places to essentially clean up the mess from LLMs/AI.

    A lot of it is pretty bad. It's a mess. But like I said, I've been at it for a while, and I've seen this before, when companies were offshoring anything and everything to India and, surprise, surprise, they didn't learn anything. It's literally the exact same thing. Instead of an Indian guy who claims he knows everything and will work for peanuts, it's AI pretty much stating the same shit.

    I've been getting so many requests for gigs that I've been hitting up random out-of-work devs on LinkedIn in my city and referring the jobs to them. I've burned through all my contacts, so now I'm just reaching out to absolute strangers to get them work.

    Yes, it's that bad (well, bad for companies; it's fantastic for developers).

    Send them my way! I'm currently freelance and good at cleaning up that kind of stuff.

  • a negative times a negative is a positive?

    More like 0.10 + 0.05 = 0.20, in this case.
