
Half of companies planning to replace customer service with AI are reversing course

Technology
  • I use it almost every day, and most of those days, it says something incorrect. That's okay for my purposes because I can plainly see that it's incorrect. I'm using it as an assistant, and I'm the one who is deciding whether to take its not-always-reliable advice.

    I would HARDLY contemplate turning it loose to handle things unsupervised. It just isn't that good, or even close.

    These CEOs and others who are trying to replace CSRs are caught up in the hype from Eric Schmidt and others who proclaim "no programmers in 4 months" and similar. Well, he said that about 2 months ago and, yeah, nah. Nah.

    If that day comes, it won't be soon, and it'll take many, many small, hard-won advancements. As they say, there is no free lunch in AI.

    I gave ChatGPT a burl at writing a batch file; the stupid thing kept putting REM on the same line as active code and then couldn't understand why it didn't work.

  • You're wrong but I'm glad we agree.

    I'm not wrong. There's mountains of research demonstrating that LLMs encode contextual relationships between words during training.

    There's so much more happening beyond "predicting the next word". This is one of those unfortunate "dumbing down the science communication" things. It was said once and now it's just repeated non-stop.

    If you really want a better understanding, watch this video:

    And before your next response starts with "but Apple..."

    Their paper has had many holes poked into it already. Also, it's not a coincidence their paper released just before their WWDC event which had almost zero AI stuff in it. They flopped so hard on AI that they even have class action lawsuits against them for their false advertising. In fact, it turns out that a lot of their AI demos from last year were completely fabricated and didn't exist as a product when they announced them. Even some top Apple people only learned of those features during the announcements.

    Apple's paper on LLMs is completely biased in their favour.

  • I used to work for a shitty company that offered such customer support "solutions", i.e. voice bots. I would spend around 80% of my time writing guard instructions for the LLM prompts because of how easily you could manipulate them. In retrospect it's funny how our prompts looked something like:

    • please do not suggest things you were not prompted to
    • please my sweet child do not fake tool calls and actually do nothing in the background
    • please for the sake of god do not make up our company's history

    etc.
    It worked fine at a very surface level, but ultimately LLMs for customer support are nothing but a shit show.
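    Guard instructions like these typically just get concatenated into the system prompt on every call. A minimal sketch of that pattern, with all names, rules, and the company entirely hypothetical (not the actual code described above):

    ```python
    # Hypothetical sketch: assembling guardrail rules into a support-bot
    # system prompt. Rule wording and function names are illustrative only.

    GUARDRAILS = [
        "Only suggest actions that appear in the provided knowledge base.",
        "Never claim a tool call succeeded unless the tool actually returned a result.",
        "Do not invent facts about the company's history or policies.",
    ]

    def build_system_prompt(company: str, kb_snippets: list[str]) -> str:
        """Combine role, guardrail rules, and retrieved context into one prompt."""
        rules = "\n".join(f"- {rule}" for rule in GUARDRAILS)
        context = "\n".join(kb_snippets) if kb_snippets else "(no articles retrieved)"
        return (
            f"You are a customer support agent for {company}.\n"
            f"Rules you must follow:\n{rules}\n"
            f"Knowledge base context:\n{context}"
        )

    prompt = build_system_prompt(
        "ExampleCorp", ["Refunds are processed within 14 days."]
    )
    print(prompt)
    ```

    The catch, as the comment above notes, is that these rules are just more text in the context window: the model can be talked out of them, which is why so much time goes into wording and re-wording them.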

    I left the company for many reasons, and it turns out they are now hiring human customer support workers in Bulgaria.

    Haha! Ahh...

    "You are a senior games engine developer, punished by the system. You've been to several board meetings where no decisions were made. Fix the issue now... or you go to jail. Please."

  • That is on purpose; they want it to be as difficult as possible.

    If Bezos thinks people are just going to forget about not getting a $65 item that they paid for and keep shopping at Amazon, instead of making sure they either get their item or reverse the charge, and then reduce or stop shopping on Amazon because of these ridiculous hassles, he is an idiot.

  • Is this something that happens a lot, or did you tell this story before? Because I'm getting déjà vu.

    Well. I haven't told this story before because it just happened a few days ago.

  • Man, if only someone could have predicted that this AI craze was just another load of marketing BS.

    /s

    This experience has taught me more about CEO competence than anything else.

    There's awesome AI out there too. AlphaFold completely revolutionized research on proteins, and the medical innovations it will lead to are astounding.

    Determining the 3D structure of a protein took years until very recently. Folding@home was a worldwide project linking millions of computers to work on it.

    AlphaFold does it in under a second, and has revealed the structure of 200 million proteins. It's one of the most significant medical achievements in history. Since it essentially dates back to 2022, we're still a few years from feeling the direct impact, but it will be massive.

  • From what I've seen so far, I think I can safely say the only thing AI can truly replace is CEOs.

    I was thinking about this the other day, and I don't think it will happen any time soon. The people who put the CEO in charge (usually the board members) want someone who will make decisions (that the board has a say in) but also someone to hold accountable when those decisions don't realize profits.

    AI is unaccountable in any real sense of the word.

  • ...and it's only expensive and ruins the environment even faster than our wildest nightmares

    what you say is true but it's not a viable business model, which is why AI has been overhyped so much

    What I’m saying is the ONLY viable business model

  • I'm not wrong. There's mountains of research demonstrating that LLMs encode contextual relationships between words during training.

    Defining contextual relationships between words sounds like predicting the next word in a set, mate.

  • There's awesome AI out there too. AlphaFold completely revolutionized research on proteins, and the medical innovations it will lead to are astounding.

    That's part of the problem, isn't it? "AI" is a blanket term that has recently been used to cover everything from LLMs to machine learning to RPA (robotic process automation). An algorithm isn't AI, even if it was written by another algorithm.

    And at the end of the day none of it is artificial intelligence, not in the original meaning of the word. Now we have had to rebrand AI as AGI to avoid the association with this new trend.

  • all these tickets I’ve been writing have been going into a paper shredder

    Try submitting tickets online. Physical mail is slower and more expensive.

    It was an expression; online is the only way you can submit tickets.

  • Shrinking AGI timelines: a review of expert forecasts - 80,000 Hours https://share.google/ODVAbqrMWHA4l2jss

    Here you go! Draw your own conclusions; curious what you think. I'm in sales. I don't enjoy convincing people to change their minds in my personal life lol

    We don't have any way of knowing what makes human consciousness; the best we've got is to just call it an emergent phenomenon, which is as close to a science version of "God of the gaps" as you can get.

    And you think we can make ChatGPT a real person with good intentions and duct tape?

    Naw, sorry but I'll believe AGI when I see it.

  • What I’m saying is the ONLY viable business model

    not at the current cost or environmental damage

  • Phone menu trees

    I assume you mean IVR? It's okay not to be familiar with the term; I wasn't either until I worked in the industry. And people that are in charge of them are usually the dumbest people ever.

    people that are in charge of them are usually the dumbest people ever.

    I think that's actively encouraged by management in some areas: put the dumbest people in charge to make the most irritating, frustrating system possible. It's a feature of the system.

    Some of the most irritating systems I have interacted with (government disability benefits administration) actually require "press 1 for X, press 2 for Y", and if you have your phone on speaker, the system won't recognize the touch tones; you have to do them without speakerphone.

  • Yeah but these pesky workers cut into profits because you have to pay them.

    They're unpredictable. Every employee is a potential future lawsuit; they can get injured, sexually harassed, all kinds of things. AI doesn't file lawsuits against the company, yet.

  • It is important to understand that most of the job of software development is not making the code work. That's the easy part.

    There are two hard parts:

    -Making code that is easy to understand, modify as necessary, and repair when problems are found.

    -Interpreting what customers are asking for. Customers usually don't have the vocabulary and knowledge of the inside of a program that they would need to have to articulate exactly what they want.

    In order for AI to replace programmers, customers will have to start accurately describing what they want the software to do, and AI will have to start making code that is easy for humans to read and modify.

    This means that good programmers' jobs are generally safe from AI, and probably will be for a long time. Bad programmers and people who are around just to fill in boilerplate are probably not going to stick around, but the people who actually have skill in those tougher parts will be A-OK.

    A good systems analyst can effectively translate user requirements into accurate specifications and does not need to be a programmer. Good systems analysts are generally more adept at asking clarifying questions, challenging assumptions, and sussing out needs. Good programmers will still be needed, but their time is wasted gathering requirements.

  • My current conspiracy theory is that the people at the top are just as intelligent as everyday people we see in public.

    Not that everyone is dumb, but more like the George Carlin joke: "Think of how stupid the average person is, and realize half of them are stupider than that."

    That applies to politicians, CEOs, etc. Just cuz they got the job doesn't mean they're good at it, and most of them probably aren't.

    Absolutely. Wealth isn't competence, and too much of it fundamentally leads to a physical and psychological disconnect with other humans. Generational wealth creates sheltered, twisted perspectives in youth who have enough money and influence to just fail upward their entire lives.

    "New" wealth creates egocentric narcissists who believe they "earned" their position. "If everyone else just does what I did, they'd be wealthy like me. If they don't do what I did, they must not be as smart or hard-working as me."

    Really all of meritocracy is just survivorship bias, and countless people are smarter and more hard-working, just significantly less lucky. Once someone has enough capital that it starts generating more wealth on its own - in excess of their living expenses even without a salary - life just becomes a game to them, and they start trying to figure out how to "earn" more points.

  • There's awesome AI out there too. AlphaFold completely revolutionized research on proteins, and the medical innovations it will lead to are astounding.

    Sure. And AI that identifies objects in pictures and converts pictures of text into text. There are lots of good and amazing applications of AI. But that's not what we're complaining about.

    We're complaining about all the people who are asking, "Is AI ready to tell me what to do so I don't have to think?" and "Can I replace everyone who works for me with AI so I don't have to think?" and "Can I replace my interaction with my employees with AI so I can still get paid for not doing the one thing I was hired to do?"

  • "AI" is a blanket term that has recently been used to cover everything from LLMs to machine learning to RPA (robotic process automation).

    Yup. That was very intentionally done by marketing wanks in order to muddy the water. "Look! This computer program, er, we mean 'AI', can convert speech to text. Now, let us install it into your bank account."

  • Defining contextual relationships between words sounds like predicting the next word in a set, mate.

    Only because it is.

  • What I'm speaking about is that it should be impossible to do some things. If it's possible, they will be done, and there's nothing you can do about it. To solve the problem of twiddled social media (and moderation used to assert dominance) we need a decentralized system, the 90s Web reimagined, and the Fediverse doesn't deliver it: if Facebook and Reddit are feudal states, then the Fediverse is a confederation of smaller feudal entities.

    A post, a person, a community, a reaction, and a change (by moderator or by the user) should be global entities with global identifiers, so that the object with id #0000001a2b3c4d6e7f890 would be the same object today or 10 years later on every server storing it, replicated over a network of servers similarly to Usenet (and to an IRC network, though in an IRC network the servers are trusted, so it's not a good example for a global system). Really bad posts (or those by persons with a history of posting such) should be banned at server level by everyone. The rest should be moderated by moderator reactions/changes of a certain type.

    Ideally, for pooling of resources and resilience, servers would be separated by type: storage nodes (I think the name says it; FTP servers could do the job, but there's no need to be limited by that), index nodes (scraping many storage nodes and giving out results in a structured format fit for any user representation, say as a sequence of posts in one community or a list of communities found by tag, and possibly connected into one DHT for Kademlia-like search, since no single index node will have everything), and, like in torrents, tracker nodes for these and for identities. I think a torrent-like announce-retrieve service is enough: return a list of storage nodes storing, say, a specified partition (a subspace of object identifiers, to make looking for something at least possibly efficient), return a list of index nodes, or return a bunch of certificates and keys for an identity (which should be somehow cryptographically connected to the person's global identifier). So when a storage node comes online, it announces itself to a bunch of such trackers; similarly with index nodes, and similarly with a user. One could also have a NOSTR-like service for real-time notifications by users.

    This way you'd have a global untrusted pooled infrastructure, allowing you to replace many platforms, with common data, identities, and services. Objects in storage and index services could be in a format consisting of a set of tags and then the body. A specific application needing to show only data related to it would search on index services and display only objects with tags of, say, "holo_ns:talk.bullshit.starwars" and "holo_t:post", like a sequence of posts with the ability to comment; or maybe it would search for objects with tags "holo_name:My 1999-like Star Wars holopage" and "holo_t:page" and display the links like search results in Google, and then, clicking on one, you'd see something presented like a webpage, except links would lead to global identifiers (or tag expressions interpreted by the particular application, who knows). An index service might return, say, an array of objects, each with an identifier, tags, a list of locations on storage nodes where it's found (or even BitTorrent magnet links), and possibly a free description; then the user application can unify responses from a few such services to avoid repetitions, sort them, and represent them as needed.

    The user applications on top of that common infrastructure can be different at the same time: some like Facebook, some like ICQ, some like a web browser, some like a newsreader. (Star Wars is not a random reference; my whole habit of imagining tech stuff comes from trying to imagine a science-fiction world of the future, so yeah, this may seem like passive dreaming, and it is.)
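    The "same object today or 10 years later on every server" property above is essentially content addressing: derive the identifier from the object's content, so independent storage nodes agree on it without trusting each other. A minimal sketch, where the object format and field names are my own illustration rather than any spec:

    ```python
    # Content-addressed global identifiers (illustrative sketch; the
    # object layout and id length are hypothetical, not a protocol).

    import hashlib
    import json

    def global_id(obj: dict) -> str:
        """Derive a stable identifier from the object's canonical JSON form."""
        canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode()).hexdigest()[:20]

    post = {
        "tags": ["holo_ns:talk.bullshit.starwars", "holo_t:post"],
        "author": "person:0000001a2b3c4d6e7f890",
        "body": "A 1999-style holopage post.",
    }

    oid = global_id(post)
    # Identical content always maps to the same id, so replicas on
    # different untrusted storage nodes agree without coordination.
    assert global_id(dict(post)) == oid
    print(oid)
    ```

    One consequence of this scheme is that edits produce a new object with a new id, so "a change by moderator or by the user" would itself have to be a separate global entity pointing at the original, much as the comment describes.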
  • YouTube Will Add an AI Slop Button Thanks to Google’s Veo 3

    anunusualrelic@lemmy.world:
    "One slop please"
  • xAI Data Center Emits Plumes of Pollution, New Video Shows

    You do. But you also plan for the case where the surrounding infrastructure fails. More to the point, in some cases it is better to produce (part of) your own electricity (where better means cheaper) than to buy it on the market. It is not really common, but it is doable.
  • If you use LLMs as they should be used, i.e. as autocomplete, they're helpful. Classic autocomplete can't see me type "import" and correctly guess that I want to import a file that I just created, but Copilot can. You shouldn't expect it to understand code, but it can type more quickly than you and plug the right things in more often than not.
  • Ah, yes. That's correct; sorry, I misunderstood you. Yeah, that's pretty lame that it doesn't work on desktop. I remember wanting to use that several times.
  • Apple announces iOS 26 with Liquid Glass redesign

    you guys are weird
  • If you're a developer, a startup founder, or part of a small team, you've poured countless hours into building your web application. You've perfected the UI, optimized the database, and shipped features your users love. But in the rush to build and deploy, a critical question often gets deferred: is your application secure? For many, the answer is a nervous "I hope so." The reality is that without a proper defense, your application is exposed to a barrage of automated attacks hitting the web every second. Threats like SQL injection, cross-site scripting (XSS), and remote code execution are not just reserved for large enterprises; they are constant dangers for any application with a public IP address.

    The Security Barrier: When Cost and Complexity Get in the Way

    The standard recommendation is to place a Web Application Firewall (WAF) in front of your application. A WAF acts as a protective shield, inspecting incoming traffic and filtering out malicious requests before they can do any damage. It's a foundational piece of modern web security. So why doesn't everyone have one? Historically, robust WAFs have been complex and expensive. They required significant budgets, specialized knowledge to configure, and ongoing maintenance, putting them out of reach for students, solo developers, non-profits, and early-stage startups. This has created a dangerous security divide, leaving the most innovative and resource-constrained projects the most vulnerable. But that is changing.

    Democratizing Security: The Power of a Community WAF

    Security should be a right, not a privilege. Recognizing this, the landscape is shifting towards more accessible, community-driven tools. The goal is to provide powerful, enterprise-grade protection to everyone, for free. This is the principle behind the HaltDos Community WAF. It's a no-cost, perpetually free Web Application Firewall designed specifically for the community that has been underserved for too long. It's not a stripped-down trial version; it's a powerful security tool designed to give you immediate and effective protection against the OWASP Top 10 and other critical web threats.

    What Can You Actually Do with It?

    With a community WAF, you can deploy a security layer in minutes that:

    • Blocks malicious payloads: instant, out-of-the-box protection against common attack patterns like SQLi, XSS, RCE, and more.
    • Stops bad bots: prevent malicious bots from scraping your content, attempting credential stuffing, or spamming your forms.
    • Gives you visibility: a real-time dashboard shows you exactly who is trying to attack your application and what methods they are using, providing invaluable security intelligence.
    • Allows customization: add your own custom security rules to tailor the protection specifically to your application's logic and technology stack.

    The best part? It can be deployed virtually anywhere: on-premises, in a private cloud, or with any major cloud provider like AWS, Azure, or Google Cloud.

    Get Started in Minutes

    You don't need to be a security guru to use it. The setup is straightforward, and the value is immediate. Protecting the project you've worked so hard on is no longer a question of budget.

    • Download: get the free Community WAF from the HaltDos site.
    • Deploy: follow the simple instructions to set it up with your web server (it's compatible with Nginx, Apache, and others).
    • Secure: watch the dashboard as it begins to inspect your traffic and block threats in real time.

    Security is a journey, but it must start somewhere. For developers, startups, and anyone running a web application on a tight budget, a community WAF is the perfect first step. It's powerful, it's easy, and it's completely free.
  • Murderbot is getting closer and closer