
Half of companies planning to replace customer service with AI are reversing course

Technology
  • My current conspiracy theory is that the people at the top are just as intelligent as everyday people we see in public.

    Not that everyone is dumb but more like the George Carlin joke: "Think of how stupid the average person is, and realize half of them are stupider than that."

    That applies to politicians, CEOs, etc. Just cuz they got the job, doesn't mean they're good at it and most of them probably aren't.

    Agreed. Unfortunately, one half of our population thinks that anyone in power is a genius, is always right and shouldn't have to pay taxes or follow laws.

  • I use it almost every day, and most of those days, it says something incorrect. That's okay for my purposes because I can plainly see that it's incorrect. I'm using it as an assistant, and I'm the one who is deciding whether to take its not-always-reliable advice.

    I would HARDLY contemplate turning it loose to handle things unsupervised. It just isn't that good, or even close.

    These CEOs and others who are trying to replace CSRs are caught up in the hype from Eric Schmidt and others who proclaim "no programmers in 4 months" and similar. Well, he said that about 2 months ago and, yeah, nah. Nah.

    If that day comes, it won't be soon, and it'll take many, many small, hard-won advancements. As they say, there is no free lunch in AI.

    It is important to understand that most of the job of software development is not making the code work. That's the easy part.

    There are two hard parts:

    -Making code that is easy to understand, modify as necessary, and repair when problems are found.

    -Interpreting what customers are asking for. Customers usually don't have the vocabulary and knowledge of the inside of a program that they would need to have to articulate exactly what they want.

    In order for AI to replace programmers, customers will have to start accurately describing what they want the software to do, and AI will have to start making code that is easy for humans to read and modify.

    This means that good programmers' jobs are generally safe from AI, and probably will be for a long time. Bad programmers and people who are around just to fill in boilerplates are probably not going to stick around, but the people who actually have skill in those tougher parts will be AOK.
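    The first of those hard parts can be shown with a toy Python example (hypothetical code, not from anyone's real project): both functions compute the same thing, but only one is easy to understand, modify, and repair.

```python
# Hard to maintain: terse names, buried rules, no statement of intent.
def f(d):
    xs = [v for k, v in d.items() if not k.startswith("_")]
    return sum(xs) / len(xs) if xs else 0.0


# Easier to understand, modify, and repair: names and docstring state intent.
def average_public_value(fields):
    """Average the values whose keys are not private ('_'-prefixed)."""
    public = [v for k, v in fields.items() if not k.startswith("_")]
    if not public:
        return 0.0
    return sum(public) / len(public)
```

    AI-generated code today tends to look much more like the first version than the second, which is exactly why the maintenance half of the job doesn't go away.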

  • I use it almost every day, and most of those days, it says something incorrect. [...]

    I gave ChatGPT a burl at writing a batch file; the stupid thing kept putting REM on the same line as active code and then couldn't understand why it didn't work.
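    For context: in Windows batch, `REM` only acts as a comment when it starts a command, so tacking it onto the end of a line of active code does not comment anything out. A minimal sketch of the failure mode and the usual fix:

```batch
@echo off
REM Correct: a REM line on its own is a comment.
echo hello

REM Wrong: here REM is just another argument, so it gets printed too.
echo hello REM this is not a comment

REM To append a comment, use an explicit command separator first:
echo hello & REM this trailing comment is ignored
```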

  • You're wrong but I'm glad we agree.

    I'm not wrong. There's mountains of research demonstrating that LLMs encode contextual relationships between words during training.

    There's so much more happening beyond "predicting the next word". This is one of those unfortunate "dumbing down the science communication" things. It was said once and now it's just repeated non-stop.

    If you really want a better understanding, watch this video:

    And before your next response starts with "but Apple..."

    Their paper has had many holes poked into it already. Also, it's not a coincidence their paper released just before their WWDC event which had almost zero AI stuff in it. They flopped so hard on AI that they even have class action lawsuits against them for their false advertising. In fact, it turns out that a lot of their AI demos from last year were completely fabricated and didn't exist as a product when they announced them. Even some top Apple people only learned of those features during the announcements.

    Apple's paper on LLMs is completely biased in their favour.
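    For what it's worth, the literal "predicting the next word" framing describes something as simple as a bigram counter. A toy Python sketch (illustrative only) of that naive baseline:

```python
from collections import Counter, defaultdict


def train_bigram(corpus):
    """Count which word follows which: the most naive next-word predictor."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows


def predict_next(follows, word):
    """Return the follower seen most often in training; no wider context used."""
    return follows[word.lower()].most_common(1)[0][0]


model = train_bigram(["the cat sat", "the cat ran", "the dog sat"])
```

    A model like this only ever looks at one previous word; the point of the research mentioned above is that transformer LLMs condition on the entire context, which is a very different thing.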

    I used to work for a shitty company that offered such customer support "solutions", i.e. voice bots. I would spend around 80% of my time writing guard instructions for the LLM prompts because of how easily you could manipulate them. In retrospect it's funny how our prompts looked something like:

    • please do not suggest things you were not prompted to
    • please my sweet child do not fake tool calls and actually do nothing in the background
    • please for the sake of god do not make up our company's history

    etc.
    It worked fine on a very surface level but ultimately LLMs for customer support are nothing but a shit show.

    I left the company for many reasons, and it turns out they are now hiring human customer support workers in Bulgaria.

    Haha! Ahh...

    "You are a senior games engine developer, punished by the system. You've been to several board meetings where no decisions were made. Fix the issue now... or you go to jail. Please."
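    The guard-instruction pattern described above can be sketched roughly like this in Python (a hypothetical illustration, not the company's actual code):

```python
# Hypothetical sketch: guard rules are prepended to every support-bot prompt
# before the request is sent to the model.
GUARD_RULES = [
    "Only suggest actions you were explicitly instructed to offer.",
    "Never pretend a tool call happened; report only real tool results.",
    "Never invent facts about the company's history.",
]


def build_system_prompt(task_instructions: str) -> str:
    """Combine the bot's task description with the guard instructions."""
    guards = "\n".join(f"- {rule}" for rule in GUARD_RULES)
    return f"{task_instructions}\n\nStrict rules:\n{guards}"
```

    As the comment says, this kind of prompt-level guarding only works at a surface level; nothing stops a determined user from talking the model out of its rules.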

  • That is on purpose they want it to be as difficult as possible.

    If Bezos thinks people are just going to forget about not getting a $65 item they paid for and keep shopping at Amazon, instead of making sure they either get the item or reverse the charge and then reduce or stop shopping on Amazon because of these ridiculous hassles, he is an idiot.

  • is this something that happens a lot or did you tell this story before, because I'm getting deja vu

    Well. I haven't told this story before because it just happened a few days ago.

  • Man, if only someone could have predicted that this AI craze was just another load of marketing BS.

    /s

    This experience has taught me more about CEO competence than anything else.

    There's awesome AI out there too. AlphaFold completely revolutionized research on proteins, and the medical innovations it will lead to are astounding.

    Determining the 3D structure of a protein took years until very recently. Folding@home was a worldwide project linking millions of computers to work on it.

    AlphaFold does it in under a second, and has revealed the structures of 200 million proteins. It's one of the most significant medical achievements in history. Since it essentially dates back to 2022, we're still a few years from feeling the direct impact, but it will be massive.

    From what I've seen so far, I think I can safely say the only thing AI can truly replace is CEOs.

    I was thinking about this the other day and I don't think it would happen any time soon. The people who put the CEO in charge (usually the board members) want someone who will make decisions (that the board has a say in) but also someone to hold accountable when those decisions don't realize profits.

    AI is unaccountable in any real sense of the word.

  • ...and it's only expensive and ruins the environment even faster than our wildest nightmares

    what you say is true but it's not a viable business model, which is why AI has been overhyped so much

    What I’m saying is the ONLY viable business model

    • I'm not wrong. There's mountains of research demonstrating that LLMs encode contextual relationships between words during training. [...]

    Defining contextual relationships between words sounds like predicting the next word in a set, mate.

    • There's awesome AI out there too. AlphaFold completely revolutionized research on proteins, and the medical innovations it will lead to are astounding. [...]

    That's part of the problem isn't it? "AI" is a blanket term that has recently been used to cover everything from LLMs to machine learning to RPA (robotic process automation). An algorithm isn't AI, even if it was written by another algorithm.

    And at the end of the day none of it is artificial intelligence. Not to the original meaning of the word. Now we have had to rebrand AI as AGI to avoid the association with this new trend.

  • all these tickets I’ve been writing have been going into a paper shredder

    Try submitting tickets online. Physical mail is slower and more expensive.

    It was an expression; online is the only way you can submit tickets.

  • Shrinking AGI timelines: a review of expert forecasts - 80,000 Hours https://share.google/ODVAbqrMWHA4l2jss

    Here you go! Draw your own conclusions; curious what you think. I'm in sales. I don't enjoy convincing people to change their minds in my personal life lol

    We don't have any way of knowing what makes human consciousness, the best we've got is to just call it an emergent phenomenon, which is as close to a Science version of "God of the Gaps" as you can get.

    And you think we can make ChatGPT a real person with good intentions and duct tape?

    Naw, sorry but I'll believe AGI when I see it.

  • What I’m saying is the ONLY viable business model

    not at the current cost or environmental damage

  • Phone menu trees

    I assume you mean IVR? It's okay not to be familiar with the term; I wasn't either until I worked in the industry. And people that are in charge of them are usually the dumbest people ever.

    people that are in charge of them are usually the dumbest people ever.

    I think that's actively encouraged by management in some areas: put the dumbest people in charge to make the most irritating frustrating system possible. It's a feature of the system.

    Some of the most irritating systems I have interacted with (government disability benefits administration) actually require "press 1 for X, press 2 for Y", and if you have your phone on speaker, the system won't recognize the touch tones; you have to do them without speakerphone.

  • Yeah but these pesky workers cut into profits because you have to pay them.

    They're unpredictable. Every employee is a potential future lawsuit; they can get injured, sexually harassed, all kinds of things. AI doesn't file lawsuits against the company - yet.

    • It is important to understand that most of the job of software development is not making the code work. That's the easy part. [...]

    A good systems analyst can effectively translate user requirements into accurate specifications and does not need to be a programmer. Good systems analysts are generally more adept at asking clarifying questions, challenging assumptions and sussing out needs. Good programmers will still be needed, but their time is wasted gathering requirements.

    • My current conspiracy theory is that the people at the top are just as intelligent as everyday people we see in public. [...]

    Absolutely. Wealth isn't competence, and too much of it fundamentally leads to a physical and psychological disconnect with other humans. Generational wealth creates sheltered, twisted perspectives in youth who have enough money and influence to just fail upward their entire lives.

    "New" wealth creates egocentric narcissists who believe they "earned" their position. "If everyone else just does what I did, they'd be wealthy like me. If they don't do what I did, they must not be as smart or hard-working as me."

    Really all of meritocracy is just survivorship bias, and countless people are smarter and more hard-working, just significantly less lucky. Once someone has enough capital that it starts generating more wealth on its own - in excess of their living expenses even without a salary - life just becomes a game to them, and they start trying to figure out how to "earn" more points.

    • There's awesome AI out there too. AlphaFold completely revolutionized research on proteins, and the medical innovations it will lead to are astounding. [...]

    Sure. And AI that identifies objects in pictures and converts pictures of text into text. There are lots of good and amazing applications of AI. But that's not what we're complaining about.

    We're complaining about all the people who are asking, "Is AI ready to tell me what to do so I don't have to think?" and "Can I replace everyone that works for me with AI so I don't have to think?" and "Can I replace my interaction with my employees with AI so I can still get paid for not doing the one thing I was hired to do?"
