• applebusch@lemmy.world · 1 year ago

      Doubt. These large language models can’t produce anything outside their dataset. Everything they do is derivative, pretty much by definition. Maybe they can mix and match things they were trained on but at the end of the day they are stupid text predictors, like an advanced version of the autocomplete on your phone. If the information they need to solve your problem isn’t in their dataset they can’t help, just like all those cheap Indian call centers operating off a script. It’s just a bigger script. They’ll still need people to help with outlier problems. All this does is add another layer of annoying unhelpful bullshit between a person with a problem and the person who can actually help them. Which just makes people more pissed and abusive. At best it’s an upgrade for their shit automated call systems.
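The "advanced autocomplete" framing above can be made concrete with a toy sketch. This is a bigram counter, nothing remotely like a real LLM in scale or mechanism, but it shows the same next-token idea the comment describes: it can only ever emit words from its training data, and given a word it never saw, it is stuck.

```python
from collections import Counter, defaultdict

# Toy "text predictor": count which word follows which in the training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, steps=3):
    """Greedily append the most common next word, like phone autocomplete."""
    out = [word]
    for _ in range(steps):
        choices = following[out[-1]]
        if not choices:  # word (or continuation) not in the dataset -> stuck
            break
        out.append(choices.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # continues with words seen in training
print(autocomplete("dog"))  # "dog" never appeared, so nothing to predict
```

Real models predict over tokens with learned probabilities rather than raw counts, but the failure mode the comment points at is the same shape: no relevant training signal, no useful continuation.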

      • RogueBanana@lemmy.zip · 1 year ago

        Most call centers have multiple levels of teams, where the lower ones are just reading off a script and make up the majority. You don’t have to replace every single one to implement AI. It’s gonna be the same for a lot of other jobs as well, and many will lose jobs.

        • hitmyspot · 1 year ago

          Who also don’t have the information or data that I need.

        • Ann Archy@lemmy.world · 1 year ago

          It isn’t going to completely replace whole business departments, only 90% of them, right now.

          In five years it’s going to be 100%.

      • thetreesaysbark@sh.itjust.works · 1 year ago

        I’d say at best it’s an upgrade to scripted customer service. A lot of the scripted ones are slower than AI, and often staffed by people with stronger accents, making it more difficult for the customer to understand the script entry being read back to them, leading to more frustration.

        If your problem falls outside the realm of the script, I just hope it recognises the script isn’t solving the issue and redirects you to a human. Often I’ve noticed ChatGPT not learning from the current conversation (though if you ask it about this, it will claim otherwise). In that scenario it just regurgitates the same 3 scripts back to me when I tell it it’s wrong. For me this isn’t so bad, since I can just turn to a search engine, but in a customer service scenario it would be extremely frustrating.
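The hand-off the comment hopes for can be sketched as a simple heuristic. Everything here is a hypothetical illustration, not any vendor's actual escalation logic: if the bot's recent replies are near-duplicates of each other (it is "regurgitating the same scripts"), assume the script has failed and escalate to a human.

```python
from difflib import SequenceMatcher

def is_repetitive(replies, threshold=0.9):
    """True if any two consecutive replies are near-identical."""
    return any(
        SequenceMatcher(None, a, b).ratio() >= threshold
        for a, b in zip(replies, replies[1:])
    )

def next_action(bot_replies, window=3):
    """Decide whether the bot should keep going or hand off to a human."""
    recent = bot_replies[-window:]
    if len(recent) >= 2 and is_repetitive(recent):
        return "escalate_to_human"
    return "continue_bot"
```

A real system would likely combine this with signals like user sentiment or an explicit "talk to an agent" intent, but even a crude repetition check would catch the loop described above.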

      • guacupado@lemmy.world · 1 year ago

        Your description of AI limitations sounds a lot like the human limitations of the reps we deal with every day. Sure, if some outlier situation comes up it has to go to a human, but let’s be honest: those calls are usually going to a manager anyway, so I’m not seeing your argument. An escalation is an escalation. The article itself even says it’s not a literal 100% replacement of humans.

      • Ann Archy@lemmy.world · 1 year ago

        You can doubt it all you want; the fact of the matter is that AI is provably more than capable of taking over the roles of humans in many work areas, and it already does.

    • GALM@lemmy.world · 1 year ago

      And the way customer support staff can be/is abused in the US is so dehumanizing. Nobody should have to go through that wrestling ring.

      • fluxion@lemmy.world · 1 year ago

        A lot of that abuse is because customer service has been gutted to the point that it is infuriating to a vast number of customers calling about what should be basic matters. Not that it’s justified; it’s just that it doesn’t necessarily have to be such a draining job if not for the greed that puts them in that situation.

        • BlanketsWithSmallpox@lemmy.world · 1 year ago

          There was a recent episode of Ai no Idenshi, an anime covering exactly these topics. The customer service episode was nuts and hits on these points so well.

          It’s a great show for anyone interested in fleshing out some of the more mundane topics of AI. I’ve read and watched a lot of sci-fi and it hit some novel stuff for me.

          https://reddit.com/r/anime/s/0uSwOo9jBd

    • DessertStorms@kbin.social · 1 year ago

      I’m pretty sure it’d be a way nicer experience for the customers.

      Lmfao, in what universe? As if trained humans reading off a script they’re not allowed to deviate from isn’t frustrating enough, imagine doing that with a bot that doesn’t even understand what frustration is.

        • cley_faye@lemmy.world · 1 year ago

          de facto instant reply

          Not with a good enough model, no. Not without some ridiculous expense, which is not what this is about.

          if trained right, way more knowledgeable than the human counterparts

          Support is not only a question of knowledge. Sure, for some support services, they’re basically useless. But that’s not necessarily the humans’ fault; lack of training and lack of means of action are also a part of it. And that’s not going away by replacing the “human” part of the equation.

          At best, the first few iterations will be faster at brushing you off, and further down the line, once you hit something outside the expected range of issues, it’ll either spout nonsense or just make you go in circles until you’re put through to someone actually able to do something.

          Both “properly training people” and “properly training an AI model” cost money, and this is all about cutting costs, not improving user experience. You can bet we’ll see LLMs better trained to politely turn people away long before they’re able to handle random unexpected stuff.

          • testfactor@lemmy.world · 1 year ago

            While properly training a model does take a lot of money, it’s probably a lot less money than paying 1.6 million people for any number of years.
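As a rough back-of-envelope for the claim above. Every figure here is a placeholder assumption for illustration, not data from the article or the thread; the only number taken from the comment is the 1.6 million headcount.

```python
# Back-of-envelope: annual payroll for 1.6M workers vs. a one-off
# model training cost. All dollar figures are assumed placeholders.
workers = 1_600_000
assumed_annual_cost_per_worker = 10_000       # USD/year, assumption
assumed_model_training_cost = 100_000_000     # USD, assumption

annual_payroll = workers * assumed_annual_cost_per_worker
print(annual_payroll)  # total assumed payroll per year
print(assumed_model_training_cost < annual_payroll)
```

Even with deliberately conservative wage assumptions, the payroll figure dwarfs plausible training budgets by orders of magnitude, which is the economic pressure the whole thread is arguing about.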

    • philodendron@lemdro.id · 1 year ago

      Yeah but are you ready for “my grandma used to tell me $10 off coupon codes as I fell asleep…”

    • gravitas_deficiency@sh.itjust.works · 1 year ago

      Cheap as hell until you flood it with garbage, because there is a dollar amount assigned for every single interaction.

      Also, I’m not confident that ChatGPT would be meaningfully better at handling the edge cases that always make people furious with phone menus these days.