• Technus@lemmy.zip · 3 months ago

    Even if it didn’t, any middle manager who decides to replace their dev team with AI is going to realize pretty quickly that actually writing code is only a small part of the job.

    Won’t stop 'em from trying, of course. But when the laid-off devs get frantic calls from management asking them to come back and fix everything, they’ll be in a good position to negotiate a raise.

    • hendrik@palaver.p3x.de · 3 months ago

      If anything, AI could be used to replace managers 😆 I mean, a lot of management looks like paper-pushing to me, which seems ideal for AI to handle. But I think we’ll still need people to do the real work for quite some time to come. Software architecture and coding (complex) stuff in particular ain’t easy, and neither is project management. So I guess even some managers can stay.

      • conciselyverbose@sh.itjust.works · 3 months ago

        Good management is almost all people skills. It needs to be influenced by domain knowledge for sure, but it’s almost all about people.

        AI can probably match trash managers, but it won’t replace remotely competent ones.

        • hendrik@palaver.p3x.de · 3 months ago

          I’m not even sure about the “people skills” of ChatGPT. Maybe it’s actually good at that. It always says things like “…you have to consider this side, but also the other side…” or “…this is like that, however it might…”. It can weasel itself out of situations (as it did in this video). It makes a big effort to keep a very friendly tone in all circumstances. I think OpenAI has put a lot of effort into giving ChatGPT something that resembles a portion of people skills.

          I’ve used those capabilities to rephrase emails that needed to tell some uncomfortable truths without scaring someone away, and it did a halfway decent job. Better than I could have done. And we already see those people skills in use by companies that replace their first-level support with AI. I read somewhere that it gets better customer satisfaction ratings than a human-powered call center. It’s good at pacifying people, being nice to them, and answering the most common 90% of questions over and over again.

          So I’m not sure what to make of this. I think my point still stands: AI (at least ChatGPT) is orders of magnitude better at people skills than at programming. I’m not sure what counterexamples we have… Sure, it can’t come to your desk, look you in the eyes, and see whether you’re happy or need something, because it doesn’t have any eyes. But then again, that’s something I rarely see from average human managers in big offices, either…

          • conciselyverbose@sh.itjust.works · 3 months ago

            Using flowery language isn’t “people skills”.

            “People skills” means handling conflict and competing objectives between people fairly and efficiently. It’s a trait based almost entirely on empathy, with a measure of ingenuity mixed in, and GPT isn’t anywhere within many orders of magnitude of either. It will be well after it “can code” that it does anything remotely in the neighborhood of the soft skills of a competent manager.

            • hendrik@palaver.p3x.de · 3 months ago

              Yeah. I mean, the fundamental issue is that ChatGPT isn’t human. It just mimics things. That’s how it generates text, audio, and images, and it’s also how it handles “empathy”: it mimics what it learned from human interactions during training.

              But in the end, does it really matter where it comes from and why? The goal of a venture is to produce or achieve something, and that isn’t measured by where it comes from, but by actual output. I don’t want to speculate too much, but despite not having real empathy, it could theoretically achieve the same thing by faking it well enough. And that has already been demonstrated in some narrow tasks: we have customer satisfaction ratings, and quite a few people saying it helps them with various things. We need to measure that and do more studies on the actual outcome of replacing something with AI. It could very well be that our perspective is wrong.

              And with that said: I’ve tried roleplaying with AI. It seems to have some theory of mind. Not really, of course, but it gets what I’m hinting at: the desires and behaviour of characters, and so on. Lots of models are very agreeable; some can roleplay conflict. I think the current capabilities of these kinds of AI are enough to fake some things well enough to get somewhere and be actually useful. I’m not saying it has or hasn’t people skills; I think it’s somewhere on the spectrum between the two. I can’t really tell where, because I haven’t yet read any research on this in this context.

              And of course there’s a big difference between everyday tasks and handling a situation that has gone completely haywire. We have to factor that in. But in practice there are ways to handle it: for example, AI and humans could split up the tasks between them, with things getting escalated so that humans make the difficult decisions. That could still mean 80% of the labor gets replaced.

              • conciselyverbose@sh.itjust.works · 3 months ago

                Actual empathy (actually being able to understand people’s perspectives) is how you get to outcomes everyone is OK with. Empathy isn’t language. It’s using an understanding of what people feel and want to find solutions that work well for everyone. Without understanding that perspective at a deep, intuitive level, you don’t solve actual problems, and you don’t routinely preempt problems by seeing them before they have a chance to happen and working around them.

                Actual leadership isn’t stepping in when people are almost coming to blows and parroting “conflict resolution” at them. It’s understanding who your people are and what they want, and putting them in a position to succeed.

                • hendrik@palaver.p3x.de · 3 months ago

                  I get what you’re saying. I think we’re getting a bit philosophical here with the empathy. My point was: sometimes what matters is whether something gets the job done. And I see some reason to believe AI might become capable of that, despite doing it differently and having shortcomings.

                  I think it’s true that empathy gets the job done. But it’s a logical flaw to say that because empathy can do it, ONLY empathy can do it. It might very well be otherwise; I think we just don’t know yet. I’m not set on one side or the other. I just want to see some research done and get a definitive answer instead of speculating.

                  And I see some reason to believe it’s more complicated than that. What I outlined earlier is that AI can apply something loosely resembling a theory of mind and get some use out of it. But we could also log someone’s every interaction, run sentiment analysis on it, and find out with great accuracy whether someone sitting at a computer is happy, angry, resigned, or frustrated. AI can do that all day for every employee and outperform any human manager at it. On the flip side, it can’t do other things. And we already have “algorithms”, for example on TikTok, YouTube, etc., that can tell a lot about someone and predict things about them; as we all know, that works quite well. All of that makes me believe there is some potential to do things like what we’re currently discussing.
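
                  To make the sentiment-analysis part concrete, here’s a minimal sketch, assuming Python with the Hugging Face transformers library and its default English sentiment model (the logged messages are invented for illustration):

                      # Minimal sketch: classify the mood of logged messages.
                      # Assumes: pip install transformers torch
                      from transformers import pipeline

                      # Downloads a default English sentiment model on first use.
                      classifier = pipeline("sentiment-analysis")

                      messages = [
                          "Happy to pick that ticket up, sounds fun!",
                          "I have asked three times and still got no answer.",
                      ]

                      for text in messages:
                          # Each result looks like {'label': 'NEGATIVE', 'score': 0.98}.
                          result = classifier(text)[0]
                          print(f"{result['label']:>8}  {result['score']:.2f}  {text}")

                  Of course that only gives a crude positive/negative signal per message, not the full happy/angry/resigned/frustrated distinction, but it shows how cheaply something like this could run over every logged interaction.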

                  • conciselyverbose@sh.itjust.works · 3 months ago

                    I’m not arguing philosophy. I’m saying that the core definition of the job description is “understand people and use that understanding to get shit done”. A middle manager doesn’t decide strategy. They just make their team work well together. Understanding people is the whole job.

                    TikTok and YouTube algorithms don’t care (and have no desire to care) what people actually want or value. They only care about what maximizes the time wasted on their platform. The result is creators explicitly telling their viewers (who don’t want the nonsense either) that they resort to bullshit like clickbait thumbnails and titles because YouTube makes it impossible to succeed otherwise. They (along with almost all other social media) are prime examples of what bad, toxic algorithms look like.

      • Technus@lemmy.zip · 3 months ago

        Don’t even need an AI. Just teach a parrot to say “let’s circle back on this” and “how many story points is that?”