How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can’t manage this consistently with CRUD apps and people think that this number isn’t laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?

I don’t believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.

  • IHeartBadCode@kbin.run · ↑130 ↓4 · 5 months ago

    I had my fun with Copilot before I decided that it was making me stupider - it’s impressive, but not actually suitable for anything more than churning out boilerplate.

    This. Many of these tools are good at incredibly basic boilerplate, the kind that’s just a step beyond what, say, a wizard would generate. But to hear some of these AI grifters talk, this stuff is going to render programmers obsolete.

    There’s a reality to these tools. That reality is they’re helpful at times, but they are hardly transformative at the levels the grifters go on about.

    • sugar_in_your_tea@sh.itjust.works · ↑44 · 5 months ago

      I interviewed a candidate for a senior role, and they asked if they could use AI tools. I told them to use whatever they normally would, I only care that they get a working answer and that they can explain the code to me.

      The problem was fairly basic, something like: randomly generate two points and find the distance between them, and we had given them the details (e.g. that distance means a straight line). They used AI, which went well until it generated the Manhattan distance instead of the Euclidean (Pythagorean) distance. They didn’t correct it, so we pointed it out and gave them the equation (totally fine, most people forget it under pressure). Anyway, they refactored the code, used AI again, made the same mistake, and didn’t catch it, and we ended up pointing it out again.
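
(To make the mix-up concrete, here’s a quick sketch of the two distances on the same pair of random points; the names are illustrative, not the actual interview code:)

```python
import math
import random

# Two random points on a 2D grid, as in the interview prompt.
p1 = (random.uniform(0, 100), random.uniform(0, 100))
p2 = (random.uniform(0, 100), random.uniform(0, 100))

# What the AI kept producing: Manhattan (taxicab) distance.
manhattan = abs(p1[0] - p2[0]) + abs(p1[1] - p2[1])

# What the prompt actually asked for: straight-line (Euclidean) distance.
euclidean = math.sqrt((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2)

# The two only agree when the points share an x or a y coordinate;
# otherwise Manhattan is strictly larger.
print(manhattan, euclidean)
```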

      Anyway, at the end of the challenge, we asked them how confident they felt about the code and what they’d need to do to feel more confident (nudge toward unit testing). They said their code was 100% correct and they’d be ready to ship it.

      They didn’t pass the interview.

      And that’s generally my opinion about AI in general, it’s probably making you stupider.

      • deweydecibel@lemmy.world · ↑30 ↓1 · edited · 5 months ago

        I’ve seen people defend using AI this way by comparing it to using a calculator in a math class, i.e. if the technology knows it, I don’t need to.

        And I feel like, for the kind of people whose grasp of technology, knowledge, and education are so juvenile that they would believe such a thing, AI isn’t making them dumber. They were already dumb. What the AI does is make code they don’t understand more accessible, which is to say, it’s just enabling dumb people to be more dangerous while instilling them with an unearned confidence that only compounds the danger.

        • sugar_in_your_tea@sh.itjust.works · ↑9 · 5 months ago

          Yup. And I’m unwilling to be the QC in a coding assembly line, I want competent peers who catch things before I do.

          But my point isn’t that AI actively makes individuals dumber, it’s that it makes people in general dumber. I believe that to be true about a lot of technology. In the 80s, people were familiar with command-line interfaces, and jumping into some coding wasn’t a huge leap, but today, people can’t figure out how to do a thing unless there’s an app for it. AI is just the next step along that path. Soon, even traditionally competent industries will be little more than QC, and nobody will remember how the sausage is made.

          If they can demonstrate that they know how the sausage is made and how to inspect a package of sausages, I’m fine with it. But if they struggle to even open the sausage package, we’re going to have problems.

        • conciselyverbose@sh.itjust.works · ↑8 · 4 months ago

          Yeah, I honestly don’t have any real issue with using it to accelerate your workflow. I think it’s hit or miss how much it does, but it’s probably slightly stepped up from code completion without “AI”.

          But if you don’t understand every line of code “you” write completely, you’re being grossly negligent and begging for a shitshow.

        • sugar_in_your_tea@sh.itjust.works · ↑8 · 5 months ago

          I just don’t bother, under the assumption that I’ll spend more time correcting the mistakes than actually writing the code myself. Maybe that’s faulty, as I haven’t tried it myself (mostly because it’s hard to turn on in my editor, vim).

          • IHeartBadCode@kbin.run · ↑6 · 4 months ago

            Maybe that’s faulty, as I haven’t tried it myself

            Nah, perfectly fine take. To each their own, I say. Where the tech is right now, not bothering with it is completely fine; you aren’t missing all that much, really. At the end of the day it might have saved me ten to fifteen minutes here and there. Nothing that’s a tectonic shift in productivity.

            • sugar_in_your_tea@sh.itjust.works · ↑4 · 4 months ago

              Yeah, most of my dev time is spent reading, and I’m a pretty fast typist, so I never bothered.

              Maybe I’ll try it eventually. But my boss isn’t a fan anyway, so I’m in no hurry.

              • SkyeStarfall@lemmy.blahaj.zone · ↑1 · 4 months ago

                It can be useful in explaining concepts you’re unsure about, in regards to the reading part, but you should always verify that information.

                But it has helped me understand certain concepts in the past, where I struggled with finding good explanations using a search engine.

                • sugar_in_your_tea@sh.itjust.works · ↑1 · edited · 4 months ago

                  Ah, ok. I’m pretty good with concepts (been a dev for 15-ish years), I’m usually searching for specific API usage or syntax, and the official docs are more reliable anyway. So the biggest win would probably be codegen, but that’s also a relatively small part of my job, which is mostly code reviews and planning.

        • manicdave@feddit.uk · ↑5 · 4 months ago

          it’s pretty good for things that I can eye scan and verify that’s what I would have typed anyway. But I’ve found it suggesting things I wouldn’t remotely permit to things that are “sort of” correct.

          Yeah. I haven’t bothered with it much but the best use I can see of it is just rubber ducking.

          Last time I used it was to ask how to change the contrast of a numpy image. It said to multiply each channel by the contrast factor. (I don’t think that’s even right; it should be ((value - 128) * contrast) + 128, not value * contrast as it suggested.) But it did remind me that I can just run operations on the colour channels directly.
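
(A quick numpy sketch of the two formulas; the 128 midpoint assumes 8-bit channels, and the variable names are illustrative:)

```python
import numpy as np

# A tiny fake 2x2 RGB image with 8-bit channels.
img = np.array([[[100, 150, 200], [50, 60, 70]],
                [[0, 128, 255], [30, 130, 230]]], dtype=np.uint8)

contrast = 1.5

# Naive version the LLM suggested: scales brightness along with contrast,
# so everything just gets lighter.
naive = np.clip(img.astype(np.float32) * contrast, 0, 255).astype(np.uint8)

# Pivot around the midpoint (128) so mid-grey stays put and only the
# spread around it grows — this is what contrast adjustment usually means.
adjusted = np.clip((img.astype(np.float32) - 128) * contrast + 128,
                   0, 255).astype(np.uint8)

# A channel value of exactly 128 is unchanged by the pivoted version.
print(naive[1, 0], adjusted[1, 0])
```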

          Wait what’s my point again? Oh yeah, don’t trust anyone that can’t tell you what the output is supposed to do.

      • Excrubulent@slrpnk.net · ↑9 ↓1 · edited · 4 months ago

        Wait wait wait so… this person forgot the Pythagorean theorem?

        Like that is the most basic task. It’s d = sqrt((x1 - x2)^2 + (y1 - y2)^2), right?

        That was off the top of my head, this person didn’t understand that? Do I get a job now?

        I have seen a lot of programmers talk about how much time it saves them. It’s entirely possible it makes them very fast at making garbage code. One thing I’ve known for a long time is that understanding code is much harder than writing it, and so asking an LLM to generate your code sounds like it’s just creating harder work for you, unless you don’t care about getting it right.

        • sugar_in_your_tea@sh.itjust.works · ↑11 · 5 months ago

          Yup, you’re hired as whatever position you want. :)

          Our instructions were basically:

          1. randomly place N coordinates on a 2D grid, and a random target point
          2. report the closest of those N coordinates to the target point

          It was technically different (we phrased it as a top-down game, but same gist). The AI generated the Manhattan distance (abs(x2 - x1) + abs(y2 - y1)), probably due to other clues in the text, but the instructions were clear. The candidate didn’t notice what it was doing, we pointed it out, then they asked for the algorithm, which we provided.
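
(The task as described boils down to something like this sketch; none of it is the actual interview harness, and the names are mine:)

```python
import math
import random

# Step 1: randomly place N coordinates on a 2D grid, plus a target point.
# Step 2: report whichever coordinate is closest to the target,
# using straight-line (Euclidean) distance.
def closest_point(points, target):
    tx, ty = target
    # math.hypot(dx, dy) == sqrt(dx**2 + dy**2)
    return min(points, key=lambda p: math.hypot(p[0] - tx, p[1] - ty))

points = [(random.randint(0, 20), random.randint(0, 20)) for _ in range(5)]
target = (random.randint(0, 20), random.randint(0, 20))
print(closest_point(points, target))
```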

          Our better candidates remember the equation like you did. But we don’t require it, since not all applicants finished college (this one did). We’re more concerned about code structure, asking proper questions, and software design process, but math knowledge is cool too (we do a bit of that).

          • frezik@midwest.social · ↑7 · 4 months ago

            College? Pythagorean Theorem is mid-level high school math.

            I did once talk to a high school math teacher about a graphics program I was hacking away on at the time, and she was surprised that I actually use the stuff she teaches. Which is to say that I wouldn’t expect most programmers to know it exactly off the top of their head, but I would expect they’ve been exposed to it and can look it up if needed. I happen to have it pretty well ingrained in my brain.

            • sugar_in_your_tea@sh.itjust.works · ↑5 · 4 months ago

              Yes, you learn it in the context of finding the hypotenuse of a triangle, but:

              • a lot of people are “bad” at math (really just unconfident), but good with logic
              • geometry, trig, etc require a lot of memorization, so it’s easy to forget things
              • interviews are stressful, and good applicants will space on basic things

              So when I’m interviewing, I try to provide things like algorithms that they probably know but are likely to space on, and focus on the part I care about: can they reason their way through a problem, produce working code, and then turn around and review their own code? Programming is mostly googling stuff (APIs, algorithms, etc.); I want to know if they can google the right stuff.

              And yeah, we let applicants look stuff up, we just short-circuit the less important stuff so they have time to show us the important parts. We dedicate 20-30 min to coding (up to an hour if they rocked the questions but are struggling with the code), and we expect a working solution and for them to ask questions about vague requirements. It’s a software engineering test, not a math test.

              • Excrubulent@slrpnk.net · ↑2 · 4 months ago

                Yeah, that’s absolutely fair, and it’s a bit snobby of me to get all up in arms about forgetting a formula - although it is high school level where I live. But to be handed the formula, informed that there’s an issue and still not fix it is the really hard part to wrap my head around, given it’s such a basic formula.

                I guess I’m also remembering someone I knew who got a programming job off the back of someone else’s portfolio, who absolutely couldn’t program to save their life and revealed that to me in a glaring way when I was trying to help them out. It just makes me think of that study that was done that suggested that there might be a “programmer brain” that you either have or you don’t. They ended up costing that company a lot to my knowledge.

      • xavier666@lemm.ee · ↑4 · 4 months ago

        I don’t want to believe that coders like these exist and are this confident in an AI’s ability to code.

        • sugar_in_your_tea@sh.itjust.works · ↑3 · 4 months ago

          My co-worker told me another story.

          His friend was in a programming class and made it nearly to the end, when he asked my co-worker for help. Basically, he had already written the solution, but it wasn’t working, and he needed help debugging it. My co-worker looked at the code, and it looked AI-generated because there were obvious mistakes throughout, so he asked his friend to walk him through it, and that’s when the friend admitted to having AI generate the whole thing. My co-worker refused to help.

          They do exist, but this candidate wasn’t that. I think they were just under pressure and didn’t know the issue. The red flag for me wasn’t AI or not catching the AI issues, it was that when I asked how confident they were about the code (after us catching the same bug twice), they said 100% and they didn’t need any extra assurance (I would’ve wanted to write tests).

    • 0x0@programming.dev · ↑44 · 5 months ago

      I use them like wikipedia: it’s a good starting point and that’s it (and this comparison is a disservice to wikipedia).

    • Zikeji@programming.dev · ↑30 · 5 months ago

      Copilot / LLM code completion feels like having a somewhat intelligent helper who can think faster than I can, but who has no real understanding of how to actually code and is just good at mimicry.

      So it’s helpful for saving time typing some stuff, and sometimes the absolutely weird suggestions make me think of other scenarios I should consider, but it’s not going to do the job itself.

      • deweydecibel@lemmy.world · ↑16 · edited · 5 months ago

        So it’s helpful for saving time typing some stuff

        Legitimately, this is the only use I’ve found for it. If I need something extremely simple and I’m feeling too lazy to type it all out, it’ll do the bulk of it, and then I just go through and edit out all the little mistakes.

        And what gets me is that anytime I read all of the AI wank about how people are using these things, it kind of just feels like they’re leaving out the part where they have to edit the output too.

        At the end of the day, we’ve had this technology for a while, it’s just been in the form of predictive suggestions on a keyboard app or code editor. You still had to steer in the right direction. Now it’s just smart enough to make it from start to finish without going off a cliff, but you still have to go back and fix it, the same way you had to steer it before.

    • grrgyle@slrpnk.net · ↑8 · 5 months ago

      I think we all had that first moment where copilot generates a good snippet, and we were blown away. But having used it for a while now, I find most of what it suggests feels like jokes.

      Like it does save some typing / time spent checking docs, but you have to be very careful to check its work.

      I’ve definitely seen a lot more impressively voluminous, yet flawed pull requests, since my employer started pushing for everyone to use it.

      I foresee a real reckoning of unmaintainable codebases in a couple years.

    • Shadywack@lemmy.world · ↑6 ↓1 · 5 months ago

      Looks like two people suckered by the grifters downvoted your comment (as of this writing). Should they read this: it is a grift, get over it.

    • AIhasUse@lemmy.world · ↑7 ↓24 · 5 months ago

      Yes, and then you take the time to dig a little deeper and use something agent-based like aider, crewai, or autogen. It is amazing how many people are stuck in the mindset of “if the simplest tools from over a year ago aren’t very good, then there’s no way there are any good tools now.”

      It’s like seeing the original Planet of the Apes and then arguing against how realistic the Apes are in the new movies without ever seeing them. Sure, you can convince people who really want unrealistic Apes to be the reality, and people who only saw the original, but you’ll do nothing for anyone who actually saw the new movies.

      • foenix@lemm.ee · ↑27 ↓1 · 5 months ago

        I’ve used crewai and autogen in production… And I still agree with the person you’re replying to.

        The 2 main problems with agentic approaches I’ve discovered this far:

        • One mistake or hallucination will propagate to the rest of the agentic task. I’ve even tried adding a QA agent for this purpose, but those agents aren’t reliable either, which leads to the main issue:

        • It’s very expensive to run and rerun agents at scale. The scaling factor of each agent being able to call another agent means that you can end up with an exponentially growing number of calls. My colleague at one point ran a job that cost $15 for what could have been a simple task.

        One last consideration: the current LLM providers are very aware of these issues or they wouldn’t be as concerned with finding “clean” data to scrape from the web vs using agents to train agents.

        If you’re using crewai btw, be aware there is some builtin telemetry with the library. I have a wrapper to remove that telemetry if you’re interested in the code.

        Personally, I’m kinda done with LLMs for now and have moved back to my original machine learning pursuits in bioinformatics.

      • FaceDeer@fedia.io · ↑5 ↓7 · 5 months ago

        Also, a lot of people who are using AI have become quiet about it of late exactly because of reactions like this article’s. Okay, you’ll “piledrive” me if I mention AI? So I won’t mention AI. I’ll just carry on using it to make whatever I’m making without telling you.

        There’s some great stuff out there, but of course people aren’t going to hear about it broadly if every time it gets mentioned it gets “piledriven.”

        • afraid_of_zombies@lemmy.world · ↑4 ↓5 · edited · 4 months ago

          Pretty much me. I am using it everywhere but usually not interested in mentioning it to some internet trolls.

          You can check my profile if you want, or not. 7 months ago I baked my first loaf of bread. I got the recipe from chatgpt. Over 7 months I have been going over recipes and techniques with it, and as of this month I now have a part-time gig making artisan breads for a restaurant.

          There is no way I could have progressed this fast without that tool. Keep in mind I have a family and a career in engineering, not exactly an abundance of time to take classes.

          I mentioned this once on lemmy and some boomer shit started screaming about how learning to bake with the help of an AI didn’t count and I needed to buy baking books.

          Edit: spelling

          • FaceDeer@fedia.io · ↑3 ↓7 · 4 months ago

            And if you need examples of people being piledriven, you can browse my history a bit. :) Since I’m not doing anything with AI that would suffer “professionally” from backlash (such as might happen to an artist who becomes the target of anti-AI witch-hunters) I’ve not been shy about talking about the good things AI can do and how I use it. Or at calling out biased or inaccurate arguments against various AI applications. As a result I get a lot of downvotes.

            Fundamentally, I think it’s just that people are afraid. They’re seeing a big risk from this new technology of losing their jobs, their lifestyles, and control over their lives. And that’s a real concern that should be treated seriously, IMO. But fear is not a good cultivator of rational thought or honest discourse. It’s not helping people work towards solving those real concerns.

            • AIhasUse@lemmy.world · ↑3 ↓7 · 4 months ago

              Yeah, this is exactly what I think it is. I’m a bit concerned about how hard it’s going to hit a large number of people when they realize that their echo chamber of “LLMs are garbage and have no benefits” was so completely wrong. I agree that there are scary aspects of all this, but pretending they don’t exist will just make them harder to deal with. It’s like denying that the smoke alarm is going off until your arm is on fire.

              • atrielienz@lemmy.world · ↑3 · 4 months ago

                I’m inclined to believe, based on this thread, that you and the person you’re replying to didn’t read the article, because the person who wrote it and most of the replies to it are not saying “LLMs are garbage and have no benefits”.

                The post is specifically calling out companies that have jumped on the “AI LLM” train and are trying to force-feed it into every single project and service, regardless of whether it will be useful or beneficial. And they will not listen to the people working in the field who tell them it will not be.

                The hype is what people are upset about because companies are selling something that is useful in selective cases as something that will be useful to everyone universally for just about everything and they’re making products worse.

                Just look at Google and their implementation of AI LLMs in search results. That’s a product that isn’t useful unless it’s accurate, and it was not ready to be a public-facing service. In their other products it promises more while actually breaking or removing features that users have relied on for years. That’s why people are upset. This isn’t even taking into account the theft of people’s work to get these LLMs trained.

                This is literally just about companies having more FOMO than sense. This is about them creating and providing to the public broken iterations of products stuffed with the newest “tech marvel” to increase sales or stock price while detrimentally affecting the common user.

                For every case of an LLM being useful there are several where it’s not. That’s the point.

  • Spesknight@lemmy.world · ↑77 ↓1 · 5 months ago

    I don’t fear Artificial Intelligence, I fear Administrative Idiocy. The managers are the problem.

    • bionicjoey@lemmy.ca · ↑42 · 5 months ago

      I know AI can’t replace me. But my boss’s boss’s boss doesn’t know that.

      • sugar_in_your_tea@sh.itjust.works · ↑22 · 5 months ago

        Fortunately, it’s my job as your boss to convince my boss and my boss’s boss that AI can’t replace you.

        We had a candidate spectacularly fail an interview when they used AI and didn’t catch the incredibly obvious errors it made. I keep a few examples of that handy to defend my peeps in case my boss or boss’s boss decide AI is the way to go.

        I hope your actual boss would do that for you.

          • sugar_in_your_tea@sh.itjust.works · ↑8 · 5 months ago

            I’m so sorry.

            My boss asked if I wanted to be a manager, and I said no, but I’ll take the position if offered so it doesn’t go to a non-technical person. I wish that was more common elsewhere.

            Good luck, sir or madam.

            • bionicjoey@lemmy.ca · ↑4 · edited · 5 months ago

              Well, my office recently announced that we’ll be going from 0 days mandatory in office to 3 days a week. After working fully remote for the last few years, I’ll kms before going back, so I’m on the way out anyway.

              • sugar_in_your_tea@sh.itjust.works · ↑2 · 5 months ago

                That sucks. We do 2-days in office, but that was also always the agreement, we were just temporarily remote during COVID (though almost all of us were hired during COVID). My boss tried 3-days in office due to company policy, but we hated it and went back to two.

                I cannot stand orgs going back on their word without agreement from the team. I hope you find someplace better.

                • bionicjoey@lemmy.ca · ↑3 · 5 months ago

                  Thanks, I’m sure I’ll land on my feet. I have a pretty unique skillset for IT (Science HPC admin) and I’m thinking about maybe going back to school and doing a Master’s.

        • Kaput@lemmy.world · ↑5 · 5 months ago

          They’ll replace you first, so they can replace your employees… even though you are clearly right.

    • CosmoNova@lemmy.world · ↑9 · 4 months ago

      Worst part is some of them aren’t even idiots, just selfish and reckless. They don’t care if the company still exists in a year so long as they can make millions driving it into the ground.

  • kingthrillgore@lemmy.ml · ↑59 · edited · 4 months ago

    Hacker News was silencing this article outright. That’s typically a sign that it’s factual enough to strike a nerve with the potential CxO libertarian [slur removed] crowd.

    If this is satire, I don’t see it. Because I’ve seen enough of the GenAI crowd openly undermine society/the environment/the culture and be brazen about it; violence is a perfectly normal response.

    • xavier666@lemm.ee · ↑12 · 4 months ago

      What happened to HN? I have now heard of HN silencing certain posts multiple times. Is this enshittification?

    • Alphane Moon@lemmy.world (OP) · ↑11 · 4 months ago

      Fascinating, I am not surprised at all.

      Even beyond AI, some of the implicit messaging has got to strike a nerve with that kind of crowd.

      I don’t think this is satire either, more like a playful rant (as opposed to a formal critique).

    • nialv7@lemmy.world · ↑4 ↓1 · 4 months ago

      “If something is silenced, then that must mean it is right” is a pretty bad argument. There are genuinely good reasons to ban something. Being unnecessarily aggressive can be one.

    • rottingleaf@lemmy.zip · ↑1 ↓1 · 4 months ago

      I’m libertarian, I’m against this. I’m also against blockchain scams.

      My ideas on digital currencies and on something like artificial intelligence are simply an extension of the usual ancap/panarchy ideas. It’s actually a very good test for any libertarian you meet: they’ll usually agree that a “meta-society” of voluntary exterritorial jurisdictions (anything from crack-smoking ancap tribes to solarpunk communes), with some overarching security system to protect those jurisdictions from being ignored by somebody well-armed, is good. Then you just have to ask why the systems they like for currencies, and for this, are clearly manifestations of a different ideology.

  • deweydecibel@lemmy.world · ↑60 ↓1 · edited · 4 months ago

    Another friend of mine was reviewing software intended for emergency services, and the salespeople were not expecting someone handling purchasing in emergency services to be a hardcore programmer. It was this false sense of security that led them to accidentally reveal that the service was ultimately just some dude in India. Listen, I would just be some random dude in India if I swapped places with some of my cousins, so I’m going to choose to take that personally and point out that using the word AI as some roundabout way to sell the labor of people that look like me to foreign governments is fucked up, you’re an unethical monster, and that if you continue to try { thisBullshit(); } you are going to catch (theseHands)

    This aspect of it isn’t getting talked about enough. These companies are presenting these things as fully-formed AI, while completely neglecting the people behind the scenes constantly cleaning it up so it doesn’t devolve into chaos. All of the shortcomings and failures of this technology are being masked by the fact that there’s actual people working round the clock pruning and curating it.

    You know, humans, with actual human intelligence, without which these miraculous “artificial intelligence” tools would not work as they seem to.

    If the “AI” needs a human support team to keep it “intelligent”, it’s less AI and more a really fancy kind of puppet.

  • EnderMB@lemmy.world · ↑58 ↓1 · edited · 4 months ago

    I work in AI as a software engineer. Many of my peers have PhDs and have sunk a lot of research into their field. I know probably more than the average techie, but in the grand scheme of things I know fuck all. Hell, if you were to ask the scientists I work with if they “know AI”, they’ll probably just say “yeah, a little”.

    Working in AI has exposed me to so much bullshit, whether it’s job offers for obvious scams that’ll never work, or for “visionaries” that work for consultancies that know as little about AI as the next person, but market themselves as AI experts. One guy had the fucking cheek to send me a message on LinkedIn to say “I see you work in AI, I’m hosting a webinar, maybe you’ll learn something”.

    Don’t get me wrong, there’s a lot of cool stuff out there, and some companies are doing some legitimately cool stuff, but the actual use-cases for these tools where they won’t just be productivity enhancers/tools is low at best. I fully support this guy’s efforts to piledrive people, and will gladly lend him my sword.

  • Rumbelows@lemmy.world
    link
    fedilink
    English
    arrow-up
    51
    arrow-down
    1
    ·
    5 months ago

    I feel like some people in this thread are overlooking the tongue in cheek nature of this humour post and taking it weirdly personally

    • Eccitaze@yiffit.net
      link
      fedilink
      English
      arrow-up
      44
      ·
      5 months ago

      Yeah, that’s what happens when the LLM they use to summarize these articles strips all nuance and comedy.

    • amio@kbin.run
      link
      fedilink
      arrow-up
      14
      ·
      5 months ago

      Even for the internet, this place is truly extremely fond of doing that.

  • madsen@lemmy.world
    link
    fedilink
    English
    arrow-up
    47
    arrow-down
    4
    ·
    5 months ago

    This is such a fun and insightful piece. Unfortunately, the people who really need to read it never will.

    • AIhasUse@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      49
      ·
      5 months ago

      It blatantly contradicts itself. I would wager good money that you read the headline and didn’t go much further because you assumed it was agreeing with you. Despite the subject matter, this is objectively horribly written. It lacks a cohesive narrative.

      • Alphane Moon@lemmy.worldOP
        link
        fedilink
        English
        arrow-up
        18
        arrow-down
        1
        ·
        edit-2
        5 months ago

        I don’t think it’s supposed to have a cohesive narrative structure (at least in context of a structured, more formal critique). I read the whole thing and it’s more like a longer shitpost with a lot of snark.

      • madsen@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        4 months ago

        I read every single word of it, twice, and I was laughing all the way through. I’m sorry you don’t like it, but it seems strange that you immediately assume that I haven’t read it just because I don’t agree with you.

      • AIhasUse@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        31
        ·
        5 months ago

        There is literally not a chance that anyone downvoting this actually read it. It’s just a bunch of idiots that read the title, like the idea that llms suck and so they downvoted. This paper is absolute nonsense that doesn’t even attempt to make a point. I seriously think it is probably ai generated and just taking the piss out of idiots that love anything they think is anti-ai, whatever that means.

        • decivex@yiffit.net
          link
          fedilink
          English
          arrow-up
          15
          arrow-down
          1
          ·
          5 months ago

          It’s not a paper, it’s a stream-of-consciousness style blog post.

        • megaman@discuss.tchncs.de
          link
          fedilink
          English
          arrow-up
          13
          arrow-down
          1
          ·
          4 months ago

          I read the fun blogpost that is not an academic paper and ive downvoted you. Does that mean i dont actually exist or that u dont actually exist???

          • megaman@discuss.tchncs.de
            link
            fedilink
            English
            arrow-up
            10
            arrow-down
            1
            ·
            4 months ago

            Everyone who downvoted me didnt read the article, or didnt read what i said, or didnt read op, or something, i dont remember what they didnt read but they cannot be real because the only way to disagree with me is to not have read something or other (or did read it, cant remember which)

            • AIhasUse@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              12
              ·
              4 months ago

              Because the headline goes along with all the people that thoughtlessly think ai is pointless, but the blog post itself is an incoherent mess that actually sometimes talks about how ai is useful and rapidly improving. It is a rambling mess. People who read it realise this. People who just read the headline assume it will say what they think. The chances that you made it through that whole thing are slim to none, but sure, maybe you read it, whatever. Congratulations, I’m sure it really improved your understanding.

              • atrielienz@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                edit-2
                4 months ago

                Which makes the point that while AI LLMs can be useful and can be improved, hamfisting them into every product you make as a company because you have FOMO is ill advised and aggravating, especially when you pay people to be subject matter experts in the field and they tell you it’s a bad idea. That’s what the article said in some very verbose language. Your attention span must be severely lacking because you couldn’t read the article and glean that simple point from the words on the page. I read it and it was entertaining and insightful.

                You seem like someone who might need paragraphs to be a single sentence.

                • AIhasUse@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  5
                  ·
                  4 months ago

                  Other than having to scroll down an extra 3 centimeters to see your Google results, have you actually been inconvenienced by ai being used somewhere? All this outrage about terrible ai getting in the way all the time is hilarious because it is absolutely manufactured by people who are obsessed with complaining and then parroted by people incapable of thinking for themselves. Nobody’s actually living worse lives because a few companies are trying out new tech. The fact of the matter is that there are obnoxious karens online, just like in real life.

                  You seem like someone who is probably self-righteous, obnoxious, and annoying to be around in real life, just like you are online.

          • AIhasUse@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            11
            ·
            4 months ago

            What a good full set of possibilities since it’s certainly impossible for anyone on the internet to lie. How fun for a blog to contradict its main point.

  • tron@midwest.social
    link
    fedilink
    English
    arrow-up
    41
    ·
    4 months ago

    Oh my god this whole post is amazing, thought I’d share my favorite excerpt:

    This entire class of person is, to put it simply, abhorrent to right-thinking people. They’re an embarrassment to people that are actually making advances in the field, a disgrace to people that know how to sensibly use technology to improve the world, and are also a bunch of tedious know-nothing bastards that should be thrown into Thought Leader Jail until they’ve learned their lesson, a prison I’m fundraising for. Every morning, a figure in a dark hood, whose voice rasps like the etching of a tombstone, spends sixty minutes giving a TedX talk to the jailed managers about how the institution is revolutionizing corporal punishment, and then reveals that the innovation is, as it has been every day, kicking you in the stomach very hard.

    Where the fuck do I donate???

      • pyldriver@lemmy.world
        link
        fedilink
        English
        arrow-up
        15
        ·
        4 months ago

        Right as in the actual definition of the word, not political:

        Conforming with or conformable to justice, law, or morality.

        In accordance with fact, reason, or truth; correct.

        Fitting, proper, or appropriate.

        • WldFyre@lemm.ee
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          2
          ·
          4 months ago

          I get that, didn’t think it was a political meaning. Just seems like an iffy word to me personally, hard to put my finger on it.

          Maybe since the inverse would be “wrong-think”?

          • Alphane Moon@lemmy.worldOP
            link
            fedilink
            English
            arrow-up
            8
            ·
            4 months ago

            It’s not that commonly used these days (especially online?), I think the phrasing is a bit old school, but it’s a totally legitimate phrase.

            • WldFyre@lemm.ee
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              1
              ·
              4 months ago

              Fair enough, had never heard it before but that makes sense

          • Cryophilia@lemmy.world
            link
            fedilink
            English
            arrow-up
            5
            arrow-down
            2
            ·
            4 months ago

            English your second language? Phrases that seem common to natives may seem off to those who learned English later in life. 'Tis a silly language.

              • DragonTypeWyvern@midwest.social
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                1
                ·
                4 months ago

                No, you’re correct, it’s always a little suspect on usage. The kind of thing you only say when you’re up on a high horse, fairly or not.

  • dumples@midwest.social
    link
    fedilink
    English
    arrow-up
    38
    arrow-down
    1
    ·
    5 months ago

    I’ve been a professional data scientist for 5+ years and I’m okay at my job. Good enough to get 3 different jobs at non-FAANG companies, and I have already seen 3 or so hype trains and name changes of what words we use for the same tools and techniques. This AI hype is going to be another one of these, with a few niche cases.

    Most of my job is insisting on doing something correctly and then being told that doesn’t give the “correct” response based on leadership expectations. I just change what I do until I get the results that people want to see. I’ll just ride this hype wave out for a few years, learning nothing new again. Then I’ll find another job based on my experience and requirements gathering to start the cycle again. Maybe I’ll pick up more data engineering skills, which are actually valuable.

    • funkless_eck@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      20
      ·
      edit-2
      4 months ago

      Similarly, my current job (now ending as they want to end remote work and I don’t want to move to a desert in a very red/religious area)- I guided them out of “block chain for supply chain” (lmao it’s cringe to even say that now) into “AI for productivity automation”

      I give it 3 years max before all mentions of AI are scrubbed from the home page

      • dumples@midwest.social
        link
        fedilink
        English
        arrow-up
        2
        ·
        4 months ago

        Yup. I hope we move back to data mining. I loved to joke that I put on my hardhat and go into the data mine every morning

    • morbidcactus@lemmy.ca
      link
      fedilink
      English
      arrow-up
      12
      ·
      4 months ago

      I’m a data engineer/architect and it’s the same over here. I get asked constantly “how can we stuff AI into this solution?”, never “should we consider using AI here? Is there value?”. My view: people don’t understand their data and don’t want to put in the effort to understand their data, and they think AI will magically pull actionable insights from their dataswamp. Nothing new, that’s been a constant for as long as I can recall.

      Like I totally understand the draw of new and exciting, but there’s so much you can do with traditional analytics, and in my view you really need to have a good foundation before doing anything else.

      • dumples@midwest.social
        link
        fedilink
        English
        arrow-up
        3
        ·
        4 months ago

        Well, all the fancy tools give people confidence in their terrible data. Crap in leads to crap out

  • Spesknight@lemmy.world
    link
    fedilink
    English
    arrow-up
    20
    ·
    4 months ago

    Hey, we can always say: how can you check if an AI is working, it doesn’t come to the office? 🤔

    • widw@ani.social
      link
      fedilink
      English
      arrow-up
      12
      ·
      4 months ago

      This is actually more terrifying than you might have intended.

      I’ve long thought that the greatest danger AI poses is going to be the “man behind the curtain” effect. If people can blame everything on AI then AI can be a blanket covering deliberate harm.

      Imagine if government starts using AI for decision making. You could easily end up with a “man behind the curtain” who’s actually calling all the shots and just pretending it’s the AI doing it. Then you’d effectively have a dictatorship where nobody knows/believes they’re in a dictatorship.

      • Buffalox@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        ·
        4 months ago

        Yep AI will definitely be used as stamp of approval for bad decisions. Just give it input and questions in different ways until you get the answer you want, and you can say, hey the fancy AI advised it!

    • Doomsider@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      4 months ago

      That is some good stuff actually. All the haters can focus on non-existent AI and the rest of us can work on improving society while they are distracted. Perfect scapegoat.

  • Shadywack@lemmy.world
    link
    fedilink
    English
    arrow-up
    19
    arrow-down
    1
    ·
    5 months ago

    It’s using satire to convey a known truth: some already understand it implicitly, some don’t want to acknowledge it, and some refuse it outright, but deep down we’ve always known how true it is. It’s tongue-in-cheek, but it’s necessary in order to convince all these AI-washing fuckheads what a gimmick it is to be making sweeping statements about a chatbot that still can’t spell lollipop backwards.

      • Eranziel@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        ·
        4 months ago

        This. Satire would be writing the article in the voice of the most vapid executive saying they need to abandon fundamentals and turn exclusively to AI.

        However, that would be indistinguishable from our current reality, which would make it poor satire.

  • Elias Griffin@lemmy.world
    link
    fedilink
    English
    arrow-up
    13
    arrow-down
    1
    ·
    edit-2
    4 months ago

    This gets a vote from me for “Best of the Internet 2024”, brilliant pacing, super braced, and with precision bluntness. I’m going to pretend the Monero remark is not even there, that’s how good it was.