• Uniquitous@lemmy.one · 78 points · 9 months ago

    AI is actually the king of bullshit. It might give you some code, but will it compile? Probably about as well as a knee in an AI-generated painting would actually bear your weight.

    • RandoCalrandian@kbin.social · 11 points · 9 months ago

      And the author clearly has no idea what she’s talking about, or about the impact of AI on CS.

      I use ChatGPT regularly to build outlines and boilerplate for code I want to write. Yes, it’s code I can’t trust and almost entirely rewrite, but even the act of it naming the variables saves me time.

      And even if it did get to the point of producing good code, you’d have to be a developer to know that what it created is what you wanted in the first place.

      This is fearmongering targeting tech people by claiming their jobs are at the same level of risk of disruption by AI as other white-collar jobs (like hers!).

  • Naatan@lemmy.one · 53 points · 9 months ago

    We are nowhere near AI writing our software unattended. Not even close. People really overestimate the state of AI.

    • ConsciousCode@beehaw.org · 10 points · 9 months ago

      I’m an AI nerd, and yes, we’re nowhere close. AI can write code snippets pretty well, and that’ll get better with time, but a huge part of software development is translating client demands into something sane and actionable. If the CEO of a one-man billion-dollar company asks his super-AI to “build the next Twitter”, that leaves so many questions on the table that the result will be completely unpredictable. Humans have preferences and experiences which can inform and fill in those implicit questions. LLMs are generally much better suited as tools and copilots than as autonomous entities.

      Now, there was a paper that instantiated a couple dozen LLMs and had them run a virtual software dev company together which got pretty good results, but I wouldn’t trust that without a lot more research. I’ve found that individual LLMs given a task tend to get tunnel vision, so they could easily get stuck in a loop, trying the same wrong code or design repeatedly.

      (I think this was the paper, reminiscent of the generative agent simulacra paper, but I also found this)

      • realharo@lemm.ee · 3 points · 9 months ago (edited)

        Now, there was a paper that instantiated a couple dozen LLMs and had them run a virtual software dev company together which got pretty good results

        Dude, you need to take a closer look at that paper you linked if you consider that “pretty good results”. They have a GitHub repo with screenshots of some of the “products”, which should give you some idea: https://github.com/OpenBMB/ChatDev/tree/main/misc

        Not to mention the terrible decision-making of the fake company (a desktop app you have to download? no web/mobile version? for a virtual board game?)

        (Also, the paper never even tried to prove its main hypothesis: that all this multi-agent song and dance would somehow reduce hallucinations and improve performance. There is a lot of good AI stuff coming out daily, but that particular paper - and the articles reporting on it - was pure garbage.)

    • realharo@lemm.ee · 6 points · 9 months ago (edited)

      True, as of today. On the other hand, future advancements could very easily change that. On the other other hand, people were saying the same about self-driving cars 10 years ago, and while those do basically work, and are coming eventually, progress there has been a lot slower than predicted.

      So who knows. Could go either way.

      • Naatan@lemmy.one · 1 point · 9 months ago

        It’s almost a philosophical question whether it can replace us, though. For it to be anything more than a tool, it needs real intelligence, compassion, etc. Basically, it would need a consciousness.

        I’m certain it’ll replace some jobs without that, simply because, as a tool, it’ll make us more efficient, and that efficiency will eliminate jobs. But I’m not seeing it replace or assimilate entire industries at this stage.

    • violetsareblue@beehaw.org · 3 points · 9 months ago

      Yeah… anyone who has asked ChatGPT to help them fix a piece of code or write one knows it requires a lot of human editing and good prompting. And a lot of the time, what I was trying to accomplish still wouldn’t work.

    • abhibeckert@beehaw.org · 2 points · 9 months ago (edited)

      True. But if AI makes people more productive it could make it really hard to find work. Especially if you’re straight out of college with zero experience.

      • sanzky@beehaw.org · 1 point · 9 months ago

        Finding a job as a junior is already a bit harder than it was, because so many developers are working remote, which is way harder to do when you’re a junior developer.

  • AaronMaria@lemmy.ml · 42 points · 9 months ago

    You can’t write this kind of thing if you understand what a programmer does. The biggest part of the job is finding a good way to break a problem down into executable steps, not the actual writing of the code.

  • HairHeel@programming.dev · 38 points · 9 months ago

    software developers with access to GitHub’s Copilot chatbot were able to finish a coding task 56 percent faster than those who did it solo

    Are these competent developers, or the kind who already take 4 or 5 times longer to do a task than their peers?

    • HobbitFoot @thelemmy.club · 23 points · 9 months ago

      Part of the problem with telling people “learn to code” is that a lot of them are bad at it. There may be some diamonds in the rough, but there is a lot of rough out there.

      • upstream@beehaw.org · 15 points · 9 months ago

        The author also seems to think that the starting salary for developers at Google is representative. The average computer science graduate does not get a job at Google.

        People who learn to code because it means job security are not the ones we look to hire. We look for people who are passionate about it, whose interest in the subject is deeper than skin deep.

        We’re not looking for people who live and breathe code, but you do need to like solving problems and learning new things.

        • anlumo@feddit.de · 4 points · 9 months ago

          We look for people who are passionate about it, whose interest in the subject is deeper than skin deep.

          Doesn’t it hurt those people a lot more when their project nearly inevitably gets shut down?

          I’m still bitter about the project I worked on that got killed at my company three years ago.

          • upstream@beehaw.org · 2 points · 9 months ago

            Where I work we haven’t really shut down any projects in the last six years.

            We’ve had some smaller projects get parked due to shifting priorities, but other than that we’ve shipped everything.

            But inevitably, over a career in software there will be projects that don’t make it to production for one reason or another.

            Personally I’m very pragmatic about it, but I know people who get very attached to the code they write.

            I’m the kind of guy that is passionate about what I’m doing when I’m doing it, not necessarily for all eternity. I’ve written stuff that I’d be more than happy for someone to come and replace, but the thing about revenue-generating systems (most people say “legacy”, but I prefer this term) is that they aren’t always easy to replace.

            I know we’re not all wired that way, and some people find it harder to see an older system get retired. A consultant I use is more attached to my code than I am, for instance.

    • thesmokingman@programming.dev · 20 points · 9 months ago

      I am a very competent developer. Copilot makes me a lot faster with net new code and tests because a lot of that stuff is very close to boilerplate so Copilot can build 95% of it for me. Declarative stuff like HCL is so much faster. Copilot doesn’t necessarily speed me up for things like bug fixes because a lot of that is code reading. Refactoring? Hell yeah. Way faster.

      Here’s the study. If you look at the actual prompt (near the end), it’s exactly the kind of thing Copilot kicks ass at: something that’s super fucking common all over GitHub (a toy JavaScript server). I really don’t think my job is in jeopardy yet.

    • abhibeckert@beehaw.org · 2 points · 9 months ago (edited)

      Are these competent developers

      Here’s an example - a few minutes ago I wrote this line of code:

      // date
      

      … and a split second later Copilot auto-completed exactly the seven lines of code that I would have typed. I read the code, tested it, and moved on to the next block of code.

      Yes, I could have written those seven lines. They were pretty basic: read a value from the database, transform it into a human-readable string, and send that to the user. Copilot types a lot faster than me (words per second instead of per minute) and it makes fewer typos.
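
      For a sense of scale, here’s a minimal sketch of the kind of completion I’m describing. This is not the actual code; the route, table, and column names are all hypothetical, and it assumes a Node/Express-style server with a Postgres client:

          import express from "express";
          import { Pool } from "pg";

          const app = express();
          const db = new Pool(); // connection settings come from the PG* environment variables

          // date
          app.get("/api/users/:id/created", async (req, res) => {
            // Read the value from the database (sketch omits error handling)
            const result = await db.query("SELECT created_at FROM users WHERE id = $1", [req.params.id]);
            // Transform it into the human-readable format used elsewhere in the UI
            const formatted = new Date(result.rows[0].created_at).toLocaleDateString("en-US", {
              year: "numeric", month: "long", day: "numeric",
            });
            // Send it to the user
            res.send(formatted);
          });

          app.listen(3000);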

      It’s also more familiar than I am with all the major libraries. I find I’m spending a lot less time reading documentation or searching the web these days.

      But the real kicker… I work on a small team. My project is full of code that I didn’t write, and it isn’t as well documented as I’d like it to be. It’s also not publicly documented, so I can’t use Google or Stack Overflow to find answers. Copilot has indexed the project, and it knows how to read the date from the database. It also knows what human-readable date string format has been used elsewhere in the user interface.

    • astronaut_sloth@mander.xyz · 12 points · 9 months ago

      This is a much better article. OP’s article just shows the author’s surface-level understanding of how coding works and of how well an LLM can actually code. There’s way more that goes into a programming task than just coding.

      I see LLMs as having the potential of being almost like a super library. I can prompt GPT, Claude, etc. to write me a custom function that I copy, paste, test, scrutinize, and almost certainly change. It’s a tool that will make someone a more productive programmer. It won’t completely subsume a human’s ability to be creative and put the pieces together.
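
      As a rough illustration of that workflow (this function and its fix are hypothetical, not output from any particular model): you might prompt for “a TypeScript function that splits an array into chunks of size n”, get something close, and still change it after testing:

          // Hypothetical LLM output; after testing I added the guard for
          // non-positive sizes, which the generated version omitted.
          function chunk<T>(items: T[], size: number): T[][] {
            if (size <= 0) throw new RangeError("chunk size must be positive");
            const out: T[][] = [];
            for (let i = 0; i < items.length; i += size) {
              out.push(items.slice(i, i + size));
            }
            return out;
          }

          console.log(chunk([1, 2, 3, 4, 5], 2)); // [[1, 2], [3, 4], [5]]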

      At the absolute worst over the next decade, I could see programming changing from writing and debugging code to prompting, stitching together, and debugging.

      • SenorBolsa@beehaw.org · 3 points · 9 months ago

        It’s the same with CAM software in CNC. Sure, if you set it up right (which is a skill in and of itself) it can spit out a decent toolpath, but there’s tons of magic to be done by hand, and understanding how the underlying G-code works allows you to make small changes on the fly.

  • dark_stang@beehaw.org · 35 points · 9 months ago

    Stakeholders struggle to give accurate requirements most of the time; they’re not gonna be programming with ChatGPT anytime soon. AI can really improve a good developer’s output, though.

    • ryannathans · 23 points · 9 months ago

      Haven’t found a use case for it yet where it doesn’t shit out gift-wrapped garbage.

      • dark_stang@beehaw.org · 15 points · 9 months ago

        You have to give it very specific instructions and small, targeted things to do. I’ve used it to write a lot of Terraform; I hate writing IaC.
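
        For instance (a hedged sketch; the resource names are hypothetical, not from my actual config), a prompt as narrow as “Terraform for an AWS S3 bucket with versioning enabled” is about the right scope:

            # Hypothetical output for a small, targeted prompt: easy to verify at a glance.
            terraform {
              required_providers {
                aws = { source = "hashicorp/aws" }
              }
            }

            resource "aws_s3_bucket" "artifacts" {
              bucket = "example-artifacts-bucket"
            }

            # Versioning is its own resource in AWS provider v4+
            resource "aws_s3_bucket_versioning" "artifacts" {
              bucket = aws_s3_bucket.artifacts.id
              versioning_configuration {
                status = "Enabled"
              }
            }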

        • anlumo@feddit.de · 1 point · 9 months ago

          Code you and your company don’t own, of course, as automatically generated content isn’t copyrightable

          If you combine enough of that code in a creative way, the work will be copyrightable. Unlike the GPL, public domain isn’t viral.

            • abhibeckert@beehaw.org · 1 point · 9 months ago (edited)

              First of all, this isn’t a settled issue. Some people would argue Zarya of the Dawn is owned by everyone who created a copyrighted work that was used to train Midjourney. I hope these people are wrong, but it’s a legal grey area right now.

              The copyright office is not an authoritative source on legal issues. For that you need to find a criminal copyright infringement court case where someone with good lawyers enters a not guilty plea and the case goes all the way through to a final verdict.

              Second - if your code is so simple that you can just ask an LLM to write the entire thing for you… then who cares if it’s copyrighted? Anyone else in the world can just ask the LLM to write it separately for them. Why would they risk a lawsuit by copying your work? They’ll get a better end product by using the latest version of the LLM anyway.

                • abhibeckert@beehaw.org · 2 points · 9 months ago (edited)

                  Even manually composed code (like APIs) can be free of copyright, as Google v Oracle turned out to.

                  We never got a final verdict on that question. The Supreme Court decided the case on fair-use grounds and explicitly declined to rule on whether the API was copyrightable.

                  It went backwards and forwards on appeals, etc., with some judges ruling in Google’s favor and some ruling in Oracle’s favor.

                  I listened to a lot of podcasts by IP lawyers throughout the court case and they were often quite confused by a lot of the rulings that were made, which, I guess, is why both corporations had so much success appealing previous rulings. Ultimately we just don’t know.

                  But yes - in general it is a fact that source code often isn’t protected by copyright. Patents should be the “right” tool for protecting source code. Unfortunately patents are even more of a mess than copyright. I’m not a lawyer but I’m 90% sure the answer to “can you patent something created by an AI” is “yes, as long as nobody else has patented it first”.

                  I don’t have access to GPT 4.5

                  I expect it will basically be the same as GPT-4: pretty much useless for writing code. It can only output a few hundred lines at the most, and you can’t give it enough input as context to ask it to incrementally write an entire project a few hundred lines at a time.

                  It’s great at “how do I do X?” but pretty close to worthless at “write real code I’m going to use to do X”. Anything more complex than a 50-line shell script and GPT-4 falls over.

                  Copilot is what you want for real code in large projects; it does the work to summarise your context (other code you’ve already written) into just the things that are likely to be relevant. However, it can’t write a few hundred lines of code. It will often only output half a line of code, and you need to write the rest. Sometimes it might give you a dozen lines, but only if your code is very predictable and repetitive.


                  Think of GPT-4.5 as Stack Overflow that can answer almost any question you ask, in a second or two, and without deleting it as a duplicate of someone else who asked a completely different question.

                  Think of Copilot as really good auto-complete.

                  Neither one is replacing a human programmer. But both are very useful tools for certain tasks.

            • anlumo@feddit.de · 1 point · 9 months ago

              https://www.law.cornell.edu/wex/compilation

              Under the Copyright Act, a compilation is a “work formed by the collection and assembling of preexisting materials or of data that are selected, coordinated, or arranged in such a way that the resulting work as a whole constitutes an original work of authorship. The term compilation includes collective works” 17 U.S.C. 101. This gives the compilation a separate copyright from any of the individual pieces within it. An author who creates a compilation owns the copyright of the compilation but not of the component parts.

  • AbstractifyBot@beehaw.org [bot] · 8 points · 9 months ago

    TL;DR for the linked article

    The article discusses how the rise of AI may impact computer science careers going forward. While coding jobs have long been seen as stable career paths, chatbots can now generate code in various languages. Developers are using AI tools like Copilot to accelerate routine coding tasks. Within a decade, coding bots may be able to do much more than basic tasks. However, programmers will still be needed to guide AI toward productive solutions. Teaching coding is also becoming more challenging, as students could use chatbots to cheat. Conceptual problem-solving skills will remain important for programmers to apply their expertise where AI falls short. The future may belong to those who can think entrepreneurially about how technology solves problems.

    In the end, what students study may matter less than their ability to apply knowledge to technology challenges.


    This comment was generated by a bot. Send comments and complaints via private message.

    • Otter@lemmy.ca · 12 points · 9 months ago

      However, programmers will still be needed to guide AI toward productive solutions

      So the job would still be safe; they’d just be doing different work from what they do now. Same as how other advances in the tech stack mean we do things differently now than we did 30 years ago.

      People are very adaptable

      • Em Adespoton@lemmy.ca · 4 points · 9 months ago

        Indeed. Do people still use Emacs to code, for example?

        Technologies evolve. People coding today in COBOL or Fortran are few and far between (but very well compensated).

      • anlumo@feddit.de · 1 point · 9 months ago

        Yes, that’s the key. I haven’t written assembly code since the 1990s; I use higher-level abstractions to get to the goal more quickly now. AI-generated code is just yet another layer of abstraction away from machine language.

  • mPony@kbin.social · 5 points · 9 months ago

    I just want to mention the clever graphic design of the illustration by Ben Kothe.

  • HisNoodlyServant@beehaw.org · 3 points · 9 months ago

    There is still going to be a need for programmers, and I don’t think “AI” is going to be on its own anytime soon. Even using it as an assistant, you need to know what you want and have some understanding of the code. It feels like most of what we see of “AI” right now is just propaganda so investors will throw money at these companies.