• merc@sh.itjust.works · 1 year ago

    80% accuracy, that is trash

    More than 80% of most codebases is boilerplate: including the right files for dependencies, declaring functions with the right number of parameters using the right syntax, handling the basic, easily anticipated errors, etc. Sometimes there’s even more boilerplate than that, like when you’re iterating over a list or waiting for input and handling it.
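
    For example, something like this made-up Python snippet. None of it is the actual point of any program; it’s just imports, a function signature, a loop, and an obvious error check:

    ```python
    # Hypothetical example of "boilerplate": imports, a declaration,
    # iteration, and handling an easily anticipated error.
    import csv
    import sys


    def load_prices(path):
        """Read a two-column CSV of (name, price) rows into a dict."""
        prices = {}
        try:
            with open(path, newline="") as f:
                for name, price in csv.reader(f):    # iterating over a list
                    prices[name] = float(price)
        except (OSError, ValueError) as err:         # anticipated failure modes
            print(f"couldn't read {path}: {err}", file=sys.stderr)
            sys.exit(1)
        return prices


    if __name__ == "__main__":
        print(load_prices(sys.argv[1]))
    ```

    That’s the kind of thing an LLM can churn out all day, and it was never the expensive part of the job.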

    The rest is why programming is a highly paid job. Even a junior developer is going to be much better than an LLM at that part, because at least they understand it’s hard, and they often know when they should ask for help because they’re in over their heads. An LLM will just “confidently” spew out plausible bullshit and declare the job done.

    Because an LLM won’t ask for help, won’t ask for clarifications, and can’t understand that it might have made a mistake, you’re going to need your highly paid programmers to go in and figure out what the LLM did and why it’s wrong.

    Even perfecting self-driving is going to be easier than a truly complex software engineering project. At least with self-driving the constraints are bounded, because you’re dealing with the real world, and the job is always the same: navigate from A to B. In the software world you’re only limited by the limits of math, and math isn’t very limiting.

    I have no doubt that LLMs and generative AI will change the job of being a software engineer / programmer. But fundamentally, programming comes down to actually understanding the problem, and while LLMs can pretend they understand things, they’re really just well-trained parrots that know what sounds to make in specific situations, with no actual understanding behind it.