I’ve started noticing articles and YouTube videos touting the benefits of branchless programming, making it sound like this is a hot new technique (or maybe a hot old technique) that everyone should be using. But it seems like it’s only really applicable to data processing applications (as opposed to general programming) and there are very few times in my career where I’ve needed to use, much less optimize, data processing code. And when I do, I use someone else’s library.

How often does branchless programming actually matter in the day to day life of an average developer?

  • marcos@lemmy.world

    If you want your code to run on the GPU, the viability of your code depends entirely on it. But if you just want to run it on the CPU, it is only one of the many micro-optimization techniques you can use to shave a few nanoseconds off an inner loop.
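
    To make the inner-loop case concrete, here is a minimal C sketch (my own illustration, nothing project-specific assumed): counting matches with an if takes a branch per element, while summing the comparison result does not, which also makes the loop easier for the compiler to auto-vectorize.

    ```c
    #include <stddef.h>
    #include <stdint.h>

    /* Branching version: one conditional branch per element. */
    size_t count_negative_branchy(const int32_t *v, size_t n) {
        size_t count = 0;
        for (size_t i = 0; i < n; i++) {
            if (v[i] < 0)
                count++;
        }
        return count;
    }

    /* Branchless version: the comparison yields 0 or 1, so we just add it.
     * No branch in the loop body, and compilers can usually auto-vectorize
     * this form. */
    size_t count_negative_branchless(const int32_t *v, size_t n) {
        size_t count = 0;
        for (size_t i = 0; i < n; i++)
            count += (size_t)(v[i] < 0);
        return count;
    }
    ```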

    The thing to keep in mind is that there is no such thing as an “average developer”. Computing is way too diverse for it.

    • LaggyKar@programming.dev

      And the branchless version may end up being slower on the CPU, because the compiler does a better job optimizing the branching version.
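
      For instance (a hand-rolled sketch, assuming an x86-style target with conditional moves): a “clever” branchless max isn’t automatically faster than the obvious version, because at -O2 the compiler will often turn the obvious version into a branch-free cmov anyway.

      ```c
      #include <stdint.h>

      /* Obvious version: compilers routinely emit a branch-free conditional
       * move (cmov) for this on x86. */
      int32_t max_branchy(int32_t a, int32_t b) {
          return (a > b) ? a : b;
      }

      /* Hand-written "branchless" version using a mask. It is not
       * automatically faster, and can be harder for the optimizer to see
       * through. */
      int32_t max_branchless(int32_t a, int32_t b) {
          int32_t mask = -(int32_t)(a < b);   /* all ones if a < b, else 0 */
          return a ^ ((a ^ b) & mask);        /* picks b when a < b, else a */
      }
      ```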

    • Ethan@programming.devOP

      If you want your code to run on the GPU, the viability of your code depends entirely on it.

      Because of the performance improvements from vectorization, and the fact that GPUs are particularly well suited to that? Or are GPUs particularly bad at branches?

      it is only one of the many micro-optimization techniques you can use to shave a few nanoseconds off an inner loop.

      How often do a few nanoseconds in the inner loop matter?

      The thing to keep in mind is that there is no such thing as an “average developer”. Computing is way too diverse for it.

      Looking at all the software out there, the vast majority of it is games, apps, and websites. Applications where performance is critical, such as control systems, operating systems, databases, numerical analysis, etc., are relatively rare by comparison. So statistically speaking, the majority of developers must be working on games, apps, and websites (which is what I mean by an “average developer”). In my experience working on apps, there are exceedingly few times where micro-optimizations matter (things like assembly and/or branchless programming, as opposed to macro-optimizations such as avoiding unnecessary looping/nesting/etc).

      Edit: I can imagine it might matter a lot more for games, such as in shaders or physics calculations. I’ve never worked on a game so my knowledge of that kind of work is rather lacking.

      • LaggyKar@programming.dev

        Or are GPUs particularly bad at branches?

        Yes. GPUs don’t have per-core branching; they have groups of dozens of cores running the same instructions in lockstep. So if some cores should run the if branch and some run the else branch, all cores in the group will execute both branches and mask out the one they shouldn’t have run. I also think they don’t have the advanced branch prediction CPUs have.

        https://en.wikipedia.org/wiki/Single_instruction,_multiple_threads
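
        As a rough illustration of that masking (a CPU-side toy, not actual GPU code), imagine an 8-lane group where every lane evaluates the condition, then both branch bodies run for the whole group and a per-lane mask decides which result each lane keeps:

        ```c
        #include <stdio.h>

        #define LANES 8  /* pretend these are 8 threads in the same SIMT group */

        int main(void) {
            int x[LANES] = {3, -1, 4, -1, 5, -9, 2, -6};
            int out[LANES];
            int mask[LANES];

            /* Every lane evaluates the condition. */
            for (int i = 0; i < LANES; i++)
                mask[i] = (x[i] >= 0);

            /* The "if" side executes for the whole group; only masked-in
             * lanes keep its result. */
            for (int i = 0; i < LANES; i++)
                if (mask[i]) out[i] = x[i] * 2;

            /* The "else" side also executes for the whole group; masked-out
             * lanes keep this result instead. */
            for (int i = 0; i < LANES; i++)
                if (!mask[i]) out[i] = 0;

            /* Both sides ran; each lane kept exactly one result. */
            for (int i = 0; i < LANES; i++)
                printf("%d ", out[i]);
            printf("\n");
            return 0;
        }
        ```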

        • Ethan@programming.devOP

          Makes sense. The most programming I’ve ever done for a GPU was a few simple shaders for a toy project.

      • ishanpage@programming.dev

        How often do a few nanoseconds in the inner loop matter?

        It doesn’t matter until you need it. And when you need it, it’s the difference between life and death.

      • graphicsguy@programming.dev

        Also if you branch on a GPU, the compiler has to reserve enough registers to walk through both branches (handwavey), which means lower occupancy.

        Often you have no choice, or removing the branch leaves you with just as much code, so it’s irrelevant. But sometimes it matters. If you know that a particular draw call will always use one side of the branch but not the other, a typical optimization is to compile a separate version of the shader that removes the unused branch and saves on registers.
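
        A rough CPU-side analogy (hypothetical names, not real shader code): specialize the function on a compile-time flag so the unused branch, and the extra work it needs, disappear from that variant entirely, much like compiling a separate shader permutation per draw call.

        ```c
        /* Shared "uber" function: branches on a flag at runtime. */
        static inline float shade(float base, float fog, int use_fog) {
            if (use_fog)
                return base * fog;   /* the path that needs the extra input */
            return base;
        }

        /* Two specializations. With the flag known at compile time, the dead
         * branch is removed entirely, analogous to building a separate shader
         * variant for draws that never use fog. */
        float shade_with_fog(float base, float fog) { return shade(base, fog, 1); }
        float shade_no_fog(float base)              { return shade(base, 0.0f, 0); }
        ```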

      • 0x0@programming.dev

        How often do a few nanoseconds in the inner loop matter?

        Fintech. Stock exchanges will go to extreme lengths to appease their wolves of Wall Street.

    • 18107

      Yes, GPUs are bad at branching. But my ray tracer, which is made of 90% branches, still runs faster on the GPU than on the CPU.

      In general you are still correct.