• GregorGizeh@lemmy.zip
    6 months ago

    While I dislike corporate AI as much as the next guy, I am quite interested in open-source, local models. If I can run it on my machine, with the absolute certainty that it is my LLM, working for my benefit, that's pretty cool. And not feeding every minuscule detail about me to a corporation.

    • anarchrist@lemmy.dbzer0.com
      6 months ago

      I mean, that's the thing. They're kind of black boxes, so it can be hard to tell what they're doing, but yeah, local hardware is the absolute minimum. I guess places like huggingface are at least working to apply some sort of standard measures to the LLM space, at least through testing…

      • grue@lemmy.world
        6 months ago

        I mean, as long as you can tell it’s not opening up any network connections (e.g. by not giving the process network permission), it’s fine.

        'Course, being built into a web browser might not make that easy…

        • GregorGizeh@lemmy.zip
          6 months ago

          Sums up my thoughts nicely. I am by no means able to make sense of the inner workings of an LLM anyway, even if I can look at its code. At best I would be able to learn how to tweak its results to my needs, or maybe provide it with additional datasets over time.

          I simply trust that an open-source model that is able to run offline, and doesn't call home with telemetry, has been vetted for trustworthiness by far more qualified people than me.