I'm using Ollama on my server with the WebUI. It has no GPU, so it's not quick to reply, but not too slow either.

I'm thinking about removing the VM as I just don't use it. Are there any good uses or integrations into other apps that might convince me to keep it?
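
For reference, the kind of integration I mean is anything that can talk to Ollama's REST API. A minimal sketch, assuming the default port 11434 and that a model such as llama3 has already been pulled (model name and prompt are just placeholders):

```python
import json
import urllib.request

# Ollama listens on port 11434 by default; point this at your server.
# Assumes "llama3" has already been pulled with `ollama pull llama3`.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "llama3",
    "prompt": "Summarize why local LLM inference is useful.",
    "stream": False,  # return the whole reply as one JSON object
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    print(body["response"])
```

Anything that can POST JSON (scripts, home-automation hooks, editor plugins) could integrate this way, which is what I'd be weighing against just deleting the VM.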

  • kitnaht@lemmy.world · 4 months ago
    Once the model is trained, the electricity that it uses is trivial. LLMs can run on a local GPU. So you’re completely wrong.

      • kitnaht@lemmy.world · edited · 4 months ago

        Those were statements. Statements of fact.

        Once the models are already trained, they use almost no power to run.
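
        A rough back-of-envelope sketch of what "almost no power" means here, assuming a ~300 W consumer GPU and ~10 s per response (both assumptions, not measurements):

        ```python
        # Back-of-envelope inference energy estimate.
        # Assumptions (not measurements): a ~300 W consumer GPU
        # running flat-out for ~10 seconds per response.
        gpu_watts = 300          # assumed power draw under load
        seconds_per_reply = 10   # assumed generation time

        watt_hours = gpu_watts * seconds_per_reply / 3600
        print(f"~{watt_hours:.2f} Wh per reply")  # ~0.83 Wh

        # For scale: a 60 W bulb burns that much in about 50 seconds.
        bulb_seconds = watt_hours * 3600 / 60
        print(f"equivalent to a 60 W bulb for ~{bulb_seconds:.0f} s")
        ```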

        • dwindling7373@feddit.it · 4 months ago
          Notwithstanding that running an LLM is still more expensive than a search-engine query, any reasoning about running an LLM has to include the training cost and, above all, the incentive you give, as a consumer, for further training.

          It's like arguing that cooking a steak has negligible environmental impact. The point is the whole industry built to put that steak in front of you in the first place.
