I'm using Ollama on my server with the WebUI. It has no GPU, so it's not quick to reply, but not too slow either.

I'm thinking about removing the VM as I just don't use it. Are there any good uses or integrations with other apps that might convince me to keep it?
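
For context, "integrations" here generally means other apps calling Ollama's local REST API. A minimal sketch of what that looks like, assuming Ollama's default port (11434) and a model that's already pulled ("llama3" is just a placeholder for whatever model you run):

```python
# Minimal sketch: query a local Ollama instance over its REST API.
# Assumes the default port (11434) and an already-pulled model;
# "llama3" is a placeholder for whatever model you actually run.
import json
import urllib.request

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_ollama("Give me three homelab uses for a local LLM."))
```

Anything that can make an HTTP POST (scripts, editors, home-automation hooks) can hook in the same way.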

  • slazer2au@lemmy.world · 2 months ago

    Wanting answers to things you don’t want Google to know that you don’t know.

      • umami_wasabi@lemmy.ml · 2 months ago

        IMO LLMs are OK for getting a head start on searching. Say you have a vague idea of something but don’t know the exact keywords: an LLM can help, and you can use its output in whatever search engine you like. This saves a lot of time fiddling to find the right keywords.
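
        A rough sketch of that keyword-bootstrapping workflow against a local Ollama instance (default port; the model name and the example "vague idea" are placeholders):

        ```python
        # Sketch: ask a local LLM for search keywords for a vague idea,
        # then paste its output into a search engine. Uses Ollama's
        # /api/chat endpoint on the default port; "llama3" is a placeholder.
        import json
        import urllib.request

        vague_idea = "that Linux sandboxing thing with per-app permissions"
        payload = json.dumps({
            "model": "llama3",
            "messages": [{
                "role": "user",
                "content": "Give me five concise search queries for: " + vague_idea,
            }],
            "stream": False,  # return one JSON object, not a token stream
        }).encode("utf-8")

        req = urllib.request.Request(
            "http://localhost:11434/api/chat",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.loads(resp.read())["message"]["content"])
        ```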

        • dwindling7373@feddit.it · 2 months ago

          Sure, or you could send an email to the leading international institution on the matter to get a very accurate answer!

          Is it the most reasonable course of action? No. Is it more reasonable than wasting a gazillion watts so you can maybe get some better keywords to then paste into a search engine? Yes.

          • kitnaht@lemmy.world · 2 months ago

            Once the model is trained, the electricity that it uses is trivial. LLMs can run on a local GPU. So you’re completely wrong.

              • kitnaht@lemmy.world · 2 months ago (edited)

                Those were statements. Statements of fact.

                Once the models are trained, it takes almost no power to use them.

                • dwindling7373@feddit.it · 2 months ago

                  Notwithstanding that running an LLM is still more expensive than a search engine query, any reasoning about running an LLM must include the training and, above all, the incentive you give as a consumer for further training.

                  It’s like arguing that cooking a steak has a negligible environmental impact. The point is the whole industry that exists to provide you the steak in the first place.
