Hi,

I was using my search engine to look for available Emacs integrations for the open (and local) https://gpt4all.io/ when I realized that I could not find a single one.

Is anybody already using GPT4All with Emacs who just hasn't published their integration?

  • karthink@alien.topB · 1 year ago

    In that case you can use it right now with gptel, which supports an Org interface for chat.

    Enable the server mode in the desktop app, and in Emacs, run

    ;; Point gptel at the GPT4All desktop app's local server (default port 4891)
    (setq-default gptel-model "gpt4all-j-v1.3-groovy"
                  gptel-host "http://localhost:4891/v1"
                  gptel-api-key "--") ; dummy value; the local server needs no real key
    

    Then you can spawn a dedicated chat buffer with M-x gptel or chat from any buffer by selecting a region of text and running M-x gptel-send.
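    Under the hood this talks to the server's OpenAI-compatible API; a minimal Python sketch of the request shape (assuming the standard chat-completions format at the `/v1` path above; the helper name is mine), which can also be handy for checking the server outside Emacs:

```python
import json

def build_request(prompt, model="gpt4all-j-v1.3-groovy",
                  host="http://localhost:4891/v1"):
    """Build the URL and JSON body for an OpenAI-style chat completion
    against the local GPT4All server (assumed endpoint layout)."""
    url = f"{host}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload)

# Example: the request a chat turn would produce
url, body = build_request("Hello from Emacs")
print(url)   # http://localhost:4891/v1/chat/completions
```

    POSTing that body to the URL (e.g. with curl or urllib) should return a completion if the desktop app's server mode is enabled.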

    • ahyatt@alien.topB · 1 year ago

      It isn’t actually the same, though: they don’t support streaming. How are you getting around this?

      • nickanderson5308@alien.topB · 1 year ago

        I have played with this a bit in the last few days.

        It’s nice and minimal, but I’m hitting out-of-memory issues: the GPT4All desktop app loads its default model, and gptel then asks the server to load whatever model is specified, and I don’t have enough memory to hold both at once.

      • karthink@alien.topB · 1 year ago

        In the meantime I’ve added explicit support for GPT4All, so the instructions above may be out of date by the time you get to them. The README should have updated instructions (if it mentions support for local LLMs at all).