Seems like a really cool project — lowering the barrier to entry for locally run models. Since llama.cpp supports a ton of models, I imagine it would be easy to adapt this to models other than the prebuilt ones.
When I upload an image, it seems to get re-encoded every time I send a message, which takes a while. I've never really messed around with images in llama.cpp — is re-encoding on every message normal/necessary?