Notice how none of these replies are “AI assistant”?
“AI assistant” just seems like a euphemism for “increased tracking”.
Imagine a standardized API where you provide either your own LLM running locally, your own LLM running in your server (for enthusiasts or companies), or a 3rd party LLM service over the Internet, for your optional AI assistant that you can easily disable.
Regardless of your DE, you could choose if you want an AI assistant and where you want the model to run.
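To make the idea concrete, here's a minimal sketch of what that backend-selection layer could look like. All the names and the config shape here are hypothetical, not an existing API; the one real-world detail is that local runners like llama.cpp and KoboldCpp already expose OpenAI-compatible HTTP endpoints, so the "unified API" could plausibly just be that wire format:

```python
# Hypothetical sketch: a DE-agnostic assistant resolves its backend
# from user preferences. The DE ships one client; the user decides
# where the model runs, or disables the assistant entirely.

from dataclasses import dataclass


@dataclass
class AssistantConfig:
    enabled: bool = False      # assistant is opt-in and easy to disable
    backend: str = "local"     # "local" | "self-hosted" | "cloud"
    self_hosted_url: str = ""  # e.g. an LLM on your home server
    cloud_url: str = ""        # a third-party provider over the Internet


def resolve_endpoint(cfg: AssistantConfig):
    """Return the base URL the assistant client should talk to,
    or None if the assistant is disabled."""
    if not cfg.enabled:
        return None
    if cfg.backend == "local":
        # Local runners (llama.cpp, KoboldCpp, etc.) typically serve
        # an OpenAI-compatible API on localhost; port is illustrative.
        return "http://127.0.0.1:5001/v1"
    if cfg.backend == "self-hosted":
        return cfg.self_hosted_url
    if cfg.backend == "cloud":
        return cfg.cloud_url
    raise ValueError(f"unknown backend: {cfg.backend}")
```

The point is that the same client code runs unchanged whether the model is on your laptop, your server, or someone else's cloud, and "no assistant" is just another valid configuration.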
I’ve had this idea for a long time now, but I don’t know shit about LLMs. GPT can be run locally though, so I guess only the API part is needed.
I’ve run LLMs locally before, it’s the unified API for digital assistants that would be interesting to me. Then we’d just need an easy way to acquire LLMs that laymen could use, but probably any bigger DE or distro can create a setup wizard.
Check out the KoboldAI and KoboldAssistant projects. That's literally the thing you are describing, and it's open source.
deleted by creator
Yeah. I’m really annoyed by this trend of having programs that could function offline require connecting to a server.
deleted by creator
Not just hypothetically but practically too. A FOSS program called KoboldAI lets you run LLMs locally on your computer, and a project that takes advantage of this is the KoboldAssistant project. You can essentially make your own Alexa, Cortana, or Siri that doesn't collect your data and belongs to you.
Open source locally run LLM that runs on GPU or dedicated PCIe open hardware that doesn’t touch the cloud…
To be fair - people don’t know what they want until they get it. In 2005 people would’ve asked for faster flip phones, not smartphones.
I don’t have much faith in current gen AI assistants actually being useful though, but the fact that no one has asked for it doesn’t necessarily mean much.
To be fair, in 2005 a lot of people dreamed of "mini portable computers that could fit in their hands". They just didn't associate it with the form smartphones eventually took, and when smartphones came to be, people were amazed by them. I don't see the same level of reception when it comes to AI assistants.
I don’t think speed was a complaint anyone had about phones right before smartphones launched.
People were mostly concerned with cell phone plans. Talking used to be charged by the minute, texting was charged per text, and data was practically non-existent.
Cell phones have come a long way, but I think a lot of people take for granted just how much cell service has improved. I pay $25/month for a single line that gives me unlimited talk, text, and data (Visible). Couldn’t be happier.
cries in Canadian
What if it’s a friendly purple gorilla
Would be a cool feature if it could be leveraged in a secure, private, efficient way that was more useful than 99% of the algorithmic monkey typewriter garbage that’s on the market these days. I don’t need a glorified Cleverbot rifling through my unspeakables.
Local LLMs are getting better at a very rapid pace. They're still a bit too resource-hungry to have running in the background all the time, but Mistral-7B, for example, is quite competent for its size.
deleted by creator