- cross-posted to:
- [email protected]
I felt it was quite good. I only mildly fell in love with Maya and couldn’t just close the conversation without saying goodbye first
So I’d say we’re just that little bit closer to having our own Jois in our lives 😅
I do not look forward to how often the Chinese Room is about to come up.
I tested it a little, and while it got janky with the conversational back-and-forth processing, the voices are an improvement. Maya has a nice sound for sure. I would love to be able to run this locally, but I wonder how much compute is required for a really good voice model.
They haven’t released the models yet, but they seem to suggest they will, under an Apache license. The voice models come in 1B/3B/8B sizes, so that sounds relatively reasonable for consumer hardware.
Yeah, the parameter count roughly translates to GB of VRAM, but with quantization and stuff it gets more complicated.
Still sounds pretty feasible at these sizes, though ^^
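For a rough feel, here’s a back-of-the-envelope sketch. This counts weights only, so it’s a lower bound (KV cache, activations, and runtime overhead come on top), and the bytes-per-parameter figures for the quantized formats are simplified:

```python
# Weights-only VRAM estimate: parameter count x bytes per parameter.
# Real usage is higher (KV cache, activations, runtime overhead), so
# treat these as rough lower bounds.
SIZES = {"1B": 1e9, "3B": 3e9, "8B": 8e9}
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

for name, params in SIZES.items():
    row = ", ".join(
        f"{fmt}: {params * b / 2**30:.1f} GiB" for fmt, b in BYTES_PER_PARAM.items()
    )
    print(f"{name} -> {row}")
# e.g. 8B -> fp16: 14.9 GiB, int8: 7.5 GiB, int4: 3.7 GiB
```

So even the 8B model should fit on a midrange GPU once quantized, by this rough math.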
I think this is the natural next evolution of LLMs. We’re starting to max out on how well a string of text can represent human knowledge, so now we need to tokenize more things and make multimodal LLMs. After all, humans are far more than just speech machines.
Approximating human emotion and speech cadence is very interesting.
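As a toy illustration of what “tokenize more things” can mean: quantize audio frames against a codebook so they become discrete ids that can sit in the same sequence as text tokens. Everything here (codebook size, frame dimension, vocab offset) is made up for illustration; real systems use learned codecs like residual vector quantization:

```python
import numpy as np

rng = np.random.default_rng(0)
# A made-up codebook: 1024 "audio tokens", each a 64-dim vector.
codebook = rng.normal(size=(1024, 64))

def audio_to_tokens(frames: np.ndarray) -> list[int]:
    """Map each 64-dim audio frame to its nearest codebook entry's id."""
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1).tolist()

text_tokens = [17, 301, 52]                  # pretend-tokenized text
audio_tokens = audio_to_tokens(rng.normal(size=(5, 64)))

# One interleaved sequence; offset the audio ids past the (made-up) text
# vocab size so the two token spaces don't collide.
TEXT_VOCAB_SIZE = 50_000
sequence = text_tokens + [TEXT_VOCAB_SIZE + t for t in audio_tokens]
print(sequence)
```

Once speech is just another stream of tokens like this, the same next-token machinery can model emotion and cadence too, which is presumably why these voices sound so much more natural.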
Hmmh, always the same thing. On release, it’s just an announcement with a promise to open “key components” sometime. I’ll add this to the list of bookmarks to revisit at a later date. I wish they’d just get it ready and only then publish things.