I felt it was quite good. I only mildly fell in love with Maya and couldn't just close the conversation without saying goodbye first.
So I'd say we're just that little bit closer to having our own Jois in our lives 😅
I tested it a little, and while the conversational back-and-forth processing got janky, the voices are an improvement. Maya has a nice sound for sure. I would love to be able to run this locally, but I wonder how much compute is required for a really good voice model.
They haven't released the models yet, but they seem to suggest they will, under an Apache license. The voice models come in 1B/3B/8B sizes, so that sounds relatively reasonable for consumer hardware.
Yeah, parameter count roughly translates to GB of VRAM, but with quantization and stuff it gets more complicated.
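Very rough back-of-the-envelope sketch (my own assumptions: fp16 ≈ 2 bytes per parameter, plus ~20% overhead for activations/cache; real usage depends on the runtime and context length):

```python
# Ballpark VRAM estimate: parameters * bytes-per-parameter, plus ~20% overhead.
# These are rough figures for illustration, not measured numbers for these models.
def vram_gb(params_billion: float, bits_per_param: int, overhead: float = 1.2) -> float:
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total * overhead / 1e9

for size in (1, 3, 8):
    for bits in (16, 8, 4):
        print(f"{size}B @ {bits}-bit: ~{vram_gb(size, bits):.1f} GB")
```

By that estimate even the 8B model at 4-bit lands around 5 GB, which is why quantization makes the picture more forgiving than the raw fp16 numbers suggest.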
Sounds pretty feasible at these sizes tho ^^