I tested out a DeepSeek model the other day. It took a full minute to generate a response and used up my entire context window in a single message. Local consumer models and "small" server-hosted models are probably different classes entirely, judging by how big a performance downgrade it was on my home PC.