DeepSeek is welcome in Europe like anyone else, as long as it complies with the EU's GDPR and the rest of the law. A quick reminder that DeepSeek is so far being investigated in Italy (where it has been banned), in France, and in Ireland. We'll see whether other countries follow.
It’s not the DeepSeek service that’s providing a huge opportunity, it’s the model. That can be run locally without any sort of privacy concerns.
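For what it's worth, "run locally" really is that simple these days. A minimal sketch using Hugging Face transformers (the checkpoint name is just one of the published DeepSeek distills; substitute whatever your hardware can fit):

```python
# Minimal sketch: run a DeepSeek model fully offline with Hugging Face transformers.
# The checkpoint below is illustrative; pick any size your hardware can handle.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# After the one-time weight download, nothing leaves your machine.
prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If you want to be extra sure, cut the process off from the network after the download; the weights are just a file on disk.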
Sure. But it's a black box, no? For all we know, the model could have been trained to start sabotaging in subtle ways after it's been running for several weeks.
If developers are asking it for code help and copy-pasting the output without really understanding all the pieces, it seems like it could get real bad.
Edit: Am I technically wrong, or do you think I’m just overblowing the risk?
The model has no idea how much time has passed unless it is explicitly told. It's not capable of forming new memories during routine operation; the black box remains immutable unless you explicitly do additional training on it, in which case you're supplying the training materials yourself and you know exactly what's in them. People who use LLMs for coding already know they're not perfect, and they're not going to be all that helpful unless you know enough of the programming language to understand what they're trying to do. I don't think the sort of subtlety you're suggesting is really possible to train into an LLM at our current technology level.
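To make the time point concrete: the model's only "clock" is whatever text you put into its context window. A minimal sketch, again with transformers (illustrative checkpoint; apply_chat_template assumes the model ships a chat template, which the DeepSeek chat models do):

```python
# Sketch: the model knows the date only because we inject it into the prompt.
from datetime import date
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Delete the system message and the model has no idea what day it is,
# let alone how long it has been "deployed".
messages = [
    {"role": "system", "content": f"Today's date is {date.today().isoformat()}."},
    {"role": "user", "content": "How many days until New Year's Day?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

No date in context means no "weeks have passed" trigger, so the sleeper-agent scenario has nothing to key off.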
And even though it’s a black box, it’s not magic. It can’t communicate with the outside world in any way other than the ways you provide it, and it can’t do anything unless you’re actively empowering it to do something.
So I'm not really concerned that DeepSeek has some kind of super secret hidden "programming" that's going to jump out and stab us. I think its only "threat" is what we already see on the surface: it's hugely disruptive to the business plans of companies like OpenAI, who were betting on AI remaining an expensive and centralized affair.