• FaceDeer@fedia.io

    The model has no idea how much time has passed unless it is explicitly told what time has passed. They’re not capable of forming new memories during routine operation; the black box remains immutable unless you explicitly do additional training on it, in which case you’re supplying it with the training materials yourself and you know exactly what’s in them. People who use LLMs for coding already know they’re not perfect, and they’re not going to be all that helpful unless you know enough of the programming language to understand what the model is trying to do. I don’t think the sort of subtlety you’re suggesting is really possible to train into an LLM at our current technology level.

    And even though it’s a black box, it’s not magic. It can’t communicate with the outside world in any way other than the ways you provide it, and it can’t do anything unless you’re actively empowering it to do something.

    So I’m not really concerned that DeepSeek has some kind of super secret hidden “programming” that’s going to jump out and stab us. I think its only “threat” is what we already see on the surface: it’s hugely disruptive to the business plans of companies like OpenAI, who were betting on AI remaining an enormously expensive and centralized affair.