Executives and researchers leading Meta’s AI efforts obsessed over beating OpenAI’s GPT-4 model while developing Llama 3, according to internal messages unsealed by a court on Tuesday in one of the company’s ongoing AI copyright cases, Kadrey v. Meta.
there are efficient, self-hostable models. i believe phi can run on mobile devices without too much trouble?
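for the curious, here's a minimal sketch of what self-hosting a small model looks like, assuming the hugging face transformers library and the microsoft/phi-2 checkpoint (my picks for illustration, not anything named above; on-device runtimes like llama.cpp would be the actual mobile route):

```python
# rough sketch: run a small model locally with hugging face transformers.
# model id, prompt, and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # ~2.7B params, small enough for a laptop CPU
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # older transformers may need trust_remote_code=True

prompt = "Explain why on-device inference keeps messages off third-party servers."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

nothing leaves the machine here, which is kind of the whole point.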
but the smaller the model, the less reliable (mostly)… meta is focusing on large, reliable models because that’s probably what they’re going to use for e.g. moderation (ha!), generating bullshit bot profiles (🤮), etc… they WANT people to rely on the “send it to the server in plain text” architecture rather than efficient on-device stuff