They support Claude, ChatGPT, Gemini, HuggingChat, and Mistral.

  • dukatos@lemm.ee · 9 hours ago

    And I still can’t convince it to stop caching images, because it does not follow the RFC.

    • HouseWolf@lemm.ee · 15 hours ago

      I switched a while back, before all the AI and “privacy preserving” telemetry stuff.

      Every update note I see for Firefox now just reinforces my decision.

  • JokeDeity@lemm.ee · 1 day ago

    Unpopular opinion: I think they’re doing it about as well as it can be done. It’s completely optional and doesn’t seem intrusive.

  • fibojoly@sh.itjust.works · 1 day ago

    Didn’t want it in Opera, don’t want it in Firefox. I mean they can keep trying and I’ll just keep on ignoring this shit :/

  • Scrollone@feddit.it · 1 day ago

    Wow, great job Firefox. Thanks.

    If I wanted unreliable bullshit like AI, I’d use Chrome.

  • nu11@sh.itjust.works · 2 days ago

    I don’t understand the hate. It’s just a sidebar for the supported LLMs. Maybe I’m misunderstanding?

    Yes, I would prefer Mozilla focus on the browser, but to me, this seems like it was done in an afternoon.

    • PrefersAwkward@lemmy.world · 20 hours ago

      It seems like common cynicism. Mozilla added this feature so as not to cede major features to other browsers, and their implementation lets you natively pick from lots of different AI providers.

      Not every feature is for everyone, and not every feature is finished improving at release.

      And despite popular opinion, organizations don’t do just one thing, then the next thing, then the thing after that. Organizations can and do focus on and prioritize many things at the same time.

      And for people who are naysaying AI at every mention, it has a lot of great and fascinating uses, and if you think otherwise, you really should try them more. I’ve used it plenty for work and life. It’s not going away, might as well do some nice things with it.

    • Scrollone@feddit.it · 1 day ago

      I want my browser to be a browser. I don’t want Pocket, I don’t want AI, I don’t want bullshit. There are plugins for that.

  • celeste@lemmy.blahaj.zone · 1 day ago

    If they do it in a privacy-preserving way, this could help them win back market share, which will generally benefit an open internet.

      • celeste@lemmy.blahaj.zone · 11 hours ago

        Because browsers are the most-used tool on most computers. Ordinary people go to Google or ask ChatGPT mundane questions. If their browser can do that, they need one app fewer, and it’s more convenient, which is what non-tech-savvy people especially care about.

  • fruitycoder@sh.itjust.works · 1 day ago (edited)

    I will say, the Le Chat provider is pretty decent. You really can use natural language with it: “rewrite it with a better rhyme scheme,” “remove the last line,” and it just gets it.

    Why no local option, though? Why no anonymising option?

    Edit: there is a right-click option, which actually makes this officially useful for me now (“summarize this!”).

    Other providers do have RAG options, and Mistral supports building agents with specified documentation too, so you can at least fine-tune behavior (not as good as full grounding, though, IMHO).
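    For the curious: “grounding”/RAG at its simplest just means retrieving the most relevant snippets and pasting them into the prompt. A toy sketch of that idea (naive word-overlap scoring, purely illustrative, not how any real provider ranks documents):

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by naive word overlap with the query; keep the top k."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

docs = [
    "Le Chat is Mistral's hosted assistant.",
    "RAG pastes retrieved snippets into the prompt.",
    "Firefox added an AI sidebar.",
]
context = retrieve("how does RAG use retrieved snippets", docs)
# Ground the model by prepending the retrieved context to the question.
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)
```

    Real RAG systems use embeddings and vector search instead of word overlap, but the shape of the trick is the same.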

  • ocassionallyaduck@lemmy.world · 2 days ago

    Thing is, for your average user with no GPU who never thinks about RAM, running a local LLM is intimidating. But it shouldn’t be. Any system with an integrated GPU, and the more RAM the better, can run simple models locally.

    The not-so-dirty secret is that ChatGPT 3 vs. 4 isn’t that big a difference, and neither is leaps and bounds ahead of the publicly available models for about 99% of tasks. For that 1%, people will ooh and aah over it, but 99% of use cases are only seeing marginal gains over 4o.

    And the simplified models that run “only” 95% as well? They can use 90% fewer resources and give pretty much identical answers outside of hyper-specific use cases.

    Running a “smol” model, as some are called, gets you all the bang for none of the buck, and your data stays on your system and never leaves.

    I’ve been yelling from the rooftops to some stupid corporate types that once the model is trained, it’s trained. Unless you are training models yourself, there is no need for the massive AI clusters; you just need the model. Run it locally on your hardware at a fraction of the cost.
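    For reference, a locally served model (e.g. via Ollama) is just an HTTP endpoint on localhost. A minimal sketch of the request shape, assuming Ollama’s default port 11434 and its /api/chat endpoint; the model name "llama3.2" is an example, and the actual network call is left commented out so nothing needs to be running:

```python
import json

# Assumption: stock Ollama install listening on its default port.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body for a single-turn chat with a local model."""
    return {
        "model": model,  # e.g. "llama3.2" or a small "smol"-class model
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete reply instead of token chunks
    }

body = build_chat_request("llama3.2", "Summarize this page in two sentences.")
print(json.dumps(body, indent=2))

# To actually send it (requires a model pulled and Ollama running):
#   import urllib.request
#   req = urllib.request.Request(OLLAMA_URL, json.dumps(body).encode(),
#                                {"Content-Type": "application/json"})
#   reply = json.loads(urllib.request.urlopen(req).read())
#   print(reply["message"]["content"])
```

    Everything stays on your machine; no tokens, no per-request billing.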

    • LWD@lemm.ee · 2 days ago

      There’s the tragedy with this new feature: they fast-tracked it past more popular requests, sticking it into Release Firefox.

      But they only rushed the part that connects to third parties. There was also a “localhost” option which was originally alongside the Big Five corporate offerings, but Mozilla ultimately decided to bury that one inside of the about:config settings.
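      (For reference, the relevant hidden prefs look roughly like this — names based on recent Firefox builds, so treat them as a sketch rather than gospel:)

```
browser.ml.chat.enabled         true                    // the AI sidebar itself
browser.ml.chat.hideLocalhost   false                   // un-buries the localhost option
browser.ml.chat.provider        http://localhost:8080   // point it at your own local server
```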

      • MrOtherGuy@lemmy.world · 2 days ago

        I’m guessing the reason (and a good one at that) is that simply having an option to connect to a local chatbot would just confuse users, because they also need the actual chatbot running on their system. If you can set that up, then you can certainly toggle a simple switch in about:config to show the option.

    • ilhamagh@lemmy.world · 2 days ago

      Can you point me to some resources for running a smol LLM?

      My use case is probably just to help type up miscellaneous ideas I have, or to check my grammatical errors, in English.

      Thanks in advance.

    • Lojcs@lemm.ee · 2 days ago

      Last time I tried a local LLM (about a year ago), it generated only a couple of words per second and the answers were barely relevant. Also, I don’t see how a local LLM can fulfill the glorified-search-engine role that people use LLMs for.

      • ocassionallyaduck@lemmy.world · 1 day ago

        Try again. Simplified models take the large ones and pare down their memory requirements, and can even run on the CPU. The “smol” model I mentioned is real, and hyper-fast.

        Llama 3.2 is pretty solid as well.

        • Lojcs@lemm.ee · 1 day ago (edited)

          These are the answers they gave the first time.

          Qwencoder is persistent even after 6 rerolls.

          Anyway, how do I make these use my GPU? The ollama logs say the model will fit into VRAM and that it’s offloading all layers, but GPU usage doesn’t change and the CPU takes the load. And regardless of model size, VRAM usage never changes and RAM only goes up by a couple hundred megabytes. Any advice? (Linux / Nvidia.)

          Edit: it didn’t have CUDA enabled, apparently; fixed now.
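          (The CUDA-not-enabled failure mode above is a common one. A rough sanity check from Python for whether the NVIDIA user-space stack is even visible — this only checks discoverability, nothing Ollama-specific:)

```python
import shutil
from ctypes.util import find_library

def cuda_visible() -> dict:
    """Rough check: can this system even find the NVIDIA user-space pieces?"""
    return {
        "nvidia-smi": shutil.which("nvidia-smi"),  # driver CLI on PATH?
        "libcuda": find_library("cuda"),           # CUDA driver library?
    }

for name, path in cuda_visible().items():
    print(f"{name}: {path or 'not found'}")
# If both come back 'not found', a runtime like ollama will likely
# fall back to CPU inference, matching the symptoms described above.
```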

          • ocassionallyaduck@lemmy.world · 1 day ago

            Nice.

            Yeah, I don’t trust any AI models for facts, period. They all just lie, confidently. The smol model there at least tried and got it right at first… before confusing the sentence context.

            Qwen is a good model too. But if you want something to run home automation or do text summaries, smol is solid enough. I’m on CPU, so it’s good enough for me.

      • TheDorkfromYork@lemm.ee · 2 days ago

        They’re fast and high quality now. ChatGPT is the best, but local LLMs are great, even with 10 GB of VRAM.