Ouch.

  • ikt · 1 day ago

    Must be gemini specific, couldn’t replicate locally

    • serenissi@lemmy.world · 20 hours ago

      LLMs are inherently probabilistic. A response can’t be reliably reproduced even with the exact same tokens on the exact same model with the exact same params.

    • TachyonTele@lemm.ee · 1 day ago

      Maybe it being 16 questions into the conversation had an effect on it? I don’t know how much “memory” it keeps for one person/conversation.
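
serenissi’s point about probabilistic sampling can be shown with a toy sketch (hypothetical logits and a plain temperature-softmax sampler, not any real model’s code): even when the model, input, and parameters are fixed, the final token is drawn from a probability distribution, so repeated runs can produce different outputs.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    # Softmax with temperature: convert raw logits into a probability distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index according to those probabilities.
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Same (made-up) logits every time -- but each draw is random.
logits = [2.0, 1.5, 0.3]
draws = [sample_token(logits, rng=random.Random(seed)) for seed in range(10)]
print(draws)  # a mix of token indices rather than one fixed answer
```

Unless the provider pins the sampler (temperature 0 / greedy decoding and a fixed seed), this draw step alone is enough to make chat responses non-reproducible.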