Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • OpenStars@discuss.online · 6 months ago

    Now who is anthropomorphizing? It’s not about “blame” so much as needing words to describe the event. When the AI cannot be relied upon because it was insufficiently trained to distinguish truth from falsehood (which, by the way, many humans struggle with these days too), that is not its fault. But it would be our fault if we in turn relied upon it as a source of authoritative knowledge merely because it was presented in a confident-sounding manner.

    No, my example is literally telling the AI that socks are edible and then asking it for a recipe.

    Wait… while it’s true that that doesn’t sound like a hallucination, what does that have to do with this discussion? The OP wasn’t about running an AI model in this direct manner; it was about doing Google searches, where the results are already precomputed. It doesn’t become a “hallucination” until whoever asked for socks to be treated as edible tries to pass those results off in a wider context - one where socks are generally considered inedible - as if they applied, when they do not.

    • FaceDeer@fedia.io · 6 months ago

      Wait… while it’s true that that doesn’t sound like a hallucination, what does that have to do with this discussion?

      Because that’s exactly what happened here. When someone Googles “how can I make my cheese stick to my pizza better?”, Google does a web search that comes up with various relevant pages. One of those pages includes the suggestion to use glue in your pizza sauce. The Google Overview AI is then handed the text of that page and told “write a short summary of this information,” and it does so accurately and without hallucination - see the sketch at the end of this comment.

      “Hallucination” is a technical term in LLM parlance. It means something specific, and the thing that’s happening here does not fit that definition. So the fact that my socks example is not a hallucination is exactly my point. This is the same thing that’s happening with Google Overview, which is also not a hallucination.
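
      A minimal Python sketch of that retrieve-then-summarize flow, assuming a hypothetical call_llm stand-in for whatever model endpoint is actually used (this is not Google's real pipeline or API):

      ```python
      # Sketch of the flow described above: search results are retrieved first,
      # then the LLM is asked only to summarize the retrieved text.

      def call_llm(prompt: str) -> str:
          # Hypothetical placeholder for a real model call.
          raise NotImplementedError("stand-in for a real model endpoint")

      def overview_summary(query: str, retrieved_page_text: str) -> str:
          # The model is asked to summarize the source faithfully. If the source
          # page itself suggests glue in the sauce, a faithful summary repeats
          # that claim - garbage in, garbage out rather than hallucination.
          prompt = (
              f"User query: {query}\n\n"
              f"Source text:\n{retrieved_page_text}\n\n"
              "Write a short summary of this information."
          )
          return call_llm(prompt)
      ```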