Who’s ignoring hallucinations? It gets brought up in basically every conversation about LLMs.
People who suggest, let’s say, firing employees of crisis intervention hotline and replacing them with llms…
“Have you considered doing a flip as you leap off the building? That way your death is super memorable and cool, even if your life wasn’t.”
-Crisis hotline LLM, probably.
Less horrifying conceptually, but in Canada a major airline tried to replace its support services with a chatbot. The chatbot then invented discounts that didn’t actually exist, and the courts ruled that the airline had to honour them. The chatbot was, for all intents and purposes, no more or less official a source of information than anything else the airline publishes, such as its website and other documentation.
I approve of that. It is funny, and there is no harm to anyone other than the shareholders, so… 😆
The part that’s being ignored is that it’s a problem, not the existence of the hallucinations themselves. Currently a lot of enthusiasts are just brushing it off with the equivalent of “boys will be boys” (“AIs will be AIs”), which is fine until an AI, say, gets someone jailed by providing garbage caselaw citations.
And, um, you’re greatly overestimating what someone like my technophobic mother knows about AI (xkcd 2501: Average Familiarity seems apropos). There are a lot of people out there who never get into a conversation about LLMs.
It really needs to be a disqualifying factor for generative AI. Even using it for my hobbies is useless when I can’t trust it knows dick about fuck. Every time I test the new version out it gets things so blatantly wrong and contradictory that I give up; it’s not worth the effort. It’s no surprise everywhere I’ve worked has outright banned its use for official work.
Maybe on Lemmy and in some pockets of social media. Elsewhere it definitely doesn’t.
EDIT: Also I usually talk with IRL non-tech people about AI, just to check what they feel about it. Absolutely no one so far knew what hallucinations were.