Most upvoted means most correct.
“If it’s on the internet it must be true” implemented in a billion-dollar project.
Pretty sure AI will start telling us “You should not believe everything you see on the internet as told by Abraham Lincoln”
Not sure what would frighten me more: the fact that this is training data, or that it was hallucinated
Neither; in this case it’s an accurate summary of one of the results, which happens to be a shitpost on Quora. See, LLM search can work as intended and authoritatively repeat its results with zero critical analysis!
Can’t even really blame the AI at that point
Sure we can. If it gives you bad information because it can’t differentiate between a joke and good information… well, seems like the blame falls exactly at the feet of the AI.
Should an LLM try to distinguish satire? Half of Lemmy users can’t even do that
Do you just take what people say on here as fact? That’s the problem: people are taking LLM results as fact.
It should if you are gonna feed it satire to learn from
Sarcasm detection is a very hard problem in NLP, to be fair.
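To make the point concrete, here’s a toy sketch (plain Python, nothing like what any real LLM or search engine actually does): a naive word-level sentiment scorer takes sarcastic text at face value, because the literal words point one way while the intent points the other. Real models are far more sophisticated, but the underlying ambiguity is the same.

```python
# Toy illustration only: a naive keyword-based sentiment scorer
# that reads sarcasm literally. Not a real sarcasm detector.

POSITIVE = {"great", "love", "amazing", "fantastic", "perfect"}
NEGATIVE = {"hate", "terrible", "awful", "useless", "broken"}

def naive_sentiment(text: str) -> str:
    # Split on whitespace, strip punctuation, lowercase each token.
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# The literal words read positive; a human reads it as a complaint.
print(naive_sentiment("Oh great, the update deleted all my files. Fantastic."))
# -> "positive"
```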
If it’s being used to give the definitive answer to a search, then it should. If it can’t, then it shouldn’t be used for that.