The issue I have with LLMs is that while they are great at certain tasks, they are, by their nature, bad at anything factual.
I can, for example, use one to quickly draft an email or a piece of Python code, and I can immediately see whether or not the response it generated is actually what I want.
If I ask it what the hottest day in a given country was, or ask it to explain something, I have absolutely no idea whether it's bullshit or not, and I'll have to double-check it anyway.
I think the learning curve with LLMs as a tool is knowing when to use them and when to rely on other sources instead.