I keep having to repeat this, but the conversation keeps going in a loop: LLMs aren’t entirely useless, and they’re not search engines either. You shouldn’t ask them any questions you don’t already know the answer to (or at least have the tools to verify).
Yep. Or because you can recognize the answer but can’t remember it off the top of your head. Or to check for errors in a piece of text or code or a translation, or…
It’s not “trust but verify”, which I hate as a concept. It’s just what the tech can and cannot do. It’s not a search engine finding matches to a query inside a large set of content; it’s a stochastic text generator giving you the most likely follow-up based on its training dataset. It’s very good autocorrect, not mediocre search.
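If you want to see the “stochastic text generator” part concretely, here’s a minimal sketch using the Hugging Face transformers library (the model name, prompt, and sampling setup are illustrative choices, not anything from this thread). The point is that the model only ever produces a probability distribution over the next token and generation samples from it; there’s no lookup into a document index anywhere.

```python
# Minimal sketch of next-token sampling (illustrative, not how any specific
# product is wired up). Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any small public causal LM will do for the demo
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The capital of Australia is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)  # scores -> probability distribution

# Sampling is the "stochastic" part: plausible continuations are likely,
# but nothing here checks the output against reality or any document set.
next_id = torch.multinomial(probs, num_samples=1)
print(tokenizer.decode(next_id))
```

Run it a few times and you’ll get different continuations, weighted by how often similar text followed the prompt in the training data. That’s the whole mechanism, which is why “very good autocorrect” is the better mental model than “search”.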