> If you think of LLMs as something with actual intelligence you’re going to be very unimpressed… It’s just a model to predict the next word.
This is exactly the problem, though. They don’t have “intelligence” or any actual reasoning, yet they are constantly being used in situations that require reasoning.
Creating software is a great example, actually. Coding absolutely requires reasoning. I’ve tried using code-focused LLMs to write blocks of code, or even some basic YAML files, but the output is often unusable.
They rarely make syntax errors, but they will do things like reference libraries that haven’t been imported or hallucinate functions that don’t exist. They also constantly misunderstand the assignment and create something that technically works but doesn’t accomplish the intended task.
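To illustrate the pattern (a made-up sketch, not verbatim LLM output; `json.parse_file` is a hypothetical hallucinated function I invented for the example):

```python
import json  # the kind of import the model tends to leave out


def load_config(path: str) -> dict:
    """Read a JSON config file and return it as a dict."""
    # Correct, real stdlib call:
    with open(path) as f:
        return json.load(f)
    # What an LLM will confidently emit instead, e.g.:
    #   return json.parse_file(path)
    # json.parse_file() doesn't exist: a plausible-sounding
    # hallucination that fails the first time you run it.
```

The hallucinated version looks completely reasonable at a glance, which is exactly why this failure mode is so insidious.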