https://nmn.gl/blog/ai-illiterate-programmers
Relevant quote
Every time we let AI solve a problem we could’ve solved ourselves, we’re trading long-term understanding for short-term productivity. We’re optimizing for today’s commit at the cost of tomorrow’s ability.
Hey that sounds exactly like what the last company I worked at did for every single project 🙃
“Every time we use a lever to lift a stone, we’re trading long-term strength for short-term productivity. We’re optimizing for today’s pyramid at the cost of tomorrow’s ability.”
Precisely. If you train by lifting stones, you can still use the lever later, but you’ll be able to lift even heavier things by using both your new strength AND the lever’s mechanical advantage.
By analogy, if you’re using LLMs to do the easy bits in order to spend more time on the harder problems, fuckin’ A. But the idea that you can just replace actual coding work with copy-paste is a shitty one. Again, by analogy with rock lifting: now you have noodle arms and can’t lift shit if your lever breaks or doesn’t fit under a particular rock or whatever.
Also: assuming you know what the easy bits are before you actually have experience doing them is a recipe for training incorrectly.
I use plenty of tools to assist my programming work. But I learn what I’m doing and why first. Then, once I have that experience, if there’s a piece of code I find myself using frequently or having to look up frequently, I make myself a template (VS Code’s snippet features are fucking amazing when you build your own snips well, btw).
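For anyone who hasn’t tried it, a user snippet is just a JSON entry in a per-language snippets file (Command Palette → “Configure User Snippets”). A minimal sketch of what one might look like — the `clog` prefix and the log-statement body here are made-up examples, not the commenter’s actual snips:

```json
// Snippet files are JSON with comments allowed.
// "prefix" is what you type; Tab expands it to "body".
// ${1:...} and ${2:...} are tab stops with placeholder text; $0 is the final cursor position.
{
  "Log labeled value": {
    "prefix": "clog",
    "body": [
      "console.log('${1:label}:', ${2:value});$0"
    ],
    "description": "Quick labeled console.log with tab stops"
  }
}
```

Once defined, typing the prefix and accepting the suggestion drops in the template and lets you Tab between the placeholders, which is exactly the “frequently looked-up code” use case.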
I like the sentiment of the article; however, this quote really rubs me the wrong way:
“I’m not suggesting we abandon AI tools—that ship has sailed.”
Why would that ship have sailed? No one is forcing you to use an LLM. If, as the article supposes, using an LLM is detrimental, and it’s possible to start having days where you don’t use an LLM, then what’s stopping you from increasing the frequency of those days until you’re not using an LLM at all?
I personally don’t interact with any LLMs, neither at work nor at home, and I don’t have any issue getting work done. Yeah, there was a decently long ramp-up period — maybe about 6 months — when I started on my current project at work, where it was more learning than doing; but now I feel like I know the codebase well enough to approach any problem I come up against. I’ve even debugged USB driver stuff, and, while it took a lot of research and reading USB specs, I was able to figure it out without any input from an LLM.
Maybe it’s just because I’ve never bought into the hype, but I just don’t see why people have such high respect for LLMs. I’m of the opinion that using an LLM has potential only as a truly last resort — and even then it will likely not be useful.
Because the tools are here and not going away.
The actually useful shit LLMs can do. Their point is that relying mostly on an LLM hurts you; that doesn’t make it an invalid tool in moderation.
You seem to think of an LLM only as something you can ask questions; that’s one of their worst capabilities, and far from the only thing they do.
Swiss army knives have had awls for ages. I’ve never used one. The fact that the tool exists doesn’t mean that anybody has to use it.
Which is?