re: The warning/grammar-checking system.
What you’re describing is called a linter, and linters have existed for ages.
The only real improvement I can think of would be giving them a full understanding of your codebase as a whole, which would require a deeper grasp of the code than current-generation AI is capable of. There might be some marginal gains possible with the current generation, but it’s not going to be groundbreaking.
What I have found AI very useful for is the basic, repetitive stuff that isn’t easily automated in other ways, or that I simply can’t be bothered to write again, e.g. “Given this data model, generate a validated CRUD form” or “write a bash script that renames all the files in a folder to follow this pattern”.
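For the rename example, the result might look something like the sketch below. The “prefix every file with today’s date” pattern is my own stand-in, since the prompt leaves the actual pattern open:

    #!/usr/bin/env bash
    # Sketch of what the model might hand back for the rename prompt.
    # The target pattern here is invented: prefix every file with today's date.
    set -euo pipefail

    dir="${1:-.}"                # folder to process; defaults to the current one
    prefix="$(date +%Y-%m-%d)"

    for f in "$dir"/*; do
        [ -f "$f" ] || continue                          # skip subdirectories
        mv -n -- "$f" "$dir/${prefix}_$(basename "$f")"  # -n: never overwrite
    done

A dozen lines of boilerplate nobody enjoys typing, which is exactly the kind of thing it’s good at.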
You still need to check what it produces, though, because it will happily hallucinate parameters, or even entire validation libraries, that don’t exist; but it’s usually close enough to be used as a starting point.