This comment is evidence that your write-up would have been just fine and understandable by other humans. Using AI to write your technical writing for you makes me lose trust in what you're saying. No one cares if you're a non-native English speaker, just write.
Your English seems good enough to communicate. I'd encourage you to trust your abilities; any misunderstandings can be clarified with follow-up questions if necessary.
This is why you should set up a project ruleset/constitution when you start. Do you want to prefer libraries or inline code? You can even choose at what point you think the trade-off is worth it. 1000 lines of code? 10 functions? You can choose whatever.
Then, you tell your AI to stick to that rule, and it will. There are tradeoffs to each choice, and people fall into different camps. Make your choice, write it down, and tell the AI to always follow that rule, and then you have it your way.
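A ruleset like this typically lives in a file the agent reads at the start of every session (for Claude Code, a `CLAUDE.md` at the repo root). A minimal sketch, where the thresholds are illustrative and you'd pick your own:

```markdown
# Project rules (example — thresholds are arbitrary, choose yours)

## Dependencies
- Prefer writing code inline over adding a dependency.
- A new dependency is only justified if replacing it would take
  more than ~1000 lines of code or ~10 nontrivial functions.
- Always ask before adding anything to pyproject.toml.
```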
Hmm, I don’t see this much anymore. I typically start a project in plan mode and tell Claude to do some research to bring me 2-3 alternatives. Then we talk about the pros and cons before deciding on the libraries, etc.
On the other hand, if you just tell it to do a thing, I could believe that it would just do the thing. It is pretty bad at high level design judgment. Human guidance on architecture choices results in much better output.
Honest question, no shade: wasn't that a bit your fault for not googling or asking it to consider existing approaches and solutions? AI will be as dumb as you let it, imo. I always ask it to do a bit of research as I craft a plan with it.
I consider myself AI skeptical-ish and I detest when people defend LLMs with "it's user error, prompt better," but in this case it actually is user error.
If you want a particular implementation approach, you need to specify not only the features you want, but the implementation strategy, at least at a high level. This could be as simple as adding "use pywikibot" or "use relevant packages from pypi" to the end of your prompt. Or you could seed your project with some manually written scaffolding, including a pyproject.toml.
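Seeding the project could look something like the sketch below; the package name, version, and dependency choice are placeholders, not a prescription:

```toml
# pyproject.toml — minimal scaffolding (names/versions are placeholders)
[project]
name = "wiki-tool"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "pywikibot",  # pre-declaring the library nudges the model toward using it
]

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
```

With the dependency already declared, the model tends to build on it rather than reinventing it from scratch.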
While LLMs do tend to have NIH syndrome by default, I think this is a good default. I'd much rather have tight control over when and how external dependencies get included than let a prompt fire for 40 minutes and come back to find 2 GB of newly installed node packages with a dependency tree 300 levels deep.
On the other hand, I often want an LLM to write things from scratch instead of bringing in 10x the surface area in unnecessary dependencies, and I very, very rarely get better results when these things are let loose on a cesspool of a web. Given that real people have vastly different preferences, you either have to cater to a subset or else require everyone to be a bit more specific with their desires. It's not that surprising.
Yeah, and you can tell the AI to just write the bits of the code that it actually needs for the functionality you are using. If you end up needing more of it, that is fine, the AI will just write more of it when it needs it.
The tradeoffs are very different with AI code than human written code. There are still tradeoffs, but they are different now.
https://www.pangram.com/history/dee030c0-0362-43d0-8fbd-bbab...
If you tell it to leverage dependencies, it will. If you (like me) prefer that it avoid dependencies, it will.