Rewritten from https://www.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_year_as/
Initial conditions are vital for trajectory
The first few thousand lines set the patterns for the project. When starting a new project, pay attention to getting the process, guidelines, and guardrails right from the start. Whenever something is being done for the first time, make sure it's done cleanly, because those early patterns are what the agent replicates across the rest of the project. Get the design wrong early and the whole project turns to garbage. If your codebase is clean, AI makes it cleaner, faster. If it's a mess, AI makes it messier, faster.
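For example, if the first endpoint in a project cleanly separates validation from persistence, later generated endpoints tend to inherit that separation. Here is a minimal sketch of what "done clean the first time" can look like; all names (CreateUserRequest, create_user, repo) are hypothetical:

```python
# A minimal sketch of a "first pattern done clean". Whatever shape the first
# handler takes, the agent will tend to copy it for every handler after it.
from dataclasses import dataclass


@dataclass(frozen=True)
class CreateUserRequest:
    email: str
    name: str


class ValidationError(Exception):
    """Raised when input fails validation; callers map this to a 400."""


def validate(req: CreateUserRequest) -> None:
    # Validation lives in one place, separate from persistence and transport.
    if "@" not in req.email:
        raise ValidationError("invalid email")
    if not req.name.strip():
        raise ValidationError("name is required")


def create_user(req: CreateUserRequest, repo) -> int:
    # Handler pattern: validate, then delegate to a repository abstraction.
    # If this first handler mixed SQL, validation, and HTTP concerns together,
    # the agent would replicate that mess in every endpoint that follows.
    validate(req)
    return repo.insert(req)
```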
Producing code does not imply value
The temporary dopamine hit from shipping with AI agents blinds you. You think you're moving fast, but zoom out and you're actually moving slower, because technical debt ignored early forces constant refactors later.
Project health and one-shot prompts
A simple measure of project health: when you want to do something, can you do it in one shot? If not, then either the code is becoming a mess, you don't understand some part of the system well enough to craft a good prompt, or the problem is too big to tackle all at once and needs breaking down.
Skill and expertise still matter
There's a big difference between technical and non-technical people using LLMs to build production apps. Engineers who built projects before LLMs know what to watch out for and can detect when things go sideways. Non-technical people can't. Architecture, system design, security, and infra decisions will bite them later.
Design decisions are significant investments
Choosing the right framework, dependencies, or database schema means choosing the foundation everything else is built on; it can't be done by giving your LLM a one-liner prompt. These decisions deserve more time than adding a feature does.
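As an illustration, a schema decision is cheap to make and expensive to unmake, because everything downstream hard-codes it. A hedged sketch using SQLAlchemy's declarative mapping; the models here are made up:

```python
# Hypothetical illustration of why a schema choice is a foundation, not a
# feature: every later table and every line of code inherits the decision.
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    # Decision: surrogate integer key vs. using email as the natural key.
    # Cheap to pick now; changing it later means migrating every table
    # that references users, plus all the code built on top.
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, nullable=False)


class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    # This foreign key (and every one like it) hard-codes the choice above.
    user_id = Column(Integer, ForeignKey("users.id"), nullable=False)
```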
LLM code is not optimized by default
LLM-generated code is not optimized for security, performance, or scalability by default. You have to explicitly ask for those qualities and verify them yourself.
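A concrete example of the kind of issue you have to ask about explicitly: both functions below pass a happy-path "does the feature work" test, but only one survives a security review. A minimal sqlite3 sketch with made-up table and column names:

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, email: str):
    # The version an LLM often produces first: works on the happy path,
    # but interpolates user input into SQL, a classic injection vector.
    return conn.execute(
        f"SELECT id, email FROM users WHERE email = '{email}'"
    ).fetchone()


def find_user_safe(conn: sqlite3.Connection, email: str):
    # Parameterized query: the driver handles escaping the value.
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchone()
```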
Review code changes
The LLM might use created_at as a fallback for birth_date. That won't be caught by just testing whether the feature works. The LLM is an actor pretending to fulfill a role, so review the actual changes, not only the behavior.
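Here is a sketch of that exact failure mode. The field names come from the example above; everything else is hypothetical. The buggy version always returns a date, so a feature-level test passes while the data is silently wrong:

```python
from datetime import date


def get_birth_date(user: dict) -> date:
    # Plausible-looking fallback an LLM might invent to avoid returning None.
    # "Works" for every user, so a functional test passes.
    return user.get("birth_date") or user["created_at"]  # BUG: wrong semantics


def get_birth_date_reviewed(user: dict) -> date | None:
    # The honest version a code review would insist on: missing data
    # stays missing instead of being quietly substituted.
    return user.get("birth_date")
```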
LLMs are not databases of facts
Avoid using LLMs for factual lookups or for anything that can be evaluated with a deterministic function (e.g., 1 + 1).
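The same principle in code: anything with one correct answer should be computed, not generated. Below is a tiny deterministic arithmetic evaluator; in an agent setup it would be exposed as a tool the LLM calls instead of guessing at the answer. This is an illustrative sketch, not a prescribed design:

```python
import ast
import operator

# Whitelisted binary operators; everything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}


def evaluate(expr: str) -> float:
    """Deterministically evaluate arithmetic like '1+1' without eval()."""

    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError(f"unsupported expression: {expr!r}")

    return walk(ast.parse(expr, mode="eval"))


assert evaluate("1+1") == 2  # same answer every time, no tokens spent
```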