Ask HN: How are you LLM-coding in an established code base?
Here’s how we’re working with LLMs at my startup. We have a monorepo with scheduled Python data workflows, two Next.js apps, and a small engineering team. We use GitHub for SCM and CI/CD, deploy to GCP and Vercel, and lean heavily on automation.

Local development: Every engineer gets Cursor Pro (plus Bugbot), Gemini Pro, OpenAI Pro, and optionally Claude Pro. We don’t really care which model people use. In practice, LLMs are worth roughly 1.5 excellent junior/mid-level engineers per engineer, so paying for multiple models is easily worth it. We rely heavily on pre-commit hooks: ty, ruff, TypeScript checks, tests across all languages, formatting, and other guards. Everything is auto-formatted. LLMs make types and tests much easier to write, though complex typing still needs some hand-holding.

GitHub + Copilot workflow: We pay for GitHub Enterprise primarily because it allows assigning issues to Copilot, which then opens a PR. Our rule is simple: if you open an issue, you assign it to Copilot.
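For concreteness, a pre-commit setup along these lines might look roughly like the sketch below. The ruff hooks come from the official astral-sh/ruff-pre-commit repo; the ty and TypeScript checks are written as local `language: system` hooks, and their entry commands (and the pinned rev) are assumptions about your particular project layout, not a prescription:

```yaml
# .pre-commit-config.yaml (minimal sketch, adjust to your repo)
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.0            # pin to whatever version you actually use
    hooks:
      - id: ruff           # lint Python
      - id: ruff-format    # auto-format Python
  - repo: local
    hooks:
      - id: ty
        name: ty type check
        entry: ty check           # assumes ty is installed and on PATH
        language: system
        types: [python]
        pass_filenames: false     # check the whole project, not just staged files
      - id: tsc
        name: TypeScript check
        entry: npx tsc --noEmit   # assumes a tsconfig.json at the repo root
        language: system
        types_or: [ts, tsx]
        pass_filenames: false
```

Running test suites from a pre-commit hook works the same way (another `repo: local` entry whose `entry` invokes your test runner); whether that is fast enough to run on every commit depends on the suite.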