Beyond Vibe Coding
My previous post covered AI transformation in our org. This post is about a critical piece of that transformation: aligning AI agent usage in engineering workflows. Here are our five key takeaways.
1. AI Assisted Engineering
Senior engineers benefit the most from coding agents, because agents amplify existing capabilities and gaps alike. Solid engineering practices are therefore a prerequisite for successful AI adoption. It also pays dividends for engineers to understand transformers and LLMs at reasonable depth. Vibe coding remains useful as a mental paradigm, but AI in software engineering has evolved into “AI Assisted Engineering”, where every stage of the software engineering pipeline is significantly shaped by AI usage.
2. Spec Driven
Agents are highly dependent on the context you give them. One key source is the documentation our org maintains. Claude Code’s purportedly leaked system prompt is a reminder of how detailed specs are used to steer agents. When using Claude Code (CC), we build context dynamically from spec assets (requirements, design, detailed design, etc.). Specs must therefore be high quality, agent friendly, and structured for iteration at scale.
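To make this concrete, here is a minimal sketch of dynamic context assembly. The file names, layout, and `build_context` helper are illustrative, not our actual structure; CC handles this through its own context mechanisms:

```python
from pathlib import Path

# Illustrative spec layout; adjust to your org's structure.
SPEC_ASSETS = ["requirements.md", "design.md", "detailed_design.md"]

def build_context(task: str, spec_dir: str = "specs") -> str:
    """Assemble a task-specific prompt from spec assets.

    A real setup would rank and filter spec sections by relevance
    rather than concatenating whole files, to keep context small.
    """
    parts = [f"# Task\n{task}"]
    for name in SPEC_ASSETS:
        path = Path(spec_dir) / name
        if path.exists():
            parts.append(f"# {name}\n{path.read_text()}")
    return "\n\n".join(parts)

print(build_context("Implement rate limiting for the ingest API"))
```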
Just like code, documentation is written for both human and agent consumption, and is pair-developed. When CC fails, I debug with CC to find what was missing from the context, request a document-update proposal, review it (as with all code), and then update the doc. How and when you document should be guided by agent usage patterns, to avoid documentation hell. If your lead engineers can continually leverage micro-spec changes to align agents with org needs, you unlock scale.
In addition to traditional specs, several special specs improve the agent’s behavior. Text summaries of repos (gitingest, repo2txt), module-specific AGENT*.md files, and workflow-specific skills (same as slash commands) are all steps toward “the perfect context”. Where traditional specs capture the “what”, these special specs capture the “how”.
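As one concrete example, gitingest exposes a small Python API for producing repo text summaries. A sketch (check the library’s docs, as the API may have changed; the repo URL is a placeholder):

```python
# pip install gitingest
from gitingest import ingest

# Returns a summary, a directory tree, and concatenated file contents:
# a single text artifact an agent can pull into context.
summary, tree, content = ingest("https://github.com/owner/repo")

# Persist it as a spec asset for later agent runs.
with open("repo_summary.txt", "w") as f:
    f.write(summary + "\n\n" + tree)
```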
There is broader effort in the industry to best represent experience/skill digitally, and then optimally retrieve it into context to simulate what our brains naturally do. Human + agent spec driven development is a pragmatic step in that direction.
3. Bootstrapped Building
‘Non-optimal context’ and ‘task too complex’ are the two most common reasons agents fail. A smaller context conveying the same information is also better than a larger one. We trained our engineers to deconstruct tasks for agents methodically: start small, build incrementally, and aggressively auto-test what is already built (agile in action).
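Schematically, the loop looks like this (`run_agent` is a placeholder for your agent invocation, and the steps are illustrative, not a prescription):

```python
import subprocess

def run_tests() -> bool:
    """Run the existing suite; no step ships unless it passes."""
    return subprocess.run(["pytest", "-q"]).returncode == 0

def run_agent(step: str) -> None:
    """Placeholder for invoking the coding agent on one small step."""
    ...

# Deconstruct the feature into agent-sized steps, each independently testable.
steps = [
    "Add a RateLimiter class with a fixed-window counter and unit tests",
    "Wire RateLimiter into the ingest handler behind a feature flag",
    "Add integration tests for the 429 path",
]

for step in steps:
    run_agent(step)  # small context, small task
    assert run_tests(), f"Halt and debug before continuing: {step}"
```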
A pattern in agent design is to have agents write code, which then enables validation (CC does this very well). This instinct is useful in engineering workflows too. Where useful, we have the agent build tools that it can then reuse (via MCP). This acts as bootstrapped scaffolding: specs, test suites, and tools are continually built and improve the reliability of further building.
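As a sketch of how cheap this can be, the official MCP Python SDK lets a small agent-authored utility become a reusable tool (the log-counting tool here is a hypothetical example, not one of ours):

```python
# pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("team-tools")

@mcp.tool()
def count_error_lines(log_text: str) -> int:
    """Count ERROR lines in a log dump: the kind of small utility an
    agent writes once during a task and then keeps reusing via MCP."""
    return sum(1 for line in log_text.splitlines() if "ERROR" in line)

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for agents like CC
```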
4. Multiple Models
Each frontier model has specific strengths. We exploit this by switching agents and models: skills like “consult-codex”, “gemini-code-assist” for PR reviews, and Qwen running locally on Ollama. Beyond vendor neutrality, this lets us adopt best-of-breed tooling, build cost policies, and position the org to maximize its net leverage from AI.
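A thin routing layer makes the policy explicit. A minimal sketch, where the model names and task taxonomy are illustrative rather than our production config:

```python
from dataclasses import dataclass

@dataclass
class Route:
    model: str
    reason: str

# Illustrative policy: route by task type, with a local model as the
# cost-safe default.
POLICY = {
    "pr_review": Route("gemini-code-assist", "strong PR review workflow"),
    "second_opinion": Route("codex", "via the consult-codex skill"),
    "boilerplate": Route("qwen-local", "runs free on Ollama"),
}

def pick_model(task_type: str) -> Route:
    return POLICY.get(task_type, Route("qwen-local", "cost-safe default"))

print(pick_model("pr_review"))
```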
5. Background Agents
We are increasingly leaning into long-running background agents powered by CC for CI/CD workflows, repo-level refactoring, security patching, and the like. Spotify’s usage is inspiring in this area.
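A CI job can drive CC headlessly via its non-interactive print mode. A minimal sketch (the task prompt is illustrative; scope permissions tightly and check your CC version’s flags before relying on this):

```python
import subprocess

# Run Claude Code non-interactively from a CI job ("claude -p" is
# print mode). The task prompt here is illustrative.
result = subprocess.run(
    [
        "claude", "-p",
        "Upgrade vulnerable dependencies flagged by the security scan, "
        "run the test suite, and summarize the changes.",
    ],
    capture_output=True, text=True, timeout=3600,
)
print(result.stdout)
```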
AI assisted engineering has increased both our productivity and the joy of creating software, and it is building a competitive moat for our team. It is also reasonable to say that this is now non-optional. I hope our experience helps your journey.