The human-code-context problem
Agentic coding is creating a new problem that's not getting enough attention: human engineers are losing context.
At this point, most engineers have tried "vibe-coding." You type out a prompt and repeatedly tell Claude Code to "continue, continue, continue." Soon you're accepting each new version without carefully checking it. The quick pace pulls you in, and your vigilance fades.
Eventually, though, you hit a wall. The AI-generated code grows quickly, and a small feature suddenly spans thousands of tokens. Errors stack up, trapping you in endless debugging loops. Your progress slows to a crawl.
You stop to think: your app seems to work, at least on the surface. But the codebase, even the documentation, feels unfamiliar, like someone else's shallow understanding of your project. It reminds you of stumbling on a disappointing two-star GitHub repo. You wonder how the app is structured, how its parts connect, and how it can grow.
The core issue here is lost context. An LLM can reload project details into its context window on demand, but human understanding fades quickly when you're not actively working in the code.
Previously, gaining context happened naturally. Hours of manual coding forced you to learn every small detail. You wrote tests, fixed logic errors, thought about state management, and carefully chose endpoint designs. You planned for future growth and built thoughtful abstractions. This careful involvement built a strong mental map of your project, making navigation easy, even months later.
Today, code created by LLMs can feel foreign. Claude, your AI helper, "knows" the code briefly, but only at a shallow level. When the task ends, that shallow understanding disappears. Both you and Claude approach the code fresh every time, with no accumulated context.
Losing context matters greatly. Context makes engineers uniquely valuable. It’s why even highly skilled new engineers spend months "ramping up" when joining a new project. This period is about deeply learning the specific details, patterns, and risks of the project. Only after gaining this deep understanding can engineers contribute strong opinions, important improvements, and good strategic decisions.
Without context, engineers struggle to guide architecture or solve difficult problems, significantly reducing their value. Agentic coding risks taking away engineers' most important asset: accumulated context. This loss reduces engineers to passive reviewers, disconnected from the detailed knowledge necessary for thoughtful design and innovation.
The closest analogue I have to agentic coding is my move from individual contributor to engineering manager. In management, my hands-on involvement with the code dropped sharply. My understanding became surface-level, built mostly from short conversations, quick reviews of pull requests, and occasional bug checks. My focus shifted from technical details to people-focused concerns: a valuable but fundamentally different skill.
That's exactly how I feel when vibe-coding. It's like supervising an eager intern who quickly produces code that mostly works, while neither of us deeply understands the structure or inner workings of the application.
Some people believe future AI models will soon write flawless code. Upcoming models like Claude 5 or 6 will certainly improve. But regardless of how advanced AI gets, humans must remain the architects and final decision-makers. Security, accountability, and strategic choices require engineers to keep ultimate responsibility. Preserving context is therefore crucial. Without it, we risk blindly approving AI-written code without true oversight.
Maintaining context requires deliberate effort. Engineers need to set aside regular, uninterrupted time to read and deeply understand AI-generated code. Writing some code yourself, even occasionally, helps embed key details in your mind. Co-writing interactively with AI, much like traditional pair programming, keeps you engaged. And it pays to keep code changes small and easy to review, even when the AI can quickly generate much larger updates.
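The "small, reviewable changes" habit can even be automated. Here's a minimal sketch in Python of a pre-review size check that parses the output of `git diff --numstat`; the function name and the 200-line budget are illustrative choices, not a standard tool or threshold:

```python
# Sketch of a diff-size guard: parse `git diff --numstat` output and
# flag changes too large to review carefully in one sitting.
# The 200-line budget is an illustrative assumption, not a standard.

MAX_CHANGED_LINES = 200


def diff_is_reviewable(numstat_output: str, budget: int = MAX_CHANGED_LINES) -> bool:
    """Return True if total added + deleted lines fit within the budget.

    `numstat_output` is the text produced by `git diff --numstat`:
    one tab-separated line per file (added, deleted, path).
    Binary files report '-' for both counts and are skipped here.
    """
    total = 0
    for line in numstat_output.strip().splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added == "-" or deleted == "-":
            continue  # binary file: no line counts to add
        total += int(added) + int(deleted)
    return total <= budget


if __name__ == "__main__":
    sample = "120\t15\tsrc/app.py\n30\t5\ttests/test_app.py\n"
    print(diff_is_reviewable(sample))  # 170 changed lines, within budget: True
```

Wired into a pre-push hook or CI step, a check like this nudges you to split oversized AI-generated diffs into pieces a human can actually absorb.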
Ultimately, the engineer, not the computer, is responsible for the codebase. Keeping context isn't just helpful; it's essential. As agentic coding grows more common, actively investing in human understanding will become a major advantage. Engineers must protect and strengthen their roles as knowledgeable caretakers of their projects. Without deep context, engineers risk becoming passive observers, overseeing large, unfamiliar codebases without true mastery or meaningful control.