From Fagan inspections to pull requests: how code review became a daily habit
Code review is so ingrained in modern software development that it can feel like it’s always been part of the process. In reality, the path here was winding and occasionally contentious. We started out holding formal “inspections” over printed code listings, gradually moved toward informal “peer reviews,” then embraced online patch exchanges over email, and finally settled into today’s familiar pull request model. Each step brought its own debates - about the value of rigor versus speed, the role of human judgment, and who should hold the power to approve changes.
The Era of Printed Code and Formal Inspections
In the mid-1970s, a rigorous approach known as “Fagan Inspections” emerged at IBM. Engineers would meet face-to-face, carefully reading through physical printouts to detect errors. The emphasis was strictly on finding defects, not discussing architecture or style. Everyone had a defined role - a moderator, the author, and designated inspectors - and the process felt more like a controlled experiment than a conversation. It worked well at surfacing defects, but these sessions demanded time, discipline, and a willingness to treat code review as a heavyweight procedure. For many teams, that was a tall order.
(Note: Michael Fagan’s original work laid a foundation for these practices, though plenty of contemporaries contributed to the concept of formal inspections around that time.)
From Rigid Ritual to Peer Review
By the 1980s and ’90s, as software sprawled and teams diversified, developers sought methods that felt more natural. Instead of strict, multi-person inspections, many gravitated toward simpler “peer reviews.” These might be as informal as a colleague scanning through code on a lunch break. Some industry voices - like Karl Wiegers in the early 2000s - described a spectrum of review methods, ranging from highly formal inspections down to casual “deskchecks.” This evolution reflected a shift in culture. Code review was becoming less about meeting a rigid standard and more about fostering shared understanding and improving the codebase incrementally.
Still, adoption varied. Even with evidence that review, in almost any form, improved quality, many teams struggled to make it a routine part of their process. Some asked: Is this worth the overhead when compilation and testing are cheap? Others countered that no automated tool could match human insight into design and logic. That debate continues to this day - and it is only getting more complicated with LLMs.
When Code Review Hit the Inbox
As development moved online, so did code review. Early open-source communities - famously, the Linux kernel - began exchanging patches via email. The conversation and critique unfolded in the same threads, making the process feel continuous and distributed. Version control systems like CVS and Subversion made it easier to track and integrate contributions from many hands, encouraging developers to propose changes early and get feedback before the code landed.
This stage was pivotal. Email review was scrappier and less structured than inspections but still drove meaningful discussion. It also hinted that review could happen anywhere, anytime, without everyone crammed into the same conference room.
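To make that concrete: a patch is just a unified diff traveling as plain text, which is what made email a workable review medium in the first place. Here is a minimal sketch of that mechanism using only Python’s standard library - the file contents, subject line, and addresses are placeholders, and real kernel contributors use diff and git tooling rather than a script like this.

```python
import difflib
from email.message import EmailMessage

# Two versions of a small source file: what is in the tree vs. the proposed change.
old = ["def greet(name):\n", "    print('hello ' + name)\n"]
new = ["def greet(name):\n", "    print(f'hello {name}')\n"]

# The "patch" is nothing more than a unified diff rendered as plain text.
patch = "".join(difflib.unified_diff(old, new, fromfile="a/greet.py", tofile="b/greet.py"))

# Wrap it in an ordinary email; reviewers reply inline in the same thread.
msg = EmailMessage()
msg["Subject"] = "[PATCH] greet: use an f-string"  # placeholder subject
msg["From"] = "contributor@example.org"            # placeholder addresses
msg["To"] = "project-dev@example.org"
msg.set_content("Small cleanup, no behavior change.\n\n" + patch)

print(msg)  # headers plus the diff, ready to hand off to an SMTP client
```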
The Web Interface Revolution
The mid-2000s brought a leap forward: web-based code review tools. Inside Google, Mondrian - a tool built by Guido van Rossum and colleagues - experimented with features we now take for granted: personalized inboxes of pending changes, easy diff navigation, and inline commenting. Although Mondrian never saw a public release, it demonstrated that review could be integrated seamlessly into a developer’s daily flow.
Public tools soon followed. Gerrit and Phabricator offered structured, web-based workflows that addressed email’s limitations. Then GitHub arrived with its “pull request” model, baking code review into the very core of repository hosting. Rather than patch files scattered across mailing-list threads, you had a single, central conversation tied directly to the code, with inline comments and a clear audit trail. The friction of reviewing code plummeted.
Classical Debate #1: Should code review focus on correctness and quality, or also shape team norms and coding style? Early tools emphasized correctness. Today’s platforms encourage collaborative refinement - sometimes sparking heated debates over whether a comment addresses a “real bug” or just personal style.
Pull Requests as the New Norm
By the 2010s, the pull request had taken the crown. Whether on GitHub, GitLab, or internal tools at major tech companies, the review step was baked into the daily push-and-merge cycle. Some companies ran their own variations - Facebook’s Phabricator had “Diffs,” Google’s Critique had “changelists,” and enterprise platforms were tuned to in-house workflows - but the pattern was the same: small changes, proposed early, discussed asynchronously, then integrated quickly.
Classical Debate #2: Is speed or thoroughness more important? Modern teams often clash over how aggressively to push changes through review. Some strive for near-instant approvals to maintain velocity, while others fear that rushing sacrifices the inspection-era emphasis on quality. Over time, code reviews appear to be trending faster and faster.
Where We Are and Where We’re Headed
Today’s code review practices represent a compromise between the old and the new. We’ve traded some of the formal rigor of Fagan Inspections for a more lightweight, conversational process. This trade-off suits most environments - startups optimize for speed, open-source communities value transparency and inclusivity, and large enterprises split the difference, often imposing automated checks and policies to maintain quality at scale.
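As one concrete illustration of what “automated checks and policies” can look like, the sketch below asks GitHub’s REST API whether a pull request has at least one approving review before allowing a merge to proceed. It is a minimal, hypothetical example rather than any particular company’s gate: the repository names and the one-approval threshold are assumptions, and most teams express this kind of rule through branch protection settings or CI rather than a standalone script.

```python
import os

import requests  # third-party HTTP client (pip install requests)

GITHUB_API = "https://api.github.com"


def has_required_approvals(owner: str, repo: str, pr_number: int, minimum: int = 1) -> bool:
    """Return True if the pull request has at least `minimum` distinct approving reviews."""
    resp = requests.get(
        f"{GITHUB_API}/repos/{owner}/{repo}/pulls/{pr_number}/reviews",
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",  # assumes a token in the env
        },
        timeout=10,
    )
    resp.raise_for_status()
    approvers = {review["user"]["login"] for review in resp.json() if review["state"] == "APPROVED"}
    return len(approvers) >= minimum


if __name__ == "__main__":
    # Hypothetical usage: refuse to merge until review has actually happened.
    if not has_required_approvals("example-org", "example-repo", pr_number=42):
        raise SystemExit("Refusing to merge: pull request #42 is not yet approved.")
```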
The future might bring new twists. AI tools already attempt to highlight suspicious code automatically. Will automated suggestions free reviewers to focus on higher-level concerns? Will some teams rediscover the value of more formal methods for critical code paths, while others double down on frictionless reviews?
We’ve come a long way from printed listings and marathon inspection sessions. The modern code review story is one of steady cultural adaptation: from a strict gatekeeping exercise to a more iterative, human, and connected practice. The right balance continues to be hotly debated - just as it has been for decades.