Janelle Klein of New Iron, the Program Chair of AgileAustin, hosted an open space meeting on the question: How do we go from principles/practices/enthusiasm to REAL honest-to-goodness improvement? For example, Janelle offered:
- For everything you really want to improve, make a story for it and put it into your upcoming sprint. Make it an action item. Beware the constant pressure to just live in the now (and "get stuff done") and not improve.
The meeting was on December 6. I am just now getting around to making my notes presentable, as I have been sick. I attended a group discussion on how to have code reviews that "you can live with," given that most employers do not want to budget for pair programming. My notes:
- Why do teams want to do code reviews? Answer: Programmers care about maintainability.
- FishEye and Crucible are code review tools not unlike SmartBear's (which I used at framesdirect.com). Crucible works in tandem with FishEye and is used for collocated teams.
- A suggestion: if a defect makes it through a review, perhaps the reviewer should have to fix it.
- A point of discussion: should a code review be communal or between two individuals (the reviewer and the reviewed)? If you send a swath of code to the whole team for review, no one owns it and, likely, no one will comment on it. Yet a goal of a review may be to educate the team about how the code works. Perhaps communal ownership should not be measured in defects found. These insights suggested a division, in terms of who should be involved, between reviewing code for knowledge sharing and reviewing code for bug hunting and maintainability.
- VNC (Virtual Network Computing) screen sharing reduces the cost of reviews in many cases, taking reviews from formally scheduled events to the realm of "Hey dude, got a second?"
- It is wise to have defect retrospectives.
- There was concern that a code review process that is too formal leads participants to zone out and participate less, yet the need for structure was nonetheless emphasized. The consensus seemed to be that a team should start with a rigorous process and then scale back to a lightweight one as the team builds up instincts for what works.
- Should a code review be subjective or objective? There seemed to be a desire for both. Hard limits, such as code formatting requirements, seem prudent, yet other concerns (for example, architectural approaches) are inherently more subjective. Zen and the Art of Motorcycle Maintenance was referenced, underscoring that the objective is not superior to the subjective.
- Does a junior/junior code review work, to any extent, as a junior/senior review might? Such reviews could let developers grow together, or could let them reinforce each other's bad habits. There was no consensus.
- Michael Fagan invented an inspection process for defects. In such an inspection, the reference point for a defect is a shared standard, which develops a community approach and takes the personal sting out of criticism.
- One participant was "not too worried about design" (in terms of reviews) given how community-owned it is. Teams whiteboard design all the time.
- Of checklists:
- NASA used to use checklists. Once they found an escape (a defect that slipped through), they would add it to their checklists. When the trouble no longer appeared in checking, the item came off the checklist(s).
- The community has to own a checklist, and keeping a checklist community-owned can be a challenge. Having different checklists that come at the code from different angles (design, etc.) is a good solution here.
- A record of commonly occurring bugs may be addressed and documented by way of a checklist.
- Things that might appear on checklists:
- Review of Dependencies!
- Are there any off-by-one loop errors?
- Look for inappropriate mutable state and inappropriate public setters.
- "Every time I do one of these, I have to remember to do this other thing over here."
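To make two of those checklist items concrete, here is a small sketch of my own (not something shown at the meeting) of the kinds of defects a reviewer would scan for: an off-by-one loop error, and state that should not be publicly settable.

```python
# Illustrative only: the function names and the Order class are my own
# examples, not code anyone presented at the meeting.

def last_items_buggy(items, n):
    # Off-by-one: the range stops one index early, so the final item is skipped.
    return [items[i] for i in range(len(items) - n, len(items) - 1)]

def last_items_fixed(items, n):
    # Correct: range() is already exclusive of its stop value.
    return [items[i] for i in range(len(items) - n, len(items))]

class Order:
    """An order id should never change, so expose it read-only."""

    def __init__(self, order_id):
        self._order_id = order_id

    @property
    def order_id(self):
        # Read-only property: no public setter to review for misuse.
        return self._order_id

print(last_items_buggy([1, 2, 3, 4], 2))  # [3] -- off by one
print(last_items_fixed([1, 2, 3, 4], 2))  # [3, 4]
```

A checklist entry like "off-by-one loop errors?" prompts exactly the comparison between the two functions above: check every loop boundary against the intended first and last elements.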
Interesting things said that were slightly off topic:
- Architecture is a process of restraints!
- It was argued that tests that straddle many concerns may, in fact, be very effective. It was also argued that obscure tests are bad: the day the architecture changes, such tests are hard to understand. A test should make clear, when it fails, why it is failing.
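The last point can be shown with a small sketch of my own (the discount function is a made-up example, not something from the meeting): the same check written two ways, where only the second tells a future reader why it failed.

```python
# Illustrative example: apply_discount and both tests are my own invention.

def apply_discount(price, rate):
    """Return the price after applying a fractional discount rate."""
    return round(price * (1 - rate), 2)

def test_discount_obscure():
    # Obscure: a bare comparison reports only "assert 84.99 == 85.0"
    # with no hint of what business rule was being checked.
    assert apply_discount(100.0, 0.15) == 85.0

def test_discount_clear():
    # Clear: the failure message states intent, inputs, and expectation.
    result = apply_discount(100.0, 0.15)
    assert result == 85.0, (
        f"15% discount on 100.00 should yield 85.00, got {result}"
    )

test_discount_obscure()
test_discount_clear()
```

The clear version costs one extra line, and it is the difference between a failing test that diagnoses itself and one that sends you digging through the code.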