harborsparrow said:
Peer code reviews are highly effective IF there is no negative consequence for anyone when faults are detected. The problem is, the theoretical software engineering "experts" decided to count everything, including faults found, and then managers (egged on by HR) decided to weaponize the "scores" from peer reviews to determine who was better at coding. Once that happened, by tacit agreement, none of us ever again turned in a report showing any faults in anyone's software (because, "do unto others as you would have them do unto you"). If there are going to be "reports" of faults found from peer reviews, those reports must be anonymized. They were at first, and in those days we found all kinds of things in any given review. It was painful to go through, but also extremely helpful to programmers, as well as improving the resulting software. IMHO.
Your experience with code reviews is bizarrely different from mine.
1) If I thought someone was holding back on finding or reporting errors in my code, I would visit their office and work things out with them.
2) Reviewers should be getting a "review package" that includes what is required of the code changes, the code additions and updates, the test code additions and updates, and test reports - including a regression test report (along with things like project charge codes).
The first item reviewed is whether that review package is complete. Reviewers can certainly ask for additional test cases if it looks like something is missing.
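To make that first check concrete, here's a minimal sketch of what an automated completeness check on such a package might look like. The file names are hypothetical - every shop bundles these artifacts differently:

```python
# Minimal sketch of a review-package completeness check.
# The artifact names below are made up; adapt to whatever
# your shop actually bundles into a review package.
from pathlib import Path

REQUIRED = [
    "requirements.md",       # what is required of the code changes
    "code_diff.patch",       # code additions and updates
    "test_diff.patch",       # test code additions and updates
    "test_report.txt",       # results for the new/updated tests
    "regression_report.txt", # regression test report
]

def missing_items(package_dir: str) -> list[str]:
    """Return the names of required artifacts absent from the package."""
    root = Path(package_dir)
    return [name for name in REQUIRED if not (root / name).exists()]

if __name__ == "__main__":
    missing = missing_items("review_package")
    if missing:
        print("Review package incomplete; missing:", ", ".join(missing))
    else:
        print("Review package complete - ready for reviewers.")
```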
Supposedly, the author has already written and tested the changes - so there shouldn't be that many bugs.
And from the author's point of view, addressing review issues is the short home stretch of a much longer process of discovering exactly what the requirements are and putting a solution together.
Most coders prefer coding over reviewing. So, the politics in the places I have worked are more like: if you get to my reviews quickly, I will get to your reviews quickly. Another "politic" I try to add is making the review package as useful and easy to understand as possible - hoping that certain practices (such as automatically capturing the full test environment in summary reports) catch on.
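As an example of that last practice, here's a rough sketch of capturing the test environment using nothing but the Python standard library (assuming a Python shop; the same idea ports to any ecosystem):

```python
# Sketch: automatically capture the test environment so it can be
# prepended to a test summary report. Standard library only.
import platform
import sys
from importlib import metadata

def environment_summary() -> str:
    """Collect OS, interpreter, and installed-package versions."""
    lines = [
        f"OS:     {platform.platform()}",
        f"Python: {sys.version.split()[0]}",
        "Installed packages:",
    ]
    for dist in sorted(metadata.distributions(),
                       key=lambda d: (d.metadata["Name"] or "").lower()):
        lines.append(f"  {dist.metadata['Name']}=={dist.version}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Reviewers see exactly what the tests ran against.
    print(environment_summary())
```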
I certainly understand why "experts", especially "subject-matter experts", are used in a peer-review process - and, hopefully, well before the review begins. But if the author loses ownership of the project to those experts, then it isn't really a "peer" review.
From time to time, I point out that software engineering is not a good sport for those who hate being wrong. If only 99% of your lines of code are perfect, your program will simply not work. At 99.9%, it will probably show signs of life, but is unlikely to be useful. At 99.99%, it will probably be usable - but those last bugs will definitely be noticed - and you are not done.
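The back-of-the-envelope arithmetic behind that claim, for a hypothetical 10,000-line program:

```python
# Hypothetical 10,000-line program: how many lines are wrong
# at each "percent perfect" level mentioned above?
LINES = 10_000
for correct_rate in (0.99, 0.999, 0.9999):
    buggy_lines = LINES * (1 - correct_rate)
    print(f"{correct_rate:.2%} perfect -> ~{buggy_lines:.0f} buggy lines")
# 99.00% -> ~100 buggy lines: the program simply won't work.
# 99.90% -> ~10 buggy lines: signs of life, unlikely to be useful.
# 99.99% -> ~1 buggy line: probably usable, but it will be noticed.
```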