
Over the summer I changed jobs, moving from a senior engineer role at a big company to a principal engineer role at a much smaller one. I now oversee about 20 junior-to-mid-level engineers working on three different reservation-booking systems. All the booking systems run on the same proprietary back-end framework, which is managed by another team that I don't oversee.

Last month, on the night before launching a big release for one of the booking systems, a senior engineer ("Clint") on the back-end support team left a bunch of comments on a pull request we had opened to merge our release candidate into Master.

He left the feedback at the end of the day, while we were preparing a ticket for our release engineer to deploy at 4:30 AM the next morning. The feedback ranged from somewhat helpful to somewhat unhelpful. We merged to Master anyway, and the next day I asked if he could have reviewed the code earlier instead of waiting until after integration testing.

He told me that it's not his job to teach my engineers (he's right, it isn't). But it's hard for me to coach 20 engineers at once, even with them doing peer reviews and policing each other's code. I'm also worried my team was a little demotivated since they were unable to do anything to address the feedback.

We have another release scheduled right after we get back from Thanksgiving, and given that Clint has declined all of our code review meeting invitations this month, I think I'm going to see a repeat of the same thing in a couple of days.

I can't tell if Clint really wants to help or just flex his ego. I would love his help coaching our junior developers, but the way he's doing it is unhelpful. I don't think my engineers will ever be able to catch everything Clint can.

How can I tell Clint that if he wants to provide feedback, it needs to be on our terms?

EDIT: I'm embarrassed I left this detail out, but our engineers open pull requests from their feature branches into development, which is where this feedback should go (on those pull requests). When those changes are all ready to go to our production environment (after our QA engineers do integration testing and verify that, as far as they can tell, the changes are safe), we open a PR to merge into Master. Engineers approve that PR once they agree we didn't introduce anything horrible, and then we deploy the next morning.
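
For concreteness, here's a rough sketch of that promotion flow as plain git operations. This is purely an illustration: the branch names are the ones above, but in reality each step goes through a reviewed pull request rather than a direct merge.

    # Illustrative sketch only. Assumes a local clone with branches named
    # "development" and "master"; our real process promotes via reviewed PRs.
    import subprocess

    def git(*args: str) -> None:
        """Run a git command and stop if it fails."""
        subprocess.run(("git",) + args, check=True)

    def promote_release(feature_branches: list[str]) -> None:
        # QA merges each tested feature branch into development, then the
        # development branch is deployed to the dev environment for integration testing.
        git("checkout", "development")
        for branch in feature_branches:
            git("merge", "--no-ff", branch)
        # Once integration tests pass, development is promoted to Master
        # (via a reviewed PR in practice) and deployed the next morning.
        git("checkout", "master")
        git("merge", "--no-ff", "development")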

Henrik Rosseau
  • What is Clint's role in the PR/merge process? Do you need his review to be allowed to merge? Is he just providing advice? – sleske Nov 20 '18 at 07:55
  • What's the overall process here? From what you have written, integration testing has been done and you have a PR open to merge to master, and you also imply you deploy from master as a result. Why is the PR done after the integration testing? How long was the PR open for? What other opportunities existed for feedback? What happens to your testing if a PR comment is made which highlights a fundamental issue? It sounds like there is a lot to be discussed here in a wider context with the entire team, and not necessarily an issue with Clint leaving comments on the PR imho. – Moo Nov 20 '18 at 09:29
  • Why did you wait until the last possible moment to merge? Is there a reason you didn't aim to merge a few hours earlier, at which point there probably would've been enough time to address any comments? If a pull request is open, it's expected that people will leave comments - that seems more like a problem on your side than their side. Also, how was he supposed to know he shouldn't comment? Note that if the comments themselves are requesting that you basically waste time changing things unnecessarily, you have a different question on your hands. – Bernhard Barker Nov 20 '18 at 09:43
  • It's a matter for OP to clarify, but to all of the commenters suggesting the PR should have been done earlier, if their process is anything like where I work (git flow) then the PR/merge to master is how the deploy to production is triggered. There would have been many earlier PRs to a develop (or trunk) branch for each individual feature, and PRs to various release branches for QA or other environments. Code reviews should have happened then, not on the final merge to master. – David Conrad Nov 20 '18 at 22:30
  • @sleske Great question! That may be the source of our problem, but I see his role as letting us know none of our changes are harmful based on what he understands of the booking platform. I can merge without his review, and he is just providing advice – Henrik Rosseau Nov 21 '18 at 03:10
  • @Moo Thank you for the thoughtful response, I hope I can describe the overall process well enough: QA engineers merge feature branches one by one to our development branch, then we deploy the development branch to our dev environment for integration testing. If tests pass, they merge development up to master for smoke tests on a staging environment. Changes are only merged to master after passing integration tests, and by way of pull request as an additional opportunity to catch breaking changes. I'd like his feedback earlier when we can react properly, but he doesn't want to provide it then – Henrik Rosseau Nov 21 '18 at 03:23
  • @Dukeling We try to keep the Master branch in sync with production and deploy shortly after merging to it. Our QA engineers expect a day for testing the changes on development branch before merging to Master for smoke testing and deploying to production shortly after. My question is about keeping feedback limited to things that would block a deploy when the person offering feedback doesn't want that kind of restriction – Henrik Rosseau Nov 21 '18 at 03:28
  • @DavidConrad Yes, close enough. It's not automatic, but we try to keep new commits off Master until they have been thoroughly tested and we are ready to deploy to production – Henrik Rosseau Nov 21 '18 at 03:29
  • Code review his code review? Code Review Review: "Interesting feed back. Wrong in places. Needs more proactive approach. Practice better timing." – WernerCD Nov 22 '18 at 00:35
  • You don't mention ever informing Clint of when you expect him to complete feedback by. Have you communicated any sort of timeline with him at all? If you have timelines he needs to work with, he needs to know that so he can communicate his tasks and priorities to his superiors. – jpmc26 Nov 22 '18 at 00:38

10 Answers


Unless Clint finds any major “do not deploy” bugs, I would simply thank him for his feedback and explain to him how you intend to address the points he raises (if valid) in future releases.

If he has a problem with this, then you have the opportunity to explain why it would be better for him to give feedback earlier.

Ultimately, it is his problem he is giving feedback so late. Don’t make it yours by taking the feedback as a personal / team failing.

Joe Stevens
  • +1 this is the right answer. Feedback is feedback, but as the answer says unless "Clint" found some kind of blocker issue, the feedback will be addressed later. If "Clint" is not happy about that, they can review the code earlier. – jcmack Nov 20 '18 at 05:54
  • Would you still say it's his problem if he didn't know they wanted to merge that evening? – Bernhard Barker Nov 20 '18 at 09:55
  • It's not his problem if this was his first real opportunity to give feedback. And even if it was, his problem is your problem is everybody's problem because you're all part of the same team and all wish it to succeed, no? – Lightness Races in Orbit Nov 20 '18 at 10:35
  • I fail to see how it's Clint's problem. Clint is part of another team; it seems he is giving feedback independently of the process. The problem is that OP does not perceive it as very useful (because of timing). I think no one there has a clear idea what the code review should be about, or everyone has their own idea. It can serve many purposes. They should get on the same page. – luk32 Nov 20 '18 at 11:59
  • I think this answer is right - I just don't see any 'problem' here. Maybe I'm misunderstanding? Clint provided new information, some of which OP believes is valuable. Awesome. That's always a good thing. If that info indicates that you can't deploy or merge or whatever, then it's BETTER to know asap. If that info indicates minor issues, awesome, log it and triage it appropriately. No problems here. If you want to be more responsive to Clint's feedback then sure, you need to get him involved earlier but I still see this as a perfectly fine situation as-is. – Rob P. Nov 20 '18 at 15:56
  • This is the way a company should handle it. It's a mistake to think that every code review comment requires an action. Once you've caught any obvious mistakes (hopefully by the first person to review) a lot of comments will be "things that would be nice to do" or "things to do better next time" or just "things you could do differently but maybe will keep on doing the same". – Stuart F Nov 20 '18 at 16:05
  • Lol, it's Clint's problem in the sense that IF he wants his feedback to be taken seriously, THEN it is up to Clint to provide such feedback sufficiently early in the process. – industry7 Nov 20 '18 at 18:38
  • An end-of-day comment for a 4:30 am deployment the next day? Heck, I changed procedures to allow deployments only Monday to Wednesday because someone f/ked up a Friday post-workday deployment and created a huge mountain of work on Monday. Clint shouldn't have expected the feedback to be actionable except for the most severe show-stopper type bugs. – Nelson Nov 21 '18 at 01:10
  • This doesn't address how to discuss this issue with the team. But it's fairly obvious -- you explain to the team that this is very helpful feedback that will allow them to do better next time. No deployed code is ever perfect and there are always things you can do better and someone who points that out to you is helping you learn. – David Schwartz Nov 21 '18 at 17:28
  • I would also add that this is also a management issue. All code reviews should be done before QA. Pre-QA code fixes are cheap; post-QA code fixes are expensive because the code has to pass QA again, so it's only getting fixed if it's a show stopper. If someone is doing a code review post-QA, that is wasting the company's money. Either the process should be addressed so that code review happens on time, or don't bother at all. – Tezra Nov 21 '18 at 20:57
  • @RobP. The problem I see is that OP is taking Clint's feedback too seriously. It's clearly not intended to apply to the release being made, since Clint is unlikely to be an idiot, and so should be kept in mind for future releases. – David Thornley Nov 27 '18 at 22:58

Either a code review is required for the release, or it is not required. If it is required, then the reviewer must be responsible for reviewing this in time. You must have the power to go to his desk and say "drop everything else and do this review, or we can't make the release date". And you must have the power to say "sorry, we couldn't release in time because the review wasn't done in time".

If it is not required, then you release.

PS. If Clint is given one day to review weeks or months of work, that seems to indicate that the review wasn’t needed. But if Clint finds problems in that short time, it seems to indicate that previous reviews were not quite as good as they should have been.

gnasher729
  • +1 - responsibility without the power to really do things is a problem I've seen more often than I'd like. – Mołot Nov 20 '18 at 11:50
  • Given that "Clint" was apparently given less than a day to review man-weeks or perhaps man-months of work, this answer seems quite a way off the mark. Making the reviewer "responsible" for reviewing code faster than it's actually possible for them to review it is clearly unreasonable. And since Clint apparently did manage to leave some useful comments in the few hours he had, it seems all the more perverse to suggest that anything would be fixed if only the OP had the ability to force Clint to drop his own priorities and review or to blame Clint for being too slow. – Mark Amery Nov 20 '18 at 12:03
  • @MarkAmery I don't see in the question any indication that Clint's time was limited by OP. If OP had the power to ask "how much time do you need for review?" and then power to say "OK, so you need to start at or before X date, because we need feedback before Y", then it would be OK. If OP can't do it, he shouldn't be held responsible for failure to respond to feedback. It should be a management problem, not OP's. It is not about blaming Clint, but about not requiring OP and his team to do the impossible. – Mołot Nov 20 '18 at 12:51
  • +1 Also, OP's team is free to address non-blocker feedback before the next release. – knallfrosch Nov 20 '18 at 14:15
  • @Mołot Peter Parker's Uncle Ben was close when he said "with great power comes great responsibility". They don't come together. They're the same thing. If I don't have the power to control how something is done, I am not responsible for how it's done wrong. Too many people try to divorce power and responsibility from one another, but it simply can't be done, and the attempts to do so fail. Whenever someone tries to hold me responsible where I have no power, I say "what exactly do you think I could have done differently to prevent ${BAD_THING}?"; the answer usually amounts to "nothing". – Monty Harder Nov 20 '18 at 20:10
  • @MontyHarder Redefining the term "responsible" to suit the current discussion is hardly beneficial. If they ask you about the review, and expect your response, and you do respond (mainly because you tie internally the response to your professional self-image), then that's it: you are as responsible for the thing as it gets. You can fit such definition without having the complete power or control over every aspect. – kubanczyk Nov 21 '18 at 10:55
  • @Mołot My reading of the sequence of events was that there were less than 24 hours between the PR being opened (and thus available for Clint to review) and it being merged over his objections. That interpretation is now confirmed by the OP. It was opened at some point during a workday, Clint managed to ram in some review, and then the team ignored it all and merged at 4:30 AM the next day, with the OP blaming Clint for not having reviewed earlier the PR that didn't yet exist. It seems pretty nuts to me. – Mark Amery Nov 26 '18 at 21:36
  • @MarkAmery in such circumstances it is a problem with the schedule. And I still think OP should either be able to ask how much time Clint needs and schedule accordingly or, if he doesn't have such power, OP shouldn't be blamed for the short time Clint had or the fact the review was ignored. The only person to blame is the one who designed such a tight schedule. – Mołot Nov 26 '18 at 21:58
  • @Mołot Sure, it may be out of the OP's hands. (Or may not. You're assuming, I think, that the schedule was outside the OP's control, but nothing corroborates that in the question; the OP manages a large team and plausibly sets or at least influences their schedule.) Regardless, that doesn't redeem an answer that blames the situation specifically and solely on the OP's lack of power over Clint when the OP concedes that Clint immediately left useful feedback the same day that he was given something to review. No amount of power will let the OP compel Clint to review code before it exists. – Mark Amery Nov 26 '18 at 22:08
  • @MarkAmery According to the question, Clint has been invited to code reviews but has not accepted. There's no evidence that OP is withholding the code until late in the process. Clint apparently chooses on his own to review only at the end of the process. The conclusion is that Clint doesn't think the input he could give earlier is really worth his time. – David Thornley Nov 27 '18 at 22:47
  • @DavidThornley Yes, according to the question, Clint has now been invited to code review meetings. Even if we take his declining them as evidence that he's choosing not to review until the end of the process (as opposed to, say, them clashing with more important meetings, or him being instructed by his manager not to spend time doing another team's work for them), that's got nothing to do with the prior incident described by the OP. The conclusion doesn't follow at all, since we have no information on why Clint declined the meetings. – Mark Amery Nov 27 '18 at 23:06

It sounds like there are a number of issues here, but all of them are addressable.

Process issues:

  • What is the purpose of this pull request? Is it meant to be reviewed, or is it simply a means to merge to master from a QA'ed and tested branch? If the former, then you should consider different scheduling for the review process. If the latter, it should be merged almost the instant it's created.
  • What do you do with "late" review comments? It sounds like Clint's comments didn't go through the full review process, even though they were "late". (What defines "late"?) Even if you've committed to merging and none of the comments are show-stoppers, it should be possible to classify his comments as actionable or not, and reply in the PR as appropriate.

Personal issues:

  • I'm getting a bit of a "we're not all one team" vibe here. This may stem from understandable frustration on your part, but if you respect Clint -- and I gather you do -- it's appropriate to try to fix this. He may be turning down your review meeting requests now because he's busy, or it may be that he feels that you really don't want his input anyway.
  • Backfill to show your real intentions. If you think that some of the things he said were real issues, file tickets, and make sure that he's included on them.

Leaving the PR open for "real review" up until the evening before the push means that either you take all the reviews seriously and don't merge or push while there are unaddressed comments, or you change the push scheduling so that there's sufficient time to complete the review and to schedule or cancel the push (e.g., PRs not merged at least 24 hours before the scheduled push don't go out).
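
As a rough illustration of that last parenthetical (the helper name and the timestamps are made up; in practice you'd enforce this with a branch-protection rule or a CI check rather than a script):

    # A minimal sketch of a "merged at least 24 hours before the push" cutoff.
    # The function name and example times are hypothetical.
    from datetime import datetime, timedelta

    MERGE_CUTOFF = timedelta(hours=24)

    def rides_this_push(merged_at: datetime, scheduled_push_at: datetime) -> bool:
        """A PR only goes out in a given push if it was merged at least 24 hours earlier."""
        return scheduled_push_at - merged_at >= MERGE_CUTOFF

    # A PR merged the evening before a 4:30 AM push misses it and waits for the next push.
    print(rides_this_push(datetime(2018, 11, 19, 18, 0), datetime(2018, 11, 20, 4, 30)))  # False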

It might be a good idea to invite him for a coffee and talk it over, letting him know that yes, you do value his input, but that the process as it is right now meant that you'd not gotten his review early enough to act on it this time, emphasizing "this time". Let him know you're looking at changing the process, and ask him if he has any suggestions.

Ask him what would make it easier for him to contribute, and make sure he understands that you appreciate that he bothered. If he is really trying to help, then seeing no action taken on his comments demotivates him, too. It sounds like there were no stoppers, since you went ahead with the merge, but it also sounds like the work, from his point of view, was wasted effort.

Thank him for finding the issues he pointed out and let him know that you intend to address the significant things he brought up. You should consider doing that both one-to-one with him and publicly in the closed PR to make it clear to him and the team that you're glad he's helping. Showing gratitude, in private and in public, for his caring and volunteering to help, even if none of the review comments were actionable from your team's point of view, is just the polite thing to do, and it helps build solidarity between teams.

Joe McMahon
  • To be more specific on the "busy" reason, Clint might feel like he's being dragged into a team and a workload that's not his responsibility when he already has enough (possibly even too much!) of his own to worry about. In that case, including him on bug reports and making other efforts to make him feel more involved/valued may actually work against the relationship. – jpmc26 Nov 22 '18 at 01:42
  • IME support engineers are often over-burdened. It would be helpful to find out Clint's reasons for declining the code review invitations (he may simply be too busy). Letting Clint know that his contributions are valuable, however late, would be a good way to bring him closer to the OP's team. If Clint has capacity, but feels like he's not close enough to get involved earlier, he may just feel uncomfortable about treading on other people's toes. – linguamachina Nov 22 '18 at 21:06
  • Yeah, this is why you go out for coffee and talk it over. This one time he seems to have volunteered, so checking in and seeing if and how he'd like to contribute seems like the most direct way to work things out, and in an informal setting, he can blow off steam if he feels like he wasn't valued. – Joe McMahon Nov 24 '18 at 02:42

How can I tell Clint that if he wants to provide feedback, it needs to be on our terms?

Unless Clint works for you, or there is some sort of formal process which dictates when comments are not permitted, you can't force Clint to adhere to "your terms".

You can ignore his comments. You can thank him for the "somewhat helpful" comments to encourage more of those. You can add items to your technical backlog to address those helpful comments. You can continue to invite him to code review meetings. You can encourage your team not to get demotivated based on a few late comments.

Joe Strazzere

1) Is there nobody between you and the 20 engineers who can help you mentor them? Twenty people is too many for one team, because there's no way one person can manage and mentor 20 people. You need to cut the size of your team, or at least the size of your responsibility, because you're spread way too thin.

2) Regarding releasing the code, is Clint on your team? If Clint is on your team, then Clint should be involved in doing code reviews, especially if he is a senior engineer. You should encourage your mentees to send code reviews to Clint where possible (don't overload him, but encourage them to involve Clint in the process). If Clint is not on your team, then unless the issues Clint finds are critical operation-breaking bugs, have him log them as bug reports; he's not on your team, he's not responsible for your product.

3) As for Clint missing "code review meetings": firstly, I'm not sure what a "code review meeting" is; it sounds silly to me. Code reviews are asynchronous operations: the developer submits code, the reviewer does the review while the developer does something else, the review finishes, the developer addresses comments, rinse, repeat. But besides that, if Clint is on your team, Clint should be attending team meetings. If Clint is not on your team, Clint does not need to attend team meetings. It's as easy as that.

Ertai87
  • "Code review meetings" - We used to do these. Once a week, one of the lead devs would go through various recent merge requests and point out different examples of particularly good and bad code, and specifically what was good or bad about it. Then everyone would have a big discussion about it. It's a great way to distill knowledge from experience into something that juniors can understand and absorb. – industry7 Nov 20 '18 at 18:42
  • Code reviews can be async. They can also be group meetings where the code is walked through. Turns out having a group can be more productive as different people catch different things. – DaveG Nov 20 '18 at 21:12

Somebody needs to define the "process", e.g. the sequence in which things happen.

As a developer, if I'm not the one defining the process then I'll do a code review (if that's my job, as expected by my manager) when the process tells me to and/or when someone asks me to.

I don't understand the bit in the question which says, "declined all our code review meeting invitations this month".

I'm not sure what a "code review meeting" is, by the way. My idea of a code review is:

  1. Someone codes it
  2. I review it (in my own time) and make notes
  3. I meet with the coder to discuss my review

If I'm "senior" then perhaps I'm not sitting in on other people's review meetings.

To spare my time (reduce my effort) I like to review code after it has been tested. I might still find bugs (e.g. if the testing isn't perfectly complete), but it's easier to review code that works than code which doesn't. Reviewing code which doesn't work is called "development and debugging" and is more time-consuming.

Can you make "integration testing" any easier, more automated perhaps? So that you could do:

  • Development
  • Integration testing
  • Review
  • Fix review comments
  • Integration testing again

Alternatively you merge before code review (if you trust that integration testing was sufficient without review), do the code review on the main branch, and use the code review comments as input for what you might want to improve on a subsequent iteration (a subsequent "sprint").
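
For example, here's a minimal sketch of what one automated integration smoke test could look like, assuming a hypothetical booking API on a dev environment (the base URL, endpoints, and payload are invented for illustration; it would run with pytest and the requests library):

    # Hypothetical smoke test against a dev environment, runnable with pytest.
    # BASE_URL, the endpoints, and the payload are assumptions for illustration.
    import requests

    BASE_URL = "https://dev.example.com/api"

    def test_service_is_up():
        response = requests.get(f"{BASE_URL}/health", timeout=10)
        assert response.status_code == 200

    def test_can_create_and_cancel_a_booking():
        created = requests.post(
            f"{BASE_URL}/bookings",
            json={"room": "101", "nights": 1},  # invented payload
            timeout=10,
        )
        assert created.status_code == 201
        booking_id = created.json()["id"]

        cancelled = requests.delete(f"{BASE_URL}/bookings/{booking_id}", timeout=10)
        assert cancelled.status_code in (200, 204)

If something like that runs automatically on every merge to the development branch, then re-running it after review fixes is cheap, and the "test, review, fix, test again" loop stops being expensive.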

ChrisW
  • -1 for "code review should be done after testing". Some testing can be automated, but a lot of testing is manual (this is particularly true for frontend code, but can be true for backend code as well). Manual testing can take a long time; code reviews are generally short. – Ertai87 Nov 20 '18 at 18:10
  • By "after testing" I mean, at a minimum, after rerunning automated smoke tests plus whatever you do to test that the new feature works as specified. Different systems have different amounts of automation in their testing (and I said that if their testing was more automated then they might afford to do it more often). – ChrisW Nov 20 '18 at 18:33
  • Testing during development is different from testing in preparation for release, and depending on the tests, they could take a significant amount of time, so you want to limit how often you run them. – Joe W Nov 20 '18 at 19:43
  • @JoeW Yes and "the night before a big release" does sound rather [too] late. I guess that "the feedback ranged from somewhat helpful to somewhat unhelpful" means he didn't find any show-stoppers, and so the OP was right to release on time when they did. And that's not all bad -- the fact a code review doesn't find show-stoppers isn't necessarily a bad thing. What is his job, if not to "teach my engineers"? Is it to just look at (i.e. to smoke) the changes in the integration branch before release? – ChrisW Nov 20 '18 at 20:06
  • @ChrisW I definitely agree that untested code should not be reviewed (I would go one step further and say it should not be committed at all!). However, as Joe said, the tests a developer should be expected to run against their code is different from the tests a QA would run. The former, sure. The latter, not so much. – Ertai87 Nov 20 '18 at 20:13
  • I think you missed a critical issue with doing any testing before reviewing the code itself. If the integration/smoke tests take any significant amount of time to run, does it make sense to run the tests and then do a peer review which could find issues requiring the tests to be run again? Wouldn't it make more sense to have the review done beforehand so any potential issues could be found before testing happens, so you minimize how often you need to run those tests? – Joe W Nov 20 '18 at 20:48
  • @JoeW I suppose you're right, i.e. the OP implied that his "integration tests" were costly or time-consuming -- so maybe the "process" should specify that inspection be done before integration testing? If it is before integration testing, is that before or after integration (I don't know)? Perhaps I was trying to make excuses for Clint, i.e. guess at what Clint's motives might be. If he wants to review that late then maybe his inspection should be viewed as part of the integration test, i.e. just "go/no-go", and all his semi-helpful style comments discarded ... – ChrisW Nov 20 '18 at 21:00
  • ... or triaged, e.g. the OP could optionally add them to his team's code review guidelines, or use them for code review training. – ChrisW Nov 20 '18 at 21:02
  • "Reviewing code which doesn't work is called 'development and debugging' and is more time-consuming." I think this is more a trade-off, and it might depend on the ability/experience of the developers. Reviewing before QA tests can save overall if you're finding major problems that mean largely revamping the work; there's not much point in testing code that needs to be largely thrown away (e.g., you find out some code fetching a few thousand report results grabs related data one record at a time). I try not to search for actual bugs but instead focus on things like overall approach and norms. – jpmc26 Nov 22 '18 at 01:36

Pretty sure this guy is just telling you you're doing a bad job. That's what "it's not my job to teach your engineers" means. He's not interested in doing your job for you; he wants you to do it better. That's why he reviewed code destined for production (code you signed off on), not code in the pipeline. As for what to do about that, well, that's a topic for another question.

  • I honestly think this hits the nail on the head. Most people not on your team don't want extra work from your team. They just want your team to do good work so they don't have to think about it anymore. – jpmc26 Nov 22 '18 at 01:40
  • I agree that this is a likely scenario. Since "Clint" has worked on the codebase longer, he likely knows it better as well. – Katinka Hesselink Nov 22 '18 at 13:34

Beware.

✓ Senior Developer
✓ Finds issues in the code nobody else did
✓ Team discouraged because of the late reporting
✓ Reports something about "not his job" but reviewed anyway
✗ is setting up for a repeat of the same scenario

It doesn't look all that good for the team, or for the Sr. Developer either.

But I say to you: beware. Make sure you understand all of the comments on that pull request. And I mean really understand. "That one's just wrong" doesn't count unless you really expended the effort on checking.

Probably there's nothing major waiting to be destructive to you. But there could be an extremely subtle data loss bug that he found that cannot be revealed in testing. You won't know the difference until you understand all the comments.

Addendum: To restore team morale, make sure the team gets enough time to fix all of the issues the Sr. Developer found in that pull request that the rest of the team agrees should be fixed.

Joshua

The issue here is that your workflow process is flawed. Peer reviews should happen prior to integration testing, for the simple fact that waiting until after testing is done can slow down the process. If the peer review finds issues that require changes and integration testing is already done, more time will be needed to go back and redo all the testing after the changes are complete.

Ideally the developer should be writing and performing tests as they are developing the code and any final testing should be done after the code is peer reviewed to ensure that it is performing as expected. One thing to remember is that a peer review can spot bugs in the code before they can take down a system during integration testing.

Joe W
  • "The issue here is your workflow process is flawed." I don't understand this perspective. To me it seems that the team is trying to do peer reviews early, and the issue is that Clint is ignoring them until the last minute. The workflow process seems fine, but Clint appears to be refusing to participate effectively. – DarthFennec Nov 21 '18 at 00:28
  • @DarthFennec Why would you ever want to do integration testing before doing a peer review of the code? To me that seems flawed as you will have to repeat any of the testing that was done if any issue are found with the code in the review. – Joe W Nov 21 '18 at 01:16
  • I'm not disagreeing with that. I'm saying OP doesn't appear to be disagreeing either. "the next day I asked if he could have reviewed that code earlier instead of waiting until after integration testing." It sounds like OP and his team are trying to do the peer review before integration testing, and the stated problem is that the reviewer (Clint) is acting against that workflow process. – DarthFennec Nov 21 '18 at 01:31
  • @DarthFennec Part of the problem here is the original question's lack of detail which lets us read very different underlying narratives into the story. Your reading, I guess, is that the team wanted Clint to review but he shirked his duties until the last minute. My reading was that Clint was uninvolved in the project and had no authority over it, then got ambushed on the day of the deadline with a giant PR of man-months of crap code that he'd had no prior chance to review, and then the OP got annoyed at the "late" feedback and merged it over Clint's objections. – Mark Amery Nov 21 '18 at 12:14
  • @MarkAmery "then got ambushed on the day of the deadline" does that not seem to contradict OP's behavior? Why would OP say "I asked if he could have reviewed that code earlier instead of waiting until after integration testing" if Clint was ambushed with the code after integration testing? Why would Clint's response be "it's not my job" instead of "I didn't know about it before integration testing"? I'm not saying you're wrong, I just don't understand how you reached your conclusion. I do agree that it would have been better if OP had been more clear about this, though. – DarthFennec Nov 26 '18 at 21:21
  • @DarthFennec I agree that the OP's attitude seems irrational given my interpretation, but it's confirmed by the OP's edit. The sequence of events is: 1. a bunch of development that Clint has no role in gets done by the OP's team, 2. release PR gets opened, 3. Clint gets brought in to sanity-check and does so the same day, leaving some negative feedback, 4. release happens at 4:30AM the next day anyway, 5. OP complains to Clint about the "late" feedback. The "it's not my job" comment seems reasonable from someone who went beyond the call of duty to leave extensive feedback for members of ... – Mark Amery Nov 26 '18 at 21:47
  • @DarthFennec ... another development team and then got berated for it and told they should've done more, and earlier, on a project that wasn't theirs to begin with. The OP's response to it just seems like petty blame-shifting without much justification behind it - which is, at least, something human and understandable. – Mark Amery Nov 26 '18 at 21:48
  • @MarkAmery From the edit: "we open a PR to merge to Master after eng's approve the changes" I was interpreting this to mean that Clint was one of these eng's, and he waited to approve the changes until after the PR to Master was made, instead of doing it before or parallel with QA's integration testing like he was supposed to. But it really could go either way. The fact that OP was asked what Clint's role was multiple times and didn't directly answer that in the edit is suspicious; if he's sidestepping the question on purpose, that lends credence to the blame-shifting interpretation. – DarthFennec Nov 26 '18 at 22:01
  • @MarkAmery There's nothing to show that Clint's input was required. (Indeed, the deployment was made before his comments could be considered.) OP is inviting Clint to code reviews and is afraid that the last-minute comments that can't possibly be useful for that release will happen again. OP obviously wants Clint to help educate his team, and obviously Clint doesn't want to do it. Any accusation of blame-shifting really should show that blame exists in the first place, and I don't see that. – David Thornley Nov 27 '18 at 22:56

You have two different kinds of reviews (three if you count your in-person meeting). That's not intrinsically bad, but it does mean you need to make your expectations very clear for each circumstance. I would create a pull request template for your final review that looked something like the following. This makes it very clear not only what is expected of this review, but also how to get your feedback addressed more successfully in the future.


<Summary of changes>

This is a pre-production review. Its purpose is to find major problems like:

  • Merging mistakes
  • Accidentally including an incomplete feature
  • Showstopper bugs

Barring any of those sorts of problems, this pull request will be merged at <date and time>.

The individual design reviews, where we discuss readability, maintainability, and overall architecture, have already been completed. If you find those sorts of issues here, please open an issue or make the changes in your own pull request to development instead of cluttering this pull request. The design reviews were performed in the following pull requests:

  • <Link to PR 1>
  • <Link to PR 2>
Karl Bielefeldt