
Code review mastery: 8 practical examples that will help you elevate your code review game

Szymon Sadowski | April 2024


There aren’t many aspects of software engineering as crucial as code reviews. In this article, we’ll delve into 8 practical examples that will help you elevate your code review game. Let’s begin!

Example 1 - The style
// Author: Clara Commits 👩‍💻
while (inventoryCount >= orderCount)
{ (1)
  1. Reviewer (Nigel Nitpick 👨‍💼): You should have used the K&R indentation style here and placed the bracket in the line above.

What do you think about this feedback – is it good or bad?

This is bad feedback.

Why? An authoritative style guide should settle matters of style. Every organisation should have an established style guide, enforced automatically (e.g. via pre-commit hooks) wherever possible. In this case, the indentation style is something that can be automated away; it is not worth the time and effort of discussing and correcting it manually.
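As an illustration, style enforcement can be wired into pre-commit so that brace placement never reaches a human reviewer. The following is a minimal, hypothetical `.pre-commit-config.yaml` sketch that runs clang-format (a formatter that can enforce a brace style such as K&R) on staged files before every commit; the pinned `rev` is purely illustrative:

```yaml
# Minimal sketch: auto-format staged sources before each commit,
# so brace placement never needs to be discussed in review.
repos:
  - repo: https://github.com/pre-commit/mirrors-clang-format
    rev: v18.1.8        # illustrative pin; use a current release tag
    hooks:
      - id: clang-format
```

With a hook like this in place, Nigel Nitpick's comment simply never needs to be written.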

Example 2 - This is awesome

// Author: Luna Legend 👩‍💼
productLanguageEntity.setQuantityIncludedName(description.getIncludedQuantityName()); (1)
  1. Reviewer (Archie Analyzer 🧔🏻): Nice catch here. Thanks!

What do you think about this feedback – is it good or bad?

This is good feedback!

Why? We often treat code reviews as a contest of "who can spot more flaws in the submitted code", and so we frequently overlook the fact that code reviews can also serve as a platform for positive feedback. This has numerous benefits: it reinforces positive behaviours, fosters a positive work environment, and boosts morale.

Next time you come across someone’s outstanding work in a pull request, make sure to give them the recognition they deserve!

Example 3 - The small remark about the design

# Author: Safia Syntax 🧕🏼
import logging

class CPreProcessor:
    """A pre-processor for C source code."""

    logger = logging.getLogger("preprocessor")

    def __init__(self, coptions):
        self.coptions = coptions

    def include(self, filename, loc): (1)
        full_path = self.locate_include(filename, loc)
  1. Reviewer (Jade Judge 🙃): Actually, I don’t quite like this "#include" preprocessor directive idea here. Maybe we should rather have a real module system in C?

What do you think about this feedback – is it good or bad?

This is bad feedback.

Why? Imagine the situation: you’ve dedicated an immense amount of time and effort to implementing a feature that handles the "#include" directive in a C preprocessor, only to learn, after submitting your pull request, that the whole idea may be wrong at its core and a major rework is needed.

This is not good feedback because pointing out a major design flaw after the feature has been implemented is already too late. The optimal time to identify major design flaws and make critical design decisions is during the design review process.

Example 4 - The "small improvement while you’re at it"

// Author: Robert Refactor 😎
- def podConfig = findPodDefinitionFile("legacy-image.yaml", branchLabel)
+ def podConfig = findPodDefinitionFile("modern-image.yaml", branchLabel)
  pipeline {
      options {
          timeout(time: 4, unit: 'HOURS')
      }
      agent {
          kubernetes {
              yaml podConfig
          }
      }
      stages {
          stage('setup environment') { ... }
          stage('mvn validate') {
              steps {
-                 mvnValidate()
+                 mvnValidateModern() (1)
              }
          }
      }
  }
  1. Reviewer (Henry Heuristic 🤵🏾‍♂️): This is some great work, thanks! While you’re at it, could you refactor all of our 100 other pipeline jobs to use this new modernised docker image?

What do you think about this feedback – is it good or bad?

This is bad feedback.

Why? While code reviews are indeed valuable for enhancing the overall health of your codebase, it’s important to request only slight improvements each time a particular area of your code is touched. Introducing them gradually ensures that the codebase remains stable and manageable.

Example 5 - I don’t quite get this one

// Author: Xavier Quantum 🧑🏽‍🚀

import java.util.stream.IntStream;

public class QuantumFluxGenerator {
    public static void main(String[] args) {
        new QuantumFluxGenerator().initiateSequence();
    }

    private void initiateSequence() { (1)
        IntStream.rangeClosed(1, 10).mapToObj(i -> i % 2 == 0 ? "Quantum" : "Flux")
                .map(s -> s + IntStream.rangeClosed(1, 3).reduce(0, (a, b) -> a + b * (this.hashCode() % 2)))
                .forEach(System.out::println);
    }
}
  1. Reviewer (Helena Holography 👩‍🚀): I’m not entirely sure if I understand this part. Could you please provide some clarification on the functionality of this code?

What do you think about this feedback – is it good or bad?

This is good feedback!

It is common to be afraid to ask what the code exactly does – after all, many people may view the pull request, and you might worry about appearing clueless. However, the observation that you cannot comprehend the code can be a clear signal that it needs to be rewritten for easier understanding.

Of course, there are situations where the code cannot be made much more readable. In that case, the PR (pull request) author can either annotate the code with comments to make it more understandable, or respond to your comment with a written explanation, which helps you get back on track with reviewing the code and is also persisted for future readers in your code review tool.
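To make the point concrete, here is a minimal sketch (in Python, with hypothetical names) of the kind of rewrite a reviewer’s "I don’t understand this" comment often prompts for dense one-liners like the stream above: each step gets a name that states its intent.

```python
# Hypothetical readable rewrite: name every step so the intent is
# obvious at the call site instead of buried in a one-liner.
EVEN_LABEL = "Quantum"
ODD_LABEL = "Flux"

def label_for(number):
    """Label a number by its parity."""
    return EVEN_LABEL if number % 2 == 0 else ODD_LABEL

def labelled_sequence(limit):
    """Labels for the numbers 1..limit, in order."""
    return [label_for(i) for i in range(1, limit + 1)]
```

With names like these, the next reviewer does not need to ask the question at all.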

Example 6 - Why didn’t you just

// Author: Midnight Merge Miyu 🥷
public class UserService {
    private Map<String, UserDetails> cache = new HashMap<>(); (1)

    public UserDetails getUserDetails(String userId) {
        if (cache.containsKey(userId)) {
            return cache.get(userId);
        } else {
            UserDetails userDetails = database.queryUserDetails(userId);
            cache.put(userId, userDetails);
            return userDetails;
        }
    }
}
  1. Reviewer (Griffin Grumble 🤨): Why didn’t you just use Guava’s LoadingCache for caching?

What do you think about this feedback – is it good or bad?

This is bad feedback.

Why? The comment raises a couple of concerns. First, it introduces personal bias into the discussion (note the use of "you"). It is highly unlikely that a single person has ownership of the codebase; more likely it is owned by the entire team, so we should refrain from using the word "you" and replace it with "we" where possible. The comment also carries a passive-aggressive, judgmental tone. Instead, the reviewer should be courteous and respectful and phrase the comment kindly. A more effective version of the comment would also provide additional context about the concerns raised. For instance:

I noticed we’re implementing manual caching in the current solution. Exploring the possibility of integrating Guava’s LoadingCache as a replacement might be beneficial. It’s designed to handle automatic cache population, invalidation, and cleanup, potentially simplifying the codebase and enhancing efficiency. LoadingCache also offers advanced features that could improve the resilience and maintainability of the application

— Griffin Grumble 🤨
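As an aside, the simplification the improved comment suggests is not Java-specific. Here is a minimal Python sketch of the same idea (hypothetical names; the call counter exists only to make the caching visible), replacing manual dictionary bookkeeping with the standard library’s `functools.lru_cache`:

```python
from functools import lru_cache

# Hypothetical stand-in for database.queryUserDetails from the snippet above.
def query_user_details(user_id):
    query_user_details.calls += 1  # count real lookups, for illustration only
    return {"id": user_id, "name": f"user-{user_id}"}

query_user_details.calls = 0

@lru_cache(maxsize=1024)
def get_user_details(user_id):
    # The cache decorator takes over the population and eviction
    # bookkeeping that the manual HashMap version does by hand.
    return query_user_details(user_id)
```

Repeated calls with the same `user_id` hit the cache, so the underlying lookup runs only once.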

Example 7 - Nitpick:

// Author: Theresa Terabyte 🏄‍♀️
return () -> {
    Builder builder = ProductTrends.builder(); (1)
  1. Reviewer (Felix Feedback 👨🏿‍💻): Nitpick (non-blocking): let’s chain these.

What do you think about this feedback – is it good or bad?

This is good feedback!

Why? It is a good idea to mark the severity of a comment.

Is the issue you’ve spotted something that requires immediate attention, or can it be addressed at a later stage? In other words, is it a blocking issue or a non-blocking one? Is the comment a nitpick, some optional suggestion, some to-do item, a major showstopper or perhaps just some FYI (for your information) type comment? Make it clear in your comment like in the example above!

Example 8 - Demonstrating data

# Author: Diana Deploy 😉🎉
import subprocess

def query_bazel_dependencies(target_directory):
    command = f'bazel query "kind(rule, deps({target_directory}:all))" --output=package'  (1) (2)
    try:
        # check=True makes subprocess.run raise CalledProcessError on failure
        output = subprocess.run(command, shell=True, capture_output=True, check=True).stdout.decode()
        return parse_bazel_dependencies(output)
    except subprocess.CalledProcessError as e:
        return []
  1. Reviewer (Paris Patch 👩🏻): If we recursively query only for java_ rules here, it may give us a performance boost.
  2. Author (Diana Deploy 😉🎉) RE: I don’t think it will, I guess the Bazel server startup will consume more time than we could save from omitting non-java rules.

What do you think about this response from the PR (pull request) author?

This is a bad response from the PR (pull request) author.

Why? Guessing is rarely a good idea in engineering. It’s far more effective to conduct experiments and present factual evidence to validate our hypotheses. Decisions should be made based on data.

An example of a good response from the PR (pull request) author could be as follows:

I did some tests using the "time" Linux command-line utility, and the results indicate that the current strategy using the ":all" aggregate target performs better than recursively querying only for java_ rules:

strategy all (no cache after bazel clean --expunge):
real 1m22.039s
user 0m4.894s
sys 0m0.398s

strategy querying only for java_ rules (no cache after bazel clean --expunge):
real 2m7.919s
user 2m1.356s
sys 0m14.606s

— Diana Deploy 😉🎉️

Other principles

Having explored those 8 practical examples, let’s top it off with some additional best practices for both PR (pull request) authors and code reviewers.

As a PR (pull request) author, you should also:

  1. Perform a self-review before submitting a change. Even if it’s not your regular practice, you might be surprised by the number of issues you can catch by reviewing your own pull request - especially after some time has passed since completing your work on it.
  2. Write a good pull request description. As a rule of thumb, you should make it as easy as possible for the reviewer to check your code. This is a win-win situation. The reviewer will have an easier job reviewing your PR (pull request). You will get better feedback on your pull request, and the merged code could end up in better shape than otherwise.
  3. Split up your PRs (pull requests) into a manageable size. A SmartBear study indicated that developers should review between 200 and 400 lines of code (LOC) in one sitting; beyond 400 LOC, a developer’s proficiency in identifying errors decreases. Another practice you may try is crafting your commits so that reviewers can read them one by one and follow the same thought process you had while implementing the feature.

Conversely, as a PR (pull request) reviewer, you should strive for:

  1. Making the code review process quick. Why? According to the 2023 DORA research, "teams with shorter code review times have 50% better software delivery performance". Long and tiresome code review processes can be a major source of developer frustration. To increase code review efficiency, consider automating manual tasks, keeping pull requests (PRs) small, and allocating dedicated time for reviews.
  2. Engaging junior members of the team. There is a common misconception that pull requests should be reviewed and accepted only by senior members of the team. However, having more junior members review pull requests, or parts of them, can be a great way for them to get familiar with the codebase and gain additional experience. Transferring knowledge should be one of the main functions of the code review process, alongside ensuring the quality of the code before it is merged. You may also be surprised by how much you can learn from the insights of junior team members!

Closing thoughts

Code review stands as a crucial pillar in maintaining high code quality and facilitating the transfer of knowledge among team members, fostering an environment of continuous learning and improvement. By applying the best practices of code reviews, teams can significantly enhance their engineering output, making it an indispensable tool in the arsenal of software development.