


Looks Good To Me: Constructive Code Reviews
Meet Adrienne Braganza: engineer, author, and code review advocate. Adrienne is a Filipina software engineer and bestselling author who’s passionate about making tech relatable and fun. From writing Coding for Kids: Python to educating tens of thousands of students on LinkedIn and speaking on global stages, Adrienne knows how to bring stories and skills to life.
In her latest book, Looks Good To Me: Constructive Code Reviews, Adrienne dives into a topic that every developer encounters but few master. Packed with lessons from her career, interviews with other developers, and practical tips, it’s the ultimate guide to making code reviews not just bearable, but constructive and rewarding.
We caught up with Adrienne to chat about the new book, who it’s for, and why code reviews matter more than you think.
What motivated you to write “Looks Good to Me”?
Bad code reviews. No, really! I've had enough less-than-ideal experiences, and confirmed with too many others, that code reviews are more often a pain than a positive. There was also no definitive resource on code reviews that was as comprehensive or cherished as those for other topics like system design or pair programming. I wanted to change that.
Who do you think will benefit most from this book: developers, team leads, or entire organisations?
I truly believe all will benefit; the impact this book can have cascades in all directions. If developers improve their code reviews - whether by making them more useful, less of a bottleneck, or better in some other way - that's a good thing. If team leads can improve the dynamic on their teams through more communicative and empathetic code reviews, that's a good thing. And when any of these things happen, organisations will always benefit.
Why do you believe code reviews are so often misunderstood or underutilised in software development?
Rampant "rubberstamping" (thoughtlessly signing off on your colleagues' code just to push it through the process) and having no collective purpose for code reviews seem to contribute to their bad rap. Unfortunately, both are pretty common. When your team loses confidence and trust in code reviews, the negative experiences and stories tend to persist. These influence others and can even adversely shape the next team's code review process; that's why there are still so many teams that despise code reviews.
What common challenges do teams face when conducting code reviews, and how does your book address them?
I address two of the most discussed dilemmas (in my experience and research): communication and structure.
Communication is hard; good communication is harder; clear, empathetic communication seems to be the hardest, especially in an environment that requires some critique and usually happens online. I give practical tactics for communicating more clearly (like making your PRs as detailed as possible) and more empathetically (like using the Triple-R method to structure feedback in a comment).
For structure, I touch on quite a bit: building a code review process from scratch, aligning your team's code review goals, strategies to mitigate bottlenecks, and workflows to run with your team to find existing gaps and issues (as well as suggestions on how to fix them), for a start.
What tools or techniques from the book do you think are the most transformative for improving the code review process?
I think the chapters on automation (chapter 5) and composing effective comments (chapter 6) will be clear favourites for readers. I've seen so many hold-ups caused by nitpicks like formatting; issues that, in my opinion, should not make it into the code review at all. My chapter on automation shows how these silly bottlenecks can be eradicated. Similarly, I think developers will enjoy my chapter on writing effective comments, specifically with structure and consideration for their colleagues in mind. The faster colleagues can align on intent, the better the review and feedback mechanisms in code reviews can be.
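As one illustration of the idea (our sketch, not an example from the book), formatting nitpicks can be caught before a human reviewer ever sees them with a pre-commit hook. The snippet below assumes a Python codebase formatted with black; the hook itself is hypothetical:

```python
#!/usr/bin/env python3
"""Minimal pre-commit hook sketch: keep formatting nitpicks out of review.

Hypothetical example assuming a Python codebase formatted with black.
Save as .git/hooks/pre-commit and make it executable.
"""
import subprocess
import sys

# `black --check` changes nothing; it simply exits non-zero if any file
# would be reformatted - exactly the kind of nitpick we want to catch
# before the code review starts.
result = subprocess.run(
    ["black", "--check", "."], capture_output=True, text=True
)

if result.returncode != 0:
    print("Formatting issues found - run `black .` before committing:")
    print(result.stderr or result.stdout)
    sys.exit(1)  # a non-zero exit aborts the commit
```

The same effect can be achieved with a CI check; the point is that a machine, not a reviewer, enforces formatting.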
Your book mentions aligning team goals and expectations. What's a good starting point for teams struggling with this?
In chapter 9, I outline a process for teams to define their (true) code review process, weaknesses, gaps, and all. Then, once that's laid out, I encourage teams to go through each of their defined weaknesses and figure out a solution for it. (I guess the caveat to this answer is good and open communication! If your team is ever to resolve the friction it experiences, they need to be able to talk to each other and be willing to collaboratively solve problems.) But going through this process together shows the whole team where they stand, so they can go from there.
Looking ahead, what do you see as the future of code reviews, especially with advances in AI and automation?
AI will be integrated, no doubt. What I predict is that we'll use AI to make code reviews faster and more thorough by handling the mundane parts: auto-generating PR titles and descriptions, offering code suggestions and fixes in response to comments, and other things like that. My hope, though, is that we don't lose our critical thinking skills through over-reliance on AI tools. I'd argue that we actually need to be even more critical of the code we review, because more of it will be written by AI!
Can you give me one key takeaway that you hope readers will implement immediately after reading your book?
Use the Triple-R method to structure your feedback, especially when it involves some change on the author's part. It stands for Request, Rationale, Result: state your Request, give the Rationale behind that request, and then give a Result, which is something your colleague can compare their change against.
For example, if you want to ask the author to change a variable name: "Hey, can we change 'item' to a more descriptive variable name? (Request) 'item' is a bit vague and doesn't capture the context it's in. (Rationale) Maybe a name like 'discountItem' or 'eligibleDiscountItem' would be more descriptive. (Result)"
Structuring feedback this way not only makes it objective, but also gives the author receiving it a clear, measurable end state for what you are asking them to do.
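To make that end state concrete, here is the before-and-after the example comment is asking for, as a small hypothetical Python snippet (the cart and apply_discount names are invented for illustration; the camelCase name is kept verbatim from the comment above):

```python
# Before: 'item' is vague about what is actually being processed.
for item in cart:
    apply_discount(item)

# After: the Result in the Triple-R comment gives the author a
# concrete target to compare their change against.
for discountItem in cart:
    apply_discount(discountItem)
```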
Where can you purchase the book?
You can purchase LGTM on Manning's website (https://www.manning.com/).