Anyone who has worked in software development professionally knows about code review: the practice of developers reviewing each other’s code to spot errors, suggest improvements, and ensure that knowledge is shared within the development team.
Code reviews are effective, common in the industry, and at the same time really hard to do well. One would think that universities would teach such an industry best practice — but that is not the case for code reviews.
There are multiple reasons why lecturers in university do not teach students how to do code review:
- They don’t know how to teach it
- They don’t know how to assess it
- They come from an academic background where code review rarely happens
I am a software developer and have done a fair amount of code reviewing myself. I am also a university lecturer in computer science. To be honest, reviewing code written by others is one of the hardest things I do.
I will outline an approach to teaching code reviews with peer feedback. Beyond teaching students to do code reviews, peer feedback has the added benefits of teaching students about the subject matter and how to think critically.
How to teach code reviews with peer feedback
In essence, the setup I propose for teaching the skill of code review is the following:
- Let students submit their code for review prior to the actual submission deadline.
- Ask each student to review code from 2–3 other students using a feedback rubric to help them focus their review.
- When students receive their reviews, ask them to give feedback to the reviewer on the usefulness of the review.
- Finally ask the students to use their received feedback to improve their code for the final submission.
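The reviewer assignment in the second step can be done with simple circular shifts, which guarantee that every student reviews — and is reviewed by — the same number of peers, and that nobody reviews their own work. A minimal sketch (the function and names are my own illustration, not part of any particular tool):

```python
import random

def assign_reviewers(students, reviews_per_student=3):
    """Assign each student the work of `reviews_per_student` peers
    using circular shifts of a shuffled order, so every submission
    receives exactly that many reviews and nobody reviews themselves."""
    order = students[:]
    random.shuffle(order)  # randomize so pairings differ each time
    n = len(order)
    assignments = {s: [] for s in order}
    for shift in range(1, reviews_per_student + 1):
        for i, reviewer in enumerate(order):
            assignments[reviewer].append(order[(i + shift) % n])
    return assignments

# Example: 5 students, each reviewing 2 peers
print(assign_reviewers(["Ana", "Ben", "Cleo", "Dan", "Eva"], 2))
```

This balanced scheme assumes more students than reviews per student; for very small groups you would cap the number of reviews accordingly.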
Let students resubmit the work
Receiving feedback is only useful if you can act on it. Providing a review to someone who can’t use it will still teach both the reviewer and the submitter something valuable, but the effect is much clearer if students get a chance to improve their submissions based on the feedback.
One of the challenges this brings is that it takes more time: students need time to do their reviews and to improve their work after submitting the first version. There is really no good solution to this problem.
Another challenge is the potential for increased plagiarism. If all students are working on very similar tasks (like solving a simple programming exercise), then students will (being somewhat rational as they are) steal good ideas and solutions from the work they review. When students work on more open-ended, distinct projects, however, this is less of a problem.
Let them give feedback on the review
In order to incentivize students to write great reviews, I propose letting them give feedback on the reviews they receive. One way this feedback could be given is by asking the receiver of a review to answer the following survey for each review:
- Constructivity: Is the review helpful, and does it explain how to improve your code? (Possible answers: No, it needs more work / Somewhat / Yes, it was great).
- Specificity: Does the review point to specific things in your code? (Possible answers: No, it needs more work / Somewhat / Yes, it was great).
- Justification: Does the review provide explanations, reasons and arguments? (Possible answers: No, it needs more work / Somewhat / Yes, it was great).
- Kindness: Is the review kind, and does it use friendly language? (Possible answers: It was too harsh / It was neutral or friendly).
- Open feedback: Do you have any comments for the reviewer?
Using the answers to the first four questions above, it is even possible to compute a review score — for example by treating “No, it needs more work” as 0% and “Yes, it was great” as 100% and then averaging the scores. This review score can either guide the teacher toward the students who need more help with their reviews, or (as in my course) even be a part of the final grade in the course.
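As a sketch of how such a score could be computed — the exact mapping of answers to percentages is my own illustrative choice, not a fixed standard:

```python
# Map each rubric answer to a fraction of the maximum score.
# Three-level questions use 0 / 0.5 / 1; the kindness question is
# binary. These mappings are illustrative, not a standard.
SCALE = {
    "No, it needs more work": 0.0,
    "Somewhat": 0.5,
    "Yes, it was great": 1.0,
    "It was too harsh": 0.0,
    "It was neutral or friendly": 1.0,
}

def review_score(answers):
    """Average the mapped answers into a 0-100% review score."""
    values = [SCALE[a] for a in answers]
    return 100 * sum(values) / len(values)

# A review rated "great" on two questions, "somewhat" on one, and
# "neutral or friendly" on kindness averages to 3.5 / 4 points.
score = review_score([
    "Yes, it was great",
    "Somewhat",
    "Yes, it was great",
    "It was neutral or friendly",
])
print(f"{score:.1f}%")  # prints 87.5%
```

An unweighted average treats all four criteria as equally important; a teacher could just as well weight constructivity or specificity more heavily.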
Some research actually suggests that this approach of giving marks for providing helpful feedback is a great way to encourage students to assess work accurately.
Limit the size and scope of the review task
Reading other people’s code is hard. To make code review effective, refrain from asking students to review too much code at the same time. This resource claims that “… the single best piece of advice we can give is to review between 100 and 300 lines of code at a time and spend 30–60 minutes to review it.”
Another way to help students provide good reviews is to help them focus their feedback on specific things. Generally, you should not use code review to check whether the code works correctly (that is what tests and a compiler are for). Ask students to focus on style, comments and documentation, modularity, use of testing, error handling, security, algorithms etc. These are things that humans are (still) somewhat useful for checking.
One way to help students focus is by using a feedback rubric. You can present it in the form of a small survey, for example like this:
- Documentation: Is the code properly documented and commented? (Possible answers: It needs more work / Somewhat / Yes it is great). Where should the documentation be better?
- Error handling: Does the code handle errors properly? (Possible answers: It needs more work / Somewhat / Yes it is great). Where should the error-handling be better?
- Suggestions: Provide two suggestions for the author on how to improve the code.
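A rubric like this can be represented as plain data, which makes it easy to render as a survey form or reuse across assignments. A possible encoding — the field names here are my own illustration, not any particular tool’s format:

```python
THREE_LEVEL = ["It needs more work", "Somewhat", "Yes, it is great"]

# Each rubric item pairs a closed question with an open follow-up,
# mirroring the documentation and error-handling items above.
rubric = [
    {
        "criterion": "Documentation",
        "question": "Is the code properly documented and commented?",
        "answers": THREE_LEVEL,
        "follow_up": "Where should the documentation be better?",
    },
    {
        "criterion": "Error handling",
        "question": "Does the code handle errors properly?",
        "answers": THREE_LEVEL,
        "follow_up": "Where should the error-handling be better?",
    },
    {
        "criterion": "Suggestions",
        "question": "Provide two suggestions for the author on how to improve the code.",
        "answers": None,  # free-text item, no fixed scale
        "follow_up": None,
    },
]

# Render the rubric as a plain-text survey.
for item in rubric:
    print(f"{item['criterion']}: {item['question']}")
    if item["answers"]:
        print("  Possible answers: " + " / ".join(item["answers"]))
```

Keeping the closed-scale answers and the open follow-up question together in one item nudges reviewers to justify each rating, not just tick a box.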
Allow students to object and discuss the reviews
Quality of code is rarely black and white. There will be disagreements, different correct solutions and personal preferences. After students have submitted their feedback on the reviews they received, let them continue the discussion both with each other and with you as the teacher. Having a discussion around the code review is a great way for students to learn from each other and learn to communicate about a technical subject.
If the review process has any influence on the grades that students are getting, it is very important that students have a chance to object and get a teacher evaluation of their work if they disagree. Similarly, if the quality of a review contributes to the grade, then they should also be able to object to the feedback on their reviews.
Teaching code review with Eduflow
Beyond teaching university, I am also co-founder of Eduflow, a product for instructors to run engaging online learning experiences (for example peer review). The way we have designed Eduflow is very much in line with the pedagogical design mentioned above.
Students can submit their work as code files, IPython notebooks or links to GitHub (this all happens in a submission activity). It is possible to run the entire peer review process in a double-blind, anonymous setting (you can use our peer review flow for this).
Eduflow automatically takes care of assigning reviewers to code in a smart way and lets you set up a feedback rubric easily. You also get the feedback-on-feedback step (through the feedback reflection activity). Finally, instructors get a quick and thorough overview of the entire process, so they can quickly see what students excel at and where to focus their teaching.
Edit: As with all good ideas, I am of course not the first person to think of this. In a series of blog posts and finally a thesis project, Mike Conley (who is now a software developer at Mozilla) goes through an experiment on the effectiveness of teaching students to do code reviews through peer assessment. In his work he comes to the conclusion that students learn from reviewing code, that students are able to fairly accurately assess code quality, but also that students generally don’t like to grade the work of their peers (they don’t mind reviewing it — it is just the grading). Part of his setup is letting teaching assistants grade the work first, and then giving students credit based on how close their marks are to those given by the TA — which according to the research I mentioned above might actually have the opposite effect!