Exploring the Advances in Identifying Useful Code Review Comments (2307.00692v2)

Published 3 Jul 2023 in cs.SE

Abstract: Effective peer code review in collaborative software development necessitates useful reviewer comments and supportive automated tools. Code review comments are a central component of the Modern Code Review process in industry and open-source development. Therefore, it is important to ensure these comments serve their purposes. This paper traces the evolution of research on the usefulness of code review comments. It examines papers that define the usefulness of code review comments, mine and annotate datasets, study developers' perceptions, analyze contributing factors from different aspects, and use machine learning classifiers to automatically predict the usefulness of code review comments. Finally, it discusses the open problems and challenges in recognizing useful code review comments for future research.
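The surveyed line of work trains classifiers on annotated review comments to predict usefulness automatically. As a rough illustration of that idea only — not the method of any specific surveyed paper — the sketch below trains a minimal bag-of-words Naive Bayes model on invented toy data (real studies use thousands of annotated comments and richer features such as developer experience):

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-alphanumeric characters.
    cleaned = ''.join(c if c.isalnum() else ' ' for c in text.lower())
    return cleaned.split()

class NaiveBayesUsefulness:
    """Multinomial Naive Bayes over bag-of-words features."""

    def __init__(self):
        self.word_counts = {"useful": Counter(), "not_useful": Counter()}
        self.class_counts = Counter()
        self.vocab = set()

    def fit(self, comments, labels):
        for text, label in zip(comments, labels):
            self.class_counts[label] += 1
            for tok in tokenize(text):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)

    def predict(self, text):
        total = sum(self.class_counts.values())
        best_label, best_score = None, float('-inf')
        for label in self.class_counts:
            # Log prior + log likelihood with add-one (Laplace) smoothing.
            score = math.log(self.class_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for tok in tokenize(text):
                score += math.log((self.word_counts[label][tok] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Toy training data, invented purely for illustration.
train = [
    ("This method leaks the file handle; close it in a finally block", "useful"),
    ("Consider renaming this variable to reflect that it holds bytes", "useful"),
    ("Null check missing before dereferencing the response object", "useful"),
    ("lgtm", "not_useful"),
    ("thanks, looks fine", "not_useful"),
    ("ok", "not_useful"),
]
clf = NaiveBayesUsefulness()
clf.fit([t for t, _ in train], [l for _, l in train])
print(clf.predict("missing null check before calling close"))  # useful
print(clf.predict("looks fine, thanks"))                       # not_useful
```

In practice the papers covered by this survey go well beyond such a sketch, combining textual, reviewer-experience, and semantic-similarity features, but the pipeline shape — annotate comments, extract features, train a classifier — is the same.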

