Journal article Open Access

Does Reviewer Recommendation Help Developers?

Vladimir Kovalenko; Nava Tintarev; Evgeny Pasynkov; Christian Bird; Alberto Bacchelli


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Vladimir Kovalenko</dc:creator>
  <dc:creator>Nava Tintarev</dc:creator>
  <dc:creator>Evgeny Pasynkov</dc:creator>
  <dc:creator>Christian Bird</dc:creator>
  <dc:creator>Alberto Bacchelli</dc:creator>
  <dc:date>2018-08-28</dc:date>
  <dc:description>Selecting reviewers for code changes is a critical step for an efficient code review process. Recent studies propose automated reviewer recommendation algorithms to support developers in this task. However, the evaluation of recommendation algorithms, when done apart from their target systems and users (i.e., code review tools and change authors), leaves out important aspects: perception of recommendations, influence of recommendations on human choices, and their effect on user experience.

This study is the first to evaluate a reviewer recommender in vivo. We compare historical reviewers and recommendations for over 21,000 code reviews performed with a deployed recommender in a company environment and set out to measure the influence of recommendations on users' choices, along with other performance metrics.

Having found no evidence of influence, we turn to the users of the recommender. Through interviews and a survey, we find that, although recommendations are perceived as relevant, they rarely provide additional value for the respondents. We confirm this finding with a larger study at another company. This finding makes a case for more user-centric approaches to designing and evaluating reviewer recommenders.

Finally, we investigate information needs of developers during reviewer selection and discuss promising directions for the next generation of reviewer recommendation tools.

Preprint: https://doi.org/10.5281/zenodo.1404814</dc:description>
  <dc:identifier>https://zenodo.org/record/1404814</dc:identifier>
  <dc:identifier>10.5281/zenodo.1404814</dc:identifier>
  <dc:identifier>oai:zenodo.org:1404814</dc:identifier>
  <dc:relation>info:eu-repo/grantAgreement/SNSF/Careers/PP00P2_170529/</dc:relation>
  <dc:relation>info:eu-repo/grantAgreement/NWO//2300187620/</dc:relation>
  <dc:relation>doi:10.5281/zenodo.1404755</dc:relation>
  <dc:relation>doi:10.5281/zenodo.1404813</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:source>IEEE Transactions on Software Engineering</dc:source>
  <dc:subject>code review</dc:subject>
  <dc:subject>recommender systems</dc:subject>
  <dc:subject>user-centric evaluation</dc:subject>
  <dc:subject>empirical software engineering</dc:subject>
  <dc:title>Does Reviewer Recommendation Help Developers?</dc:title>
  <dc:type>info:eu-repo/semantics/article</dc:type>
  <dc:type>publication-article</dc:type>
</oai_dc:dc>
Statistics

                      All versions    This version
Views                 510             510
Downloads             314             314
Data volume           1.8 GB          1.8 GB
Unique views          455             455
Unique downloads      284             284
