Journal article Open Access

A Replication Package For The Paper "Improved Retrieval of Programming Solutions with Code Examples Using a Multi-featured Score"

Rodrigo F. Silva; Mohammad Masudur Rahman; Carlos Eduardo de Carvalho Dantas; Chanchal Roy; Foutse Khomh; Marcelo A. Maia


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.5115300</identifier>
  <creators>
    <creator>
      <creatorName>Rodrigo F. Silva</creatorName>
      <affiliation>Faculty of Computing, Federal University of Uberlândia</affiliation>
    </creator>
    <creator>
      <creatorName>Mohammad Masudur Rahman</creatorName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-3821-5990</nameIdentifier>
      <affiliation>Faculty of Computer Science, Dalhousie University</affiliation>
    </creator>
    <creator>
      <creatorName>Carlos Eduardo de Carvalho Dantas</creatorName>
      <affiliation>Faculty of Computing, Federal University of Uberlândia</affiliation>
    </creator>
    <creator>
      <creatorName>Chanchal Roy</creatorName>
      <affiliation>Department of Computer Science, University of Saskatchewan, Canada</affiliation>
    </creator>
    <creator>
      <creatorName>Foutse Khomh</creatorName>
      <affiliation>École Polytechnique de Montréal, Canada</affiliation>
    </creator>
    <creator>
      <creatorName>Marcelo A. Maia</creatorName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-3578-1380</nameIdentifier>
      <affiliation>Faculty of Computing, Federal University of Uberlândia</affiliation>
    </creator>
  </creators>
  <titles>
    <title>A Replication Package For The Paper "Improved Retrieval of Programming Solutions with Code Examples Using a Multi-featured Score"</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2021</publicationYear>
  <subjects>
    <subject>Mining Crowd Knowledge</subject>
    <subject>Stack Overflow</subject>
    <subject>Word Embedding</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2021-07-24</date>
  </dates>
  <resourceType resourceTypeGeneral="JournalArticle"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/5115300</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsSupplementTo">https://github.com/ISEL-UFU/crar-replication-package</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.5115299</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Developers often depend on code search engines to obtain solutions for their programming tasks. However, finding an expected solution containing code examples along with their explanations is challenging due to several issues. There is a vocabulary mismatch between the search keywords (the query) and the appropriate solutions. The semantic gap may widen even for similar bags of words due to antonyms and negation. Moreover, documents retrieved by search engines might not contain solutions with both code examples and their explanations. We therefore propose CRAR (Crowd Answer Recommender) to circumvent those issues, aiming to improve the retrieval of relevant answers from Stack Overflow that contain not only the expected code examples for the given task but also their explanations. Given a programming task, we investigate the effectiveness of combining information retrieval techniques with a set of features, including semantic features such as word embeddings and sentence embeddings (for instance, from a Convolutional Neural Network (CNN)), to enhance the ranking of relevant threads (i.e., the units containing questions along with their answers) for the given task and then to select relevant answers contained in those threads. CRAR also leverages social aspects of Stack Overflow discussions, such as popularity, to select relevant answers for the tasks. Our experimental evaluation shows that the combination of the different features performs better than each one individually. We also compare the retrieval performance with the state-of-the-art CROKAGE (Crowd Knowledge Answer Generator), which is also a system aimed at retrieving relevant answers from Stack Overflow. We show that CRAR outperforms CROKAGE in Mean Reciprocal Rank and Mean Recall with small and medium effect sizes, respectively.&lt;/p&gt;</description>
  </descriptions>
</resource>
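The abstract describes ranking Stack Overflow threads by combining several feature scores (textual similarity, embedding-based similarity, social signals such as popularity) into a single multi-featured score. As a rough illustration of that idea only, the sketch below uses a simple weighted linear combination over min-max-normalized features; the feature names, weights, and combination scheme are hypothetical and are not taken from CRAR itself.

```python
# Illustrative sketch of a multi-featured score: normalize each feature
# column across candidates, then take a weighted sum. Feature names and
# weights are made up for this example.

def min_max_normalize(values):
    """Scale a list of raw feature values into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def combined_score(candidates, weights):
    """Weighted linear combination of normalized feature columns.

    candidates: list of dicts, one per candidate thread,
                mapping feature name -> raw score.
    weights:    dict mapping feature name -> weight.
    Returns one combined score per candidate.
    """
    names = list(weights)
    columns = {
        name: min_max_normalize([cand[name] for cand in candidates])
        for name in names
    }
    return [
        sum(weights[name] * columns[name][i] for name in names)
        for i in range(len(candidates))
    ]

# Three hypothetical candidate threads with raw feature scores.
candidates = [
    {"text_similarity": 0.62, "embedding_similarity": 0.71, "popularity": 120},
    {"text_similarity": 0.80, "embedding_similarity": 0.65, "popularity": 40},
    {"text_similarity": 0.55, "embedding_similarity": 0.90, "popularity": 300},
]
weights = {"text_similarity": 0.4, "embedding_similarity": 0.4, "popularity": 0.2}

scores = combined_score(candidates, weights)
best = max(range(len(scores)), key=scores.__getitem__)
```

Normalizing each feature before combining keeps features on very different scales (cosine similarities vs. raw vote counts) from dominating the sum, which is the usual motivation for this kind of linear fusion.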
Statistics (all versions / this version):
Views: 193 / 193
Downloads: 84 / 84
Data volume: 266.0 GB / 266.0 GB
Unique views: 144 / 144
Unique downloads: 49 / 49
