<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.5118719</identifier>
  <creators>
    <creator>
      <creatorName>Chiodini Luca</creatorName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-2712-9248</nameIdentifier>
      <affiliation>Software Institute - USI</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Wrong Answers for Wrong Reasons: The Risks of Ad Hoc Instruments</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2021</publicationYear>
  <dates>
    <date dateType="Issued">2021-07-21</date>
  </dates>
  <resourceType resourceTypeGeneral="ConferencePaper"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/5118719</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.5118718</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract"><p>This folder contains anonymized data from the experiment described in the paper &quot;Wrong Answers for Wrong Reasons: The Risks of Ad Hoc Instruments&quot;.</p>
<ul>
  <li><code>questions.md</code> contains the 14 questions used as pre-/post-test.</li>
  <li>Participants are identified using numeric identifiers (from 1 to 40).</li>
  <li><code>conditions.csv</code> contains the mapping between participant identifiers and their condition.</li>
  <li>
    <p><code>{test}_{assessment_type}_anon.csv</code> contains data for a given test (<code>pretest</code> or <code>posttest</code>) using a certain assessment type. Two assessment types are used:</p>
    <ul>
      <li><code>mc</code> (multiple choice as assessed by Moodle: <code>0</code> means wrong, <code>1</code> means correct, <code>-</code> means missing)</li>
      <li><code>revised</code> (human revision after reading the explanation: the judgment is either a <code>0</code> or a <code>1</code>, following what is presented in the paper)</li>
    </ul>
  </li>
  <li>
    <p>Statistics are computed with Python and SciPy. You can execute the script by running:</p>
    <pre><code>python3 main.py
</code></pre>
    <p>and you should get this output:</p>
    <pre><code>=== Analyzing answers using mc as assessment type ===
Results on pretest - Condition Text: N: 21, mean ± std: 9.81 ± 3.03
Results on pretest - Condition Graphic: N: 19, mean ± std: 9.68 ± 2.11
Mann Whitney U on pretest scores: U=184.0, p=0.3406
Results on posttest - Condition Text: N: 21, mean ± std: 11.19 ± 2.75
Results on posttest - Condition Graphic: N: 19, mean ± std: 10.74 ± 2.23
Mann Whitney U on posttest scores: U=164.0, p=0.1681
=== Analyzing answers using revised as assessment type ===
Results on pretest - Condition Text: N: 20, mean ± std: 9.15 ± 3.10
Results on pretest - Condition Graphic: N: 19, mean ± std: 7.89 ± 3.16
Mann Whitney U on pretest scores: U=150.5, p=0.1344
Results on posttest - Condition Text: N: 20, mean ± std: 11.15 ± 2.39
Results on posttest - Condition Graphic: N: 19, mean ± std: 9.42 ± 3.11
Mann Whitney U on posttest scores: U=123.0, p=0.0299
</code></pre>
    <p>You might need to install <code>scipy</code> (e.g., depending on your setup, <code>pip install scipy</code>).</p>
  </li>
</ul></description>
  </descriptions>
</resource>
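The comparison reported in the record's output (Mann-Whitney U between the Text and Graphic conditions) can be sketched in pure Python; the actual `main.py` uses `scipy.stats.mannwhitneyu`, which also yields the p-value. The score lists below are illustrative placeholders, not the experiment's data.

```python
# Minimal pure-Python sketch of the Mann-Whitney U statistic.
# The record's main.py uses scipy.stats.mannwhitneyu, which additionally
# computes the p-value reported in the output above.

def mann_whitney_u(xs, ys):
    """U statistic for sample xs vs. ys: count pairs with x > y; ties count 0.5."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in xs for y in ys)

# Illustrative per-participant total scores (placeholders, not the real data).
text_scores = [12, 9, 11, 10]
graphic_scores = [9, 8, 10]

u = mann_whitney_u(text_scores, graphic_scores)
print(f"U={u}")  # prints U=10.0
```

A sanity check on this definition: the U statistics of the two samples always sum to the number of pairs, here `len(text_scores) * len(graphic_scores) = 12`.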
| | All versions | This version |
|---|---|---|
| Views | 77 | 77 |
| Downloads | 33 | 33 |
| Data volume | 112.8 kB | 112.8 kB |
| Unique views | 67 | 67 |
| Unique downloads | 28 | 28 |