Other Open Access

What cognitive and affective states should technology monitor to support learning?

Olugbade, Temitayo; Cuturi, Luigi; Cappagli, Giulia; Volta, Erica; Alborno, Paolo; Newbold, Joseph; Bianchi-Berthouze, Nadia; Baud-Bovy, Gabriel; Volpe, Gualtiero; Gori, Monica


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam##2200000uu#4500</leader>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">learning</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">cognition</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">affect</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">self-efficacy</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">curiosity</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">reflectivity</subfield>
  </datafield>
  <controlfield tag="005">20200120173230.0</controlfield>
  <controlfield tag="001">1156895</controlfield>
  <datafield tag="711" ind1=" " ind2=" ">
    <subfield code="d">13 November 2017</subfield>
    <subfield code="g">MIE 2017</subfield>
    <subfield code="a">1st ACM SIGCHI International Workshop on Multimodal Interaction for Education</subfield>
    <subfield code="c">Glasgow, UK</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">IIT Genoa, Italy</subfield>
    <subfield code="a">Cuturi, Luigi</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">IIT Genoa, Italy</subfield>
    <subfield code="a">Cappagli, Giulia</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">University of Genoa, Italy</subfield>
    <subfield code="a">Volta, Erica</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">University of Genoa, Italy</subfield>
    <subfield code="a">Alborno, Paolo</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">University College London, UK</subfield>
    <subfield code="a">Newbold, Joseph</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">University College London, UK</subfield>
    <subfield code="a">Bianchi-Berthouze, Nadia</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">IIT Genoa, Italy</subfield>
    <subfield code="a">Baud-Bovy, Gabriel</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">University of Genoa, Italy</subfield>
    <subfield code="a">Volpe, Gualtiero</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">IIT Genoa, Italy</subfield>
    <subfield code="a">Gori, Monica</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">760753</subfield>
    <subfield code="z">md5:546ba7b1b6f70cd1b9f337a379187547</subfield>
    <subfield code="u">https://zenodo.org/record/1156895/files/ICMI Workshop 2017 Camera Ready format corrections made - repository version.pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2017-11-13</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="p">user-h2020_wedraw</subfield>
    <subfield code="o">oai:zenodo.org:1156895</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">University College London, UK</subfield>
    <subfield code="a">Olugbade, Temitayo</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">What cognitive and affective states should technology monitor to support learning?</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-h2020_wedraw</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="c">732391</subfield>
    <subfield code="a">Exploiting the best sensory modality for learning arithmetic and geometrical concepts based on multisensory interactive Information and Communication Technologies and serious games</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">http://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;This paper discusses self-efficacy, curiosity, and reflectivity as cognitive and affective states that are critical to learning but are overlooked in the context of affect-aware technology for learning. This discussion sits within the opportunities offered by the weDRAW project aiming at an embodied approach to the design of technology to support exploration and learning of mathematical concepts. We first review existing literature to clarify how the three states facilitate learning and how, if not supported, they may instead hinder learning. We then review the literature to understand how bodily expressions communicate these states and how technology could be used to monitor them. We conclude by presenting initial movement cues currently explored in the context of weDRAW.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.1145/3139513.3139522</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">other</subfield>
  </datafield>
</record>
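
The exported record above uses the MARC21/slim XML schema, in which control fields (e.g. 001, the record identifier) carry a bare text value, while data fields (e.g. 245, the title; 653, keywords) group repeatable subfields. A minimal sketch of extracting those fields with Python's standard library follows; the trimmed sample string and the choice of fields are illustrative, not part of the export itself.

```python
import xml.etree.ElementTree as ET

# MARC21/slim places all elements in this namespace.
NS = {"marc": "http://www.loc.gov/MARC21/slim"}

# A trimmed-down copy of the record above, for illustration.
SAMPLE = """<record xmlns="http://www.loc.gov/MARC21/slim">
  <controlfield tag="001">1156895</controlfield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">What cognitive and affective states should technology monitor to support learning?</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">learning</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">cognition</subfield>
  </datafield>
</record>"""

root = ET.fromstring(SAMPLE)

# Control field 001: the record identifier, stored as element text.
record_id = root.find("marc:controlfield[@tag='001']", NS).text

# Data field 245, subfield a: the title.
title = root.find("marc:datafield[@tag='245']/marc:subfield[@code='a']", NS).text

# Field 653 is repeatable, so all occurrences are collected with findall.
keywords = [
    sf.text
    for df in root.findall("marc:datafield[@tag='653']", NS)
    for sf in df.findall("marc:subfield[@code='a']", NS)
]

print(record_id)   # 1156895
print(title)
print(keywords)    # ['learning', 'cognition']
```

The same queries work unchanged against the full record, since field tags and subfield codes are stable across MARC21 exports.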
Views 134
Downloads 80
Data volume 60.9 MB
Unique views 127
Unique downloads 77
