4034268
doi
10.5281/zenodo.4034268
oai:zenodo.org:4034268
user-eu
Weir, Daryl
Aalto University
Oulasvirta, Antti
Aalto University
How We Type: Movement Strategies and Performance in Everyday Typing
Feit, Anna Maria
Aalto University
doi:10.1145/2858036.2858233
info:eu-repo/semantics/openAccess
Creative Commons Attribution Non Commercial 4.0 International
https://creativecommons.org/licenses/by-nc/4.0/legalcode
Text entry
Typing
Touch typing
Keyboard
Human-Computer Interaction
<p>This dataset contains motion capture, keylog, eye tracking, and video data of 30 participants transcribing regular sentences. It is part of the following publication:</p>
<p>Anna Maria Feit, Daryl Weir, Antti Oulasvirta. 2016. How We Type: Movement Strategies and Performance in Everyday Typing. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 4262-4273.</p>
<p>The paper revisits the present understanding of typing, which originates mostly from studies of trained typists using the ten-finger touch typing system. Our goal was to characterise the majority of present-day users, who are untrained and employ diverse, self-taught techniques. In a transcription task, we compared self-taught typists and those who took a touch typing course. We reported several differences in performance, gaze deployment, and movement strategies. The most surprising finding was that self-taught typists can achieve performance levels comparable with touch typists, even when using fewer fingers. Motion capture data exposed three predictors of high performance: 1) unambiguous mapping (a letter is consistently pressed by the same finger), 2) active preparation of upcoming keystrokes, and 3) minimal global hand motion.</p>
<p>The dataset is free for non-commercial use. Please cite the above work.</p>
<p>Note that participants wrote in either Finnish or English.</p>
Zenodo
2016-05-05
info:eu-repo/semantics/other
4034267
user-eu
award_title=Computational User Interface Design; award_number=637991; award_identifiers_scheme=url; award_identifiers_identifier=https://cordis.europa.eu/projects/637991; funder_id=00k4n6c32; funder_name=European Commission;
1600932953.944768
981077704
md5:03e27cb33b946b66217c93915babd68f
https://zenodo.org/records/4034268/files/Motion Capture.zip
733527
md5:c3d020b88d41069172814721481f2752
https://zenodo.org/records/4034268/files/Typing.zip
29292760284
md5:193daa9779eb1a862b809dd6772314be
https://zenodo.org/records/4034268/files/Reference video.zip
6827
md5:58b5e068625bbfe06af1551d8fa3d85f
https://zenodo.org/records/4034268/files/Readme.txt
903405
md5:7511817b97668250315a3007c9ae499b
https://zenodo.org/records/4034268/files/hand_markers.png
15457687855
md5:dd6e89b3b694a3ee552fa1b06f4e37e5
https://zenodo.org/records/4034268/files/Eye tracking.zip
1594
md5:39c8e67a4a01debe06d388f07fd05a39
https://zenodo.org/records/4034268/files/keyboard_flat_coordinates.csv
16224
md5:ebcfd13a312aaf203ebca0d09526eed7
https://zenodo.org/records/4034268/files/Background.xlsx
public
10.1145/2858036.2858233
Is part of
doi
10.5281/zenodo.4034267
isVersionOf
doi