Conference paper Open Access

Liu, Hao; Ye, Hanting; Yang, Jie; Wang, Qing

### DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns="http://datacite.org/schema/kernel-4">
<identifier identifierType="URL">https://zenodo.org/record/5646942</identifier>
<creators>
<creator>
<creatorName>Liu, Hao</creatorName>
<givenName>Hao</givenName>
<familyName>Liu</familyName>
<affiliation>TU Delft</affiliation>
</creator>
<creator>
<creatorName>Ye, Hanting</creatorName>
<givenName>Hanting</givenName>
<familyName>Ye</familyName>
<affiliation>TU Delft</affiliation>
</creator>
<creator>
<creatorName>Yang, Jie</creatorName>
<givenName>Jie</givenName>
<familyName>Yang</familyName>
<affiliation>TU Delft</affiliation>
</creator>
<creator>
<creatorName>Wang, Qing</creatorName>
<givenName>Qing</givenName>
<familyName>Wang</familyName>
<affiliation>TU Delft</affiliation>
</creator>
</creators>
<titles>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2021</publicationYear>
<dates>
<date dateType="Issued">2021-11-05</date>
</dates>
<resourceType resourceTypeGeneral="ConferencePaper"/>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/5646942</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1145/3485730.3493454</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract">&lt;p&gt;Motivated by the trend of realizing full screens on devices such as smartphones, in this work we propose through-screen sensing with visible light for the application of fingertip air-writing. The system can recognize handwritten digits with under-screen photodiodes as the receiver. The key idea is to recognize the weak light reflected by the finger when the finger writes digits on top of a screen. The proposed air-writing system is immune to scene changes because it has a fixed screen light source. However, the screen is a double-edged sword: it is both a signal source and a noise source. We propose a data preprocessing method to reduce the interference of the screen as a noise source. We design an embedded deep learning model, a customized ConvRNN, to model the spatial and temporal patterns in the dynamic and weak reflected signal for air-written digit recognition. The evaluation results show that our through-screen fingertip air-writing system with visible light can achieve an accuracy of up to 91%. Results further show that the size of the customized ConvRNN model can be reduced by 94% with less than a 10% drop in performance.&lt;/p&gt;</description>
</descriptions>
<fundingReferences>
<fundingReference>
<funderName>European Commission</funderName>
<funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
<awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/814215/">814215</awardNumber>
<awardTitle>European Training Network in Low-energy Visible Light IoT Systems</awardTitle>
</fundingReference>
</fundingReferences>
</resource>
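The abstract describes a ConvRNN that extracts spatial features from each frame of weak reflected light and then models their temporal evolution for digit classification. As a minimal pure-Python sketch of that general idea (all dimensions, weights, and function names here are illustrative assumptions, not the authors' actual model):

```python
import math
import random

def conv1d(frame, kernels):
    # Valid 1D convolution of one photodiode frame with each kernel,
    # followed by global max-pooling: one spatial feature per kernel.
    k = len(kernels[0])
    feats = []
    for kern in kernels:
        row = [sum(frame[i + j] * kern[j] for j in range(k))
               for i in range(len(frame) - k + 1)]
        feats.append(max(row))
    return feats

def rnn_step(h, x, Wh, Wx):
    # Simple tanh RNN cell: h' = tanh(Wh·h + Wx·x).
    return [math.tanh(sum(Wh[i][j] * h[j] for j in range(len(h))) +
                      sum(Wx[i][j] * x[j] for j in range(len(x))))
            for i in range(len(Wh))]

def convrnn_logits(frames, kernels, Wh, Wx, Wo):
    # Conv over each frame (spatial), recurrence over frames (temporal),
    # then a linear readout to 10 digit classes.
    h = [0.0] * len(Wh)
    for frame in frames:
        h = rnn_step(h, conv1d(frame, kernels), Wh, Wx)
    return [sum(Wo[c][j] * h[j] for j in range(len(h))) for c in range(10)]

# Illustrative synthetic input: 20 time frames of a 16-sample reflectance window.
random.seed(0)
frames  = [[random.uniform(-1, 1) for _ in range(16)] for _ in range(20)]
kernels = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
Wh = [[random.uniform(-0.5, 0.5) for _ in range(8)] for _ in range(8)]
Wx = [[random.uniform(-0.5, 0.5) for _ in range(4)] for _ in range(8)]
Wo = [[random.uniform(-0.5, 0.5) for _ in range(8)] for _ in range(10)]
scores = convrnn_logits(frames, kernels, Wh, Wx, Wo)  # one score per digit 0-9
```

In a trained system the weights would be learned and the argmax over `scores` would give the recognized digit; this sketch only shows how convolutional (spatial) and recurrent (temporal) stages compose.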
