Conference paper Open Access

Liu, Hao; Ye, Hanting; Yang, Jie; Wang, Qing

### Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:creator>Liu, Hao</dc:creator>
<dc:creator>Ye, Hanting</dc:creator>
<dc:creator>Yang, Jie</dc:creator>
<dc:creator>Wang, Qing</dc:creator>
<dc:date>2021-11-05</dc:date>
<dc:description>Motivated by the trend toward full screens on devices such as smartphones, in this work we propose through-screen sensing with visible light for the application of fingertip air-writing. The system recognizes handwritten digits using under-screen photodiodes as the receiver. The key idea is to recognize the weak light reflected by the finger as it writes digits above the screen. The proposed air-writing system is immune to scene changes because it relies on a fixed screen light source. However, the screen is a double-edged sword, acting as both a signal source and a noise source. We propose a data preprocessing method to reduce the interference of the screen as a noise source. We design an embedded deep learning model, a customized ConvRNN, to model the spatial and temporal patterns in the dynamic and weak reflected signal for air-writing digit recognition. The evaluation results show that our through-screen fingertip air-writing system with visible light achieves accuracy up to 91%. Results further show that the size of the customized ConvRNN model can be reduced by 94% with less than a 10% drop in performance.</dc:description>
<dc:identifier>https://zenodo.org/record/5646942</dc:identifier>
<dc:identifier>10.1145/3485730.3493454</dc:identifier>
<dc:identifier>oai:zenodo.org:5646942</dc:identifier>
<dc:relation>info:eu-repo/grantAgreement/EC/H2020/814215/</dc:relation>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:type>info:eu-repo/semantics/conferencePaper</dc:type>
<dc:type>publication-conferencepaper</dc:type>
</oai_dc:dc>
