Conference paper Open Access

Through-Screen Visible Light Sensing Empowered by Embedded Deep Learning

Liu, Hao; Ye, Hanting; Yang, Jie; Wang, Qing

DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="URL"></identifier>
  <creators>
    <creator>
      <creatorName>Liu, Hao</creatorName>
      <affiliation>TU Delft</affiliation>
    </creator>
    <creator>
      <creatorName>Ye, Hanting</creatorName>
      <affiliation>TU Delft</affiliation>
    </creator>
    <creator>
      <creatorName>Yang, Jie</creatorName>
      <affiliation>TU Delft</affiliation>
    </creator>
    <creator>
      <creatorName>Wang, Qing</creatorName>
      <affiliation>TU Delft</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Through-Screen Visible Light Sensing Empowered by Embedded Deep Learning</title>
  </titles>
  <dates>
    <date dateType="Issued">2021-11-05</date>
  </dates>
  <resourceType resourceTypeGeneral="ConferencePaper"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url"></alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1145/3485730.3493454</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Motivated by the trend of realizing full screens on devices such as smartphones, in this work we propose through-screen sensing with visible light for the application of fingertip air-writing. The system can recognize handwritten digits with under-screen photodiodes as the receiver. The key idea is to recognize the weak light reflected by the finger when the finger writes the digits on top of a screen. The proposed air-writing system is immune to scene changes because it has a fixed screen light source. However, the screen is a double-edged sword, acting as both a signal source and a noise source. We propose a data preprocessing method to reduce the interference of the screen as a noise source. We design an embedded deep learning model, a customized model ConvRNN, to model the spatial and temporal patterns in the dynamic and weak reflected signal for air-written digit recognition. The evaluation results show that our through-screen fingertip air-writing system with visible light can achieve accuracy up to 91%. Results further show that the size of the customized ConvRNN model can be reduced by 94% with less than a 10% drop in performance.&lt;/p&gt;</description>
  </descriptions>
  <fundingReferences>
    <fundingReference>
      <funderName>European Commission</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/814215/">814215</awardNumber>
      <awardTitle>European Training Network in Low-energy Visible Light IoT Systems</awardTitle>
    </fundingReference>
  </fundingReferences>
</resource>
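A record in this format can be consumed with Python's standard library alone. The sketch below parses a trimmed, illustrative copy of the export above and pulls out the title, creator names, and DOI; the namespace URI and element paths follow the DataCite kernel-4 schema, and the helper name `parse_record` is our own.

```python
import xml.etree.ElementTree as ET

# Trimmed, illustrative DataCite-style record modeled on the export above.
RECORD = """<?xml version='1.0' encoding='utf-8'?>
<resource xmlns="http://datacite.org/schema/kernel-4">
  <creators>
    <creator><creatorName>Liu, Hao</creatorName></creator>
    <creator><creatorName>Ye, Hanting</creatorName></creator>
    <creator><creatorName>Yang, Jie</creatorName></creator>
    <creator><creatorName>Wang, Qing</creatorName></creator>
  </creators>
  <titles>
    <title>Through-Screen Visible Light Sensing Empowered by Embedded Deep Learning</title>
  </titles>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1145/3485730.3493454</relatedIdentifier>
  </relatedIdentifiers>
</resource>"""

# DataCite kernel-4 default namespace, needed for every element lookup.
NS = {"dc": "http://datacite.org/schema/kernel-4"}

def parse_record(xml_text):
    """Return (title, creators, doi) from a DataCite kernel-4 record."""
    root = ET.fromstring(xml_text)
    title = root.findtext("dc:titles/dc:title", namespaces=NS)
    creators = [c.text for c in
                root.findall("dc:creators/dc:creator/dc:creatorName", NS)]
    doi = root.findtext("dc:relatedIdentifiers/dc:relatedIdentifier",
                        namespaces=NS)
    return title, creators, doi

title, creators, doi = parse_record(RECORD)
print(doi)  # 10.1145/3485730.3493454
```

Because DataCite records declare a default namespace, every path must be namespace-qualified; omitting the `dc:` prefixes would make `findall` return nothing.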
