Conference paper Open Access

Through-Screen Visible Light Sensing Empowered by Embedded Deep Learning

Liu, Hao; Ye, Hanting; Yang, Jie; Wang, Qing


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="URL">https://zenodo.org/record/5646942</identifier>
  <creators>
    <creator>
      <creatorName>Liu, Hao</creatorName>
      <givenName>Hao</givenName>
      <familyName>Liu</familyName>
      <affiliation>TU Delft</affiliation>
    </creator>
    <creator>
      <creatorName>Ye, Hanting</creatorName>
      <givenName>Hanting</givenName>
      <familyName>Ye</familyName>
      <affiliation>TU Delft</affiliation>
    </creator>
    <creator>
      <creatorName>Yang, Jie</creatorName>
      <givenName>Jie</givenName>
      <familyName>Yang</familyName>
      <affiliation>TU Delft</affiliation>
    </creator>
    <creator>
      <creatorName>Wang, Qing</creatorName>
      <givenName>Qing</givenName>
      <familyName>Wang</familyName>
      <affiliation>TU Delft</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Through-Screen Visible Light Sensing Empowered by Embedded Deep Learning</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2021</publicationYear>
  <dates>
    <date dateType="Issued">2021-11-05</date>
  </dates>
  <resourceType resourceTypeGeneral="ConferencePaper"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/5646942</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1145/3485730.3493454</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Motivated by the trend of realizing full screens on devices such as smartphones, in this work we propose through-screen sensing with visible light for the application of fingertip air-writing. The system can recognize handwritten digits with under-screen photodiodes as the receiver. The key idea is to recognize the weak light reflected by the finger when the finger writes digits on top of a screen. The proposed air-writing system is immune to scene changes because it has a fixed screen light source. However, the screen is a double-edged sword, acting as both a signal source and a noise source. We propose a data preprocessing method to reduce the interference of the screen as a noise source. We design an embedded deep learning model, a customized ConvRNN, to model the spatial and temporal patterns in the dynamic and weak reflected signal for air-writing digit recognition. The evaluation results show that our through-screen fingertip air-writing system with visible light can achieve an accuracy of up to 91%. Results further show that the size of the customized ConvRNN model can be reduced by 94% with less than a 10% drop in performance.&lt;/p&gt;</description>
  </descriptions>
  <fundingReferences>
    <fundingReference>
      <funderName>European Commission</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/814215/">814215</awardNumber>
      <awardTitle>European Training Network in Low-energy Visible Light IoT Systems</awardTitle>
    </fundingReference>
  </fundingReferences>
</resource>