Dataset Open Access

SEENIC: dataset for Spacecraft posE Estimation with NeuromorphIC vision

Elms, Ethan; Jawaid, Mohsi; Latif, Yasir; Chin, Tat-Jun


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.7214231</identifier>
  <creators>
    <creator>
      <creatorName>Elms, Ethan</creatorName>
      <givenName>Ethan</givenName>
      <familyName>Elms</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-2051-7464</nameIdentifier>
      <affiliation>The University of Adelaide</affiliation>
    </creator>
    <creator>
      <creatorName>Jawaid, Mohsi</creatorName>
      <givenName>Mohsi</givenName>
      <familyName>Jawaid</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0001-7233-9853</nameIdentifier>
      <affiliation>The University of Adelaide</affiliation>
    </creator>
    <creator>
      <creatorName>Latif, Yasir</creatorName>
      <givenName>Yasir</givenName>
      <familyName>Latif</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-2529-5322</nameIdentifier>
      <affiliation>The University of Adelaide</affiliation>
    </creator>
    <creator>
      <creatorName>Chin, Tat-Jun</creatorName>
      <givenName>Tat-Jun</givenName>
      <familyName>Chin</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-2423-9342</nameIdentifier>
      <affiliation>The University of Adelaide</affiliation>
    </creator>
  </creators>
  <titles>
    <title>SEENIC: dataset for Spacecraft posE Estimation with NeuromorphIC vision</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2022</publicationYear>
  <subjects>
    <subject>computer vision</subject>
    <subject>event camera</subject>
    <subject>space</subject>
    <subject>pose estimation</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2022-10-17</date>
  </dates>
  <resourceType resourceTypeGeneral="Dataset"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/7214231</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsPublishedIn" resourceTypeGeneral="Preprint">10.48550/arXiv.2209.11945</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.7214230</relatedIdentifier>
  </relatedIdentifiers>
  <version>1.0</version>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Dataset used in the paper &amp;quot;Towards Bridging the Space Domain Gap for Satellite Pose Estimation using Event Sensing&amp;quot; (&lt;a href="https://doi.org/10.48550/arXiv.2209.11945"&gt;link&lt;/a&gt;), for the purpose of satellite pose estimation with an event camera.&lt;/p&gt;

&lt;p&gt;Both events and ground-truth camera poses were captured for all 20 scenes. There are two trajectories, five lighting configurations and two camera speeds, and every combination of trajectory, speed and lighting configuration was captured. Sample event frames and dataset statistics are available in &lt;a href="https://doi.org/10.48550/arXiv.2209.11945"&gt;the paper linked above&lt;/a&gt;, along with our pose estimation method used on this dataset.&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;Scene names use the following encoding: {satellite model}-{trajectory}-{speed}-{lighting configuration}&lt;/p&gt;

&lt;p&gt;The calibration scene (calibration.tar.gz) includes multiple views of a chessboard used to calibrate the camera intrinsics and extrinsics. Camera parameters calibrated using this scene can be found in the &lt;strong&gt;calib.txt&lt;/strong&gt; file, with the format: fx fy cx cy k1 k2 p1 p2 k3.&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;All scenes have the same data format:&lt;/p&gt;

&lt;p&gt;scene/&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;poses/ -- Raw timestamped robot gripper-to-base transforms&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;cam-poses.csv -- Ground truth camera poses with the format {timestamp, Rx, Ry, Rz, x, y, z}&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;events.csv -- Event stream with the format {timestamp, x, y, polarity (0=off, 1=on)}&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;meta.json -- Metadata file with camera frame dimensions&lt;/p&gt;

&lt;p&gt;Note: all timestamps are in microseconds&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When using the data in an academic context, please cite the following paper.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Jawaid, M., Elms, E., Latif, Y., &amp;amp; Chin, T. J. (2022). Towards Bridging the Space Domain Gap for Satellite Pose Estimation using Event Sensing. &lt;em&gt;arXiv preprint arXiv:2209.11945&lt;/em&gt;.&lt;/p&gt;</description>
  </descriptions>
</resource>
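The file formats described in the abstract (single-line calib.txt with "fx fy cx cy k1 k2 p1 p2 k3", and events.csv rows of {timestamp, x, y, polarity}) can be read with a few lines of Python. The sketch below is illustrative only and assumes headerless CSV files; the function names `parse_calib` and `parse_events` are hypothetical, not part of the dataset.

```python
# Hypothetical loader sketch for a SEENIC scene, based on the formats
# stated in the record description. Assumes events.csv has no header row
# and that calib.txt is a single whitespace-separated line.
import csv
import io


def parse_calib(text):
    # Map the calib.txt values onto the documented order:
    # fx fy cx cy k1 k2 p1 p2 k3
    keys = ["fx", "fy", "cx", "cy", "k1", "k2", "p1", "p2", "k3"]
    vals = [float(v) for v in text.split()]
    return dict(zip(keys, vals))


def parse_events(csv_text):
    # Yield (timestamp_us, x, y, polarity) tuples.
    # Timestamps are in microseconds; polarity is 0=off, 1=on.
    for row in csv.reader(io.StringIO(csv_text)):
        t, x, y, p = row
        yield int(t), int(x), int(y), int(p)


# Example with made-up values (not real calibration data):
calib = parse_calib("600.0 600.0 320.0 240.0 -0.1 0.01 0.0 0.0 0.0")
events = list(parse_events("1000,12,34,1\n1005,13,34,0\n"))
```

In practice the same parsing applies per scene directory after extracting the .tar.gz archives; cam-poses.csv rows ({timestamp, Rx, Ry, Rz, x, y, z}) can be read analogously with `float` fields.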