Dataset (Open Access)
Elms, Ethan; Jawaid, Mohsi; Latif, Yasir; Chin, Tat-Jun
<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.7214231</identifier>
  <creators>
    <creator>
      <creatorName>Elms, Ethan</creatorName>
      <givenName>Ethan</givenName>
      <familyName>Elms</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-2051-7464</nameIdentifier>
      <affiliation>The University of Adelaide</affiliation>
    </creator>
    <creator>
      <creatorName>Jawaid, Mohsi</creatorName>
      <givenName>Mohsi</givenName>
      <familyName>Jawaid</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0001-7233-9853</nameIdentifier>
      <affiliation>The University of Adelaide</affiliation>
    </creator>
    <creator>
      <creatorName>Latif, Yasir</creatorName>
      <givenName>Yasir</givenName>
      <familyName>Latif</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-2529-5322</nameIdentifier>
      <affiliation>The University of Adelaide</affiliation>
    </creator>
    <creator>
      <creatorName>Chin, Tat-Jun</creatorName>
      <givenName>Tat-Jun</givenName>
      <familyName>Chin</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-2423-9342</nameIdentifier>
      <affiliation>The University of Adelaide</affiliation>
    </creator>
  </creators>
  <titles>
    <title>SEENIC: dataset for Spacecraft posE Estimation with NeuromorphIC vision</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2022</publicationYear>
  <subjects>
    <subject>computer vision</subject>
    <subject>event camera</subject>
    <subject>space</subject>
    <subject>pose estimation</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2022-10-17</date>
  </dates>
  <resourceType resourceTypeGeneral="Dataset"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/7214231</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsPublishedIn" resourceTypeGeneral="Preprint">10.48550/arXiv.2209.11945</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.7214230</relatedIdentifier>
  </relatedIdentifiers>
  <version>1.0</version>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">
      <p>Dataset used in the paper &quot;Towards Bridging the Space Domain Gap for Satellite Pose Estimation using Event Sensing&quot; (<a href="https://doi.org/10.48550/arXiv.2209.11945">link</a>), for the purpose of satellite pose estimation with an event camera.</p>
      <p>Both events and ground-truth camera poses were captured across 20 scenes in total: two trajectories, five lighting configurations and two camera speeds, with all combinations of trajectory, speed and lighting configuration enumerated for capture. Sample event frames and dataset statistics are available in <a href="https://doi.org/10.48550/arXiv.2209.11945">the paper linked above</a>, along with our pose estimation method used on this dataset.</p>
      <p>Scene names use the following encoding: {satellite model}-{trajectory}-{speed}-{lighting configuration}</p>
      <p>The calibration scene (calibration.tar.gz) includes multiple views of a chessboard used to calibrate the camera intrinsics and extrinsics. Camera parameters calibrated from this scene can be found in the <strong>calib.txt</strong> file, with the format: fx fy cx cy k1 k2 p1 p2 k3.</p>
      <p>All scenes have the same data format:</p>
      <p>scene/</p>
      <p>&nbsp;&nbsp;&nbsp;&nbsp;poses/ -- Raw timestamped robot gripper-to-base transforms</p>
      <p>&nbsp;&nbsp;&nbsp;&nbsp;cam-poses.csv -- Ground-truth camera poses with the format {timestamp, Rx, Ry, Rz, x, y, z}</p>
      <p>&nbsp;&nbsp;&nbsp;&nbsp;events.csv -- Event stream with the format {timestamp, x, y, polarity (0=off, 1=on)}</p>
      <p>&nbsp;&nbsp;&nbsp;&nbsp;meta.json -- Metadata file with camera frame dimensions</p>
      <p>Note: all timestamps are in microseconds.</p>
      <p><strong>When using the data in an academic context, please cite the following paper.</strong></p>
      <p>Jawaid, M., Elms, E., Latif, Y., &amp; Chin, T. J. (2022). Towards Bridging the Space Domain Gap for Satellite Pose Estimation using Event Sensing. <em>arXiv preprint arXiv:2209.11945</em>.</p>
    </description>
  </descriptions>
</resource>
| | All versions | This version |
|---|---|---|
| Views | 233 | 187 |
| Downloads | 375 | 246 |
| Data volume | 74.2 GB | 45.4 GB |
| Unique views | 190 | 165 |
| Unique downloads | 66 | 48 |