USPTO_PIP_Dataset
Creators
- Ghauri, Junaid Ahmed
- Müller-Budack, Eric
- Ewerth, Ralph
Description
USPTO-PIP Dataset: For the perspective classification task, we used the dataset presented by Wei et al. [32], which is based on patents collected from the USPTO. In this dataset, meta information including image perspectives has been automatically extracted from the captions. We processed the data to extract the most common perspective labels (those with more than 1000 samples, e.g., left view, perspective) and identified a class taxonomy (Table 1, right) covering 2, 4, and 7 classes. We use this information to compile the USPTO-PIP dataset for patent image perspective (PIP) classification from images.
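A minimal sketch of the label-filtering step described above, assuming the caption-derived metadata is available as a CSV with a `perspective` column (the file name, column names, and exact threshold handling are illustrative, not the authors' actual pipeline):

```python
import pandas as pd

# Hypothetical input: one row per patent image with caption-derived metadata.
# File name and column names are assumptions for illustration.
meta = pd.read_csv("uspto_image_metadata.csv")  # columns: image_id, perspective, ...

# Keep only perspective labels that occur more than 1000 times,
# mirroring the "most common labels" filter described above.
counts = meta["perspective"].value_counts()
frequent_labels = counts[counts > 1000].index
filtered = meta[meta["perspective"].isin(frequent_labels)]

print(f"Kept {len(frequent_labels)} labels covering {len(filtered)} images")
```

The retained labels would then be mapped onto the hierarchical 2-, 4-, and 7-class taxonomy reported in the paper (Table 1, right).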
Paper title: Classification of Visualization Types and Perspectives in Patents
Paper link: https://link.springer.com/chapter/10.1007/978-3-031-43849-3_16
Git repository link: https://github.com/TIBHannover/PatentImageClassification
Cite as:
Ghauri, J.A., Müller-Budack, E., Ewerth, R. (2023). Classification of Visualization Types and Perspectives in Patents. In: Alonso, O., Cousijn, H., Silvello, G., Marrero, M., Teixeira Lopes, C., Marchesin, S. (eds) Linking Theory and Practice of Digital Libraries. TPDL 2023. Lecture Notes in Computer Science, vol 14241. Springer, Cham. https://doi.org/10.1007/978-3-031-43849-3_16
or, in BibTeX:
@InProceedings{10.1007/978-3-031-43849-3_16,
author="Ghauri, Junaid Ahmed
and M{\"u}ller-Budack, Eric
and Ewerth, Ralph",
editor="Alonso, Omar
and Cousijn, Helena
and Silvello, Gianmaria
and Marrero, M{\'o}nica
and Teixeira Lopes, Carla
and Marchesin, Stefano",
title="Classification of Visualization Types and Perspectives in Patents",
booktitle="Linking Theory and Practice of Digital Libraries",
year="2023",
publisher="Springer Nature Switzerland",
address="Cham",
pages="182--191",
abstract="Due to the swift growth of patent applications each year, information and multimedia retrieval approaches that facilitate patent exploration and retrieval are of utmost importance. Different types of visualizations (e.g., graphs, technical drawings) and perspectives (e.g., side view, perspective) are used to visualize details of innovations in patents. The classification of these images enables a more efficient search in digital libraries and allows for further analysis. So far, datasets for image type classification miss some important visualization types for patents. Furthermore, related work does not make use of recent deep learning approaches including transformers. In this paper, we adopt state-of-the-art deep learning methods for the classification of visualization types and perspectives in patent images. We extend the CLEF-IP dataset for image type classification in patents to ten classes and provide manual ground truth annotations. In addition, we derive a set of hierarchical classes from a dataset that provides weakly-labeled data for image perspectives. Experimental results have demonstrated the feasibility of the proposed approaches. Source code, models, and datasets are publicly available (https://github.com/TIBHannover/PatentImageClassification).",
isbn="978-3-031-43849-3"
}
Files
Name | Size | MD5
---|---|---
USPTO_PIP_Dataset.zip | 6.5 GB | md5:bdc9e3e04f463b79143297f2cfe24e6a
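To verify the downloaded archive against the MD5 checksum listed above, a short Python snippet such as the following can be used (the local file path is an assumption):

```python
import hashlib

EXPECTED_MD5 = "bdc9e3e04f463b79143297f2cfe24e6a"  # checksum from the file listing above

# Read the archive in chunks so the 6.5 GB file is not loaded into memory at once.
md5 = hashlib.md5()
with open("USPTO_PIP_Dataset.zip", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        md5.update(chunk)

assert md5.hexdigest() == EXPECTED_MD5, "Checksum mismatch: re-download the archive"
print("Checksum OK")
```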
Additional details
Funding
- Federal Ministry of Education and Research (ExpResViP, grant 01IO2004A)
Dates
- Accepted: 2023-09-26