Easy ORCID
Description
The first-party ORCID data dump uses a data structure that is overly complex for most use cases. This Zenodo record contains a derived version that is much more straightforward, accessible, and smaller. So far, this includes employers, education, external identifiers, and publications linked to PubMed. Additional processing grounds employers and educational institutions using the Research Organization Registry (ROR). There is also some minor string processing, such as standardization of education types (e.g., Bachelor of Science, Master of Science) and standardization of PubMed references.
Records
The records.jsonl.gz file is a JSON Lines file in which each row represents a single ORCID record in a simple, well-defined schema (see schema.json). The records_hq.jsonl.gz file is a subset of the full records file that only contains records with at least one ROR-grounded employer, at least one ROR-grounded education entry, or at least one publication indexed in PubMed. The purpose of this subset is to exclude ORCID records that generally cannot be matched to any external information.
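As a sketch of how the JSON Lines format can be consumed, the following streams records one at a time without loading the whole file into memory. It assumes records_hq.jsonl.gz has been downloaded locally; the exact fields in each record should be checked against schema.json.
import gzip
import json

# Stream over the gzipped JSON Lines file, one ORCID record per line
with gzip.open("records_hq.jsonl.gz", "rt") as file:
    for line in file:
        record = json.loads(line)
        # See schema.json for the exact fields available in each record
        print(record)
        break  # only show the first record in this sketch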
This record also contains a SQLite database, orcid.db, with tables for researchers and for organizations. This is useful for quick lookups based on an ORCID local unique identifier.
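For example, a lookup might look like the following sketch. The table and column names used here are assumptions and should be verified against the actual database schema first.
import sqlite3

connection = sqlite3.connect("orcid.db")
# The table and column names below are assumptions; inspect the schema first,
# e.g., connection.execute("SELECT name FROM sqlite_master").fetchall()
cursor = connection.execute(
    "SELECT * FROM researchers WHERE orcid = ?",
    ("0000-0003-4423-4370",),
)
print(cursor.fetchone())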
Employers, educational institutions, and memberships that couldn't be grounded to a ROR record are listed in affiliation_missing_ror.tsv.
Nomenclature Authority Cross-References
Websites, social links, and other identifiers are parsed and standardized to comply with the Bioregistry, then shared using the Simple Standard for Sharing Ontological Mappings (SSSOM) in the sssom.tsv.gz file. This makes it possible to retrieve Scopus, Web of Science, GitHub, Google Scholar, and other profiles for records that include them. This information is also available through the main records file.
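A minimal sketch for loading the mappings with pandas, assuming the file has been downloaded locally: SSSOM TSV files typically begin with a commented metadata block, so lines starting with "#" are skipped, and pandas decompresses the gzip automatically.
import pandas as pd

# Skip the commented SSSOM metadata block and load the mapping table
df = pd.read_csv("sssom.tsv.gz", sep="\t", comment="#")
# Standard SSSOM columns include subject_id, predicate_id, and object_id
print(df.head())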
Authorship Links
Authorships are extracted and standardized in the pubmeds.tsv.gz file, which contains an ORCID column and a PubMed column that have been pre-sanitized to contain only local unique identifiers. This information is also available through the main records file.
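For example, the authorship table can be loaded with pandas; the exact column labels are not documented here, so inspect them after loading. This assumes a local download of pubmeds.tsv.gz.
import pandas as pd

# Load the ORCID-to-PubMed authorship table; both columns contain
# pre-sanitized local unique identifiers
df = pd.read_csv("pubmeds.tsv.gz", sep="\t")
print(df.columns)
print(df.head())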
Lexical Indexes
This record includes two pre-built Gilda indexes for named entity recognition (NER) and named entity normalization (NEN): one contains all records, and the second is filtered to high-quality records. The following Python code snippet can be used for grounding:
from gilda import Grounder

# Load the pre-built high-quality index directly from this Zenodo record
url = "https://zenodo.org/records/11474470/files/gilda_hq.tsv.gz?download=1"
grounder = Grounder(url)

# Ground a researcher's name against the index
results = grounder.ground("Charles Tapley Hoyt")
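Each element of results is a Gilda ScoredMatch, so the matched identifier and match score can be read from, e.g., results[0].term.id and results[0].score.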
Ontology Artifacts
The file orcid.ttl.gz is an OWL-ready RDF file that can be opened in Protégé or used with the Ontology Development Kit. It can also be converted into OWL XML, OWL Functional Syntax, or other OWL formats using ROBOT. This artifact can serve as a replacement for those generated by https://github.com/cthoyt/orcidio, which was a smaller-scale effort that turned the ORCID records of contributors to OBO Foundry ontologies into a small OWL file. The export here now contains all ORCID records with names.
Reproduction
This dataset is automatically generated with code in https://github.com/cthoyt/orcid_downloader.
Files
(6.5 GB)
Additional details
Related works
- Is derived from: Dataset 10.23640/07243.24204912.v1 (DOI)
- Requires: Software 10.5281/zenodo.11371784 (DOI)