Poster · Open Access

Free-Range Spiderbots!

Boruta, Luc


JSON-LD (schema.org) Export

{
  "inLanguage": {
    "alternateName": "eng", 
    "@type": "Language", 
    "name": "English"
  }, 
  "description": "<p><strong>Free-range what!?</strong></p>\n\n<p>The robots exclusion standard, a.k.a. robots.txt, is used to give instructions as to which resources of a website can be scanned and crawled by bots.<br>\nInvalid or overzealous robots.txt files can lead to a loss of important data, breaking archives, search engines, and any app that links or remixes scholarly data.</p>\n\n<p><strong>Why should I care?</strong></p>\n\n<p>You care about open access, don&rsquo;t you? This is about open access for bots, which fosters open access for humans.</p>\n\n<p><strong>Mind your manners</strong></p>\n\n<p>The standard is purely advisory, it relies on the politeness of the bots. Disallowing access to a page doesn&rsquo;t protect it: if it is referenced or linked to, it can be found.<br>\nWe don&rsquo;t advocate the deletion of robots.txt files. They are a lightweight mechanism to convey crucial information, e.g. the location of sitemaps. We want better robots.txt files.</p>\n\n<p><strong>Bots must be allowed to roam the scholarly web freely</strong></p>\n\n<p>Metadata harvesting protocols are great, but there is a lot of data, e.g. pricing, recommendations, that they do not capture, and, at the scale of the web, few content providers actually use these protocols.<br>\nThe web is unstable: content drifts and servers crash, this is inevitable. Lots of copies keep stuff safe, and crawlers are essential in order to maintain and analyze the permanent record of science.<br>\nWe want to start an informal open collective to lobby publishers, aggregators, and other stakeholders to standardize and minimize their robots.txt files, and other related directives like noindex tags.</p>\n\n<p><strong>Our First Victory</strong></p>\n\n<p>In September, we noticed that Hindawi prevented polite bots from accessing pages relating to retracted articles and peer-review fraud. Hindawi fixed their robots.txt after we brought the problem to their attention via Twitter. We can fix the web, one domain at a time!</p>", 
  "license": "http://creativecommons.org/licenses/by/4.0/legalcode", 
  "creator": [
    {
      "affiliation": "Thunken, Inc.", 
      "@id": "https://orcid.org/0000-0003-0557-1155", 
      "@type": "Person", 
      "name": "Boruta, Luc"
    }
  ], 
  "url": "https://zenodo.org/record/1453453", 
  "datePublished": "2018-10-09", 
  "keywords": [
    "crawling", 
    "robots.txt", 
    "digital preservation"
  ], 
  "@context": "https://schema.org/", 
  "identifier": "https://doi.org/10.5281/zenodo.1453453", 
  "@id": "https://doi.org/10.5281/zenodo.1453453", 
  "@type": "CreativeWork", 
  "name": "Free-Range Spiderbots!"
}
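
The description above rests on two technical points: robots.txt is purely advisory, so compliance depends on the politeness of each bot, and a well-formed file can stay minimal while still conveying crucial information such as the location of a sitemap. The sketch below illustrates both points with Python's standard-library urllib.robotparser. It is only a minimal illustration; the domain, user agent, and URLs are hypothetical placeholders, not taken from the poster.

# A minimal sketch of a polite crawler's robots.txt check, using only the
# Python standard library. Domain, user agent, and URLs are hypothetical.
from urllib.robotparser import RobotFileParser

# A minimal, bot-friendly robots.txt: allow everything, advertise the sitemap.
FRIENDLY_ROBOTS_TXT = """\
User-agent: *
Disallow:

Sitemap: https://example.org/sitemap.xml
"""

# An overzealous robots.txt: every polite bot is locked out of the whole site.
BLANKET_BAN_ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

def can_fetch(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if robots_txt allows user_agent to fetch url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

if __name__ == "__main__":
    url = "https://example.org/articles/retraction-notice"
    # The standard is advisory: nothing but good manners enforces these answers.
    print(can_fetch(FRIENDLY_ROBOTS_TXT, "ExampleBot", url))     # True
    print(can_fetch(BLANKET_BAN_ROBOTS_TXT, "ExampleBot", url))  # False

Running the sketch prints True for the permissive file and False for the blanket ban, which is exactly the kind of overzealous configuration the poster argues against.
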
Statistics (All versions / This version):

Views              146 / 147
Downloads          45 / 45
Data volume        31.4 MB / 31.4 MB
Unique views       131 / 132
Unique downloads   38 / 38

Cite as

Boruta, Luc. (2018). Free-Range Spiderbots! Zenodo. https://doi.org/10.5281/zenodo.1453453