Poster · Open Access

Free-Range Spiderbots!

Boruta, Luc

Thunken, Inc. · Published 2018-10-09 · Keywords: digital preservation · Language: English

Free-range what!?

The robots exclusion standard, a.k.a. robots.txt, is used to give instructions as to which resources of a website can be scanned and crawled by bots. Invalid or overzealous robots.txt files can lead to a loss of important data, breaking archives, search engines, and any app that links to or remixes scholarly data.

Why should I care?

You care about open access, don't you? This is about open access for bots, which fosters open access for humans.

Mind your manners

The standard is purely advisory: it relies on the politeness of the bots. Disallowing access to a page doesn't protect it; if it is referenced or linked to, it can be found.
We don't advocate the deletion of robots.txt files. They are a lightweight mechanism to convey crucial information, e.g. the location of sitemaps. We want better robots.txt files.

Bots must be allowed to roam the scholarly web freely

Metadata harvesting protocols are great, but there is a lot of data, e.g. pricing and recommendations, that they do not capture, and, at the scale of the web, few content providers actually use these protocols.
The web is unstable: content drifts and servers crash. This is inevitable. Lots of copies keep stuff safe, and crawlers are essential to maintaining and analyzing the permanent record of science.
We want to start an informal open collective to lobby publishers, aggregators, and other stakeholders to standardize and minimize their robots.txt files, and other related directives like noindex tags.

Our first victory

In September, we noticed that Hindawi prevented polite bots from accessing pages relating to retracted articles and peer-review fraud. Hindawi fixed their robots.txt after we brought the problem to their attention via Twitter.
We can fix the web, one domain at a time!