Poster · Open Access

Free-Range Spiderbots!

Boruta, Luc


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.1453453</identifier>
  <creators>
    <creator>
      <creatorName>Boruta, Luc</creatorName>
      <givenName>Luc</givenName>
      <familyName>Boruta</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-0557-1155</nameIdentifier>
      <affiliation>Thunken, Inc.</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Free-Range Spiderbots!</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2018</publicationYear>
  <subjects>
    <subject>crawling</subject>
    <subject>robots.txt</subject>
    <subject>digital preservation</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2018-10-09</date>
  </dates>
  <language>en</language>
  <resourceType resourceTypeGeneral="Text">Poster</resourceType>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/1453453</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.1453452</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/force2018</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="http://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;&lt;strong&gt;Free-range what!?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The robots exclusion standard, a.k.a. robots.txt, is used to give instructions as to which resources of a website can be scanned and crawled by bots.&lt;br&gt;
Invalid or overzealous robots.txt files can lead to the loss of important data, breaking archives, search engines, and any app that links to or remixes scholarly data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why should I care?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You care about open access, don&amp;rsquo;t you? This is about open access for bots, which fosters open access for humans.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mind your manners&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The standard is purely advisory; it relies on the politeness of the bots. Disallowing access to a page doesn&amp;rsquo;t protect it: if it is referenced or linked to, it can be found.&lt;br&gt;
We don&amp;rsquo;t advocate the deletion of robots.txt files. They are a lightweight mechanism to convey crucial information, e.g. the location of sitemaps. We want better robots.txt files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bots must be allowed to roam the scholarly web freely&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Metadata harvesting protocols are great, but there is a lot of data, e.g. pricing and recommendations, that they do not capture, and, at the scale of the web, few content providers actually use these protocols.&lt;br&gt;
The web is unstable: content drifts and servers crash; this is inevitable. Lots of copies keep stuff safe, and crawlers are essential to maintaining and analyzing the permanent record of science.&lt;br&gt;
We want to start an informal open collective to lobby publishers, aggregators, and other stakeholders to standardize and minimize their robots.txt files and other related directives, such as noindex tags.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Our First Victory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In September, we noticed that Hindawi prevented polite bots from accessing pages relating to retracted articles and peer-review fraud. Hindawi fixed their robots.txt after we brought the problem to their attention via Twitter. We can fix the web, one domain at a time!&lt;/p&gt;</description>
  </descriptions>
</resource>
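
The abstract argues for minimal, standardized robots.txt files that remain a lightweight way to convey crucial information such as sitemap locations. As a purely illustrative sketch (the domain, path, and sitemap URL below are hypothetical, not taken from the poster), a lean robots.txt along those lines could look like this:

# Hypothetical minimal robots.txt for a scholarly site:
# keep records crawlable, fence off only an expensive search
# endpoint, and advertise the sitemap so bots can find every page.
User-agent: *
Disallow: /search
Sitemap: https://example.org/sitemap.xml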
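
The abstract also stresses that the exclusion standard is purely advisory and relies on the politeness of bots. A minimal sketch of that politeness check, using Python's standard-library urllib.robotparser (the bot name and URLs are placeholders for illustration):

# Polite check before fetching a page; nothing technically
# prevents an impolite bot from ignoring this.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://example.org/robots.txt")  # hypothetical site
parser.read()  # download and parse the robots.txt file

url = "https://example.org/articles/123"
if parser.can_fetch("FreeRangeSpiderbot", url):
    print("Allowed to crawl", url)
else:
    print("Disallowed by robots.txt:", url)

# robots.txt can also advertise sitemaps to crawlers.
print("Sitemaps:", parser.site_maps())  # Python 3.8+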
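
Finally, the abstract mentions related directives like noindex tags. For reference, such a directive commonly takes one of two forms; whether a publisher should apply it to scholarly record pages is exactly the kind of question the proposed collective would raise:

<!-- In a page's <head>: ask crawlers not to index this page -->
<meta name="robots" content="noindex">

The same directive can be sent as an HTTP response header:
X-Robots-Tag: noindex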
