Poster · Open Access

Free-Range Spiderbots!

Boruta, Luc (Thunken, Inc.)

Published: 2018-10-09
Keyword: digital preservation

Free-range what!?

The robots exclusion standard, a.k.a. robots.txt, is used to give instructions as to which resources of a website can be scanned and crawled by bots. Invalid or overzealous robots.txt files can lead to the loss of important data, breaking archives, search engines, and any app that links to or remixes scholarly data.

Why should I care?

You care about open access, don't you? This is about open access for bots, which fosters open access for humans.

Mind your manners

The standard is purely advisory: it relies on the politeness of bots. Disallowing access to a page doesn't protect it; if the page is referenced or linked to, it can be found.

We don't advocate the deletion of robots.txt files. They are a lightweight mechanism to convey crucial information, e.g. the location of sitemaps. We want better robots.txt files.

Bots must be allowed to roam the scholarly web freely

Metadata harvesting protocols are great, but there is a lot of data, e.g. pricing and recommendations, that they do not capture, and, at the scale of the web, few content providers actually use these protocols.

The web is unstable: content drifts and servers crash; this is inevitable. Lots of copies keep stuff safe, and crawlers are essential to maintaining and analyzing the permanent record of science.

We want to start an informal open collective to lobby publishers, aggregators, and other stakeholders to standardize and minimize their robots.txt files, as well as related directives like noindex tags.

Our First Victory

In September 2018, we noticed that Hindawi prevented polite bots from accessing pages relating to retracted articles and peer-review fraud. Hindawi fixed their robots.txt after we brought the problem to their attention via Twitter. We can fix the web, one domain at a time!
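To make this concrete, here is a sketch of the kind of minimal, bot-friendly robots.txt we advocate; the sitemap URL is a hypothetical placeholder:

    # Allow every crawler: an empty Disallow rule blocks nothing.
    User-agent: *
    Disallow:

    # Keep the file as a lightweight channel for crucial information,
    # e.g. the location of the sitemap (hypothetical URL).
    Sitemap: https://example.org/sitemap.xml

And a sketch of what politeness means in practice, using Python's standard urllib.robotparser; the bot name and URLs are hypothetical. A polite crawler runs a check like this before every fetch; an impolite one simply skips it, which is why disallowing a page never protects it:

    from urllib import robotparser

    # Fetch and parse the site's robots.txt (hypothetical domain).
    parser = robotparser.RobotFileParser()
    parser.set_url("https://example.org/robots.txt")
    parser.read()

    # Ask before fetching: this is the entirety of the "politeness"
    # that the advisory standard can rely on.
    url = "https://example.org/articles/retraction-notice"
    if parser.can_fetch("FreeRangeSpiderbot", url):
        print("allowed:", url)
    else:
        print("disallowed, backing off:", url)
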
Rights: Open Access
License: Creative Commons Attribution 4.0 International

                    All versions    This version
Views                        175             176
Downloads                    100             100
Data volume              69.7 MB         69.7 MB
Unique views                 160             161
Unique downloads              72              72

