DOI: 10.5281/zenodo.3633737
URL: https://zenodo.org/records/3633737
OAI identifier: oai:zenodo.org:3633737
Authors:
  Ruest, Nick (ORCID 0000-0003-1891-1112; York University)
  Green, Karen (Ivy Plus Libraries Confederation)
  Wenzel, Sarah (Ivy Plus Libraries Confederation)
  Abrams, Samantha (Ivy Plus Libraries Confederation)
Title: Global Webcomics Web Archive collection derivatives
Publisher: Zenodo
Publication year: 2020
Keywords: web archives; parquet; dataframes; Arts & Humanities; Society & Culture; Webcomics
Publication date: 2020-02-01
Concept DOI (all versions): 10.5281/zenodo.3633736
Community: https://zenodo.org/communities/wahr
License: Creative Commons Attribution 4.0 International
Web archive derivatives of the Global Webcomics Web Archive collection from the Ivy Plus Libraries Confederation. The derivatives were created with the Archives Unleashed Toolkit and Archives Unleashed Cloud.
The ivy-10181-parquet.tar.gz derivatives are in the Apache Parquet format, which is a columnar storage format. These derivatives are generally small enough to work with on your local machine, and can be easily converted to Pandas DataFrames. See this notebook for examples.
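Once the tarball is extracted, each derivative can be read into pandas with `pd.read_parquet` (pyarrow or fastparquet must be installed). A minimal sketch of working with the Domains derivative; the path in the comment is an assumption about the tarball layout, and the rows below are invented stand-ins, not data from the collection:

```python
import pandas as pd

# With the real data you would point pandas at a directory inside the
# extracted tarball, e.g. (path is an assumption about the layout):
#   domains = pd.read_parquet("ivy-10181-parquet/domains")
# A toy frame with the same two columns stands in here.
domains = pd.DataFrame({"domain": ["xkcd.com", "example.com", "smbc-comics.com"],
                        "count": [120, 4, 87]})

# Most frequently captured domains first
top = domains.sort_values("count", ascending=False).reset_index(drop=True)
print(top.loc[0, "domain"])  # → xkcd.com
```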
Domains
.webpages().groupBy(ExtractDomainDF($"url").alias("domain")).count().sort($"count".desc)
Produces a DataFrame with the following columns:
domain
count
Web Pages
.webpages().select($"crawl_date", $"url", $"mime_type_web_server", $"mime_type_tika", RemoveHTMLDF(RemoveHTTPHeaderDF($"content")).alias("content"))
Produces a DataFrame with the following columns:
crawl_date
url
mime_type_web_server
mime_type_tika
content
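The Web Pages table is the one most often mined for text. A pandas sketch using toy rows that mirror the schema above (with the real data you would `pd.read_parquet` the extracted webpages directory; the rows and search term here are illustrative):

```python
import pandas as pd

# Toy rows mirroring the Web Pages schema; invented for illustration.
pages = pd.DataFrame({
    "crawl_date": ["20200201", "20200201", "20200202"],
    "url": ["https://xkcd.com/1/", "https://example.com/about", "https://xkcd.com/2/"],
    "mime_type_web_server": ["text/html", "text/html", "text/html"],
    "mime_type_tika": ["text/html", "text/html", "text/html"],
    "content": ["a comic about code", "about this site", "another comic"],
})

# Keep only pages whose extracted text mentions a term of interest
hits = pages[pages["content"].str.contains("comic", case=False)]
print(len(hits))  # → 2
```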
Web Graph
.webgraph()
Produces a DataFrame with the following columns:
crawl_date
src
dest
anchor
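The page-level web graph can be collapsed into a weighted domain-level edge list before loading it into a network tool. A sketch with invented edges that mirror the schema above; with the real data you would read the extracted Parquet directory instead:

```python
import pandas as pd
from urllib.parse import urlparse

# Toy edges mirroring the Web Graph schema (crawl_date, src, dest, anchor).
edges = pd.DataFrame({
    "crawl_date": ["20200201", "20200201", "20200201"],
    "src":  ["https://xkcd.com/1/", "https://xkcd.com/1/", "https://example.com/"],
    "dest": ["https://xkcd.com/2/", "https://example.com/", "https://xkcd.com/1/"],
    "anchor": ["next", "a friend", "a comic"],
})

# Collapse page-level links into a weighted domain-level edge list
for col in ("src", "dest"):
    edges[col + "_domain"] = edges[col].map(lambda u: urlparse(u).netloc)
domain_graph = (edges.groupby(["src_domain", "dest_domain"])
                     .size()
                     .reset_index(name="weight"))
print(len(domain_graph))  # → 3
```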
Image Links
.imageLinks()
Produces a DataFrame with the following columns:
src
image_url
Binary Analysis
Derivatives are also provided for the following binary types:
Audio
Images
PDFs
Presentation program files
Spreadsheets
Text files
Word processor files
The ivy-10181-auk.tar.gz derivatives are the standard set of web archive derivatives produced by the Archives Unleashed Cloud.
A Gephi file, which can be loaded directly into Gephi; basic network characteristics and a layout are already computed.
A raw network file, which can also be loaded into Gephi; you will need to compute the layout yourself.
A full-text file, in which each website within the web archive collection has its full text presented on one line, along with information about when it was crawled, the name of the domain, and the full URL of the content.
A domains count file: a text file containing the frequency count of the domains captured within the web archive.
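The domains count file can be read with the standard library alone. A sketch that assumes a simple `domain,count` line layout; the sample string below is hypothetical, so inspect the actual file before parsing:

```python
import csv
import io

# Hypothetical contents standing in for the domains count file; the real
# file's exact layout should be checked first ("domain,count" is assumed).
sample = "xkcd.com,120\nexample.com,4\nsmbc-comics.com,87\n"

counts = {domain: int(n) for domain, n in csv.reader(io.StringIO(sample))}
print(max(counts, key=counts.get))  # → xkcd.com
```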