arkdb

Package rationale

Increasing data sizes create challenges for the fundamental tasks of publishing, distributing, and preserving data. Despite (or perhaps because of) the diverse and ever-expanding number of database and file formats, the humble plain text file, such as comma- or tab-separated values 1 (.tsv files), remains the gold standard for data archiving and distribution. These files can be read on almost any platform or tool and can be efficiently compressed using long-standing, widely available open source libraries such as gzip or bzip2. In contrast, database storage formats and dumps are usually particular to the database platform used to generate them, and are unlikely to be compatible between different database engines (e.g. PostgreSQL -> SQLite) or even between different versions of the same engine. Researchers unfamiliar with these databases will have difficulty accessing such data, and these dumps may also be in formats that are less efficient to compress.

Working with tables too large for working memory by using external relational database stores is now common R practice, thanks to the rising availability of data and the growing support and popularity of packages such as DBI, dplyr, and dbplyr. Working with plain text files becomes increasingly difficult in this context. Many R users will not have sufficient RAM to simply read a 10 GB .tsv file into R, and moving a 10 GB database out of a relational store and into a plain text file for archiving and distribution is just as challenging from R. While most relational database back-ends implement some form of COPY or IMPORT that lets them read in and export plain text files directly, these methods are not consistent across database types and are not part of the standard SQL interface. Most importantly for our case, they also cannot be called directly from R, but require a separate stand-alone installation of the database client. arkdb provides a simple solution to these two tasks.

The goal of arkdb is to provide a convenient way to move data from large compressed text files (e.g. *.tsv.bz2) into any DBI-compliant database connection (see DBI), and to move tables out of such databases into text files. The key feature of arkdb is that data is moved between databases and text files in chunks of a fixed size, allowing the package functions to work with tables that would be much too large to read into memory all at once. This will be slower than reading a file into memory in one go, but it scales to arbitrarily large data with no additional memory requirement.

Installation

You can install arkdb from GitHub with:
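For example, using the remotes package (the repository path here is assumed; check the package page for the canonical source):

# install.packages("remotes")  # if not already installed
remotes::install_github("ropensci/arkdb")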

Tutorial

Creating an archive of an existing database

First, we’ll need an example database to work with. Conveniently, there is a nice example using the NYC flights data built into the dbplyr package.
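A minimal setup, using dbplyr's helper that builds a local SQLite copy of the nycflights13 data (this assumes the nycflights13 package is installed):

library(arkdb)
library(dplyr)
library(fs)

tmp <- tempdir()
db <- dbplyr::nycflights13_sqlite(tmp)  # SQLite database holding the flights tables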

To create an archive, we just give ark the connection to the database and tell it where we want the *.tsv.bz2 files to be archived. We can also set the chunk size, i.e. the number of lines read in a single chunk. More lines per chunk usually means faster run time at the cost of higher memory requirements.
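For example, continuing with the connection created above:

dir <- fs::dir_create(fs::path(tmp, "nycflights"))
ark(db, dir, lines = 50000)  # archive every table in 50,000-line chunks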

We can take a look and confirm the files have been written. Note that we can use fs::dir_info to get a nice snapshot of the file sizes. Compare the compressed sizes to the original database:
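fs::dir_info(dir)  # paths and sizes of the compressed archive files
# compare against the SQLite file itself (file name assumed from the helper above):
fs::file_info(fs::path(tmp, "nycflights13.sqlite"))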

Unarchive

Now that we’ve gotten all the database into (compressed) plain text files, let’s get them back out. We simply need to pass unark a list of these compressed files and a connection to the database. Here we create a new local SQLite database. Note that this design means that it is also easy to use arkdb to move data between databases.

files <- fs::dir_ls(dir, glob = "*.tsv.bz2")         # all of the archived tables
new_db <- src_sqlite("local.sqlite", create = TRUE)  # fresh local SQLite database

As with ark, we can set the chunk size to control the memory footprint required:
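For example:

unark(files, new_db, lines = 50000)  # read each file back in 50,000-line chunks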

unark returns a dplyr database connection that we can use in the usual way:
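For instance, with standard dplyr verbs (the column names below come from the nycflights13 flights table):

tbl(new_db, "flights")      # lazily reference the restored table
tbl(new_db, "flights") %>%  # queries are translated to SQL and run in the database
  filter(dep_delay > 60) %>%
  count(carrier)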

A note on compression

unark can read from a variety of compression formats recognized by base R (bzip2, gzip, zip, and xz), and ark can write any of these as its compression algorithm. Note that there is a trade-off between speed of compression and efficiency (i.e. the final file size). By default, ark compresses .tsv files with the bzip2 algorithm, which is supported in base R. bzip2 offers excellent compression levels but is considerably slower to compress than gzip or zip; decompression speeds are comparable. For faster archiving when maximum file size reduction is not critical, gzip gives nearly as effective compression in significantly less time.
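To trade a somewhat larger file for a faster run, pass a different algorithm via ark's compress argument:

ark(db, dir, lines = 50000, compress = "gzip")  # faster to write, slightly larger output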

Distributing data

Once you have archived your database files with ark, consider sharing them privately or publicly as part of your project's GitHub repository using the piggyback R package. For more permanent, versioned, and citable data archiving, upload your *.tsv.bz2 files to a data repository such as Zenodo.org.
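A sketch of the piggyback workflow (the file path and repo slug here are placeholders):

piggyback::pb_upload("nycflights/flights.tsv.bz2", repo = "user/repo")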


  1. The tsv standard is particularly attractive because it side-steps some of the ambiguities present in the CSV format due to string quoting. That is: when should a comma be a literal part of a text string, and when is it the column separator? The convention that a comma inside quotation marks is a literal creates a new ambiguity: when is a quote a literal character, and when does it merely delimit a string? The IANA standard for TSV neatly avoids all of this by insisting that a tab can only ever be a separator. Quotes are therefore always literal, and you never need to worry about escaping them.