Published December 9, 2023 | Version v1

DistilKaggle: a distilled dataset of Kaggle Jupyter notebooks

Description

Overview

DistilKaggle is a curated dataset extracted from Kaggle Jupyter notebooks spanning September 2015 to October 2023. It is distilled from a download of over 300 GB of Kaggle kernels, retaining only the data essential for research purposes. The dataset comprises exclusively publicly available Python Jupyter notebooks from Kaggle. The metadata needed to identify and download these kernels was obtained from the MetaKaggle dataset provided by Kaggle.

Contents

The DistilKaggle dataset consists of three main CSV files:

code.csv: Contains over 12 million rows of code cells extracted from Kaggle kernels. Each row is identified by the kernel's ID and cell index, so notebooks can be reconstructed in their original cell order.

markdown.csv: Includes over 5 million rows of markdown cells extracted from Kaggle kernels. As in code.csv, each row is identified by the kernel's ID and cell index.

notebook_metrics.csv: Provides the notebook features described in the paper accompanying this dataset, covering metrics for over 517,000 Python notebooks. A minimal loading sketch follows this list.
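Given the file sizes, a careful loading pattern helps. Below is a minimal sketch using pandas; the column names kernel_id, cell_index, and source are assumptions, so check the actual CSV headers after downloading.

    import pandas as pd

    # Column names below (kernel_id, cell_index, source) are assumed;
    # verify them against the real CSV headers.
    code = pd.read_csv("code.csv")          # ~12M rows; pass chunksize= if memory is tight
    markdown = pd.read_csv("markdown.csv")  # ~5M rows

    # Reconstruct the ordered code cells of one kernel.
    some_kernel = code["kernel_id"].iloc[0]
    cells = (code.loc[code["kernel_id"] == some_kernel]
                 .sort_values("cell_index")["source"]
                 .tolist())
    print(f"kernel {some_kernel}: {len(cells)} code cells")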

Directory Structure

The kernels directory is organized by Kaggle's Performance Tiers (PTs), Kaggle's ranking system for classifying users. It contains one directory per PT, each holding the IDs of the users in that tier, download logs, and the information needed to download those users' notebooks.

The utility directory contains two important files:

aggregate_data.py: A Python script that aggregates the per-PT data into the three CSV files described above.

application.ipynb: A Jupyter notebook serving as a simple example application of the metrics dataframe. It demonstrates predicting the author's PT from the notebook metrics; a hedged sketch of the same idea appears after this file list.

DistilKaggle.tar.gz: A compressed archive of the entire dataset. If you have already downloaded the other files individually, there is no need to download this archive.
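For orientation, here is a hedged sketch of the kind of experiment application.ipynb demonstrates, not the notebook's actual code: a scikit-learn classifier predicting the author's PT from the notebook metrics. The label column name pt and the presence of a kernel_id identifier column are assumptions.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    metrics = pd.read_csv("notebook_metrics.csv")

    # "pt" as the tier label and "kernel_id" as an identifier are assumed
    # column names; adjust them to the real headers.
    y = metrics["pt"]
    X = (metrics.select_dtypes("number")
                .drop(columns=["pt", "kernel_id"], errors="ignore"))

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))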

Usage

Researchers can leverage this distilled dataset for a wide range of analyses without handling the bulk of the original 300 GB download. For access to the raw, unprocessed Kaggle kernels, see the Note below.
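As one illustration, the sketch below computes a simple documentation-density statistic, markdown cells per code cell, for each kernel; again, the kernel_id column name is an assumption.

    import pandas as pd

    # Load only the (assumed) kernel_id column to keep memory usage low.
    code_ids = pd.read_csv("code.csv", usecols=["kernel_id"])
    md_ids = pd.read_csv("markdown.csv", usecols=["kernel_id"])

    code_counts = code_ids.value_counts("kernel_id").rename("code_cells")
    md_counts = md_ids.value_counts("kernel_id").rename("markdown_cells")

    density = (pd.concat([code_counts, md_counts], axis=1)
                 .fillna(0)
                 .query("code_cells > 0")  # skip kernels with no code cells
                 .assign(md_per_code=lambda d: d["markdown_cells"] / d["code_cells"]))
    print(density["md_per_code"].describe())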

Note

The original dataset of Kaggle kernels is substantial, exceeding 300 GB, making it impractical for direct upload to Zenodo. Researchers interested in the full dataset can contact the dataset maintainers for access.

Citation

If you use this dataset in your research, please cite the accompanying paper or provide appropriate acknowledgment as outlined in the documentation.

If you have any questions regarding the dataset, don't hesitate to contact me at mohammad.abolnejadian@gmail.com.

Thank you for using DistilKaggle!

Files (7.1 GB)

Name      Size      MD5 checksum
code.csv  3.9 GB    b0dc06bbdfbdf3203afbbbe9f1661bd4
          1.6 GB    0796053f90b0c17e9454373ee266a5d9
          55.3 MB   e7c5f315c35b848b112e05158e26bbbd
          1.5 GB    00d45f084e6ad9f008618f1bd747f4d5
          121.1 MB  8168f24a9b13f77461227f7c598d2a01
          2.5 kB    8880133e299861e8496f8c89020b6c7e
          236.7 kB  85c22f790cb3952746dbda316f626e8b