The Pile: An 800GB Dataset of Diverse Text for Language Modeling


What is the Pile?

The Pile is an 825 GiB diverse, open-source language modeling
dataset made up of 22 smaller, high-quality datasets combined.

Download

Have a model that uses or evaluates on the Pile?
Let us know!

Why is the Pile a good training set?

Recent work has shown that, especially for large models, diversity
in data sources improves the model's general cross-domain knowledge
as well as its downstream generalization capability. In our
evaluations, not only do models trained on the Pile show moderate
improvements in traditional language modeling benchmarks, they
also show significant improvements on Pile BPB.

Why is the Pile a good benchmark?

To score well on Pile BPB (bits per byte), a model must be able to
understand many disparate domains, including books, GitHub
repositories, webpages, chat logs, and medical, physics, math,
computer science, and philosophy papers. Pile BPB is a measure of
world knowledge and reasoning ability in these domains, making it
a robust benchmark of general, cross-domain text modeling ability
for large language models.
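
For intuition, here is a minimal sketch (not the Pile's official evaluation code) of how bits per byte can be computed from a model's summed cross-entropy loss; the function name and the example numbers are illustrative assumptions.

    import math

    def bits_per_byte(total_loss_nats: float, num_bytes: int) -> float:
        """Convert a document's total cross-entropy loss (in nats)
        into bits per byte (BPB)."""
        # Convert nats to bits (divide by ln 2), then normalize by the
        # number of UTF-8 bytes in the evaluated text.
        return total_loss_nats / (math.log(2) * num_bytes)

    # Hypothetical example: a total loss of 2,000 nats over a
    # 5,000-byte document gives 2000 / (ln(2) * 5000) ~= 0.577 BPB.
    print(bits_per_byte(2000.0, 5000))

Because the normalizer is the byte count of the raw text rather than the token count, BPB is comparable across models with different tokenizers.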

Leaderboard

* indicates potential test-set overlap. Zero-shot indicates that
not all of the components of the Pile were present in the training
data.

Rank  Date         Model               Organization  Test BPB
1.    Jan 1, 2021  GPT-3 (Zero-Shot)*  OpenAI        0.7177
2.    Jan 1, 2021  GPT-2 (Zero-Shot)*  OpenAI        1.2253
