Add link to paper

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +32 -24
README.md CHANGED
@@ -1,43 +1,50 @@
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path:
-     - gharchive/v0/documents/*.jsonl.gz
- task_categories:
- - text-generation
  language:
  - en
- pretty_name: GitHub Archive
  ---
- # GitHub Archive

  ## Description
- According to [GitHub’s terms of service](https://docs.github.com/en/site-policy/github-terms/github-terms-of-service), issues and pull request descriptions—along with their comments—inherit the license of their associated repository.
- To collect this data, we used the [GitHub Archive’s](https://www.gharchive.org/) public BigQuery table of events to extract all issue, pull request, and comment events since 2011 and aggregated them into threads.
- The table appeared to be missing “edit” events, so the text of each comment is the original from when it was first posted.
- We filtered out comments from bots.
- This resulted in approximately 177 million threads across 19 million repositories.
- We then removed threads whose repositories did not have a Blue Oak Council-approved license.
- License information for each repository comes from either 1) the “public-data:github_repos” BigQuery table, 2) metadata from the StackV2, or 3) the GitHub API.
- License filtering left 10 million repositories.
- PyMarkdown was used to convert from GitHub-flavored markdown to plain text.
- When parsing failed, the raw markdown was kept.
  Per-document license information is available in the `license` entry of the `metadata` field of each example.
  Code for collecting, processing, and preparing this dataset is available in the [common-pile GitHub repo](https://github.com/r-three/common-pile).
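As a quick illustration of the per-document license information described above, here is a minimal sketch of reading the `license` entry from a record's `metadata` field. Only those two field names come from this card; the example record itself is invented for illustration:

```python
import json

# Minimal sketch: read per-document license info from one JSONL record.
# The "metadata" / "license" field names follow this dataset card; the
# record below is an invented example, not taken from the actual data.
line = '{"text": "Example issue thread...", "metadata": {"license": "MIT"}}'
record = json.loads(line)

license_name = record["metadata"]["license"]
print(license_name)  # MIT
```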

  ## Dataset Statistics
  | Documents | UTF-8 GB |
  |-----------|----------|
- | 30,318,774 | 54.7 |

  ## License Issues
  While we aim to produce datasets with completely accurate licensing information, license laundering and inaccurate metadata can cause us to erroneously assign the incorrect license to some documents (for further discussion of this limitation, please see [our paper](https://huggingface.co/papers/2506.05209)). If you believe you have found an instance of incorrect licensing in this dataset, please [start a discussion](https://github.com/r-three/common-pile/discussions/new) on this repository.

  ## Other Versions
- This is the "raw" version of the GitHub Archive dataset.
- If you are looking for the filtered version used to train [Comma v0.1](https://huggingface.co/common-pile/comma-v0.1), you can find it [here](https://huggingface.co/datasets/common-pile/github_archive_filtered).

  ## Citation
  If you use this dataset, please cite:
@@ -46,6 +53,7 @@ If you use this dataset, please cite:
  title={{The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text}},
  author={Nikhil Kandpal and Brian Lester and Colin Raffel and Sebastian Majstorovic and Stella Biderman and Baber Abbasi and Luca Soldaini and Enrico Shippole and A. Feder Cooper and Aviya Skowron and Shayne Longpre and Lintang Sutawika and Alon Albalak and Zhenlin Xu and Guilherme Penedo and Loubna Ben and Elie Bakouch and John David and Honglu Fan and Dashiell Stander and Guangyu Song and Aaron Gokaslan and John Kirchenbauer and Tom Goldstein and Brian R and Bhavya Kailkhura and Tyler Murray},
  journal={arXiv preprint},
- year={2025}
+ year={2025},
+ url={https://huggingface.co/papers/2506.05209}
  }
  ```
 
  ---
  language:
  - en
+ task_categories:
+ - text-generation
+ pretty_name: News
  ---
+
+ # News

  ## Description
+ We scraped news sites that publish content under CC BY or CC BY-SA licenses according to [opennewswire](https://feed.opennewswire.org/).
+ These include [360info](https://360info.org/), [Africa is a Country](https://africasacountry.com/),
+ [Alt News](https://www.altnews.in/),
+ [Balkan Diskurs](https://balkandiskurs.com/en/),
+ [Factly](https://factly.in/),
+ [Freedom of the Press Foundation](https://freedom.press/),
+ [Agenzia Fides](https://www.fides.org/en),
+ [Global Voices](https://globalvoices.org/),
+ [Meduza](https://meduza.io/en),
+ [Mekong Eye](https://www.mekongeye.com/),
+ [Milwaukee Neighborhood News Service](https://milwaukeenns.org/),
+ [Minority Africa](https://minorityafrica.org/),
+ [New Canadian Media](https://www.newcanadianmedia.ca/),
+ [SciDev.Net](https://www.scidev.net/global/),
+ [The Solutions Journalism Exchange](https://sojoexchange.solutionsjournalism.org/),
+ [Tasnim News Agency](https://www.tasnimnews.com/en),
+ [ZimFact](https://zimfact.org/),
+ [Oxpeckers](https://oxpeckers.org),
+ [Propastop](https://www.propastop.org/en/),
+ and [The Public Record](https://thepublicrecord.ca/).
+ Plain text was extracted from the HTML using a custom pipeline, including extraction of the title and byline, which are included at the beginning of each article.
  Per-document license information is available in the `license` entry of the `metadata` field of each example.
  Code for collecting, processing, and preparing this dataset is available in the [common-pile GitHub repo](https://github.com/r-three/common-pile).
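The CC-license selection described above can be sketched as a simple filter over per-document metadata. This is a minimal illustration: the records and license strings are invented, and only the `metadata`/`license` field names come from this card (real license values may be formatted differently):

```python
# Minimal sketch: keep only CC BY / CC BY-SA documents. The records are
# invented for illustration; only metadata["license"] follows this dataset
# card, and actual license strings in the data may differ in format.
ALLOWED_LICENSES = {"CC BY", "CC BY-SA"}

records = [
    {"text": "Article A...", "metadata": {"license": "CC BY"}},
    {"text": "Article B...", "metadata": {"license": "All rights reserved"}},
    {"text": "Article C...", "metadata": {"license": "CC BY-SA"}},
]

kept = [r for r in records if r["metadata"]["license"] in ALLOWED_LICENSES]
print(len(kept))  # 2
```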

  ## Dataset Statistics
  | Documents | UTF-8 GB |
  |-----------|----------|
+ | 172,308 | 0.4 |

  ## License Issues
  While we aim to produce datasets with completely accurate licensing information, license laundering and inaccurate metadata can cause us to erroneously assign the incorrect license to some documents (for further discussion of this limitation, please see [our paper](https://huggingface.co/papers/2506.05209)). If you believe you have found an instance of incorrect licensing in this dataset, please [start a discussion](https://github.com/r-three/common-pile/discussions/new) on this repository.

  ## Other Versions
+ This is the "raw" version of the News dataset.
+ If you are looking for the filtered version used to train [Comma v0.1](https://huggingface.co/common-pile/comma-v0.1), you can find it [here](https://huggingface.co/datasets/common-pile/news_filtered).

  ## Citation
  If you use this dataset, please cite:
 
  title={{The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text}},
  author={Nikhil Kandpal and Brian Lester and Colin Raffel and Sebastian Majstorovic and Stella Biderman and Baber Abbasi and Luca Soldaini and Enrico Shippole and A. Feder Cooper and Aviya Skowron and Shayne Longpre and Lintang Sutawika and Alon Albalak and Zhenlin Xu and Guilherme Penedo and Loubna Ben and Elie Bakouch and John David and Honglu Fan and Dashiell Stander and Guangyu Song and Aaron Gokaslan and John Kirchenbauer and Tom Goldstein and Brian R and Bhavya Kailkhura and Tyler Murray},
  journal={arXiv preprint},
+ year={2025},
+ url={https://huggingface.co/papers/2506.05209}
  }
  ```