# Fiction 1B
More than 1B words of narrative fiction sourced from Project Gutenberg, AO3, and Internet Archive.
## Dataset Details

### Dataset Description
This dataset contains the text of roughly 20,000 works of narrative fiction from the sources above. A genre classifier was applied at the paragraph level of each original full text to remove license text, metadata, and other content suspected not to be narrative prose.
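The classifier itself is not named here, so the following is only an illustrative sketch of paragraph-level filtering, assuming a generic zero-shot model from the transformers library; the label set and threshold are hypothetical.

```python
from transformers import pipeline

# Hypothetical paragraph-level filter: the actual classifier used for this
# dataset is not specified, so a zero-shot model stands in here.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

labels = ["narrative prose", "license text", "metadata"]  # assumed label set


def is_narrative(paragraph: str, threshold: float = 0.5) -> bool:
    """Keep a paragraph only if 'narrative prose' is the top-scoring label."""
    result = classifier(paragraph, candidate_labels=labels)
    return result["labels"][0] == "narrative prose" and result["scores"][0] >= threshold


paragraphs = [
    "Call me Ishmael.",
    "This eBook is for the use of anyone anywhere at no cost.",
]
kept = [p for p in paragraphs if is_narrative(p)]
print(kept)
```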
#### Misc
- Curated by: Shawn Rushefsky - 🤗 | github
- Funded by: Salad Technologies
- Language(s) (NLP): English
- License: MIT
### Dataset Sources

More information about specific source documents can be found in `doc_index.csv`.
- Project Gutenberg: 76.4%
- Archive of our Own (AO3): 22.2%
- Internet Archive: 1.4%
## Uses
The dataset is intended to be used for training language models on the syntactic patterns of narrative fiction.
### Direct Use
- Fill-Mask training
- Text Generation training
- Research
### Out-of-Scope Use
- Applications outside of fiction
## Dataset Structure

`data.zip` contains a CSV file in which each row holds the source, a document ID, a paragraph index, a chunk of approximately 500 words of text, and the word count for that chunk.
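As a minimal loading sketch, the CSV can be read with pandas directly out of the archive; the script below assumes the archive holds a single CSV and prints the actual column names, since they are not enumerated above.

```python
import zipfile

import pandas as pd

# Load the CSV straight out of data.zip. This assumes the archive holds a
# single CSV file; adjust the lookup if the layout differs.
with zipfile.ZipFile("data.zip") as zf:
    csv_name = next(n for n in zf.namelist() if n.endswith(".csv"))
    with zf.open(csv_name) as f:
        df = pd.read_csv(f)

print(df.columns.tolist())  # check the actual column names
print(df.head())
```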
## Dataset Creation

### Curation Rationale
While much of this content is already present in extremely large web-scraped datasets, there is a scarcity of more approachable medium-sized datasets that focus specifically on narrative fiction. Datasets such as FineWeb, with trillions of tokens, are not practical for the average developer to work with.
### Source Data

#### Data Collection and Processing

##### Project Gutenberg
Project Gutenberg hosts a catalog CSV that includes metadata such as title, author, and subjects. I filtered on the presence of fiction-related keywords in the Subjects column, then used a Python script to bulk-download the matching texts:
```python
fiction_keywords = [
    'fiction', 'novel', 'stories', 'tale', 'adventure',
    'mystery', 'romance', 'fantasy', 'horror', 'detective',
    'science fiction', 'historical fiction', 'western',
    'thriller', 'suspense',
]
```
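A minimal sketch of that filtering step, assuming the catalog has been downloaded locally as `pg_catalog.csv` (the filename and the substring-matching logic are assumptions; only the Subjects column is described above):

```python
import pandas as pd

# Case-insensitive substring match of the fiction_keywords list above
# against the Subjects column. "pg_catalog.csv" is a hypothetical local
# copy of the Project Gutenberg catalog CSV.
catalog = pd.read_csv("pg_catalog.csv")
subjects = catalog["Subjects"].fillna("").str.lower()
mask = subjects.apply(lambda s: any(kw in s for kw in fiction_keywords))
fiction_catalog = catalog[mask]
print(f"Matched {len(fiction_catalog)} of {len(catalog)} catalog entries")
```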
##### AO3
For AO3, I used the ao3-api Python package to paginate gradually through the archive, filtering to English-language works with at least 15,000 but fewer than 500,000 words, sorted by “Kudos”, a measure of reader approval.
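A hedged sketch of that loop, built on the ao3-api package's Search interface; the parameter names and pagination pattern below are assumptions to verify against the version you install.

```python
import AO3  # pip install ao3-api

# Assumed Search parameters: English works between 15k and 500k words,
# sorted by kudos. Verify these names against your ao3-api version.
search = AO3.Search(
    language="en",
    word_count=AO3.utils.Constraint(15000, 500000),
    sort_column="kudos_count",
)

for page in range(1, 4):  # paginate gradually; three pages as a demo
    search.page = page
    search.update()
    for work in search.results:
        print(work.title)
```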
##### Internet Archive
For the Internet Archive, I used their search endpoint along with a significant amount of keyword filtering. Ultimately, I did not get much content from this source due to licensing restrictions.
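The exact query is not given, so the following is only a sketch against the Internet Archive's public advanced-search endpoint, with an illustrative query string rather than the one actually used:

```python
import requests

# Illustrative query: the fields and filters here are assumptions, not the
# actual ones behind the 1.4% slice above.
resp = requests.get(
    "https://archive.org/advancedsearch.php",
    params={
        "q": 'mediatype:texts AND language:"English" AND subject:fiction',
        "fl[]": ["identifier", "title", "licenseurl"],
        "rows": 50,
        "page": 1,
        "output": "json",
    },
    timeout=30,
)
for doc in resp.json()["response"]["docs"]:
    print(doc.get("identifier"), doc.get("title"))
```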
#### Who are the source data producers?
Professional and amateur writers of long-form narrative fiction in the English language over the last few hundred years.
## Personal and Sensitive Information
This dataset contains only works of fiction.
## Bias, Risks, and Limitations
The source text comes from a diverse set of English-language narrative fiction spanning hundreds of years of authorship, and may include subject matter and phrasing that some readers will find offensive. Because so much of the Project Gutenberg material predates the civil rights movement, white male authors are vastly disproportionately represented. Additionally, contemporary commercial fiction is all but excluded due to licensing restrictions.
### Recommendations
Use at your own risk.