---
language:
- en
- zh
license: other
license_name: mixed-cc-by-4.0-apache-2.0
task_categories:
- text-to-speech
pretty_name: Multilingual TTS Dataset (LJSpeech Format)
size_categories:
- 10K<n<100K
tags:
- audio
- speech
- tts
- text-to-speech
- multilingual
- english
- chinese
---

# Multilingual TTS Dataset (LJSpeech Format)

A high-quality multilingual Text-to-Speech dataset combining English and Chinese speech data, optimized for TTS training and suitable for commercial use.

## 🎯 Quick Start

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("ayousanz/multi-dataset-v2")

# Access data
for item in dataset["train"]:
    audio = item["audio"]          # 22050Hz mono audio
    text = item["transcription"]   # Original text
    speaker = item["speaker_id"]   # Speaker identifier  
    language = item["language"]    # "en" or "zh"
```

## 📊 Dataset Statistics

| **Metric** | **Value** |
|------------|-----------|
| **Total Duration** | 97.2 hours |
| **Total Utterances** | 95,568 |
| **Languages** | English, Chinese |
| **Speakers** | 421 unique speakers |
| **Audio Format** | 22050Hz, 16-bit, mono WAV |

### Language Breakdown

| **Language** | **Hours** | **Speakers** | **Utterances** |
|--------------|-----------|--------------|----------------|
| English | 48.6 | 247 | 32,310 |
| Chinese | 48.6 | 174 | 63,258 |

### Duration Distribution

| **Range** | **Count** | **Percentage** |
|-----------|-----------|----------------|
| 0-2s | 28,555 | 29.9% |
| 2-5s | 48,261 | 50.5% |
| 5-10s | 14,167 | 14.8% |
| 10-15s | 3,417 | 3.6% |
| 15-20s | 1,168 | 1.2% |
| 20s+ | 0 | 0.0% |

## πŸ“ Repository Structure

```
├── audio/                          # Audio files (ZIP compressed)
│   ├── train_english.zip          # English training audio
│   ├── train_chinese.zip          # Chinese training audio
│   ├── validation_english.zip     # English validation audio
│   ├── validation_chinese.zip     # Chinese validation audio
│   ├── test_english.zip           # English test audio
│   └── test_chinese.zip           # Chinese test audio
├── metadata/                       # Metadata files
│   ├── train.csv                  # Training metadata (all languages)
│   ├── validation.csv             # Validation metadata (all languages)
│   └── test.csv                   # Test metadata (all languages)
├── dataset_info.json              # Dataset statistics and info
├── multilingual_tts_ljspeech.py   # Dataset loader script
└── README.md                      # This file
```

## 💾 Download Instructions

### Option 1: Using Hugging Face CLI (Recommended)

```bash
# Install Hugging Face CLI
pip install huggingface-hub

# Download entire dataset
huggingface-cli download ayousanz/multi-dataset-v2 --repo-type dataset --local-dir ./multilingual-tts

# Download specific files only
huggingface-cli download ayousanz/multi-dataset-v2 audio/train_english.zip metadata/train.csv --repo-type dataset --local-dir ./multilingual-tts
```

### Option 2: Using Python

```python
from huggingface_hub import snapshot_download

# Download entire dataset
snapshot_download(
    repo_id="ayousanz/multi-dataset-v2",
    repo_type="dataset", 
    local_dir="./multilingual-tts"
)
```
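
If you only need a few files rather than the full snapshot, `hf_hub_download` fetches them individually. The paths below mirror the CLI example and the repository structure above:

```python
from huggingface_hub import hf_hub_download

# Download a single metadata file and one audio archive
csv_path = hf_hub_download(
    repo_id="ayousanz/multi-dataset-v2",
    repo_type="dataset",
    filename="metadata/train.csv",
    local_dir="./multilingual-tts",
)
zip_path = hf_hub_download(
    repo_id="ayousanz/multi-dataset-v2",
    repo_type="dataset",
    filename="audio/train_english.zip",
    local_dir="./multilingual-tts",
)
print(csv_path, zip_path)
```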

### Extracting Audio Files

After downloading, extract the ZIP files:

```bash
cd multilingual-tts
for zip_file in audio/*.zip; do
    unzip "$zip_file" -d audio_extracted/
done
```
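
If you prefer to keep everything in Python, the standard-library `zipfile` module does the same thing:

```python
import zipfile
from pathlib import Path

audio_dir = Path("multilingual-tts/audio")
out_dir = Path("multilingual-tts/audio_extracted")
out_dir.mkdir(parents=True, exist_ok=True)

# Extract every ZIP archive into a single output directory
for zip_path in sorted(audio_dir.glob("*.zip")):
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)
```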

## 🚀 Usage Examples

### Basic Usage

```python
from datasets import load_dataset

dataset = load_dataset("ayousanz/multi-dataset-v2")

# Filter by language
english_data = dataset["train"].filter(lambda x: x["language"] == "en")
chinese_data = dataset["train"].filter(lambda x: x["language"] == "zh")

# Filter by speaker
speaker_data = dataset["train"].filter(lambda x: x["speaker_id"] == "en_1234")
```
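
If your model expects a different sampling rate than the native 22050 Hz, the `datasets` library can resample on the fly when the audio column is decoded. A short sketch, using 16 kHz purely as an example target:

```python
from datasets import Audio

# Decode audio at 16 kHz instead of the native 22050 Hz
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
print(dataset["train"][0]["audio"]["sampling_rate"])  # 16000
```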

### For TTS Training

```python
# Example with PyTorch DataLoader
from torch.utils.data import DataLoader

def collate_fn(batch):
    audios = [item["audio"]["array"] for item in batch]
    texts = [item["transcription"] for item in batch]
    speakers = [item["speaker_id"] for item in batch]
    return audios, texts, speakers

dataloader = DataLoader(
    dataset["train"], 
    batch_size=32, 
    collate_fn=collate_fn,
    shuffle=True
)
```
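
The `collate_fn` above returns plain Python lists; most models also need the waveforms padded into a single tensor. A minimal sketch of a padding collate function, assuming the same field names as above:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def padded_collate_fn(batch):
    # Convert each waveform to a float tensor and pad to the longest in the batch
    audios = [torch.as_tensor(item["audio"]["array"], dtype=torch.float32) for item in batch]
    lengths = torch.tensor([a.shape[0] for a in audios])
    padded = pad_sequence(audios, batch_first=True)  # shape: (batch, max_len)
    texts = [item["transcription"] for item in batch]
    speakers = [item["speaker_id"] for item in batch]
    return padded, lengths, texts, speakers
```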

## 📋 Data Format

Each sample contains:

- **`audio_id`**: Unique identifier for the audio file
- **`audio`**: Audio data (22050Hz, 16-bit, mono)
- **`transcription`**: Original text transcription
- **`normalized_text`**: Normalized text for TTS training
- **`speaker_id`**: Speaker identifier with language prefix (`en_*` or `zh_*`)
- **`language`**: Language code (`en` for English, `zh` for Chinese)
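
These fields map directly onto what `load_dataset` returns, so inspecting a single sample looks like this (the commented values are illustrative):

```python
from datasets import load_dataset

ds = load_dataset("ayousanz/multi-dataset-v2", split="train")
sample = ds[0]

print(sample["audio_id"], sample["speaker_id"], sample["language"])
print(sample["audio"]["sampling_rate"])  # 22050
print(len(sample["audio"]["array"]))     # number of waveform samples
print(sample["transcription"])
print(sample["normalized_text"])
```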

## 📜 License

This dataset combines data from multiple sources:

- **English data (LibriTTS-R)**: CC BY 4.0 - requires attribution
- **Chinese data (AISHELL-3)**: Apache 2.0

### Attribution Requirements

When using this dataset, please cite:

```bibtex
@dataset{multilingual_tts_ljspeech,
  title={Multilingual TTS Dataset in LJSpeech Format},
  year={2024},
  note={English: LibriTTS-R (CC BY 4.0), Chinese: AISHELL-3 (Apache 2.0)}
}
```

## 🔗 Source Datasets

- **LibriTTS-R**: https://openslr.org/141/
- **AISHELL-3**: https://openslr.org/93/

## ⚡ Performance Notes

- Audio files are stored in ZIP format for faster download
- Use `datasets` library's built-in caching for optimal performance
- Consider using `streaming=True` for large-scale training to save memory (see the sketch below)
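
Building on the last point, streaming iterates over samples without materializing the whole 97-hour dataset on disk. Whether it works end-to-end depends on the dataset's loader script, so treat this as a sketch:

```python
from datasets import load_dataset

# Stream samples instead of downloading and caching everything up front
streamed = load_dataset("ayousanz/multi-dataset-v2", split="train", streaming=True)

for i, item in enumerate(streamed):
    print(item["language"], item["transcription"][:50])
    if i >= 4:  # peek at the first few samples only
        break
```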

## 🤝 Contributing

Found an issue? Please report it on the [dataset's discussions page](https://huggingface.co/datasets/ayousanz/multi-dataset-v2/discussions).