---
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: ipa
      dtype: string
    - name: speaker_code
      dtype: string
    - name: speaker_gender
      dtype: string
    - name: speaker_native_language
      dtype: string
  splits:
    - name: train
      num_bytes: 44466139
      num_examples: 129
  download_size: 44312132
  dataset_size: 44466139
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-nc-4.0
task_categories:
  - automatic-speech-recognition
language:
  - en
tags:
  - Speech
  - IPA
  - Arabic
  - Mandarin
  - Spanish
  - Hindi
  - Vietnamese
  - Korean
pretty_name: L2-ARCTIC Suitcase
size_categories:
  - n<1K
---
|
|
# L2-ARCTIC Suitcase: a spontaneous non-native English speech corpus |
|
|
The L2-ARCTIC Suitcase Corpus contains spontaneous English speech from 22 non-native speakers
with Vietnamese, Korean, Mandarin, Spanish, Hindi, and Arabic language backgrounds.
Each recording is phonemically annotated using the sounds supported by [ARPABet](https://en.wikipedia.org/wiki/ARPABET).
The corpus was compiled by researchers at [Texas A&M University](http://www.tamu.edu/) and [Iowa State University](http://www.iastate.edu/).
Read more on [their official website](https://psi.engr.tamu.edu/l2-arctic-corpus/).
|
|
|
|
|
## This Processed Version |
|
|
We have processed the dataset into an easily consumable [Hugging Face dataset](https://huggingface.co/docs/datasets/en/index) using [this data processing script](https://github.com/KoelLabs/ML/blob/main/scripts/data_loaders/L2ARCTIC.py). |
|
|
This maps the phoneme annotations to [IPA](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) as supported by libraries like [ipapy](https://pypi.org/project/ipapy/0.0.1.0/) and [panphon](https://pypi.org/project/panphon/0.5/). |
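As a rough illustration of what this mapping looks like (a simplified sketch, not the actual table used by the processing script linked above, which defines the full, authoritative mapping):

```python
# Hypothetical excerpt of an ARPABet-to-IPA mapping table, for illustration only.
ARPABET_TO_IPA = {
    "AA": "ɑ", "AE": "æ", "AH": "ʌ", "IY": "i", "UW": "u",
    "CH": "tʃ", "JH": "dʒ", "SH": "ʃ", "TH": "θ", "DH": "ð",
    "NG": "ŋ", "R": "ɹ", "Y": "j",
}

def arpabet_to_ipa(phones):
    """Convert a list of ARPABet phones to an IPA string."""
    ipa = []
    for phone in phones:
        base = phone.rstrip("012")  # drop lexical stress digits like the 0 in AH0
        ipa.append(ARPABET_TO_IPA.get(base, base.lower()))
    return "".join(ipa)

print(arpabet_to_ipa(["CH", "IY", "Z"]))  # tʃiz ("cheese")
```

The resulting IPA strings can then be validated or featurized with libraries like ipapy and panphon.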
|
|
We also correct typos and malformed annotation files, and split the original 22 samples of varied lengths into 129 samples of between 10 and 12.5 seconds each.
Splits are made at existing silences in the recordings to preserve as much of the semantic meaning of each utterance as possible and to make it clear
which portion of the transcription belongs to which of the split samples.
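The splitting strategy can be sketched roughly as follows (a simplified, self-contained illustration, not the actual processing code; the real script linked above operates on real audio and its annotations):

```python
def split_at_silences(samples, sample_rate, silence_threshold=0.01,
                      min_len_s=10.0, max_len_s=12.5):
    """Greedily cut an audio array into chunks of min_len_s..max_len_s seconds,
    preferring cut points where the signal amplitude is below silence_threshold."""
    min_len = int(min_len_s * sample_rate)
    max_len = int(max_len_s * sample_rate)
    chunks, start = [], 0
    while len(samples) - start > max_len:
        window = samples[start + min_len : start + max_len]
        # pick the quietest point in the allowed window as the candidate cut
        cut = min(range(len(window)), key=lambda i: abs(window[i]))
        # only honour the cut if it is actually silence, else cut at max_len
        if abs(window[cut]) < silence_threshold:
            end = start + min_len + cut
        else:
            end = start + max_len
        chunks.append(samples[start:end])
        start = end
    chunks.append(samples[start:])  # final (possibly shorter) chunk
    return chunks
```

With real speech, the quietest point in each window typically falls in a pause between phrases, so the chunks align with utterance boundaries.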
|
|
|
|
|
**NOTE**: we also have a [cleaned version of the full L2-ARCTIC](https://huggingface.co/datasets/KoelLabs/L2Arctic) dataset, which contains the original 22 samples with only the data cleaning applied.
|
|
|
|
|
- The train set has 129 samples (around 26 minutes of unscripted speech).
|
|
|
|
|
All audio has been converted to float32 in the -1 to 1 range at a 16 kHz sampling rate.
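For reference, the same normalization can be reproduced for signed 16-bit PCM audio from other sources (a minimal sketch; the dataset itself already stores normalized float32 audio):

```python
def pcm16_to_float(samples):
    """Normalize signed 16-bit PCM samples (-32768..32767) to floats in [-1.0, 1.0)."""
    return [s / 32768.0 for s in samples]

print(pcm16_to_float([0, 16384, -32768]))  # [0.0, 0.5, -1.0]
```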
|
|
|
|
|
## Usage |
|
|
0. Request access to [this dataset](https://huggingface.co/datasets/KoelLabs/L2ArcticSpontaneousSplit) on the Hugging Face website. You will be automatically approved upon accepting the terms. |
|
|
1. `pip install datasets` |
|
|
2. [Login to Hugging Face](https://huggingface.co/docs/huggingface_hub/en/guides/cli#huggingface-cli-login) using `huggingface-cli login` with a token that has gated read access. |
|
|
3. Use the dataset in your scripts: |
|
|
```python
from datasets import load_dataset

dataset = load_dataset("KoelLabs/L2ArcticSpontaneousSplit")
train_ds = dataset["train"]  # the only split in this dataset

sample = train_ds[0]
print(sample)
```
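Each sample is a dict whose `audio` field carries the decoded waveform and its sampling rate alongside the IPA transcription and speaker metadata. For example, a clip's duration can be computed from the waveform length (sketched here with a synthetic sample so it runs without downloading the dataset):

```python
# Synthetic stand-in for one dataset sample; real samples have the same structure.
sample = {
    "audio": {"array": [0.0] * 160_000, "sampling_rate": 16_000},
    "ipa": "həloʊ",                      # hypothetical transcription
    "speaker_code": "ABA",
    "speaker_gender": "M",
    "speaker_native_language": "Arabic",
}

duration_s = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
print(f"{duration_s:.1f} s")  # 10.0 s
```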
|
|
|
|
|
## License |
|
|
The original dataset is released under the Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0) license. A summary of the license can be found [here](https://creativecommons.org/licenses/by-nc/4.0/), and the full license text [here](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
|
|
This processed dataset follows the same license. For any usage that is not covered by this license, please contact [the dataset authors](https://psi.engr.tamu.edu/l2-arctic-corpus/). |
|
|
Please also cite their paper if you use L2-ARCTIC in any publication:
|
|
|
|
|
```bibtex |
|
|
@inproceedings{zhao2018l2arctic,
  author={Guanlong {Zhao} and Sinem {Sonsaat} and Alif {Silpachai} and Ivana {Lucic} and Evgeny {Chukharev-Hudilainen} and John {Levis} and Ricardo {Gutierrez-Osuna}},
  title={L2-ARCTIC: A Non-native English Speech Corpus},
  year={2018},
  booktitle={Proc. Interspeech},
  pages={2783--2787},
  doi={10.21437/Interspeech.2018-1110},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1110}
}
|
|
``` |