---
license: apache-2.0
---
<h1 align="center">
WenetSpeech-Chuan: A Large-Scale Sichuanese Corpus With Rich Annotation For Dialectal Speech Processing
</h1>
<p align="center">
Yuhang Dai<sup>1</sup><sup>,*</sup>, Ziyu Zhang<sup>1</sup><sup>,*</sup>, Shuai Wang<sup>4</sup><sup>,5</sup>,
Longhao Li<sup>1</sup>, Zhao Guo<sup>1</sup>, Tianlun Zuo<sup>1</sup>,
Shuiyuan Wang<sup>1</sup>, Hongfei Xue<sup>1</sup>, Chengyou Wang<sup>1</sup>,
Qing Wang<sup>3</sup>, Xin Xu<sup>2</sup>, Hui Bu<sup>2</sup>, Jie Li<sup>3</sup>,
Jian Kang<sup>3</sup>, Binbin Zhang<sup>5</sup>, Lei Xie<sup>1</sup><sup>,†</sup>
</p>
<p align="center">
<sup>1</sup> Audio, Speech and Language Processing Group (ASLP@NPU), Northwestern Polytechnical University <br>
<sup>2</sup> Beijing AISHELL Technology Co., Ltd. <br>
<sup>3</sup> Institute of Artificial Intelligence (TeleAI), China Telecom <br>
<sup>4</sup> School of Intelligence Science and Technology, Nanjing University <br>
<sup>5</sup> WeNet Open Source Community <br>
</p>
<p align="center">
📑 <a href="https://arxiv.org/abs/2509.18004">Paper</a> &nbsp;&nbsp; | &nbsp;&nbsp;
🐙 <a href="https://github.com/ASLP-lab/WenetSpeech-Chuan">GitHub</a> &nbsp;&nbsp; | &nbsp;&nbsp;
🤗 <a href="https://huggingface.co/collections/ASLP-lab/wenetspeech-chuan-68bade9d02bcb1faece65bda">HuggingFace</a>
<br>
🎤 <a href="https://aslp-lab.github.io/WenetSpeech-Chuan/">Demo Page</a> &nbsp;&nbsp; | &nbsp;&nbsp;
💬 <a href="https://github.com/ASLP-lab/WenetSpeech-Chuan?tab=readme-ov-file#contact">Contact Us</a>
</p>
<div align="center">
<img width="800px" src="https://github.com/ASLP-lab/WenetSpeech-Chuan/blob/main/src/logo/WenetSpeech-Chuan-Logo.png?raw=true" />
</div>

## Dataset
### WenetSpeech-Chuan Overview
* Contains 10,000 hours of richly annotated Chuan-Yu (Sichuanese) dialect speech, making it the largest open-source resource for Chuan-Yu dialect speech research.
* Stores all metadata in a single JSONL file, including audio path, duration, text confidence, speaker identity, SNR, DNSMOS, age, gender, and character-level timestamps. Additional metadata tags may be added in the future.
* Covers ten domains: short videos, entertainment, live streams, documentaries, audiobooks, drama, interviews, news, and others.
<div align="center">
<img src="https://github.com/ASLP-lab/WenetSpeech-Chuan/blob/main/src/figs/domain.png?raw=true" width="300" style="display:inline-block; margin-right:10px;" />
<img src="https://github.com/ASLP-lab/WenetSpeech-Chuan/blob/main/src/figs/quality_distribution.jpg?raw=true" width="300" style="display:inline-block;" />
</div>

### Metadata Format
We store all audio metadata in a standardized JSON format:

* **Core fields**: `utt_id` (unique identifier for each audio segment), `rover_result` (ROVER combination of three ASR transcriptions), `confidence` (confidence score of the text transcription), `jyutping_confidence` (confidence score of the Cantonese pinyin transcription), and `duration` (audio duration).
* **Speaker attributes**: `speaker_id`, `gender`, and `age`.
* **Audio quality metrics**: `sample_rate`, `DNSMOS`, and `SNR`.
* **Timestamp information**: `timestamp`, which records segment boundaries with `start` and `end`.
* **Extended metadata** (under the `meta_info` field): `program` (program name), `region` (geographical information), `link` (original content link), and `domain` (domain classification).
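
As a quick illustration, the sketch below streams `metadata.jsonl` and reads a few of these fields. It is a minimal example only: the key names follow the description above, the exact schema of the released file may differ slightly, and missing keys are simply tolerated.

```python
import json

# Minimal sketch: stream metadata.jsonl and print a few of the fields
# described above. Field names follow this README's description; entries
# may use a slightly different schema, so missing keys are tolerated.
with open("metadata.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)
        utt_id = entry.get("utt_id", entry.get("utt"))   # segment identifier
        duration = entry.get("duration")                 # audio duration
        confidence = entry.get("confidence")             # transcript confidence
        ts = entry.get("timestamp", {})                  # {"start": ..., "end": ...}
        meta = entry.get("meta_info", {})                # program / region / link / domain
        print(utt_id, duration, confidence, ts.get("start"), ts.get("end"), meta.get("domain"))
```
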
#### 📂 Content Tree
```
WenetSpeech-Chuan
├── metadata.jsonl
├── .gitattributes
└── README.md
```
<!-- WenetSpeech-Chuan
├── metadata.jsonl
├── audio_labels/
│ ├── wav_utt_id.jsonl
│ ├── wav_utt_id.jsonl
│ ├── ...
│ └── wav_utt_id.jsonl
├── .gitattributes
└── README.md -->
#### Data sample:
###### metadata.jsonl
{<br>
"utt": audio segment ID, <br>
"filename": audio file name (type: str), <br>
"text": transcript (type: str), <br>
"domain": domain information for reference (type: list[str]), <br>
"gender": speaker gender (type: str), <br>
"age": speaker age label (type: int range, e.g., middle-aged (36~59)), <br>
"wvmos": audio quality score (type: float), <br>
"confidence": transcript confidence score (0-1) (type: float), <br>
"emotion": speaker emotion label (type: str, e.g., angry) <br>
} <br>
**example:**
{ <br>
"utt": "013165495633_09mNC_9_5820", <br>
"filename": "013165495633_09mNC_9_5820.wav", <br>
"text": "还是选二手装好了的别墅诚心入如意的直接入住的好好", <br>
"domain": [ <br>
"短视频" <br>
], <br>
"gender": "Male", <br>
"age": "YOUTH", <br>
"wvmos": 2.124380588531494, <br>
"confidence": 0.8333, <br>
"emotion": "angry" <br>
} <br>
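
Building on the sample above, the following sketch shows one way to filter segments by transcript confidence and WVMOS score and to count the survivors per domain. The thresholds are illustrative only, and the key names (`utt`, `confidence`, `wvmos`, `domain`) are taken from the sample rather than from any official tooling.

```python
import json
from collections import Counter

# Minimal sketch: keep segments whose transcript confidence and WVMOS score
# pass illustrative thresholds, and count the surviving segments per domain.
# The thresholds are examples only, not recommended values.
MIN_CONFIDENCE = 0.8
MIN_WVMOS = 2.0

kept, domains = [], Counter()
with open("metadata.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)
        if float(entry.get("confidence", 0)) >= MIN_CONFIDENCE and \
           float(entry.get("wvmos", 0)) >= MIN_WVMOS:
            kept.append(entry["utt"])
            for d in entry.get("domain", []):
                domains[d] += 1

print(f"{len(kept)} segments kept; domain counts: {dict(domains)}")
```
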
<!-- #### Data sample(EN):
###### metadata.jsonl
{ <br>
"utt_id": Original long audio ID, <br>
"wav_utt_id": Converted long audio ID after transforming to WAV format, <br>
"source_audio_path": Path to the original long audio file, <br>
"audio_labels": Path to the label file of short audio segments cut from the converted long audio, <br>
"url": Download link for the original long audio <br>
} <br>
###### audio_labels/wav_utt_id.jsonl:
{ <br>
"wav_utt_id_timestamp": Short audio segment ID, composed of the converted long audio ID + timestamp information (type: str), <br>
"wav_utt_id_timestamp_path": Path to the short audio data (type: str), <br>
"audio_clip_id": Sequence number of this short segment within the long audio, <br>
"timestamp": Timestamp information, <br>
"wvmos_score": WVMOS score, measuring the quality of the audio segment (type: float), <br>
"text": Transcript of the audio segment corresponding to the timestamp (type: str), <br>
"text_punc": Transcript with punctuation (type: str), <br>
"spk_num": Number of speakers in the audio segment, single/multi (type: str), <br>
"confidence": Confidence score of the transcript (type: float), <br>
"emotion": Speaker’s emotion label (type: str, e.g., anger), <br>
"age": Speaker’s age label (type: int range, e.g., middle-aged (36–59)), <br>
"gender": Speaker’s gender label (type: str, e.g., male/female) <br>
} <br>
-->
### WenetSpeech-Chuan Usage
You can obtain the original audio or video source through the `link` field in the metadata file (`metadata.jsonl`), then segment the audio according to the `timestamp` field to extract the corresponding segment. For pre-processed audio data, please contact us using the information provided below.
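
As one possible way to perform that cutting step, the sketch below calls `ffmpeg` on a source recording that has already been downloaded from the `link` field and converted to WAV. It assumes the `timestamp` field stores `start` and `end` in seconds, per the metadata description above; adjust the key names and units to match the actual release.

```python
import json
import subprocess

# Minimal sketch: cut one segment out of a downloaded source recording with
# ffmpeg (which must be installed and on PATH). Assumes timestamps are stored
# as {"start": ..., "end": ...} in seconds, per the metadata description above.
def cut_segment(source_wav: str, entry: dict, out_wav: str) -> None:
    start = entry["timestamp"]["start"]
    end = entry["timestamp"]["end"]
    subprocess.run(
        ["ffmpeg", "-y", "-i", source_wav,
         "-ss", str(start), "-to", str(end), out_wav],
        check=True,
    )

# Example: cut the first segment listed in metadata.jsonl.
with open("metadata.jsonl", "r", encoding="utf-8") as f:
    entry = json.loads(f.readline())
cut_segment("source.wav", entry, f"{entry.get('utt_id', entry.get('utt'))}.wav")
```
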
## Contact
If you have any questions or would like to collaborate, feel free to reach out to our research team via email: yhdai@mail.nwpu.edu.cn or ziyu_zhang@mail.nwpu.edu.cn.
You’re also welcome to join our WeChat group for technical discussions, updates, and — as mentioned above — access to pre-processed audio data.
<p align="center">
<img src="https://github.com/ASLP-lab/WenetSpeech-Chuan/raw/main/src/figs/wechat.jpg" width="300" alt="WeChat Group QR Code"/>
<em>Scan to join our WeChat discussion group</em>
</p>
<p align="center">
<img src="https://github.com/ASLP-lab/WenetSpeech-Yue/raw/main/figs/npu@aslp.jpeg" width="300" alt="Official Account QR Code"/>
</p>