How to use SLSCU/thai-dialect_thai-central_model with ESPnet:
import soundfile
from espnet2.bin.asr_inference import Speech2Text

model = Speech2Text.from_pretrained("SLSCU/thai-dialect_thai-central_model")
speech, rate = soundfile.read("speech.wav")
text, *_ = model(speech)[0]
Model Card for SLSCU/thai-dialect_thai-central_model
This is the baseline model for Thai-central in the Thai-dialect corpus.
The training recipe is based on the WSJ recipe in ESPnet.
Model Description
This model is a Hybrid CTC/Attention model with pre-trained HuBERT as the encoder.
This model was trained on Thai-central to serve as a supervised pre-trained model for fine-tuning to other Thai dialects (Experiment 2 in the paper).
We provide demo code for running inference with this model on Colab here. (Note that you cannot run inference on more than 4 seconds of audio with free Google Colab.)
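If you need to transcribe longer recordings under that limit, one workaround is to split the waveform into chunks of at most 4 seconds and transcribe each chunk separately. The sketch below is an assumption-laden illustration (the helper `split_audio` is hypothetical, not part of ESPnet, and cutting at fixed boundaries may split words mid-utterance):

```python
def split_audio(speech, rate, max_seconds=4.0):
    """Split a 1-D waveform into consecutive chunks of at most max_seconds each.

    Works on plain lists or NumPy arrays; the last chunk may be shorter.
    """
    chunk_len = int(rate * max_seconds)
    return [speech[i:i + chunk_len] for i in range(0, len(speech), chunk_len)]

# Hypothetical usage with the Speech2Text model loaded above:
# texts = [model(chunk)[0][0] for chunk in split_audio(speech, rate)]
```

Overlapping the chunks slightly and merging the transcripts would reduce boundary errors, at the cost of extra compute.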
Evaluation
For evaluation, the metrics are CER and WER. Before WER evaluation, transcriptions were re-tokenized using the newmm tokenizer in PyThaiNLP:
from pythainlp import word_tokenize

tokenized_sentence_list = word_tokenize(<your_sentence>, engine="newmm")
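Because Thai is written without spaces, the WER depends on this tokenization step: both reference and hypothesis are tokenized, then scored at the token level. A minimal sketch of such scoring (a standard Levenshtein-based formulation, not the paper's exact scoring script) is:

```python
# Token-level WER: Levenshtein edit distance divided by reference length.
def edit_distance(ref, hyp):
    # One-row dynamic-programming Levenshtein over token lists.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            # min of: deletion, insertion, substitution/match
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[len(hyp)]

def wer(ref_tokens, hyp_tokens):
    return edit_distance(ref_tokens, hyp_tokens) / len(ref_tokens)

# With PyThaiNLP tokenization (hypothetical reference/hypothesis strings):
# wer(word_tokenize(reference), word_tokenize(hypothesis))
```

CER is the same computation applied to the raw character sequences instead of tokens, which is why it needs no tokenizer.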
The CER and WER results on the test set are:
CER = 2.0
WER = 6.9
Paper
@inproceedings{suwanbandit23_interspeech,
author={Artit Suwanbandit and Burin Naowarat and Orathai Sangpetch and Ekapol Chuangsuwanich},
title={{Thai Dialect Corpus and Transfer-based Curriculum Learning Investigation for Dialect Automatic Speech Recognition}},
year={2023},
booktitle={Proc. INTERSPEECH 2023},
pages={4069--4073},
doi={10.21437/Interspeech.2023-1828}
}