How to use cahya/whisper-tiny-id with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="cahya/whisper-tiny-id")
```

```python
# Load the processor and model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("cahya/whisper-tiny-id")
model = AutoModelForSpeechSeq2Seq.from_pretrained("cahya/whisper-tiny-id")
```

This model is a fine-tuned version of openai/whisper-tiny on the mozilla-foundation/common_voice_11_0, magic_data, titml and google/fleurs datasets. It achieves the following results on the evaluation set:
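Whisper checkpoints expect 16 kHz mono audio, so audio recorded at another rate must be resampled before it is passed to the pipeline or processor. A minimal sketch using naive linear interpolation (the array contents and rates here are illustrative; in practice a proper resampler such as librosa or torchaudio is preferable):

```python
import numpy as np

def resample_linear(audio: np.ndarray, src_rate: int, dst_rate: int = 16000) -> np.ndarray:
    """Resample a 1-D float waveform to dst_rate via linear interpolation."""
    duration = len(audio) / src_rate
    n_out = int(round(duration * dst_rate))
    src_times = np.arange(len(audio)) / src_rate
    dst_times = np.arange(n_out) / dst_rate
    return np.interp(dst_times, src_times, audio)

# One second of silence at 44.1 kHz becomes 16 000 samples at 16 kHz.
audio_44k = np.zeros(44100, dtype=np.float32)
audio_16k = resample_linear(audio_44k, 44100)
print(audio_16k.shape)  # (16000,)
```

The resampled array can then be fed to the pipeline as `pipe({"raw": audio_16k, "sampling_rate": 16000})`.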
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
The following hyperparameters were used during training.

Training results:
| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|---|---|---|---|---|
| 0.4103 | 0.66 | 1000 | 0.3802 | 27.0497 |
| 0.2682 | 1.32 | 2000 | 0.3223 | 22.9365 |
| 0.2381 | 1.99 | 3000 | 0.2884 | 20.8245 |
| 0.1606 | 2.65 | 4000 | 0.2727 | 20.1928 |
| 0.1246 | 3.31 | 5000 | 0.2596 | 18.9984 |
| 0.1344 | 3.97 | 6000 | 0.2482 | 18.7540 |
| 0.0975 | 4.63 | 7000 | 0.2471 | 18.6388 |
| 0.0916 | 5.29 | 8000 | 0.2436 | 18.9615 |
| 0.0854 | 5.96 | 9000 | 0.2413 | 18.3114 |
| 0.0812 | 6.62 | 10000 | 0.2409 | 18.2837 |
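The WER column above is the word error rate as a percentage: word-level edit distance (substitutions + deletions + insertions) divided by the number of reference words. A minimal sketch of the computation (not the exact evaluation script used for this card; libraries such as jiwer or evaluate are normally used instead):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("saya pergi ke pasar", "saya pergi pasar"))  # 25.0: one deletion out of four words
```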