Update README.md

README.md CHANGED

@@ -62,33 +62,34 @@ More information needed
 
 The following hyperparameters were used during training:
 
-learning_rate: 0.000095637994662983496
-train_batch_size: 16
-eval_batch_size: 16
-seed: 13
-gradient_accumulation_steps: 16
-total_train_batch_size: 316
-optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-lr_scheduler_type: cosine_with_restarts
-lr_scheduler_warmup_steps: 500
-num_epochs: 100
-mixed_precision_training: Native AMP
+- learning_rate: 0.000095637994662983496
+- train_batch_size: 16
+- eval_batch_size: 16
+- seed: 13
+- gradient_accumulation_steps: 16
+- total_train_batch_size: 316
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: cosine_with_restarts
+- lr_scheduler_warmup_steps: 500
+- num_epochs: 100
+- mixed_precision_training: Native AMP
 
 
 ### Training results
 
-Step Training Loss Validation Loss Wer
-500 4.825900 1.001413 0.810308
-1000 0.561400 0.202275 0.361987
-1500 0.298900 0.169643 0.326449
-2000 0.236500 0.168602 0.316215
-2500 0.199100 0.182484 0.308587
-3000 0.179100 0.178076 0.303005
-3500 0.161500 0.179107 0.299935
-4000 0.151700 0.183371 0.295283
-4500 0.143700 0.184443 0.295283
-5000 0.138900 0.184265 0.292771
+Step | Training Loss | Validation Loss | Wer
+------|---------------|-----------------|----------
+500 | 4.825900 | 1.001413 | 0.810308
+1000 | 0.561400 | 0.202275 | 0.361987
+1500 | 0.298900 | 0.169643 | 0.326449
+2000 | 0.236500 | 0.168602 | 0.316215
+2500 | 0.199100 | 0.182484 | 0.308587
+3000 | 0.179100 | 0.178076 | 0.303005
+3500 | 0.161500 | 0.179107 | 0.299935
+4000 | 0.151700 | 0.183371 | 0.295283
+4500 | 0.143700 | 0.184443 | 0.295283
+5000 | 0.138900 | 0.184265 | 0.292771
 
 ### Framework versions
 - Transformers 4.16.0.dev0
 
@@ -104,4 +105,3 @@ Step Training Loss Validation Loss Wer
 python eval.py --model_id Akashpb13/xlsr_hungarian_new --dataset mozilla-foundation/common_voice_7_0 --config hu --split test
 ```
 
-
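The hyperparameter list names a `cosine_with_restarts` scheduler with 500 warmup steps. A minimal sketch of what such a schedule looks like is below; it is illustrative only, not the training script. `total_steps=5000` (the last step in the results table) and `num_cycles=2` are assumptions the card does not state, and the exact restart arithmetic may differ from the scheduler actually used.

```python
import math

def lr_at_step(step, base_lr=9.5637994662983496e-05, warmup_steps=500,
               total_steps=5000, num_cycles=2):
    """Cosine-with-restarts schedule with linear warmup (illustrative).

    Ramps linearly from 0 to base_lr over warmup_steps, then runs
    num_cycles cosine decays from base_lr down to 0, jumping back to
    base_lr at each restart.
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    if progress >= 1.0:
        return 0.0
    # Position within the current cosine cycle, in [0, 1).
    cycle_pos = (num_cycles * progress) % 1.0
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * cycle_pos))

# Sample the schedule every 250 steps to see warmup, decay, and the restart.
schedule = [lr_at_step(s) for s in range(0, 5001, 250)]
```

The rate peaks at `base_lr` exactly when warmup ends and again at each restart.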
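The `Wer` column in the results table is word error rate: word-level edit distance divided by the number of reference words. A from-scratch sketch of the metric, for readers unfamiliar with it (the card's `eval.py` presumably uses a proper library such as `jiwer`; the Hungarian example strings are made up for illustration):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution
        prev = cur
    return prev[-1] / len(ref)

# One substituted word out of three reference words.
wer("jo napot kivanok", "jo estet kivanok")
```

Lower is better: the table shows WER falling from 0.810308 at step 500 to 0.292771 at step 5000.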