added blog

README.md CHANGED
Each subcorpus has its own HF repo. The cleaning of `AkademikDerlem` mostly focuses on fixing OCR mistakes.
The cleaning pipelines of `Temiz mC4` and `Temiz OSCAR` focus on filtering out low-quality content, such as ads, repetitive content, and adult content.
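As a rough illustration of the kind of repetition filtering such pipelines apply, here is a minimal sketch, not the actual `Temiz mC4`/`Temiz OSCAR` code; the heuristic and the threshold value are our own assumptions:

```python
# Illustrative sketch only (NOT the BellaTurca pipeline): drop web
# documents dominated by repeated lines, a common heuristic for
# boilerplate- and ad-heavy pages.
def repeated_line_ratio(doc: str) -> float:
    """Fraction of non-empty lines that are duplicates of an earlier line."""
    lines = [line.strip() for line in doc.splitlines() if line.strip()]
    if not lines:
        return 0.0
    return 1.0 - len(set(lines)) / len(lines)

def keep_document(doc: str, max_ratio: float = 0.3) -> bool:
    # 0.3 is an arbitrary example threshold, not a published setting.
    return repeated_line_ratio(doc) <= max_ratio

spam = "Hemen satın al!\n" * 10 + "Biraz gerçek metin.\n"
print(keep_document(spam))                          # heavily repetitive -> dropped
print(keep_document("Birinci satır.\nİkinci satır.\n"))  # all unique -> kept
```

Real pipelines usually combine several such signals (line repetition, n-gram repetition, length, blocklists); this shows only the simplest one.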
`OzenliDerlem`'s text had already been selected from websites promising high-quality content, yet we still ran several rounds of cleaning for quality. `ForumSohbetleri` was cleaned carefully, especially for undesired characters, without losing emoji characters.
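One way such character cleaning can preserve emoji is sketched below. This is an illustrative assumption, not the actual `ForumSohbetleri` pipeline; the allowed punctuation set and the emoji ranges are our own choices:

```python
import re

# Illustrative sketch only (NOT the BellaTurca pipeline): remove
# undesired characters (control/format chars, stray box-drawing
# symbols, etc.) while keeping letters (including Turkish ç, ğ, ı,
# ö, ş, ü), digits, whitespace, common punctuation, and emoji.
EMOJI_RANGES = (
    "\U0001F300-\U0001FAFF"  # emoticons, symbols & pictographs
    "\u2600-\u27BF"          # dingbats and miscellaneous symbols
)

# Anything NOT in the allowed set gets removed.
UNDESIRED = re.compile(rf"[^\w\s.,;:!?'\"()\-{EMOJI_RANGES}]")

def clean_text(text: str) -> str:
    text = UNDESIRED.sub("", text)
    # Collapse whitespace runs left behind by the removals.
    return re.sub(r"\s+", " ", text).strip()

# Zero-width space and block characters are dropped, emoji survives:
print(clean_text("Merhaba!\u200b çok güzel 😊 ▒▒"))  # -> Merhaba! çok güzel 😊
```

An allow-list like this is generally safer for emoji than a deny-list, since new emoji blocks keep being added to Unicode.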

For more details about the cleaning pipeline, the compilation process, and more about Bella Turca, please refer to [the publication](https://link.springer.com/chapter/10.1007/978-3-031-70563-2_16) as well as the [blog post](https://turkish-nlp-suite.github.io/2025/10/05/bellaturca/).

| Dataset | num instances | size | num of words |