---
language:
- en
license: mit
library_name: sentence-transformers
pipeline_tag: text-ranking
---
# tiny-bert-ranker model card

This model is a fine-tuned version of [prajjwal1/bert-tiny](https://web.archive.org/web/20240315094214/https://huggingface.co/prajjwal1/bert-tiny)
as part of our submission to [ReNeuIR 2024](https://web.archive.org/web/20240704171521/https://reneuir.org/shared_task.html).
## Model Details

### Model Description
The model is based on the pre-trained [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny). It is fine-tuned on a 1GB subset of data
extracted from MS MARCO's [Train Triples Small](https://web.archive.org/web/20231209043304/https://microsoft.github.io/msmarco/Datasets.html).
Tiny-bert-ranker is part of our investigation into the trade-offs between efficiency and effectiveness in ranking models.
This approach does not involve BM25 score injection or distillation.
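
As a rough illustration of this kind of fine-tuning, the sketch below turns each MS MARCO (query, positive, negative) triple into two labeled (query, passage) pairs and trains with the classic sentence-transformers `CrossEncoder.fit` API. This is a minimal sketch under assumed settings: the example strings are placeholders, and the loss, batch size, and other hyperparameters shown are illustrative defaults, not necessarily those used for our submission.

```python
from torch.utils.data import DataLoader
from sentence_transformers import CrossEncoder, InputExample

# Start from the pre-trained checkpoint; num_labels=1 yields a single
# relevance score per (query, passage) pair.
model = CrossEncoder("prajjwal1/bert-tiny", num_labels=1)

# Each MS MARCO triple (query, positive, negative) becomes two labeled
# pairs. The strings below are placeholders, not real training data.
train_samples = [
    InputExample(texts=["example query", "a relevant passage"], label=1.0),
    InputExample(texts=["example query", "an irrelevant passage"], label=0.0),
]

train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=32)

# Hyperparameters here are illustrative, not the ones used for the submission.
model.fit(train_dataloader=train_dataloader, epochs=1, warmup_steps=100)
```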
- **Developed by:** Team FSU at ReNeuIR 2024
- **Model type:** encoder-only transformer fine-tuned for text ranking
- **License:** MIT
- **Finetuned from model:** prajjwal1/bert-tiny
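
## Usage

A minimal usage sketch, assuming the published checkpoint can be loaded as a sentence-transformers `CrossEncoder`; the repo id, query, and passages below are placeholders, not real identifiers or data.

```python
from sentence_transformers import CrossEncoder

# Placeholder repo id; substitute the actual Hugging Face model id.
model = CrossEncoder("tiny-bert-ranker")

query = "what causes tides"
passages = [
    "Tides are caused by the gravitational pull of the moon and sun.",
    "The stock market closed higher on Friday.",
]

# Score each (query, passage) pair; higher scores indicate higher relevance.
scores = model.predict([(query, passage) for passage in passages])

# Print passages in descending order of predicted relevance.
for score, passage in sorted(zip(scores, passages), reverse=True):
    print(f"{score:.4f}  {passage}")
```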