Tags: Video-Text-to-Text · Transformers · Safetensors · English · qwen3_vl · image-text-to-text · video · retrieval · reranking · qwen3-vl
Instructions for using hltcoe/RankVideo with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use hltcoe/RankVideo with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("hltcoe/RankVideo")
model = AutoModelForImageTextToText.from_pretrained("hltcoe/RankVideo")
```

- Notebooks
- Google Colab
- Kaggle
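After loading the processor and model, inputs are typically built as chat-style messages that pair a candidate video with the text query. The sketch below shows one plausible message structure; the prompt wording, the file path, and the relevance-question format are illustrative assumptions, not taken from the official model card.

```python
# Sketch: a chat-style input pairing a candidate video with a text query
# for relevance scoring. The prompt wording, file path, and question
# format are illustrative assumptions.
query = "a dog catching a frisbee on a beach"

messages = [
    {
        "role": "user",
        "content": [
            # Qwen-VL-style processors accept video entries alongside text.
            {"type": "video", "path": "candidate_clip.mp4"},
            {"type": "text", "text": f"Query: {query}\nIs this video relevant to the query?"},
        ],
    }
]

# With the processor loaded above, this could then be tokenized via, e.g.:
# inputs = processor.apply_chat_template(
#     messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
# )
```

A reranking setup would repeat this for each candidate video and sort candidates by the model's relevance score for the query.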
Add GitHub link, paper metadata, and improve model card
#1 by nielsr (HF Staff) - opened
Hi! I'm Niels, part of the community science team at Hugging Face.
This PR improves the model card for RankVideo by:
- Adding the `arxiv` metadata tag to link the repository to its research paper.
- Adding `library_name: transformers` to the metadata based on the `config.json` architecture.
- Adding a link to the official GitHub repository.
- Fixing the formatting of the usage example and the BibTeX block.
These changes make the model more discoverable and provide better context for users.
tskow21 changed pull request status to merged