Tags: Video-Text-to-Text · Transformers · Safetensors · English · internvl_chat · feature-extraction · multimodal · custom_code · Eval Results
Instructions to use OpenGVLab/InternVL_2_5_HiCo_R16 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers

How to use OpenGVLab/InternVL_2_5_HiCo_R16 with Transformers (a fuller loading sketch follows the list):

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "OpenGVLab/InternVL_2_5_HiCo_R16",
    trust_remote_code=True,
    dtype="auto",
)
```

- Notebooks
  - Google Colab
  - Kaggle
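Building on the quickstart snippet above, here is a minimal local-inference setup; this is a sketch assuming a CUDA GPU and bfloat16 as the target precision, and the tokenizer line and `.eval()` call are standard Transformers conventions rather than something quoted from this page:

```python
# Minimal sketch: load OpenGVLab/InternVL_2_5_HiCo_R16 for GPU inference.
# Assumption: a CUDA device is available and bfloat16 is the desired precision.
import torch
from transformers import AutoModel, AutoTokenizer

model_path = "OpenGVLab/InternVL_2_5_HiCo_R16"

# The repo ships custom modeling code, so trust_remote_code=True is required.
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # cast weights while loading
).cuda().eval()
```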
Load model as torch.bfloat16 (#2)
opened by martin-q-ma
Hi Authors,
Thank you very much for releasing the code.
On line 14 of the example inference code, should it be

```python
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().cuda().to(torch.bfloat16)
```

instead of

```python
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().cuda()
```

as in https://huggingface.co/OpenGVLab/InternVideo2_5_Chat_8B?
Thanks!
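For readers comparing the two lines: `.half()` casts the float32 checkpoint to float16, so the trailing `.to(torch.bfloat16)` is a second cast applied to weights that have already passed through float16's narrower exponent range. Below is a minimal sketch of the difference; the one-step `torch_dtype` form is a standard Transformers loading option, not something quoted from either model card:

```python
import torch
from transformers import AutoModel

model_path = "OpenGVLab/InternVL_2_5_HiCo_R16"

# Two-step cast, as quoted above: float32 -> float16 -> bfloat16.
# Weight values outside float16's range overflow to inf before the
# bfloat16 cast can see them.
model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
model = model.half().cuda().to(torch.bfloat16)

# One-step alternative: cast to bfloat16 while loading, then move to GPU.
model = AutoModel.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).cuda()
```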