Instructions for using OpenGVLab/InternVL-14B-224px with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use OpenGVLab/InternVL-14B-224px with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-feature-extraction", model="OpenGVLab/InternVL-14B-224px", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("OpenGVLab/InternVL-14B-224px", trust_remote_code=True)
model = AutoModel.from_pretrained("OpenGVLab/InternVL-14B-224px", trust_remote_code=True)
```
- Notebooks
- Google Colab
- Kaggle
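The pipeline snippet above can be wrapped into a small helper for extracting image embeddings. This is a minimal sketch, not an official example: the function name `embed_image` and the image path are hypothetical, and note that `trust_remote_code=True` is required because the repository ships custom modeling code. The model weights are large, so the first call will trigger a sizable download.

```python
from transformers import pipeline


def embed_image(image_path: str):
    """Return image features for one image using the feature-extraction pipeline.

    Hypothetical helper; `image_path` should point to a local image file.
    """
    # Builds the pipeline lazily inside the function so importing this module
    # does not download the (very large) model weights.
    pipe = pipeline(
        "image-feature-extraction",
        model="OpenGVLab/InternVL-14B-224px",
        trust_remote_code=True,
    )
    # The pipeline returns nested lists of floats (one feature vector per image).
    return pipe(image_path)
```

A call would look like `features = embed_image("photo.jpg")`; the resulting vectors can then be compared with cosine similarity for CLIP-style retrieval.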