Perception-LM-1B INT4 Quantized

This repository contains a 4-bit quantized version of Perception-LM-1B, optimized for reduced memory usage and faster inference while retaining most of the capabilities of the full-precision model.

βš™οΈ Model Description

  • Base model: facebook/Perception-LM-1B
  • Quantization: 4-bit integer (INT4) weights; a sketch of a typical recipe follows this list.
  • Purpose: Provide a lighter, more resource-efficient variant for inference, deployment on resource-constrained hardware, or quick prototyping.
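
The exact quantization recipe used for this checkpoint is not documented here. The sketch below shows one common way to produce a 4-bit checkpoint with bitsandbytes via BitsAndBytesConfig; the settings are assumptions for illustration, not the actual recipe behind this repository.

```python
import torch
from transformers import AutoModelForImageTextToText, BitsAndBytesConfig

# Hypothetical recipe: NF4 4-bit weights with double quantization (bitsandbytes).
# The actual settings used for this checkpoint may differ.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForImageTextToText.from_pretrained(
    "facebook/Perception-LM-1B",
    quantization_config=bnb_config,
    device_map="auto",
)
model.save_pretrained("Perception-LM-1B-Int4bit")  # serialize the quantized weights
```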

✅ Intended Use & Use Cases

This quantized model is suited for:

  • Fast inference when GPU or CPU memory is limited
  • Prototyping or integrating into applications where resource efficiency matters
  • Use in research or production pipelines where quantization is acceptable

⚠️ Limitations (Things to Watch Out For)

  • Quantization can introduce slight degradation compared to the full-precision model: responses may be less accurate or fluent in edge cases.
  • Not recommended for use cases that require maximum fidelity (e.g. very fine-grained reasoning or safety-critical tasks).
  • Performance depends on hardware: quantized weights may require specific inference settings (device map, memory constraints); see the sketch after this list.
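
If memory is constrained, the model can be loaded with an explicit device map and memory budget through the loader arguments transformers passes to accelerate. A minimal sketch; the budget values are placeholders to tune for your hardware, not recommendations:

```python
from transformers import AutoModelForImageTextToText

# Cap usage on GPU 0 and allow remaining layers to be offloaded to CPU RAM.
# The numbers below are illustrative placeholders.
model = AutoModelForImageTextToText.from_pretrained(
    "Dhruvil03/Perception-LM-1B-Int4bit",
    device_map="auto",
    max_memory={0: "3GiB", "cpu": "8GiB"},
)
```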

🔄 How to Use

Here is an example of how to load the quantized model and run video inference with transformers:


```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "Dhruvil03/Perception-LM-1B-Int4bit"

processor = AutoProcessor.from_pretrained(model_id)

# Quantized checkpoints are placed on device at load time via device_map;
# 4-bit (bitsandbytes-style) models generally do not support a later .to("cuda").
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # dtype for the non-quantized modules
    device_map="auto",
).eval()

conversation = [{
    "role": "user",
    "content": [
        {"type": "video", "url": "test.mp4"},
        {"type": "text", "text": "Can you describe the video in detail?"},
    ],
}]

inputs = processor.apply_chat_template(
    conversation,
    num_frames=16,   # lower this if GPU memory is tight
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    video_load_backend="pyav",
)

inputs = {k: (v.to("cuda") if hasattr(v, "to") else v) for k, v in inputs.items()}  # move tensors to the GPU

with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=64)

# decode only the newly generated tokens (everything after the prompt)
prompt_len = inputs["input_ids"].shape[1]
decoded = processor.batch_decode(outputs[:, prompt_len:], skip_special_tokens=True)
print(decoded[0])
```
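
The same chat-template flow also works for single images. A minimal sketch that reuses the processor and model from the snippet above; it assumes the chat template accepts "image" entries, and "photo.jpg" is a placeholder path:

```python
# Assumes `processor`, `model`, and `torch` from the video example above.
# "photo.jpg" is a placeholder; point it at a real local image or URL.
conversation = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "photo.jpg"},
        {"type": "text", "text": "Describe this image in detail."},
    ],
}]

inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)
inputs = {k: (v.to("cuda") if hasattr(v, "to") else v) for k, v in inputs.items()}

with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=64)

prompt_len = inputs["input_ids"].shape[1]
print(processor.batch_decode(outputs[:, prompt_len:], skip_special_tokens=True)[0])
```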