Instructions for using ruv/ruvltra-medium with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- MambaSSM
How to use ruv/ruvltra-medium with MambaSSM:
```python
from mamba_ssm import MambaLMHeadModel

model = MambaLMHeadModel.from_pretrained("ruv/ruvltra-medium")
```

- llama-cpp-python
How to use ruv/ruvltra-medium with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ruv/ruvltra-medium",
    filename="ruvltra-1.1b-q4_k_m.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use ruv/ruvltra-medium with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ruv/ruvltra-medium:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf ruv/ruvltra-medium:Q4_K_M
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ruv/ruvltra-medium:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf ruv/ruvltra-medium:Q4_K_M
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf ruv/ruvltra-medium:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf ruv/ruvltra-medium:Q4_K_M
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf ruv/ruvltra-medium:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf ruv/ruvltra-medium:Q4_K_M
```
Use Docker
```shell
docker model run hf.co/ruv/ruvltra-medium:Q4_K_M
```
- LM Studio
- Jan
- vLLM
How to use ruv/ruvltra-medium with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ruv/ruvltra-medium"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ruv/ruvltra-medium",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker

```shell
docker model run hf.co/ruv/ruvltra-medium:Q4_K_M
```
- Ollama
How to use ruv/ruvltra-medium with Ollama:
```shell
ollama run hf.co/ruv/ruvltra-medium:Q4_K_M
```
- Unsloth Studio
How to use ruv/ruvltra-medium with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for ruv/ruvltra-medium to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for ruv/ruvltra-medium to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for ruv/ruvltra-medium to start chatting
```
- Docker Model Runner
How to use ruv/ruvltra-medium with Docker Model Runner:
```shell
docker model run hf.co/ruv/ruvltra-medium:Q4_K_M
```
- Lemonade
How to use ruv/ruvltra-medium with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull ruv/ruvltra-medium:Q4_K_M
```
Run and chat with the model
```shell
lemonade run user.ruvltra-medium-Q4_K_M
```
List all available models
```shell
lemonade list
```
Overview
RuvLTRA Medium hits the sweet spot between capability and resource usage, making it well suited to desktop applications, development workstations, and moderate-scale deployments.
Model Card
| Property | Value |
|---|---|
| Parameters | 1.1 Billion |
| Quantization | Q4_K_M |
| Context | 8,192 tokens |
| Size | ~669 MB |
| Min RAM | 2 GB |
| Recommended RAM | 4 GB |
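As a quick sanity check on the table above, the ~669 MB file size and 1.1 billion parameters imply roughly 4.9 effective bits per weight, in line with what Q4_K_M quantization typically yields (this is a back-of-envelope sketch; the actual GGUF file also carries metadata and embedding tables):

```python
# Back-of-envelope check of the model-card numbers: effective bits per
# weight implied by a ~669 MB file holding 1.1 billion parameters.
params = 1.1e9
size_bytes = 669e6
bits_per_weight = size_bytes * 8 / params
print(f"{bits_per_weight:.2f} bits/weight")  # → 4.87 bits/weight
```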
Quick Start

```shell
# Download
wget https://huggingface.co/ruv/ruvltra-medium/resolve/main/ruvltra-1.1b-q4_k_m.gguf

# Run inference
./llama-cli -m ruvltra-1.1b-q4_k_m.gguf \
  -p "Explain quantum computing in simple terms:" \
  -n 512 -c 8192
```
Use Cases
- Development: Code assistance and generation
- Writing: Content creation and editing
- Analysis: Document summarization
- Chat: Conversational AI applications
Integration

Rust

```rust
use ruvllm::hub::ModelDownloader;

let path = ModelDownloader::new()
    .download("ruv/ruvltra-medium", None)
    .await?;
```
Python

```python
from llama_cpp import Llama
from huggingface_hub import hf_hub_download

model_path = hf_hub_download("ruv/ruvltra-medium", "ruvltra-1.1b-q4_k_m.gguf")
llm = Llama(model_path=model_path, n_ctx=8192)
```
OpenAI-Compatible Server

```shell
python -m llama_cpp.server \
  --model ruvltra-1.1b-q4_k_m.gguf \
  --host 0.0.0.0 --port 8000
```
Performance
| Platform | Tokens/sec |
|---|---|
| M2 Pro (Metal) | 65 tok/s |
| RTX 4080 (CUDA) | 95 tok/s |
| i9-13900K (CPU) | 25 tok/s |
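To translate the throughput figures into wall-clock time, here is a small sketch estimating how long the 512-token generation from the Quick Start section would take on each platform (simple division; prompt processing and startup overhead are ignored):

```python
# Estimated wall-clock time to generate 512 tokens at the measured rates
# (ignores prompt processing and model load time).
rates_tok_s = {"M2 Pro (Metal)": 65, "RTX 4080 (CUDA)": 95, "i9-13900K (CPU)": 25}
n_tokens = 512
for platform, rate in rates_tok_s.items():
    print(f"{platform}: ~{n_tokens / rate:.1f} s")
```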
License: Apache 2.0 | GitHub: ruvnet/ruvector
TurboQuant KV-Cache Compression

RuvLTRA models are fully compatible with TurboQuant: 2-4 bit KV-cache quantization that reduces inference memory by 6-8x with <0.5% quality loss.
| Quantization | Compression | Quality Loss | Best For |
|---|---|---|---|
| 3-bit | 10.7x | <1% | Recommended: best balance |
| 4-bit | 8x | <0.5% | High quality, long context |
| 2-bit | 32x | ~2% | Edge devices, max savings |
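For intuition on what these ratios mean in absolute terms, here is a hedged sketch of full-context KV-cache memory. The layer count and hidden size below are assumptions for a generic 1.1B-class transformer (the model card does not list them), and the fp16 baseline with the 8x ratio follows the 4-bit row above:

```python
# Rough KV-cache size at 8,192-token context for a hypothetical
# 1.1B-class transformer (layers/hidden are assumptions, not from the card).
layers, hidden, ctx = 22, 2048, 8192
bytes_fp16 = 2
kv_fp16_bytes = 2 * layers * ctx * hidden * bytes_fp16  # K and V per layer
kv_fp16_mib = kv_fp16_bytes / 2**20
kv_4bit_mib = kv_fp16_mib / 8  # 8x compression, per the 4-bit row
print(f"fp16: {kv_fp16_mib:.0f} MiB -> 4-bit TurboQuant: {kv_4bit_mib:.0f} MiB")
```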
Usage with RuvLLM

```shell
cargo add ruvllm              # Rust
npm install @ruvector/ruvllm  # Node.js
```

```rust
use ruvllm::quantize::turbo_quant::{TurboQuantCompressor, TurboQuantConfig, TurboQuantBits};

let config = TurboQuantConfig {
    bits: TurboQuantBits::Bit3_5, // 10.7x compression
    use_qjl: true,
    ..Default::default()
};
let compressor = TurboQuantCompressor::new(config)?;
let compressed = compressor.compress_batch(&kv_vectors)?;
let scores = compressor.inner_product_batch_optimized(&query, &compressed)?;
```
v2.1.0 Ecosystem
- Hybrid Search: sparse + dense vectors with RRF fusion (20-49% better retrieval)
- Graph RAG: knowledge graph + community detection for multi-hop queries
- DiskANN: billion-scale SSD-backed ANN with <10ms latency
- FlashAttention-3: IO-aware tiled attention, O(N) memory
- MLA: Multi-Head Latent Attention (~93% KV-cache compression)
- Mamba SSM: linear-time selective state space models
- Speculative Decoding: 2-3x generation speedup
RuVector GitHub | ruvllm crate | @ruvector/ruvllm npm
Benchmarks (L4 GPU, 24GB VRAM)
| Metric | Result |
|---|---|
| Inference Speed | 62.6 tok/s |
| Model Load Time | 1.1s |
| Parameters | 3B |
| TurboQuant KV (3-bit) | 10.7x compression, <1% PPL loss |
| TurboQuant KV (4-bit) | 8x compression, <0.5% PPL loss |
Benchmarked on Google Cloud L4 GPU via ruvltra-calibration Cloud Run Job (2026-03-28)