vlmrun (dinesh-vlmrun)
4 followers · 1 following
AI & ML interests
None yet
Recent Activity
replied to spillai's post 2 days ago
mm-ctx: fast, multimodal context for agents.

LLM-based agents handle text incredibly well, but images, videos, and PDFs with visual content are hard to interpret. mm-ctx gives your CLI agent multimodal skills.

Try it interactively in Spaces: https://huggingface.co/spaces/vlm-run/mm-ctx
Readme: https://vlm-run.github.io/mm/
PyPI: https://pypi.org/project/mm-ctx
SKILL.md: https://github.com/vlm-run/skills/blob/main/skills/mm-cli-skill/SKILL.md

mm-ctx is meant to feel familiar: the UNIX tools we already love (find/cat/grep/wc), rebuilt for file types LLMs can't read natively and designed to work with agents via the CLI.
- mm grep "invoice #1234" ~/Downloads searches across PDFs and returns line-numbered matches
- mm cat <document>.pdf returns a metadata description of the file
- mm cat <photo>.jpg returns a caption of the photo
- mm cat <video>.mp4 returns a caption of the video

A few things we obsessed over:
- Speed: Rust core for the hot paths
- Local-first, BYO model: uses any OpenAI-compatible endpoint (Ollama, vLLM/SGLang, LMStudio) with any multimodal LLM (Gemma4, Qwen3.5, GLM-4.6V)
- Composable: stdin + structured outputs
- Drops into any agent via mm-cli-skills: Claude Code, Codex, Gemini CLI, OpenClaw

We'd love to hear your feedback, especially on the CLI and on what file types and workflows you'd like to see next.
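The commands from the post above can be sketched as a short shell session. This is a usage sketch, not a verified transcript: it assumes mm-ctx is installed from PyPI and a local OpenAI-compatible endpoint (e.g. Ollama) is available; the filenames are hypothetical stand-ins for the post's <document>/<photo>/<video> placeholders.

```shell
# Install from PyPI (link given in the post)
pip install mm-ctx

# Search across PDFs in a directory; matches come back line-numbered
mm grep "invoice #1234" ~/Downloads

# Describe file types an LLM can't read natively
mm cat report.pdf   # metadata description of the document
mm cat photo.jpg    # caption of the photo
mm cat video.mp4    # caption of the video
```

Because output is structured and the tool reads stdin, these commands compose with the usual UNIX pipeline idioms, which is how the "Composable" bullet above is meant to be used.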
reacted to spillai's post with 🔥 2 days ago
authored a paper 6 months ago
Reconstructing Animatable Categories from Videos
Organizations
Papers 2
arxiv: 2511.14210
arxiv: 2305.06351
spaces 1
Orion Visual Agent (Sleeping · Agents)
A Unified Visual Agent for Multimodal Perception, Advanced V
models 0
None public yet
datasets 1
dinesh-vlmrun/finevision-sample
Viewer • Updated Sep 7, 2025 • 172k • 3.07k