You can run nllg/tikzero-plus-10b with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use nllg/tikzero-plus-10b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="nllg/tikzero-plus-10b")
```

```python
# Load model directly
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("nllg/tikzero-plus-10b", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use nllg/tikzero-plus-10b with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "nllg/tikzero-plus-10b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nllg/tikzero-plus-10b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/nllg/tikzero-plus-10b
```
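Once the server is up, any OpenAI-compatible client can call it. Below is a minimal sketch using only the Python standard library; the prompt string is illustrative, the `build_completion_request` helper is hypothetical, and the endpoint and parameters mirror the curl call above:

```python
import json
import urllib.request

def build_completion_request(prompt, base_url="http://localhost:8000"):
    """Build a POST request for the OpenAI-compatible /v1/completions
    endpoint exposed by the local vLLM server."""
    payload = {
        "model": "nllg/tikzero-plus-10b",
        "prompt": prompt,
        "max_tokens": 512,
        "temperature": 0.5,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("A multi-layer perceptron with two hidden layers.")
# With a running server: result = json.load(urllib.request.urlopen(req))
```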
- SGLang
How to use nllg/tikzero-plus-10b with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "nllg/tikzero-plus-10b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nllg/tikzero-plus-10b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "nllg/tikzero-plus-10b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nllg/tikzero-plus-10b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use nllg/tikzero-plus-10b with Docker Model Runner:
```shell
docker model run hf.co/nllg/tikzero-plus-10b
```
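The vLLM and SGLang servers above both answer with the standard OpenAI completions response shape. A small sketch of pulling the generated text out of a parsed response body, assuming the usual `choices[0]["text"]` layout (the example body is abridged and illustrative):

```python
def extract_completion_text(response):
    """Return the generated text from an OpenAI-compatible
    /v1/completions response body (already parsed from JSON)."""
    return response["choices"][0]["text"]

# Example response shape (abridged, illustrative values):
body = {
    "model": "nllg/tikzero-plus-10b",
    "choices": [
        {"index": 0, "text": "\\begin{tikzpicture}...", "finish_reason": "length"},
    ],
}
```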
Model Card for TikZero+ (10b)
TikZero+ (10b) is a multimodal language model that automatically synthesizes scientific figures as editable, semantics-preserving TikZ graphics programs conditioned on text captions. It is based on DeTikZify v2 (8b) and Llama 3.2 (1b), and integrates TikZero with additional end-to-end fine-tuning. Check out the DeTikZify project for more information and tips on how to best run the model.
Usage
```python
from detikzify.model import load
from detikzify.infer import DetikzifyPipeline

caption = "A multi-layer perceptron with two hidden layers."

pipeline = DetikzifyPipeline(*load(
    model_name_or_path="nllg/tikzero-plus-10b",
    device_map="auto",
    torch_dtype="bfloat16",
))

# generate a single TikZ program
fig = pipeline.sample(text=caption)

# if it compiles, rasterize it and show it
if fig.is_rasterizable:
    fig.rasterize().show()
```
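Since sampling is stochastic, a single draw may fail to compile. A minimal retry sketch, with `sample` standing in for `pipeline.sample` and `is_ok` for the `fig.is_rasterizable` check from the snippet above (this helper is not part of the DeTikZify API):

```python
def sample_until_rasterizable(sample, is_ok, max_attempts=5):
    """Call `sample()` up to `max_attempts` times and return the first
    result accepted by `is_ok(result)`, or None if all attempts fail."""
    for _ in range(max_attempts):
        fig = sample()
        if is_ok(fig):
            return fig
    return None
```

With the pipeline above this would be called as, e.g., `sample_until_rasterizable(lambda: pipeline.sample(text=caption), lambda f: f.is_rasterizable)`.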
Acknowledgments
This model was trained using computational resources provided by the bwForCluster Helix, as part of the bwHPC-S5 project. The authors acknowledge support from the state of Baden-Württemberg through the bwHPC initiative and the German Research Foundation (DFG) under grant INST 35/1597-1 FUGG.