Instructions to use open-thoughts/OpenThinker-7B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use open-thoughts/OpenThinker-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="open-thoughts/OpenThinker-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("open-thoughts/OpenThinker-7B")
model = AutoModelForCausalLM.from_pretrained("open-thoughts/OpenThinker-7B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
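Reasoning traces from this model can run long, so it may help to stream tokens as they are generated rather than waiting for the full output. A minimal sketch using Transformers' built-in TextStreamer (the max_new_tokens value is an arbitrary choice):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("open-thoughts/OpenThinker-7B")
model = AutoModelForCausalLM.from_pretrained("open-thoughts/OpenThinker-7B")

messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Print tokens to stdout as they are generated instead of waiting for generate() to finish
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(**inputs, max_new_tokens=512, streamer=streamer)
```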
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use open-thoughts/OpenThinker-7B with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "open-thoughts/OpenThinker-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "open-thoughts/OpenThinker-7B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
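Because the server exposes an OpenAI-compatible API, you can also query it from Python. A minimal sketch using the `openai` client (the base URL matches the default `vllm serve` address above; the API key is a placeholder, since vLLM does not check it unless configured to):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="open-thoughts/OpenThinker-7B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```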
Use Docker

```sh
docker model run hf.co/open-thoughts/OpenThinker-7B
```
- SGLang
How to use open-thoughts/OpenThinker-7B with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "open-thoughts/OpenThinker-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "open-thoughts/OpenThinker-7B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
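The same endpoint can be called from Python as well. A minimal sketch with `requests` that mirrors the curl call above (port 30000 is the one chosen in the launch command):

```python
import requests

# Same request as the curl example above, sent from Python
resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "open-thoughts/OpenThinker-7B",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```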
Use Docker images

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "open-thoughts/OpenThinker-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "open-thoughts/OpenThinker-7B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use open-thoughts/OpenThinker-7B with Docker Model Runner:
```sh
docker model run hf.co/open-thoughts/OpenThinker-7B
```
Thinking prompt not included in template
Hi,
I was just wondering if the thinking prompt could be added to the Jinja chat template in tokenizer_config.json?
On line 198 it's just the regular Qwen template, so when this model is converted to quants or used with any inference software other than the official repo, the model fails to start reasoning.
I believe it's also missing from the added_tokens.json file.
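Both are easy to check from Python, for what it's worth (a quick sketch; it just prints what the tokenizer ships with):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("open-thoughts/OpenThinker-7B")

# Print the Jinja chat template shipped in tokenizer_config.json
print(tokenizer.chat_template)

# Render a prompt to see whether a thinking tag gets appended
print(tokenizer.apply_chat_template(
    [{"role": "user", "content": "Who are you?"}],
    add_generation_prompt=True,
    tokenize=False,
))

# List the extra special tokens registered with the tokenizer
print(tokenizer.additional_special_tokens)
```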
The only way to invoke it right now is to manually add <|begin_of_thought|> to the end of a prompt.
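For example, something like this works with Transformers (a rough sketch; the generation settings are arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("open-thoughts/OpenThinker-7B")
model = AutoModelForCausalLM.from_pretrained("open-thoughts/OpenThinker-7B")

messages = [{"role": "user", "content": "Who are you?"}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)
# Manually append the thinking tag so the model starts reasoning
prompt += "<|begin_of_thought|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```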
Thank you.
Could you please provide a bit more detail on how you are running the model?
With this chat template (and without the thinking prompt), the model still produces reasoning. I also checked the default 7B quant on Ollama and didn't run into any issues there.