# Qwen2.5-Math-NeuralMath-7B

DuoNeural | Math Reasoning Fine-Tune | April 2026
A supervised fine-tune of Qwen/Qwen2.5-Math-7B-Instruct on curated math reasoning data, targeting improved step-by-step problem solving on competition- and olympiad-level math.
## What's Different
The base Qwen2.5-Math-7B-Instruct is already a strong math model. This fine-tune focuses on:
- Deeper chain-of-thought: trained on longer, more structured reasoning traces
- Competition math exposure: AMC/AIME/olympiad problems via NuminaMath-CoT
- Format consistency: reliable `\boxed{}` answer formatting across problem types
## Quickstart
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
    "DuoNeural/Qwen2.5-Math-NeuralMath-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("DuoNeural/Qwen2.5-Math-NeuralMath-7B")

prompt = """Solve the following math problem step by step.
Problem: Find all positive integers n such that n² + 1 is divisible by n + 1.
Solution:"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, temperature=0.1, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
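For reference, the example problem above has a small answer set, which makes it easy to check a generation against ground truth. A quick illustrative brute-force check in plain Python:

```python
# n^2 + 1 = (n + 1)(n - 1) + 2, so (n + 1) must divide 2,
# leaving n = 1 as the only positive integer solution.
solutions = [n for n in range(1, 10_000) if (n * n + 1) % (n + 1) == 0]
print(solutions)  # [1]
```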
## GGUF / Ollama / LM Studio
Pre-quantized GGUFs are available in the `gguf/` folder of this repo:

| File | Size | Use case |
|---|---|---|
| `neuromath-7b-q4_k_m.gguf` | 4.7 GB | Recommended: best quality/speed tradeoff |
| `neuromath-7b-q8_0.gguf` | 8.1 GB | High quality, needs 10 GB+ VRAM/RAM |
| `neuromath-7b-f16.gguf` | 15 GB | Full precision, GPU only |
### Ollama
```bash
# Create a Modelfile pointing at the local GGUF
cat > Modelfile << 'EOF'
FROM ./neuromath-7b-q4_k_m.gguf
SYSTEM "You are an expert mathematician. Solve problems step by step, showing all work clearly. Put your final answer in \\boxed{}."
PARAMETER temperature 0.1
PARAMETER num_ctx 4096
EOF

# Build and run the model
ollama create neuromath-7b -f Modelfile
ollama run neuromath-7b "What is the sum of all prime numbers less than 100?"
```
### LM Studio
Download `neuromath-7b-q4_k_m.gguf` and load it in LM Studio. Set the system prompt to:

"You are an expert mathematician. Solve problems step by step, showing all work. Put your final answer in \boxed{}."
## Training Details
| Setting | Value |
|---|---|
| Base model | Qwen/Qwen2.5-Math-7B-Instruct |
| Method | QLoRA SFT (4-bit base, LoRA rank 16) |
| Training tokens | ~1.26M (3 epochs over curated math dataset) |
| LoRA alpha | 32 |
| LoRA targets | q, k, v, o, gate, up, down projections |
| Hardware | NVIDIA A100 80GB |
| Framework | Unsloth + HuggingFace Transformers |
| Sequence length | 1024 tokens |
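As a rough sanity check on the adapter size, the rank-16 LoRA over the seven listed projections stays well under 1% of the base model's parameters. The dimensions below are assumptions taken from the public Qwen2.5-7B architecture (hidden 3584, intermediate 18944, 28 layers, GQA with 4 KV heads of dim 128), not stated in this card:

```python
# Trainable LoRA parameters: each adapted matrix of shape (d_in, d_out)
# contributes r * (d_in + d_out) parameters (the A and B factors).
r = 16
hidden, inter, layers, kv_dim = 3584, 18944, 28, 4 * 128  # assumed Qwen2.5-7B dims

per_layer = (
    2 * r * (hidden + hidden)    # q_proj, o_proj
    + 2 * r * (hidden + kv_dim)  # k_proj, v_proj
    + 3 * r * (hidden + inter)   # gate_proj, up_proj, down_proj
)
total = per_layer * layers
print(f"{total / 1e6:.1f}M trainable parameters")  # ~40.4M
```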
## Limitations
- Trained on English math problems; performance on other languages untested
- Trained with a 1024-token sequence length; very long multi-step proofs may degrade or be truncated during generation
- This is the SFT-only checkpoint; GRPO reinforcement learning phase is planned as a follow-up
- Not intended for general conversation — math reasoning only
## DuoNeural
DuoNeural is an AI research lab focused on post-training techniques, efficient architectures, and edge deployment. We document our wins, losses, and learnings publicly.