Improve model card: Add paper abstract #1
opened by nielsr (HF Staff)
README.md CHANGED
@@ -17,6 +17,9 @@ pipeline_tag: text-generation
 
 K2-Think is a 32 billion parameter open-weights general reasoning model with strong performance in competitive mathematical problem solving.
 
+## Abstract
+
+K2-Think is a reasoning system that achieves state-of-the-art performance with a 32B parameter model, matching or surpassing much larger models like GPT-OSS 120B and DeepSeek v3.1. Built on the Qwen2.5 base model, our system shows that smaller models can compete at the highest levels by combining advanced post-training and test-time computation techniques. The approach is based on six key technical pillars: Long Chain-of-thought Supervised Finetuning, Reinforcement Learning with Verifiable Rewards (RLVR), Agentic planning prior to reasoning, Test-time Scaling, Speculative Decoding, and Inference-optimized Hardware, all using publicly available open-source datasets. K2-Think excels in mathematical reasoning, achieving state-of-the-art scores on public benchmarks for open-source models, while also performing strongly in other areas such as Code and Science. Our results confirm that a more parameter-efficient model like K2-Think 32B can compete with state-of-the-art systems through an integrated post-training recipe that includes long chain-of-thought training and strategic inference-time enhancements, making open-source reasoning systems more accessible and affordable. K2-Think is freely available at this http URL, offering best-in-class inference speeds of over 2,000 tokens per second per request via the Cerebras Wafer-Scale Engine.
+
 # Quickstart
 
 ### Transformers
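The hunk cuts off at the `### Transformers` heading, so the quickstart code itself is not shown in this diff. For context, below is a minimal sketch of what a Transformers quickstart for a chat-style reasoning model like this typically looks like; the repo id `LLM360/K2-Think`, the prompt, and the generation settings are illustrative assumptions, not taken from the card:

```python
# Minimal sketch of a Transformers quickstart (assumptions noted inline).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLM360/K2-Think"  # assumed repo id for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # place layers across available devices
)

# Build a chat-formatted prompt and generate a reasoning trace.
messages = [{"role": "user", "content": "Prove that the sum of two even integers is even."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For a 32B model this will need multiple GPUs or quantization; `device_map="auto"` handles the sharding, and the actual model card's quickstart should be preferred once visible.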