Update README.md (#3)
README.md (commit abf4cecdc448f81e98e52f67cab8c1d54883623b)
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a> | 🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a> | 🐙 <a href="https://zenmux.ai/inclusionai/ring-1t?utm_source=hf_inclusionAI">Experience Now</a></p>

# Ring-1T: Flow State Leads to Sudden Enlightenment
Today, we officially launch the trillion-parameter thinking model, Ring-1T. It is open-source upon release: developers can download the model weights from Hugging Face and ModelScope, or chat with the model and call it via API through the Ling Chat page and [ZenMux](https://zenmux.ai/inclusionai/ring-1t?utm_source=hf_inclusionAI) (links provided at the end of the article).

Building upon the preview version released at the end of last month, Ring-1T has undergone continued large-scale reinforcement learning with verifiable rewards (RLVR), further unlocking the natural language reasoning capabilities of the trillion-parameter foundation model. Through RLHF training, the model's general abilities have also been refined, making this release of Ring-1T more balanced in performance across a wide range of tasks.
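To make the RLVR idea concrete, here is a toy sketch of a "verifiable reward": the reward comes from programmatically checking a model answer against a known ground truth rather than from a learned preference model. The function name and normalization rule below are illustrative, not Ring-1T's actual reward implementation.

```python
# Toy sketch of a verifiable reward in the RLVR sense (illustrative only).

def verifiable_reward(model_answer: str, ground_truth: str) -> float:
    """Return 1.0 if the normalized answer matches the ground truth, else 0.0."""
    def normalize(s: str) -> str:
        return s.strip().lower().rstrip(".")
    return 1.0 if normalize(model_answer) == normalize(ground_truth) else 0.0

print(verifiable_reward("  2112. ", "2112"))  # 1.0
print(verifiable_reward("4048", "2112"))      # 0.0
```

Because the check is deterministic, such rewards scale to millions of rollouts without a reward model in the loop.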
You can download Ring-1T from the following table. If you are located in mainland China, you can also download it from ModelScope to speed up the process.

<center>

| **Model**   | **Context Length** | **Download**                                                                                                                              |
| :---------: | :----------------: | :---------------------------------------------------------------------------------------------------------------------------------------: |
| Ring-1T     | 64K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-1T) [🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ring-1T)           |
| Ring-1T-FP8 | 64K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-1T-FP8) [🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ring-1T-FP8)   |

</center>

Note: If you are interested in the previous version, please visit the past model collections on [Hugging Face](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
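The "64K -> 128K (YaRN)" column means the native 64K context window can be extended to 128K through YaRN-style RoPE rescaling. As a much-simplified sketch of the underlying idea (plain linear position interpolation rather than YaRN's full per-frequency-band schedule; the function name is illustrative):

```python
def rope_frequencies(dim, base=10000.0, scale=1.0):
    """Rotary inverse frequencies; scale > 1 stretches positions so a model
    trained on a 64K window can address 128K (linear interpolation, a
    simplification of YaRN's per-band scheme)."""
    return [1.0 / (scale * base ** (2 * i / dim)) for i in range(dim // 2)]

native = rope_frequencies(8)               # as trained (64K window)
extended = rope_frequencies(8, scale=2.0)  # 128K / 64K = 2x interpolation
print(all(e == n / 2 for n, e in zip(native, extended)))  # True
```

Halving every rotary frequency maps positions beyond the training range back into it; full YaRN additionally leaves the highest-frequency bands uncompressed and rescales attention temperature.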
## Continuously Evolving Deep Reasoning Capabilities

To evaluate the deep reasoning capabilities of Ring-1T, we selected representative open-source thinking models (Ring-1T-preview, DeepSeek-V3.1-Terminus-Thinking, Qwen3-235B-A22B-Thinking-2507) and closed-source APIs (Gemini-2.5-Pro and GPT-5-Thinking (High)) as baselines. First, compared with the previously open-sourced preview version, Ring-1T delivers more balanced performance across various tasks. Furthermore, Ring-1T achieves leading open-source performance on challenging reasoning benchmarks such as **math competitions** (AIME 25, HMMT 25), **code generation** (LiveCodeBench, CodeForces), and **logical reasoning** (ARC-AGI-1). It also exhibits strong competitiveness in **comprehensive tasks** (Arena-Hard-v2.0), **healthcare** (HealthBench), and **creative writing** (Creative Writing v3).

<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_d2byvp/afts/img/5TBESJNjsbAAAAAAYYAAAAgADod9AQFr/original" />
</p>

Although we have implemented string-level and semantic-level contamination filtering for benchmark tasks across all training stages (including pre-training, fine-tuning instructions, and reinforcement learning prompts), rigorous decontamination for earlier published benchmarks remains a significant challenge in the industry. To analyze [Ring-1T](https://zenmux.ai/inclusionai/ring-1t?utm_source=hf_inclusionAI)'s deep reasoning capabilities more objectively, we ran tests on the IMO 2025 (International Mathematical Olympiad), held in July this year, and the recently concluded ICPC World Finals 2025 (International Collegiate Programming Contest World Finals).

For the **IMO 2025** test, as with the earlier preview version, we integrated Ring-1T into the multi-agent framework [AWorld](https://github.com/inclusionAI/AWorld) and used pure natural language reasoning to solve the problems. Ring-1T solved Problems 1, 3, 4, and 5 in a single attempt (silver medal level at IMO). On the third attempt, it also produced a nearly perfect proof for Problem 2, a geometry problem. For the most challenging Problem 6 (which no AI contestant at IMO 2025 solved correctly), [Ring-1T](https://zenmux.ai/inclusionai/ring-1t?utm_source=hf_inclusionAI) converged to the same answer as Gemini 2.5 Pro, "4048" (the correct answer is 2112). We believe that with ongoing optimization, Ring-1T has the potential to reach gold medal level at IMO in a single attempt.

<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_d2byvp/afts/img/mnRJTa5a00gAAAAAQ2AAAAgADod9AQFr/original" width="500"/>
</p>

Figure 1: The training-inference discrepancy of GRPO increases exponentially with training.

Figure 2: Maximum training-inference discrepancy. GRPO shows a significant rise with training, whereas Icepop maintains a low level.
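The quantity in the figure captions can be illustrated with a toy computation. The exact definition used for the plots is not given here; this assumed version simply compares the probabilities that the training and inference engines assign to the same sampled tokens:

```python
import math

def max_train_infer_discrepancy(logprobs_train, logprobs_infer):
    """Maximum absolute probability gap between the training and inference
    engines over the same sampled tokens (toy metric, assumed definition)."""
    assert len(logprobs_train) == len(logprobs_infer)
    return max(
        abs(math.exp(lt) - math.exp(li))
        for lt, li in zip(logprobs_train, logprobs_infer)
    )

# Identical engines agree exactly, so the discrepancy is zero:
print(max_train_infer_discrepancy([-0.5, -1.0], [-0.5, -1.0]))  # 0.0
```

When this gap grows during RL training, the policy being optimized drifts away from the policy that actually generated the rollouts, which is the instability the captions attribute to GRPO.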
## ASystem: In-House RL Framework "Mastering" Trillion-Scale Training

To ensure stable and efficient reinforcement learning training for trillion-parameter foundation models, we developed a high-performance reinforcement learning system in house: ASystem. ASystem adopts a SingleController + SPMD architecture, and its training and inference engines are carefully optimized for the memory-management and weight-exchange challenges specific to trillion-parameter models. Leveraging our unified memory pool for training and inference, it achieves transparent memory offloading, efficiently reclaims fragmented memory, and reduces the risk of out-of-memory failures. Through techniques such as direct P2P communication between GPUs and in-place updates, it completes zero-redundancy model weight exchange in seconds.
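ASystem's code is not shown here, but the in-place, zero-redundancy weight-exchange idea can be sketched in miniature: the inference side preallocates its buffers once, and each update overwrites them chunk by chunk instead of materializing a second full copy. All names and data structures below are illustrative stand-ins for GPU tensors and P2P transfers:

```python
# Toy sketch of zero-redundancy, in-place weight exchange (illustrative only:
# a real system moves GPU tensors over direct P2P links; plain lists stand in here).

def push_weights_inplace(trainer_shards, inference_buffer, chunk=2):
    """Copy trainer weight shards into the preallocated inference buffer in
    place, chunk by chunk, so no second full copy of the model is materialized."""
    flat = [w for shard in trainer_shards for w in shard]
    assert len(flat) == len(inference_buffer), "inference buffer must be preallocated"
    for start in range(0, len(flat), chunk):
        inference_buffer[start:start + chunk] = flat[start:start + chunk]
    return inference_buffer

trainer_shards = [[1.0, 2.0], [3.0, 4.0]]  # weights sharded across trainer GPUs
inference_buffer = [0.0] * 4               # allocated once on the inference side
push_weights_inplace(trainer_shards, inference_buffer)
print(inference_buffer)  # [1.0, 2.0, 3.0, 4.0]
```

Because the destination buffer never changes identity or size, updates avoid allocation churn, which is what makes second-level weight swaps feasible at trillion-parameter scale.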
You can experience Ring-1T online at: [ZenMux](https://zenmux.ai/inclusionai/ring-1t?utm_source=hf_inclusionAI)

You can also use Ring-1T through API calls:

```python
from openai import OpenAI

# 1. Initialize the OpenAI client (base URL and API key are placeholders; see the ZenMux docs for the exact values)
client = OpenAI(base_url="https://zenmux.ai/api/v1", api_key="<ZENMUX_API_KEY>")

# 2. Create a chat completion (model ID as listed on ZenMux)
completion = client.chat.completions.create(
    model="inclusionai/ring-1t",
    messages=[{"role": "user", "content": "Hello, Ring-1T!"}],
)

print(completion.choices[0].message.content)
```

#### Environment Preparation
We will later submit our model to the SGLang official release. For now, you can prepare the environment as follows:

```shell
pip3 install -U sglang sgl-kernel
```
#### Run Inference

Both BF16 and FP8 models are supported by SGLang now; which one runs is determined by the dtype of the model in ${MODEL_PATH}.

Here is an example of running [Ring-1T](https://zenmux.ai/inclusionai/ring-1t?utm_source=hf_inclusionAI) on multiple GPU nodes, where the master node IP is ${MASTER_IP} and the port is ${PORT}:

- Start server:
```bash
# Nodes 0-2: run the same command on each node with --node-rank 0, 1, and 2 respectively.
# Node 3:
python -m sglang.launch_server --model-path $MODEL_PATH --tp-size 8 --pp-size 4 --dp-size 1 --trust-remote-code --dist-init-addr $MASTER_IP:$PORT --nnodes 4 --node-rank 3

# This is only an example. Please adjust the arguments according to your actual environment.
```

MTP is supported for the base model, but not yet for the chat model. You can add the parameter `--speculative-algorithm NEXTN` to the start command.

- Client:
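As a sketch of a client call (assuming the server above exposes SGLang's OpenAI-compatible HTTP API on its default port 30000; host and model name are placeholders):

```python
import json

# OpenAI-compatible chat request for the SGLang server started above.
# Host and port are assumptions (30000 is SGLang's default); model name is illustrative.
payload = {
    "model": "ring-1t",
    "messages": [{"role": "user", "content": "Say hello."}],
    "temperature": 0.6,
}
body = json.dumps(payload)

# To actually send it (requires the `requests` package and a running server):
# import requests
# resp = requests.post("http://<MASTER_IP>:30000/v1/chat/completions",
#                      data=body, headers={"Content-Type": "application/json"})
# print(resp.json()["choices"][0]["message"]["content"])
print(json.loads(body)["model"])  # ring-1t
```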
More usage can be found [here](https://docs.sglang.ai/basic_usage/send_request.html).

## Finetuning

We recommend using [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ring](https://github.com/inclusionAI/Ring-V2/blob/main/docs/llamafactory_finetuning.md).
## Limitations and Future Plans

Ring-1T represents the Bailing team's first attempt at developing a trillion-scale deep thinking model. The current version may occasionally exhibit issues such as identity-recognition bias, language mixing, and repetitive generation. In addition, since its attention architecture still adopts the GQA approach from Ling 2.0, there remains room to improve inference efficiency in long-context scenarios.

We will continue to optimize these aspects in future releases and warmly welcome feedback from the community. Furthermore, training for Ring-1T is still ongoing. We are committed to further unlocking the reasoning potential of this trillion-parameter foundation model and look forward to sharing more mature, upgraded versions as soon as possible.