|
|
--- |
|
|
license: mit |
|
|
language: |
|
|
- en |
|
|
tags: |
|
|
- optimization |
|
|
- muon |
|
|
- deep-learning |
|
|
- pytorch |
|
|
- distributed-computing |
|
|
- ZeRO |
|
|
- tensor-parallelism
|
|
- data-parallelism
|
|
- tutorial |
|
|
- cpu-friendly |
|
|
--- |
|
|
|
|
|
# 🧩 The "Muon is Scalable" Blueprint: A Distributed Muon Engineering Breakdown(CPU-Friendly, tutorial style) |
|
|
|
|
|
A practical, **annotated engineering breakdown** for the Muon optimizer — extended into a fully distributed (DP × TP) version that actually runs on **plain CPU/Gloo**, so broke-but-curious builders can still get their hands dirty. |
|
|
|
|
|
This is the expert-level systems engineering companion to my original [**Understanding the Muon Optimizer**](https://huggingface.co/datasets/bird-of-paradise/muon-tutorial) tutorial. |
|
|
|
|
|
> 💡 _“Because sometimes the best way to learn the distributed nightmare is to get your hands dirty and your eyes crossed.”_ 🤪 |
|
|
|
|
|
--- |
|
|
|
|
|
## 🌕 Why This Exists |
|
|
|
|
|
Most public Muon examples (like Moonshot’s PoC) are designed for multi-GPU NCCL clusters, which makes them impossible for most of us to run or debug. On top of that, most distributed-systems documentation is written by experts, for experts, which makes it a "nightmare" to learn from. My goal is to change that.
|
|
|
|
|
This repository is **not** a "simplified" version that "flattens the depth" of the work. |
|
|
|
|
|
Instead, it's a **didactic refactor**. I've taken the complex, real-world PoC and optimized it for *readability* and *learning*, so you can see the "blueprint" behind the "chaos": |
|
|
- Fully **annotated** to demonstrate data-parallel (ZeRO-1) + tensor-parallel (TP) orchestration end to end.

- Every “distributed acrobatic” step is spelled out (`DP gather` → `TP gather` → `Newton–Schulz` → `TP shard` → `DP shard`); a compressed sketch of this flow follows the list.

- The code is restructured to highlight **symmetrical logic** and **consistent naming**, showing the "opposite arrow" data flow of the "virtual map" (`dist_meta`).
|
|
- It's built to run **on a single CPU machine or Colab notebook**. |
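To make the acrobatics concrete, here's a compressed sketch of the update path for a single TP-sharded matrix parameter. This is *not* the script's exact code: the helper signature and the `dist_meta` fields (`tp_shard_shape`, `tp_dim`) are illustrative stand-ins, and all the bucketing and flat-buffer bookkeeping is omitted.

```python
import torch
import torch.distributed as dist

def zeropower_via_newtonschulz(G, steps=5):
    # Quintic Newton-Schulz iteration that approximately orthogonalizes G
    # (coefficients from Keller Jordan's reference Muon post).
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + 1e-7)
    transposed = G.size(0) > G.size(1)
    if transposed:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X

def muon_update(momentum_shard, param_shard, dist_meta, dp_group, tp_group, lr):
    # 1) DP gather: collect every ZeRO-1 slice of the momentum so this rank
    #    can rebuild its full (but still TP-sharded) 2D block.
    dp_world = dist.get_world_size(dp_group)
    dp_slices = [torch.empty_like(momentum_shard) for _ in range(dp_world)]
    dist.all_gather(dp_slices, momentum_shard, group=dp_group)
    tp_local = torch.cat(dp_slices).view(dist_meta.tp_shard_shape)

    # 2) TP gather: stitch the tensor-parallel shards into the full matrix,
    #    because Newton-Schulz needs the whole thing, not a slice of it.
    tp_world = dist.get_world_size(tp_group)
    tp_shards = [torch.empty_like(tp_local) for _ in range(tp_world)]
    dist.all_gather(tp_shards, tp_local, group=tp_group)
    full_matrix = torch.cat(tp_shards, dim=dist_meta.tp_dim)

    # 3) Run the math on the full matrix.
    update = zeropower_via_newtonschulz(full_matrix)

    # 4) TP shard: keep only this rank's tensor-parallel slice of the update...
    my_tp_slice = update.chunk(tp_world, dim=dist_meta.tp_dim)[dist.get_rank(tp_group)]

    # 5) ...then DP shard: keep only this rank's ZeRO-1 slice and apply it.
    my_dp_slice = my_tp_slice.reshape(-1).chunk(dp_world)[dist.get_rank(dp_group)]
    param_shard.add_(my_dp_slice.view_as(param_shard), alpha=-lr)
```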
|
|
|
|
|
--- |
|
|
|
|
|
## 🧠 The “Turtle Speed” Breakthrough: The Full Story |
|
|
|
|
|
This code is complex. It's a "distributed nightmare" 🫠. |
|
|
|
|
|
Instead of a traditional, long-form `README`, the best documentation is the "making of" story. I've chronicled my entire journey of reverse-engineering and debugging this code in my "Turtle Speed Breakthrough" series on Medium. |
|
|
|
|
|
* **[Part 1: The “Turtle Speed” Breakthrough: Decoding Distributed Optimizers](https://medium.com/@jenwei0312/the-turtle-speed-breakthrough-decoding-distributed-optimizers-from-fsdp-to-muons-secret-sauce-64fc76f20cd7)** |
|
|
* **[Part 2: My Map of the Distributed Nightmare (The Blueprint)](https://medium.com/@jenwei0312/the-turtle-speed-breakthrough-part-2-the-blueprint-for-distributed-chaos-37fe343e7aa9)** |
|
|
* **[Part 3: The Final Bugs and "Aha!" Moments](https://medium.com/@jenwei0312/the-turtle-speed-breakthrough-part-3-my-map-of-the-distributed-nightmare-b10ff4affd56)** |
|
|
|
|
|
This tutorial is the final, runnable code that resulted from that deep dive. |
|
|
|
|
|
--- |
|
|
|
|
|
## 🚀 Quick Start |
|
|
|
|
|
Run the CPU-safe, fully-annotated notebook right from your browser: |
|
|
|
|
|
[Open the notebook: `distributed_muon_cpu.ipynb`](distributed_muon_cpu.ipynb)
|
|
|
|
|
Or, you can clone this repo and run the Python script locally to simulate an 8-process cluster on your CPU: |
|
|
|
|
|
```bash |
|
|
git clone https://huggingface.co/datasets/bird-of-paradise/muon-distributed |
|
|
cd muon-distributed |
|
|
|
|
|
# This will spawn 8 processes and run the full test |
|
|
python distributed_muon_cpu.py
|
|
```
|
|
|
|
|
(Note: For the original, un-annotated, buggy Moonshot PoC that this work is based on, you can find it in this [commit](https://github.com/NVIDIA/Megatron-LM/pull/1428/commits/f432fbe45c169aeb5a0805ff6f41e13f989c6730#diff-8fe91f4096ff232fc6f97b17e60e619eda92b6dffc80b4573a23e06aa56d2559).) |
|
|
|
|
|
---
|
|
|
|
|
## 🗂️ What's Inside (File Guide) |
|
|
|
|
|
* `distributed_muon_cpu.ipynb`: **(Start Here)** The Colab-friendly notebook that walks through the environment fixes and runs the code. |
|
|
* `distributed_muon_cpu.py`: The final, **tested, fixed, and heavily-annotated** Python script. This is the "golden" code that runs on a CPU-only environment using the `"gloo"` backend. |
|
|
* `distributed_muon.py`: My **annotated and logically debugged** version of the *GPU* code. This is for users who have a multi-GPU `"nccl"` environment. (Note: Since I don't have a multi-GPU cluster, this version is untested... unless someone wants to sponsor me with some GPUs! 😉) |
|
|
|
|
|
---
|
|
|
|
|
## 🎓 What You'll Learn (The "Nightmare" Blueprint) |
|
|
|
|
|
By exploring this code, you'll see the *real* implementation of the concepts I discuss in my articles: |
|
|
|
|
|
* **The 2D Grid:** How to set up orthogonal `dist_group` (DP) and `tp_group` (TP) handles (a minimal sketch of this grid setup follows this list).
|
|
* **The "Aisles" & "Pallets":** How `param_groups` (`buffer_idx`) and communication `buckets` (`bucket_idx`) are used to organize parameters. |
|
|
* **The "Virtual Buffer":** How a "master plan" (`dist_meta` and `global_buffer_size`) is used to manage memory for sharding (ZeRO-1). |
|
|
* **The Acrobatic Data Flow:** The full `(DP gather -> TP gather) -> (Run Math) -> (TP shard -> DP shard)` journey. |
|
|
* **The Nuance:** You'll see *why* we bucket the slow DP `all_gather` but *don't* need to bucket the fast, on-node TP `all_gather`. |
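As a taste of the first bullet, here's a minimal sketch of carving an 8-process world into a 4-way DP × 2-way TP grid. The group sizes, layout, and function name are assumptions for illustration, not the script's exact code; the one rule to respect is that every rank must call `dist.new_group` for every subgroup, in the same order, and keep only the handles it belongs to.

```python
import torch.distributed as dist

def build_2d_groups(rank, world_size=8, tp_size=2):
    # Hypothetical layout: TP groups are consecutive ranks, DP groups are strided.
    dp_size = world_size // tp_size
    tp_group, dist_group = None, None

    # TP groups: (0,1), (2,3), (4,5), (6,7) -- the fast, "on-node" dimension.
    for i in range(dp_size):
        ranks = list(range(i * tp_size, (i + 1) * tp_size))
        group = dist.new_group(ranks)   # every rank must make this call
        if rank in ranks:
            tp_group = group

    # DP groups: (0,2,4,6) and (1,3,5,7) -- the ZeRO-1 sharding dimension.
    for j in range(tp_size):
        ranks = list(range(j, world_size, tp_size))
        group = dist.new_group(ranks)
        if rank in ranks:
            dist_group = group

    return dist_group, tp_group
```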
|
|
|
|
|
---
|
|
|
|
|
## 🐞 Summary of All Fixes |
|
|
|
|
|
This repo isn't just a copy-paste. It's the result of a week-long debugging "nightmare." Here are all the bugs I had to find and fix to make it run:
|
|
|
|
|
| Issue | Problem | Solution | |
|
|
| :--- | :--- | :--- | |
|
|
| **Logic Bug \#1** | Missing `params = group["params"]` | Added the line in the Muon update loop. | |
|
|
| **Logic Bug \#2** | `ns_input` was 1D after unpacking, crashing `zeropower`.| Changed `.view(-1)` to `.view(dist_meta.shape)` to restore the 2D shape. | |
|
|
| **Env Bug \#1** | Hardcoded `"nccl"` backend. | Changed `dist.init_process_group` to use `"gloo"`. | |
|
|
| **Env Bug \#2** | Hardcoded `'cuda'` device. | Changed `gen_param_and_grads` to use `'cpu'`. | |
|
|
| **Env Bug \#3** | `mp.spawn()` crashes in Jupyter/Colab. | The `.ipynb` runs the code as a `!python` subprocess, bypassing the notebook kernel. | |
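Putting the three environment fixes together, a stand-alone CPU harness looks roughly like this (a hypothetical minimal example, not the tutorial script itself):

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    # Env fix #1: "gloo" instead of "nccl", so this runs without GPUs.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    # Env fix #2: tensors live on CPU.
    t = torch.randn(4, 4, device="cpu")
    dist.all_reduce(t)  # sanity check: sum the tensor across all ranks
    if rank == 0:
        print("all ranks alive; all_reduced sum:", t.sum().item())
    dist.destroy_process_group()

if __name__ == "__main__":
    # Env fix #3: spawn from a script entry point; a notebook should shell out
    # to `python thisfile.py` rather than call mp.spawn inside its kernel.
    mp.spawn(worker, args=(8,), nprocs=8, join=True)
```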
|
|
|
|
|
---
|
|
|
|
|
## 📖 Citation |
|
|
|
|
|
If you use this tutorial in your work, please cite the original Muon paper and this tutorial. |
|
|
|
|
|
```bibtex |
|
|
@misc{wei2025muondistributed, |
|
|
author = {Wei, Jen}, |
|
|
title = {A CPU-Friendly Tutorial for Distributed Muon (DPxTP)}, |
|
|
year = {2025}, |
|
|
howpublished = {\url{https://huggingface.co/datasets/bird-of-paradise/muon-distributed}}
|
|
} |
|
|
|
|
|
@misc{jordan2024muon, |
|
|
author = {Jordan, Keller and others},
|
|
title = {Muon: An optimizer for hidden layers in neural networks}, |
|
|
year = {2024}, |
|
|
url = {https://kellerjordan.github.io/posts/muon/}
|
|
} |
|
|
|
|
|
@misc{liu2025muonscalable, |
|
|
author = {Liu, Jingyuan and others},
|
|
title = {Muon is Scalable for LLM Training}, |
|
|
year = {2025}, |
|
|
url = {https://arxiv.org/abs/2502.16982}
|
|
} |
|
|
``` |