AXL-Code-1B

AXL-Code-1B is a 318M-parameter, byte-level Multi-Scale Transformer trained with vanilla SGD as an optimizer baseline. It reaches a byte-level perplexity of 31.22 with a 256-byte context window and is part of the AXL model family by KoinicLabs.

Model Details

Property             Value
Developed by         KoinicLabs
Architecture         Multi-Scale Transformer
Parameters           318M
Optimizer            SGD
Attention            SDPA
Vocab Size           258 (byte-level)
Context Window       256 bytes
d_model              1024
Attention Heads      16
Layers per Scale     6
Downsample Factors   [1, 2, 4]
License              Apache 2.0

Uses

Direct Use

An SGD-trained baseline for optimizer-comparison experiments on code generation. A minimal generation example:

import torch
from multiscale_transformer.model.config import load_config
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer

# Build the model from its config, then load the released checkpoint.
config = load_config("config.json")
model = MultiScaleTransformer(config)
ckpt = torch.load("axl_code_1b.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

# Byte-level tokenizer: no vocabulary files needed.
tokenizer = ByteTokenizer()
ids = torch.tensor([tokenizer.encode("def hello():")], dtype=torch.long)
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=50, temperature=0.8)
print(tokenizer.decode(out[0].tolist()))

Out-of-Scope Use

This model is not suitable for production code generation; use the Lion-trained version (AXL-Code-1B-Lion) for substantially better results. For integration with tools such as Continue.dev, LlamaIndex, or LangChain, use the Python API server, which exposes OpenAI-compatible endpoints.
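As a minimal sketch of what an OpenAI-compatible integration looks like, assuming the API server listens on http://localhost:8000 and registers the model as axl-code-1b (both are hypothetical values; check the server's actual configuration):

import requests

# Hypothetical host, port, and model name; adjust to your server's settings.
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "axl-code-1b",
        "prompt": "def fibonacci():",
        "max_tokens": 100,
        "temperature": 0.8,
    },
)
print(resp.json()["choices"][0]["text"])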

Bias, Risks, and Limitations

Byte-level perplexity is not directly comparable to BPE-level perplexity, since each prediction covers a single byte rather than a multi-byte token. This is an SGD-trained baseline; use AXL-Code-1B-Lion for better results. The context window is limited to 256 bytes. Note that the GGUF files for Ollama use a simplified single-stack encoder; for full AXL quality, use the Python API server.

Recommendations

  • Use for prototyping and experimentation, not production code generation.
  • Byte-level perplexity (258-entry vocabulary) is not comparable to BPE-level perplexity (e.g., a 32K-token vocabulary); see the conversion sketch after this list.
  • For better results, use the Lion-optimized version (AXL-Code-1B-Lion) if available.
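To see why the two numbers live on different scales, perplexities can be put on a common footing: if the per-byte cross-entropy is H nats, byte-level perplexity is exp(H), and the rough token-level equivalent is exp(H)^b, where b is the average number of bytes per BPE token. A back-of-envelope conversion, assuming roughly 4 bytes per token (an assumption; the true average depends on the tokenizer and corpus):

import math

ppl_byte = 31.22         # reported byte-level perplexity
bytes_per_token = 4.0    # assumed average BPE token length in bytes

# Per-byte cross-entropy in nats, scaled up to a per-token quantity.
h_byte = math.log(ppl_byte)
ppl_token_equiv = math.exp(h_byte * bytes_per_token)  # == ppl_byte ** bytes_per_token

print(f"bits per byte: {h_byte / math.log(2):.2f}")
print(f"rough BPE-equivalent perplexity: {ppl_token_equiv:,.0f}")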

Training Details

Training Data

Trained with vanilla SGD on 50MB of Python code for 1012 steps (about 30 minutes on CPU), as a baseline for comparison against the Lion optimizer.
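A minimal sketch of what this training setup looks like, assuming a next-byte prediction objective and the model loaded as in the Direct Use snippet; the learning rate, batch shape, data loader, and forward signature are all illustrative assumptions, not the released values:

import torch
import torch.nn.functional as F

# Illustrative hyperparameters; the released run's values are not documented here.
lr, steps, batch_size, seq_len = 0.1, 1012, 8, 256

optimizer = torch.optim.SGD(model.parameters(), lr=lr)  # vanilla SGD, no momentum
model.train()

for step in range(steps):
    batch = sample_byte_batch(batch_size, seq_len + 1)  # hypothetical data loader
    inputs, targets = batch[:, :-1], batch[:, 1:]       # next-byte prediction shift
    logits = model(inputs)                              # assumed shape (B, T, 258)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()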

Preprocessing

Byte-level tokenization with vocabulary size 258 (256 bytes + BOS + EOS). No vocabulary training required.
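Since the vocabulary is just the 256 byte values plus two specials, no tokenizer training is needed. A minimal sketch of byte-level encoding consistent with the description above (the real ByteTokenizer's internals and the BOS/EOS ID layout are assumptions):

# Byte tokenizer sketch: IDs 0-255 are raw bytes; placing BOS/EOS at 256/257
# is an assumption about the real ByteTokenizer's layout.
BOS, EOS = 256, 257

def encode(text: str) -> list[int]:
    return [BOS] + list(text.encode("utf-8")) + [EOS]

def decode(ids: list[int]) -> str:
    return bytes(i for i in ids if i < 256).decode("utf-8", errors="replace")

print(encode("def"))          # [256, 100, 101, 102, 257]
print(decode(encode("def")))  # def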

Speeds, Sizes, Times

Metric           Value
Training Steps   1012
Training Time    30 min
Final Loss       2.9391

Evaluation

Metrics

Perplexity on held-out Python code using byte-level tokenization.
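Concretely, byte-level perplexity is the exponential of the mean per-byte cross-entropy on held-out text. A sketch of the evaluation loop, assuming the model loaded as above (the held-out data iterable and forward signature are assumptions):

import math
import torch
import torch.nn.functional as F

# Compute exp(mean cross-entropy per byte) over held-out data.
model.eval()
total_nll, total_bytes = 0.0, 0
with torch.no_grad():
    for batch in heldout_batches:  # hypothetical iterable of (B, T+1) byte-ID tensors
        inputs, targets = batch[:, :-1], batch[:, 1:]
        logits = model(inputs)
        nll = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            targets.reshape(-1),
            reduction="sum",
        )
        total_nll += nll.item()
        total_bytes += targets.numel()

print(f"byte-level perplexity: {math.exp(total_nll / total_bytes):.2f}")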

Results

Metric                    Value
Perplexity (byte-level)   31.22
Final Loss                2.9391
Training Steps            1012
Training Time             30 min

Summary: this model serves as the SGD baseline; the companion AXL-Code-1B-Lion achieves roughly 16x lower (better) perplexity.

Environmental Impact

Property         Value
Hardware         AMD Ryzen 5 5600G
Hours Used       0.5
Carbon Emitted   0.0210 kg CO2
Cloud Provider   None (local CPU)

Technical Specifications

Model Architecture

Multi-Scale Transformer with three parallel encoder stacks operating at resolution scales 1x, 2x, and 4x. Cross-scale attention connects all scale pairs, and an adaptive gating mechanism fuses the per-scale outputs. The feed-forward blocks use SwiGLU activations, and positions are encoded with RoPE (rotary position embeddings).
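The following is a structural sketch of this design, not the released implementation: mean pooling stands in for the downsampling, vanilla PyTorch encoder layers stand in for the SwiGLU/RoPE blocks, cross-scale attention is omitted, and a softmax gate approximates the adaptive fusion (all assumptions):

import torch
import torch.nn as nn

class MultiScaleSketch(nn.Module):
    """Three parallel encoder stacks at 1x/2x/4x resolution with gated fusion."""

    def __init__(self, d_model=1024, n_heads=16, n_layers=6, factors=(1, 2, 4)):
        super().__init__()
        self.factors = factors
        # One encoder stack per scale; vanilla layers stand in for the real
        # blocks, which use SwiGLU feed-forwards and RoPE instead.
        self.stacks = nn.ModuleList(
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
                num_layers=n_layers,
            )
            for _ in factors
        )
        # Assumed fusion head: per-position softmax weights over the scales.
        self.gate = nn.Linear(d_model * len(factors), len(factors))

    def forward(self, x):
        # x: (B, T, d_model); T must be divisible by the largest factor.
        B, T, D = x.shape
        outs = []
        for f, stack in zip(self.factors, self.stacks):
            xs = x.reshape(B, T // f, f, D).mean(dim=2)  # downsample (assumed: mean pool)
            ys = stack(xs)                               # run the per-scale stack
            outs.append(ys.repeat_interleave(f, dim=1))  # upsample back to T positions
        # Adaptive gating fusion (assumed form): softmax-weighted sum of scales.
        w = torch.softmax(self.gate(torch.cat(outs, dim=-1)), dim=-1)
        return sum(w[..., i:i + 1] * o for i, o in enumerate(outs))

y = MultiScaleSketch()(torch.randn(2, 256, 1024))  # -> (2, 256, 1024)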

Compute Infrastructure

Property   Value
Hardware   AMD Ryzen 5 5600G (6 cores, 12 threads)
RAM        16 GB
GPU        None (CPU-only)

Citation

@misc{axl_2026,
  title={AXL: AXL-Code-1B - Multi-Scale Transformer for CPU Code Generation},
  author={Koinic},
  year={2026},
  url={https://huggingface.co/KoinicLabs}
}

How to Get Started

With Ollama

ollama create axl-code-1b -f Modelfile
ollama run axl-code-1b "def fibonacci():"

With Python

import torch
from multiscale_transformer.model.config import load_config
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer

# Load the architecture config and checkpoint, then switch to eval mode.
config = load_config("config.json")
model = MultiScaleTransformer(config)
ckpt = torch.load("axl_code_1b.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

# Encode the prompt as bytes and sample up to 100 new bytes.
tokenizer = ByteTokenizer()
prompt = "def fibonacci():"
ids = torch.tensor([tokenizer.encode(prompt)], dtype=torch.long)
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=100, temperature=0.8, top_k=40)
print(tokenizer.decode(out[0].tolist()))