Astral 1.5 4B

Astral 1.5 4B is an updated version of the Astral 4B model, trained for STEM tasks and agentic usage. The model was trained on our Astral-1.5-Post-Training-SFT and Astral-1.5-Post-Training-CRLFT datasets.

Comparisons

When asked for a TailwindCSS HTML page for photo editing software, the models gave the following zero-shot responses:

Qwen3 4B 2507 Thinking (8101 tokens used)

Astral 1.5 4B (6088 tokens used)

GPT-5.2

Usage

You can use the following chat template for regular chat completions with the model:

<|im_start|>user
What is the capital of France?
<|im_end|>
<|im_start|>assistant
<think>
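For example, with llama-cpp-python you can pass the raw template string directly and let the model continue from the open <think> tag. This is only a minimal sketch: the GGUF filename below is a placeholder for whichever quantization you actually downloaded.

# Minimal sketch with llama-cpp-python; the model_path is a placeholder filename.
from llama_cpp import Llama

llm = Llama(model_path="Astral-1.5-4B-Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|im_start|>user\n"
    "What is the capital of France?\n"
    "<|im_end|>\n"
    "<|im_start|>assistant\n"
    "<think>\n"
)

# The model continues from the open <think> tag: it writes its reasoning,
# closes the tag, and then produces the final answer.
out = llm(prompt, max_tokens=512, stop=["<|im_end|>"])
print(out["choices"][0]["text"])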

Alternatively, to use the model as an agent, you must prefill <think>Agent mode is on</think> like so:

<|im_start|>user
What is the capital of France?
<|im_end|>
<|im_start|>assistant
<think>
Agent mode is on
</think>
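
The same sketch applies for agent mode; only the prefilled <think> block changes (again assuming llama-cpp-python and a placeholder GGUF filename).

# Agent-mode sketch: the <think>Agent mode is on</think> block is prefilled
# before generation, so the model skips free-form reasoning and acts as an agent.
from llama_cpp import Llama

llm = Llama(model_path="Astral-1.5-4B-Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|im_start|>user\n"
    "What is the capital of France?\n"
    "<|im_end|>\n"
    "<|im_start|>assistant\n"
    "<think>\n"
    "Agent mode is on\n"
    "</think>\n"
)

out = llm(prompt, max_tokens=512, stop=["<|im_end|>"])
print(out["choices"][0]["text"])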