Quantispect Overview

Quantispect Neural Pre-Decoder Architecture

Model Summary

Item                Value
Model name          Quantispect
Checkpoint file     Quantispect_RF13_v1.0.10.pt
Total parameters    ~0.663M
Checkpoint size     ~2.63 MB
Architecture        FastHyper-style 3D CNN neural pre-decoder
Receptive field     R=13
Input tensor        (B, 4, T, D, D)
Output tensor       (B, 4, T, D, D)
Release date        April 26, 2026

Description:

Quantispect is a compact neural pre-decoder for rotated surface-code quantum error correction. It consumes five-dimensional syndrome volumes across batch, channel, time, and two spatial dimensions, and predicts local correction maps that are consumed by a downstream global decoder such as MWPM / PyMatching or an Ising-decoding post-processing pipeline.

Quantispect is designed to run inside an NVIDIA Ising-Decoding-compatible workflow after applying the Quantispect code patch included with this model release.

Model Architecture:

Architecture type: 3D Convolutional Neural Network (3D CNN)

Network architecture: custom multi-branch spatio-temporal 3D CNN with residual FastHyper blocks.

Input

Input shape:

(B, 4, T, D, D)

Stem

Conv3D 4 -> 96, kernel 3x3x3
GroupNorm
GELU

Stem output shape:

(B, 96, T, D, D)
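The stem above can be sketched in PyTorch as follows. Note this is a reconstruction from the layer list, not the released implementation; in particular the GroupNorm group count (8) is an assumption the card does not state.

```python
import torch
import torch.nn as nn

# Hypothetical stem reconstruction from the layer list above.
# padding=1 preserves (T, D, D); the GroupNorm group count (8)
# is an assumption -- the card does not specify it.
stem = nn.Sequential(
    nn.Conv3d(4, 96, kernel_size=3, padding=1),
    nn.GroupNorm(8, 96),
    nn.GELU(),
)

x = torch.randn(2, 4, 5, 7, 7)   # (B, 4, T, D, D) with T=5, D=7
y = stem(x)
print(tuple(y.shape))            # (2, 96, 5, 7, 7)
```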

Main Body

The main body contains five repeated FastHyperBlock modules:

FastHyperBlock x5

Each FastHyperBlock first expands the feature width from 96 to 144 channels with a 1x1x1 convolution (the pre-projection below), then applies three parallel feature-extraction branches to the expanded tensor:

Pre-projection: GroupNorm -> 1x1x1 Conv3D, 96 -> 144 -> GELU

Branch A: Depthwise Conv3D, kernel 1x3x3, spatial branch
Branch B: Depthwise Conv3D, kernel 3x1x1, temporal branch
Branch C: GroupNorm -> Grouped Conv3D, kernel 3x3x3, groups=6, joint local spatio-temporal branch

The three branch outputs are aligned and fused by element-wise summation rather than channel concatenation. The fused feature is then projected and recalibrated:

Element-wise sum fusion
1x1x1 Conv3D projection, 144 -> 96
GELU
ChannelGate / SE-style channel attention
Dropout3D
Residual connection

Main body output shape:

(B, 96, T, D, D)
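Putting the pre-projection, the three branches, the sum fusion, and the gated residual path together, a FastHyperBlock can be sketched like this. It is a reconstruction from the description above, not the shipped code: the GroupNorm group counts, the SE reduction ratio (4), and the dropout rate are all assumptions.

```python
import torch
import torch.nn as nn

class FastHyperBlock(nn.Module):
    """Hypothetical reconstruction of one FastHyperBlock.

    Assumptions not stated in the card: GroupNorm group counts (8 / 6),
    SE reduction ratio (4), and the Dropout3d rate (0.1).
    """
    def __init__(self, c=96, hidden=144, groups=6, p_drop=0.1):
        super().__init__()
        # Pre-projection: GroupNorm -> 1x1x1 Conv3D (96 -> 144) -> GELU
        self.pre = nn.Sequential(nn.GroupNorm(8, c), nn.Conv3d(c, hidden, 1), nn.GELU())
        # Branch A: depthwise 1x3x3 (spatial); Branch B: depthwise 3x1x1 (temporal)
        self.branch_a = nn.Conv3d(hidden, hidden, (1, 3, 3), padding=(0, 1, 1), groups=hidden)
        self.branch_b = nn.Conv3d(hidden, hidden, (3, 1, 1), padding=(1, 0, 0), groups=hidden)
        # Branch C: GroupNorm -> grouped 3x3x3 conv, groups=6 (joint spatio-temporal)
        self.branch_c = nn.Sequential(
            nn.GroupNorm(6, hidden),
            nn.Conv3d(hidden, hidden, 3, padding=1, groups=groups),
        )
        # 1x1x1 projection back to 96 channels, then GELU
        self.proj = nn.Sequential(nn.Conv3d(hidden, c, 1), nn.GELU())
        # SE-style ChannelGate: global pool -> squeeze -> excite -> sigmoid
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(c, c // 4, 1), nn.GELU(),
            nn.Conv3d(c // 4, c, 1), nn.Sigmoid(),
        )
        self.drop = nn.Dropout3d(p_drop)

    def forward(self, x):
        h = self.pre(x)
        h = self.branch_a(h) + self.branch_b(h) + self.branch_c(h)  # element-wise sum fusion
        h = self.proj(h)
        h = h * self.gate(h)       # channel recalibration
        return x + self.drop(h)    # residual connection

x = torch.randn(2, 96, 5, 7, 7)
y = FastHyperBlock()(x)
print(tuple(y.shape))  # (2, 96, 5, 7, 7) -- shape is preserved
```

Each branch preserves the (T, D, D) extent, which is what makes the element-wise sum fusion possible without any alignment step beyond matching padding.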

Head

GroupNorm
1x1x1 Conv3D, 96 -> 96
GELU
1x1x1 Conv3D, 96 -> 4

Output shape:

(B, 4, T, D, D)

The output maps are used by the residual-syndrome construction module and then passed to MWPM / Ising-decoder post-processing.
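As a sanity check on the ~0.663M figure in the model summary, the parameter counts implied by the layer list can be tallied directly. Two assumptions are needed that the card does not state: every convolution carries a bias, and the SE-style ChannelGate uses a reduction ratio of 4 (96 -> 24 -> 96). Under those assumptions the total lands on the quoted value.

```python
# Back-of-the-envelope parameter tally for the layers described above.
# Assumptions (not stated in the card): every conv has a bias, and the
# SE-style ChannelGate uses a reduction ratio of 4 (96 -> 24 -> 96).
def conv3d(cin, cout, k, groups=1):
    kt, kh, kw = k
    return cout * (cin // groups) * kt * kh * kw + cout  # weights + bias

def groupnorm(c):
    return 2 * c  # scale + shift

stem = conv3d(4, 96, (3, 3, 3)) + groupnorm(96)

block = (
    groupnorm(96) + conv3d(96, 144, (1, 1, 1))                 # pre-projection
    + conv3d(144, 144, (1, 3, 3), groups=144)                  # branch A (spatial depthwise)
    + conv3d(144, 144, (3, 1, 1), groups=144)                  # branch B (temporal depthwise)
    + groupnorm(144) + conv3d(144, 144, (3, 3, 3), groups=6)   # branch C (grouped)
    + conv3d(144, 96, (1, 1, 1))                               # fusion projection
    + conv3d(96, 24, (1, 1, 1)) + conv3d(24, 96, (1, 1, 1))    # SE gate, assumed r=4
)

head = groupnorm(96) + conv3d(96, 96, (1, 1, 1)) + conv3d(96, 4, (1, 1, 1))

total = stem + 5 * block + head
print(total)  # 663388 -> ~0.663M, matching the model summary
```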

Usage:

Quantispect is intended to be used with the NVIDIA Ising-Decoding environment:

https://github.com/NVIDIA/Ising-Decoding

A clean NVIDIA Ising-Decoding checkout does not include the Quantispect / FastHyper model implementation. To run Quantispect_RF13_v1.0.10.pt, first apply the Quantispect code patch included in this model repository.

Required code patch files

The patch package should preserve the following relative paths:

quantispect_code_patch/
β”œβ”€β”€ conf/
β”‚   └── config_public.yaml
└── code/
    β”œβ”€β”€ model/
    β”‚   β”œβ”€β”€ predecoder_fasthyper_rf13_v1.py
    β”‚   β”œβ”€β”€ factory.py
    β”‚   └── registry.py
    β”œβ”€β”€ workflows/
    β”‚   β”œβ”€β”€ config_validator.py
    β”‚   └── run.py
    └── scripts/
        └── local_run.sh

These files should be copied into the NVIDIA Ising-Decoding repository with the same relative paths:

conf/config_public.yaml                    -> Ising-Decoding/conf/config_public.yaml
code/model/predecoder_fasthyper_rf13_v1.py -> Ising-Decoding/code/model/predecoder_fasthyper_rf13_v1.py
code/model/factory.py                      -> Ising-Decoding/code/model/factory.py
code/model/registry.py                     -> Ising-Decoding/code/model/registry.py
code/workflows/config_validator.py         -> Ising-Decoding/code/workflows/config_validator.py
code/workflows/run.py                      -> Ising-Decoding/code/workflows/run.py
code/scripts/local_run.sh                  -> Ising-Decoding/code/scripts/local_run.sh

The patch mainly adds the predecoder_fasthyper_rf13_v1 model implementation, registers model_id: 6, adds the Quantispect model hyperparameters to config_public.yaml, and enables explicit .pt checkpoint loading through model_checkpoint_file.
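The patched config_public.yaml therefore carries entries along these lines. This is a hypothetical fragment: only model_id: 6 and model_checkpoint_file are named by this card; the exact nesting and any surrounding keys will differ in the shipped file.

```yaml
# Hypothetical sketch -- consult the shipped config_public.yaml for the
# real structure. Only model_id: 6 and model_checkpoint_file are named
# by this model card.
model_id: 6
model_checkpoint_file: models/Quantispect_RF13_v1.0.10.pt
```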

Apply the patch

From the directory containing both the clean NVIDIA Ising-Decoding repository and the unpacked quantispect_code_patch/ directory:

cp -r quantispect_code_patch/code/* Ising-Decoding/code/
cp -r quantispect_code_patch/conf/* Ising-Decoding/conf/

Then place the Quantispect checkpoint under the repository model directory:

mkdir -p Ising-Decoding/models
cp Quantispect_RF13_v1.0.10.pt Ising-Decoding/models/Quantispect_RF13_v1.0.10.pt

Expected directory layout:

Ising-Decoding/
β”œβ”€β”€ code/
β”‚   β”œβ”€β”€ model/
β”‚   β”‚   β”œβ”€β”€ factory.py
β”‚   β”‚   β”œβ”€β”€ predecoder_fasthyper_rf13_v1.py
β”‚   β”‚   └── registry.py
β”‚   β”œβ”€β”€ workflows/
β”‚   β”‚   β”œβ”€β”€ config_validator.py
β”‚   β”‚   └── run.py
β”‚   └── scripts/
β”‚       └── local_run.sh
β”œβ”€β”€ conf/
β”‚   └── config_public.yaml
β”œβ”€β”€ models/
β”‚   └── Quantispect_RF13_v1.0.10.pt
└── README.md

Inference Deployment:

Configure the NVIDIA Ising-Decoding repository for inference, apply the Quantispect patch files above, and place the downloaded model checkpoint at models/Quantispect_RF13_v1.0.10.pt.

Run from the repository root:

cd Ising-Decoding

CUDA_VISIBLE_DEVICES=0,1,2,3 \
PYTHONUNBUFFERED=1 \
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \
WORKFLOW=inference \
EXPERIMENT_NAME=infer_quantispect \
TORCH_COMPILE=0 \
EXTRA_PARAMS="+model_checkpoint_file=models/Quantispect_RF13_v1.0.10.pt" \
bash code/scripts/local_run.sh \
2>&1 | tee infer_quantispect.log